Science.gov

Sample records for 3-d retinal imaging

  1. 3D Reconstruction of the Retinal Arterial Tree Using Subject-Specific Fundus Images

    NASA Astrophysics Data System (ADS)

    Liu, D.; Wood, N. B.; Xu, X. Y.; Witt, N.; Hughes, A. D.; Thom, S. A. McG.

    Systemic diseases, such as hypertension and diabetes, are associated with changes in the retinal microvasculature. Although a number of studies have been performed on the quantitative assessment of the geometrical patterns of the retinal vasculature, previous work has been confined to two-dimensional (2D) analyses. In this paper, we present an approach to obtain a 3D reconstruction of the retinal arteries from a pair of 2D retinal images acquired in vivo. A simple essential-matrix-based self-calibration approach was employed for the "fundus camera-eye" system. Vessel segmentation was performed using a semi-automatic approach and correspondence between points from different images was calculated. The results of 3D reconstruction show the centreline of retinal vessels and their 3D curvature clearly. Three-dimensional reconstruction of the retinal vessels is feasible and may be useful in future studies of the retinal vasculature in disease.
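
    The reconstruction of 3D vessel points from a pair of 2D correspondences can be illustrated with a minimal linear (DLT) triangulation sketch. This is not the paper's actual self-calibration pipeline; the camera matrices and point values below are invented for illustration:

```python
import numpy as np

def triangulate_point(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two views.
    P1, P2: 3x4 camera projection matrices; x1, x2: matched 2D points."""
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The homogeneous 3D point is the right singular vector of A
    # with the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

# Synthetic check: project a known point with two cameras, then recover it.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])                  # reference camera
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])  # translated camera
X_true = np.array([0.2, -0.1, 5.0, 1.0])
x1 = P1 @ X_true; x1 = x1[:2] / x1[2]
x2 = P2 @ X_true; x2 = x2[:2] / x2[2]
X_est = triangulate_point(P1, P2, x1, x2)
print(X_est)  # close to [0.2, -0.1, 5.0]
```

    Repeating this for every matched centreline point yields the kind of 3D vessel centreline the abstract describes.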

  2. 2-D Registration and 3-D Shape Inference of the Retinal Fundus from Fluorescein Images

    PubMed Central

    Choe, Tae Eun; Medioni, Gerard; Cohen, Isaac; Walsh, Alexander C.; Sadda, SriniVas R.

    2008-01-01

    This study presents methods for the 2-D registration of retinal image sequences and 3-D shape inference from fluorescein images. The Y-feature is a robust geometric entity that is largely invariant across modalities as well as across the temporal grey level variations induced by the propagation of the dye in the vessels. We first present a Y-feature extraction method that finds a set of Y-feature candidates using local image gradient information. A gradient-based approach is then used to align an articulated model of the Y-feature to the candidates more accurately while optimizing a cost function. Using mutual information, fitted Y-features are subsequently matched across images, including color and fluorescein angiographic frames, for registration. To reconstruct the retinal fundus in 3-D, the extracted Y-features are used to estimate the epipolar geometry with a plane-and-parallax approach. The proposed solution provides a robust estimation of the fundamental matrix suitable for plane-like surfaces, such as the retinal fundus. The mutual information criterion is used to accurately estimate the dense disparity map, while the Y-features are used to estimate the bounds of the range space. Our experimental results validate the proposed method on a set of difficult fluorescein image pairs. PMID:18060827
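
    Estimating the fundamental matrix from matched features, the step the Y-features feed into, can be sketched with the generic normalized 8-point algorithm (the paper uses a plane-and-parallax formulation instead; the correspondences below are synthetic):

```python
import numpy as np

def eight_point_fundamental(x1, x2):
    """Normalized 8-point estimate of the fundamental matrix F from
    N >= 8 correspondences (Nx2 arrays), so that x2h^T F x1h ~ 0."""
    def normalize(pts):
        c = pts.mean(axis=0)
        s = np.sqrt(2) / np.mean(np.linalg.norm(pts - c, axis=1))
        T = np.array([[s, 0, -s * c[0]], [0, s, -s * c[1]], [0, 0, 1]])
        return np.column_stack([pts, np.ones(len(pts))]) @ T.T, T
    p1, T1 = normalize(x1)
    p2, T2 = normalize(x2)
    # Each correspondence gives one linear constraint on the 9 entries of F.
    A = np.column_stack([p2[:, i] * p1[:, j] for i in range(3) for j in range(3)])
    _, _, Vt = np.linalg.svd(A)
    F = Vt[-1].reshape(3, 3)
    U, S, Vt = np.linalg.svd(F)
    F = U @ np.diag([S[0], S[1], 0.0]) @ Vt   # enforce rank 2
    F = T2.T @ F @ T1                         # undo the normalization
    return F / np.linalg.norm(F)

# Synthetic check: projections of 3D points through two cameras must
# satisfy the epipolar constraint x2h^T F x1h = 0.
rng = np.random.default_rng(0)
X = np.column_stack([rng.uniform(-1, 1, (20, 2)),
                     rng.uniform(4, 6, 20), np.ones(20)])
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-0.5], [0.1], [0.0]])])
x1 = X @ P1.T; x1 = x1[:, :2] / x1[:, 2:]
x2 = X @ P2.T; x2 = x2[:, :2] / x2[:, 2:]
F = eight_point_fundamental(x1, x2)
residuals = [np.append(b, 1) @ F @ np.append(a, 1) for a, b in zip(x1, x2)]
print(max(abs(r) for r in residuals))  # near machine precision for clean data
```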

  3. Exact surface registration of retinal surfaces from 3-D optical coherence tomography images.

    PubMed

    Lee, Sieun; Lebed, Evgeniy; Sarunic, Marinko V; Beg, Mirza Faisal

    2015-02-01

    Nonrigid registration of optical coherence tomography (OCT) images is an important problem in studying eye diseases, evaluating the effect of pharmaceuticals in treating vision loss, and performing group-wise cross-sectional analysis. High dimensional nonrigid registration algorithms required for cross-sectional and longitudinal analysis are still being developed for accurate registration of OCT image volumes, with the speckle noise in images presenting a challenge for registration. Development of algorithms for segmentation of OCT images to generate surface models of retinal layers has advanced considerably, and several algorithms are now available that can segment retinal OCT images into constituent retinal surfaces. Important morphometric measurements could be extracted if accurate algorithms for registering retinal surfaces onto corresponding template surfaces were available. In this paper, we present a novel method to perform multiple and simultaneous retinal surface registration, targeted to registering surfaces extracted from ocular volumetric OCT images. This enables a point-to-point correspondence (homology) between template and subject surfaces, allowing for a direct, vertex-wise comparison of morphometric measurements across subject groups. We demonstrate that this approach can be used to localize and analyze regional changes in choroidal and nerve fiber layer thickness among healthy and glaucomatous subjects, allowing for cross-sectional population-wise analysis. We also demonstrate the method's ability to track longitudinal changes in optic nerve head morphometry, allowing for within-individual tracking of morphometric changes. This method can also, in the future, be used as a precursor to 3-D OCT image registration to better initialize nonrigid image registration algorithms closer to the desired solution. PMID:25312906

  4. Automated 3-D retinal layer segmentation of macular optical coherence tomography images with serous pigment epithelial detachments.

    PubMed

    Shi, Fei; Chen, Xinjian; Zhao, Heming; Zhu, Weifang; Xiang, Dehui; Gao, Enting; Sonka, Milan; Chen, Haoyu

    2015-02-01

    Automated retinal layer segmentation of optical coherence tomography (OCT) images has been successful for normal eyes but becomes challenging for eyes with retinal diseases that critically alter the retinal morphology. We propose a method to automatically segment the retinal layers in 3-D OCT data with serous retinal pigment epithelial detachments (PED), a prominent feature of many chorioretinal disease processes. The proposed framework consists of the following steps: fast denoising and B-scan alignment, multi-resolution graph search based surface detection, PED region detection and surface correction above the PED region. The proposed technique was evaluated on a dataset with OCT images from 20 subjects diagnosed with PED. The experimental results showed the following. 1) The overall mean unsigned border positioning error for layer segmentation is 7.87±3.36 μm, comparable to the mean inter-observer variability (7.81±2.56 μm). 2) The true positive volume fraction (TPVF), false positive volume fraction (FPVF) and positive predictive value (PPV) for PED volume segmentation are 87.1%, 0.37%, and 81.2%, respectively. 3) The average running time is 220 s for OCT data of 512 × 64 × 480 voxels. PMID:25265605
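
    The core idea of graph-search surface detection can be illustrated with a 2D toy version: pick one row per column of a cost image, minimizing total cost under a smoothness constraint, via dynamic programming. This is a drastic simplification of the multi-resolution 3-D graph search the paper uses, with an invented cost image:

```python
import numpy as np

def detect_surface(cost, max_shift=1):
    """Toy single-surface detection on a 2D cost image (rows x cols):
    choose one row per column minimizing the summed cost, subject to
    |row[c] - row[c-1]| <= max_shift (the smoothness constraint)."""
    rows, cols = cost.shape
    acc = cost.astype(float).copy()   # accumulated cost
    back = np.zeros((rows, cols), dtype=int)
    for c in range(1, cols):
        for r in range(rows):
            lo, hi = max(0, r - max_shift), min(rows, r + max_shift + 1)
            prev = acc[lo:hi, c - 1]
            k = int(np.argmin(prev))
            acc[r, c] += prev[k]
            back[r, c] = lo + k
    # Trace the optimal path back from the cheapest final-column entry.
    surface = np.empty(cols, dtype=int)
    surface[-1] = int(np.argmin(acc[:, -1]))
    for c in range(cols - 1, 0, -1):
        surface[c - 1] = back[surface[c], c]
    return surface

# Toy B-scan: a low-cost (bright-boundary) band at row 3.
cost = np.ones((8, 6))
cost[3, :] = 0.1
print(detect_surface(cost))  # -> [3 3 3 3 3 3]
```

    Real OCT layer segmentation solves the 3-D analogue as a minimum-cost closed set problem over a graph, with inter-surface constraints added.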

  5. A statistical model for 3D segmentation of retinal choroid in optical coherence tomography images

    NASA Astrophysics Data System (ADS)

    Ghasemi, F.; Rabbani, H.

    2014-03-01

    The choroid is a densely vascularized layer under the retinal pigment epithelium (RPE). Its deeper boundary is formed by the sclera, the outer fibrous shell of the eye. However, the inhomogeneity within the layers of choroidal Optical Coherence Tomography (OCT) tomograms presents a significant challenge to existing segmentation algorithms. In this paper, we performed a statistical study of retinal OCT data to extract the choroid. The model fits a Gaussian mixture model (GMM) to image intensities with the Expectation Maximization (EM) algorithm. The goodness of fit of the proposed GMM, computed using the Chi-square measure, is below 0.04 for our dataset. After fitting the GMM to the OCT data, a Bayesian classification method is employed to segment the upper and lower boundaries of the retinal choroid. Our simulations show signed and unsigned errors of -1.44 +/- 0.5 and 1.6 +/- 0.53 for the upper border, and -5.7 +/- 13.76 and 6.3 +/- 13.4 for the lower border, respectively.
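
    The GMM/EM fit and Bayesian classification described above can be sketched on 1-D intensities with two components. The intensity populations below are synthetic, not OCT data, and the fit uses a plain hand-rolled EM loop:

```python
import numpy as np

def fit_gmm_em(x, k=2, n_iter=100):
    """Fit a 1-D Gaussian mixture to intensities x with plain EM.
    Quantile-based initialization; returns (weights, means, variances)."""
    mu = np.quantile(x, np.linspace(0.1, 0.9, k))
    var = np.full(k, x.var())
    w = np.full(k, 1.0 / k)
    for _ in range(n_iter):
        # E-step: responsibilities r[n, j] proportional to w_j N(x_n | mu_j, var_j).
        r = w * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, variances.
        nk = r.sum(axis=0)
        w = nk / len(x)
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk
    return w, mu, var

# Two synthetic intensity populations (say, choroid vs. background), then
# Bayesian (MAP) classification of each pixel intensity under the fit.
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(50, 5, 2000), rng.normal(120, 10, 2000)])
w, mu, var = fit_gmm_em(x)
labels = np.argmax(w * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(var),
                   axis=1)
print(np.sort(mu))  # component means recovered near 50 and 120
```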

  6. Automated Foveola Localization in Retinal 3D-OCT Images Using Structural Support Vector Machine Prediction

    PubMed Central

    Liu, Yu-Ying; Ishikawa, Hiroshi; Chen, Mei; Wollstein, Gadi; Schuman, Joel S.; Rehg, James M.

    2013-01-01

    We develop an automated method to determine the foveola location in macular 3D-OCT images in either healthy or pathological conditions. Structural Support Vector Machine (S-SVM) is trained to directly predict the location of the foveola, such that the score at the ground truth position is higher than that at any other position by a margin scaling with the associated localization loss. This S-SVM formulation directly minimizes the empirical risk of localization error, and makes efficient use of all available training data. It deals with the localization problem in a more principled way compared to the conventional binary classifier learning that uses zero-one loss and random sampling of negative examples. A total of 170 scans were collected for the experiment. Our method localized 95.1% of testing scans within the anatomical area of the foveola. Our experimental results show that the proposed method can effectively identify the location of the foveola, facilitating diagnosis around this important landmark. PMID:23285565

  7. 3D Imaging.

    ERIC Educational Resources Information Center

    Hastings, S. K.

    2002-01-01

    Discusses 3-D imaging as it relates to digital representations in virtual library collections. Highlights include X-ray computed tomography (X-ray CT); the National Science Foundation (NSF) Digital Library Initiatives; output peripherals; image retrieval systems, including metadata; and applications of 3-D imaging for libraries and museums. (LRW)

  8. Retinal Imaging and Image Analysis

    PubMed Central

    Abràmoff, Michael D.; Garvin, Mona K.; Sonka, Milan

    2011-01-01

    Many important eye diseases as well as systemic diseases manifest themselves in the retina. While a number of other anatomical structures contribute to the process of vision, this review focuses on retinal imaging and image analysis. Following a brief overview of the most prevalent causes of blindness in the industrialized world that includes age-related macular degeneration, diabetic retinopathy, and glaucoma, the review is devoted to retinal imaging and image analysis methods and their clinical implications. Methods for 2-D fundus imaging and techniques for 3-D optical coherence tomography (OCT) imaging are reviewed. Special attention is given to quantitative techniques for analysis of fundus photographs with a focus on clinically relevant assessment of retinal vasculature, identification of retinal lesions, assessment of optic nerve head (ONH) shape, building retinal atlases, and to automated methods for population screening for retinal diseases. A separate section is devoted to 3-D analysis of OCT images, describing methods for segmentation and analysis of retinal layers, retinal vasculature, and 2-D/3-D detection of symptomatic exudate-associated derangements, as well as to OCT-based analysis of ONH morphology and shape. Throughout the paper, aspects of image acquisition, image analysis, and clinical relevance are treated together considering their mutually interlinked relationships. PMID:21743764

  9. 3D photoacoustic imaging

    NASA Astrophysics Data System (ADS)

    Carson, Jeffrey J. L.; Roumeliotis, Michael; Chaudhary, Govind; Stodilka, Robert Z.; Anastasio, Mark A.

    2010-06-01

    Our group has concentrated on development of a 3D photoacoustic imaging system for biomedical imaging research. The technology employs a sparse parallel detection scheme and specialized reconstruction software to obtain 3D optical images using a single laser pulse. With the technology we have been able to capture 3D movies of translating point targets and rotating line targets. The current limitation of our 3D photoacoustic imaging approach is its inability to reconstruct complex objects in the field of view. This is primarily due to the relatively small number of projections used to reconstruct objects. However, in many photoacoustic imaging situations, only a few objects may be present in the field of view and these objects may have very high contrast compared to background. That is, the objects have sparse properties. Therefore, our work had two objectives: (i) to utilize mathematical tools to evaluate 3D photoacoustic imaging performance, and (ii) to test image reconstruction algorithms that prefer sparseness in the reconstructed images. Our approach was to utilize singular value decomposition techniques to study the imaging operator of the system and evaluate the complexity of objects that could potentially be reconstructed. We also compared the performance of two image reconstruction algorithms (algebraic reconstruction and l1-norm techniques) at reconstructing objects of increasing sparseness. We observed that for a 15-element detection scheme, the number of measurable singular vectors representative of the imaging operator was consistent with the demonstrated ability to reconstruct point and line targets in the field of view. We also observed that the l1-norm reconstruction technique, which is known to prefer sparseness in reconstructed images, was superior to the algebraic reconstruction technique.
Based on these findings, we concluded (i) that singular value decomposition of the imaging operator provides valuable insight into the capabilities of a 3D photoacoustic imaging system, and (ii) that reconstruction algorithms which favor sparseness can significantly improve imaging performance. These methodologies should provide a means to optimize detector count and geometry for a multitude of 3D photoacoustic imaging applications.
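
    The two analyses described, SVD of the imaging operator and sparsity-preferring reconstruction, can be sketched on a toy system. A random 15-row matrix stands in for the 15-element detection scheme, and ISTA (iterative soft-thresholding) serves as one common l1 solver, not necessarily the authors':

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 15, 64                       # 15 "detector" measurements, 64 voxels
A = rng.normal(size=(m, n)) / np.sqrt(m)   # toy imaging operator

# Singular value analysis: at most m measurable singular vectors.
print(np.linalg.matrix_rank(A))     # -> 15

x_true = np.zeros(n)
x_true[[5, 20, 40]] = [1.0, -0.8, 0.6]     # sparse scene: 3 point targets
y = A @ x_true

# Minimum-norm least squares (stand-in for algebraic reconstruction):
x_ls = np.linalg.pinv(A) @ y

# l1-regularized reconstruction via ISTA:
lam = 0.01
L = np.linalg.norm(A, 2) ** 2       # Lipschitz constant of the gradient
x = np.zeros(n)
for _ in range(5000):
    z = x - A.T @ (A @ x - y) / L   # gradient step on the data-fit term
    x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold

err_ls = np.linalg.norm(x_ls - x_true)
err_l1 = np.linalg.norm(x - x_true)
print(err_ls, err_l1)  # the sparsity-preferring solver recovers the scene better
```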

  10. A framework for retinal layer intensity analysis for retinal artery occlusion patient based on 3D OCT

    NASA Astrophysics Data System (ADS)

    Liao, Jianping; Chen, Haoyu; Zhou, Chunlei; Chen, Xinjian

    2014-03-01

    Occlusion of the retinal artery leads to severe ischemia and dysfunction of the retina. Quantitative analysis of the reflectivity of the retina is needed to assess the severity of retinal ischemia. In this paper, we propose a framework for retinal layer intensity analysis for retinal artery occlusion (RAO) patients based on 3D OCT images. The proposed framework consists of five main steps. First, a pre-processing step is applied to the input OCT images. Second, a graph search method is applied to segment multiple surfaces in the OCT images. Third, the RAO region is detected based on a texture classification method. Fourth, the layer segmentation is refined using the detected RAO regions. Finally, the retinal layer intensity analysis is performed. The proposed method was tested on 27 clinical spectral-domain OCT images. The preliminary results show the feasibility and efficacy of the proposed method.

  11. Heterodyne 3D ghost imaging

    NASA Astrophysics Data System (ADS)

    Yang, Xu; Zhang, Yong; Yang, Chenghua; Xu, Lu; Wang, Qiang; Zhao, Yuan

    2016-06-01

    Conventional three-dimensional (3D) ghost imaging measures the range of a target based on pulse time-of-flight measurement. Due to the limited sampling rate of the data acquisition system, the range resolution of conventional 3D ghost imaging is usually low. To remove the effect of the sampling rate on the range resolution of 3D ghost imaging, a heterodyne 3D ghost imaging (HGI) system is presented in this study. The source of HGI is a continuous wave laser instead of a pulsed laser. Temporal correlation and spatial correlation of light are both utilized to obtain the range image of the target. Through theoretical analysis and numerical simulations, it is demonstrated that HGI can obtain high range resolution images with a low sampling rate.
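
    The correlation step at the heart of ghost imaging can be sketched computationally. The sketch below is the conventional second-order intensity correlation on a synthetic target, not the heterodyne range-resolved scheme of the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
side, nshots = 16, 20000
obj = np.zeros((side, side))
obj[5:11, 6:10] = 1.0               # simple bright target
t = obj.ravel()

# Random illumination patterns and the single-pixel "bucket" signal.
patterns = rng.random((nshots, side * side))
bucket = patterns @ t

# Second-order correlation reconstruction: <B*I> - <B><I>.
recon = ((bucket[:, None] * patterns).mean(axis=0)
         - bucket.mean() * patterns.mean(axis=0)).reshape(side, side)

# Target pixels correlate with the bucket signal; background does not.
print(recon[7, 8], recon[0, 0])
```

    In the transverse plane this correlation recovers the object; the heterodyne scheme additionally exploits temporal correlation to recover range.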

  12. Autofocus for 3D imaging

    NASA Astrophysics Data System (ADS)

    Lee-Elkin, Forest

    2008-04-01

    Three dimensional (3D) autofocus remains a significant challenge for the development of practical 3D multipass radar imaging. The current 2D radar autofocus methods are not readily extendable across sensor passes. We propose a general framework that allows a class of data adaptive solutions for 3D auto-focus across passes with minimal constraints on the scene contents. The key enabling assumption is that portions of the scene are sparse in elevation which reduces the number of free variables and results in a system that is simultaneously solved for scatterer heights and autofocus parameters. The proposed method extends 2-pass interferometric synthetic aperture radar (IFSAR) methods to an arbitrary number of passes allowing the consideration of scattering from multiple height locations. A specific case from the proposed autofocus framework is solved and demonstrates autofocus and coherent multipass 3D estimation across the 8 passes of the "Gotcha Volumetric SAR Data Set" X-Band radar data.

  13. 3-D segmentation of retinal blood vessels in spectral-domain OCT volumes of the optic nerve head

    NASA Astrophysics Data System (ADS)

    Lee, Kyungmoo; Abràmoff, Michael D.; Niemeijer, Meindert; Garvin, Mona K.; Sonka, Milan

    2010-03-01

    Segmentation of retinal blood vessels can provide important information for detecting and tracking retinal vascular diseases including diabetic retinopathy, arterial hypertension, arteriosclerosis and retinopathy of prematurity (ROP). Many studies on 2-D segmentation of retinal blood vessels from a variety of medical images have been performed. However, 3-D segmentation of retinal blood vessels from spectral-domain optical coherence tomography (OCT) volumes, which is capable of providing geometrically accurate vessel models, to the best of our knowledge, has not been previously studied. The purpose of this study is to develop and evaluate a method that can automatically detect 3-D retinal blood vessels from spectral-domain OCT scans centered on the optic nerve head (ONH). The proposed method utilized a fast multiscale 3-D graph search to segment retinal surfaces as well as a triangular mesh-based 3-D graph search to detect retinal blood vessels. An experiment on 30 ONH-centered OCT scans (15 right eye scans and 15 left eye scans) from 15 subjects was performed, and the mean unsigned error in 3-D of the computer segmentations compared with the independent standard obtained from a retinal specialist was 3.4 +/- 2.5 voxels (0.10 +/- 0.07 mm).

  14. True 3d Images and Their Applications

    NASA Astrophysics Data System (ADS)

    Wang, Zheng

    2012-07-01

    A true 3D image is a geo-referenced image. Besides its radiometric information, it also has true 3D ground coordinates (XYZ) for every pixel. A true 3D image, especially a true 3D oblique image, has true 3D coordinates not only for building roofs and/or open ground, but also for all other visible objects on the ground, such as visible building walls/windows and even trees. The true 3D image breaks the 2D barrier of traditional orthophotos by introducing the third dimension (elevation) into the image. From a true 3D image, for example, people will not only be able to read a building's location (XY), but also its height (Z). True 3D images will fundamentally change, if not revolutionize, the way people display, view, extract, use, and represent geospatial information from imagery. In many areas, true 3D images can make profound impacts on how geospatial information is represented, how true 3D ground modeling is performed, and how real world scenes are presented. This paper first gives a definition and description of a true 3D image, followed by a brief review of the key advancements in geospatial technologies that have made the creation of true 3D images possible. Next, the paper introduces what a true 3D image is made of. Then, the paper discusses some possible contributions and impacts that true 3D images can make on geospatial information fields. Finally, the paper presents a list of the benefits of having and using true 3D images and the applications of true 3D images in a couple of 3D city modeling projects.

  15. Mapping the 3D Connectivity of the Rat Inner Retinal Vascular Network Using OCT Angiography

    PubMed Central

    Leahy, Conor; Radhakrishnan, Harsha; Weiner, Geoffrey; Goldberg, Jeffrey L.; Srinivasan, Vivek J.

    2015-01-01

    Purpose The purpose of this study is to demonstrate three-dimensional (3D) graphing based on optical coherence tomography (OCT) angiography for characterization of the inner retinal vascular architecture and determination of its topologic principles. Methods Rat eyes (N = 3) were imaged with a 1300-nm spectral/Fourier domain OCT microscope. A topologic model of the inner retinal vascular network was obtained from OCT angiography data using a combination of automated and manually-guided image processing techniques. Using a resistive network model, with experimentally-quantified flow in major retinal vessels near the optic nerve head as boundary conditions, theoretical changes in the distribution of flow induced by vessel dilations were inferred. Results A topologically-representative 3D vectorized graph of the inner retinal vasculature, derived from OCT angiography data, is presented. The laminar and compartmental connectivity of the vasculature are characterized. In contrast to sparse connectivity between the superficial vitreal vasculature and capillary plexuses of the inner retina, connectivity between the two capillary plexus layers is dense. Simulated dilation of single arterioles is shown to produce both localized and lamina-specific changes in blood flow, while dilation of capillaries in a given retinal vascular layer is shown to lead to increased total flow in that layer. Conclusions Our graphing and modeling data suggest that vascular architecture enables both local and lamina-specific control of blood flow in the inner retina. The imaging, graph analysis, and modeling approach presented here will help provide a detailed characterization of vascular changes in a variety of retinal diseases, both in experimental preclinical models and human subjects. PMID:26325417
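
    The resistive network model in the Methods can be sketched on a toy vessel graph. The node layout and conductances below are invented; node pressures are solved from Kirchhoff's current law and edge flows then follow from pressure differences:

```python
import numpy as np

# Toy vessel graph: node 0 is the inlet, node 4 the outlet;
# each edge carries a made-up conductance g (flow = g * pressure drop).
edges = [(0, 1, 2.0), (0, 2, 1.0), (1, 3, 1.0), (2, 3, 2.0), (3, 4, 3.0)]
n = 5
L = np.zeros((n, n))                 # graph Laplacian weighted by conductance
for i, j, g in edges:
    L[i, i] += g; L[j, j] += g; L[i, j] -= g; L[j, i] -= g

# Boundary conditions: fix pressure at inlet (1.0) and outlet (0.0),
# then solve Kirchhoff's current law L p = 0 at the interior nodes.
fixed = {0: 1.0, 4: 0.0}
free = [k for k in range(n) if k not in fixed]
b = -sum(L[:, k] * v for k, v in fixed.items())
p = np.zeros(n)
for k, v in fixed.items():
    p[k] = v
p[free] = np.linalg.solve(L[np.ix_(free, free)], b[free])

# Edge flows from pressure differences: q = g * (p_i - p_j).
flows = {(i, j): g * (p[i] - p[j]) for i, j, g in edges}
total_in = flows[(0, 1)] + flows[(0, 2)]
print(total_in)  # inflow equals the outflow through edge (3, 4)
```

    Dilating a vessel corresponds to raising one edge's conductance and re-solving, which is how the simulated flow redistributions in the abstract can be computed.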

  16. Transplantation of Embryonic and Induced Pluripotent Stem Cell-Derived 3D Retinal Sheets into Retinal Degenerative Mice

    PubMed Central

    Assawachananont, Juthaporn; Mandai, Michiko; Okamoto, Satoshi; Yamada, Chikako; Eiraku, Mototsugu; Yonemura, Shigenobu; Sasai, Yoshiki; Takahashi, Masayo

    2014-01-01

    Summary In this article, we show that mouse embryonic stem cell- or induced pluripotent stem cell-derived 3D retinal tissue developed a structured outer nuclear layer (ONL) with complete inner and outer segments even in an advanced retinal degeneration model (rd1) that lacked ONL. We also observed host-graft synaptic connections by immunohistochemistry. This study provides a “proof of concept” for retinal sheet transplantation therapy for advanced retinal degenerative diseases. PMID:24936453

  17. Synthetic 3D diamond-based electrodes for flexible retinal neuroprostheses: Model, production and in vivo biocompatibility.

    PubMed

    Bendali, Amel; Rousseau, Lionel; Lissorgues, Gaëlle; Scorsone, Emmanuel; Djilas, Milan; Dégardin, Julie; Dubus, Elisabeth; Fouquet, Stéphane; Benosman, Ryad; Bergonzo, Philippe; Sahel, José-Alain; Picaud, Serge

    2015-10-01

    Two retinal implants have recently received the CE mark and one has obtained FDA approval for the restoration of useful vision in blind patients. Since the spatial resolution of current vision prostheses is not sufficient for most patients to detect faces or perform activities of daily living, more electrodes with less crosstalk are needed to transfer complex images to the retina. In this study, we modelled planar and three-dimensional (3D) implants with a distant ground or a ground grid, to demonstrate greater spatial resolution with 3D structures. Using such flexible 3D implant prototypes, we showed that the degenerated retina could mould itself to the inside of the wells, thereby isolating bipolar neurons for specific, independent stimulation. To investigate the in vivo biocompatibility of diamond as an electrode or an isolating material, we developed a procedure for depositing diamond onto flexible 3D retinal implants. Taking polyimide 3D implants as a reference, we compared the number of neurons integrating the 3D diamond structures and their ratio to the numbers of all cells, including glial cells. The number of bipolar neurons increased, whereas the total cell number showed no increase, and even a decrease. SEM examinations of implants confirmed the stability of the diamond after its implantation in vivo. This study further demonstrates the potential of 3D designs for increasing the resolution of retinal implants and validates the safety of diamond materials for retinal implants and neuroprostheses in general. PMID:26210174

  18. ATR for 3D medical imaging

    NASA Astrophysics Data System (ADS)

    Jannson, Tomasz; Kostrzewski, Andrew; Paki Amouzou, P.

    2007-09-01

    This paper presents a novel concept of Automatic Target Recognition (ATR) for 3D medical imaging. Such 3D imaging can be obtained from X-ray Computerized Tomography (CT), Magnetic Resonance Imaging (MRI), Positron Emission Tomography (PET), Ultrasonography (USG), functional MRI, and others. In the case of CT, such 3D imaging can be derived from 3D-mapping of X-ray linear attenuation coefficients, related to 3D Fourier transform of Radon transform, starting from frame segmentation (or contour definition) into an object and background. Then, 3D template matching is provided, based on inertial tensor invariants, adopted from rigid body mechanics, by comparing the mammographic data base with a real object of interest, such as a malignant breast tumor. The method is more general than CAD breast mammography.

  19. 3D ultrafast ultrasound imaging in vivo

    NASA Astrophysics Data System (ADS)

    Provost, Jean; Papadacci, Clement; Esteban Arango, Juan; Imbault, Marion; Fink, Mathias; Gennisson, Jean-Luc; Tanter, Mickael; Pernot, Mathieu

    2014-10-01

    Very high frame rate ultrasound imaging has recently allowed for the extension of the applications of echography to new fields of study such as the functional imaging of the brain, cardiac electrophysiology, and the quantitative imaging of the intrinsic mechanical properties of tumors, to name a few, non-invasively and in real time. In this study, we present the first implementation of Ultrafast Ultrasound Imaging in 3D based on the use of either diverging or plane waves emanating from a sparse virtual array located behind the probe. It achieves high contrast and resolution while maintaining imaging rates of thousands of volumes per second. A customized portable ultrasound system was developed to sample 1024 independent channels and to drive a 32 × 32 matrix-array probe. Its ability to track in 3D transient phenomena occurring in the millisecond range within a single ultrafast acquisition was demonstrated for 3D Shear-Wave Imaging, 3D Ultrafast Doppler Imaging, and, finally, 3D Ultrafast combined Tissue and Flow Doppler Imaging. The propagation of shear waves was tracked in a phantom and used to characterize its stiffness. 3D Ultrafast Doppler was used to obtain 3D maps of Pulsed Doppler, Color Doppler, and Power Doppler quantities in a single acquisition and revealed, at thousands of volumes per second, the complex 3D flow patterns occurring in the ventricles of the human heart during an entire cardiac cycle, as well as the 3D in vivo interaction of blood flow and wall motion during the pulse wave in the carotid at the bifurcation. This study demonstrates the potential of 3D Ultrafast Ultrasound Imaging for the 3D mapping of stiffness, tissue motion, and flow in humans in vivo and promises new clinical applications of ultrasound with reduced intra- and inter-observer variability.

  20. 3D Ultrafast Ultrasound Imaging In Vivo

    PubMed Central

    Provost, Jean; Papadacci, Clement; Arango, Juan Esteban; Imbault, Marion; Gennisson, Jean-Luc; Tanter, Mickael; Pernot, Mathieu

    2014-01-01

    Very high frame rate ultrasound imaging has recently allowed for the extension of the applications of echography to new fields of study such as the functional imaging of the brain, cardiac electrophysiology, and the quantitative real-time imaging of the intrinsic mechanical properties of tumors, to name a few, non-invasively and in real time. In this study, we present the first implementation of Ultrafast Ultrasound Imaging in three dimensions based on the use of either diverging or plane waves emanating from a sparse virtual array located behind the probe. It achieves high contrast and resolution while maintaining imaging rates of thousands of volumes per second. A customized portable ultrasound system was developed to sample 1024 independent channels and to drive a 32×32 matrix-array probe. Its capability to track in 3D transient phenomena occurring in the millisecond range within a single ultrafast acquisition was demonstrated for 3-D Shear-Wave Imaging, 3-D Ultrafast Doppler Imaging and finally 3D Ultrafast combined Tissue and Flow Doppler. The propagation of shear waves was tracked in a phantom and used to characterize its stiffness. 3-D Ultrafast Doppler was used to obtain 3-D maps of Pulsed Doppler, Color Doppler, and Power Doppler quantities in a single acquisition and revealed, for the first time, the complex 3-D flow patterns occurring in the ventricles of the human heart during an entire cardiac cycle, and the 3-D in vivo interaction of blood flow and wall motion during the pulse wave in the carotid at the bifurcation. This study demonstrates the potential of 3-D Ultrafast Ultrasound Imaging for the 3-D real-time mapping of stiffness, tissue motion, and flow in humans in vivo and promises new clinical applications of ultrasound with reduced intra- and inter-observer variability. PMID:25207828

  1. Accelerated 3D image registration

    NASA Astrophysics Data System (ADS)

    Vester-Christensen, Martin; Erbou, Søren G.; Darkner, Sune; Larsen, Rasmus

    2007-03-01

    Image registration is an important task in most medical imaging applications. Numerous algorithms have been proposed and some are widely used. However, due to the vast amount of data collected by, e.g., a computed tomography (CT) scanner, most registration algorithms are very slow and memory consuming. This is a huge problem especially in atlas building, where potentially hundreds of registrations are performed. This paper describes an approach for accelerated image registration. A grid-based warp function proposed by Cootes and Twining, parameterized by the displacement of the grid-nodes, is used. Using a coarse-to-fine approach, the composition of small diffeomorphic warps results in a final diffeomorphic warp. Normally the registration is done using a standard gradient-based optimizer, but to obtain a fast algorithm the optimization is formulated in the inverse compositional framework proposed by Baker and Matthews. By switching the roles of the target and the input volume, the Jacobian and the Hessian can be pre-calculated, resulting in a very efficient optimization algorithm. By exploiting the local nature of the grid-based warp, the storage requirements of the Jacobian and the Hessian can be minimized. Furthermore, it is shown that additional constraints on the registration, such as the location of markers, are easily embedded in the optimization. The method is applied to volumes built from CT scans of pig carcasses, and results show a two-fold increase in speed using the inverse compositional approach versus the traditional gradient-based method.
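
    The inverse compositional trick, pre-computing the Jacobian and Hessian on the template so that each iteration stays cheap, can be shown in a minimal 1-D translation-only registration. This is a toy analogue of the grid-based 3D warp in the paper, with a synthetic signal:

```python
import numpy as np

def ic_register(template, image, n_iter=50):
    """1-D inverse-compositional alignment for a pure translation.
    The template gradient and the (scalar) Hessian are computed once,
    outside the loop -- the speed trick behind the inverse
    compositional framework."""
    xs = np.arange(len(template), dtype=float)
    grad_t = np.gradient(template)          # fixed template gradient
    hessian = (grad_t ** 2).sum()           # 1-parameter Gauss-Newton Hessian
    p = 0.0                                 # translation estimate
    for _ in range(n_iter):
        warped = np.interp(xs + p, xs, image)   # image sampled at warped coords
        dp = (grad_t * (warped - template)).sum() / hessian
        p -= dp                             # compose the inverse increment
    return p

# Smooth synthetic signal shifted by 3.5 samples.
xs = np.arange(200, dtype=float)
template = np.exp(-0.5 * ((xs - 100.0) / 12.0) ** 2)
image = np.exp(-0.5 * ((xs - 103.5) / 12.0) ** 2)
p = ic_register(template, image)
print(p)  # close to 3.5
```

    In the full method the same pre-computation is done per grid node of the 3D warp, which is where the reported speed-up comes from.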

  2. Evaluation of 3D imaging.

    PubMed

    Vannier, M W

    2000-10-01

    Interactive computer-based simulation is gaining acceptance for craniofacial surgical planning. Subjective visualization without objective measurement capability, however, severely limits the value of simulation since spatial accuracy must be maintained. This study investigated the error sources involved in one method of surgical simulation evaluation. Linear and angular measurement errors were found to be within +/- 1 mm and 1 degree. Surface match of scanned objects was slightly less accurate, with errors up to 3 voxels and 4 degrees, and Boolean subtraction methods were 93 to 99% accurate. Once validated, these testing methods were applied to objectively compare craniofacial surgical simulations to post-operative outcomes, and verified that the form of simulation used in this study yields accurate depictions of surgical outcome. However, to fully evaluate surgical simulation, future work is still required to test the new methods in sufficient numbers of patients to achieve statistically significant results. Once completely validated, simulation can be used not only in pre-operative surgical planning, but also as a post-operative descriptor of surgical and traumatic physical changes. Validated image comparison methods can also show discrepancy of surgical outcome to surgical plan, thus allowing evaluation of surgical technique. PMID:11098409

  3. [3-D imaging of the liver].

    PubMed

    Leppek, R; Klose, K J

    1995-10-01

    Three-dimensional imaging of the liver focuses on the spatial visualization of focal lesions in relation to vascular landmarks to support surgical decision making. In addition, volume estimation of metastases or liver segments is of interest in the patient's oncologic follow-up and in planning the surgical approach. Spiral CT is well suited to 3D imaging, as it is a standardized, minimally invasive, time- and cost-saving method. Current developments in 3D sonography of the liver are reported. While a special application of 3D imaging, CT angiography (CTA), is replacing preoperative arterial angiography in liver transplantation candidates, the diagnostic gold standard for lesion detection remains the interpretation of 2D data sets, owing to the lack of suitable algorithms. Validation of 3D versus 2D imaging of the liver demands controlled trials to evaluate the diagnostic potential, time and cost savings of dedicated acquisition techniques, postprocessing algorithms and display modalities. PMID:7501806

  4. 3D holoscopic video imaging system

    NASA Astrophysics Data System (ADS)

    Steurer, Johannes H.; Pesch, Matthias; Hahne, Christopher

    2012-03-01

    For many years, integral imaging has been discussed as a technique to overcome the limitations of standard still photography imaging systems, in which a three-dimensional scene is irrevocably projected onto two dimensions. With the success of 3D stereoscopic movies, great interest in capturing three-dimensional motion picture scenes has been generated. In this paper, we present a test bench integral imaging camera system aiming to tailor the methods of light field imaging towards capturing integral 3D motion picture content. We estimate the hardware requirements needed to generate high quality 3D holoscopic images and show a prototype camera setup that allows us to study these requirements using existing technology. The necessary steps that are involved in the calibration of the system as well as the technique of generating human readable holoscopic images from the recorded data are discussed.

  5. Small Animal Retinal Imaging

    NASA Astrophysics Data System (ADS)

    Choi, WooJhon; Drexler, Wolfgang; Fujimoto, James G.

    Developing and validating new techniques and methods for small animal imaging is an important research area because there are many small animal models of retinal diseases such as diabetic retinopathy, age-related macular degeneration, and glaucoma [1-6]. Because the retina is a multilayered structure with distinct abnormalities occurring in different intraretinal layers at different stages of disease progression, there is a need for imaging techniques that enable visualization of these layers individually at different time points. Although postmortem histology and ultrastructural analysis can be performed for investigating microscopic changes in the retina in small animal models, this requires sacrificing animals, which makes repeated assessment of the same animal at different time points impossible and increases the number of animals required. Furthermore, some retinal processes such as neurovascular coupling cannot be fully characterized postmortem.

  6. Nonlaser-based 3D surface imaging

    SciTech Connect

    Lu, Shin-yee; Johnson, R.K.; Sherwood, R.J.

    1994-11-15

    3D surface imaging refers to methods that generate a 3D surface representation of objects in a scene under viewing. Laser-based 3D surface imaging systems are commonly used in manufacturing, robotics and biomedical research. Although laser-based systems provide satisfactory solutions for most applications, there are situations where non-laser-based approaches are preferred. The issues that make alternative methods sometimes more attractive are: (1) real-time data capture, (2) eye safety, (3) portability, and (4) working distance. The focus of this presentation is on generating a 3D surface from multiple 2D projected images using CCD cameras, without a laser light source. Two methods are presented: stereo vision and depth-from-focus. Their applications are described.

  7. Miniaturized 3D microscope imaging system

    NASA Astrophysics Data System (ADS)

    Lan, Yung-Sung; Chang, Chir-Weei; Sung, Hsin-Yueh; Wang, Yen-Chang; Chang, Cheng-Yi

    2015-05-01

    We designed and assembled a portable 3-D miniature microscope imaging system with a size of 35×35×105 mm³. By integrating a microlens array (MLA) into the optical train of a handheld microscope, an image of the biological specimen can be captured in a single shot. With the light field raw data and processing software, the focal plane can be changed digitally and the 3-D image reconstructed after the image is taken. To localize an object in a 3-D volume, an automated data analysis algorithm that precisely determines depth position is needed. The ability to create focal stacks from a single image allows moving specimens to be recorded. Applying a light field microscope algorithm to these focal stacks produces a set of cross sections, which can be visualized using 3-D rendering. Furthermore, we have developed a series of design rules to enhance pixel utilization efficiency and reduce crosstalk between microlenses in order to obtain good image quality. In this paper, we demonstrate a handheld light field microscope (HLFM) that distinguishes two different color fluorescence particles separated by a cover glass over a 600 μm range, and show its focal stacks and 3-D positions.
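
    The digital refocusing step can be sketched as shift-and-add over the sub-aperture views of the light field. A toy sketch, assuming the light field is stored as a 4-D array `lf[u, v, y, x]` and using integer shifts only (names and layout are illustrative, not the paper's pipeline):

```python
import numpy as np

def refocus(lf, slope):
    """Shift-and-add refocusing of a light field lf[u, v, y, x]: shift each
    sub-aperture view in proportion to its (u, v) offset from the array
    centre, then average all views (integer shifts only in this sketch)."""
    nu, nv, h, w = lf.shape
    cu, cv = nu // 2, nv // 2
    out = np.zeros((h, w))
    for u in range(nu):
        for v in range(nv):
            dy = int(round(slope * (u - cu)))
            dx = int(round(slope * (v - cv)))
            out += np.roll(lf[u, v], (dy, dx), axis=(0, 1))
    return out / (nu * nv)
```

    Scanning `slope` produces the focal stack: scene content at the matching disparity adds coherently and stays sharp, while content at other depths is averaged out.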

  8. 3D NMR Imaging of Foam Structures

    NASA Astrophysics Data System (ADS)

    Kose, Katsumi

    Three-dimensional foam structures were measured using NMR imaging, and their 3D geometrical properties were analyzed. Eight bubble polyhedra of a polyurethane foam specimen were extracted from 3D NMR image data using a newly developed 3D geometrical structure analysis program, and quantitative geometrical data were measured for the first time for real foam systems. The results agreed well with the study by Matzke with soap bubbles but did not agree with the optimum solution by Weaire and Phelan for 3D space division into equal volume cells with minimum partitional area. The reason for this disagreement is not clear; however, improved foam preparation and more systematic measurements using the method developed here may clarify this difference.

  9. 3D EIT image reconstruction with GREIT.

    PubMed

    Grychtol, Bartłomiej; Müller, Beat; Adler, Andy

    2016-06-01

    Most applications of thoracic EIT use a single plane of electrodes on the chest from which a transverse image 'slice' is calculated. However, interpretation of EIT images is made difficult by the large region above and below the electrode plane to which EIT is sensitive. Volumetric EIT images using two (or more) electrode planes should help compensate, but are little used currently. The Graz consensus reconstruction algorithm for EIT (GREIT) has become popular in lung EIT. One shortcoming of the original formulation of GREIT is its restriction to reconstruction onto a 2D planar image. We present an extension of the GREIT algorithm to 3D and develop open-source tools to evaluate its performance as a function of the choice of stimulation and measurement pattern. Results show 3D GREIT using two electrode layers has significantly more uniform sensitivity profiles through the chest region. Overall, the advantages of 3D EIT are compelling. PMID:27203184
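
    GREIT-type reconstruction is one-step linear: a matrix mapping measurements to an image is trained offline from simulated targets. A toy sketch of that training step, with a random matrix standing in for the EIT sensitivity matrix and point targets in place of GREIT's blurred desired images (all sizes and names are assumptions, not EIDORS code):

```python
import numpy as np

rng = np.random.default_rng(0)
n_pix, n_meas, n_train = 100, 64, 500

# Assumed linear forward model (stand-in for an EIT sensitivity/Jacobian matrix).
A = rng.standard_normal((n_meas, n_pix))

# Training set: point conductivity targets and their simulated noisy measurements.
X = np.zeros((n_pix, n_train))
Y = np.zeros((n_meas, n_train))
for k in range(n_train):
    X[rng.integers(n_pix), k] = 1.0
    Y[:, k] = A @ X[:, k] + 0.01 * rng.standard_normal(n_meas)

# One-step reconstruction matrix: ridge solution of min ||R Y - X||^2 + lam ||R||^2.
lam = 1e-2
R = X @ Y.T @ np.linalg.inv(Y @ Y.T + lam * np.eye(n_meas))

x_hat = R @ Y[:, 0]    # reconstructing an image is then a single matrix product
```

    Extending GREIT to 3D, as the paper does, amounts to training the same kind of matrix on targets distributed through a volume and reconstructing onto a 3D voxel grid.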

  10. Imaging irregularly sampled 3D prestack data

    NASA Astrophysics Data System (ADS)

    Chemingui, Nizar

    Imaging multichannel seismic data for amplitude inversion is a challenging task. The process seeks an inverse for a matrix of very high order that relates the data to a reflectivity model. Due to the irregular coverage of 3D surveys, the matrix is ill-conditioned and its coefficients are badly scaled. In this dissertation, I present a new approach for imaging irregularly sampled 3D data. The strategy is to reduce the size of the full matrix by reducing the size of 3D prestack data before imaging, and to balance the coefficients of the matrix by regularizing the coverage of 3D surveys. I tackle the case of Kirchhoff imaging operators because of their I/O flexibility and computational efficiency. However, after regularization, full-wave extrapolation techniques may become attractive and practical to implement on the regularly sampled prestack volume. For adequately sampled 3D data with varying surface coverage, I use an asymptotic approximate inverse to obtain a good image. I apply a new partial prestack operator named azimuth moveout (AMO) to reduce the size of the prestack data and regularize its coverage by partial stacking. The effects of irregular coverage and varying illumination at depth are reduced by applying a diagonal transformation to the Kirchhoff operator. Problems arise in 3D reflection seismology where fine sampling is not possible and the sparse geometry of 3D surveys results in spatial aliasing. I develop a new dealiasing technique which I refer to as inversion to common offset (ICO). Posing partial stacking as an optimization process, the inversion improves the stack when the data are spatially aliased. I present two formulations for ICO, namely data-space and model-space inversion, and design an efficient implementation of the algorithm in the log-stretch Fourier domain. To accelerate the convergence of the iterative solution I present a new technique for preconditioning the inversion based on row and column scaling.
Results from field marine and land surveys are presented to demonstrate the application of AMO and ICO for regularizing the coverage of 3D surveys and reducing the costs of 3D prestack imaging. The images obtained by prestack migration after regularization are superior to those obtained by migrating the irregularly sampled data. Furthermore, ICO provides a promising approach for reducing the costs of 3D acquisition.
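
    The row-and-column scaling used for preconditioning can be illustrated by simple matrix equilibration: alternately normalizing row and column norms brings a badly scaled operator's coefficients onto a comparable scale, which typically lowers its condition number. A small illustrative sketch (not the dissertation's implementation):

```python
import numpy as np

def equilibrate(A, n_sweeps=10):
    """Alternately scale rows and columns of A to unit 2-norm.
    Returns (As, r, c) such that A = diag(r) @ As @ diag(c)."""
    As = A.astype(float).copy()
    r = np.ones(A.shape[0])
    c = np.ones(A.shape[1])
    for _ in range(n_sweeps):
        rn = np.linalg.norm(As, axis=1)
        rn[rn == 0] = 1.0
        As /= rn[:, None]
        r *= rn
        cn = np.linalg.norm(As, axis=0)
        cn[cn == 0] = 1.0
        As /= cn[None, :]
        c *= cn
    return As, r, c
```

    An iterative solver applied to the equilibrated system converges in fewer iterations; the diagonal factors are reapplied afterwards to recover the solution of the original system.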

  11. Body sway induced by 3D images

    NASA Astrophysics Data System (ADS)

    Hoshino, Miho; Takahashi, Minoru; Oyamada, Kenji; Ohmi, Masao; Yoshizawa, Tatuya

    1997-05-01

    We used body sway to evaluate a viewer's sense of presence with three kinds of 3D displays as follows: a head mounted display (HMD), a 70 inch 3D display and a consumer 3D television. The experiment used images with a fixed foreground and a rolling background as the visual stimuli to induce body sway. The images were taken from a boat rolling at five different frequencies (approx. 0.125, 0.20, 0.25, 0.33, 0.50 Hz). We examined eight healthy adults viewing each of the five images for three minutes on each display. We evaluated body sway using a motion analyzing system to measure the displacement of a marker placed on the head of the subjects. It was found that at all rolling frequencies of the image background, the HMD induced the greatest amount of body sway followed by the large 3D display and then the consumer 3D television. The amount of body sway was the greatest when the rolling frequency was 0.33 Hz. The results showed the amount of body sway depended on the type of display and the rolling frequency.

  12. ICER-3D Hyperspectral Image Compression Software

    NASA Technical Reports Server (NTRS)

    Xie, Hua; Kiely, Aaron; Klimesh, Matthew; Aranki, Nazeeh

    2010-01-01

    Software has been developed to implement the ICER-3D algorithm. ICER-3D effects progressive, three-dimensional (3D), wavelet-based compression of hyperspectral images. If a compressed data stream is truncated, the progressive nature of the algorithm enables reconstruction of hyperspectral data at fidelity commensurate with the given data volume. The ICER-3D software is capable of providing either lossless or lossy compression, and incorporates an error-containment scheme to limit the effects of data loss during transmission. The compression algorithm, which was derived from the ICER image compression algorithm, includes wavelet-transform, context-modeling, and entropy coding subalgorithms. The 3D wavelet decomposition structure used by ICER-3D exploits correlations in all three dimensions of sets of hyperspectral image data, while facilitating elimination of spectral ringing artifacts, using a technique summarized in "Improving 3D Wavelet-Based Compression of Spectral Images" (NPO-41381), NASA Tech Briefs, Vol. 33, No. 3 (March 2009), page 7a. Correlation is further exploited by a context-modeling subalgorithm, which exploits spectral dependencies in the wavelet-transformed hyperspectral data, using an algorithm that is summarized in "Context Modeler for Wavelet Compression of Hyperspectral Images" (NPO-43239), which follows this article. An important feature of ICER-3D is a scheme for limiting the adverse effects of loss of data during transmission. In this scheme, as in the similar scheme used by ICER, the spatial-frequency domain is partitioned into rectangular error-containment regions. In ICER-3D, the partitions extend through all the wavelength bands. The data in each partition are compressed independently of those in the other partitions, so that loss or corruption of data from any partition does not affect the other partitions. 
Furthermore, because compression is progressive within each partition, when data are lost, any data from that partition received prior to the loss can be used to reconstruct that partition at lower fidelity. By virtue of the compression improvement it achieves relative to previous means of onboard data compression, this software enables (1) increased return of hyperspectral scientific data in the presence of limits on the rates of transmission of data from spacecraft to Earth via radio communication links and/or (2) reduction in spacecraft radio-communication power and/or cost through reduction in the amounts of data required to be downlinked and stored onboard prior to downlink. The software is also suitable for compressing hyperspectral images for ground storage or archival purposes.

  13. Acquisition and applications of 3D images

    NASA Astrophysics Data System (ADS)

    Sterian, Paul; Mocanu, Elena

    2007-08-01

    The moiré fringe method and its analysis, up to medical and entertainment applications, are discussed in this paper. We describe the procedure of capturing 3D images with an Inspeck Camera, a real-time 3D shape acquisition system based on structured light techniques. The method is a high-resolution one. After processing the images using a computer, we can use the data to create fashionable objects by engraving them with a Q-switched Nd:YAG laser. In the medical field, we mention plastic surgery and the replacement of X-rays, especially in pediatric use.

  14. 3D camera tracking from disparity images

    NASA Astrophysics Data System (ADS)

    Kim, Kiyoung; Woo, Woontack

    2005-07-01

    In this paper, we propose a robust camera tracking method that uses disparity images computed from known parameters of a 3D camera and multiple epipolar constraints. We assume that the baselines between lenses in the 3D camera and the intrinsic parameters are known. The proposed method reduces camera motion uncertainty encountered during camera tracking. Specifically, we first obtain corresponding feature points between initial lenses using a normalized correlation method. In conjunction with matching features, we get disparity images. When the camera moves, the corresponding feature points, obtained from each lens of the 3D camera, are robustly tracked via the Kanade-Lucas-Tomasi (KLT) tracking algorithm. Secondly, relative pose parameters of each lens are calculated via Essential matrices. Essential matrices are computed from the Fundamental matrix, calculated using the normalized 8-point algorithm with a RANSAC scheme. Then, we determine the scale factor of the translation matrix by d-motion. This is required because the camera motion obtained from the Essential matrix is only determined up to scale. Finally, we optimize camera motion using multiple epipolar constraints between lenses and d-motion constraints computed from disparity images. The proposed method can be widely adopted in Augmented Reality (AR) applications, 3D reconstruction using a 3D camera, and surveillance systems that need not only depth information but also camera motion parameters in real time.
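
    The normalized 8-point estimation referenced above can be sketched as follows (a minimal sketch without the RANSAC loop, assuming noiseless correspondences; all helper names are illustrative):

```python
import numpy as np

def normalize_points(pts):
    """Hartley normalization: centroid to origin, mean distance sqrt(2)."""
    c = pts.mean(axis=0)
    s = np.sqrt(2) / np.linalg.norm(pts - c, axis=1).mean()
    T = np.array([[s, 0.0, -s * c[0]],
                  [0.0, s, -s * c[1]],
                  [0.0, 0.0, 1.0]])
    ph = np.column_stack([pts, np.ones(len(pts))])
    return (T @ ph.T).T, T

def fundamental_8point(x1, x2):
    """Normalized 8-point estimate of F satisfying x2_h^T F x1_h = 0."""
    p1, T1 = normalize_points(x1)
    p2, T2 = normalize_points(x2)
    A = np.column_stack([p2[:, 0] * p1[:, 0], p2[:, 0] * p1[:, 1], p2[:, 0],
                         p2[:, 1] * p1[:, 0], p2[:, 1] * p1[:, 1], p2[:, 1],
                         p1[:, 0], p1[:, 1], np.ones(len(x1))])
    F = np.linalg.svd(A)[2][-1].reshape(3, 3)   # null vector of A
    U, S, Vt = np.linalg.svd(F)
    F = U @ np.diag([S[0], S[1], 0.0]) @ Vt     # enforce rank 2
    F = T2.T @ F @ T1                           # undo the normalization
    return F / np.linalg.norm(F)
```

    With known intrinsics K, the Essential matrix then follows as E = Kᵀ F K, from which the relative rotation and (up-to-scale) translation are extracted.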

  15. High definition 3D ultrasound imaging.

    PubMed

    Morimoto, A K; Krumm, J C; Kozlowski, D M; Kuhlmann, J L; Wilson, C; Little, C; Dickey, F M; Kwok, K S; Rogers, B; Walsh, N

    1997-01-01

    We have demonstrated high definition and improved resolution using a novel scanning system integrated with a commercial ultrasound machine. The result is a volumetric 3D ultrasound data set that can be visualized using standard techniques. Unlike other 3D ultrasound images, image quality is improved over standard 2D data. Image definition and bandwidth are improved using patent-pending techniques. The system can be used to image patients or wounded soldiers for general imaging of anatomy such as abdominal organs, extremities, and the neck. Although the risks associated with x-ray carcinogenesis are relatively low at diagnostic dose levels, concerns remain for individuals in high risk categories. In addition, cost and portability of CT and MRI machines can be prohibitive. In comparison, ultrasound can provide portable, low-cost, non-ionizing imaging. Previous clinical trials comparing ultrasound to CT were used to demonstrate qualitative and quantitative improvements of ultrasound using the Sandia technologies. Transverse leg images demonstrated much higher clarity and lower noise than is seen in traditional ultrasound images. An x-ray CT scan was provided of the same cross-section for comparison. The results of our most recent trials demonstrate the advantages of 3D ultrasound and motion compensation compared with 2D ultrasound. Metal objects can also be observed within the anatomy. PMID:10168958

  16. Automated three-dimensional choroidal vessel segmentation of 3D 1060 nm OCT retinal data

    PubMed Central

    Kajić, Vedran; Esmaeelpour, Marieh; Glittenberg, Carl; Kraus, Martin F.; Honegger, Joachim; Othara, Richu; Binder, Susanne; Fujimoto, James G.; Drexler, Wolfgang

    2012-01-01

    A fully automated, robust vessel segmentation algorithm has been developed for choroidal OCT, employing multiscale 3D edge filtering and projection of “probability cones” to determine the vessel “core”, even in the tomograms with low signal-to-noise ratio (SNR). Based on the ideal vessel response after registration and multiscale filtering, with computed depth related SNR, the vessel core estimate is dilated to quantify the full vessel diameter. As a consequence, various statistics can be computed using the 3D choroidal vessel information, such as ratios of inner (smaller) to outer (larger) choroidal vessels or the absolute/relative volume of choroid vessels. Choroidal vessel quantification can be displayed in various forms, focused and averaged within a special region of interest, or analyzed as the function of image depth. In this way, the proposed algorithm enables unique visualization of choroidal watershed zones, as well as the vessel size reduction when investigating the choroid from the sclera towards the retinal pigment epithelium (RPE). To the best of our knowledge, this is the first time that an automatic choroidal vessel segmentation algorithm is successfully applied to 1060 nm 3D OCT of healthy and diseased eyes. PMID:23304653

  17. 3-D Adaptive Sparsity Based Image Compression With Applications to Optical Coherence Tomography.

    PubMed

    Fang, Leyuan; Li, Shutao; Kang, Xudong; Izatt, Joseph A; Farsiu, Sina

    2015-06-01

    We present a novel general-purpose compression method for tomographic images, termed 3D adaptive sparse representation based compression (3D-ASRC). In this paper, we focus on applications of 3D-ASRC for the compression of ophthalmic 3D optical coherence tomography (OCT) images. The 3D-ASRC algorithm exploits correlations among adjacent OCT images to improve compression performance, yet is sensitive to preserving their differences. Due to the inherent denoising mechanism of the sparsity based 3D-ASRC, the quality of the compressed images is often better than that of the raw images they are based on. Experiments on clinical-grade retinal OCT images demonstrate the superiority of the proposed 3D-ASRC over other well-known compression methods. PMID:25561591

  18. Volume-Rendering-Based Interactive 3D Measurement for Quantitative Analysis of 3D Medical Images

    PubMed Central

    Dai, Yakang; Yang, Yuetao; Kuai, Duojie; Yang, Xiaodong

    2013-01-01

    3D medical images are widely used to assist diagnosis and surgical planning in clinical applications, where quantitative measurement of interesting objects in the image is of great importance. Volume rendering is widely used for qualitative visualization of 3D medical images. In this paper, we introduce a volume-rendering-based interactive 3D measurement framework for quantitative analysis of 3D medical images. In the framework, 3D widgets and volume clipping are integrated with volume rendering. Specifically, 3D plane widgets are manipulated to clip the volume to expose interesting objects. 3D plane widgets, 3D line widgets, and 3D angle widgets are then manipulated to measure the areas, distances, and angles of interesting objects. The methodology of the proposed framework is described. Experimental results indicate the performance of the interactive 3D measurement framework. PMID:23762199

  19. Intra-retinal layer segmentation of 3D optical coherence tomography using coarse grained diffusion map

    PubMed Central

    Kafieh, Raheleh; Rabbani, Hossein; Abramoff, Michael D.; Sonka, Milan

    2013-01-01

    Optical coherence tomography (OCT) is a powerful and noninvasive method for retinal imaging. In this paper, we introduce a fast segmentation method based on a new variant of spectral graph theory named diffusion maps. The research is performed on spectral domain (SD) OCT images depicting macular and optic nerve head appearance. The presented approach does not require edge-based image information to localize most boundaries and relies on regional image texture. Consequently, the proposed method demonstrates robustness in situations of low image contrast or poor layer-to-layer image gradients. Diffusion mapping applied to 2D and 3D OCT datasets is composed of two steps, one for partitioning the data into important and less important sections, and another for localization of internal layers. In the first step, the pixels/voxels are grouped in rectangular/cubic sets to form graph nodes. The weights of the graph are calculated based on geometric distances between pixels/voxels and differences of their mean intensity. The first diffusion map clusters the data into three parts, the second of which is the area of interest. The other two sections are eliminated from the remaining calculations. In the second step, the remaining area is subjected to another diffusion map assessment and the internal layers are localized based on their textural similarities. The proposed method was tested on 23 datasets from two patient groups (glaucoma and normal). The mean unsigned border positioning errors (mean ± SD) were 8.52 ± 3.13 μm and 7.56 ± 2.95 μm for the 2D and 3D methods, respectively. PMID:23837966
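
    The core diffusion-map computation can be sketched on generic point data (not OCT voxel nodes; the Gaussian kernel and all names are illustrative assumptions):

```python
import numpy as np

def diffusion_map(X, eps, n_components=2, t=1):
    """Diffusion-map embedding of row-vector points X with Gaussian kernel width eps."""
    d2 = ((X[:, None, :] - X[None, :, :])**2).sum(-1)
    W = np.exp(-d2 / eps)                         # affinity graph weights
    D = W.sum(axis=1)
    S = W / np.sqrt(D[:, None] * D[None, :])      # symmetric normalization
    vals, vecs = np.linalg.eigh(S)
    order = np.argsort(vals)[::-1]                # largest first; mode 0 is trivial
    vals, vecs = vals[order], vecs[:, order]
    psi = vecs / np.sqrt(D)[:, None]              # right eigenvectors of D^-1 W
    return (vals[1:n_components + 1]**t) * psi[:, 1:n_components + 1]
```

    Euclidean distance in the embedded coordinates approximates diffusion distance on the graph, so clustering in this low-dimensional space (as the paper does for region and layer grouping) separates groups connected by high-affinity paths.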

  20. Metrological characterization of 3D imaging devices

    NASA Astrophysics Data System (ADS)

    Guidi, G.

    2013-04-01

    Manufacturers often express the performance of a 3D imaging device in various non-uniform ways owing to the lack of internationally recognized standard requirements for the metrological parameters that identify the capability of capturing a real scene. For this reason, several national and international organizations have over the last ten years been developing protocols for verifying such performance. Ranging from VDI/VDE 2634, published by the Association of German Engineers and oriented to the world of mechanical 3D measurements (triangulation-based devices), to the ASTM technical committee E57, working also on laser systems based on direct range detection (TOF, phase shift, FM-CW, flash LADAR), this paper shows the state of the art in the characterization of active range devices, with special emphasis on measurement uncertainty, accuracy and resolution. Most of these protocols are based on special objects whose shape and size are certified with a known level of accuracy. By capturing the 3D shape of such objects with a range device, a comparison between the measured points and the theoretical shape they should represent is possible. The actual deviations can be analyzed directly, or derived parameters can be obtained (e.g. angles between planes, distances between barycenters of rigidly connected spheres, frequency domain parameters, etc.). This paper presents theoretical aspects and experimental results of some novel characterization methods applied to different categories of active 3D imaging devices based on both principles of triangulation and direct range detection.
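
    The object-based comparison these protocols rely on can be sketched for a certified sphere: fit a sphere to the measured point cloud, then inspect the per-point deviations. A minimal algebraic least-squares sketch (illustrative, not any specific protocol's prescribed estimator):

```python
import numpy as np

def fit_sphere(pts):
    """Algebraic least-squares sphere fit: ||p - c||^2 = r^2, linearized in (c, r).
    Expanding gives  2 c . p + (r^2 - |c|^2) = |p|^2,  linear in the unknowns."""
    A = np.column_stack([2 * pts, np.ones(len(pts))])
    b = (pts**2).sum(axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    c = sol[:3]
    r = np.sqrt(sol[3] + c @ c)
    dev = np.linalg.norm(pts - c, axis=1) - r     # signed radial deviations
    return c, r, dev
```

    The deviation statistics (RMS, span, spatial pattern) are then compared against the artifact's certified form error to characterize the scanner's uncertainty.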

  1. Signal subspace registration of 3D images

    NASA Astrophysics Data System (ADS)

    Soumekh, Mehrdad

    1998-06-01

    This paper addresses the problem of fusing the information content of two uncalibrated sensors. This problem arises in registering images of a scene when it is viewed via two different sensory systems, or detecting change in a scene when it is viewed at two different time points by a sensory system (or via two different sensory systems or observation channels). We are concerned with sensory systems which have not only a relative shift, scaling and rotational calibration error, but also an unknown point spread function (that is time-varying for a single sensor, or different for two sensors). By modeling one image in terms of an unknown linear combination of the other image, its powers and their spatially-transformed (shift, rotation and scaling) versions, a signal subspace processing is developed for fusing uncalibrated sensors. Numerical results with realistic 3D magnetic resonance images of a patient with multiple sclerosis, which are acquired at two different time points, are provided.

  2. 3D goes digital: from stereoscopy to modern 3D imaging techniques

    NASA Astrophysics Data System (ADS)

    Kerwien, N.

    2014-11-01

    In the 19th century, English physicist Charles Wheatstone discovered stereopsis, the basis for 3D perception. His construction of the first stereoscope established the foundation for stereoscopic 3D imaging. Since then, many optical instruments were influenced by these basic ideas. In recent decades, the advent of digital technologies revolutionized 3D imaging. Powerful readily available sensors and displays combined with efficient pre- or post-processing enable new methods for 3D imaging and applications. This paper draws an arc from basic concepts of 3D imaging to modern digital implementations, highlighting instructive examples from its 175 years of history.

  3. A 3D-continuum bidomain model of retinal electrical stimulation using an anatomically detailed mesh.

    PubMed

    Shalbaf, Farzaneh; Du, Peng; Lovell, Nigel H; Dokos, Socrates; Vaghefi, Ehsan

    2015-08-01

    A continuum bidomain model of sub-retinal electrical stimulation on an anatomically detailed mesh of retina is presented. The underlying geometry is made up of 256 B-scans of optical coherence tomography (OCT) images of a healthy human retina, covering approximately 6×2 mm² centered on the macula. The OCT images are initially segmented and digitized into five major retinal layers comprising passive and active retinal cell types. This computational mesh is then used to model a subretinal hexapolar biphasic electrical stimulation. Our results indicate that the ultra-structure of the retina results in an asymmetric spatial extracellular potential distribution, leading to an irregular pattern of retinal ganglion cell activation. This finding is in contrast to focal circular activation previously reported in retinal electrical stimulation modeling with a uniform mesh. PMID:26736750

  4. 3D Segmentation of Fluid-Associated Abnormalities in Retinal OCT: Probability Constrained Graph-Search–Graph-Cut

    PubMed Central

    Chen, Xinjian; Niemeijer, Meindert; Zhang, Li; Lee, Kyungmoo; Abràmoff, Michael D.; Sonka, Milan

    2013-01-01

    An automated method is reported for segmenting 3D fluid and fluid-associated abnormalities in the retina, so-called Symptomatic Exudate-Associated Derangements (SEAD), from 3D OCT retinal images of subjects suffering from exudative age-related macular degeneration. In the first stage of a two-stage approach, retinal layers are segmented, candidate SEAD regions identified, and the retinal OCT image is flattened using a candidate-SEAD aware approach. In the second stage, a probability constrained combined graph search – graph cut method refines the candidate SEADs by integrating the candidate volumes into the graph cut cost function as probability constraints. The proposed method was evaluated on 15 spectral domain OCT images from 15 subjects undergoing intravitreal anti-VEGF injection treatment. Leave-one-out evaluation resulted in a true positive volume fraction (TPVF), false positive volume fraction (FPVF) and relative volume difference ratio (RVDR) of 86.5%, 1.7% and 12.8%, respectively. The new graph cut – graph search method significantly outperformed both the traditional graph cut and traditional graph search approaches (p<0.01, p<0.04) and has the potential to improve clinical management of patients with choroidal neovascularization due to exudative age-related macular degeneration. PMID:22453610

  5. Filling in the retinal image

    NASA Technical Reports Server (NTRS)

    Larimer, James; Piantanida, Thomas

    1990-01-01

    The optics of the eye form an image on a surface at the back of the eyeball called the retina. The retina contains the photoreceptors that sample the image and convert it into a neural signal. The spacing of the photoreceptors in the retina is not uniform and varies with retinal locus. The central retinal field, called the macula, is densely packed with photoreceptors. The packing density falls off rapidly as a function of retinal eccentricity with respect to the macular region, and there are regions in which there are no photoreceptors at all. The retinal regions without photoreceptors are called blind spots or scotomas. The neural transformations which convert retinal image signals into percepts fill in the gaps and regularize the inhomogeneities of the retinal photoreceptor sampling mosaic. The filling-in mechanism plays an important role in understanding visual performance. The filling-in mechanism is not well understood. A systematic collaborative research program at the Ames Research Center and SRI in Menlo Park, California, was designed to explore this mechanism. It was shown that the perceived fields, which are in fact different from the image on the retina due to filling-in, control some aspects of performance and not others. Researchers have linked these mechanisms to putative mechanisms of color coding and color constancy.

  6. Retinal Image Quality During Accommodation

    PubMed Central

    López-Gil, N.; Martin, J.; Liu, T.; Bradley, A.; Díaz-Muñoz, D.; Thibos, L.

    2013-01-01

    Purpose We asked if retinal image quality is maximum during accommodation, or sub-optimal due to accommodative error, when subjects perform an acuity task. Methods Subjects viewed a monochromatic (552 nm), high-contrast letter target placed at various viewing distances. Wavefront aberrations of the accommodating eye were measured near the endpoint of an acuity staircase paradigm. Refractive state, defined as the optimum target vergence for maximising retinal image quality, was computed by through-focus wavefront analysis to find the power of the virtual correcting lens that maximizes visual Strehl ratio. Results Despite changes in ocular aberrations and pupil size during binocular viewing, retinal image quality and visual acuity typically remain high for all target vergences. When accommodative errors lead to sub-optimal retinal image quality, acuity and measured image quality both decline. However, the effect of accommodative errors on visual acuity is mitigated by pupillary constriction associated with accommodation and binocular convergence, and by binocular summation of dissimilar retinal image blur. Under monocular viewing conditions some subjects displayed significant accommodative lag that reduced visual performance, an effect that was exacerbated by pharmacological dilation of the pupil. Conclusions Spurious measurement of accommodative error can be avoided when the image quality metric used to determine refractive state is compatible with the focusing criteria used by the visual system to control accommodation. Real focusing errors of the accommodating eye do not necessarily produce a reliably measurable loss of image quality or clinically significant loss of visual performance, probably because of increased depth-of-focus due to pupil constriction. When retinal image quality is close to maximum achievable (given the eye's higher-order aberrations), acuity is also near maximum. 
A combination of accommodative lag, reduced image quality, and reduced visual function may be a useful sign for diagnosing functionally-significant accommodative errors indicating the need for therapeutic intervention. PMID:23786386
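    The through-focus search for refractive state described above can be illustrated with a toy computation: add trial defocus to a pupil function, compute the point-spread function by Fourier optics, and keep the defocus that maximizes a Strehl-like peak ratio. This is a minimal numpy sketch, not the study's visual Strehl metric or measured wavefront data; the grid size, pupil radius, and defocus coefficients are arbitrary illustrative choices.

```python
import numpy as np

def strehl_through_focus(pupil_radius_px=32, grid=128,
                         defocus_range=np.linspace(-1, 1, 41)):
    """Find the trial defocus that maximizes a Strehl-like metric
    (peak PSF intensity relative to the diffraction-limited peak)."""
    y, x = np.mgrid[-grid//2:grid//2, -grid//2:grid//2]
    r2 = (x**2 + y**2) / pupil_radius_px**2
    pupil = (r2 <= 1.0).astype(float)
    # diffraction-limited PSF peak for normalization
    peak_dl = (np.abs(np.fft.fft2(pupil))**2).max()
    strehls = []
    for d in defocus_range:
        phase = 2 * np.pi * d * (2 * r2 - 1)   # defocus-like phase term
        psf = np.abs(np.fft.fft2(pupil * np.exp(1j * phase)))**2
        strehls.append(psf.max() / peak_dl)
    best = defocus_range[int(np.argmax(strehls))]
    return best, max(strehls)
```

    For an aberration-free pupil the search correctly returns zero defocus with a Strehl ratio of one; adding higher-order aberration terms to the phase would reproduce the kind of through-focus analysis the study performs on measured wavefronts.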

  7. Automatic needle segmentation in 3D ultrasound images using 3D Hough transform

    NASA Astrophysics Data System (ADS)

    Zhou, Hua; Qiu, Wu; Ding, Mingyue; Zhang, Songgeng

    2007-12-01

    3D ultrasound (US) is a new technology that can be used for a variety of diagnostic applications, such as obstetrical, vascular, and urological imaging, and has shown great potential in image-guided surgery and therapy. Uterine adenoma and uterine bleeding are two of the most prevalent diseases in Chinese women, and a minimally invasive ablation system using a needle-like RF button electrode is currently used to destroy tumor cells or stop bleeding. A 3D US guidance system has now been developed to avoid accidents, or even death of the patient, caused by inaccurate localization of the electrode and the tumor position during treatment. In this paper, we describe two automated techniques, the 3D Hough Transform (3DHT) and the 3D Randomized Hough Transform (3DRHT), which are potentially fast, accurate, and robust, for needle segmentation in 3D US images for imaging guidance. Based on the (Φ, θ, ρ, α) representation of straight lines in 3D space, the 3DHT algorithm segmented needles successfully under the assumption that the approximate needle position and orientation are known a priori. The 3DRHT algorithm was developed to detect needles quickly without any prior information about the 3D US images. The needle segmentation techniques were evaluated using 3D US images acquired by scanning water phantoms. The experiments demonstrated the feasibility of the two 3D needle segmentation algorithms described in this paper.
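    The randomized variant (3DRHT) can be sketched generically: sample point pairs, canonicalize the line through each pair, and vote in a quantized parameter space. This illustrative numpy sketch uses a (direction, foot-point) line parameterization rather than the paper's (Φ, θ, ρ, α) representation, and the quantization step `q` is an arbitrary choice.

```python
import numpy as np
from collections import Counter

def randomized_hough_3d(points, n_samples=2000, q=0.5, rng=None):
    """Detect the dominant 3D line in a point cloud by randomized Hough
    voting: sample point pairs, canonicalize the line through them, and
    vote in a quantized (direction, foot-point) parameter space."""
    rng = np.random.default_rng(rng)
    votes = Counter()
    n = len(points)
    for _ in range(n_samples):
        i, j = rng.choice(n, size=2, replace=False)
        p, r = points[i], points[j]
        d = r - p
        norm = np.linalg.norm(d)
        if norm < 1e-9:
            continue
        d = d / norm
        if d[np.argmax(np.abs(d))] < 0:     # canonical orientation
            d = -d
        foot = p - np.dot(p, d) * d         # point on line closest to origin
        key = tuple(np.round(np.concatenate([d, foot]) / q).astype(int))
        votes[key] += 1
    key, _ = votes.most_common(1)[0]
    params = np.array(key, dtype=float) * q
    return params[:3], params[3:]           # (direction, foot point)
```

    A needle-like cluster of nearly collinear voxels dominates the vote tally even in the presence of scattered outliers, which is the property both 3DHT variants rely on.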

  8. Electrical Characterization of 3D Au Microelectrodes for Use in Retinal Prostheses

    PubMed Central

    Lee, Sangmin; Ahn, Jae Hyun; Seo, Jong-Mo; Chung, Hum; Cho, Dong-Il “Dan”

    2015-01-01

    In order to provide high-quality visual information to patients who have implanted retinal prosthetic devices, the number of microelectrodes should be large. As the number of microelectrodes is increased, the dimensions of each microelectrode must be decreased, which in turn results in an increased microelectrode interface impedance and decreased injection current dynamic range. In order to improve the trade-off envelope between the number of microelectrodes and the current injection characteristics, a 3D microelectrode structure can be used as an alternative. In this paper, the electrical characteristics of 2D and 3D Au microelectrodes were investigated. In order to examine the effects of the structural difference, 2D and 3D Au microelectrodes with different base areas but similar effective surface areas were fabricated and evaluated. Interface impedances were measured and similar dynamic ranges were obtained for both 2D and 3D Au microelectrodes. These results indicate that more electrodes can be implemented in the same area if 3D designs are used. Furthermore, the 3D Au microelectrodes showed substantially enhanced electrical durability characteristics against over-injected stimulation currents, withstanding electrical currents that are much larger than the limit measured for 2D microelectrodes of similar area. This enhanced electrical durability property of 3D Au microelectrodes is a new finding in microelectrode research, and makes 3D microelectrodes very desirable devices. PMID:26091397

  9. Flat Panel Displays For 3-D Imaging

    NASA Astrophysics Data System (ADS)

    Perbet, Jean-Noel

    1983-12-01

    A large number of new display technologies are available for flat panel displays. Currently, each technology has a unique set of features and limitations, which makes it well suited for some applications and inappropriate for others. After a brief explanation of the problems encountered in flat panel display systems, we discuss the various categories of flat panel display technologies, which can be divided into two groups: emissive displays, such as flat cathode ray tubes, electroluminescent displays, and plasma panels; and non-emissive displays, such as the well-known liquid crystal display. When considering any such display, it is especially important to consider its impact on the whole system, and particularly on the electronics, as not all flat panel displays are driven with equal ease. In conclusion, the choice of a flat panel display for 3D imaging will mainly depend on the user's requirements and the flat panel's characteristics.

  10. Photogrammetric 3D reconstruction using mobile imaging

    NASA Astrophysics Data System (ADS)

    Fritsch, Dieter; Syll, Miguel

    2015-03-01

    In our paper we demonstrate the development of an Android application (AndroidSfM) for photogrammetric 3D reconstruction that works on smartphones and tablets alike. The photos are taken with mobile devices and can thereafter be calibrated directly on the device using standard calibration algorithms of photogrammetry and computer vision. Because computing resources on mobile devices are still limited, a client-server handshake using Dropbox transfers the photos to the server, which runs AndroidSfM to estimate the pose of all photos by Structure-from-Motion and thereafter uses the oriented set of photos for dense point cloud estimation by dense image matching algorithms. The result is transferred back to the mobile device for visualization and ad-hoc on-screen measurements.

  11. Imaging a Sustainable Future in 3D

    NASA Astrophysics Data System (ADS)

    Schuhr, W.; Lee, J. D.; Kanngieser, E.

    2012-07-01

    It is the intention of this paper to contribute to a sustainable future by providing objective object information based on 3D photography, as well as to promote 3D photography not only for scientists but also for amateurs. As this article is presented by CIPA Task Group 3 on "3D Photographs in Cultural Heritage", the samples shown are masterpieces of historic as well as of current 3D photography, concentrating on cultural heritage. In addition to a report on exemplary access to international archives of 3D photographs, samples of new 3D photographs taken with modern 3D cameras, with a ground-based high-resolution XLITE staff camera, from a captive balloon, and from civil drone platforms are dealt with. To advise on the optimum-suited 3D methodology, as well as to catch new trends in 3D, an updated synoptic overview of 3D visualization technology, even claiming completeness, has been carried out as the result of a systematic survey. In this respect, for example, today's lasered crystals might be "early bird" products in 3D which, due to their lack of resolution, contrast, and color, recall the early stages of the invention of photography.

  12. Ames Lab 101: Real-Time 3D Imaging

    ScienceCinema

    Zhang, Song

    2012-08-29

    Ames Laboratory scientist Song Zhang explains his real-time 3-D imaging technology. The technique can be used to create high-resolution, real-time, precise, 3-D images for use in healthcare, security, and entertainment applications.

  13. Ames Lab 101: Real-Time 3D Imaging

    SciTech Connect

    Zhang, Song

    2010-01-01

    Ames Laboratory scientist Song Zhang explains his real-time 3-D imaging technology. The technique can be used to create high-resolution, real-time, precise, 3-D images for use in healthcare, security, and entertainment applications.

  14. Retinal Optical Coherence Tomography Imaging

    NASA Astrophysics Data System (ADS)

    Drexler, Wolfgang; Fujimoto, James G.

    The eye is essentially transparent, transmitting light with only minimal optical attenuation and scattering, providing easy optical access to the anterior segment as well as the retina. For this reason, ophthalmic and especially retinal imaging has been not only the first but also the most successful clinical application for optical coherence tomography (OCT). This chapter focuses on the development of OCT technology for retinal imaging. OCT has significantly improved the potential for early diagnosis, understanding of retinal disease pathogenesis, as well as monitoring disease progression and response to therapy. Development of ultrabroad bandwidth light sources and high-speed detection techniques has enabled significant improvements in ophthalmic OCT imaging performance, demonstrating the potential of three-dimensional, ultrahigh-resolution OCT (UHR OCT) to perform noninvasive optical biopsy of the living human retina, i.e., the in vivo visualization of microstructural, intraretinal morphology in situ approaching the resolution of conventional histopathology. Significant improvements in axial resolution and speed not only enable three-dimensional rendering of retinal volumes but also high-definition, two-dimensional tomograms, topographic thickness maps of all major intraretinal layers, as well as volumetric quantification of pathologic intraretinal changes. These advances in OCT technology have also been successfully applied in several animal models of retinal pathologies. The development of light sources emitting at alternative wavelengths, e.g., around 1,060 nm, not only enabled three-dimensional OCT imaging with enhanced choroidal visualization but also improved OCT performance in cataract patients due to reduced scattering losses in this wavelength region.
Adaptive optics using deformable mirror technology, with unique high stroke to correct higher-order ocular aberrations, with specially designed optics to compensate chromatic aberration of the human eye, in combination with three-dimensional UHR OCT, recently enabled in vivo cellular resolution retinal imaging.

  15. Automatic needle segmentation in 3D ultrasound images using 3D improved Hough transform

    NASA Astrophysics Data System (ADS)

    Zhou, Hua; Qiu, Wu; Ding, Mingyue; Zhang, Songgen

    2008-03-01

    3D ultrasound (US) is a new technology that can be used for a variety of diagnostic applications, such as obstetrical, vascular, and urological imaging, and has shown great potential in image-guided surgery and therapy. Uterine adenoma and uterine bleeding are two of the most prevalent diseases in Chinese women, and a minimally invasive ablation system using a needle-like RF button electrode is widely used to destroy tumor cells or stop bleeding. To avoid accidents, or even death of the patient, caused by inaccurate localization of the electrode and the tumor position during treatment, a 3D US guidance system was developed. In this paper, a new automated technique, the 3D Improved Hough Transform (3DIHT) algorithm, which is potentially fast, accurate, and robust, is presented for needle segmentation in 3D US images for imaging guidance. Based on a coarse-fine search strategy and a four-parameter representation of lines in 3D space, the 3DIHT algorithm can segment needles quickly, accurately, and robustly. The technique was evaluated using 3D US images acquired by scanning a water phantom. The position deviation of the segmented line was less than 2 mm and the angular deviation was much less than 2°. The average computation time, measured on a Pentium IV 2.80 GHz PC with a 381×381×250 image, was less than 2 s.

  16. Image Selection for 3d Measurement Based on Network Design

    NASA Astrophysics Data System (ADS)

    Fuse, T.; Harada, R.

    2015-05-01

    3D models have become widely used with the spread of many freely available software tools. At the same time, enormous numbers of images can be easily acquired, and such images are increasingly used to create 3D models. However, creating 3D models from a huge number of images takes considerable time and effort, so efficient 3D measurement is required, and any efficiency strategy must also preserve measurement accuracy. This paper develops an image selection method based on network design, in the sense of surveying network construction. The proposed method uses an image connectivity graph, so that image selection can be treated as a combinatorial optimization problem to which the graph cuts technique can be applied. Additionally, in the process of 3D reconstruction, low-quality and near-duplicate images are detected and removed. Experiments confirm the significance of the proposed method and indicate its potential for efficient and accurate 3D measurement.
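    As a rough illustration of selecting a subset of images from a connectivity graph, here is a greedy cover heuristic: repeatedly keep the image connected to the most not-yet-covered images until every image is covered. This is not the paper's graph-cuts formulation, just a simpler stand-in operating on a hypothetical boolean adjacency matrix whose entries mark sufficient view overlap.

```python
import numpy as np

def greedy_image_selection(adj):
    """Greedy cover heuristic on an image-connectivity graph: keep the
    image that connects to the most images not yet covered, until every
    image is covered by a selected image (or is itself selected)."""
    n = adj.shape[0]
    covered = np.zeros(n, dtype=bool)
    selected = []
    while not covered.all():
        gains = [(~covered & (adj[i] | (np.arange(n) == i))).sum()
                 for i in range(n)]
        best = int(np.argmax(gains))
        selected.append(best)
        covered |= adj[best] | (np.arange(n) == best)
    return selected
```

    On a chain of five mutually overlapping views the heuristic keeps only the two interior images that jointly cover all five, which is the kind of redundancy reduction the paper formalizes as an optimization problem.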

  17. Glasses-free 3D viewing systems for medical imaging

    NASA Astrophysics Data System (ADS)

    Magalhães, Daniel S. F.; Serra, Rolando L.; Vannucci, André L.; Moreno, Alfredo B.; Li, Li M.

    2012-04-01

    In this work we show two different glasses-free 3D viewing systems for medical imaging: a stereoscopic system that employs a vertically dispersive holographic screen (VDHS) and a multi-autostereoscopic system, both used to produce 3D MRI/CT images. We describe how to obtain a VDHS in holographic plates optimized for this application, with field of view of 7 cm to each eye and focal length of 25 cm, showing images done with the system. We also describe a multi-autostereoscopic system, presenting how it can generate 3D medical imaging from viewpoints of a MRI or CT image, showing results of a 3D angioresonance image.

  18. 3D ultrasound imaging for prosthesis fabrication and diagnostic imaging

    SciTech Connect

    Morimoto, A.K.; Bow, W.J.; Strong, D.S.

    1995-06-01

    The fabrication of a prosthetic socket for a below-the-knee amputee requires knowledge of the underlying bone structure in order to provide pressure relief for sensitive areas and support for load bearing areas. The goal is to enable the residual limb to bear pressure with greater ease and utility. Conventional methods of prosthesis fabrication are based on limited knowledge about the patient's underlying bone structure. A 3D ultrasound imaging system was developed at Sandia National Laboratories. The imaging system provides information about the location of the bones in the residual limb along with the shape of the skin surface. Computer assisted design (CAD) software can use this data to design prosthetic sockets for amputees. Ultrasound was selected as the imaging modality. A computer model was developed to analyze the effect of the various scanning parameters and to assist in the design of the overall system. The 3D ultrasound imaging system combines off-the-shelf technology for image capturing, custom hardware, and control and image processing software to generate two types of image data -- volumetric and planar. Both volumetric and planar images reveal definition of skin and bone geometry with planar images providing details on muscle fascial planes, muscle/fat interfaces, and blood vessel definition. The 3D ultrasound imaging system was tested on 9 unilateral below-the-knee amputees. Image data was acquired from both the sound limb and the residual limb. The imaging system was operated in both volumetric and planar formats. An x-ray CT (Computed Tomography) scan was performed on each amputee for comparison. Results of the test indicate beneficial use of ultrasound to generate databases for fabrication of prostheses at a lower cost and with better initial fit as compared to manually fabricated prostheses.

  19. 3D Imaging by Mass Spectrometry: A New Frontier

    PubMed Central

    Seeley, Erin H.; Caprioli, Richard M.

    2012-01-01

    Summary Imaging mass spectrometry can generate three-dimensional volumes showing molecular distributions in an entire organ or animal through registration and stacking of serial tissue sections. Here we review the current state of 3D imaging mass spectrometry as well as provide insights and perspectives on the process of generating 3D mass spectral data along with a discussion of the process necessary to generate a 3D image volume. PMID:22276611

  20. [3-D visualization of dose distributions in CT image volumes].

    PubMed

    Tabbert, E; Riemer, M; Schiemann, T; Tiede, U; Höhne, K H

    1992-01-01

    Visualizing the complete spatial dose distribution in 3D, together with the 3D irradiation volume, requires several data volumes. The structure of the interface between the 3D treatment planning program "ProPlan" and the 3D imaging system "VOXEL-MAN" is explained. First results in radiological applications demonstrate the possibilities of comprehensive registration of dose distributions and critical examination of the irradiation technique. PMID:1734588

  1. Progressive cross-section display of 3D medical images.

    PubMed

    Sigitani, T; Iiguni, Y; Maeda, H

    2000-03-01

    The paper presents a hierarchical coding algorithm for 3D medical images based upon hierarchical interpolation with radial basis function networks. By using the properties of the Kronecker product, the computation of the network parameters and the 3D image reconstruction are efficiently done in O(L^4) computation time and O(L^3) storage space when applied to 3D images of size (L x L x L). A further reduction in processing time is accomplished by using sparse matrix techniques. The salient features of the proposed coding method are that arbitrary cross-section images can be progressively displayed without reconstruction of the whole 3D image; the first image reconstruction starts as soon as the first data transmission has been completed; no expanding procedure is required in 3D image reconstruction; and the blocking effects are not apparent even in the lowest-resolution image. Experimental results using two 3D MRI images, of size (128 x 18 x 64) and with 8-bit grey levels, show that the coding performance is better than that of 3D DCT coding by about 0.25 bits/pixel at higher bit rates, and that the new cross-section display method synthesises the coarsest (finest) section image about six (three) times faster than the standard method, which requires whole 3D image reconstruction. PMID:10829405
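    The kind of Kronecker-product saving the abstract refers to can be illustrated, for the 2D case, with the standard identity (A ⊗ B) vec(X) = vec(B X Aᵀ) under column-major vectorization, which avoids ever forming the L² × L² Kronecker matrix. A small numpy demonstration (the matrix sizes are arbitrary):

```python
import numpy as np

# Identity exploited for fast network-parameter computation:
# (A kron B) vec(X) = vec(B @ X @ A.T), with column-major vec.
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4))
X = rng.standard_normal((4, 4))

slow = np.kron(A, B) @ X.flatten(order="F")   # forms an L^2 x L^2 matrix
fast = (B @ X @ A.T).flatten(order="F")       # only L x L products
assert np.allclose(slow, fast)
```

    The `fast` path costs O(L^3) operations versus O(L^4) for the explicit Kronecker product, mirroring the complexity reduction the paper reports for its 3D generalization.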

  2. Measurable realistic image-based 3D mapping

    NASA Astrophysics Data System (ADS)

    Liu, W.; Wang, J.; Wang, J. J.; Ding, W.; Almagbile, A.

    2011-12-01

    Maps with 3D visual models are becoming a remarkable feature of 3D map services. High-resolution image data are obtained for the construction of 3D visualized models. A 3D map not only provides the capabilities of 3D measurement and knowledge mining, but also provides the virtual experience of places of interest, as demonstrated in Google Earth. Applications of 3D maps are expanding into the areas of architecture, property management, and urban environment monitoring. However, the reconstruction of high-quality 3D models is time consuming and requires robust hardware and powerful software to handle the enormous amount of data. This is especially true for the automatic generation of 3D models and the representation of complicated surfaces, which still need improvements in visualization techniques. The shortcoming of 3D model-based maps is their limited coverage of detail, since a user can only view and measure objects that have already been modelled in the virtual environment. This paper proposes and demonstrates a 3D map concept that is realistic and image-based, and that enables geometric measurements and geo-location services. Additionally, image-based 3D maps provide more detailed information about the real world than 3D model-based maps. Image-based 3D maps use geo-referenced stereo images or panoramic images; the geometric relationships between objects in the images can be resolved from the geometric model of the stereo images. The panoramic function makes 3D maps more interactive with users and also creates an interesting immersive experience. Unmeasurable image-based 3D maps already exist, such as Google Street View, but they only provide virtual experiences in the form of photos; the topographic and terrain attributes, such as shapes and heights, are omitted. This paper also discusses the potential of using a low-cost land Mobile Mapping System (MMS) to implement realistic image-based 3D mapping, and evaluates the positioning accuracy that a measurable realistic image-based (MRI) system can produce. The major contribution here is the implementation of measurable images on 3D maps to obtain various measurements from real scenes.

  3. Diagnostic Capability of Peripapillary Retinal Thickness in Glaucoma Using 3D Volume Scans

    PubMed Central

    Simavli, Huseyin; Que, Christian John; Akduman, Mustafa; Rizzo, Jennifer L.; Tsikata, Edem; de Boer, Johannes F.; Chen, Teresa C.

    2015-01-01

    Purpose: To determine the diagnostic capability of spectral domain optical coherence tomography (SD-OCT) peripapillary retinal thickness (RT) measurements from 3-dimensional (3D) volume scans for primary open angle glaucoma (POAG). Design: Cross-sectional study. Methods: Setting: institutional. Study population: 156 patients (89 POAG and 67 normal subjects). Observation procedures: One eye of each subject was included. SD-OCT peripapillary RT values from 3D volume scans were calculated for four quadrants of three different sized annuli. Peripapillary retinal nerve fiber layer (RNFL) thickness values were also determined. Main outcome measures: Area under the receiver operating characteristic curve (AUROC) values, sensitivity, specificity, positive and negative predictive values, and positive and negative likelihood ratios. Results: The top five RT AUROCs for all glaucoma patients and for a subset of early glaucoma patients were for the inferior quadrant of outer circumpapillary annulus of circular grid (OCA) 1 (0.959, 0.939), inferior quadrant of OCA2 (0.945, 0.921), superior quadrant of OCA1 (0.890, 0.811), inferior quadrant of OCA3 (0.887, 0.854), and superior quadrant of OCA2 (0.879, 0.807). Smaller RT annuli OCA1 and OCA2 consistently showed better diagnostic performance than the larger RT annulus OCA3. For both RNFL and RT measurements, the best AUROC values were found for inferior RT OCA1 and OCA2, followed by inferior and overall RNFL thickness. Conclusion: Peripapillary RT measurements from 3D volume scans showed excellent diagnostic performance for detecting both glaucoma and early glaucoma patients. Peripapillary RT values have the same or better diagnostic capability compared to peripapillary RNFL thickness measurements, while also having fewer algorithm errors. PMID:25498354
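    The AUROC values reported above can be computed directly from the two groups' thickness measurements via the Mann-Whitney statistic, without fitting an explicit ROC curve. A generic sketch (not the study's analysis software):

```python
import numpy as np

def auroc(pos, neg):
    """AUROC via the Mann-Whitney U statistic: the probability that a
    randomly chosen positive-group value ranks above a negative-group
    value, with ties counting one half."""
    pos, neg = np.asarray(pos, float), np.asarray(neg, float)
    diff = pos[:, None] - neg[None, :]
    return ((diff > 0).sum() + 0.5 * (diff == 0).sum()) / (len(pos) * len(neg))
```

    Perfectly separated groups give an AUROC of 1.0 and fully overlapping groups give 0.5; for thickness measures where disease thins the retina, the group with larger values is passed as `pos` (or the result is subtracted from one).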

  4. Serial 3D imaging mass spectrometry at its tipping point.

    PubMed

    Palmer, Andrew D; Alexandrov, Theodore

    2015-04-21

    Since biology is by and large a 3-dimensional phenomenon, it is hardly surprising that 3D imaging has had a significant impact on many challenges in the life sciences. Imaging mass spectrometry (MS) is a spatially resolved, label-free analytical technique that recently matured into a powerful tool for in situ localization of hundreds of molecular species. Serial 3D imaging MS reconstructs 3D molecular images from serial sections imaged with mass spectrometry. As such, it provides a novel 3D imaging modality inheriting the advantages of imaging MS. Serial 3D imaging MS has been steadily developing over the past decade, and many of the technical challenges have been met. Essential tools and protocols were developed, in particular to improve the reproducibility of sample preparation, speed up data acquisition, and enable computationally intensive analysis of the big data generated. As a result, experimental data is starting to emerge that takes advantage of the extra spatial dimension that 3D imaging MS offers. Most studies still focus on method development rather than on exploring specific biological problems. The future success of 3D imaging MS requires it to find its own niche alongside existing 3D imaging modalities through finding applications that benefit from 3D imaging and at the same time utilize the unique chemical sensitivity of imaging mass spectrometry. This perspective critically reviews the challenges encountered during the development of serial-sectioning 3D imaging MS and discusses the steps needed to tip it from being an academic curiosity into a tool of choice for answering biological and medical questions. PMID:25817912

  5. Super resolution for fundoscopy based on 3D image registration.

    PubMed

    Hernandez-Matas, Carlos; Zabulis, Xenophon

    2014-01-01

    An approach to the generation of super-resolution (SR) images from fundoscopy images is proposed that is based on the 3D registration of the original fundoscopy images. The proposed approach utilizes a simple 3D registration method to enable the application of conventional SR techniques which, otherwise, employ 2D image registration. Qualitative and quantitative comparative evaluation shows that the obtained results improve image definition and alleviate noise. PMID:25571445

  6. 3D Imaging with Structured Illumination for Advanced Security Applications

    SciTech Connect

    Birch, Gabriel Carisle; Dagel, Amber Lynn; Kast, Brian A.; Smith, Collin S.

    2015-09-01

    Three-dimensional (3D) information in a physical security system is a highly useful discriminator. The two-dimensional data from an imaging system fail to provide target distance and the three-dimensional motion vector, which can be used to reduce nuisance alarm rates and increase system effectiveness. However, 3D imaging devices designed primarily for use in physical security systems are uncommon. This report discusses an architecture favorable to physical security systems: an inexpensive snapshot 3D imaging system utilizing a simple illumination system. The method of acquiring 3D data, tests to understand illumination design, and possible software modifications to maximize information gathering capability are discussed.

  7. Fringe projection 3D microscopy with the general imaging model.

    PubMed

    Yin, Yongkai; Wang, Meng; Gao, Bruce Z; Liu, Xiaoli; Peng, Xiang

    2015-03-01

    Three-dimensional (3D) imaging and metrology of microstructures is a critical task for the design, fabrication, and inspection of microelements. A newly developed fringe projection 3D microscope is presented in this paper. The system is configured with a camera-projector layout and long working distance lenses. The Scheimpflug principle is employed to make full use of the limited depth of field. For such a specific system, the general imaging model is introduced to achieve a full 3D reconstruction. A dedicated calibration procedure is developed to realize quantitative 3D imaging. Experiments with a prototype demonstrate the feasibility of the proposed configuration, model, and calibration approach. PMID:25836904
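    Although the paper's contribution is the general imaging model and its calibration, fringe projection systems typically recover the wrapped phase by phase-shifting analysis. A standard four-step version, assuming fringe images shifted by π/2 each (not necessarily the authors' exact scheme), is:

```python
import numpy as np

def four_step_phase(i1, i2, i3, i4):
    """Wrapped phase from four fringe images with pi/2 phase shifts,
    I_k = a + b * cos(phi + k * pi/2) for k = 0..3.  The background a
    and modulation b cancel in the differences."""
    return np.arctan2(i4 - i2, i1 - i3)
```

    The wrapped phase is then unwrapped and converted to height through the calibrated imaging model; the atan2 form is insensitive to the unknown background intensity and fringe contrast, which is why four-step shifting is so widely used.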

  8. On Alternative Approaches to 3D Image Perception: Monoscopic 3D Techniques

    NASA Astrophysics Data System (ADS)

    Blundell, Barry G.

    2015-06-01

    In the eighteenth century, techniques that enabled a strong sense of 3D perception to be experienced without recourse to binocular disparities (arising from the spatial separation of the eyes) underpinned the first significant commercial sales of 3D viewing devices and associated content. However following the advent of stereoscopic techniques in the nineteenth century, 3D image depiction has become inextricably linked to binocular parallax and outside the vision science and arts communities relatively little attention has been directed towards earlier approaches. Here we introduce relevant concepts and terminology and consider a number of techniques and optical devices that enable 3D perception to be experienced on the basis of planar images rendered from a single vantage point. Subsequently we allude to possible mechanisms for non-binocular parallax based 3D perception. Particular attention is given to reviewing areas likely to be thought-provoking to those involved in 3D display development, spatial visualization, HCI, and other related areas of interdisciplinary research.

  9. Automatic Detection, Segmentation and Classification of Retinal Horizontal Neurons in Large-scale 3D Confocal Imagery

    SciTech Connect

    Karakaya, Mahmut; Kerekes, Ryan A; Gleason, Shaun Scott; Martins, Rodrigo; Dyer, Michael

    2011-01-01

    Automatic analysis of neuronal structure from wide-field-of-view 3D image stacks of retinal neurons is essential for statistically characterizing neuronal abnormalities that may be causally related to neural malfunctions or may be early indicators for a variety of neuropathies. In this paper, we study classification of neuron fields in large-scale 3D confocal image stacks, a challenging neurobiological problem because of the low spatial resolution imagery and presence of intertwined dendrites from different neurons. We present a fully automated, four-step processing approach for neuron classification with respect to the morphological structure of their dendrites. In our approach, we first localize each individual soma in the image by using morphological operators and active contours. By using each soma position as a seed point, we automatically determine an appropriate threshold to segment dendrites of each neuron. We then use skeletonization and network analysis to generate the morphological structures of segmented dendrites, and shape-based features are extracted from network representations of each neuron to characterize the neuron. Based on qualitative results and quantitative comparisons, we show that we are able to automatically compute relevant features that clearly distinguish between normal and abnormal cases for postnatal day 6 (P6) horizontal neurons.
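    As a rough sketch of the first two steps above (soma localization and dendrite thresholding), the following uses a single global Otsu threshold and connected-component centroids. The paper's actual pipeline uses morphological operators, active contours, and a per-neuron threshold seeded at each soma, so treat this as a simplified stand-in:

```python
import numpy as np
from scipy import ndimage

def otsu_threshold(img, nbins=256):
    """Otsu's method: choose the histogram threshold that maximizes the
    between-class variance of foreground vs. background."""
    hist, edges = np.histogram(img.ravel(), bins=nbins)
    centers = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(hist)                    # background weight
    w1 = w0[-1] - w0                        # foreground weight
    m0 = np.cumsum(hist * centers)
    mu0 = m0 / np.maximum(w0, 1)
    mu1 = (m0[-1] - m0) / np.maximum(w1, 1)
    var_between = w0 * w1 * (mu0 - mu1) ** 2
    return centers[np.argmax(var_between)]

def locate_somata(volume):
    """Threshold a confocal stack and return intensity-weighted centroids
    of the bright connected components (candidate soma positions)."""
    mask = volume > otsu_threshold(volume)
    labels, n = ndimage.label(mask)
    return ndimage.center_of_mass(volume, labels, range(1, n + 1))
```

    The returned centroids would serve as the seed points from which per-neuron segmentation, skeletonization, and shape-feature extraction proceed in the full pipeline.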

  10. Integration of retinal image sequences

    NASA Astrophysics Data System (ADS)

    Ballerini, Lucia

    1998-10-01

    In this paper a method for noise reduction in ocular fundus image sequences is described. The eye is the only part of the human body where the capillary network, along with the arterial and venous circulation, can be observed using a non-invasive technique. The study of the retinal vessels is very important both for the study of local pathology (retinal disease) and for the large amount of information it offers on systemic haemodynamics, as in hypertension, arteriosclerosis, and diabetes. The procedure can be divided into two steps: registration and fusion. First, we describe an automatic alignment algorithm for the registration of ocular fundus images. In order to enhance vessel structures, we used a spatially oriented bank of filters designed to match the properties of the objects of interest. To evaluate interframe misalignment we adopted a fast cross-correlation algorithm. The performance of the alignment method has been estimated by simulating shifts between image pairs and by using a cross-validation approach. We then propose a temporal integration technique for image sequences so as to compute enhanced pictures of the overall capillary network. Image registration is combined with image enhancement by fusing subsequent frames of the same region. To evaluate the attainable results, the signal-to-noise ratio was estimated before and after integration. Experimental results are reported both on synthetic images of vessel-like structures with different kinds of additive Gaussian noise and on real fundus images.
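    The registration-plus-fusion idea can be sketched with FFT-based cross-correlation for integer-pixel shift estimation, followed by averaging of the aligned frames (which reduces additive noise roughly as the square root of the number of frames). This is a minimal stand-in for the paper's oriented-filter enhancement and fast cross-correlation, and it assumes purely translational misalignment with periodic boundaries:

```python
import numpy as np

def estimate_shift(ref, frame):
    """Integer-pixel shift between two frames via the peak of the
    FFT-based circular cross-correlation."""
    corr = np.fft.ifft2(np.fft.fft2(ref) * np.conj(np.fft.fft2(frame))).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # map peak indices into the signed range [-N/2, N/2)
    return [p if p < s // 2 else p - s for p, s in zip(peak, corr.shape)]

def integrate(frames):
    """Align every frame to the first and average them, attenuating
    zero-mean additive noise while preserving the static vessel pattern."""
    ref = frames[0]
    aligned = [np.roll(f, estimate_shift(ref, f), axis=(0, 1)) for f in frames]
    return np.mean(aligned, axis=0)
```

    Sub-pixel refinement and the vessel-matched filter bank would be layered on top of this in a faithful reimplementation.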

  11. 3D Cell Culture Imaging with Digital Holographic Microscopy

    NASA Astrophysics Data System (ADS)

    Dimiduk, Thomas; Nyberg, Kendra; Almeda, Dariela; Koshelva, Ekaterina; McGorty, Ryan; Kaz, David; Gardel, Emily; Auguste, Debra; Manoharan, Vinothan

    2011-03-01

    Cells in higher organisms naturally exist in a three-dimensional (3D) structure, a fact sometimes ignored by in vitro biological research. Confinement to a two-dimensional culture imposes significant deviations from the native 3D state. One of the biggest obstacles to wider use of 3D cultures is the difficulty of 3D imaging. The confocal microscope, the dominant 3D imaging instrument, is expensive, bulky, and light-intensive; live cells can be observed for only a short time before they suffer photodamage. We present an alternative 3D imaging technique, digital holographic microscopy, which can capture 3D information with axial resolution better than 2 μm in a 100 μm deep volume. Capturing a 3D image requires only a single camera exposure with a sub-millisecond laser pulse, allowing us to image cell cultures using five orders of magnitude less light energy than with confocal microscopy. This can be done with hardware costing ~ $1000. We use the instrument to image the growth of MCF7 breast cancer cells and P. pastoris yeast. We acknowledge support from the NSF GRFP.

  12. Imaging hypoxia using 3D photoacoustic spectroscopy

    NASA Astrophysics Data System (ADS)

    Stantz, Keith M.

    2010-02-01

    Purpose: The objective is to develop a multivariate in vivo hemodynamic model of tissue oxygenation (MiHMO2) based on 3D photoacoustic spectroscopy. Introduction: Low oxygen levels (hypoxia) deprive cancer cells of oxygen and confer resistance to irradiation, some chemotherapeutic drugs, and oxygen-dependent therapies (phototherapy), leading to treatment failure and poor disease-free and overall survival. For example, clinical studies show that patients with breast carcinomas, cervical cancer, and head and neck carcinomas (HNC) are more likely to suffer local recurrence and metastasis if their tumors are hypoxic. A novel method to noninvasively measure tumor hypoxia, identify its type, and monitor its heterogeneity is devised by measuring tumor hemodynamics, MiHMO2. Material and Methods: Simulations are performed to compare tumor pO2 levels and hypoxia based on physiology - perfusion, fractional plasma volume, fractional cellular volume - and hemoglobin status - oxygen saturation and hemoglobin concentration - based on in vivo measurements of breast, prostate, and ovarian tumors. Simulations of MiHMO2 are performed to assess the influence of scanner resolution and different mathematical models of oxygen delivery. Results: The sensitivity of pO2 and hypoxic fraction to photoacoustic scanner resolution, and their dependence on model complexity, will be presented using hemodynamic parameters for different tumors. Conclusions: Photoacoustic CT spectroscopy provides a unique ability to monitor hemodynamic and cellular physiology in tissue, which can be used to longitudinally monitor tumor oxygenation and its response to anti-angiogenic therapies.
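    The hemoglobin-status parameters in such a model rest on multiwavelength spectral unmixing: the photoacoustic amplitude at each wavelength is modeled as a linear combination of deoxy- and oxyhemoglobin absorption, and oxygen saturation follows from the recovered concentrations. A minimal sketch, with an illustrative (hypothetical, not tabulated) extinction matrix:

```python
import numpy as np

def oxygen_saturation(p, eps):
    """Recover [Hb] and [HbO2] from photoacoustic amplitudes p at n >= 2
    wavelengths by linear least squares, and return sO2.
    p   : (n,) amplitudes, one per wavelength
    eps : (n, 2) extinction matrix, columns ordered [Hb, HbO2]"""
    c, *_ = np.linalg.lstsq(eps, p, rcond=None)
    hb, hbo2 = c
    return hbo2 / (hb + hbo2)

# Illustrative (hypothetical) extinction values at two near-infrared
# wavelengths; real work would use tabulated molar extinction spectra.
eps_demo = np.array([[7.0, 2.8],
                     [1.8, 4.0]])
```

    With more than two wavelengths the same least-squares fit becomes overdetermined, which improves robustness to noise in the amplitude estimates.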

  13. 3D laser scanner system using high dynamic range imaging

    NASA Astrophysics Data System (ADS)

    Zhongdong, Yang; Peng, Wang; Xiaohui, Li; Changku, Sun

    2014-03-01

    Because of its high measuring speed, moderate accuracy, low cost, and robustness in the industrial field, 3D laser scanning has been widely used in a variety of applications. However, measuring the 3D profile of a surface with high dynamic range (HDR) brightness, such as a partially highlighted or partially specular object, remains one of the most challenging problems, and this difficulty has limited the adoption of such scanner systems. In this paper, an optical imaging system based on a high-resolution liquid crystal on silicon (LCoS) device and an image sensor (CCD or CMOS) was built to adjust the image's brightness pixel by pixel as required. The radiance value of the image captured by the image sensor is constrained to lie within the dynamic range of the sensor after an adaptive algorithm of pixel mapping between the LCoS mask plane and the image plane through the HDR imaging system is added. Thus, an HDR image is reconstructed from the LCoS mask and the CCD image in this system. The significant difference between the proposed system and a traditional 3D laser scanner system is that the HDR image is used to calibrate and calculate the 3D profile coordinates. Experimental results show that HDR imaging can enhance the environmental adaptability of a 3D laser scanner system and improve the accuracy of 3D profile measurement.
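    The paper's system forms its HDR image through per-pixel LCoS masking; the radiance-recovery idea it relies on can be conveyed with conventional multi-exposure fusion, here assuming a linear sensor response normalized to [0, 1]:

```python
import numpy as np

def fuse_hdr(images, exposures):
    """Fuse differently exposed images of a static scene into a radiance
    map, assuming a linear sensor with values normalized to [0, 1].
    A hat weighting down-weights pixels near saturation or the floor."""
    num = np.zeros_like(images[0], dtype=float)
    den = np.zeros_like(num)
    for img, t in zip(images, exposures):
        w = 1.0 - np.abs(2.0 * img - 1.0)   # peak weight at mid-range
        num += w * img / t                   # radiance estimate z / t
        den += w
    return num / np.maximum(den, 1e-12)
```

    A pixel that saturates in the long exposure gets zero weight there and is recovered from the short exposure, which is exactly the capability the LCoS mask provides per pixel in hardware.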

  14. Diffractive optical element for creating visual 3D images.

    PubMed

    Goncharsky, Alexander; Goncharsky, Anton; Durlevich, Svyatoslav

    2016-05-01

    A method is proposed to compute and synthesize the microrelief of a diffractive optical element to produce a new visual security feature - the vertical 3D/3D switch effect. The security feature consists of the alternation of two 3D color images when the diffractive element is tilted up/down. Optical security elements that produce the new security feature are synthesized using electron-beam technology. Sample optical security elements are manufactured that produce the 3D-to-3D visual switch effect when illuminated by white light. Photos and video recordings of the vertical 3D/3D switch effect of real optical elements are presented. The optical elements developed can be replicated using the standard equipment employed for manufacturing security holograms. The new optical security feature is easy to control visually, safely protected against counterfeiting, and designed to protect banknotes, documents, ID cards, etc. PMID:27137530

  15. 3D scene reconstruction from multi-aperture images

    NASA Astrophysics Data System (ADS)

    Mao, Miao; Qin, Kaihuai

    2014-04-01

    With the development of virtual reality, there is a growing demand for 3D modeling of real scenes. This paper proposes a novel 3D scene reconstruction framework based on multi-aperture images. Our framework consists of four parts. Firstly, images with different apertures are captured via a programmable aperture. Secondly, we use the SIFT method for feature point matching. Then we exploit binocular stereo vision to calculate the camera parameters and the 3D positions of matching points, forming a sparse 3D scene model. Finally, we apply patch-based multi-view stereo to obtain a dense 3D scene model. Experimental results show that our method is practical and effective for reconstructing dense 3D scenes.
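    The step that calculates 3D positions of matching points in a binocular pipeline is classically done by linear (DLT) triangulation; a minimal numpy sketch, with synthetic projection matrices as an illustrative assumption rather than the paper's calibration:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation: recover the 3D point observed at
    normalized pixel x1 in camera P1 and x2 in camera P2, where P1 and
    P2 are 3x4 projection matrices."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)   # null vector of A = homogeneous 3D point
    X = vt[-1]
    return X[:3] / X[3]
```

    Repeating this over every SIFT correspondence yields the sparse point cloud that seeds the dense patch-based multi-view stereo stage.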

  16. Dedicated 3D photoacoustic breast imaging

    PubMed Central

    Kruger, Robert A.; Kuzmiak, Cherie M.; Lam, Richard B.; Reinecke, Daniel R.; Del Rio, Stephen P.; Steed, Doreen

    2013-01-01

    Purpose: To report the design and imaging methodology of a photoacoustic scanner dedicated to imaging hemoglobin distribution throughout a human breast. Methods: The authors developed a dedicated breast photoacoustic mammography (PAM) system using a spherical detector aperture based on their previous photoacoustic tomography scanner. The system uses 512 detectors with rectilinear scanning. The scan shape is a spiral pattern whose radius varies from 24 to 96 mm, thereby allowing a field of view that accommodates a wide range of breast sizes. The authors measured the contrast-to-noise ratio (CNR) using a target composed of 1-mm dots printed on clear plastic. Each dot's absorption coefficient was approximately the same as that of a 1-mm thickness of whole blood at 756 nm, the output wavelength of the Alexandrite laser used by this imaging system. The target was immersed at varying depths in an 8% solution of stock Liposyn II-20%, which mimics the attenuation of breast tissue (1.1 cm−1). The spatial resolution was measured using a 6 μm-diameter carbon fiber embedded in agar. The breasts of four healthy female volunteers, spanning a range of breast sizes from a brassiere C cup to a DD cup, were imaged using a 96-mm spiral protocol. Results: The CNR target was clearly visualized to a depth of 53 mm. Spatial resolution, estimated from the full width at half-maximum of a profile across the PAM image of a carbon fiber, was 0.42 mm. In the four human volunteers, the vasculature was well visualized throughout the breast tissue, including to the chest wall. Conclusions: The CNR, lateral field of view and penetration depth of the dedicated PAM scanning system are sufficient to image breasts as large as 1335 mL, which should accommodate up to 90% of the women in the United States. PMID:24320471
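    The abstract does not state the exact CNR definition used; a common convention, the difference of mean target and mean background intensity normalized by the background standard deviation, can be sketched as:

```python
import numpy as np

def cnr(image, target_mask, background_mask):
    """Contrast-to-noise ratio: (mean target - mean background)
    divided by the background standard deviation."""
    t = image[target_mask].mean()
    b = image[background_mask]
    return (t - b.mean()) / b.std()
```

    For the dot phantom described above, the target mask would cover the reconstructed dots and the background mask a dot-free region of the Liposyn bath.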

  17. Generation of 3D retina-like structures from a human retinal cell line in a NASA bioreactor.

    PubMed

    Dutt, Kamla; Harris-Hooker, Sandra; Ellerson, Debra; Layne, Dione; Kumar, Ravindra; Hunt, Richard

    2003-01-01

    Replacement of damaged cells is a promising approach for the treatment of age-related macular degeneration (AMD) and retinitis pigmentosa (RP); however, the availability of donor tissue for transplantation remains a major obstacle. Key factors for successful engineering of a tissue include the identification of a neural cell line that is homogeneous yet can be expanded to give rise to multiple cell types, is nontumorigenic yet capable of secreting neurotrophic factors, and is able to form three-dimensional (3D), differentiated structures. The goal of this study was to test the feasibility of tissue engineering from a multipotential human retinal cell line using a NASA-developed bioreactor. A multipotential human retinal precursor cell line was used to generate 3D structures. In addition, retinal pigment epithelium (RPE) cells were cocultured with neural cells to determine whether 3D retinal structures could be generated in the bioreactor with cells grown on laminin-coated Cytodex 3 beads. Cell growth, morphology, and differentiation were monitored by light and scanning electron microscopy, Western blot analysis, and analysis of glucose use and lactate production. The neuronal retinal precursor cell line cultured in the bioreactor gave rise to most retinal cell types seen in monolayer culture. The cells formed composite structures with cell-covered beads associated with one another in a tissue-like array, and the beginning of layering and/or separation of cell types was observed. The neuronal cell types previously seen in monolayer cultures were also seen in the bioreactor. Some of the retinal cells differentiated into photoreceptors with well-developed outer segment-like structures, a process that is critical for retinal function. Moreover, the neuronal cells that were generated resembled their in vivo phenotype more closely than those grown under other conditions. Outer segments were almost never seen in the monolayer cultures, even in the presence of photoreceptor-inducing growth factors such as basic fibroblast growth factor (bFGF) and transforming growth factor alpha (TGF-alpha). Muller cells were occasionally seen when RPE cells were cocultured with retinal cells in the bioreactor; these had never been seen in this retinal cell line before. Cells grown in the bioreactor expressed several proteins specific to particular retinal cell types: opsin, protein kinase C-alpha, dopamine receptor D4, tyrosine hydroxylase, and calbindin. PMID:14653619

  18. 3-D capacitance density imaging system

    DOEpatents

    Fasching, G.E.

    1988-03-18

    Three-dimensional capacitance density imaging of a gasified bed or the like in a containment vessel is achieved using a plurality of electrodes provided circumferentially about the bed in levels and along the bed in channels. The electrodes are individually and selectively excited electrically at each level to produce a plurality of current flux field patterns generated in the bed at each level. The current flux field patterns are suitably sensed and a density pattern of the bed at each level determined. By combining the determined density patterns at each level, a three-dimensional density image of the bed is achieved. 7 figs.

  19. 3-D seismic imaging of complex geologies

    SciTech Connect

    Womble, D.E.; Dosanjh, S.S.; VanDyke, J.P.; Oldfield, R.A.; Greenberg, D.S.

    1995-02-01

    We present three codes for the Intel Paragon that address the problem of three-dimensional seismic imaging of complex geologies. The first code models acoustic wave propagation and can be used to generate data sets to calibrate and validate seismic imaging codes. This code reported the fastest timings for acoustic wave propagation codes at a recent SEG (Society of Exploration Geophysicists) meeting. The second code implements a Kirchhoff method for pre-stack depth migration. Development of this code is almost complete, and preliminary results are presented. The third code implements a wave equation approach to seismic migration and is a Paragon implementation of a code from the ARCO Seismic Benchmark Suite.

  20. Image performance evaluation of a 3D surgical imaging platform

    NASA Astrophysics Data System (ADS)

    Petrov, Ivailo E.; Nikolov, Hristo N.; Holdsworth, David W.; Drangova, Maria

    2011-03-01

    The O-arm (Medtronic Inc.) is a multi-dimensional surgical imaging platform. The purpose of this study was to perform a quantitative evaluation of the imaging performance of the O-arm in an effort to understand its potential for future nonorthopedic applications. Performance of the reconstructed 3D images was evaluated, using a custom-built phantom, in terms of resolution, linearity, uniformity and geometrical accuracy. Both the standard (SD, 13 s) and high definition (HD, 26 s) modes were evaluated, with the imaging parameters set to image the head (120 kVp, 100 mAs and 150 mAs, respectively). For quantitative noise characterization, the images were converted to Hounsfield units (HU) off-line. Measurement of the modulation transfer function revealed a limiting resolution (at the 10% level) of 1.0 mm−1 in the axial dimension. Image noise varied between 15 and 19 HU for the HD and SD modes, respectively. Image intensities varied linearly over the measured range, up to 1300 HU. Geometric accuracy was maintained in all three dimensions over the field of view. The present study has evaluated the performance characteristics of the O-arm and demonstrates feasibility for use in interventional applications and quantitative imaging tasks outside those currently targeted by the manufacturer. Improvements to the reconstruction algorithms may further enhance performance for lower-contrast applications.
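    The limiting resolution quoted above comes from a modulation transfer function measurement; one common route is to take the Fourier magnitude of a measured line-spread function and read off the frequency where it drops to 10%. A hedged sketch with an illustrative Gaussian LSF (the study's actual phantom-based procedure is not detailed in the abstract):

```python
import numpy as np

def limiting_resolution(lsf, dx, level=0.1):
    """MTF = normalized |FFT| of the line-spread function; return the
    first spatial frequency (cycles per unit of dx) below `level`."""
    mtf = np.abs(np.fft.rfft(lsf))
    mtf /= mtf[0]
    freqs = np.fft.rfftfreq(len(lsf), d=dx)
    below = np.nonzero(mtf < level)[0]
    return freqs[below[0]] if below.size else freqs[-1]
```

    For a Gaussian LSF the MTF is also Gaussian, so the 10% point can be checked analytically against the discrete estimate.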

  1. 3D laser imaging for concealed object identification

    NASA Astrophysics Data System (ADS)

    Berechet, Ion; Berginc, Gérard; Berechet, Stefan

    2014-09-01

    This paper deals with new optical non-conventional 3D laser imaging. Optical non-conventional imaging exploits the advantages of laser imaging to form a three-dimensional image of the scene. 3D laser imaging can be used for three-dimensional medical imaging, topography, surveillance, and robotic vision because of its ability to detect and recognize objects. In this paper, we present 3D laser imaging for concealed object identification. The objective of this new 3D laser imaging is to provide the user with a complete 3D reconstruction of the concealed object from available 2D data that are limited in number and representativeness. The 2D laser data used in this paper come from simulations based on the calculation of the laser interactions with the different interfaces of the scene of interest, and from experimental results. We show global 3D reconstruction procedures capable of separating objects from foliage and reconstructing a three-dimensional image of the considered object. We present examples of reconstruction and completion of three-dimensional images and analyse the different parameters of the identification process, such as resolution, camouflage scenario, noise impact and degree of lacunarity.

  2. Improved 3D live-wire method with application to 3D CT chest image analysis

    NASA Astrophysics Data System (ADS)

    Lu, Kongkuo; Higgins, William E.

    2006-03-01

    The definition of regions of interest (ROIs), such as suspect cancer nodules or lymph nodes in 3D CT chest images, is often difficult because of the complexity of the phenomena that give rise to them. Manual slice tracing has been used widely for years for such problems, because it is easy to implement and guaranteed to work. But the manual method is extremely time-consuming, especially for high-resolution 3D images which may have hundreds of slices, and it is subject to operator bias. Numerous automated image-segmentation methods have been proposed, but they are generally strongly application dependent, and even the "most robust" methods have difficulty in defining complex anatomical ROIs. To address this problem, the semi-automatic interactive paradigm referred to as "live wire" segmentation has been proposed. In live-wire segmentation, the human operator interactively defines an ROI's boundary guided by an active automated method which suggests what to define. This process is in general far faster, more reproducible, and more accurate than manual tracing, while at the same time permitting the definition of complex ROIs having ill-defined boundaries. We propose a 2D live-wire method employing an improved cost function over previous works. In addition, we define a new 3D live-wire formulation that enables rapid definition of 3D ROIs; the method only requires the human operator to consider a few slices in general. Experimental results indicate that the new 2D and 3D live-wire approaches are efficient, allow for high reproducibility, and are reliable for 2D and 3D object segmentation.
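    At its core, live-wire turns boundary definition into a shortest-path problem: each pixel is assigned a local cost (low on strong edges), and the boundary segment between two user-selected points is the minimum-cost path, typically found with Dijkstra's algorithm. A minimal 2D sketch, with a plain per-pixel cost map standing in for the improved cost function the authors propose:

```python
import heapq

def live_wire_path(cost, start, end):
    """Minimum-cost 8-connected path through a per-pixel cost map
    (low cost on strong edges), found with Dijkstra's algorithm."""
    rows, cols = len(cost), len(cost[0])
    dist = {start: 0.0}
    prev = {}
    heap = [(0.0, start)]
    while heap:
        d, (r, c) = heapq.heappop(heap)
        if (r, c) == end:
            break
        if d > dist.get((r, c), float("inf")):
            continue                      # stale queue entry
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                nr, nc = r + dr, c + dc
                if (dr or dc) and 0 <= nr < rows and 0 <= nc < cols:
                    nd = d + cost[nr][nc]
                    if nd < dist.get((nr, nc), float("inf")):
                        dist[(nr, nc)] = nd
                        prev[(nr, nc)] = (r, c)
                        heapq.heappush(heap, (nd, (nr, nc)))
    path, node = [end], end
    while node != start:                  # walk predecessors back
        node = prev[node]
        path.append(node)
    return path[::-1]
```

    In an interactive tool, `end` tracks the cursor, so the optimal boundary snaps to edges in real time as the operator moves the mouse.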

  3. A 3D Level Set Method for Microwave Breast Imaging

    PubMed Central

    Colgan, Timothy J.; Hagness, Susan C.; Van Veen, Barry D.

    2015-01-01

    Objective Conventional inverse-scattering algorithms for microwave breast imaging result in moderate resolution images with blurred boundaries between tissues. Recent 2D numerical microwave imaging studies demonstrate that the use of a level set method preserves dielectric boundaries, resulting in a more accurate, higher resolution reconstruction of the dielectric properties distribution. Previously proposed level set algorithms are computationally expensive and thus impractical in 3D. In this paper we present a computationally tractable 3D microwave imaging algorithm based on level sets. Methods We reduce the computational cost of the level set method using a Jacobian matrix, rather than an adjoint method, to calculate Frechet derivatives. We demonstrate the feasibility of 3D imaging using simulated array measurements from 3D numerical breast phantoms. We evaluate performance by comparing full 3D reconstructions to those from a conventional microwave imaging technique. We also quantitatively assess the efficacy of our algorithm in evaluating breast density. Results Our reconstructions of 3D numerical breast phantoms improve upon those of a conventional microwave imaging technique. The density estimates from our level set algorithm are more accurate than those of conventional microwave imaging, and the accuracy is greater than that reported for mammographic density estimation. Conclusion Our level set method leads to a feasible level of computational complexity for full 3D imaging, and reconstructs the heterogeneous dielectric properties distribution of the breast more accurately than conventional microwave imaging methods. Significance 3D microwave breast imaging using a level set method is a promising low-cost, non-ionizing alternative to current breast imaging techniques. PMID:26011863

  4. Spatial compounding of large sets of 3D echocardiography images

    NASA Astrophysics Data System (ADS)

    Yao, Cheng; Simpson, John M.; Jansen, Christian H. P.; King, Andrew P.; Penney, Graeme P.

    2009-02-01

    We present novel methodologies for compounding large numbers of 3D echocardiography volumes. Our aim is to investigate the effect of using an increased number of images, and to compare the performance of different compounding methods on image quality. Three sets of 3D echocardiography images were acquired from three volunteers. Each set of data (containing 10+ images) was registered using external tracking followed by state-of-the-art image registration. Four compounding methods were investigated: mean, maximum, and two methods derived from phase-based compounding. The compounded images were compared by calculating signal-to-noise ratios and contrast at manually identified anatomical positions within the images, and by visual inspection by experienced echocardiographers. Our results indicate that the signal-to-noise ratio and contrast can be improved using an increased number of images, and that a coherent compounded image can be produced using large (10+) numbers of 3D volumes.
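    Once the volumes are registered to a common grid, compounding itself is a voxel-wise combination. Mean and maximum, two of the four methods compared, can be sketched as:

```python
import numpy as np

def compound(volumes, method="mean"):
    """Voxel-wise combination of co-registered 3D echo volumes."""
    stack = np.stack(volumes)
    return stack.mean(axis=0) if method == "mean" else stack.max(axis=0)
```

    Mean compounding suppresses uncorrelated speckle (driving the SNR gains reported above), while maximum compounding favors whichever acquisition imaged each structure best.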

  5. 3D/3D registration of coronary CTA and biplane XA reconstructions for improved image guidance

    SciTech Connect

    Dibildox, Gerardo; Baka, Nora; van Walsum, Theo; Punt, Mark; Aben, Jean-Paul; Schultz, Carl; Niessen, Wiro

    2014-09-15

    Purpose: The authors aim to improve image guidance during percutaneous coronary interventions of chronic total occlusions (CTO) by providing information obtained from computed tomography angiography (CTA) to the cardiac interventionist. To this end, the authors investigate a method to register a 3D CTA model to biplane reconstructions. Methods: The authors developed a method for registering preoperative coronary CTA with intraoperative biplane x-ray angiography (XA) images via 3D models of the coronary arteries. The models are extracted from the CTA and biplane XA images, and are temporally aligned based on CTA reconstruction phase and XA ECG signals. Rigid spatial alignment is achieved with a robust probabilistic point set registration approach using Gaussian mixture models (GMMs). This approach is extended by including orientation in the Gaussian mixtures and by weighting bifurcation points. The method is evaluated on retrospectively acquired coronary CTA datasets of 23 CTO patients for which biplane XA images are available. Results: The Gaussian mixture model approach achieved a median registration accuracy of 1.7 mm. The extended GMM approach including orientation was not significantly different (P > 0.1) but did improve robustness with regards to the initialization of the 3D models. Conclusions: The authors demonstrated that the GMM approach can effectively be applied to register CTA to biplane XA images for the purpose of improving image guidance in percutaneous coronary interventions.

  6. 3-D Imaging Based, Radiobiological Dosimetry

    PubMed Central

    Sgouros, George; Frey, Eric; Wahl, Richard; He, Bin; Prideaux, Andrew; Hobbs, Robert

    2008-01-01

    Targeted radionuclide therapy holds promise as a new treatment against cancer. Advances in imaging are making it possible to evaluate the spatial distribution of radioactivity in tumors and normal organs over time. Matched anatomical imaging such as combined SPECT/CT and PET/CT has also made it possible to obtain tissue density information in conjunction with the radioactivity distribution. Coupled with sophisticated iterative reconstruction algorithms, these advances have made it possible to perform highly patient-specific dosimetry that also incorporates radiobiological modeling. Such sophisticated dosimetry techniques are still in the research investigation phase. Given the attendant logistical and financial costs, a demonstrated improvement in patient care will be a prerequisite for the adoption of such highly patient-specific internal dosimetry methods. PMID:18662554

  7. Acoustic 3D imaging of dental structures

    SciTech Connect

    Lewis, D.K.; Hume, W.R.; Douglass, G.D.

    1997-02-01

    Our goal for the first year of this three-dimensional elastodynamic imaging project was to determine how to combine flexible, individually addressable acoustic arrays; preprocessing of array source signals; spectral extrapolation of received signals; acoustic tomography codes; and acoustic propagation modeling codes. We investigated flexible, individually addressable acoustic array materials to find the best match in power, sensitivity and cost, and settled on PVDF sheet arrays and 3-1 composite material.

  8. MR image denoising method for brain surface 3D modeling

    NASA Astrophysics Data System (ADS)

    Zhao, De-xin; Liu, Peng-jie; Zhang, De-gan

    2014-11-01

    Three-dimensional (3D) modeling of medical images is a critical part of surgical simulation. In this paper, we focus on denoising magnetic resonance (MR) images for brain model reconstruction, and present a practical solution. We attempt to remove the noise existing in the MR imaging signal while preserving the image characteristics. A wavelet-based adaptive curve shrinkage function is presented in a spherical coordinate system. Comparative experiments show that the denoising method preserves image details better and enhances the coefficients of contours. Using these denoised images, the brain is visualized in 3D through a surface triangle mesh model, which demonstrates the effectiveness of the proposed method.
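    The authors' shrinkage rule is an adaptive curve shrinkage in spherical coordinates; the generic wavelet-shrinkage skeleton it builds on can be sketched with a one-level 2D Haar transform and soft thresholding standing in for their transform and rule:

```python
import numpy as np

def haar2d(x):
    """One-level 2D Haar transform; returns (LL, LH, HL, HH) subbands."""
    a = (x[0::2] + x[1::2]) / 2.0         # row averages
    d = (x[0::2] - x[1::2]) / 2.0         # row details
    return ((a[:, 0::2] + a[:, 1::2]) / 2.0,
            (a[:, 0::2] - a[:, 1::2]) / 2.0,
            (d[:, 0::2] + d[:, 1::2]) / 2.0,
            (d[:, 0::2] - d[:, 1::2]) / 2.0)

def ihaar2d(ll, lh, hl, hh):
    """Inverse of haar2d."""
    a = np.empty((ll.shape[0], ll.shape[1] * 2))
    d = np.empty_like(a)
    a[:, 0::2], a[:, 1::2] = ll + lh, ll - lh
    d[:, 0::2], d[:, 1::2] = hl + hh, hl - hh
    x = np.empty((a.shape[0] * 2, a.shape[1]))
    x[0::2], x[1::2] = a + d, a - d
    return x

def soft(x, t):
    """Soft-thresholding shrinkage."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def denoise(img, t):
    """Shrink the detail subbands only; the smooth LL band is kept."""
    ll, lh, hl, hh = haar2d(img)
    return ihaar2d(ll, soft(lh, t), soft(hl, t), soft(hh, t))
```

    Noise concentrates in the small detail coefficients while edges produce large ones, so soft thresholding suppresses noise while largely preserving image structure, which is the trade-off the paper's adaptive rule refines.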

  9. Morphometrics, 3D Imaging, and Craniofacial Development.

    PubMed

    Hallgrimsson, Benedikt; Percival, Christopher J; Green, Rebecca; Young, Nathan M; Mio, Washington; Marcucio, Ralph

    2015-01-01

    Recent studies have shown how volumetric imaging and morphometrics can add significantly to our understanding of morphogenesis, the developmental basis for variation, and the etiology of structural birth defects. On the other hand, the complex questions and diverse imaging data in developmental biology present morphometrics with more complex challenges than applications in virtually any other field. Meeting these challenges is necessary in order to understand the mechanistic basis for variation in complex morphologies. This chapter reviews the methods and theory that enable the application of modern landmark-based morphometrics to developmental biology and craniofacial development, in particular. We discuss the theoretical foundations of morphometrics as applied to development and review the basic approaches to the quantification of morphology. Focusing on geometric morphometrics, we discuss the principal statistical methods for quantifying and comparing morphological variation and covariation structure within and among groups. Finally, we discuss the future directions for morphometrics in developmental biology that will be required for approaches that enable quantitative integration across the genotype-phenotype map. PMID:26589938

  10. 3D quantitative phase imaging of neural networks using WDT

    NASA Astrophysics Data System (ADS)

    Kim, Taewoo; Liu, S. C.; Iyer, Raj; Gillette, Martha U.; Popescu, Gabriel

    2015-03-01

    White-light diffraction tomography (WDT) is a recently developed 3D imaging technique based on a quantitative phase imaging system called spatial light interference microscopy (SLIM). The technique has achieved sub-micron resolution in all three directions with high sensitivity granted by the low coherence of a white-light source. Demonstrations of the technique on single-cell imaging have been presented previously; however, imaging of any larger sample, such as a cluster of cells, has not been demonstrated with the technique. Neurons in an animal body form a highly complex and spatially organized 3D structure, which can be characterized by neuronal networks or circuits. Currently, the most common method of studying the 3D structure of neuron networks is confocal fluorescence microscopy, which requires fluorescence tagging either with transient membrane dyes or after fixation of the cells. Therefore, studies on neurons are often limited to samples that are chemically treated and/or dead. WDT presents a solution for imaging live neuron networks with high spatial and temporal resolution, because it is a 3D imaging method that is label-free and non-invasive. Using this method, a mouse or rat hippocampal neuron culture and a mouse dorsal root ganglion (DRG) neuron culture have been imaged in order to see the extension of processes between the cells in 3D. Furthermore, the tomogram is compared with a confocal fluorescence image in order to investigate the 3D structure at synapses.

  11. 3D photonic gratings for optical sensing and image processing

    NASA Astrophysics Data System (ADS)

    Mueller, Christian; Markoetter, Henning; Schloesser, Alexander; Orlic, Susanna

    2008-08-01

    A new approach for 4D spatial-spectral optical sensing is based on light diffraction at holographically recorded 3D refractive index gratings. At least four collimated laser beams overlapped in a photosensitive polymer layer are used to generate a periodic 3D structure. Diffraction properties are widely adjustable by varying the recording beam geometry. Different 3D photonic gratings are characterized with respect to their basic diffraction properties and the results are used in optimizing the exposure process and parameters. An optical imaging setup has been constructed to display the 4D optical filtering properties of recorded 3D gratings when illuminated by elementary intensity patterns of a white light source.

  12. Accommodation response measurements for integral 3D image

    NASA Astrophysics Data System (ADS)

    Hiura, H.; Mishina, T.; Arai, J.; Iwadate, Y.

    2014-03-01

    We measured accommodation responses under integral photography (IP), binocular stereoscopic, and real-object display conditions, each under both binocular and monocular viewing. The equipment we used was an optometric device and a 3D display. We developed the 3D display for IP and binocular stereoscopic images; it comprises a high-resolution liquid crystal display (LCD) and a high-density lens array. The LCD has a resolution of 468 dpi and a diagonal size of 4.8 inches. The high-density lens array comprises 106 x 69 micro lenses that have a focal length of 3 mm and a diameter of 1 mm. The lenses are arranged in a honeycomb pattern. The 3D display was positioned 60 cm from the observer under the IP and binocular stereoscopic display conditions. The target was presented at eight depth positions relative to the 3D display: 15, 10, and 5 cm in front of the 3D display; on the 3D display panel; and 5, 10, 15 and 30 cm behind the 3D display under the IP and binocular stereoscopic display conditions. Under the real-object display condition, the target was displayed on the 3D display panel, and the 3D display was placed at the eight positions. The results suggest that the IP image induced more natural accommodation responses than the binocular stereoscopic image. The accommodation responses to the IP image were weaker than those to a real object; however, they showed a similar tendency to those of the real object under the two viewing conditions. Therefore, IP can induce accommodation to the depth positions of 3D images.

  13. Image processing techniques in 3-D foot shape measurement system

    NASA Astrophysics Data System (ADS)

    Liu, Guozhong; Li, Ping; Wang, Boxiong; Shi, Hui; Luo, Xiuzhi

    2008-10-01

    A 3-D foot-shape measurement system based on the laser-line-scanning principle was designed, achieving 3-D foot-shape measurements without blind areas and automatic extraction of foot parameters. The paper focuses on the system structure and principle and on the image processing techniques. The key image processing techniques in the 3-D foot-shape measurement system include laser stripe extraction; transformation of laser stripe coordinates from the CCD camera image coordinate system to the laser plane coordinate system; assembly of the laser stripes from the eight CCD cameras; and elimination of image noise and disturbance. 3-D foot-shape measurement makes custom shoe-making possible and shows great promise in shoe design, foot orthopaedic treatment, shoe size standardization, and the establishment of a foot database for consumers.

  14. Compression of M-FISH images using 3D SPIHT

    NASA Astrophysics Data System (ADS)

    Wu, Qiang; Xiong, Zixiang; Castleman, Kenneth R.

    2001-12-01

    With the recent development of the use of digital media for cytogenetic imaging applications, efficient compression techniques are highly desirable to accommodate the rapid growth of image data. This paper introduces a lossy to lossless coding technique for compression of multiplex fluorescence in situ hybridization (M-FISH) images, based on 3-D set partitioning in hierarchical trees (3-D SPIHT). Using a lifting-based integer wavelet decomposition, the 3-D SPIHT achieves both embedded coding and substantial improvement in lossless compression over the Lempel-Ziv (WinZip) coding which is the current method for archiving M-FISH images. The lossy compression performance of the 3-D SPIHT is also significantly better than that of the 2-D based JPEG-2000.

  15. Data Processing for 3D Mass Spectrometry Imaging

    NASA Astrophysics Data System (ADS)

    Xiong, Xingchuang; Xu, Wei; Eberlin, Livia S.; Wiseman, Justin M.; Fang, Xiang; Jiang, You; Huang, Zejian; Zhang, Yukui; Cooks, R. Graham; Ouyang, Zheng

    2012-06-01

    Data processing for three-dimensional mass spectrometry (3D MS) imaging was investigated, starting with a consideration of the challenges in its practical implementation using a series of sections of a tissue volume. The technical issues related to data reduction, 2D imaging data alignment, 3D visualization, and statistical data analysis were identified. Software solutions for these tasks were developed using functions in MATLAB. Peak detection and peak alignment were applied to reduce the data size while retaining the mass accuracy. The main morphologic features of tissue sections were extracted using a classification method for data alignment. Data insertion was performed to construct a 3D data set with spectral information that can be used for generating 3D views and for data analysis. The imaging data previously obtained for a mouse brain using desorption electrospray ionization mass spectrometry (DESI-MS) imaging were used to test and demonstrate the new methodology.
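The peak-detection step used for data reduction can be sketched as simple local-maximum picking above an intensity threshold (a minimal stand-in for the MATLAB routines described; `pick_peaks` and its threshold parameter are illustrative):

```python
import numpy as np

def pick_peaks(mz, intensity, min_intensity=10.0):
    """Reduce a spectrum to a list of (m/z, intensity) peaks: local maxima
    above a threshold, retaining the measured m/z (mass accuracy)."""
    inten = np.asarray(intensity, dtype=float)
    idx = [i for i in range(1, len(inten) - 1)
           if inten[i] >= min_intensity
           and inten[i] > inten[i - 1] and inten[i] >= inten[i + 1]]
    return [(mz[i], inten[i]) for i in idx]
```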

  16. Computational imaging: Machine learning for 3D microscopy

    NASA Astrophysics Data System (ADS)

    Waller, Laura; Tian, Lei

    2015-07-01

    Artificial neural networks have been combined with microscopy to visualize the 3D structure of biological cells. This could lead to solutions for difficult imaging problems, such as the multiple scattering of light.

  17. Image based 3D city modeling : Comparative study

    NASA Astrophysics Data System (ADS)

    Singh, S. P.; Jain, K.; Mandla, V. R.

    2014-06-01

    A 3D city model is a digital representation of the Earth's surface and its related objects in an urban area, such as buildings, trees, vegetation and man-made features. Demand for 3D city modelling is growing rapidly for a variety of engineering and non-engineering applications. Four main image-based approaches are generally used to generate virtual 3D city models: sketch-based modelling, procedural-grammar-based modelling, close-range-photogrammetry-based modelling, and modelling based on computer vision techniques. SketchUp, CityEngine, PhotoModeler and Agisoft PhotoScan are the principal software packages representing these approaches, each with its own methods suited to image-based 3D city modelling. A literature study shows that, to date, no comprehensive comparative study exists on creating a complete 3D city model from images. This paper therefore gives a comparative assessment of these four image-based 3D modelling approaches, based mainly on data acquisition methods, data processing techniques and the output 3D model products. The study area is the campus of the Civil Engineering Department, Indian Institute of Technology, Roorkee (India), which acts as a prototype for a city. The study also discusses the governing parameters and factors, reports practical experience, and gives a brief introduction to the strengths and weaknesses of the four image-based techniques, including remarks on what each software package can and cannot do. The study concludes that every package has advantages and limitations, and the choice of software depends on the requirements of the 3D project: for ordinary visualization projects, SketchUp is a good option; for 3D documentation records, PhotoModeler gives good results; for large-city reconstruction, CityEngine is a good product; and Agisoft PhotoScan creates a much better 3D model, with good texture quality and automatic processing. This image-based comparative study is thus useful for the 3D city user community, and provides a roadmap for the geomatics community to create photo-realistic virtual 3D city models using image-based techniques.

  18. Treatment Paradigms for Retinal and Macular Diseases Using 3-D Retina Cultures Derived From Human Reporter Pluripotent Stem Cell Lines.

    PubMed

    Kaewkhaw, Rossukon; Swaroop, Manju; Homma, Kohei; Nakamura, Jutaro; Brooks, Matthew; Kaya, Koray Dogan; Chaitankar, Vijender; Michael, Sam; Tawa, Gregory; Zou, Jizhong; Rao, Mahendra; Zheng, Wei; Cogliati, Tiziana; Swaroop, Anand

    2016-04-01

    We discuss the use of pluripotent stem cell lines carrying fluorescent reporters driven by retinal promoters to derive three-dimensional (3-D) retina in culture and how this system can be exploited for elucidating human retinal biology, creating disease models in a dish, and designing targeted drug screens for retinal and macular degeneration. Furthermore, we realize that stem cell investigations are labor-intensive and require extensive resources. To expedite scientific discovery by sharing of resources and to avoid duplication of efforts, we propose the formation of a Retinal Stem Cell Consortium. In the field of vision, such collaborative approaches have been enormously successful in elucidating genetic susceptibility associated with age-related macular degeneration. PMID:27116668

  19. Imaging fault zones using 3D seismic image processing techniques

    NASA Astrophysics Data System (ADS)

    Iacopini, David; Butler, Rob; Purves, Steve

    2013-04-01

    Significant advances in the structural analysis of deep-water structures, salt tectonics and extensional rift basins come from descriptions of fault-system geometries imaged in 3D seismic data. However, even where seismic data are excellent, the trajectory of thrust faults is in most cases highly conjectural, and significant uncertainty remains as to the patterns of deformation that develop between the main fault segments, and even as to the fault architectures themselves. Moreover, structural interpretations that conventionally define faults by breaks and apparent offsets of seismic reflectors are commonly conditioned by a narrow range of theoretical models of fault behaviour. For example, almost all interpretations of thrust geometries on seismic data rely on theoretical "end-member" behaviours in which concepts such as strain localization or multilayer mechanics are simply avoided. Yet analogue and outcrop studies confirm that such descriptions are commonly unsatisfactory and incomplete. To fill these gaps and improve the 3D visualization of deformation in the subsurface, seismic attribute methods are developed here in conjunction with conventional mapping of reflector amplitudes (Marfurt & Chopra, 2007). These signal-processing techniques, developed recently and applied especially in the oil industry, use variations in the amplitude and phase of the seismic wavelet; they improve signal interpretation and are calculated over the entire 3D seismic dataset. In this contribution we show 3D seismic examples of fault structures from gravity-driven deep-water thrust structures and extensional basin systems to demonstrate how 3D seismic image processing can not only improve the geometrical interpretation of the faults but also begin to map both strain and damage through the amplitude and phase properties of the seismic signal. 
This is done by quantifying and delineating short-range anomalies in the intensity of reflector amplitudes and collecting these into "disturbance geobodies". These seismic image processing methods represent a first step toward a robust technique for investigating sub-seismic strain and for mapping noisy deformed zones and displacement within subsurface geology (Dutzer et al., 2011; Iacopini et al., 2012). In all these cases, accurate fault interpretation is critical in applied geology for building a robust and reliable reservoir model, and is essential for further study of fault-seal behaviour and reservoir compartmentalization. It is also fundamental for understanding how deformation localizes within sedimentary basins, including the processes associated with active seismogenic faults and mega-thrust systems in subduction zones. Dutzer, J.F., Basford, H., Purves, S., 2009. Investigating fault sealing potential through fault relative seismic volume analysis. Petroleum Geology Conference Series 2010, 7:509-515; doi:10.1144/0070509. Marfurt, K.J., Chopra, S., 2007. Seismic attributes for prospect identification and reservoir characterization. SEG Geophysical Developments. Iacopini, D., Butler, R.W.H., Purves, S., 2012. Seismic imaging of thrust faults and structural damage: a visualization workflow for deepwater thrust belts. First Break, vol. 30, no. 5, pp. 39-46.
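Many amplitude/phase attributes of the kind mentioned above derive from the analytic signal of each trace. A minimal sketch of one such attribute, the instantaneous amplitude (envelope), computed with an FFT-based Hilbert transform (illustrative only, not the authors' workflow):

```python
import numpy as np

def envelope(trace):
    """Instantaneous-amplitude attribute of a seismic trace: magnitude of
    the analytic signal, built with an FFT-based Hilbert transform."""
    n = len(trace)
    spec = np.fft.fft(trace)
    h = np.zeros(n)                 # filter that zeroes negative frequencies
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    analytic = np.fft.ifft(spec * h)
    return np.abs(analytic)
```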

  20. Optical 3D watermark based digital image watermarking for telemedicine

    NASA Astrophysics Data System (ADS)

    Li, Xiao Wei; Kim, Seok Tae

    2013-12-01

    The region of interest (ROI) of a medical image is an area containing important diagnostic information and must be stored without any distortion, so the algorithm applies the watermarking technique to the non-ROI of the medical image while preserving the ROI. The paper presents a 3D-watermark-based medical image watermarking scheme. A 3D watermark object is first decomposed into a 2D elemental image array (EIA) by a lenslet array, and the 2D elemental image array data are then embedded into the host image. The watermark extraction process is the inverse of embedding: from the extracted EIA, the 3D watermark can be reconstructed through the computational integral imaging reconstruction (CIIR) technique. Because the EIA is composed of a number of elemental images, each possessing its own perspective of the 3D watermark object, the 3D virtual watermark can be successfully reconstructed even when the embedded watermark data are badly damaged. Furthermore, using CAT with various rule-number parameters, it is possible to obtain many channels for embedding, so our method overcomes the weakness of having only one transform plane in traditional watermarking methods. The effectiveness of the proposed watermarking scheme is demonstrated with the aid of experimental results.

  1. EISCAT Aperture Synthesis Imaging (EASI _3D) for the EISCAT_3D Project

    NASA Astrophysics Data System (ADS)

    La Hoz, Cesar; Belyey, Vasyl

    2012-07-01

    Aperture Synthesis Imaging Radar (ASIR) is one of the technologies adopted by the EISCAT_3D project to endow it with imaging capabilities in 3-dimensions that includes sub-beam resolution. Complemented by pulse compression, it will provide 3-dimensional images of certain types of incoherent scatter radar targets resolved to about 100 metres at 100 km range, depending on the signal-to-noise ratio. This ability will open new research opportunities to map small structures associated with non-homogeneous, unstable processes such as aurora, summer and winter polar radar echoes (PMSE and PMWE), Natural Enhanced Ion Acoustic Lines (NEIALs), structures excited by HF ionospheric heating, meteors, space debris, and others. The underlying physico-mathematical principles of the technique are the same as the technique employed in radioastronomy to image stellar objects; both require sophisticated inversion techniques to obtain reliable images.

  2. In vivo imaging of retinal hemodynamics with OCT angiography and Doppler OCT.

    PubMed

    Huang, Shenghai; Shen, Meixiao; Zhu, Dexi; Chen, Qi; Shi, Ce; Chen, Zhongping; Lu, Fan

    2016-02-01

    Retinal hemodynamics is important for early diagnosis and precise monitoring in retinal vascular diseases. We propose a novel method for measuring absolute retinal blood flow in vivo using the combined techniques of optical coherence tomography (OCT) angiography and Doppler OCT. Doppler values can be corrected by Doppler angles extracted from OCT angiography images. A three-dimensional (3D) segmentation algorithm based on dynamic programming was developed to extract the 3D boundaries of optic disc vessels, and Doppler angles were calculated from 3D vessel geometry. The accuracy of blood flow from the Doppler OCT was validated using a flow phantom. The feasibility of the method was tested on a subject in vivo. The pulsatile retinal blood flow and the parameters for retinal hemodynamics were successfully obtained. PMID:26977370
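The Doppler-angle correction at the heart of this method can be sketched as follows (a simplified illustration; `absolute_velocity` is a hypothetical name, and computing absolute flow additionally integrates velocity over the vessel cross-section):

```python
import math

def absolute_velocity(v_axial, doppler_angle_deg):
    """Correct the axial (Doppler-projected) velocity by the Doppler angle
    recovered from the 3D vessel geometry: v = v_axial / cos(theta)."""
    theta = math.radians(doppler_angle_deg)
    if abs(math.cos(theta)) < 1e-6:
        raise ValueError("Doppler angle too close to 90 degrees")
    return v_axial / math.cos(theta)
```

At a 60-degree Doppler angle, a measured axial velocity underestimates the true velocity by a factor of two, which is why extracting the angle from the 3D vessel geometry matters.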

  3. In vivo imaging of retinal hemodynamics with OCT angiography and Doppler OCT

    PubMed Central

    Huang, Shenghai; Shen, Meixiao; Zhu, Dexi; Chen, Qi; Shi, Ce; Chen, Zhongping; Lu, Fan

    2016-01-01

    Retinal hemodynamics is important for early diagnosis and precise monitoring in retinal vascular diseases. We propose a novel method for measuring absolute retinal blood flow in vivo using the combined techniques of optical coherence tomography (OCT) angiography and Doppler OCT. Doppler values can be corrected by Doppler angles extracted from OCT angiography images. A three-dimensional (3D) segmentation algorithm based on dynamic programming was developed to extract the 3D boundaries of optic disc vessels, and Doppler angles were calculated from 3D vessel geometry. The accuracy of blood flow from the Doppler OCT was validated using a flow phantom. The feasibility of the method was tested on a subject in vivo. The pulsatile retinal blood flow and the parameters for retinal hemodynamics were successfully obtained. PMID:26977370

  4. 3D computational imaging with single-pixel detectors.

    PubMed

    Sun, B; Edgar, M P; Bowman, R; Vittert, L E; Welsh, S; Bowman, A; Padgett, M J

    2013-05-17

    Computational imaging enables retrieval of the spatial information of an object with the use of single-pixel detectors. By projecting a series of known random patterns and measuring the backscattered intensity, it is possible to reconstruct a two-dimensional (2D) image. We used several single-pixel detectors in different locations to capture the 3D form of an object. From each detector we derived a 2D image that appeared to be illuminated from a different direction, even though only a single digital projector was used for illumination. From the shading of the images, the surface gradients could be derived and the 3D object reconstructed. We compare our result to that obtained from a stereophotogrammetric system using multiple cameras. Our simplified approach to 3D imaging can readily be extended to nonvisible wavebands. PMID:23687044
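Deriving surface gradients from the shading recorded by several detectors is, in essence, photometric stereo. A hedged sketch under a Lambertian reflectance assumption (not the authors' exact reconstruction pipeline):

```python
import numpy as np

def surface_normal(intensities, directions):
    """Least-squares Lambertian photometric stereo: recover the unit surface
    normal and albedo from intensities seen under known directions, the
    shading principle behind multi-detector single-pixel 3D imaging."""
    L = np.asarray(directions, dtype=float)    # one unit direction per row
    I = np.asarray(intensities, dtype=float)
    g, *_ = np.linalg.lstsq(L, I, rcond=None)  # g = albedo * normal
    albedo = np.linalg.norm(g)
    return g / albedo, albedo
```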

  5. Low Dose, Low Energy 3d Image Guidance during Radiotherapy

    NASA Astrophysics Data System (ADS)

    Moore, C. J.; Marchant, T.; Amer, A.; Sharrock, P.; Price, P.; Burton, D.

    2006-04-01

    Patient kilo-voltage X-ray cone beam volumetric imaging for radiotherapy was first demonstrated on an Elekta Synergy mega-voltage X-ray linear accelerator. Subsequently low dose, reduced profile reconstruction imaging was shown to be practical for 3D geometric setup registration to pre-treatment planning images without compromising registration accuracy. Reconstruction from X-ray profiles gathered between treatment beam deliveries was also introduced. The innovation of zonal cone beam imaging promises significantly reduced doses to patients and improved soft tissue contrast in the tumour target zone. These developments coincided with the first dynamic 3D monitoring of continuous body topology changes in patients, at the moment of irradiation, using a laser interferometer. They signal the arrival of low dose, low energy 3D image guidance during radiotherapy itself.

  6. Simulation of 3D image depth perception in a 3D display using two stereoscopic displays at different depths

    NASA Astrophysics Data System (ADS)

    Uehira, Kazutake

    2006-02-01

    We studied a new 3-D display that uses two stereoscopic displays instead of two 2-D displays in a depth-fused 3D display. We found that two 3-D images with the same shape displayed at different depths by the two stereoscopic displays were fused into one 3-D image when they were viewed as overlapping. Moreover, we found that the perceived depth of the fused 3-D image depends on both the luminance ratio of the two 3-D images and their original perceived depth. This paper presents the simulation results for the perceived depth of the fused 3-D image on the new 3-D display. We applied a model in which the human visual system uses a low-pass filter to perceive the fused image, the same as that used for a conventional DFD display. The simulation results revealed that the perceived depth of the fused image changed depending on both the luminance ratio of the two 3-D images and their original perceived depth, as in the subjective test results, and the low-pass filter model accurately presented the perception of a 3-D image on our 3-D display.
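A common first-order model of depth-fused perception, consistent with the dependence on luminance ratio reported here, interpolates the perceived depth between the two display planes by relative luminance (a simplified illustration; the paper's simulation uses a low-pass filter model of the visual system):

```python
def perceived_depth(d_front, d_rear, lum_front, lum_rear):
    """Rule-of-thumb DFD model: perceived depth of the fused image lies
    between the two display depths, weighted by their luminances."""
    total = lum_front + lum_rear
    return (lum_front * d_front + lum_rear * d_rear) / total
```

With equal luminances the fused image appears midway between the planes; raising the front-plane luminance pulls it forward.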

  7. Fundus autofluorescence applications in retinal imaging

    PubMed Central

    Gabai, Andrea; Veritti, Daniele; Lanzetta, Paolo

    2015-01-01

    Fundus autofluorescence (FAF) is a relatively new imaging technique that can be used to study retinal diseases. It provides information on retinal metabolism and health. Several different pathologies can be detected. Peculiar AF alterations can help the clinician to monitor disease progression and to better understand its pathogenesis. In the present article, we review FAF principles and clinical applications. PMID:26139802

  8. Unsupervised segmentation of 3D brain MR images

    NASA Astrophysics Data System (ADS)

    Lee, Chulhee; Huh, Shin

    1998-10-01

    In this paper, we propose an algorithm for unsupervised segmentation of 3D sagittal brain MR images. 3D images consist of sequences of 2D images. We start the 3D segmentation from mid-sagittal brain MR images. Once these mid-sagittal images are successfully segmented, we use the resulting images to simplify the processing of the more lateral sagittal slices. In order to segment mid-sagittal brain MR images, we first apply thresholding to obtain binary images. Then we find some landmarks in the binary images. The landmarks and anatomical information are used to preprocess the binary images. The preprocessing includes eliminating small regions and removing the skull, which substantially simplifies the subsequent operations. The strategy is to perform segmentation in the binary image as much as possible and then return to the original gray scale image to solve problematic areas. Once we accomplish the segmentation of the mid-sagittal brain MR image, the segmented brain area is used as a mask for adjacent slices. Experiments show promising results.
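The initial thresholding step could, for instance, use Otsu's method to obtain the binary images (an illustrative choice; the paper does not specify the thresholding rule):

```python
import numpy as np

def otsu_threshold(image):
    """Pick a global threshold maximizing the between-class variance (Otsu),
    one way to implement the thresholding step for binary images."""
    img = np.asarray(image).ravel()
    hist, edges = np.histogram(img, bins=256)
    probs = hist / hist.sum()
    levels = (edges[:-1] + edges[1:]) / 2       # bin-center grey levels
    best_t, best_var = levels[0], -1.0
    for k in range(1, 256):
        w0, w1 = probs[:k].sum(), probs[k:].sum()
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (probs[:k] * levels[:k]).sum() / w0
        mu1 = (probs[k:] * levels[k:]).sum() / w1
        var = w0 * w1 * (mu0 - mu1) ** 2        # between-class variance
        if var > best_var:
            best_var, best_t = var, levels[k]
    return best_t
```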

  9. A 3D surface imaging system for assessing human obesity

    NASA Astrophysics Data System (ADS)

    Xu, B.; Yu, W.; Yao, M.; Yao, X.; Li, Q.; Pepper, M. R.; Freeland-Graves, J. H.

    2009-08-01

    The increasing prevalence of obesity suggests a need to develop a convenient, reliable and economical tool for assessment of this condition. Three-dimensional (3D) body surface imaging has emerged as an exciting technology for estimation of body composition. This paper presents a new 3D body imaging system, which was designed for enhanced portability, affordability, and functionality. In this system, stereo vision technology was used to satisfy the requirements for a simple hardware setup and fast image acquisitions. The portability of the system was created via a two-stand configuration, and the accuracy of body volume measurements was improved by customizing stereo matching and surface reconstruction algorithms that target specific problems in 3D body imaging. Body measurement functions dedicated to body composition assessment also were developed. The overall performance of the system was evaluated in human subjects by comparison to other conventional anthropometric methods, as well as air displacement plethysmography, for body fat assessment.

  10. 3D reconstruction of neurons in electron microscopy images.

    PubMed

    Ensafi, Shahab; Shijian Lu; Kassim, Ashraf A; Tan, Chew Lim

    2014-01-01

    With the prevalence of brain-related diseases such as Alzheimer's in an increasingly ageing population, connectomics, the study of connections between neurons of the human brain, has emerged as a novel and challenging research topic. Accurate and fully automatic algorithms are needed to deal with the increasing amount of data from brain images. This paper presents an automatic 3D neuron reconstruction technique in which neurons within each slice image are first segmented and then linked across multiple slices within the publicly available electron microscopy dataset (SNEMI3D). First, a random forest classifier is applied on top of superpixels for neuron segmentation within each slice image. The maximum overlap between two consecutive images is then calculated for neuron linking, where the adjacency matrix of two different labelings of the segments is used to distinguish neuron merging from splitting. Experiments on the SNEMI3D dataset show that the proposed technique is efficient and accurate. PMID:25571541
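The overlap-based linking step can be sketched as follows (a simplified illustration of the idea, not the authors' code; each label is linked to the label with maximum pixel overlap in the next slice):

```python
import numpy as np

def link_by_overlap(labels_a, labels_b):
    """Link segmented neurons across consecutive EM slices by maximum pixel
    overlap, returning {label_in_a: label_in_b}. Background label is 0."""
    a, b = np.asarray(labels_a), np.asarray(labels_b)
    links = {}
    for la in np.unique(a):
        if la == 0:
            continue
        overlap = b[a == la]            # labels in b under region la
        overlap = overlap[overlap != 0]
        if overlap.size:
            vals, counts = np.unique(overlap, return_counts=True)
            links[int(la)] = int(vals[np.argmax(counts)])
    return links
```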

  11. Ballistic and 3-D holographic imaging of bone

    NASA Astrophysics Data System (ADS)

    Hofmann, Martin; Huyet, Guillaume; Jones, David; Meerholz, Klaus; Hummelen, Kees; Corbett, Brian; van Loon, Jack; Eckhard, Fir; Voges, Heinrich; Spahn, Jürgen; Thoma, Ralph

    2005-10-01

    This MAP project is investigating and realising an alternative technology to provide 3-D bone images at very high (μm) resolutions, without ionising radiation, and potentially much more cost effective than μCT. This new method uses time-resolved or coherence-gating imaging schemes with near-IR light sources, preferably compact solid-state lasers, cost-effective semiconductor lasers or light-emitting diodes. This system has the potential to resolve single osteoclast resorption pits and to allow 3-D biomedical imaging in significantly reduced time compared with μCT.

  12. 3D Image Display Courses for Information Media Students.

    PubMed

    Yanaka, Kazuhisa; Yamanouchi, Toshiaki

    2016-01-01

    Three-dimensional displays are used extensively in movies and games. These displays are also essential in mixed reality, where virtual and real spaces overlap. Therefore, engineers and creators should be trained to master 3D display technologies. For this reason, the Department of Information Media at the Kanagawa Institute of Technology has launched two 3D image display courses specifically designed for students who aim to become information media engineers and creators. PMID:26960028

  13. Imaging of retinal and choroidal vascular tumours

    PubMed Central

    Heimann, H; Jmor, F; Damato, B

    2013-01-01

    The most common intraocular vascular tumours are choroidal haemangiomas, vasoproliferative tumours, and retinal haemangioblastomas. Rarer conditions include cavernous retinal angioma and arteriovenous malformations. Options for ablating the tumour include photodynamic therapy, argon laser photocoagulation, trans-scleral diathermy, cryotherapy, anti-angiogenic agents, plaque radiotherapy, and proton beam radiotherapy. Secondary effects are common and include retinal exudates, macular oedema, epiretinal membranes, retinal fibrosis, as well as serous and tractional retinal detachment, which are treated using standard methods (ie, intravitreal anti-angiogenic agents or steroids as well as vitreoretinal procedures, such as epiretinal membrane peeling and release of retinal traction). The detection, diagnosis, and monitoring of vascular tumours and their complications have improved considerably thanks to advances in imaging. These include spectral domain and enhanced depth imaging optical coherence tomography (SD-OCT and EDI-OCT, respectively), wide-angle photography and angiography as well as wide-angle fundus autofluorescence. Such novel imaging has provided new diagnostic clues and has profoundly influenced therapeutic strategies so that vascular tumours and secondary effects are now treated concurrently instead of sequentially, enhancing any opportunities for conserving vision and the eye. In this review, we describe how SD-OCT, EDI-OCT, autofluorescence, wide-angle photography and wide-angle angiography have facilitated the evaluation of eyes with the more common vascular tumours, that is, choroidal haemangioma, retinal vasoproliferative tumours, and retinal haemangioblastoma. PMID:23196648

  14. 3D image analysis of abdominal aortic aneurysm

    NASA Astrophysics Data System (ADS)

    Subasic, Marko; Loncaric, Sven; Sorantin, Erich

    2001-07-01

    In this paper we propose a technique for 3-D segmentation of abdominal aortic aneurysm (AAA) from computed tomography angiography (CTA) images. The output data (a 3-D model) from the proposed method can be used to measure aortic shape and dimensions. Knowledge of aortic shape and size is very important in planning the minimally invasive procedure, that is, in selecting an appropriate stent-graft device for treatment of AAA. The technique is based on a 3-D deformable model and uses the level-set algorithm for its implementation. The method performs 3-D segmentation of CTA images and extracts a 3-D model of the aortic wall. Once the 3-D model of the aortic wall is available, it is easy to perform all the measurements required for appropriate stent-graft selection. The method proposed in this paper uses the level-set algorithm for deformable models instead of the classical snake algorithm. The main advantage of the level-set algorithm is that it enables easy segmentation of complex structures, avoiding most of the drawbacks of the classical approach. We have extended the deformable model to incorporate a priori knowledge about the shape of the AAA, which helps direct the evolution of the deformable model to segment the aorta correctly. The algorithm has been implemented in the IDL and C languages. Experiments performed on real patient CTA images have shown good results.
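A minimal illustration of a level-set update with a constant outward speed, the kind of evolution such deformable models build on (highly simplified; the paper's speed function also includes image-derived and shape-prior terms):

```python
import numpy as np

def levelset_step(phi, F, dt):
    """One explicit update of the level-set equation phi_t + F*|grad phi| = 0
    for a constant outward speed F, using central differences, which is
    adequate while phi stays close to a signed-distance function."""
    gy, gx = np.gradient(phi)
    grad_mag = np.sqrt(gx ** 2 + gy ** 2)
    return phi - dt * F * grad_mag
```

For a signed-distance function, |grad phi| = 1, so one step with F = dt = 1 simply moves the zero level set (the segmented boundary) outward by one pixel.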

  15. Scalable 3D image conversion and ergonomic evaluation

    NASA Astrophysics Data System (ADS)

    Kishi, Shinsuke; Kim, Sang Hyun; Shibata, Takashi; Kawai, Takashi; Häkkinen, Jukka; Takatalo, Jari; Nyman, Göte

    2008-02-01

    Digital 3D cinema has recently become popular and a number of high-quality 3D films have been produced. However, in contrast with advances in 3D display technology, it has been pointed out that there is a lack of suitable 3D content and content creators. Since 3D display methods and viewing environments vary widely, high-quality content is expected to be repurposed across them. At the same time, there is increasing interest in the bio-medical effects of image content of various types, and there are moves toward international standardization, so 3D content production needs to take into consideration safety and conformity with international guidelines. The aim of the authors' research is to contribute to the production and application of 3D content that is safe and comfortable to watch by developing a scalable 3D conversion technology. In this paper, the authors focus on the process of changing the screen size, examining a conversion algorithm and its effectiveness. The authors evaluated the visual load imposed during the viewing of various 3D content converted by the prototype algorithm, as compared with ideal conditions and with content expanded without conversion. Scheffé's paired comparison method was used for evaluation. To examine the effects of screen size reduction on viewers, changes in user impression and experience were elucidated using the IBQ methodology. The results of the evaluation are presented along with a discussion of the effectiveness and potential of the developed scalable 3D conversion algorithm and future research tasks.

  16. 3-D Terahertz Synthetic-Aperture Imaging and Spectroscopy

    NASA Astrophysics Data System (ADS)

    Henry, Samuel C.

    Terahertz (THz) wavelengths have attracted recent interest in multiple disciplines within engineering and science. Situated between the infrared and the microwave region of the electromagnetic spectrum, THz energy can propagate through non-polar materials such as clothing or packaging layers. Moreover, many chemical compounds, including explosives and many drugs, reveal strong absorption signatures in the THz range. For these reasons, THz wavelengths have great potential for non-destructive evaluation and explosive detection. Three-dimensional (3-D) reflection imaging with considerable depth resolution is also possible using pulsed THz systems. While THz imaging (especially 3-D) systems typically operate in transmission mode, reflection offers the most practical configuration for standoff detection, especially for objects with high water content (like human tissue) which are opaque at THz frequencies. In this research, reflection-based THz synthetic-aperture (SA) imaging is investigated as a potential imaging solution. THz SA imaging results presented in this dissertation are unique in that a 2-D planar synthetic array was used to generate a 3-D image without relying on a narrow time-window for depth isolation [Shen 2005]. Novel THz chemical detection techniques are developed and combined with broadband THz SA capabilities to provide concurrent 3-D spectral imaging. All algorithms are tested with various objects and pressed pellets using a pulsed THz time-domain system in the Northwest Electromagnetics and Acoustics Research Laboratory (NEAR-Lab).

  17. Computerized analysis of pelvic incidence from 3D images

    NASA Astrophysics Data System (ADS)

    Vrtovec, Tomaž; Janssen, Michiel M. A.; Pernuš, Franjo; Castelein, René M.; Viergever, Max A.

    2012-02-01

    The sagittal alignment of the pelvis can be evaluated by the angle of pelvic incidence (PI), which is constant for an arbitrary subject position and orientation and can be therefore compared among subjects in standing, sitting or supine position. In this study, PI was measured from three-dimensional (3D) computed tomography (CT) images of normal subjects that were acquired in supine position. A novel computerized method, based on image processing techniques, was developed to automatically determine the anatomical references required to measure PI, i.e. the centers of the femoral heads in 3D, and the center and inclination of the sacral endplate in 3D. Multiplanar image reformation was applied to obtain perfect sagittal views with all anatomical structures completely in line with the hip axis, from which PI was calculated. The resulting PI (mean ± standard deviation) was equal to 46.6° ± 9.2° for male subjects (N = 189), 47.6° ± 10.7° for female subjects (N = 181), and 47.1° ± 10.0° for all subjects (N = 370). The obtained measurements of PI from 3D images were not biased by acquisition projection or structure orientation, because all anatomical structures were completely in line with the hip axis. The performed measurements in 3D therefore represent PI according to the actual geometrical relationships among anatomical structures of the sacrum, pelvis and hips, as observed from the perfect sagittal views.
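Given the detected anatomical references, the PI computation reduces to an angle between two 3D vectors: the sacral endplate normal and the line joining the endplate center to the hip axis. A hedged sketch (function and argument names are illustrative):

```python
import numpy as np

def pelvic_incidence(endplate_center, endplate_normal, fem_head_l, fem_head_r):
    """Pelvic incidence in degrees: angle between the sacral endplate normal
    and the line from the endplate center to the hip axis (midpoint of the
    two femoral head centers), all in 3D."""
    hip_center = (np.asarray(fem_head_l, float) + np.asarray(fem_head_r, float)) / 2
    v = hip_center - np.asarray(endplate_center, float)
    n = np.asarray(endplate_normal, float)
    cosang = np.dot(v, n) / (np.linalg.norm(v) * np.linalg.norm(n))
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))
```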

  18. 3D image analysis of abdominal aortic aneurysm

    NASA Astrophysics Data System (ADS)

    Subasic, Marko; Loncaric, Sven; Sorantin, Erich

    2002-05-01

    This paper presents a method for 3-D segmentation of abdominal aortic aneurysm from computed tomography angiography images. The proposed method is automatic and requires minimal user assistance. Segmentation is performed in two steps: first the inner and then the outer aortic border is segmented. The two steps differ because image conditions differ at the two aortic borders. The outputs of these two segmentations give a complete 3-D model of the abdominal aorta, which is used in measurements of the aneurysm area. The deformable model is implemented using the level-set algorithm because of its ability to describe, in a natural manner, the complex shapes that frequently occur in pathology. In segmenting the outer aortic boundary, we introduced knowledge-based preprocessing to enhance and reconstruct the low-contrast aortic boundary. The method has been implemented in the IDL and C languages. Experiments performed on real patient CTA images have shown good results.

  19. Integrated optical 3D digital imaging based on DSP scheme

    NASA Astrophysics Data System (ADS)

    Wang, Xiaodong; Peng, Xiang; Gao, Bruce Z.

    2008-03-01

    We present a scheme for integrated optical 3-D digital imaging (IO3DI) based on a digital signal processor (DSP), which can acquire range images independently, without PC support. The scheme is built on a parallel hardware structure in which the DSP is aided by a field programmable gate array (FPGA) to realize 3-D imaging, and it adopts phase measurement profilometry. To realize pipelined processing of fringe projection, image acquisition and fringe pattern analysis, we developed a multi-threaded application program under the DSP/BIOS RTOS (real-time operating system). The RTOS provides a preemptive kernel and a powerful configuration tool, with which we are able to achieve real-time scheduling and synchronization. To accelerate automatic fringe analysis and phase unwrapping, we make use of software optimization techniques. The proposed scheme reaches a performance of 39.5 f/s (frames per second), so it is well suited to real-time fringe-pattern analysis and fast 3-D imaging. Experimental results are also presented to show the validity of the proposed scheme.
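
    Phase measurement profilometry typically recovers the fringe phase from several phase-shifted images; the classic four-step variant, shown below as a generic sketch rather than the paper's DSP code, uses shifts of 0, π/2, π and 3π/2:

```python
import numpy as np

def four_step_phase(i0, i1, i2, i3):
    """Wrapped phase from four fringe images shifted by 0, pi/2, pi, 3pi/2:
    phi = atan2(I3 - I1, I0 - I2), returned in (-pi, pi]."""
    return np.arctan2(i3 - i1, i0 - i2)

# Synthetic fringes: I_k = A + B*cos(phi + k*pi/2)
phi_true = np.linspace(-1.2, 1.2, 7)
frames = [100 + 50 * np.cos(phi_true + k * np.pi / 2) for k in range(4)]
phi_est = four_step_phase(*frames)
print(np.allclose(phi_est, phi_true))
```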

  20. Automatic 3D Mapping Using Multiple Uncalibrated Close Range Images

    NASA Astrophysics Data System (ADS)

    Rafiei, M.; Saadatseresht, M.

    2013-09-01

    Automatic three-dimensional modeling of the real world has been an important research topic in the geomatics and computer vision fields for many years. With the development of commercial digital cameras and modern image processing techniques, close range photogrammetry is widely utilized in many fields, such as structural measurement, topographic surveying, and architectural and archeological surveying. As a non-contact technique, photogrammetry provides methods to determine the 3D locations of objects from two-dimensional (2D) images. The problem of estimating the locations of 3D points from multiple images, which often involves simultaneously estimating both the 3D geometry (structure) and the camera poses (motion), is commonly known as structure from motion (SfM). In this research a step-by-step approach to generating the 3D point cloud of a scene is considered. After taking images with a camera, corresponding points must be detected in each pair of views; here the SIFT method is used for image matching because of its efficiency over large baselines. Next, the camera motion and the 3D positions of the matched feature points are retrieved up to a projective transformation (projective reconstruction). Lacking additional information about the camera or the scene, such a reconstruction does not preserve the parallelism of lines, and the results of the SfM computation are much more useful if a metric reconstruction is obtained; therefore, multi-view Euclidean reconstruction is applied and discussed. To refine the reconstruction and achieve precise 3D points, a more general and useful approach, namely bundle adjustment, is used. Finally, two real cases, an excavation and a tower, have been reconstructed.
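
    One core step in SfM, triangulating a 3D point from its projections in two views with known camera matrices, can be sketched with the linear (DLT) method; the camera matrices below are illustrative, not from the paper.

```python
import numpy as np

def triangulate_dlt(P1, P2, x1, x2):
    """Linear triangulation: solve A X = 0, with rows built from x ~ P X
    for two views. x1, x2 are (u, v) image coordinates; returns the 3D point."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]                      # null vector = homogeneous 3D point
    return X[:3] / X[3]

# Illustrative cameras: identity view and a second view translated along x
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.5, 0.2, 4.0])
proj = lambda P, X: (P @ np.append(X, 1.0))[:2] / (P @ np.append(X, 1.0))[2]
X_est = triangulate_dlt(P1, P2, proj(P1, X_true), proj(P2, X_true))
print(np.allclose(X_est, X_true))
```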

  1. 3D Image Segmentation Using the Bounded Irregular Pyramid

    NASA Astrophysics Data System (ADS)

    Torres, Fuensanta; Marfil, Rebeca; Bandera, Antonio

    This paper presents a novel pyramid approach for fast segmentation of 3D images. A pyramid is a hierarchy of successively reduced graphs whose efficiency is strongly influenced by the data structure that codes the information within the pyramid and by the decimation process used to build a graph from the graph below. Depending on these two features, pyramids have been classified as regular or irregular. The proposed approach extends the idea of the Bounded Irregular Pyramid (BIP) [5] to 3D images. Thus, the 3D-BIP is a mixture of both types of pyramids whose goal is to combine their advantages: the low computational cost of regular pyramids and the consistent, useful results provided by irregular ones. Specifically, its data structure combines a regular decimation process with a union-find strategy to build the successive 3D levels of the structure. Experimental results show that this approach is able to provide a low-level segmentation of 3D images at a low computational cost.
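
    The union-find strategy mentioned above can be sketched as the standard disjoint-set structure with path compression and union by rank; voxels merged into one set correspond to one region (a generic sketch, not the 3D-BIP data structure itself):

```python
class UnionFind:
    """Disjoint sets with path compression and union by rank."""
    def __init__(self, n):
        self.parent = list(range(n))
        self.rank = [0] * n

    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path compression
            x = self.parent[x]
        return x

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return
        if self.rank[ra] < self.rank[rb]:
            ra, rb = rb, ra
        self.parent[rb] = ra
        if self.rank[ra] == self.rank[rb]:
            self.rank[ra] += 1

# Merge voxels 0-1-2 into one region, leave voxel 3 alone
uf = UnionFind(4)
uf.union(0, 1)
uf.union(1, 2)
print(uf.find(0) == uf.find(2), uf.find(0) == uf.find(3))
```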

  2. Laboratory 3D Micro-XRF/Micro-CT Imaging System

    NASA Astrophysics Data System (ADS)

    Bruyndonckx, P.; Sasov, A.; Liu, X.

    2011-09-01

    A prototype micro-XRF laboratory system based on pinhole imaging was developed to produce 3D elemental maps. The fluorescence x-rays are detected by a deep-depleted CCD camera operating in photon-counting mode. A charge-clustering algorithm, together with dynamically adjusted exposure times, ensures a correct energy measurement. The XRF component has a spatial resolution of 70 μm and an energy resolution of 180 eV at 6.4 keV. The system is augmented by a micro-CT imaging modality. This is used for attenuation correction of the XRF images and to co-register features in the 3D XRF images with morphological structures visible in the volumetric CT images of the object.

  3. AUTOMATED CELL SEGMENTATION WITH 3D FLUORESCENCE MICROSCOPY IMAGES

    PubMed Central

    Kong, Jun; Wang, Fusheng; Teodoro, George; Liang, Yanhui; Zhu, Yangyang; Tucker-Burden, Carol; Brat, Daniel J.

    2015-01-01

    A large number of cell-oriented cancer investigations require an effective and reliable cell segmentation method on three-dimensional (3D) fluorescence microscopy images for quantitative analysis of cell biological properties. In this paper, we present a fully automated cell segmentation method that can detect cells in 3D fluorescence microscopy images. Guided by the characteristics of fluorescence imaging, we regularized the image gradient field by gradient vector flow (GVF) on an interpolated and smoothed data volume, and grouped voxels according to the gradient modes identified by tracking the GVF field. Adaptive thresholding was then applied to the voxels associated with each gradient mode, with voxel intensities enhanced by a multiscale cell filter. We applied the method to a large volume of 3D fluorescence imaging data of human brain tumor cells, obtaining (1) low false-detection and miss rates for individual cells, and (2) negligible over- and under-segmentation incidence for clustered cells. Additionally, the concordance of cell morphometry between automated and manual segmentation was encouraging. These results suggest a promising 3D cell segmentation method applicable to cancer studies. PMID:26405506
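
    Adaptive thresholding of the kind applied to each voxel group can be illustrated with Otsu's method, which picks the threshold maximizing the between-class variance of the two resulting intensity groups (a generic sketch, not the paper's exact rule):

```python
import numpy as np

def otsu_threshold(values, bins=256):
    """Otsu's method: choose the threshold that maximizes the
    between-class variance of the two resulting classes."""
    hist, edges = np.histogram(values, bins=bins)
    p = hist.astype(float) / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2.0
    w0 = np.cumsum(p)                     # weight of the low class
    mu = np.cumsum(p * centers)           # cumulative mean
    mu_t = mu[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * w0 - mu) ** 2 / (w0 * (1.0 - w0))
    k = np.nanargmax(sigma_b)             # skip the degenerate end bins
    return centers[k]

# Bimodal toy data: dim background voxels and brighter cell voxels
rng = np.random.default_rng(0)
vals = np.concatenate([rng.normal(50, 5, 5000), rng.normal(150, 10, 1000)])
t = otsu_threshold(vals)
print(round(t, 1))
```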

  4. A miniature high resolution 3-D imaging sonar.

    PubMed

    Josserand, Tim; Wolley, Jason

    2011-04-01

    This paper discusses the design and development of a miniature, high-resolution 3-D imaging sonar. The design utilizes frequency-steered phased array (FSPA) technology. FSPAs present a small, low-power solution to the problem of underwater imaging sonars: the technology provides a method to build sonars with a large number of beams without proportional power, circuitry and processing complexity. The design differs from previous methods in that the array elements are manufactured from a monolithic material. With this technique the arrays are flat, and considerably smaller element dimensions are achievable, which allows for higher frequency ranges and smaller array sizes. In the current frequency range, the demonstrated array has ultra-high image resolution (1″ range × 1° azimuth × 1° elevation) and small size (<3″×3″). The design of the FSPA utilizes the phasing-induced frequency-dependent directionality of a linear phased array to produce multiple beams in a forward sector. The FSPA requires only two hardware channels per array and can be arranged in single- and multiple-array configurations that deliver wide-sector 2-D images. 3-D images can be obtained by scanning the array in a direction perpendicular to the 2-D image field and applying suitable image processing to the multiple scanned 2-D images. This paper introduces the 3-D FSPA concept, theory and design methodology. Finally, results from a prototype array are presented and discussed. PMID:21112066

  5. 3-D Display Of Magnetic Resonance Imaging Of The Spine

    NASA Astrophysics Data System (ADS)

    Nelson, Alan C.; Kim, Yongmin; Haralick, Robert M.; Anderson, Paul A.; Johnson, Roger H.; DeSoto, Larry A.

    1988-06-01

    The original data are produced through standard magnetic resonance imaging (MRI) procedures with a surface coil applied to the lower back of a normal human subject. The 3-D spine image data consist of twenty-six contiguous slices with 256 x 256 pixels per slice. Two methods for visualization of the 3-D spine are explored. One method utilizes a varifocal mirror system, which creates a true 3-D virtual picture of the object. The other method uses a standard high-resolution monitor to simultaneously show the three orthogonal sections which intersect at any user-selected point within the object volume. We discuss the application of these systems in the assessment of low back pain.
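
    The second display method, showing the three orthogonal sections through a user-selected point, is straightforward to sketch when the image volume is stored as a 3-D array (the shapes below are illustrative stand-ins for the original 26-slice data):

```python
import numpy as np

def orthogonal_sections(volume, point):
    """Return the axial, coronal and sagittal slices that intersect
    at the given (z, y, x) point of a 3-D volume."""
    z, y, x = point
    return volume[z, :, :], volume[:, y, :], volume[:, :, x]

# Toy volume standing in for 26 slices of 256 x 256 pixels
vol = np.arange(26 * 256 * 256).reshape(26, 256, 256)
axial, coronal, sagittal = orthogonal_sections(vol, (13, 128, 64))
print(axial.shape, coronal.shape, sagittal.shape)
```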

  6. 3-D Image Processing In Functional Stereotactic Neurosurgery

    NASA Astrophysics Data System (ADS)

    Garibotto, G.; Giorgi, C.; Cerchiari, U.

    1983-10-01

    The possibility of analyzing neuroanatomical images (such as computerized tomography) in three-dimensional form represents a significant improvement in diagnostic accuracy over conventional analysis of 2-D images. Especially when the task consists in finding correspondences and/or differences between different measurements of the same organ, only a 3-D comparison yields reliable results and avoids misregistration errors. In this paper we present an example of 3-D analysis and representation of cerebral structures in functional stereotactic neurosurgery. The problem is briefly introduced in section 1, with reference to the state of the art and future trends of medical research. Digital image processing techniques are used to obtain a 3-D description of some diencephalic structures primarily involved in stereotactic neurosurgery. The input data consist of parallel sections of suitably stained brain slices which are displayed in a stereotactic atlas [1]. The 3-D reconstruction has been accomplished in several steps: first, the contour lines of the cerebral structures were digitized as a 1-D description of the cerebral structures. A contour-filling procedure was then implemented to collect these elements into a finite number of regions, labelled by fixed amplitude values (allowing easy discrimination on a pseudocolor display). Because of the non-uniform spacing of the atlas sections, three-dimensional interpolation was performed on these 2-D images. These processing techniques are described in section 2, with examples of application to actual stereotactic data. The main advantage of handling neuroanatomical data in 3-D form is the possibility of verifying any probe trajectory that does not lie in the atlas planes, which is usually the case. Very simple techniques have been developed to evaluate arbitrarily slanted sections of the volume as well as to obtain an approximate 3-D axonometric display, using a standard minicomputer [2]. A brief discussion of these solutions is given in section 3.
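
    The interpolation step between non-uniformly spaced atlas sections can be sketched as slice-wise linear interpolation along the stacking axis (a generic sketch, not the original minicomputer code):

```python
import numpy as np

def resample_sections(sections, z_positions, z_new):
    """Linearly interpolate a stack of 2-D sections, given at non-uniform
    z positions, onto new z locations (one output slice per z in z_new)."""
    sections = np.asarray(sections, dtype=float)
    z_positions = np.asarray(z_positions, dtype=float)
    out = np.empty((len(z_new),) + sections.shape[1:])
    for i, z in enumerate(z_new):
        j = np.clip(np.searchsorted(z_positions, z), 1, len(z_positions) - 1)
        z0, z1 = z_positions[j - 1], z_positions[j]
        w = (z - z0) / (z1 - z0)          # fractional position between slices
        out[i] = (1 - w) * sections[j - 1] + w * sections[j]
    return out

# Two sections at z = 0 and z = 3; interpolate a slice at z = 1
s = np.stack([np.zeros((4, 4)), np.full((4, 4), 3.0)])
mid = resample_sections(s, [0.0, 3.0], [1.0])
print(float(mid[0, 0, 0]))
```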

  7. Reconstruction of 3D scenes from sequences of images

    NASA Astrophysics Data System (ADS)

    Niu, Bei; Sang, Xinzhu; Chen, Duo; Cai, Yuanfa

    2013-08-01

    Reconstruction of three-dimensional (3D) scenes is an active research topic in the fields of computer vision and 3D display, and modeling 3D objects rapidly and effectively remains a challenge. A 3D model can be extracted from multiple images. The system only requires a sequence of images taken by a freely moving camera, without knowledge of the camera parameters, which provides a high degree of flexibility. We focus on quickly merging point clouds of the object from depth map sequences. The whole system combines algorithms from different areas of computer vision, such as camera calibration, stereo correspondence, point cloud splicing and surface reconstruction, and the procedure of 3D reconstruction is decomposed into a number of successive steps. First, image sequences are captured by the camera moving freely around the object. Second, pairwise matching is performed with the Scale Invariant Feature Transform (SIFT) algorithm: an initial matching is made for the first two images of the sequence, and for each subsequent image, processed together with the previous one, the points of interest corresponding to those in the previous images are refined or corrected, and the vertical parallax between the images is eliminated. The next step is camera calibration, in which the intrinsic and external parameters of the camera, and hence its relative position and orientation, are calculated. A sequence of depth maps is then acquired by using a non-local cost aggregation method for stereo matching, and a point cloud sequence is derived from the scene depths; the point cloud model is assembled from this sequence using the external parameters of the camera. The point cloud model is then approximated by a triangular wire-frame mesh to reduce geometric complexity and to tailor the model to the requirements of computer graphics visualization systems. Finally, the texture is mapped onto the wire-frame model, which can also be used for 3D display. According to the experimental results, we can reconstruct a 3D point cloud model more quickly and efficiently than other methods.
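
    Converting a depth map into a point cloud with the camera intrinsics, one of the steps above, can be sketched as back-projection through the pinhole model (the intrinsics below are illustrative, not the paper's calibration):

```python
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project a depth map (metres) to camera-frame 3D points
    using the pinhole model: X = (u - cx) Z / fx, Y = (v - cy) Z / fy."""
    v, u = np.indices(depth.shape)
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)

# A flat wall 2 m away seen by a 640x480 camera
depth = np.full((480, 640), 2.0)
pts = depth_to_points(depth, fx=500.0, fy=500.0, cx=320.0, cy=240.0)
print(pts.shape)
```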

  8. Automated curved planar reformation of 3D spine images

    NASA Astrophysics Data System (ADS)

    Vrtovec, Tomaz; Likar, Bostjan; Pernus, Franjo

    2005-10-01

    Traditional techniques for visualizing anatomical structures are based on planar cross-sections from volume images, such as images obtained by computed tomography (CT) or magnetic resonance imaging (MRI). However, planar cross-sections taken in the coordinate system of the 3D image often do not provide diagnostic information that is sufficient or of adequate quality, because planar cross-sections cannot follow curved anatomical structures (e.g. arteries, colon, spine); not all of the important details can therefore be shown simultaneously in any single planar cross-section. To overcome this problem, reformatted images must be created in the coordinate system of the inspected structure. This operation is usually referred to as curved planar reformation (CPR). In this paper we propose an automated method for CPR of 3D spine images, which is based on an image transformation from the standard image-based coordinate system to a novel spine-based coordinate system. The axes of the proposed spine-based coordinate system are determined by the curve that represents the vertebral column and by the rotation of the vertebrae around the spine curve, both of which are described by polynomial models. The optimal polynomial parameters are obtained in an image-analysis-based optimization framework. The proposed method was qualitatively and quantitatively evaluated on five CT spine images. The method performed well on both normal and pathological cases and was consistent with manually obtained ground truth data. The proposed spine-based CPR trades reduced structural complexity for improved feature perception of the spine. The reformatted images are diagnostically valuable and enable easier navigation, manipulation and orientation in 3D space. Moreover, reformatted images may prove useful for segmentation and other image analysis tasks.
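
    The polynomial model of the spine curve can be illustrated by fitting each coordinate of a set of ordered vertebral centre points as a polynomial in a parameter running along the column (synthetic points, not the paper's data or its optimization framework):

```python
import numpy as np

def fit_spine_curve(points, degree=3):
    """Fit x(t), y(t), z(t) polynomials to ordered 3D centreline points,
    with t the normalized position along the column; returns a sampler."""
    points = np.asarray(points, dtype=float)
    t = np.linspace(0.0, 1.0, len(points))
    coeffs = [np.polyfit(t, points[:, i], degree) for i in range(3)]
    return lambda s: np.stack([np.polyval(c, s) for c in coeffs], axis=-1)

# Synthetic centres of a gently curved column
t = np.linspace(0, 1, 12)
centres = np.stack([10 * t**2, 5 * t, 100 * t], axis=1)
curve = fit_spine_curve(centres)
print(np.allclose(curve(t), centres, atol=1e-6))
```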

  9. Practical pseudo-3D registration for large tomographic images

    NASA Astrophysics Data System (ADS)

    Liu, Xuan; Laperre, Kjell; Sasov, Alexander

    2014-09-01

    Image registration is a powerful tool in various tomographic applications. Our main focus is on microCT applications, in which samples or animals can be scanned multiple times under different conditions or at different time points. For this purpose, a registration tool capable of handling fairly large volumes has been developed, using a novel pseudo-3D method to achieve fast and interactive registration with simultaneous 3D visualization. To reduce the computational complexity of 3D registration, we decompose it into several 2D registrations, which are applied to the orthogonal views (transaxial, sagittal and coronal) sequentially and iteratively. After registration in each view, the next view is retrieved with the new transformation matrix for registration. This reduces the computational complexity significantly: for a rigid transform, we need to search for only 3 parameters (2 shifts, 1 rotation) in each of the 3 orthogonal views, instead of 6 (3 shifts, 3 rotations) for the full 3D volume. In addition, the number of voxels involved is also significantly reduced. For the proposed pseudo-3D method, image-based registration is employed, with the Sum of Squared Differences (SSD) as the similarity measure; the search engine is Powell's conjugate direction method. In this paper, only rigid transforms are used; however, the method can be extended to affine transforms by adding scaling and possibly shearing to the transform model. We have noticed that more information can be used in the 2D registration if Maximum Intensity Projections (MIP) or Parallel Projections (PP) are used instead of the orthogonal views. Also, other similarity measures, such as covariance or mutual information, can easily be incorporated. An initial evaluation on microCT data shows very promising results. Two application examples are shown: dental samples before and after treatment, and structural changes in materials before and after compression. The registration accuracy of the pseudo-3D method has also been evaluated against a true 3D method.
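
    The per-view 2D registration with an SSD similarity measure can be illustrated with a brute-force search over integer shifts (the paper uses Powell's conjugate direction method; the exhaustive search below is just a compact stand-in for the same SSD criterion):

```python
import numpy as np

def best_shift_ssd(fixed, moving, max_shift=5):
    """Find the integer (dy, dx) shift minimizing the Sum of Squared
    Differences between two 2D images (wrap-around shifts for brevity)."""
    best, best_ssd = (0, 0), np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(moving, dy, axis=0), dx, axis=1)
            ssd = np.sum((fixed - shifted) ** 2)
            if ssd < best_ssd:
                best_ssd, best = ssd, (dy, dx)
    return best

# A test pattern and a copy displaced by (2, -3)
rng = np.random.default_rng(1)
fixed = rng.random((64, 64))
moving = np.roll(np.roll(fixed, -2, axis=0), 3, axis=1)
print(best_shift_ssd(fixed, moving))
```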

  10. Optimizing 3D image quality and performance for stereoscopic gaming

    NASA Astrophysics Data System (ADS)

    Flack, Julien; Sanderson, Hugh; Pegg, Steven; Kwok, Simon; Paterson, Daniel

    2009-02-01

    The successful introduction of stereoscopic TV systems, such as Samsung's 3D Ready Plasma, requires high-quality 3D content to be commercially available to the consumer. Console and PC games provide the most readily accessible source of high-quality 3D content. This paper describes innovative developments in a generic, PC-based game driver architecture that addresses the two key issues affecting 3D gaming: quality and speed. At the heart of the quality issue are the same considerations that studios face when producing stereoscopic renders of CG movies: how best to perform the mapping from a geometric CG environment into the stereoscopic display volume. The major difference is that for game drivers this mapping cannot be choreographed by hand but must be calculated automatically in real time without significant impact on performance. Performance is a critical issue in gaming: stereoscopic gaming has traditionally meant rendering the scene twice, with the associated performance overhead. An alternative approach is to render the scene from one virtual camera position and use information from the z-buffer to generate a stereo pair using Depth-Image-Based Rendering (DIBR). We analyze this trade-off in detail and provide results relating to both 3D image quality and render performance.
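
    The DIBR idea described above, synthesizing the second eye's view by shifting pixels according to z-buffer depth, can be sketched as a toy forward warp (real drivers must additionally resolve occlusions by depth ordering and fill the disoccluded holes):

```python
import numpy as np

def dibr_other_view(image, depth, focal, baseline):
    """Forward-warp a view horizontally by per-pixel disparity
    d = focal * baseline / depth to synthesize the other eye's view."""
    h, w = depth.shape
    out = np.zeros_like(image)
    disparity = np.round(focal * baseline / depth).astype(int)
    for y in range(h):
        for x in range(w):
            nx = x - disparity[y, x]
            if 0 <= nx < w:
                out[y, nx] = image[y, x]  # last write wins; real DIBR
                                          # resolves occlusion by depth
    return out

# Flat scene at constant depth: the whole image shifts uniformly
img = np.arange(16.0).reshape(4, 4)
depth = np.full((4, 4), 100.0)
right = dibr_other_view(img, depth, focal=200.0, baseline=1.0)
print(right[0])
```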

  11. 3D wavefront image formation for NIITEK GPR

    NASA Astrophysics Data System (ADS)

    Soumekh, Mehrdad; Ton, Tuan; Howard, Pete

    2009-05-01

    The U.S. Department of Defense Humanitarian Demining (HD) Research and Development Program focuses on developing, testing, demonstrating, and validating new technology for immediate use in humanitarian demining operations around the globe. Beginning in the late 1990s, the U.S. Army Countermine Division funded the development of the NIITEK ground penetrating radar (GPR) for the detection of anti-tank (AT) landmines. This work is concerned with signal processing algorithms to suppress sources of artifacts in the NIITEK GPR, and with the formation of three-dimensional (3D) imagery from the resultant data. We first show that the NIITEK GPR data correspond to a 3D Synthetic Aperture Radar (SAR) database. An adaptive filtering method is utilized to suppress the ground return and the self-induced resonance (SIR) signals that are generated by the interaction of the radar-carrying platform and the transmitted radar signal. We examine signal processing methods to improve the fidelity of imagery for this 3D SAR system, using pre-processing methods that suppress Doppler aliasing as well as other sidelobe leakage artifacts introduced by the radar radiation pattern. The algorithm, known as digital spotlighting, imposes a filtering scheme on the azimuth-compressed SAR data and manipulates the resultant spectral data to achieve a higher effective PRF, suppressing the Doppler aliasing. We also present the 3D version of the Fourier-based wavefront reconstruction, a computationally efficient and approximation-free SAR imaging method, for image formation with the NIITEK 3D SAR database.

  12. Gastric Contraction Imaging System Using a 3-D Endoscope

    PubMed Central

    Yamada, Kenji; Watabe, Kenji; Takeda, Maki; Nishimura, Takahiro; Kido, Michiko; Nagakura, Toshiaki; Takahashi, Hideya; Nishida, Tsutomu; Iijima, Hideki; Tsujii, Masahiko; Takehara, Tetsuo; Ohno, Yuko

    2014-01-01

    This paper presents a gastric contraction imaging system for the assessment of gastric motility using a 3-D endoscope. Gastrointestinal diseases are mainly based on morphological abnormalities. However, gastrointestinal symptoms are sometimes apparent without visible abnormalities. One of the major factors in these diseases is abnormal gastrointestinal motility, so a gastric motility imaging system is needed for its assessment. To assess the dynamic motility of the stomach, the proposed system measures 3-D gastric contractions derived from a 3-D profile of the stomach wall obtained with a newly developed 3-D endoscope. After obtaining the contraction waves, their frequency, amplitude, and speed of propagation can be calculated using a Gaussian function. The proposed system was evaluated by 3-D measurements of several objects with known geometries. The results showed that surface profiles could be obtained with an error of <10% of the distance between two different points on the images. Subsequently, we evaluated the validity of a prototype system using a simulated wave model. In the experiment, the amplitude and position of the waves could be measured with 1-mm accuracy. These results suggest that the proposed system can measure the speed and amplitude of contractions. The system has low invasiveness and can assess the motility of the stomach wall directly in a 3-D manner. Our method can be used for the examination of gastric morphological and functional abnormalities.

  13. Retinal imaging using adaptive optics technology☆

    PubMed Central

    Kozak, Igor

    2014-01-01

    Adaptive optics (AO) is a technology used to improve the performance of optical systems by reducing the effect of wavefront distortions. Retinal imaging using AO aims to compensate for higher-order aberrations originating from the cornea and the lens by using a deformable mirror. The main application of AO retinal imaging has been to assess photoreceptor cell density, spacing, and mosaic regularity in normal and diseased eyes. Apart from photoreceptors, the retinal pigment epithelium, retinal nerve fiber layer, retinal vessel wall and lamina cribrosa can also be visualized with AO technology. Recent interest in AO technology in eye research has resulted in a growing number of reports and publications utilizing this technology in both animals and humans. With the availability of the first commercial instruments, AO technology is being transformed from a research tool into a diagnostic instrument. The current challenges include imaging eyes with less-than-perfect optical media, the formation of normative databases for acquired images such as cone mosaics, and the cost of the technology. The opportunities for AO will include more detailed diagnosis, with descriptions of new findings in retinal diseases and glaucoma, as well as the expansion of AO into clinical trials, which has already started. PMID:24843304

  14. Retinal imaging using adaptive optics technology.

    PubMed

    Kozak, Igor

    2014-04-01

    Adaptive optics (AO) is a technology used to improve the performance of optical systems by reducing the effect of wavefront distortions. Retinal imaging using AO aims to compensate for higher-order aberrations originating from the cornea and the lens by using a deformable mirror. The main application of AO retinal imaging has been to assess photoreceptor cell density, spacing, and mosaic regularity in normal and diseased eyes. Apart from photoreceptors, the retinal pigment epithelium, retinal nerve fiber layer, retinal vessel wall and lamina cribrosa can also be visualized with AO technology. Recent interest in AO technology in eye research has resulted in a growing number of reports and publications utilizing this technology in both animals and humans. With the availability of the first commercial instruments, AO technology is being transformed from a research tool into a diagnostic instrument. The current challenges include imaging eyes with less-than-perfect optical media, the formation of normative databases for acquired images such as cone mosaics, and the cost of the technology. The opportunities for AO will include more detailed diagnosis, with descriptions of new findings in retinal diseases and glaucoma, as well as the expansion of AO into clinical trials, which has already started. PMID:24843304

  15. An automated 3D reconstruction method of UAV images

    NASA Astrophysics Data System (ADS)

    Liu, Jun; Wang, He; Liu, Xiaoyang; Li, Feng; Sun, Guangtong; Song, Ping

    2015-10-01

    In this paper a novel, fully automated 3D reconstruction approach based on images from a low-altitude unmanned aerial vehicle (UAV) system is presented, which requires neither prior camera calibration nor any other external knowledge. Dense 3D point clouds are generated by integrating orderly feature extraction, image matching, structure from motion (SfM) and multi-view stereo (MVS) algorithms, overcoming many of the cost and time limitations of rigorous photogrammetry techniques. An image topology analysis strategy is introduced to speed up large-scene reconstruction by taking advantage of the flight-control data acquired by the UAV: the image topology map can significantly reduce the running time of feature matching by limiting the combinations of images. A high-resolution digital surface model of the study area is produced from the UAV point clouds by constructing a triangulated irregular network. Experimental results show that the proposed approach is robust and feasible for automatic 3D reconstruction from low-altitude UAV images, and has great potential for the acquisition of spatial information for large-scale mapping; it is especially suitable for rapid response and precise modelling in disaster emergencies.
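
    The idea of limiting candidate image pairs with flight-control data can be sketched by keeping only pairs whose recorded camera positions lie within some distance of each other; the station coordinates and the threshold below are illustrative assumptions, not the paper's topology criterion.

```python
import numpy as np
from itertools import combinations

def candidate_pairs(positions, max_dist):
    """Return index pairs of images whose camera positions (e.g. from
    flight-control logs) are closer than max_dist, pruning the
    O(n^2) matching down to likely-overlapping pairs."""
    positions = np.asarray(positions, dtype=float)
    return [(i, j) for i, j in combinations(range(len(positions)), 2)
            if np.linalg.norm(positions[i] - positions[j]) < max_dist]

# Four camera stations along a flight line, 30 m apart, at 50 m altitude
stations = [(0, 0, 50), (30, 0, 50), (60, 0, 50), (90, 0, 50)]
print(candidate_pairs(stations, max_dist=40.0))
```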

  16. Refraction correction in 3D transcranial ultrasound imaging.

    PubMed

    Lindsey, Brooks D; Smith, Stephen W

    2014-01-01

    We present the first correction of refraction in three-dimensional (3D) ultrasound imaging using an iterative approach that traces propagation paths through a two-layer planar tissue model, applying Snell's law in 3D. This approach is applied to real-time 3D transcranial ultrasound imaging by precomputing delays offline for several skull thicknesses, allowing the user to switch between three sets of delays for phased array imaging at the push of a button. Simulations indicate that refraction correction may be expected to increase sensitivity, reduce beam steering errors, and partially restore lost spatial resolution, with the greatest improvements occurring at the largest steering angles. Distorted images of cylindrical lesions were created by imaging through an acrylic plate in a tissue-mimicking phantom. As a result of correcting for refraction, lesions were restored to 93.6% of their original diameter in the lateral direction and 98.1% of their original shape along the long axis of the cylinders. In imaging two healthy volunteers, the mean brightness increased by 8.3% and showed no spatial dependency. PMID:24275538
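
    Snell's law in 3D vector form, the core of the ray tracing described, can be sketched as follows: given a unit incident direction d, a unit surface normal n, and the index ratio η = n1/n2, the transmitted direction is t = η d + (η cos θi − cos θt) n. This is the standard vector refraction formula, not code from the paper.

```python
import numpy as np

def refract(d, n, eta):
    """Refract unit direction d at a surface with unit normal n (pointing
    against d); eta = n1 / n2. Returns None on total internal reflection."""
    d = np.asarray(d, dtype=float)
    n = np.asarray(n, dtype=float)
    cos_i = -np.dot(d, n)
    sin2_t = eta**2 * (1.0 - cos_i**2)    # Snell: sin(t) = eta * sin(i)
    if sin2_t > 1.0:
        return None                       # total internal reflection
    cos_t = np.sqrt(1.0 - sin2_t)
    return eta * d + (eta * cos_i - cos_t) * n

# Normal incidence is undeviated regardless of the index ratio
print(refract([0.0, 0.0, 1.0], [0.0, 0.0, -1.0], eta=0.5))
```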

  17. Deblurring of tomosynthesis images using 3D anisotropic diffusion filtering

    NASA Astrophysics Data System (ADS)

    Sun, Xuejun; Land, Walker; Samala, Ravi

    2007-03-01

    Breast tomosynthesis is an emerging state-of-the-art three-dimensional (3D) imaging technology that shows significant early promise in screening and diagnosing breast cancer. However, this kind of image exhibits significant out-of-plane artifacts due to the limited-angle nature of the tomography, which degrade image quality and can hinder interpretation. In this paper, we develop a robust deblurring method to remove or suppress blurry artifacts by applying 3D nonlinear anisotropic diffusion filtering. The differential equation of 3D anisotropic diffusion is discretized using explicit and implicit numerical methods, respectively, combined with boundary conditions of the first kind (fixed grey value) and second kind (adiabatic), under a ten-nearest-neighbour grid configuration of the finite difference scheme. The discretized diffusion equation is applied to the breast volume reconstructed from the entire set of tomosynthetic breast images. The proposed diffusion filtering method is evaluated qualitatively and quantitatively on clinical tomosynthesis images. Results indicate that the proposed method is very effective in suppressing blurry artifacts, and that the implicit numerical algorithm with the fixed-value boundary condition performs best in enhancing the contrast of the tomosynthesis image, demonstrating the effectiveness of the proposed filtering in deblurring out-of-plane artifacts.
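
    An explicit discretization of nonlinear anisotropic (Perona-Malik type) diffusion in 3D can be sketched with a 6-neighbour scheme and adiabatic (zero-flux) borders; the conduction function and parameters are illustrative, not the paper's exact ten-neighbour scheme:

```python
import numpy as np

def diffuse_3d(vol, iters=10, dt=1.0 / 7.0, kappa=30.0):
    """Explicit Perona-Malik diffusion on a 3D volume: flux between
    6-neighbours weighted by g = exp(-(dI/kappa)^2); large differences
    (edges) conduct less, so edges are preserved while flat regions smooth."""
    v = vol.astype(float).copy()
    for _ in range(iters):
        div = np.zeros_like(v)
        for axis in range(3):
            d = np.diff(v, axis=axis)                 # v[i+1] - v[i]
            flux = np.exp(-(d / kappa) ** 2) * d      # conduction-weighted
            pad_hi = [(0, 0)] * 3
            pad_hi[axis] = (0, 1)
            pad_lo = [(0, 0)] * 3
            pad_lo[axis] = (1, 0)
            # discrete divergence; zero padding = zero-flux (adiabatic) border
            div += np.pad(flux, pad_hi) - np.pad(flux, pad_lo)
        v += dt * div                                 # dt < 1/6 for stability
    return v

# Smoothing a noisy volume reduces its variance while preserving the mean
rng = np.random.default_rng(0)
noisy = 100.0 + rng.normal(0.0, 5.0, (16, 16, 16))
smooth = diffuse_3d(noisy, iters=5)
print(smooth.std() < noisy.std())
```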

  18. Physical modeling of 3D and 4D laser imaging

    NASA Astrophysics Data System (ADS)

    Anna, Guillaume; Hamoir, Dominique; Hespel, Laurent; Lafay, Fabien; Rivière, Nicolas; Tanguy, Bernard

    2010-04-01

    Laser imaging offers potential for observation, for 3D terrain mapping and classification, and for target identification, including behind vegetation, camouflage or glass windows, by day and night and under all weather conditions. First-generation systems deliver 3D point clouds. Their threshold detection is strongly affected by the local opto-geometric characteristics of the objects, leading to inaccuracies in the measured distances, and by partial occlusion, leading to multiple echoes. Second-generation systems circumvent these limitations by recording the temporal waveforms received by the system, so that data processing can improve the telemetry and the point cloud can better match reality. Future algorithms may exploit the full potential of the 4D full-waveform data. Hence, being able to simulate point-cloud (3D) and full-waveform (4D) laser imaging is key. We have developed a numerical model for predicting the output data of 3D or 4D laser imagers. The model accounts for the temporal and transverse characteristics of the laser pulse (i.e. of the "laser bullet") emitted by the system, its propagation through a turbulent and scattering atmosphere, its interaction with the objects present in the field of view, and the characteristics of the optoelectronic reception path of the system.

  19. 3-D segmentation of human sternum in lung MDCT images.

    PubMed

    Pazokifard, Banafsheh; Sowmya, Arcot

    2013-01-01

    A fully automatic novel algorithm is presented for accurate 3-D segmentation of the human sternum in lung multi-detector computed tomography (MDCT) images. The segmentation result is refined by employing active contours to remove calcified costal cartilage attached to the sternum. For each dataset, the costal notches (sternocostal joints) are localized in 3-D using a sternum mask and the positions of the costal notches on it as reference. The proposed algorithm for sternum segmentation was tested on 16 complete lung MDCT datasets; comparison of the segmentation results with reference delineations provided by a radiologist shows high sensitivity (92.49%) and specificity (99.51%) and a small mean distance (dmean=1.07 mm). The overall average Euclidean distance error for costal-notch positioning in 3-D is 4.2 mm. PMID:24110446
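
    The overlap metrics reported above can be computed from binary masks; a minimal sketch (the mask arrays are hypothetical inputs, not the study's data):

```python
import numpy as np

def overlap_metrics(seg, ref):
    """Sensitivity, specificity, and Dice between two binary masks."""
    seg, ref = seg.astype(bool), ref.astype(bool)
    tp = np.sum(seg & ref)    # voxels correctly labelled sternum
    tn = np.sum(~seg & ~ref)  # voxels correctly labelled background
    fp = np.sum(seg & ~ref)
    fn = np.sum(~seg & ref)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    dice = 2 * tp / (2 * tp + fp + fn)
    return sensitivity, specificity, dice
```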

  20. 1024 pixels single photon imaging array for 3D ranging

    NASA Astrophysics Data System (ADS)

    Bellisai, S.; Guerrieri, F.; Tisa, S.; Zappa, F.; Tosi, A.; Giudice, A.

    2011-01-01

    Three-dimensional (3D) acquisition systems are driving applications in many research fields. Nowadays 3D acquisition systems are used in many applications, such as the cinema industry and automotive active-safety systems. Depending on the application, systems offer different features, for example color sensitivity, two-dimensional image resolution, distance-measurement accuracy and acquisition frame rate. The system we developed acquires 3D movies using indirect time of flight (iTOF), starting from the phase-delay measurement of sinusoidally modulated light. The system acquires live movies with a frame rate of up to 50 frames/s at distances ranging from 10 cm up to 7.5 m.
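
    The phase-to-distance relation behind iTOF can be sketched as follows. The 20 MHz modulation frequency in the test is an assumption, chosen because it yields an unambiguous range consistent with the 7.5 m maximum reported above:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def itof_distance(phase_rad, f_mod_hz):
    """Distance from the measured phase delay of sinusoidally modulated light.

    The round trip delays the modulation by phi = 4*pi*f*d/c,
    hence d = c*phi / (4*pi*f).
    """
    return C * phase_rad / (4.0 * math.pi * f_mod_hz)

def unambiguous_range(f_mod_hz):
    """Maximum distance before the phase wraps (2*pi ambiguity)."""
    return C / (2.0 * f_mod_hz)
```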

  1. Superpixel-Based Segmentation for 3D Prostate MR Images.

    PubMed

    Tian, Zhiqiang; Liu, Lizhi; Zhang, Zhenfeng; Fei, Baowei

    2016-03-01

    This paper proposes a method for segmenting the prostate on magnetic resonance (MR) images. A superpixel-based 3D graph cut algorithm is proposed to obtain the prostate surface. Instead of pixels, superpixels are used as the basic processing units to construct a 3D superpixel-based graph. The superpixels are labeled as prostate or background by minimizing an energy function using graph cut on the 3D superpixel-based graph. To construct the energy function, we propose a superpixel-based shape data term, an appearance data term, and two superpixel-based smoothness terms. These superpixel-based terms make the segmentation of the prostate effective and robust. The graph-cut segmentation result is used to initialize a 3D active contour model, overcoming a drawback of graph cut. The result of the 3D active contour model is then used to update the shape and appearance models of the graph cut. Iterating the 3D graph cut and the 3D active contour model makes it possible to escape local minima and obtain a smooth prostate surface. On our 43 MR volumes, the proposed method yields a mean Dice ratio of 89.3±1.9%. On the PROMISE12 test data set, our method ranked second, with a mean Dice ratio of 87.0±3.2%. The experimental results show that the proposed method outperforms several state-of-the-art prostate MRI segmentation methods. PMID:26540678

  2. 3D robust digital image correlation for vibration measurement.

    PubMed

    Chen, Zhong; Zhang, Xianmin; Fatikow, Sergej

    2016-03-01

    Discrepancies between speckle images acquired from different viewing angles during dynamic measurement deteriorate the correspondence in 3D digital image correlation (3D-DIC) for vibration measurement. To address this bottleneck, this paper presents two robust 3D-DIC methods for vibration measurement, SSD-robust and SWD-robust, which use a sum-of-squared-differences (SSD) estimator plus a Geman-McClure regulating term and a Welch estimator plus a Geman-McClure regulating term, respectively. Because the regulating term, with an adaptive rejecting bound, lessens the influence of abnormal pixel data in the dynamic measurement process, the robustness of the algorithm is enhanced. Robustness and precision were evaluated in experiments using a dual-frequency laser interferometer. The experimental results indicate that the two robust estimators suppress the effects of abnormalities in the speckle images while maintaining higher precision in vibration measurement than the traditional SSD method; the SWD-robust and SSD-robust methods are suitable for weak and strong image noise, respectively. PMID:26974624
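
    The idea of bounding the influence of abnormal pixels can be illustrated with a Geman-McClure-weighted dissimilarity. This is a sketch of the general robust-estimator idea, not the paper's exact SSD-robust or SWD-robust formulation:

```python
import numpy as np

def geman_mcclure_cost(f, g, sigma):
    """Robust dissimilarity between two speckle subsets f and g.

    Plain SSD weights every residual quadratically, so one decorrelated
    pixel can dominate the match. The Geman-McClure influence function
    saturates for large residuals: each pixel contributes at most 1,
    however abnormal it is.
    """
    r = f.astype(float) - g.astype(float)
    return np.sum(r**2 / (sigma**2 + r**2))
```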

  3. Retinal fundus imaging in mouse models of retinal diseases.

    PubMed

    Alex, Anne F; Heiduschka, Peter; Eter, Nicole

    2013-01-01

    The development of in vivo retinal fundus imaging in mice has opened a new research horizon, and not only in ophthalmic research. The ability to monitor the dynamics of vascular and cellular changes in pathological conditions, such as neovascularization or degeneration, longitudinally and without the need to sacrifice the mouse permits longer observation periods in the same animal. The application of high-resolution confocal scanning laser ophthalmoscopy to experimental mouse models provides access to a large spectrum of imaging modalities in vivo. PMID:23150359

  4. Advances in retinal ganglion cell imaging.

    PubMed

    Balendra, S I; Normando, E M; Bloom, P A; Cordeiro, M F

    2015-10-01

    Glaucoma is one of the leading causes of blindness worldwide and will affect an estimated 79.6 million people worldwide by 2020. It is caused by the progressive loss of retinal ganglion cells (RGCs), predominantly via apoptosis, within the retinal nerve fibre layer, and the corresponding loss of axons of the optic nerve head. One of its most devastating features is its late diagnosis and the resulting irreversible visual loss, which is often preventable. Current diagnostic tools require significant RGC or functional visual-field loss before the threshold for detection of glaucoma is reached. To improve the efficacy of therapeutics in glaucoma, an earlier diagnostic tool is required. Recent advances in retinal imaging, including optical coherence tomography, confocal scanning laser ophthalmoscopy, and adaptive optics, have propelled both glaucoma research and clinical diagnostics and therapeutics. However, an ideal imaging technique to diagnose and monitor glaucoma would image RGCs non-invasively in vivo with high specificity and sensitivity. It might confirm the presence of healthy RGCs, for example in transgenic models or with retrograde labelling, or detect subtle changes in the number of unhealthy or apoptotic RGCs, such as by detection of apoptosing retinal cells (DARC). Although many of these advances have not yet been introduced into the clinical arena, their successes in animal studies are compelling. This review illustrates the challenges of imaging RGCs, the main retinal imaging modalities, the in vivo techniques that augment these as specific RGC-imaging tools, and their potential for translation to the glaucoma clinic. PMID:26293138

  5. 3D sound and 3D image interactions: a review of audio-visual depth perception

    NASA Astrophysics Data System (ADS)

    Berry, Jonathan S.; Roberts, David A. T.; Holliman, Nicolas S.

    2014-02-01

    There has been much research concerning visual depth perception in 3D stereoscopic displays and, to a lesser extent, auditory depth perception in 3D spatial sound systems. With 3D sound systems now available in a number of different forms, there is increasing interest in the integration of 3D sound systems with 3D displays. It therefore seems timely to review key concepts and results concerning depth perception in such display systems. We first present overviews of both visual and auditory depth perception, before focussing on cross-modal effects in audio-visual depth perception, which may be of direct interest to display and content designers.

  6. Interactive 2D to 3D stereoscopic image synthesis

    NASA Astrophysics Data System (ADS)

    Feldman, Mark H.; Lipton, Lenny

    2005-03-01

    Advances in stereoscopic display technologies, graphics card devices, and digital imaging algorithms have opened up new possibilities for synthesizing stereoscopic images. The power of today's DirectX/OpenGL-optimized graphics cards, together with the new and creative imaging tools found in software products such as Adobe Photoshop, provides a powerful environment for converting planar drawings and photographs into stereoscopic images. That creative process is the focus of this paper, which presents a novel technique using advanced imaging features and custom Windows-based software built on the DirectX 9 API to provide the user with an interactive stereo image synthesizer. By creating an accurate, interactive world scene with movable, flexible, depth-map-altered textured surfaces and perspective stereoscopic cameras with visible frustums and zero-parallax planes, a user can precisely model a virtual three-dimensional representation of a real-world scene. Current versions of Adobe Photoshop provide a creative user with a rich assortment of tools for highlighting elements of a 2D image, simulating hidden areas, and creatively shaping them for a 3D scene representation. The technique described has been implemented as a Photoshop plug-in and thus allows a seamless transition of these 2D image elements into 3D surfaces, which are subsequently rendered to create stereoscopic views.
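
    The core of such depth-map-driven view synthesis is horizontal parallax shifting. Below is a deliberately simplified forward-warping sketch; the real plug-in's rendering pipeline is more involved, and hole filling is omitted here:

```python
import numpy as np

def synthesize_view(image, depth, max_disparity):
    """Create one eye's view by shifting pixels horizontally per depth.

    depth in [0, 1]: 0 = zero-parallax plane, 1 = nearest to the viewer.
    Simple forward warping; disoccluded holes are left as zeros (a real
    2D-to-3D converter would inpaint or simulate those hidden areas).
    """
    h, w = depth.shape
    out = np.zeros_like(image)
    disp = np.round(depth * max_disparity).astype(int)
    for y in range(h):
        for x in range(w):
            xs = x + disp[y, x]
            if 0 <= xs < w:
                out[y, xs] = image[y, x]
    return out
```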

  7. Large distance 3D imaging of hidden objects

    NASA Astrophysics Data System (ADS)

    Rozban, Daniel; Aharon Akram, Avihai; Kopeika, N. S.; Abramovich, A.; Levanon, Assaf

    2014-06-01

    Imaging systems operating at millimeter waves (MMW) are required for applications in medicine, communications, homeland security, and space technology, because there is no known ionization hazard for biological tissue and atmospheric attenuation in this range of the spectrum is low compared to that of infrared and optical rays. The lack of an inexpensive room-temperature detector has made it difficult to provide a suitable real-time implementation for the above applications. A 3D MMW imaging system based on chirp radar was studied previously using a scanning imaging system with a single detector. The system presented here employs the chirp-radar method with a focal plane array (FPA) of Glow Discharge Detectors (GDDs, plasma-based detectors) using heterodyne detection. The intensity at each pixel of the GDD FPA yields the usual 2D image, while the intermediate-frequency (IF) value at each pixel yields the range information, enabling 3D MMW imaging. In this work we experimentally demonstrate the feasibility of an imaging system based on radar principles and an FPA of inexpensive detectors. The system is shown to be capable of imaging objects at distances of at least 10 meters.
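
    The way the IF (beat) frequency encodes range in a chirp radar can be sketched as follows; the sweep parameters in the test are hypothetical, not the system's actual values:

```python
C = 299_792_458.0  # speed of light, m/s

def chirp_range(f_if_hz, sweep_bw_hz, sweep_time_s):
    """Target range from the intermediate (beat) frequency of a chirp radar.

    The echo returns delayed by tau = 2R/c. Mixing it with the transmitted
    chirp of slope B/T produces a beat frequency f_if = (B/T) * tau,
    hence R = c * f_if * T / (2 * B).
    """
    return C * f_if_hz * sweep_time_s / (2.0 * sweep_bw_hz)
```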

  8. Automated reconstruction of 3D scenes from sequences of images

    NASA Astrophysics Data System (ADS)

    Pollefeys, M.; Koch, R.; Vergauwen, M.; Van Gool, L.

    Modelling of 3D objects from image sequences is a challenging problem and has been an important research topic in the areas of photogrammetry and computer vision for many years. In this paper, a system is presented which automatically extracts a textured 3D surface model from a sequence of images of a scene. The system can deal with unknown camera settings. In addition, the parameters of this camera are allowed to change during acquisition (e.g., by zooming or focusing). No prior knowledge about the scene is necessary to build the 3D models. Therefore, this system offers a high degree of flexibility. The system is based on state-of-the-art algorithms recently developed in computer vision. The 3D modelling task is decomposed into a number of successive steps. Gradually, more knowledge of the scene and the camera setup is retrieved. At this point, the obtained accuracy is not yet at the level required for most metrology applications, but the visual quality is very convincing. This system has been applied to a number of applications in archaeology. The Roman site of Sagalassos (southwest Turkey) was used as a test case to illustrate the potential of this new approach.

  9. 3D imaging of fetus vertebra by synchrotron radiation microtomography

    NASA Astrophysics Data System (ADS)

    Peyrin, Francoise; Pateyron-Salome, Murielle; Denis, Frederic; Braillon, Pierre; Laval-Jeantet, Anne-Marie; Cloetens, Peter

    1997-10-01

    A synchrotron radiation computed microtomography system allowing high-resolution 3D imaging of bone samples has been developed at the ESRF. The system uses a high-resolution 2D detector based on a CCD camera coupled to a fluorescent screen through light optics. The spatial resolution of the device is particularly well adapted to imaging bone structure. To study growth, vertebra samples from fetuses of different gestational ages were imaged. The first results show that fetal vertebra is quite different from adult bone in terms of both density and organization.

  10. 3D printed biomimetic vascular phantoms for assessment of hyperspectral imaging systems

    NASA Astrophysics Data System (ADS)

    Wang, Jianting; Ghassemi, Pejhman; Melchiorri, Anthony; Ramella-Roman, Jessica; Mathews, Scott A.; Coburn, James; Sorg, Brian; Chen, Yu; Pfefer, Joshua

    2015-03-01

    The emerging technique of three-dimensional (3D) printing provides a revolutionary way to fabricate objects with biologically realistic geometries. Previously we have performed optical and morphological characterization of basic 3D printed tissue-simulating phantoms and found them suitable for use in evaluating biophotonic imaging systems. In this study we assess the potential for printing phantoms with irregular, image-defined vascular networks that can be used to provide clinically-relevant insights into device performance. A previously acquired fundus camera image of the human retina was segmented, embedded into a 3D matrix, edited to incorporate the tubular shape of vessels and converted into a digital format suitable for printing. A polymer with biologically realistic optical properties was identified by spectrophotometer measurements of several commercially available samples. Phantoms were printed with the retinal vascular network reproduced as ~1.0 mm diameter channels at a range of depths up to ~3 mm. The morphology of the printed vessels was verified by volumetric imaging with μ-CT. Channels were filled with hemoglobin solutions at controlled oxygenation levels, and the phantoms were imaged by a near-infrared hyperspectral reflectance imaging system. The effect of vessel depth on hemoglobin saturation estimates was studied. Additionally, a phantom incorporating the vascular network at two depths was printed and filled with hemoglobin solution at two different saturation levels. Overall, results indicated that 3D printed phantoms are useful for assessing biophotonic system performance and have the potential to form the basis of clinically-relevant standardized test methods for assessment of medical imaging modalities.

  11. Combined registration of 3D tibia and femur implant models in 3D magnetic resonance images

    NASA Astrophysics Data System (ADS)

    Englmeier, Karl-Hans; Siebert, Markus; von Eisenhart-Rothe, Ruediger; Graichen, Heiko

    2008-03-01

    The most frequent reasons for revision of total knee arthroplasty are loosening and abnormal axial alignment leading to unphysiological kinematics of the knee implant. To assess the postoperative kinematics of the implant, it is essential to determine the position and orientation of the tibial and femoral prostheses. We therefore developed a registration method for fitting 3D CAD models of knee-joint prostheses into a 3D MR image. This rigid registration is the basis for a quantitative analysis of the kinematics of knee implants. First, the surface data of the prosthesis models are converted into a voxel representation; a recursive algorithm determines all boundary voxels of the original triangular surface data. Second, the user performs a rough preconfiguration of the prosthesis models so that the subsequent fine-matching process has a reasonable starting point. An automated gradient-based fine-matching process then determines the best absolute position and orientation: this iterative process changes all six parameters (three rotational and three translational) of a model by a minimal amount until a maximum of the matching function is reached. To examine the spread of the final registration solutions, the interobserver variability was measured in a group of testers. This variability, calculated as the relative standard deviation, improved from about 50% (purely manual registration) to 0.5% (rough manual preconfiguration followed by automatic fine matching).

  12. Linear tracking for 3-D medical ultrasound imaging.

    PubMed

    Huang, Qing-Hua; Yang, Zhao; Hu, Wei; Jin, Lian-Wen; Wei, Gang; Li, Xuelong

    2013-12-01

    As clinical applications grow, 3-D ultrasound imaging is undergoing rapid technical development. Compared with 2-D ultrasound imaging, 3-D ultrasound imaging can provide improved qualitative and quantitative information for various clinical applications. In this paper, we propose a novel tracking method for a freehand 3-D ultrasound imaging system with improved portability, reduced degrees of freedom, and lower cost. We designed a sliding track with an attached linear position sensor that transmits positional data via a Bluetooth-based wireless communication module, resulting in a wireless spatial tracking modality. A traditional 2-D ultrasound probe fixed to the position sensor on the sliding track was used to obtain real-time B-scans, and the positions of the B-scans were acquired simultaneously as the probe was moved along the track in a freehand manner. In the experiments, the proposed method was applied to ultrasound phantoms and real human tissues. The results demonstrate that the new system outperformed a previously developed freehand system based on a traditional six-degree-of-freedom spatial sensor in phantom and in vivo studies, indicating its merit in clinical applications for human tissues and organs. PMID:23757592

  13. 3D imaging: how to achieve highest accuracy

    NASA Astrophysics Data System (ADS)

    Luhmann, Thomas

    2011-07-01

    The generation of 3D information from images is a key technology in many different areas, e.g. in 3D modeling and representation of architectural or heritage objects, in human body motion tracking and scanning, in 3D scene analysis of traffic scenes, in industrial applications and many more. The basic concepts rely on mathematical representations of central perspective viewing as they are widely known from photogrammetry and computer vision. The objectives of these methods range from high-precision, well-structured measurements in (industrial) photogrammetry to fully automated, non-structured applications in computer vision. Accuracy and precision are critical issues for the 3D measurement of industrial, engineering or medical objects. As state of the art, photogrammetric multi-view measurements achieve relative precisions on the order of 1:100000 to 1:200000, and relative accuracies with respect to traceable lengths on the order of 1:50000 to 1:100000 of the largest object diameter. To obtain these figures, a number of influencing parameters have to be optimized, among others: physical representation of the object surface (targets, texture), illumination and light sources, imaging sensors, cameras and lenses, calibration strategies (camera model), orientation strategies (bundle adjustment), image processing of homologous features (target measurement, stereo and multi-image matching), and representation of object or workpiece coordinate systems and object scale. The paper discusses these parameters and offers strategies for obtaining the highest accuracy in object space. Practical examples of high-quality stereo-camera measurements and multi-image applications are used to demonstrate the relevance of high accuracy in different applications, ranging from medical navigation to static and dynamic industrial measurements. In addition, standards for accuracy verification are presented and demonstrated by practical examples and tests.

  14. Method for extracting the aorta from 3D CT images

    NASA Astrophysics Data System (ADS)

    Taeprasartsit, Pinyo; Higgins, William E.

    2007-03-01

    Bronchoscopic biopsy of the central-chest lymph nodes is vital in the staging of lung cancer. Three-dimensional multi-detector CT (MDCT) images provide vivid anatomical detail for planning bronchoscopy. Unfortunately, many lymph nodes are situated close to the aorta, and an inadvertent needle biopsy could puncture the aorta, causing serious harm. As an eventual aid for more complete planning of lymph-node biopsy, it is important to define the aorta. This paper proposes a method for extracting the aorta from a 3D MDCT chest image. The method has two main phases: (1) Off-Line Model Construction, which provides a set of training cases for fitting new images, and (2) On-Line Aorta Construction, which is used for new incoming 3D MDCT images. Off-Line Model Construction is done once using several representative human MDCT images and consists of the following steps: construct a likelihood image, select control points of the medial axis of the aortic arch, and recompute the control points to obtain a constant-interval medial-axis model. On-Line Aorta Construction consists of the following operations: construct a likelihood image, perform global fitting of the precomputed models to the current case's likelihood image to find the best fitting model, perform local fitting to adjust the medial axis to local data variations, and employ a region recovery method to arrive at the complete constructed 3D aorta. The region recovery method consists of two steps: a model-based step and a region-growing step. The region-growing step can recover regions outside the model coverage and non-circular tube structures. In our experiments, we used three models and achieved satisfactory results on twelve of thirteen test cases.

  15. Phantom image results of an optimized full 3D USCT

    NASA Astrophysics Data System (ADS)

    Ruiter, Nicole V.; Zapf, Michael; Hopp, Torsten; Dapp, Robin; Gemmeke, Hartmut

    2012-03-01

    A promising candidate for improved imaging of breast cancer is ultrasound computer tomography (USCT). Current experimental USCT systems are still focused in the elevation dimension, resulting in a large slice thickness, limited depth of field, loss of out-of-plane reflections, and a large number of movement steps to acquire a stack of images. A 3D USCT emitting and receiving spherical wave fronts overcomes these limitations. We built an optimized 3D USCT with a nearly isotropic 3D point spread function (PSF), realizing for the first time the full benefits of a 3D system. In this paper, results of the 3D PSF measured with a dedicated phantom and images acquired with a clinical breast phantom are presented. The point spread function was shown to be nearly isotropic in 3D, to have very low spatial variability, and to fit the predicted values. The contrast of the phantom images is very satisfactory despite imaging with a sparse aperture. The resolution and imaged details of the reflectivity reconstruction are comparable to a 3 Tesla MRI volume of the same breast phantom. Image quality and resolution are isotropic in all three dimensions, confirming the successful optimization experimentally.

  16. Image Appraisal for 2D and 3D Electromagnetic Inversion

    SciTech Connect

    Alumbaugh, D.L.; Newman, G.A.

    1999-01-28

    Linearized methods are presented for appraising image resolution and parameter accuracy in images generated with two- and three-dimensional nonlinear electromagnetic inversion schemes. When direct matrix inversion is employed, the model resolution and posterior model covariance matrices can be calculated directly. A method to examine how the horizontal and vertical resolution varies spatially within the electromagnetic property image is developed by examining the columns of the model resolution matrix. Plotting the square root of the diagonal of the model covariance matrix yields an estimate of how errors in the inversion process, such as data noise and incorrect a priori assumptions about the imaged model, map into parameter error. This type of image is shown to be useful in analyzing spatial variations in the image's sensitivity to the data. A method is analyzed for statistically estimating the model covariance matrix when the conjugate gradient method is employed rather than a direct inversion technique (for example, in 3D inversion). A method for calculating individual columns of the model resolution matrix using the conjugate gradient method is also developed. Examples of the image analysis techniques are provided for 2D and 3D synthetic cross-well EM data sets, as well as a field data set collected at the Lost Hills Oil Field in Central California.
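
    For a linearized problem solved with a damped least-squares inverse, the model resolution and posterior covariance matrices described above can be computed directly; a minimal sketch under that assumption (the damping form and noise model are illustrative, not the paper's exact regularization):

```python
import numpy as np

def resolution_and_covariance(G, lam, sigma_d):
    """Appraisal matrices for damped least squares m = (G'G + lam*I)^-1 G'd.

    R's columns/rows show how a point perturbation in the true model is
    smeared in the image; C maps data noise (std sigma_d) into parameter
    error, whose diagonal square root is the per-parameter uncertainty.
    """
    GtG = G.T @ G
    Ginv = np.linalg.inv(GtG + lam * np.eye(GtG.shape[0]))
    R = Ginv @ GtG                       # model resolution matrix
    C = sigma_d**2 * Ginv @ GtG @ Ginv   # posterior model covariance
    return R, C
```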

  17. Optimal point spread function design for 3D imaging.

    PubMed

    Shechtman, Yoav; Sahl, Steffen J; Backer, Adam S; Moerner, W E

    2014-09-26

    To extract from an image of a single nanoscale object maximum physical information about its position, we propose and demonstrate a framework for pupil-plane modulation for 3D imaging applications requiring precise localization, including single-particle tracking and superresolution microscopy. The method is based on maximizing the information content of the system, by formulating and solving the appropriate optimization problem: finding the pupil-plane phase pattern that would yield a point spread function (PSF) with optimal Fisher information properties. We use our method to generate and experimentally demonstrate two example PSFs: one optimized for 3D localization precision over a 3 μm depth of field, and another with an unprecedented 5 μm depth of field, both designed to perform under physically common conditions of high background signals. PMID:25302889
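
    The Fisher-information criterion being maximized can be illustrated in the simplest setting: 1D localization of a Gaussian PSF under Poisson noise with uniform background. This is a sketch of the information calculation only, not the authors' pupil-plane optimization:

```python
import numpy as np

def crb_localization(x0, n_photons, psf_sigma, bg, pixels):
    """Cramer-Rao bound on 1D position for a Gaussian PSF, Poisson noise.

    mu_k is the expected photon count in pixel k; the Fisher information
    for the position parameter is I = sum_k (d mu_k / d x0)^2 / (mu_k + bg),
    and the best achievable localization std-dev is 1/sqrt(I).
    (Constant-normalization approximation: pixels cover the full PSF.)
    """
    g = np.exp(-(pixels - x0) ** 2 / (2.0 * psf_sigma**2))
    mu = n_photons * g / g.sum()              # expected signal photons/pixel
    dmu = mu * (pixels - x0) / psf_sigma**2   # analytic derivative w.r.t. x0
    info = np.sum(dmu**2 / (mu + bg))
    return 1.0 / np.sqrt(info)
```

    As expected, the bound tightens with more photons and loosens with higher background, which is why the paper designs PSFs specifically for high-background conditions.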

  19. Automated Recognition of 3D Features in GPIR Images

    NASA Technical Reports Server (NTRS)

    Park, Han; Stough, Timothy; Fijany, Amir

    2007-01-01

    A method of automated recognition of three-dimensional (3D) features in images generated by ground-penetrating imaging radar (GPIR) is undergoing development. GPIR 3D images can be analyzed to detect and identify such subsurface features as pipes and other utility conduits. Until now, much of the analysis of GPIR images has been performed manually by expert operators who must visually identify and track each feature. The present method is intended to satisfy a need for more efficient and accurate analysis by means of algorithms that can automatically identify and track subsurface features, with minimal supervision by human operators. In this method, data from multiple sources (for example, data on different features extracted by different algorithms) are fused together for identifying subsurface objects. The algorithms of this method can be classified in several different ways. In one classification, the algorithms fall into three classes: (1) image-processing algorithms, (2) feature-extraction algorithms, and (3) a multiaxis data-fusion/pattern-recognition algorithm that includes a combination of machine-learning, pattern-recognition, and object-linking algorithms. The image-processing class includes preprocessing algorithms for reducing noise and enhancing target features for pattern recognition. The feature-extraction algorithms operate on preprocessed data to extract such specific features in images as two-dimensional (2D) slices of a pipe. Then the multiaxis data-fusion/pattern-recognition algorithm identifies, classifies, and reconstructs 3D objects from the extracted features. In this process, multiple 2D features extracted by use of different algorithms and representing views along different directions are used to identify and reconstruct 3D objects.
In object linking, which is an essential part of this process, features identified in successive 2D slices and located within a threshold radius of identical features in adjacent slices are linked in a directed-graph data structure. Relative to past approaches, this multiaxis approach offers the advantages of more reliable detections, better discrimination of objects, and provision of redundant information, which can be helpful in filling gaps in feature recognition by one of the component algorithms. The image-processing class also includes postprocessing algorithms that enhance identified features to prepare them for further scrutiny by human analysts (see figure). Enhancement of images as a postprocessing step is a significant departure from traditional practice, in which enhancement of images is a preprocessing step.
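
    The threshold-radius linking step can be sketched as a greedy nearest-neighbour pass over successive slices; an illustrative simplification of the directed-graph construction described above:

```python
import numpy as np

def link_slices(slices, radius):
    """Link features in successive 2D slices into a directed graph.

    slices: list of (N_i, 2) arrays of feature centroids, one per slice.
    A feature is linked to the nearest feature in the next slice if that
    neighbour lies within `radius`. Returns edges as
    ((slice, idx), (slice + 1, idx)) pairs.
    """
    edges = []
    for s in range(len(slices) - 1):
        cur, nxt = np.asarray(slices[s]), np.asarray(slices[s + 1])
        if len(cur) == 0 or len(nxt) == 0:
            continue
        for i, p in enumerate(cur):
            d = np.linalg.norm(nxt - p, axis=1)
            j = int(np.argmin(d))
            if d[j] <= radius:
                edges.append(((s, i), (s + 1, j)))
    return edges
```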

  20. 3D VSP imaging in the Deepwater GOM

    NASA Astrophysics Data System (ADS)

    Hornby, B. E.

    2005-05-01

    Seismic imaging challenges in the Deepwater GOM include surface- and sediment-related multiples and issues arising from complicated salt bodies. Frequently, wells encounter geologic complexity not resolved on conventional surface seismic sections. To help address these challenges, BP has been acquiring 3D VSP (vertical seismic profile) surveys in the Deepwater GOM. The procedure involves placing an array of seismic sensors in the borehole and acquiring a 3D seismic dataset with a surface seismic gunboat that fires airguns in a spiral pattern around the wellbore. Placing the seismic geophones in the borehole provides a higher-resolution and more accurate image near the borehole, as well as other advantages related to the unique position of the sensors relative to complex structures. The technical objectives are to complement surface seismic with improved resolution (~2X surface seismic), better definition of high-dip structures (e.g. salt flanks), and filling in "imaging holes" in complex sub-salt plays where surface seismic is blind. The business drivers for this effort are reduced risk in well placement, improved reserve calculation, and better understanding of compartmentalization and stratigraphic variation. To date, BP has acquired 3D VSP surveys in ten wells in the Deepwater GOM. The initial results are encouraging and show both improved resolution and better structural images in complex sub-salt plays where the surface seismic is blind. In conjunction with this effort, BP has influenced contractor borehole seismic tool design and developed methods to enable 3D VSP surveys to be conducted offline, thereby avoiding the high daily rig costs associated with a Deepwater drilling rig.

  1. Radiometric Quality Evaluation of INSAT-3D Imager Data

    NASA Astrophysics Data System (ADS)

    Prakash, S.; Jindal, D.; Badal, N.; Kartikeyan, B.; Gopala Krishna, B.

    2014-11-01

    INSAT-3D is an advanced meteorological satellite of ISRO which acquires imagery in optical and infra-red (IR) channels for the study of weather dynamics in the Indian sub-continent region. In this paper, the methodology of radiometric quality evaluation for Level-1 products of the Imager, one of the payloads onboard INSAT-3D, is described. First, the overall visual quality of a scene is assessed in terms of dynamic range, edge sharpness or modulation transfer function (MTF), presence of striping, and other image artefacts. Uniform targets in desert and sea regions are identified, for which detailed radiometric performance evaluation for the IR channels is carried out. The mean brightness temperature (BT) of each target is computed and validated against independently generated radiometric references. Further, diurnal/seasonal trends in target BT values and radiometric uncertainty or sensor noise are studied. Results of radiometric quality evaluation over a duration of eight months (January to August 2014) and a comparison of radiometric consistency pre/post yaw flip of the satellite are presented. Radiometric analysis indicates that INSAT-3D images have high contrast (MTF > 0.2) and low striping effects. A bias of <4 K is observed in the brightness temperature values of the TIR-1 channel measured during January-August 2014, indicating consistent radiometric calibration. Diurnal and seasonal analysis shows that the noise-equivalent differential temperature (NEdT) for the IR channels is consistent and well within specifications.

  2. Automated spatial alignment of 3D torso images.

    PubMed

    Bose, Arijit; Shah, Shishir K; Reece, Gregory P; Crosby, Melissa A; Beahm, Elisabeth K; Fingeret, Michelle C; Markey, Mia K; Merchant, Fatima A

    2011-01-01

    This paper describes an algorithm for automated spatial alignment of three-dimensional (3D) surface images in order to achieve a pre-defined orientation. Surface images of the torso are acquired from breast cancer patients undergoing reconstructive surgery to facilitate objective evaluation of breast morphology pre-operatively (for treatment planning) and/or post-operatively (for outcome assessment). Depending on the viewing angles of the multiple cameras used for stereophotography, the orientation of the acquired torso in the images may vary from the normal upright position. Consequently, when translating these data into a standard 3D framework for visualization and analysis, the coordinate geometry differs from the upright position, making robust and standardized comparison of images impractical. Moreover, manual manipulation and navigation of images to the desired upright position is subject to user bias. Automating the process of alignment and orientation removes operator bias and permits robust and repeatable adjustment of surface images to a pre-defined or desired spatial geometry. PMID:22256310
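The abstract does not spell out the alignment algorithm itself, but one common way to automate upright orientation of a surface scan is to rotate the point cloud onto its principal axes; the following is a minimal sketch under that assumption (the function name and the PCA-based strategy are illustrative, not the paper's stated method):

```python
import numpy as np

def upright_align(points):
    """Rotate a 3D point cloud so its principal axes align with the
    coordinate axes; for a torso scan the first axis (largest variance)
    would typically be the head-to-hip direction.

    points: (N, 3) array of surface points.
    Returns the centered, axis-aligned point cloud.
    """
    centered = points - points.mean(axis=0)
    # Eigenvectors of the covariance matrix give the principal axes.
    eigvals, eigvecs = np.linalg.eigh(np.cov(centered.T))
    # Sort axes by decreasing variance before projecting onto them.
    order = np.argsort(eigvals)[::-1]
    return centered @ eigvecs[:, order]
```

A full solution would also need to fix the sign of each axis (e.g. head up, chest forward), which PCA alone cannot decide.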

  3. Stereotactic mammography imaging combined with 3D US imaging for image guided breast biopsy

    SciTech Connect

    Surry, K. J. M.; Mills, G. R.; Bevan, K.; Downey, D. B.; Fenster, A.

    2007-11-15

    Stereotactic X-ray mammography (SM) and ultrasound (US) guidance are both commonly used for breast biopsy. While SM provides three-dimensional (3D) targeting information and US provides real-time guidance, both have limitations. SM is a long and uncomfortable procedure and the US guided procedure is inherently two dimensional (2D), requiring a skilled physician for both safety and accuracy. The authors developed a 3D US-guided biopsy system to be integrated with, and to supplement, SM imaging. Their goal is to be able to biopsy a larger percentage of suspicious masses using US, by clarifying ambiguous structures with SM imaging. Features from SM and US guided biopsy were combined, including breast stabilization, a confined needle trajectory, and dual modality imaging. The 3D US guided biopsy system uses a 7.5 MHz breast probe and is mounted on an upright SM machine for preprocedural imaging. Intraprocedural targeting and guidance was achieved with real-time 2D and near real-time 3D US imaging. Postbiopsy 3D US imaging allowed for confirmation that the needle was penetrating the target. The authors evaluated the 3D US-guided biopsy accuracy of their system using test phantoms. To use mammographic imaging information, they registered the SM and 3D US coordinate systems. The 3D positions of targets identified in the SM images were determined with a target localization error (TLE) of 0.49 mm. The z component (x-ray tube to image) of the TLE dominated, with a TLE_z of 0.47 mm. The SM system was then registered to 3D US, with a fiducial registration error (FRE) and target registration error (TRE) of 0.82 and 0.92 mm, respectively. Analysis of the FRE and TRE components showed that these errors were dominated by inaccuracies in the z component, with an FRE_z of 0.76 mm and a TRE_z of 0.85 mm.
A stereotactic mammography and 3D US guided breast biopsy system should include breast compression for stability and safety and dual modality imaging for target localization. The system will provide preprocedural x-ray mammography information in the form of SM imaging along with real-time US imaging for needle guidance to a target. 3D US imaging will also be available for targeting, guidance, and biopsy verification immediately postbiopsy.
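The FRE and TRE reported above are both root-mean-square distances over corresponding point pairs, differing only in whether the points were used to compute the registration (fiducials) or held out as independent targets. A minimal sketch:

```python
import numpy as np

def registration_errors(reference, registered):
    """Root-mean-square distance between corresponding 3D points.

    Evaluated on the fiducials used to compute the registration, this
    is the fiducial registration error (FRE); evaluated on independent
    target points, it is the target registration error (TRE).
    """
    a = np.asarray(reference, dtype=float)
    b = np.asarray(registered, dtype=float)
    d = np.linalg.norm(a - b, axis=1)  # per-point distances
    return float(np.sqrt(np.mean(d ** 2)))
```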

  4. A hybrid framework for 3D medical image segmentation.

    PubMed

    Chen, Ting; Metaxas, Dimitris

    2005-12-01

    In this paper we propose a novel hybrid 3D segmentation framework which combines Gibbs models, marching cubes and deformable models. In the framework, first we construct a new Gibbs model whose energy function is defined on a high order clique system. The new model includes both region and boundary information during segmentation. Next we improve the original marching cubes method to construct 3D meshes from Gibbs models' output. The 3D mesh serves as the initial geometry of the deformable model. Then we deform the deformable model using external image forces so that the model converges to the object surface. We run the Gibbs model and the deformable model recursively by updating the Gibbs model's parameters using the region and boundary information in the deformable model segmentation result. In our approach, the hybrid combination of region-based methods and boundary-based methods results in improved segmentations of complex structures. The benefit of the methodology is that it produces high quality segmentations of 3D structures using little prior information and minimal user intervention. The modules in this segmentation methodology are developed within the context of the Insight ToolKit (ITK). We present experimental segmentation results of brain tumors and evaluate our method by comparing experimental results with expert manual segmentations. The evaluation results show that the methodology achieves high quality segmentation results with computational efficiency. We also present segmentation results of other clinical objects to illustrate the strength of the methodology as a generic segmentation framework. PMID:15896997

  5. Pavement cracking measurements using 3D laser-scan images

    NASA Astrophysics Data System (ADS)

    Ouyang, W.; Xu, B.

    2013-10-01

    Pavement condition surveying is vital for pavement maintenance programs that ensure ride quality and traffic safety. This paper first introduces an automated pavement inspection system which uses a three-dimensional (3D) camera and a structured laser light to acquire dense transverse profiles of a pavement lane surface as it is carried by a moving vehicle. After calibration, the 3D system can yield a depth resolution of 0.5 mm and a transverse resolution of 1.56 mm pixel-1 at a camera height of 1.4 m from the ground. The scanning rate of the camera can be set as high as 5,000 lines s-1, allowing the density of scanned profiles to vary with the vehicle's speed. The paper then illustrates the algorithms that utilize 3D information to detect pavement distress, such as transverse, longitudinal and alligator cracking, and presents field tests of the system's repeatability when scanning a sample pavement in multiple runs at the same vehicle speed, at different vehicle speeds and under different weather conditions. The results show that this dedicated 3D system can capture accurate pavement images that detail surface distress, and obtain consistent crack measurements in repeated tests and under different driving and lighting conditions.
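As an illustration of how dense transverse depth profiles expose cracks, the toy function below flags pixels lying well below a local median baseline; the window size, depth threshold, and baseline estimator are illustrative assumptions, and the paper's actual distress-detection algorithms are more elaborate:

```python
import numpy as np

def crack_points(profile, window=15, depth_thresh=2.0):
    """Flag candidate crack pixels in one transverse depth profile.

    A pixel is a crack candidate when it lies more than depth_thresh
    (in depth units, e.g. mm) below the local surface baseline, which
    is estimated here by a moving median.
    """
    n = len(profile)
    half = window // 2
    baseline = np.array([np.median(profile[max(0, i - half):i + half + 1])
                         for i in range(n)])
    return np.where(baseline - profile > depth_thresh)[0]
```

Grouping flagged pixels across consecutive profiles would then distinguish transverse from longitudinal cracking by orientation.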

  6. 3D imaging of soil pore network: two different approaches

    NASA Astrophysics Data System (ADS)

    Matrecano, M.; Di Matteo, B.; Mele, G.; Terribile, F.

    2009-04-01

    Pore geometry imaging and its quantitative description is a key factor for advances in the knowledge of physical, chemical and biological soil processes. For many years, photos from flattened surfaces of undisturbed soil samples impregnated with fluorescent resin, and from soil thin sections under the microscope, were the only means available for exploring pore architecture at different scales. Earlier 3D representations of the internal structure of the soil based on non-destructive methods were obtained using medical tomographic systems (NMR and X-ray CT). However, images provided by such equipment show strong limitations in terms of spatial resolution. In the last decade, very good results have been obtained using imaging from very expensive systems based on synchrotron radiation. More recently, X-ray micro-tomography has become the most widely applied technique, as it offers the best compromise between cost, resolution and image size. Conversely, the conceptually simpler but destructive method of "serial sectioning" has been progressively neglected owing to technical problems in sample preparation and the time needed to obtain an adequate number of serial sections for correct 3D reconstruction of soil pore geometry. In this work, a comparison between the two methods above has been carried out in order to define their advantages and shortcomings and to point out their different potential. A cylindrical undisturbed soil sample, 6.5 cm in diameter and 6.5 cm in height, from an Ap horizon of an alluvial soil showing vertic characteristics, was reconstructed using both a desktop X-ray micro-tomograph (Skyscan 1172) and the new automatic serial sectioning system SSAT (Sequential Section Automatic Tomography) set up at CNR ISAFOM in Ercolano (Italy) with the aim of overcoming most of the typical limitations of that technique.
The best image resolution was 7.5 µm per voxel with X-ray micro-CT, while the serial sectioning system reached 20 µm, albeit on less noisy images. The SSAT system showed more flexibility in terms of sample size, although both techniques allowed investigation of REVs (representative elementary volumes) for most macroscopic properties describing soil processes. Moreover, the undoubted advantages of non-destructiveness and easy sample preparation for the Skyscan 1172 are balanced by the lower overall cost of the SSAT and its potential for producing 3D representations of soil features beyond the simple solid/pore phases. Both approaches allow exactly the same image analysis procedures to be applied to the reconstructed 3D images, although each requires some specific pre-processing.
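Once either reconstruction is segmented into pore and solid voxels, the same quantitative descriptors apply to both volumes. Total porosity, the simplest such descriptor, is just the pore-voxel fraction; a minimal sketch:

```python
import numpy as np

def porosity(volume):
    """Total porosity of a binarized 3D volume (True/1 = pore voxel).

    The same measurement applies whether the voxel stack came from
    micro-CT or from serial sectioning, once both are segmented into
    pore and solid phases.
    """
    vol = np.asarray(volume, dtype=bool)
    return vol.sum() / vol.size
```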

  7. Automatic structural matching of 3D image data

    NASA Astrophysics Data System (ADS)

    Ponomarev, Svjatoslav; Lutsiv, Vadim; Malyshev, Igor

    2015-10-01

    A new image matching technique is described. It is implemented as an object-independent hierarchical structural juxtaposition algorithm based on an alphabet of simple object-independent contour structural elements. The structural matching applied implements an optimized method of walking through a truncated tree of all possible juxtapositions of two sets of structural elements. The algorithm was initially developed for dealing with 2D images such as aerospace photographs, and it turned out to be sufficiently robust and reliable for successfully matching pictures of natural landscapes taken in differing seasons, from differing aspect angles, and by differing sensors (visible optical, IR, and SAR pictures, as well as depth maps and geographical vector-type maps). In the version reported here, the algorithm is enhanced by additional use of information on the third spatial coordinate of observed points on object surfaces. Thus, it is now capable of matching images of 3D scenes in the tasks of automatic navigation of extremely low flying unmanned vehicles or autonomous terrestrial robots. The basic principles of 3D structural description and matching of images are described, and examples of image matching are presented.

  8. Towards magnetic 3D x-ray imaging

    NASA Astrophysics Data System (ADS)

    Fischer, Peter; Streubel, R.; Im, M.-Y.; Parkinson, D.; Hong, J.-I.; Schmidt, O. G.; Makarov, D.

    2014-03-01

    Mesoscale phenomena in magnetism will add essential parameters to improve the speed, size and energy efficiency of spin-driven devices. Multidimensional visualization techniques will be crucial to achieve mesoscience goals. Magnetic tomography is of great interest for understanding e.g. interfaces in magnetic multilayers, the inner structure of magnetic nanocrystals and nanowires, or the functionality of artificial 3D magnetic nanostructures. We have developed tomographic capabilities with magnetic full-field soft X-ray microscopy, combining X-MCD as an element-specific magnetic contrast mechanism with the high spatial and temporal resolution afforded by Fresnel zone plate optics. At beamline 6.1.2 at the ALS (Berkeley, CA) a new rotation stage allows recording an angular series (up to 360 deg) of high-precision 2D projection images. Applying state-of-the-art reconstruction algorithms, it is possible to retrieve the full 3D structure. We will present results on prototypic rolled-up Ni and Co/Pt tubes and glass capillaries coated with magnetic films, and compare them to other 3D imaging approaches, e.g. in electron microscopy. Supported by BES MSD DOE Contract No. DE-AC02-05-CH11231 and the ERC under the EU FP7 program (grant agreement No. 306277).

  9. 3D super-resolution imaging with blinking quantum dots

    PubMed Central

    Wang, Yong; Fruhwirth, Gilbert; Cai, En; Ng, Tony; Selvin, Paul R.

    2013-01-01

    Quantum dots are promising candidates for single molecule imaging due to their exceptional photophysical properties, including their intense brightness and resistance to photobleaching. They are also notorious for their blinking. Here we report a novel way to take advantage of quantum dot blinking to develop an imaging technique in three-dimensions with nanometric resolution. We first applied this method to simulated images of quantum dots, and then to quantum dots immobilized on microspheres. We achieved imaging resolutions (FWHM) of 8–17 nm in the x-y plane and 58 nm (on coverslip) or 81 nm (deep in solution) in the z-direction, approximately 3–7 times better than what has been achieved previously with quantum dots. This approach was applied to resolve the 3D distribution of epidermal growth factor receptor (EGFR) molecules at, and inside of, the plasma membrane of resting basal breast cancer cells. PMID:24093439

  10. Performance prediction for 3D filtering of multichannel images

    NASA Astrophysics Data System (ADS)

    Rubel, Oleksii; Kozhemiakin, Ruslan A.; Abramov, Sergey K.; Lukin, Vladimir V.; Vozel, Benoit; Chehdi, Kacem

    2015-10-01

    The performance of denoising based on the discrete cosine transform, applied to multichannel remote sensing images corrupted by additive white Gaussian noise, is analyzed. Images obtained by the satellite Earth Observing-1 (EO-1) mission using the Hyperion hyperspectral imager, which have high input SNR, are taken as test images. Denoising performance is characterized by the improvement of PSNR. For hard-thresholding 3D DCT-based denoising, simple statistics (probabilities of coefficients being less than a certain threshold) are used to predict denoising efficiency using curves fitted to scatterplots. It is shown that the obtained curves (approximations) predict denoising efficiency with high accuracy. The analysis is carried out for different numbers of channels processed jointly, and the universality of the prediction across different numbers of channels is demonstrated.
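The prediction statistic described above, the probability that a DCT coefficient magnitude falls below a threshold proportional to the noise standard deviation, can be computed as sketched below; the block size, threshold factor, and function name are illustrative choices, and the fitted prediction curve itself is data-dependent and not reproduced here:

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II matrix of size n x n."""
    k = np.arange(n)
    M = np.sqrt(2.0 / n) * np.cos(
        np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    M[0, :] = 1.0 / np.sqrt(n)
    return M

def small_coeff_probability(image, sigma, block=8, thresh=0.5):
    """Fraction of block-DCT coefficients with magnitude below
    thresh * sigma -- a simple statistic of the kind used to predict
    hard-thresholding DCT denoising efficiency."""
    M = dct_matrix(block)
    h, w = image.shape
    count = total = 0
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            coeffs = M @ image[y:y + block, x:x + block] @ M.T
            count += int(np.sum(np.abs(coeffs) < thresh * sigma))
            total += coeffs.size
    return count / total
```

Images with many small coefficients (smooth content relative to the noise level) sit at one end of the fitted curve and enjoy large PSNR improvement; highly textured images sit at the other.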

  11. 3D Lunar Terrain Reconstruction from Apollo Images

    NASA Technical Reports Server (NTRS)

    Broxton, Michael J.; Nefian, Ara V.; Moratto, Zachary; Kim, Taemin; Lundy, Michael; Segal, Alkeksandr V.

    2009-01-01

    Generating accurate three-dimensional planetary models is becoming increasingly important as NASA plans manned missions to return to the Moon in the next decade. This paper describes a 3D surface reconstruction system called the Ames Stereo Pipeline that is designed to produce such models automatically by processing orbital stereo imagery. We discuss two important core aspects of this system: (1) refinement of satellite station positions and pose estimates through least squares bundle adjustment; and (2) a stochastic plane fitting algorithm that generalizes the Lucas-Kanade method for optimal matching between stereo pair images. These techniques allow us to automatically produce seamless, highly accurate digital elevation models from multiple stereo image pairs while significantly reducing the influence of image noise. Our technique is demonstrated on a set of 71 high resolution scanned images from the Apollo 15 mission.

  12. The 3D model control of image processing

    NASA Technical Reports Server (NTRS)

    Nguyen, An H.; Stark, Lawrence

    1989-01-01

    Telerobotics studies the remote control of distant robots by a human operator using supervisory or direct control. Even if the robot manipulator has vision or other senses, problems arise involving control, communications, and delay. The communication delays that may be expected with telerobots working in space stations while being controlled from an Earth lab have led to a number of experiments attempting to circumvent the problem. This delay in communication is a main motivating factor in moving from well understood instantaneous hands-on manual control to less well understood supervisory control; the ultimate step would be the realization of a fully autonomous robot. The 3-D model control plays a crucial role in resolving many conflicting image processing problems that are inherent in the bottom-up approach of most current machine vision processes. The 3-D model control approach is also capable of providing the necessary visual feedback information for both the control algorithms and the human operator.

  13. 3D range scan enhancement using image-based methods

    NASA Astrophysics Data System (ADS)

    Herbort, Steffen; Gerken, Britta; Schugk, Daniel; Wöhler, Christian

    2013-10-01

    This paper addresses the problem of 3D surface scan refinement, which is desirable due to noise, outliers, and missing measurements being present in the 3D surfaces obtained with a laser scanner. We present a novel algorithm for the fusion of absolute laser scanner depth profiles and photometrically estimated surface normal data, which yields a noise-reduced and highly detailed depth profile with large scale shape robustness. In contrast to other approaches published in the literature, the presented algorithm (1) handles non-Lambertian surfaces, (2) simultaneously computes surface reflectance (i.e. BRDF) parameters required for 3D reconstruction, (3) models pixelwise incident light and viewing directions, and (4) accounts for interreflections. The algorithm as such relies on the minimization of a three-component error term, which penalizes intensity deviations, integrability deviations, and deviations from the known large-scale surface shape. The solution of the error minimization is obtained iteratively based on a calculus of variations. BRDF parameters are estimated by initially reducing and then iteratively refining the optical resolution, which provides the required robust data basis. The 3D reconstruction of concave surface regions affected by interreflections is improved by compensating global illumination in the image data. The algorithm is evaluated on eight objects with varying albedos and reflectance behaviors (diffuse, specular, metallic). The qualitative evaluation shows a removal of outliers and a strong reduction of noise, while the large scale shape is preserved. Fine surface details which were not previously contained in the surface scans are incorporated through the use of image data. The algorithm is evaluated with respect to its absolute accuracy using two caliper objects of known shape, and based on synthetically generated data.
The beneficial effect of interreflection compensation on the reconstruction accuracy is evaluated quantitatively in a Photometric Stereo framework.

  14. Imaging PVC gas pipes using 3-D GPR

    SciTech Connect

    Bradford, J.; Ramaswamy, M.; Peddy, C.

    1996-11-01

    Over the years, many enhancements have been made by the oil and gas industry to improve the quality of seismic images. The GPR project at GTRI borrows heavily from these technologies in order to produce 3-D GPR images of PVC gas pipes. As will be demonstrated, improvements in GPR data acquisition, 3-D processing and visualization schemes yield good images of PVC pipes in the subsurface. Data have been collected in cooperation with the local gas company and at a test facility in Texas. Surveys were conducted over both a metal pipe and PVC pipes of diameters ranging from 1/2 in. to 4 in. at depths from 1 ft to 3 ft in different soil conditions. The metal pipe produced very good reflections and was used to fine-tune and optimize the processing run stream. It was found that the following steps significantly improve the overall image: (1) statics for drift and topography compensation, (2) deconvolution, (3) filtering and automatic gain control, (4) migration for focusing and resolution, and (5) visualization optimization. The processing flow implemented is relatively straightforward, simple to execute and robust under varying conditions. Future work will include testing resolution limits, effects of soil conditions, and leak detection.
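Of the processing steps listed, automatic gain control (step 3) is easy to illustrate: each sample is scaled by the inverse of the local RMS amplitude in a sliding window, equalizing amplitudes along the trace. A minimal sketch (the window length is an illustrative choice):

```python
import numpy as np

def agc(trace, window=50):
    """Automatic gain control for a single GPR/seismic trace.

    Each sample is divided by the RMS amplitude of the samples in a
    sliding window centered on it, so weak late arrivals are boosted
    relative to strong early ones.
    """
    n = len(trace)
    half = window // 2
    out = np.empty(n)
    for i in range(n):
        seg = trace[max(0, i - half):i + half + 1]
        rms = np.sqrt(np.mean(seg ** 2))
        out[i] = trace[i] / rms if rms > 0 else 0.0
    return out
```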

  15. Fully automatic plaque segmentation in 3-D carotid ultrasound images.

    PubMed

    Cheng, Jieyu; Li, He; Xiao, Feng; Fenster, Aaron; Zhang, Xuming; He, Xiaoling; Li, Ling; Ding, Mingyue

    2013-12-01

    Automatic segmentation of carotid plaques from ultrasound images has been shown to be an important task for monitoring progression and regression of carotid atherosclerosis. Considering the complex structure and heterogeneity of plaques, a fully automatic segmentation method based on media-adventitia and lumen-intima boundary priors is proposed. This method combines image intensity with structure information in both initialization and a level-set evolution process. Algorithm accuracy was examined on the common carotid artery part of 26 3-D carotid ultrasound images (34 plaques ranging in volume from 2.5 to 456 mm³) by comparing the results of our algorithm with manual segmentations by two experts. Evaluation results indicated that the algorithm yielded total plaque volume (TPV) differences of -5.3 ± 12.7 and -8.5 ± 13.8 mm³ and absolute TPV differences of 9.9 ± 9.5 and 11.8 ± 11.1 mm³. Moreover, high correlation coefficients for TPV (0.993 and 0.992) between the algorithm results and both sets of manual results were obtained. The automatic method provides a reliable way to segment carotid plaque in 3-D ultrasound images and can be used in clinical practice to estimate plaque measurements for management of carotid atherosclerosis. PMID:24063959

  16. Calibration of an intensity ratio system for 3D imaging

    NASA Astrophysics Data System (ADS)

    Tsui, H. T.; Tang, K. C.

    1989-03-01

    An intensity ratio method for 3D imaging is proposed, with error analysis given for assessment and future improvements. The method is cheap and reasonably fast, as it requires no mechanical scanning or laborious correspondence computation. One drawback of intensity ratio methods that hampers their widespread use is undesirable variation in image intensity. This is usually caused by differences in reflection from different parts of an object surface and by the automatic iris or gain control of the camera. In our method, the gray-level patterns used include a uniform pattern, a staircase pattern and a sawtooth pattern, to make the system more robust against errors in intensity ratio. 3D information on the surface points of an object can be derived from the intensity ratios of the images by triangulation. A reference back plane is placed behind the object to monitor the change in image intensity. Errors due to camera calibration, projector calibration, variations in intensity, imperfection of the slides, etc. are analyzed. Early experiments with the system, using a newvicon CCTV camera with back-plane intensity correction, gave a mean-square range error of about 0.5 percent. Extensive analysis of the various errors is expected to yield methods for improving the accuracy.

  17. Depth-controlled 3D TV image coding

    NASA Astrophysics Data System (ADS)

    Chiari, Armando; Ciciani, Bruno; Romero, Milton; Rossi, Ricardo

    1998-04-01

    Conventional 3D-TV codecs processing one down-compatible (either left or right) channel may optionally include the extraction of the disparity field associated with the stereo-pairs to support the coding of the complementary channel. A two-fold improvement over such approaches is proposed in this paper by exploiting 3D features retained in the stereo-pairs to reduce the redundancies in both channels according to their visual sensitivity. Through an a-priori disparity field analysis, our coding scheme separates a region of interest from the foreground/background in the reproduced volume space in order to code them selectively based on their visual relevance. Such a region of interest is here identified as the one focused on by the shooting device. By suitably scaling the DCT coefficients in such a way that precision is reduced for image blocks lying in less relevant areas, our approach aims at reducing the signal energy in the background/foreground patterns, while retaining finer details in the more relevant image portions. From an implementation point of view, it is worth noting that the proposed system keeps its surplus processing power on the encoder side only. Simulation results show such improvements as better image quality for a given transmission bit rate, or graceful quality degradation of the reconstructed images with decreasing data rates.

  18. 3D-imaging using micro-PIXE

    NASA Astrophysics Data System (ADS)

    Ishii, K.; Matsuyama, S.; Watanabe, Y.; Kawamura, Y.; Yamaguchi, T.; Oyama, R.; Momose, G.; Ishizaki, A.; Yamazaki, H.; Kikuchi, Y.

    2007-02-01

    We have developed a 3D-imaging system using characteristic X-rays produced by proton micro-beam bombardment. The 3D-imaging system consists of a micro-beam and a 1-megapixel X-ray CCD camera (Hamamatsu Photonics C8800X), and has a spatial resolution of 4 μm using characteristic Ti-K X-rays (4.558 keV) produced by 3 MeV protons with a beam spot size of ˜1 μm. We applied this system, namely a micron-CT, to observe the inside of a living small ant's head of ˜1 mm diameter. An ant was inserted into a small polyimide tube, the inside diameter and wall thickness of which are 1000 and 25 μm, respectively, and scanned by the micron-CT. Three-dimensional images of the ant's head were obtained with a spatial resolution of 4 μm. It was found that, in accordance with the strong dependence of photo-ionization cross-sections on atomic number, the mandibular gland of the ant contains heavier elements; moreover, the CT image of a living ant anaesthetized with chloroform is quite different from that of a dead ant dipped in formalin.

  19. 3D remote sensing images data organization and web publication

    NASA Astrophysics Data System (ADS)

    Tong, Hengjian; Zhang, Yun; Shao, Zhenfeng

    2008-12-01

    This paper presents a method for organizing 3D remote sensing images and publishing these images quickly. We use two levels of grid-based spatial index to organize massive images. First, we divide a huge digital city image into many map sheets (big images). All of the map sheets form a matrix structure, and we use a row number and column number to encode every map sheet. Second, using resampling and bilinear interpolation, we build a pyramid for every map sheet to form a multi-scale hierarchical structure. While building the pyramid, we adopt JPEG compression to produce JPEG format image files; the number of output image files equals the number of pyramid layers. Third, we divide every pyramid layer image into many small image tiles. The size of each tile image is 256×256 pixels. All of the small tiles of each pyramid layer image also form a matrix structure, and we again use a row number and column number to encode every small image tile. We create a file directory for each map sheet in order to store all of its small image tiles. We neatly combine the spatial index structure with the file name of each tile, which makes the server able to return a tile to the browser very quickly without any query operation. With the proposed method, we can provide users with a fast and efficient tool to publish their own spatial information without any programming work. The system performance is very good and the response time is almost identical for images of different sizes.
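The key to the query-free serving described above is that a tile's file path is a pure function of its grid index: sheet row/column, pyramid level, and tile row/column. A sketch of such a naming scheme (the exact directory layout and file names are illustrative assumptions, not the paper's scheme):

```python
def tile_path(sheet_row, sheet_col, level, tile_row, tile_col):
    """Build a tile file path directly from the two-level grid index,
    so the server can map a tile request to a file on disk without any
    database lookup."""
    return (f"sheet_{sheet_row:03d}_{sheet_col:03d}/"
            f"L{level}/{tile_row}_{tile_col}.jpg")
```

The browser computes the same index from the viewport and zoom level, requests the resulting path, and the server simply streams the file.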

  20. Retinal image quality assessment using generic features

    NASA Astrophysics Data System (ADS)

    Fasih, Mahnaz; Langlois, J. M. Pierre; Ben Tahar, Houssem; Cheriet, Farida

    2014-03-01

    Retinal image quality assessment is an important step in automated eye disease diagnosis. Diagnosis accuracy is highly dependent on the quality of retinal images, because poor image quality might prevent the observation of significant eye features and disease manifestations. A robust algorithm is therefore required in order to evaluate the quality of images in a large database. We developed an algorithm for retinal image quality assessment based on generic features that is independent of segmentation methods. It exploits local sharpness and texture features by applying the cumulative probability of blur detection metric and the run-length encoding algorithm, respectively. The quality features are combined to evaluate the image's suitability for diagnosis purposes. Based on the recommendations of medical experts and our experience, we compared a global and a local approach. A support vector machine with radial basis functions was used as a nonlinear classifier in order to classify images into gradable and ungradable groups. We applied our methodology to 65 images of size 2592×1944 pixels that had been graded by a medical expert. The expert evaluated 38 images as gradable and 27 as ungradable. The results indicate very good agreement between the proposed algorithm's predictions and the medical expert's judgment: the sensitivity and specificity for the local approach are respectively 92% and 94%. The algorithm demonstrates sufficient robustness to identify relevant images for automated diagnosis.
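As an illustration of the run-length texture component mentioned above, the sketch below computes short-run emphasis, one classic run-length statistic, over row-wise runs of a quantized grayscale image; the number of quantization levels and the row-only scan direction are illustrative choices, not the paper's exact feature:

```python
import numpy as np

def short_run_emphasis(gray, levels=8):
    """Short-run emphasis (SRE) from row-wise gray-level runs.

    gray: 2D array with values in [0, 1].
    SRE is high for busy textures (many short runs) and low for
    smooth or blurred regions (few long runs).
    """
    q = np.minimum((gray * levels).astype(int), levels - 1)
    num = den = 0.0
    for row in q:
        run_val, run_len = row[0], 1
        runs = []
        for v in row[1:]:
            if v == run_val:
                run_len += 1
            else:
                runs.append(run_len)
                run_val, run_len = v, 1
        runs.append(run_len)
        for length in runs:
            num += 1.0 / (length * length)
            den += 1
    return num / den
```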

  1. Retinal Imaging Techniques for Diabetic Retinopathy Screening.

    PubMed

    Goh, James Kang Hao; Cheung, Carol Y; Sim, Shaun Sebastian; Tan, Pok Chien; Tan, Gavin Siew Wei; Wong, Tien Yin

    2016-01-01

    Due to the increasing prevalence of diabetes mellitus, demand for diabetic retinopathy (DR) screening platforms is steeply increasing. Early detection and treatment of DR are key public health interventions that can greatly reduce the likelihood of vision loss. Current DR screening programs typically employ retinal fundus photography, which relies on skilled readers for manual DR assessment. However, this is labor-intensive and suffers from inconsistency across sites. Hence, there has been a recent proliferation of automated retinal image analysis software that may potentially alleviate this burden cost-effectively. Furthermore, current screening programs based on 2-dimensional fundus photography do not effectively screen for diabetic macular edema (DME). Optical coherence tomography is becoming increasingly recognized as the reference standard for DME assessment and can potentially provide a cost-effective solution for improving DME detection in large-scale DR screening programs. Current screening techniques are also unable to image the peripheral retina and require pharmacological pupil dilation; ultra-widefield imaging and confocal scanning laser ophthalmoscopy, which address these drawbacks, possess great potential. In this review, we summarize the current DR screening methods using various retinal imaging techniques, and also outline future possibilities. Advances in retinal imaging techniques can potentially transform the management of patients with diabetes, providing savings in health care costs and resources. PMID:26830491

  2. Improving 3D Wavelet-Based Compression of Hyperspectral Images

    NASA Technical Reports Server (NTRS)

    Klimesh, Matthew; Kiely, Aaron; Xie, Hua; Aranki, Nazeeh

    2009-01-01

    Two methods of increasing the effectiveness of three-dimensional (3D) wavelet-based compression of hyperspectral images have been developed. (As used here, images signifies both images and digital data representing images.) The methods are oriented toward reducing or eliminating detrimental effects of a phenomenon, referred to as spectral ringing, that is described below. In 3D wavelet-based compression, an image is represented by a multiresolution wavelet decomposition consisting of several subbands obtained by applying wavelet transforms in the two spatial dimensions corresponding to the two spatial coordinate axes of the image plane, and by applying wavelet transforms in the spectral dimension. Spectral ringing is named after the more familiar spatial ringing (spurious spatial oscillations) that can be seen parallel to and near edges in ordinary images reconstructed from compressed data. These ringing phenomena are attributable to effects of quantization. In hyperspectral data, the individual spectral bands play the role of edges, causing spurious oscillations to occur in the spectral dimension. In the absence of such corrective measures as the present two methods, spectral ringing can manifest itself as systematic biases in some reconstructed spectral bands and can reduce the effectiveness of compression of spatially-low-pass subbands. One of the two methods is denoted mean subtraction. The basic idea of this method is to subtract mean values from spatial planes of spatially low-pass subbands prior to encoding, because (a) such spatial planes often have mean values that are far from zero and (b) zero-mean data are better suited for compression by methods that are effective for subbands of two-dimensional (2D) images. In this method, after the 3D wavelet decomposition is performed, mean values are computed for and subtracted from each spatial plane of each spatially-low-pass subband. 
The resulting data are converted to sign-magnitude form and compressed in a manner similar to that of a baseline hyperspectral-image-compression method. The mean values are encoded in the compressed bit stream and added back to the data at the appropriate decompression step. The overhead incurred by encoding the mean values (only a few bits per spectral band) is negligible with respect to the huge size of a typical hyperspectral data set. The other method is denoted modified decomposition. This method is so named because it involves a modified version of a commonly used multiresolution wavelet decomposition, known in the art as the 3D Mallat decomposition, in which (a) the first of multiple stages of a 3D wavelet transform is applied to the entire dataset and (b) subsequent stages are applied only to the horizontally-, vertically-, and spectrally-low-pass subband from the preceding stage. In the modified decomposition, in stages after the first, not only is the spatially-low-pass, spectrally-low-pass subband further decomposed, but spatially-low-pass, spectrally-high-pass subbands are also further decomposed spatially. Either method can be used alone to improve the quality of a reconstructed image (see figure). Alternatively, the two methods can be combined by first performing the modified decomposition, then subtracting the mean values from spatial planes of spatially-low-pass subbands.
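The mean-subtraction step lends itself to a short sketch; the subband shape and Gaussian coefficient values below are toy assumptions, not taken from the article:

```python
import numpy as np

def subtract_plane_means(subband):
    """Zero out the mean of each spatial plane of a spatially-low-pass
    subband (axis 0 indexes spectral planes); the per-plane means are
    returned so they can be encoded as side information."""
    means = subband.mean(axis=(1, 2))
    return subband - means[:, None, None], means

def restore_plane_means(centered, means):
    """Decoder side: add the encoded means back after decompression."""
    return centered + means[:, None, None]

# toy subband: 4 spectral planes of 8x8 coefficients whose means are
# far from zero, which is the situation the method targets
rng = np.random.default_rng(0)
subband = rng.normal(loc=50.0, scale=2.0, size=(4, 8, 8))
centered, means = subtract_plane_means(subband)
restored = restore_plane_means(centered, means)
```

After subtraction every spatial plane is zero-mean, which suits encoders tuned for 2D subband statistics, and the round trip is lossless.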

  3. Single-shot retinal imaging with AO spectral OCT

    NASA Astrophysics Data System (ADS)

    Zhang, Yan; Rha, Jungtae; Jonnal, Ravi S.; Miller, Donald T.

    2005-04-01

    We demonstrate for the first time an adaptive optics (AO) spectral OCT retina camera that acquires single-shot B-scans of the living human retina with unprecedented 3D resolution (2.9 μm lateral; 5.5 μm axial). The camera centers on a Michelson interferometer that consists of a superluminescent diode for line-illuminating the subject's retina; a voice coil translator for controlling the optical path length of the reference channel; and an imaging spectrometer that is cascaded with a 12-bit area CCD array. The imaging spectrometer was designed with negligible off-axis aberrations and was constructed from stock optical components. AO was integrated into the detector channel of the interferometer and dynamically compensated for most of the ocular aberration across a 6 mm pupil. Short bursts of B-scans, with 100 A-scans each, were successfully acquired at 1 msec intervals. Camera sensitivity was found sufficient to detect reflections from all major retinal layers. Individual outer segments of photoreceptors at different retinal eccentricities were observed in vivo. Periodicity of the outer segments matched cone spacing as measured from AO flood-illuminated images of the same patches of retina.

  4. Development of 3D microwave imaging reflectometry in LHD (invited).

    PubMed

    Nagayama, Y; Kuwahara, D; Yoshinaga, T; Hamada, Y; Kogi, Y; Mase, A; Tsuchiya, H; Tsuji-Iio, S; Yamaguchi, S

    2012-10-01

    Three-dimensional (3D) microwave imaging reflectometry has been developed in the Large Helical Device to visualize the fluctuating reflection surface caused by density fluctuations. The plasma is illuminated by a probe wave with four frequencies, which correspond to four radial positions. The imaging optics forms the image of the cut-off surface onto 2D (7 × 7 channel) horn antenna mixer arrays. Multi-channel receivers have also been developed using micro-strip-line technology to handle many channels at reasonable cost. This system was first applied to observe the edge harmonic oscillation (EHO), an MHD mode with many harmonics that appears in the edge plasma. A narrow structure along field lines is observed during EHO. PMID:23126965

  5. 3D-LZ helicopter ladar imaging system

    NASA Astrophysics Data System (ADS)

    Savage, James; Harrington, Walter; McKinley, R. Andrew; Burns, H. N.; Braddom, Steven; Szoboszlay, Zoltan

    2010-04-01

    A joint-service team led by the Air Force Research Laboratory's Munitions and Sensors Directorates completed a successful flight test demonstration of the 3D-LZ Helicopter LADAR Imaging System. This was a milestone demonstration in the development of technology solutions for a problem known as "helicopter brownout", the loss of situational awareness caused by swirling sand during approach and landing. The 3D-LZ LADAR was developed by H.N. Burns Engineering and integrated with the US Army Aeroflightdynamics Directorate's Brown-Out Symbology System aircraft state symbology aboard a US Army EH-60 Black Hawk helicopter. The combination of these systems provided an integrated degraded visual environment landing solution with landing zone situational awareness as well as aircraft guidance and obstacle avoidance information. Pilots from the U.S. Army, Air Force, Navy, and Marine Corps achieved a 77% landing rate in full brownout conditions at a test range at Yuma Proving Ground, Arizona. This paper will focus on the LADAR technology used in 3D-LZ and the results of this milestone demonstration.

  6. Effective classification of 3D image data using partitioning methods

    NASA Astrophysics Data System (ADS)

    Megalooikonomou, Vasileios; Pokrajac, Dragoljub; Lazarevic, Aleksandar; Obradovic, Zoran

    2002-03-01

    We propose partitioning-based methods to facilitate the classification of 3-D binary image data sets of regions of interest (ROIs) with highly non-uniform distributions. The first method is based on recursive dynamic partitioning of a 3-D volume into a number of 3-D hyper-rectangles. For each hyper-rectangle, we consider, as a potential attribute, the number of voxels (volume elements) that belong to ROIs. A hyper-rectangle is partitioned only if the corresponding attribute does not have high discriminative power, as determined by statistical tests, but the hyper-rectangle is still sufficiently large for further splitting. The final discriminative hyper-rectangles form new attributes that are further employed in neural network classification models. The second method is based on maximum likelihood, employing non-spatial (k-means) and spatial (DBSCAN) clustering algorithms to estimate the parameters of the underlying distributions. The proposed methods were experimentally evaluated on mixtures of Gaussian distributions, on realistic lesion-deficit data generated by a simulator conforming to a clinical study, and on synthetic fractal data. Both proposed methods provided good classification on Gaussian mixtures and on realistic data. However, the experimental results on fractal data indicated that the clustering-based methods were only slightly better than random guessing, while recursive partitioning provided significantly better classification accuracy.
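A minimal sketch of the recursive dynamic partitioning idea, assuming a toy binary volume; the size cutoff below is a stand-in for the statistical discriminative-power test the abstract describes:

```python
import numpy as np

def partition(vol, box=None, min_size=2):
    """Recursively split a 3-D binary volume into hyper-rectangles and
    return (box, ROI-voxel-count) attribute pairs.  The size cutoff
    stands in for the paper's statistical discriminative-power test."""
    if box is None:
        box = (0, vol.shape[0], 0, vol.shape[1], 0, vol.shape[2])
    z0, z1, y0, y1, x0, x1 = box
    count = int(vol[z0:z1, y0:y1, x0:x1].sum())
    sizes = (z1 - z0, y1 - y0, x1 - x0)
    if count == 0 or max(sizes) <= min_size:   # stop splitting
        return [(box, count)]
    axis = int(np.argmax(sizes))               # halve the longest axis
    lo, hi = box[2 * axis], box[2 * axis + 1]
    left, right = list(box), list(box)
    left[2 * axis + 1] = right[2 * axis] = (lo + hi) // 2
    return (partition(vol, tuple(left), min_size)
            + partition(vol, tuple(right), min_size))

vol = np.zeros((8, 8, 8), dtype=bool)
vol[1:3, 1:3, 1:3] = True                      # a small 2x2x2 ROI
leaves = partition(vol)
```

The leaf boxes tile the volume, so their voxel counts sum to the total ROI size; in the paper these counts become classifier attributes.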

  7. Image segmentation and 3D visualization for MRI mammography

    NASA Astrophysics Data System (ADS)

    Li, Lihua; Chu, Yong; Salem, Angela F.; Clark, Robert A.

    2002-05-01

    MRI mammography has a number of advantages, including the tomographic, and therefore three-dimensional (3-D), nature of the images. It allows the application of MRI mammography to breasts with dense tissue, post-operative scarring, and silicone implants. However, due to the vast quantity of images and the subtlety of differences between MR sequences, there is a need for reliable computer diagnosis to reduce the radiologist's workload. The purpose of this work was to develop automatic breast/tissue segmentation and visualization algorithms to aid physicians in detecting and observing abnormalities in the breast. Two segmentation algorithms were developed: one for breast segmentation, the other for glandular tissue segmentation. In breast segmentation, the MRI image is first segmented using an adaptive growing clustering method. Two tracing algorithms were then developed to refine the breast-air and chest-wall boundaries of the breast. The glandular tissue segmentation was performed using an adaptive thresholding method, in which the threshold value was spatially adapted using a sliding window. The 3D visualization of the segmented 2D slices of MRI mammography was implemented in the IDL environment. Breast and glandular tissue rendering, slicing and animation were displayed.
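The sliding-window adaptive thresholding used for the glandular tissue segmentation can be sketched as follows; the window size, offset, and synthetic image are assumptions for illustration:

```python
import numpy as np

def adaptive_threshold(img, win=5, offset=0.0):
    """Segment pixels brighter than the mean of a sliding window
    centred on them (edge-padded), i.e. a spatially adaptive threshold."""
    h, w = img.shape
    r = win // 2
    pad = np.pad(img, r, mode='edge')
    out = np.zeros((h, w), dtype=bool)
    for i in range(h):
        for j in range(w):
            out[i, j] = img[i, j] > pad[i:i + win, j:j + win].mean() + offset
    return out

# synthetic slice: a bright "glandular" patch on a dark background
img = np.zeros((10, 10))
img[4:6, 4:6] = 1.0
mask = adaptive_threshold(img, win=5)
```

Because the threshold follows the local mean, the bright patch is kept while flat background regions are rejected even under slow intensity drift.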

  8. Extracting 3D layout from a single image using global image structures.

    PubMed

    Lou, Zhongyu; Gevers, Theo; Hu, Ninghang

    2015-10-01

    Extracting the pixel-level 3D layout from a single image is important for different applications, such as object localization and image and video categorization. Traditionally, the 3D layout is derived by solving a pixel-level classification problem. However, the image-level 3D structure can be very beneficial for extracting the pixel-level 3D layout, since it reflects how pixels in the image are organized. In this paper, we propose an approach that first predicts the global image structure and then uses the global structure for fine-grained pixel-level 3D layout extraction. In particular, image features are extracted based on multiple layout templates. We then learn a discriminative model for classifying the global layout at the image level. Using latent variables, we implicitly model the sublevel semantics of the image, which enriches the expressiveness of our model. After the image-level structure is obtained, it is used as prior knowledge to infer the pixel-wise 3D layout. Experiments show that the results of our model outperform the state-of-the-art methods by 11.7% for 3D structure classification. Moreover, we show that employing the 3D structure prior information yields accurate 3D scene layout segmentation. PMID:25966478

  9. 3D Multispectral Light Propagation Model For Subcutaneous Veins Imaging

    SciTech Connect

    Paquit, Vincent C; Price, Jeffery R; Meriaudeau, Fabrice; Tobin Jr, Kenneth William

    2008-01-01

    In this paper, we describe a new 3D light propagation model aimed at understanding the effects of various physiological properties on subcutaneous vein imaging. In particular, we build upon the well known MCML (Monte Carlo Multi Layer) code and present a tissue model that improves upon the current state-of-the-art by: incorporating physiological variation, such as melanin concentration, fat content, and layer thickness; including veins of varying depth and diameter; using curved surfaces from real arm shapes; and modeling the vessel wall interface. We describe our model, present results from the Monte Carlo modeling, and compare these results with those obtained with other Monte Carlo methods.
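A toy Monte Carlo photon random walk in the spirit of MCML (single homogeneous layer, isotropic scattering); the optical coefficients and termination threshold are illustrative assumptions, far simpler than the layered tissue model described above:

```python
import numpy as np

def simulate_photons(n=200, mu_a=1.0, mu_s=9.0, seed=0):
    """Launch n photon packets downward into a homogeneous half-space,
    take exponentially distributed steps, reduce the packet weight by
    the single-scattering albedo at each event, scatter isotropically,
    and record each photon's maximum penetration depth."""
    rng = np.random.default_rng(seed)
    mu_t = mu_a + mu_s
    albedo = mu_s / mu_t
    depths = np.empty(n)
    for k in range(n):
        pos = np.zeros(3)
        direction = np.array([0.0, 0.0, 1.0])  # +z points into the tissue
        weight, max_depth = 1.0, 0.0
        while weight > 1e-3:
            step = -np.log(1.0 - rng.random()) / mu_t
            pos = pos + step * direction
            if pos[2] < 0:                     # escaped through the surface
                break
            max_depth = max(max_depth, pos[2])
            weight *= albedo                   # absorption at this event
            v = rng.normal(size=3)             # isotropic new direction
            direction = v / np.linalg.norm(v)
        depths[k] = max_depth
    return depths

depths = simulate_photons(n=200)
```

A full MCML-style model would add layer boundaries, Fresnel reflection, and anisotropic (Henyey-Greenstein) phase functions on top of this skeleton.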

  10. 3D multispectral light propagation model for subcutaneous veins imaging

    NASA Astrophysics Data System (ADS)

    Paquit, Vincent; Price, Jeffery R.; Mériaudeau, Fabrice; Tobin, Kenneth W.

    2008-03-01

    In this paper, we describe a new 3D light propagation model aimed at understanding the effects of various physiological properties on subcutaneous vein imaging. In particular, we build upon the well known MCML (Monte Carlo Multi Layer) code and present a tissue model that improves upon the current state-of-the-art by: incorporating physiological variation, such as melanin concentration, fat content, and layer thickness; including veins of varying depth and diameter; using curved surfaces from real arm shapes; and modeling the vessel wall interface. We describe our model, present results from the Monte Carlo modeling, and compare these results with those obtained with other Monte Carlo methods.

  11. 3D laser optoacoustic ultrasonic imaging system for preclinical research

    NASA Astrophysics Data System (ADS)

    Ermilov, Sergey A.; Conjusteau, André; Hernandez, Travis; Su, Richard; Nadvoretskiy, Vyacheslav; Tsyboulski, Dmitri; Anis, Fatima; Anastasio, Mark A.; Oraevsky, Alexander A.

    2013-03-01

    In this work, we introduce a novel three-dimensional imaging system for in vivo high-resolution anatomical and functional whole-body visualization of small animal models, developed for preclinical and other types of biomedical research. The system (LOUIS-3DM) combines multi-wavelength optoacoustic tomography and ultrawide-band laser ultrasound tomography to obtain coregistered maps of tissue optical absorption and acoustic properties, displayed within the skin outline of the studied animal. The most promising applications of the LOUIS-3DM include 3D angiography, cancer research, and longitudinal studies of the biological distribution of optoacoustic contrast agents (carbon nanotubes, metal plasmonic nanoparticles, etc.).

  12. Trini Diagram: imaging emotional identity 3D positioning tool

    NASA Astrophysics Data System (ADS)

    Castelli, Clino T.

    1999-12-01

    The possibility of associating an easy-to-use mental 3D model with a computer interface is the main innovation of the Trini Diagram. For the first time, data of an emotional character can be positioned in relation to one another, permitting an intersubjective interpretation and treatment using computer tools. One of the potential applications for this new technique is that of the 'interfacing engine' for cataloging and retrieval in image banks. The Trini Diagram can also become a fundamental architecture for the construction of 'subjective interfaces' for a new form of man-machine interaction.

  13. Pulsatile flow artifacts in 3D magnetic resonance imaging.

    PubMed

    Frank, L R; Buxton, R B; Kerber, C W

    1993-09-01

    Some of the important features of how pulsatile flow generates artifacts in three-dimensional magnetic resonance imaging are analyzed and demonstrated. Time variations in the magnetic resonance signal during the heart cycle lead to more complex patterns of artifacts in 3D imaging than in 2D imaging. The appearance and location of these artifacts within the image volume are shown to be describable as displacements along a line in a plane parallel to that defined by the phase and volume encode directions. The angle of the line in the plane depends solely upon the imaging parameters while the ghost displacement along the line is proportional to the signal modulation frequency. Aliasing of these ghosts leads to a variety of artifact patterns which are sensitive to the pulsation period and repetition time of the pulse sequence. Numerical simulations of these effects were found to be in good agreement with experimental images of an elastic model of a human carotid artery under simulated physiological conditions and with images of two human subjects. PMID:8412600

  14. Control of Retinal Ganglion Cell Positioning and Neurite Growth: Combining 3D Printing with Radial Electrospun Scaffolds.

    PubMed

    Kador, Karl E; Grogan, Shawn P; Dorthé, Erik W; Venugopalan, Praseeda; Malek, Monisha F; Goldberg, Jeffrey L; D'lima, Darryl D

    2016-02-01

    Retinal ganglion cells (RGCs) are responsible for the transfer of signals from the retina to the brain. As part of the central nervous system, RGCs are unable to regenerate following injury, and implanted cells have limited capacity to orient and integrate in vivo. During development, secreted guidance molecules, along with signals from the extracellular matrix and the vasculature, guide cell positioning (for example, around the fovea) and axon outgrowth; however, these changes are temporally regulated and are not the same in the adult. Here, we combine electrospun cell transplantation scaffolds capable of RGC neurite guidance with thermal inkjet 3D cell printing techniques capable of precise positioning of RGCs on the scaffold surface. Optimal printing parameters are developed for viability, electrophysiological function, and neurite pathfinding. Different media, commonly used to promote RGC survival and growth, were tested under varying conditions. When printed in growth media containing both brain-derived neurotrophic factor (BDNF) and ciliary neurotrophic factor (CNTF), RGCs maintained survival and normal electrophysiological function, and displayed radial axon outgrowth when printed onto electrospun scaffolds. These results demonstrate that 3D printing technology may be combined with complex electrospun surfaces in the design of future retinal models or therapies. PMID:26729061

  15. Quantitative 3D Optical Imaging: Applications in Dosimetry and Biophysics

    NASA Astrophysics Data System (ADS)

    Thomas, Andrew Stephen

    Optical-CT has been shown to be a potentially useful imaging tool for the two very different spheres of biologists and radiation therapy physicists, but it has yet to live up to that potential. In radiation therapy, researchers have used optical-CT for the readout of 3D dosimeters, but it is yet to be a clinically relevant tool, as the technology is too slow to be considered practical. Biologists have used the technique for structural imaging, but have struggled with emission tomography, as the reality of photon attenuation for both excitation and emission has made the images quantitatively irrelevant. Dosimetry. The DLOS (Duke Large field of view Optical-CT Scanner) was designed and constructed to make 3D dosimetry utilizing optical-CT a fast and practical tool while maintaining the accuracy of readout of the previous, slower readout technologies. Upon construction, optimization, and implementation of several components, including a diffuser, band pass filter, registration mount, and fluid filtration system, the dosimetry system provides high quality data comparable to or exceeding that of commercial products. In addition, a stray light correction algorithm was tested and implemented. The DLOS, in combination with the 3D dosimeter it was designed for, PRESAGE(TM), then underwent rigorous commissioning and benchmarking tests validating its performance against gold standard data, including a set of 6 irradiations. DLOS commissioning tests resulted in sub-mm isotropic spatial resolution (MTF >0.5 for frequencies of 1.5 lp/mm) and a dynamic range of ~60 dB. Flood field uniformity was 10% and stable after 45 minutes. Stray light proved to be small, due to telecentricity, but even the residual can be removed through deconvolution.
Benchmarking tests showed that the mean 3D passing gamma rate (3%, 3 mm, 5% dose threshold) over the 6 benchmark data sets was 97.3% +/- 0.6% (range 96%-98%), with scans totaling ~10 minutes, indicating excellent ability to perform 3D dosimetry while improving the speed of readout. Noise was low at ~2% for 2 mm reconstructions. The DLOS/PRESAGE(TM) benchmark tests show consistently excellent performance, with very good agreement with simple known distributions. The telecentric design was critical to enabling fast (~15 min) imaging with minimal stray light artifacts. The system produces accurate isotropic 2 mm^3 dose data over clinical volumes (e.g. phantoms of 16 cm diameter and 12 cm height), and represents a uniquely useful and versatile new tool for commissioning complex radiotherapy techniques. The system also has wide versatility, and has successfully been used in preliminary tests with protons and with kV irradiations. Biology. Attenuation corrections for optical-emission-CT were done by modeling physical parameters of the imaging setup within the framework of an ordered subset expectation maximization (OSEM) iterative reconstruction algorithm. This process has a well-documented history in single photon emission computed tomography (SPECT), where it is inherently simpler due to the lack of excitation photons to account for. Excitation source strength distribution and excitation and emission attenuation were modeled. The accuracy of the correction was investigated by imaging phantoms containing known distributions of attenuation and fluorophores. The correction was validated on a manufactured phantom designed to give uniform emission in a central cuboidal region, and later applied to a cleared mouse brain with GFP (green fluorescent protein)-labeled vasculature and a cleared 4T1 xenograft flank tumor with constitutive RFP (red fluorescent protein). Reconstructions were compared to corresponding slices imaged with a fluorescent dissection microscope.
Significant optical-ECT attenuation artifacts were observed in the uncorrected phantom images, which appeared up to 80% less intense than the verification image in the central region. The corrected phantom images showed excellent agreement with the verification image, with only slight variations. The corrected tissue sample reconstructions showed general agreement with the verification images. Comprehensive modeling in optical-ECT imaging was successfully implemented, creating quantitatively accurate 3D fluorophore distributions. This work represents the first successful attempt to encompass such a complete set of corrections. This method provides a means to accurately obtain 3D fluorophore distributions, with the potential to better understand tumor biology and treatment responses.
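The OSEM framework mentioned in the abstract reduces, with a single subset, to the classic MLEM update; attenuation factors would be folded into the system matrix. This sketch uses a synthetic system matrix and noiseless data as assumptions:

```python
import numpy as np

def mlem(A, y, n_iter=300):
    """Maximum-likelihood EM update for emission tomography; OSEM with
    a single subset reduces to exactly this iteration.  Attenuation
    correction enters through the entries of the system matrix A."""
    x = np.ones(A.shape[1])                 # positive initial estimate
    sens = A.T @ np.ones(A.shape[0])        # sensitivity image A^T 1
    for _ in range(n_iter):
        ratio = y / (A @ x)                 # measured / predicted counts
        x = x * (A.T @ ratio) / sens        # multiplicative update
    return x

rng = np.random.default_rng(1)
A = rng.random((20, 8)) + 0.1               # synthetic positive system matrix
x_true = rng.random(8) + 0.5                # synthetic emission image
y = A @ x_true                              # noiseless projection data
x_rec = mlem(A, y)
```

The multiplicative form keeps the estimate positive at every iteration, which is why the method is favored for count data.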

  16. 3D/2D image registration by image transformation descriptors (ITDs) for thoracic aorta imaging

    NASA Astrophysics Data System (ADS)

    Łubniewski, Paweł J.; Sarry, Laurent; Miguel, Bruno; Lohou, Christophe

    2013-03-01

    In this article, we present a novel image registration technique. Unlike most state-of-the-art methods, our approach allows us to compute the relationship between images directly. The proposed registration framework, built in a modular way, can be adjusted to particular problems. Tests on a sample image database of the thoracic aorta showed that our method is fast and robust and can be used successfully in many cases. We have enhanced our previous work to provide a rapid 3D/2D registration method. It uses direct computation of image transformation descriptors (ITDs) to align the projection images. The 3D transformation is estimated by a technique that proposes a 3D pose update by interpreting the 2D transform of the projections in the 3D domain. The presented 3D/2D registration technique based on ITDs can be used as an initialization step for classic registration algorithms. Its unique properties can be advantageous for many image alignment problems. The possibility of using different descriptors, adapted to particular cases, makes our approach very flexible. Fast computation is an important feature and motivates the use of our technique as an initialization step before the execution of well-known standard algorithms, which may be more precise but are slow and sensitive to parameter initialization.

  17. Retinal image quality, reading and myopia.

    PubMed

    Collins, Michael J; Buehren, Tobias; Iskander, D Robert

    2006-01-01

    Analysis was undertaken of the retinal image characteristics of the best spectacle-corrected eyes of progressing myopes (n = 20, mean age = 22 years; mean spherical equivalent = -3.84 D) and a control group of emmetropes (n = 20, mean age = 23 years; mean spherical equivalent = 0.00 D) before and after a 2-h reading task. Retinal image quality was calculated from wavefront measurements taken with a Hartmann-Shack sensor with fixation on both a far (5.5 m) and a near (individual reading distance) target. The visual Strehl ratio based on the optical transfer function (VSOTF) was significantly worse for the myopes prior to reading for both the far (p = 0.01) and near (p = 0.03) conditions. The myopic group showed significant reductions in various aspects of retinal image quality compared with the emmetropes, involving components of the modulation transfer function, phase transfer function and point spread function, often along the vertical meridian of the eye. The depth of focus of the myopes (0.54 D) was larger (p = 0.02) than that of the emmetropes (0.42 D), and the distribution of refractive power (away from the optimal sphero-cylinder) was greater in the myopic eyes (variance of distributions p < 0.05). We found evidence that the lead and lag of accommodation are influenced by the higher order aberrations of the eye (e.g. significant correlations between lead/lag and the peak of the visual Strehl ratio based on the MTF). This could indicate that the higher accommodation lags seen in myopes provide optimized retinal image characteristics. The interaction between low and high order aberrations of the eye plays a significant role in reducing the retinal image quality of myopic eyes compared with emmetropes. PMID:15913701

  18. Retinal atlas statistics from color fundus images

    NASA Astrophysics Data System (ADS)

    Lee, Sangyeol; Abramoff, Michael D.; Reinhardt, Joseph M.

    2010-03-01

    An atlas provides a reference anatomic structure and an associated coordinate system. An atlas may be used in a variety of applications, including segmentation and registration, and can be used to characterize anatomy across a population. We present a method for generating an atlas of the human retina from 500 color fundus image pairs. Each image pair is registered to obtain a larger anatomic field of view. Key retinal anatomic features are selected as atlas landmarks: the disc center, the fovea, and the main vessel arch. An atlas coordinate system is defined based on the statistics of the landmarks. Images from the population are warped into the atlas space to produce a statistical retinal atlas, which can be used for automatic diagnosis, concise indexing, semantic blending, etc.
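Mapping an image's landmarks into an atlas coordinate frame is commonly done with a least-squares similarity transform to the mean landmark configuration. This Umeyama/Procrustes-style sketch uses made-up 2D landmark coordinates as assumptions and is not the paper's exact construction:

```python
import numpy as np

def similarity_align(src, dst):
    """Least-squares similarity transform (scale, rotation, translation)
    mapping 2-D landmark set src onto dst (Umeyama/Procrustes)."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    s, d = src - mu_s, dst - mu_d
    U, S, Vt = np.linalg.svd(s.T @ d)       # cross-covariance SVD
    sign = np.sign(np.linalg.det(Vt.T @ U.T))
    D = np.diag([1.0, sign])                # guard against reflections
    R = Vt.T @ D @ U.T
    scale = np.trace(D @ np.diag(S)) / (s ** 2).sum()
    t = mu_d - scale * (R @ mu_s)
    return scale, R, t                      # maps p to scale*R@p + t

# made-up landmarks (e.g. disc center, fovea, arch points) in one image,
# and the same landmarks seen under a known similarity transform
rng = np.random.default_rng(2)
src = rng.random((5, 2))
th = 0.3
R0 = np.array([[np.cos(th), -np.sin(th)], [np.sin(th), np.cos(th)]])
t0 = np.array([2.0, 1.0])
dst = 1.5 * src @ R0.T + t0
scale, R, t = similarity_align(src, dst)
```

With noiseless correspondences the known transform is recovered exactly; in atlas building, each image's landmarks would be aligned to the population mean this way before intensity warping.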

  19. Geometric corner extraction in retinal fundus images.

    PubMed

    Lee, Jimmy Addison; Lee, Beng Hai; Xu, Guozhen; Ong, Ee Ping; Wong, Damon Wing Kee; Liu, Jiang; Lim, Tock Han

    2014-01-01

    This paper presents a novel approach to finding corner features between retinal fundus images. Such images are relatively textureless and contain uneven shading, which renders state-of-the-art approaches, e.g., SIFT, ineffective. Many of the detected features have low repeatability (<10%), especially when the viewing angle difference between the corresponding images is large. Our approach is based on finding blood vessels using a robust line fitting algorithm and locating corner features at the bends and intersections of the blood vessels. These corner features proved superior to state-of-the-art feature extraction methods (i.e. SIFT, SURF, Harris, Good Features To Track (GFTT) and FAST) with regard to repeatability and stability in our experiment. On average, the approach yields close to 10% more repeatable detected features than the second-best method across corresponding retinal image pairs in the experiment. PMID:25569921
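Once a vessel skeleton is available, locating corner candidates at vessel bends and intersections can be approximated by flagging skeleton pixels with three or more neighbours. The plus-shaped toy skeleton below is an assumption for illustration; the paper's robust line-fitting stage is not reproduced here:

```python
import numpy as np

def junction_points(skel):
    """Flag skeleton pixels with 3+ of their 8 neighbours on the
    skeleton: vessel bifurcations and crossings (corner candidates)."""
    h, w = skel.shape
    pad = np.pad(skel.astype(int), 1)
    nbrs = sum(pad[1 + dy:h + 1 + dy, 1 + dx:w + 1 + dx]
               for dy in (-1, 0, 1) for dx in (-1, 0, 1)
               if (dy, dx) != (0, 0))
    return (skel > 0) & (nbrs >= 3)

# toy vessel skeleton: two segments crossing at (3, 3)
skel = np.zeros((7, 7), dtype=int)
skel[3, :] = 1
skel[:, 3] = 1
junctions = junction_points(skel)
```

Because vessel crossings are anatomically stable, such points remain repeatable even under large viewing-angle changes where texture-based detectors fail.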

  20. Recent progress in 3-D imaging of sea freight containers

    SciTech Connect

    Fuchs, Theobald; Schön, Tobias; Sukowski, Frank; Dittmann, Jonas; Hanke, Randolf

    2015-03-31

    The inspection of very large objects like sea freight containers with X-ray Computed Tomography (CT) is an emerging technology. A complete 3-D CT scan of a sea-freight container takes several hours. Of course, this is too slow to apply to a large number of containers. However, the benefits of 3-D CT for sealed freight are obvious: detection of potential threats or illicit cargo without being confronted with legal complications, high time consumption, or risks for the security personnel during a manual inspection. Recently, distinct progress has been made in the field of reconstruction from projections with only a relatively low number of angular positions. Instead of today's 500 to 1000 rotational steps, as needed for conventional CT reconstruction techniques, this new class of algorithms has the potential to reduce the number of projection angles by approximately a factor of 10. The main drawback of these advanced iterative methods is their high computational cost. But as computational power is steadily getting cheaper, there will be practical applications of these complex algorithms in the foreseeable future. In this paper, we discuss the properties of iterative image reconstruction algorithms and show results of their application to CT of extremely large objects by scanning a sea-freight container. A specific test specimen is used to quantitatively evaluate the image quality in terms of spatial and contrast resolution as a function of the number of projections.
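A generic sketch of few-view iterative reconstruction in the SIRT/Landweber family (not the specific advanced algorithms the abstract alludes to); the system matrix and projection data are synthetic assumptions:

```python
import numpy as np

def sirt(A, y, n_iter=500):
    """Landweber/SIRT-style iteration: repeatedly back-project the
    projection residual, with a non-negativity constraint as is usual
    for attenuation images."""
    relax = 1.0 / np.linalg.norm(A, 2) ** 2   # step size ensuring convergence
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = x + relax * (A.T @ (y - A @ x))
        x = np.clip(x, 0.0, None)
    return x

rng = np.random.default_rng(3)
A = rng.random((15, 10))       # synthetic few-view projection matrix
x_true = rng.random(10)        # synthetic attenuation image
y = A @ x_true                 # noiseless projections
x_rec = sirt(A, y)
```

Unlike filtered back-projection, such iterative schemes tolerate sparse angular sampling, which is what makes the factor-of-10 reduction in projection angles plausible, at the price of many matrix-vector products per reconstruction.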

  1. 3-D visualization and animation technologies in anatomical imaging

    PubMed Central

    McGhee, John

    2010-01-01

    This paper explores a 3-D computer artist's approach to the creation of three-dimensional computer-generated imagery (CGI) derived from clinical scan data. Interpretation of scientific imagery, such as magnetic resonance imaging (MRI), is restricted to the eye of the trained medical practitioner in a clinical or scientific context. In the research work described here, MRI data are visualized and interpreted by a 3-D computer artist using the tools of the digital animator to navigate image complexity and widen interaction. In this process, the artefact moves across disciplines; it is no longer tethered to its diagnostic origins. It becomes an object that has visual attributes, such as light, texture and composition, and a visual aesthetic of its own. The introduction of these visual attributes provides a platform for improved accessibility by a lay audience. The paper argues that this more artisan approach to clinical data visualization has a potential real-world application as a communicative tool for clinicians and patients during consultation. PMID:20002229

  2. Recent progress in 3-D imaging of sea freight containers

    NASA Astrophysics Data System (ADS)

    Fuchs, Theobald; Schön, Tobias; Dittmann, Jonas; Sukowski, Frank; Hanke, Randolf

    2015-03-01

    The inspection of very large objects like sea freight containers with X-ray Computed Tomography (CT) is an emerging technology. A complete 3-D CT scan of a sea-freight container takes several hours. Of course, this is too slow to apply to a large number of containers. However, the benefits of 3-D CT for sealed freight are obvious: detection of potential threats or illicit cargo without being confronted with legal complications, high time consumption, or risks for the security personnel during a manual inspection. Recently, distinct progress has been made in the field of reconstruction from projections with only a relatively low number of angular positions. Instead of today's 500 to 1000 rotational steps, as needed for conventional CT reconstruction techniques, this new class of algorithms has the potential to reduce the number of projection angles by approximately a factor of 10. The main drawback of these advanced iterative methods is their high computational cost. But as computational power is steadily getting cheaper, there will be practical applications of these complex algorithms in the foreseeable future. In this paper, we discuss the properties of iterative image reconstruction algorithms and show results of their application to CT of extremely large objects by scanning a sea-freight container. A specific test specimen is used to quantitatively evaluate the image quality in terms of spatial and contrast resolution as a function of the number of projections.

  3. Three-dimensional Retinal Imaging with High-Speed Ultrahigh-Resolution Optical Coherence Tomography

    PubMed Central

    Wojtkowski, Maciej; Srinivasan, Vivek; Fujimoto, James G.; Ko, Tony; Schuman, Joel S.; Kowalczyk, Andrzej; Duker, Jay S.

    2007-01-01

    Purpose To demonstrate high-speed, ultrahigh-resolution, 3-dimensional optical coherence tomography (3D OCT) and new protocols for retinal imaging. Methods Ultrahigh-resolution OCT using broadband light sources achieves axial image resolutions of ~2 μm, compared with the ~10-μm resolution of current commercial OCT instruments. High-speed OCT using spectral/Fourier domain detection enables dramatic increases in imaging speed. Three-dimensional OCT retinal imaging is performed in normal human subjects using high-speed ultrahigh-resolution OCT. Three-dimensional OCT data of the macula and optic disc are acquired using a dense raster scan pattern. New processing and display methods are demonstrated for generating virtual OCT fundus images; cross-sectional OCT images with arbitrary orientations; quantitative maps of retinal, nerve fiber layer, and other intraretinal layer thicknesses; and optic nerve head topographic parameters. Results Three-dimensional OCT imaging enables new imaging protocols that improve visualization and mapping of retinal microstructure. An OCT fundus image can be generated directly from the 3D OCT data, which enables precise and repeatable registration of cross-sectional OCT images and thickness maps with fundus features. Optical coherence tomography images with arbitrary orientations, such as circumpapillary scans, can be generated from 3D OCT data. Mapping of total retinal thickness and thicknesses of the nerve fiber layer, photoreceptor layer, and other intraretinal layers is demonstrated. Measurement of optic nerve head topography and disc parameters is also possible. Three-dimensional OCT enables measurements that are similar to those of standard instruments, including the StratusOCT, GDx, HRT, and RTA. Conclusion Three-dimensional OCT imaging can be performed using high-speed ultrahigh-resolution OCT. Three-dimensional OCT provides comprehensive visualization and mapping of retinal microstructures. 
The high data acquisition speeds enable high-density data sets with large numbers of transverse positions on the retina, which reduces the possibility of missing focal pathologies. In addition to providing image information such as OCT cross-sectional images, OCT fundus images, and 3D rendering, quantitative measurement and mapping of intraretinal layer thickness and topographic features of the optic disc are possible. We hope that 3D OCT imaging may help to elucidate the structural changes associated with retinal disease as well as improve early diagnosis and monitoring of disease progression and response to treatment. PMID:16140383
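    The OCT fundus image described above is, in essence, an axial projection of the 3D data set, so every fundus pixel is registered to exactly one A-scan by construction. A minimal NumPy sketch on a synthetic volume (array sizes and axis order are illustrative assumptions):

```python
import numpy as np

# Synthetic 3D OCT volume: (n_y_scans, n_x_scans, n_depth) linear intensities.
rng = np.random.default_rng(1)
volume = rng.random((64, 128, 512))

# OCT fundus image: integrate (here, average) each A-scan along depth.
fundus = volume.mean(axis=2)

# A cross-section with a chosen orientation, e.g. one horizontal B-scan,
# is just a slice of the same registered volume.
b_scan = volume[32, :, :]        # shape (n_x_scans, n_depth)
```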

  4. Abdominal aortic aneurysm imaging with 3-D ultrasound: 3-D-based maximum diameter measurement and volume quantification.

    PubMed

    Long, A; Rouet, L; Debreuve, A; Ardon, R; Barbe, C; Becquemin, J P; Allaire, E

    2013-08-01

    The clinical reliability of 3-D ultrasound imaging (3-DUS) in quantification of abdominal aortic aneurysm (AAA) was evaluated. B-mode and 3-DUS images of AAAs were acquired for 42 patients. AAAs were segmented. A 3-D-based maximum diameter (Max3-D) and partial volume (Vol30) were defined and quantified. Comparisons between 2-D (Max2-D) and 3-D diameters and between orthogonal acquisitions were performed. Intra- and inter-observer reproducibility was evaluated. Intra- and inter-observer coefficients of repeatability (CRs) were less than 5.18 mm for Max3-D. Intra-observer and inter-observer CRs were respectively less than 6.16 and 8.71 mL for Vol30. The mean of normalized errors of Vol30 was around 7%. Correlation between Max2-D and Max3-D was 0.988 (p < 0.0001). Max3-D and Vol30 were not influenced by a probe rotation of 90°. Use of 3-DUS to quantify AAA is a new approach in clinical practice. The present study proposed and evaluated dedicated parameters. Their reproducibility makes the technique clinically reliable. PMID:23743100
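    Repeatability figures like those quoted above are commonly computed as Bland-Altman coefficients of repeatability, CR = 1.96 × SD of the paired differences. A sketch with synthetic diameter data; the noise levels and values are illustrative, not taken from the study:

```python
import numpy as np

def coefficient_of_repeatability(first, second):
    """Bland-Altman coefficient of repeatability: 1.96 x SD of the paired
    differences between two measurements of the same quantity."""
    diffs = np.asarray(first) - np.asarray(second)
    return 1.96 * diffs.std(ddof=1)

# Synthetic repeated Max3-D measurements (mm), for illustration only.
rng = np.random.default_rng(2)
true_diam = rng.uniform(35, 60, size=42)         # 42 patients, as in the study
obs1 = true_diam + rng.normal(0, 1.2, size=42)   # observer 1
obs2 = true_diam + rng.normal(0, 1.2, size=42)   # observer 2

cr = coefficient_of_repeatability(obs1, obs2)    # smaller CR = more repeatable
```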

  5. Computing 3D head orientation from a monocular image sequence

    NASA Astrophysics Data System (ADS)

    Horprasert, Thanarat; Yacoob, Yaser; Davis, Larry S.

    1997-02-01

    An approach for estimating 3D head orientation in a monocular image sequence is proposed. The approach employs recently developed image-based parameterized tracking for the face and face features to locate the area in which a sub-pixel parameterized shape estimation of the eye's boundary is performed. This involves tracking five points (four at the eye corners and one at the tip of the nose). We describe an approach that relies on the coarse structure of the face to compute orientation relative to the camera plane. Our approach employs the projective invariance of the cross-ratios of the eye corners and anthropometric statistics to estimate the head yaw, roll and pitch. Analytical and experimental results are reported.
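    The projective invariance the abstract relies on can be demonstrated directly: the cross-ratio of four collinear points is unchanged by any 1D projective transformation. A small NumPy sketch (the point coordinates and map are illustrative):

```python
import numpy as np

def cross_ratio(a, b, c, d):
    """Cross-ratio (a, b; c, d) of four collinear points given by scalar
    coordinates along the line: ((a-c)(b-d)) / ((a-d)(b-c))."""
    return ((a - c) * (b - d)) / ((a - d) * (b - c))

def homography_1d(x, h):
    """1D projective map x -> (h0*x + h1) / (h2*x + h3)."""
    return (h[0] * x + h[1]) / (h[2] * x + h[3])

# Four collinear points, e.g. the projections of the four eye corners.
pts = np.array([0.0, 1.0, 2.5, 4.0])
cr_before = cross_ratio(*pts)

# Any projective transformation of the line leaves the cross-ratio unchanged.
h = (2.0, 1.0, 0.3, 1.5)
cr_after = cross_ratio(*homography_1d(pts, h))
```

    Because the camera projection of a line is exactly such a 1D homography, a cross-ratio measured in the image equals the cross-ratio of the corresponding 3D points.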

  6. High Resolution 3D Radar Imaging of Comet Interiors

    NASA Astrophysics Data System (ADS)

    Asphaug, E. I.; Gim, Y.; Belton, M.; Brophy, J.; Weissman, P. R.; Heggy, E.

    2012-12-01

    Knowing the interiors of comets and other primitive bodies is fundamental to our understanding of how planets formed. We have developed a Discovery-class mission formulation, Comet Radar Explorer (CORE), based on the use of previously flown planetary radar sounding techniques, with the goal of obtaining high resolution 3D images of the interior of a small primitive body. We focus on the Jupiter-Family Comets (JFCs) as these are among the most primitive bodies reachable by spacecraft. Scattered in from far beyond Neptune, they are ultimate targets of a cryogenic sample return mission according to the Decadal Survey. Other suitable targets include primitive NEOs, Main Belt Comets, and Jupiter Trojans. The approach is optimal for small icy bodies ~3-20 km diameter with spin periods faster than about 12 hours, since (a) navigation is relatively easy, (b) radar penetration is global for decameter wavelengths, and (c) repeated overlapping ground tracks are obtained. The science mission can be as short as ~1 month for a fast-rotating JFC. Bodies smaller than ~1 km can be globally imaged, but the navigation solutions are less accurate and the relative resolution is coarse. Larger comets are more interesting, but radar signal is unlikely to be reflected from depths greater than ~10 km. So, JFCs are excellent targets for a variety of reasons. We furthermore focus on the use of Solar Electric Propulsion (SEP) to rendezvous shortly after the comet's perihelion. This approach leaves us with ample power for science operations under dormant conditions beyond ~2-3 AU. This leads to a natural mission approach of distant observation, followed by closer inspection, terminated by a dedicated radar mapping orbit. Radar reflections are obtained from a polar orbit about the icy nucleus, which spins underneath. Echoes are obtained from a sounder operating at dual frequencies 5 and 15 MHz, with 1 and 10 MHz bandwidths respectively. 
The dense network of echoes is used to obtain global 3D images of interior structure to ~20 m, and to map dielectric properties (related to internal composition) to better than 200 m throughout. This is comparable in detail to modern 3D medical ultrasound, although we emphasize that the techniques are somewhat different. An interior mass distribution is obtained through spacecraft tracking, using data acquired during the close, quiet radar orbits. This is aligned with the radar-based images of the interior, and with the shape model, to contribute to the multi-dimensional 3D global view. High-resolution visible imaging provides boundary conditions and geologic context to these interior views. An infrared spectroscopy and imaging campaign upon arrival reveals the time-evolving activity of the nucleus and the structure and composition of the inner coma, and defines surface units. CORE is designed to obtain a total view of a comet, from the coma to the active and evolving surface to the deep interior. Its primary science goal is to obtain clear images of internal structure and dielectric composition. These will reveal how the comet was formed, what it is made of, and how it 'works'. By making global yet detailed connections from interior to exterior, this knowledge will be an important complement to the Rosetta mission, and will lay the foundation for comet nucleus sample return by revealing the areas of shallow depth to 'bedrock', and relating accessible deposits to their originating provenances within the nucleus.

  7. 3D Image Analysis of Geomaterials using Confocal Microscopy

    NASA Astrophysics Data System (ADS)

    Mulukutla, G.; Proussevitch, A.; Sahagian, D.

    2009-05-01

    Confocal microscopy is one of the most significant advances in optical microscopy of the last century. It is widely used in the biological sciences, but its application to geomaterials lingers due to a number of technical problems. Potentially, the technique can perform non-invasive testing on a laser-illuminated sample that fluoresces, using a unique optical sectioning capability that rejects out-of-focus light reaching the confocal aperture. Fluorescence in geomaterials is commonly induced using epoxy doped with a fluorochrome that is impregnated into the sample to enable discrimination of various features such as void space or material boundaries. However, for many geomaterials this method cannot be used, because they do not naturally fluoresce and because epoxy cannot be impregnated into inaccessible parts of the sample due to lack of permeability. As a result, confocal images of most geomaterials that have not undergone extensive sample preparation are of poor quality and lack the image and edge contrast necessary to apply commonly used segmentation techniques and to conduct quantitative study of features such as vesicularity, internal structure, etc. In our present work, we are developing a methodology for quantitative 3D analysis of geomaterial images collected with a confocal microscope, with a minimal amount of prior sample preparation and no added fluorescence. Two sample geomaterials, a volcanic melt sample and a crystal chip containing fluid inclusions, are used to assess the feasibility of the method. A step-by-step process of image analysis includes application of image filtration to enhance the edges or material interfaces, and is based on two segmentation techniques: geodesic active contours and region competition. Both techniques have been applied extensively to the analysis of medical MRI images to segment anatomical structures. 
Preliminary analysis suggests that there is distortion in the shapes of the segmented vesicles, vapor bubbles, and void spaces due to the optical measurements, so corrective actions are being explored. This will establish a practical and reliable framework for an adaptive 3D image processing technique for the analysis of geomaterials using confocal microscopy.

  8. Multimodal Imaging in Hereditary Retinal Diseases

    PubMed Central

    Morara, Mariachiara; Veronese, Chiara; Nucci, Paolo; Ciardella, Antonio P.

    2013-01-01

    Introduction. In this retrospective study we evaluated the multimodal visualization of retinal genetic diseases to better understand their natural course. Material and Methods. We reviewed the charts of 70 consecutive patients with different genetic retinal pathologies who had previously undergone multimodal imaging analyses. Genomic DNA was extracted from peripheral blood and genotyped at the known locus for the different diseases. Results. The medical records of 3 families of a 4-generation pedigree affected by North Carolina macular dystrophy were reviewed. A total of 8 patients with Stargardt disease were evaluated for their two main defining clinical characteristics, yellow subretinal flecks and central atrophy. Nine male patients with a previous diagnosis of choroideremia and eleven female carriers were evaluated. Fourteen patients with Best vitelliform macular dystrophy and 6 family members with autosomal recessive bestrophinopathy were included. Seven patients with enhanced s-cone syndrome were ascertained. Lastly, we included 3 unrelated patients with fundus albipunctatus. Conclusions. In hereditary retinal diseases, clinical examination is often not sufficient for evaluating the patient's condition. Retinal imaging then becomes important in making the diagnosis, in monitoring the progression of disease, and as a surrogate outcome measure of the efficacy of an intervention. PMID:23710333

  9. Web based 3-D medical image visualization on the PC.

    PubMed

    Kim, N; Lee, D H; Kim, J H; Kim, Y; Cho, H J

    1998-01-01

    With the recent advance of the Web and its associated technologies, information sharing in distributed computing environments has gained a great amount of attention from researchers in many application areas, such as medicine, engineering, and business. One basic requirement of distributed medical consultation systems is that geographically dispersed, disparate participants be allowed to exchange information readily with each other. Such software also needs to be supported on a broad range of computer platforms to increase the software's accessibility. In this paper, the development of a World Wide Web-based medical consultation system for radiology imaging is addressed to provide platform independence and greater accessibility. The system supports sharing of 3-dimensional objects. We use VRML (Virtual Reality Modeling Language), which is the de facto standard for 3-D modeling on the Web. 3-D objects are reconstructed from CT or MRI volume data in VRML format, which can be viewed and manipulated easily in Web browsers with a VRML plug-in. The marching cubes method is used to transform scanned volume data sets into the polygonal surfaces of VRML. A decimation algorithm is adopted to reduce the number of meshes in the resulting VRML file. 3-D volume data are often very large in size, hence loading the data on PC-level computers requires a significant reduction of the size of the data, while minimizing the loss of the original shape information. This is also important to decrease network delays. A prototype system has been implemented (http://cybernet5.snu.ac.kr/-cyber/mrivrml.html), and several sessions of experiments were carried out. PMID:10384632
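    A minimal sketch of the VRML output step: serializing a triangle mesh (as produced by marching cubes) into a VRML 2.0 IndexedFaceSet node. The function name is hypothetical; only the node syntax follows the VRML97 specification, in which each face's index list is terminated by -1:

```python
def mesh_to_vrml(vertices, faces):
    """Serialize a triangle mesh as a minimal VRML 2.0 IndexedFaceSet.
    `vertices` is a list of (x, y, z) tuples; `faces` a list of vertex-index
    triples. Each face index list is terminated by -1, per the VRML spec."""
    points = ",\n        ".join("%g %g %g" % v for v in vertices)
    indices = ",\n        ".join("%d, %d, %d, -1" % f for f in faces)
    return (
        "#VRML V2.0 utf8\n"
        "Shape {\n"
        "  geometry IndexedFaceSet {\n"
        "    coord Coordinate {\n"
        "      point [\n        " + points + "\n      ]\n"
        "    }\n"
        "    coordIndex [\n        " + indices + "\n    ]\n"
        "  }\n"
        "}\n"
    )

# A single tetrahedron as a stand-in for a decimated isosurface mesh.
verts = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]
tris = [(0, 1, 2), (0, 1, 3), (0, 2, 3), (1, 2, 3)]
vrml_text = mesh_to_vrml(verts, tris)
```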

  10. Enhancing retinal images by nonlinear registration

    NASA Astrophysics Data System (ADS)

    Molodij, G.; Ribak, E. N.; Glanc, M.; Chenegros, G.

    2015-05-01

    Being able to image the human retina in high resolution opens a new era in many important fields, such as pharmacological research on retinal diseases and research on human cognition, the nervous system, metabolism and blood flow, to name a few. In this paper, we propose to apply knowledge acquired in the fields of optics and imaging in solar astrophysics to improve retinal imaging with a view to medical diagnosis. The main purpose is to assist health care practitioners by enhancing the spatial resolution of retinal images and increasing the level of confidence in abnormal feature detection. We apply a nonlinear registration method using local correlation tracking to increase the field of view and follow structure evolution, using correlation techniques borrowed from solar astronomy. A further purpose is to define a tracer of movements by analyzing local correlations, so as to follow the proper motions of an image from one moment to another, such as changes in optical flow that would be of high interest in a medical diagnosis.
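    The core of correlation tracking is estimating the displacement that maximizes the correlation between frames. A minimal NumPy sketch using FFT-based cross-correlation for a whole-frame integer shift; real pipelines (including the one described above) operate on local windows with subpixel peak fitting, so this is only the basic building block:

```python
import numpy as np

def correlation_shift(ref, img):
    """Estimate the integer (dy, dx) displacement of `img` relative to
    `ref` via FFT-based circular cross-correlation."""
    corr = np.fft.ifft2(np.fft.fft2(img) * np.conj(np.fft.fft2(ref))).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Unwrap shifts larger than half the frame to negative values.
    if dy > ref.shape[0] // 2:
        dy -= ref.shape[0]
    if dx > ref.shape[1] // 2:
        dx -= ref.shape[1]
    return int(dy), int(dx)

rng = np.random.default_rng(3)
frame = rng.random((64, 64))
shifted = np.roll(frame, shift=(5, -3), axis=(0, 1))   # known displacement
```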

  11. Fast 3-D Tomographic Microwave Imaging for Breast Cancer Detection

    PubMed Central

    Meaney, Paul M.; Kaufman, Peter A.; diFlorio-Alexander, Roberta M.; Paulsen, Keith D.

    2013-01-01

    Microwave breast imaging (using electromagnetic waves of frequencies around 1 GHz) has mostly remained at the research level for the past decade, gaining little clinical acceptance. The major hurdles limiting patient use are both at the hardware level (challenges in collecting accurate and noncorrupted data) and software level (often plagued by unrealistic reconstruction times in the tens of hours). In this paper we report improvements that address both issues. First, the hardware is able to measure signals down to levels compatible with sub-centimeter image resolution while keeping an exam time under 2 min. Second, the software overcomes the enormous time burden and produces similarly accurate images in less than 20 min. The combination of the new hardware and software allows us to produce and report here the first clinical 3-D microwave tomographic images of the breast. Two clinical examples are selected out of 400+ exams conducted at the Dartmouth Hitchcock Medical Center (Lebanon, NH). The first example demonstrates the potential usefulness of our system for breast cancer screening while the second example focuses on therapy monitoring. PMID:22562726

  12. Image appraisal for 2D and 3D electromagnetic inversion

    SciTech Connect

    Alumbaugh, D.L.; Newman, G.A.

    1998-04-01

    Linearized methods are presented for appraising image resolution and parameter accuracy in images generated with two- and three-dimensional nonlinear electromagnetic inversion schemes. When direct matrix inversion is employed, the model resolution and model covariance matrices can be calculated directly. The columns of the model resolution matrix are shown to yield empirical estimates of the horizontal and vertical resolution throughout the imaging region. Plotting the square root of the diagonal of the model covariance matrix yields an estimate of how the estimated data noise maps into parameter error. When the conjugate gradient method is employed rather than a direct inversion technique (for example, in 3D inversion), an iterative method can be applied to statistically estimate the model covariance matrix, as well as a regularization covariance matrix. The latter estimates the error in the inverted results caused by small variations in the regularization parameter. A method for calculating individual columns of the model resolution matrix using the conjugate gradient method is also developed. Examples of the image analysis techniques are provided on a synthetic cross-well EM data set.
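    For the damped linearized case, the model resolution and covariance matrices described above can be written out directly. A NumPy sketch with a synthetic Jacobian; the sizes, damping value, and noise level are illustrative, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic Jacobian of a linearized inverse problem: 80 data, 30 parameters.
J = rng.normal(size=(80, 30))
lam = 0.1                                    # damping / regularization parameter
sigma_d = 0.05                               # assumed data noise std

# Generalized inverse of the damped least-squares problem.
G_inv = np.linalg.solve(J.T @ J + lam * np.eye(30), J.T)

# Model resolution matrix: m_est = R @ m_true for noise-free data.
# Rows close to delta functions indicate well-resolved parameters.
R = G_inv @ J

# Model covariance: how data noise maps into parameter error; the square
# root of its diagonal gives a per-parameter standard error.
C_m = sigma_d**2 * (G_inv @ G_inv.T)
param_std = np.sqrt(np.diag(C_m))
```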

  13. Brain surface maps from 3-D medical images

    NASA Astrophysics Data System (ADS)

    Lu, Jiuhuai; Hansen, Eric W.; Gazzaniga, Michael S.

    1991-06-01

    The anatomic and functional localization of brain lesions for neurologic diagnosis and brain surgery is facilitated by labeling the cortical surface in 3D images. This paper presents a method which extracts cortical contours from magnetic resonance (MR) image series and then produces a planar surface map which preserves important anatomic features. The resultant map may be used for manual anatomic localization as well as for further automatic labeling. Outer contours are determined on MR cross-sectional images by following the clear boundaries between gray matter and cerebrospinal fluid, skipping over sulci. Carrying this contour below the surface by shrinking it along its normal produces an inner contour that alternately intercepts gray matter (sulci) and white matter along its length. This procedure is applied to every section in the set, and the image (grayscale) values along the inner contours are radially projected and interpolated onto a semi-cylindrical surface with axis normal to the slices and large enough to cover the whole brain. A planar map of the cortical surface results from flattening this cylindrical surface. The projection from inner contour to cylindrical surface is unique in the sense that different points on the inner contour correspond to different points on the cylindrical surface. As the outer contours are readily obtained by automatic segmentation, cortical maps can be made directly from an MR series.
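    The radial projection onto the cylindrical surface can be sketched as mapping each contour point to an (angle, slice-height) pair; flattening then amounts to unrolling the angle axis. A toy NumPy example (the function name and coordinates are illustrative, not the paper's implementation):

```python
import numpy as np

def project_to_cylinder(points, axis_xy):
    """Radially project 3D contour points (x, y, z) onto a cylinder whose
    axis is normal to the slices and passes through axis_xy; flattening the
    cylinder maps each point to (theta, z), returned here as columns."""
    pts = np.asarray(points, dtype=float)
    theta = np.arctan2(pts[:, 1] - axis_xy[1], pts[:, 0] - axis_xy[0])
    return np.column_stack([theta, pts[:, 2]])

# Inner-contour points from two slices (z = 0 and z = 1) circling an axis
# at the origin; distinct angles give distinct map positions (uniqueness).
contour = [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (-1.0, 0.0, 1.0), (0.0, -1.0, 1.0)]
flat_map = project_to_cylinder(contour, axis_xy=(0.0, 0.0))
```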

  14. Research of Fast 3D Imaging Based on Multiple Mode

    NASA Astrophysics Data System (ADS)

    Chen, Shibing; Yan, Huimin; Ni, Xuxiang; Zhang, Xiuda; Wang, Yu

    2016-02-01

    Three-dimensional (3D) imaging has received increasingly extensive attention and is now widely used. Much effort has been put into the study of three-dimensional imaging methods and systems in order to meet requirements for speed and accuracy. In this article, we realize a fast, high-quality stereo matching algorithm on a field programmable gate array (FPGA) using the combination of a time-of-flight (TOF) camera and a binocular camera. Images captured from the two cameras have the same spatial resolution, letting us use the depth maps taken by the TOF camera to compute an initial disparity. With the depth map constraining the stereo matching of the stereo pairs, the expected disparity of each pixel is limited to a narrow search range. Meanwhile, using concurrent computing on the FPGA (Altera Cyclone IV series), we can configure a multi-core image matching system and thus perform stereo matching on an embedded system. The simulation results demonstrate that this speeds up the process of stereo matching, increases matching reliability and stability, realizes embedded calculation, and expands the application range.
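    The TOF-constrained search range follows from the standard stereo relation d = f·B/Z. A small sketch of the disparity-bounding step; the function name and camera parameters are illustrative, not those of the paper's system:

```python
def disparity_bounds(depth_m, focal_px, baseline_m, depth_tol_m):
    """Convert a TOF depth estimate (with a tolerance) into a narrow
    disparity search range for stereo matching, using d = f * B / Z.
    Larger depth maps to smaller disparity, so the bounds swap."""
    d_min = focal_px * baseline_m / (depth_m + depth_tol_m)
    d_max = focal_px * baseline_m / max(depth_m - depth_tol_m, 1e-6)
    return d_min, d_max

# Illustrative parameters: 500 px focal length, 10 cm baseline, a TOF
# reading of 2.0 m trusted to within +/- 0.2 m.
lo, hi = disparity_bounds(depth_m=2.0, focal_px=500.0, baseline_m=0.1,
                          depth_tol_m=0.2)
```

    Instead of searching the full disparity range, the matcher only evaluates candidates in [lo, hi] pixels around the TOF-derived value, which is what makes the FPGA pipeline fast.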

  15. Fast 3D subsurface imaging with stepped-frequency GPR

    NASA Astrophysics Data System (ADS)

    Masarik, Matthew P.; Burns, Joseph; Thelen, Brian T.; Sutter, Lena

    2015-05-01

    This paper investigates an algorithm for forming 3D images of the subsurface using stepped-frequency GPR data. The algorithm is specifically designed for a handheld GPR and therefore accounts for the irregular sampling pattern in the data and the spatially-variant air-ground interface by estimating an effective "ground-plane" and then registering the data to the plane. The algorithm efficiently solves the 4th-order polynomial for the Snell reflection points using a fully vectorized iterative scheme. The forward operator is implemented efficiently using an accelerated nonuniform FFT (Greengard and Lee, 2004); the adjoint operator is implemented efficiently using an interpolation step coupled with an upsampled FFT. The imaging is done as a linearized version of the full inverse problem, which is regularized using a sparsity constraint to reduce sidelobes and thereby improve image localization. Applying an appropriate sparsity constraint, the algorithm is able to eliminate most of the surrounding clutter and sidelobes, while still rendering valuable image properties such as shape and size. The algorithm is applied to simulated data, controlled experimental data (made available by Dr. Waymond Scott, Georgia Institute of Technology), and government-provided data with irregular sampling and air-ground interface.
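    A sparsity-regularized linearized inversion of the kind described above can be sketched with ISTA (iterative shrinkage-thresholding) on a toy problem; the random operator stands in for the paper's GPR forward model, and all sizes and the regularization weight are illustrative:

```python
import numpy as np

def soft_threshold(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(A, b, lam, n_iter=500):
    """Iterative shrinkage-thresholding for
    min 0.5 * ||A x - b||^2 + lam * ||x||_1. The l1 penalty suppresses
    sidelobes and clutter while keeping the few strong scatterers."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = soft_threshold(x + A.T @ (b - A @ x) / L, lam / L)
    return x

rng = np.random.default_rng(5)
A = rng.normal(size=(60, 120)) / np.sqrt(60)   # toy underdetermined operator
x_true = np.zeros(120)
x_true[[7, 42, 99]] = [1.0, -0.8, 0.5]         # three point "scatterers"
b = A @ x_true
x_hat = ista(A, b, lam=0.02)
```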

  16. 3D lung image retrieval using localized features

    NASA Astrophysics Data System (ADS)

    Depeursinge, Adrien; Zrimec, Tatjana; Busayarat, Sata; Müller, Henning

    2011-03-01

    The interpretation of high-resolution computed tomography (HRCT) images of the chest showing disorders of the lung tissue associated with interstitial lung diseases (ILDs) is time-consuming and requires experience. Whereas automatic detection and quantification of the lung tissue patterns showed promising results in several studies, their aid to clinicians is limited to the challenge of image interpretation, leaving the radiologists with the problem of the final histological diagnosis. Complementary to lung tissue categorization, providing visually similar cases using content-based image retrieval (CBIR) is in line with the clinical workflow of the radiologists. In a preliminary study, a Euclidean distance based on volume percentages of five lung tissue types was used as the inter-case distance for CBIR. That study showed the feasibility of retrieving similar histological diagnoses of ILD based on visual content, although no localization information was used for CBIR; it was therefore not possible to retrieve and show similar images with pathology appearing at a particular lung position. In this work, a 3D localization system based on lung anatomy is used to localize the low-level features used for CBIR. Compared to our previous study, the introduction of localization features improves early precision for some histological diagnoses, especially when the region of appearance of lung tissue disorders is important.

  17. High-resolution 3D coherent laser radar imaging

    NASA Astrophysics Data System (ADS)

    Buck, Joseph; Malm, Andrew; Zakel, Andrew; Krause, Brian; Tiemann, Bruce

    2007-04-01

    The Super-resolution Sensor System (S3) program is an ambitious effort to exploit the maximum information a laser-based sensor can obtain. At Lockheed Martin Coherent Technologies (LMCT), we are developing methods of incorporating multi-function operation (3D imaging, vibrometry, polarimetry, aperture synthesis, etc.) into a single device. The waveforms will be matched to the requirements of both the hardware (e.g., optical amplifiers, modulators) and the targets being imaged. The first successful demonstrations of this program have produced high-resolution, three-dimensional images at intermediate stand-off ranges. In addition, heavy camouflage penetration has been successfully demonstrated. The resolution of a ladar sensor scales with the bandwidth as dR = c/(2B), with a corresponding scaling of the range precision. Therefore, the ability to achieve large bandwidths is crucial to developing a high-resolution sensor. While there are many methods of achieving the benefit of large bandwidths while using lower-bandwidth electronics (e.g., an FMCW implementation), the S3 system produces and detects the full waveform bandwidth, enabling a large set of adaptive waveforms for applications requiring large range search intervals (RSI) and short-duration waveforms. This paper highlights the three-dimensional imaging and camouflage penetration.
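    The quoted scaling dR = c/(2B) is easy to evaluate; the bandwidths below are illustrative, not the sensor's actual parameters:

```python
C = 299_792_458.0  # speed of light, m/s

def range_resolution(bandwidth_hz):
    """Ladar/radar range resolution dR = c / (2B)."""
    return C / (2.0 * bandwidth_hz)

# 1 GHz of waveform bandwidth gives ~15 cm resolution; 10 GHz gives ~1.5 cm,
# which is why producing and detecting the full bandwidth matters.
res_1ghz = range_resolution(1e9)
res_10ghz = range_resolution(10e9)
```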

  18. Retinal image quality in the rodent eye.

    PubMed

    Artal, P; Herreros de Tejada, P; Muñoz Tedó, C; Green, D G

    1998-01-01

    Many rodents do not see well. For a target to be resolved by a rat or a mouse, it must subtend a visual angle of a degree or more. It is commonly assumed that this poor spatial resolving capacity is due to neural rather than optical limitations, but the quality of the retinal image has not been well characterized in these animals. We have modified a double-pass apparatus, initially designed for the human eye, so it could be used with rodents to measure the modulation transfer function (MTF) of the eye's optics. That is, the double-pass retinal image of a monochromatic (lambda = 632.8 nm) point source was digitized with a CCD camera. From these double-pass measurements, the single-pass MTF was computed under a variety of conditions of focus and with different pupil sizes. Even with the eye in best focus, the image quality in both rats and mice is exceedingly poor. With a 1-mm pupil, for example, the MTF in the rat had an upper limit of about 2.5 cycles/deg, rather than the 28 cycles/deg one would obtain if the eye were a diffraction-limited system. These images are about 10 times worse than the comparable retinal images in the human eye. Using our measurements of the optics and the published behavioral and electrophysiological contrast sensitivity functions (CSFs) of rats, we have calculated the CSF that the rat would have if it had perfect rather than poor optics. We find, interestingly, that diffraction-limited optics would produce only slight improvement overall. That is, in spite of retinal images which are of very low quality, the upper limit of visual resolution in rodents is neurally determined. Rats and mice seem to have eyes in which the optics and retina/brain are well matched. PMID:9682864
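    The ~28 cycles/deg diffraction limit quoted above follows from the incoherent cutoff frequency f_c = D/λ (in cycles per radian) for a 1-mm pupil at the 632.8-nm He-Ne wavelength; a short check (function name is illustrative):

```python
import math

def diffraction_cutoff_cpd(pupil_diameter_m, wavelength_m):
    """Incoherent diffraction-limited MTF cutoff D / lambda, converted
    from cycles/radian to cycles/degree."""
    cycles_per_radian = pupil_diameter_m / wavelength_m
    return cycles_per_radian * math.pi / 180.0

# 1-mm pupil at 632.8 nm, as in the double-pass measurements above.
cutoff = diffraction_cutoff_cpd(1e-3, 632.8e-9)   # ~27.6, i.e. about 28 cycles/deg
```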

  19. 3D geometric analysis of the aorta in 3D MRA follow-up pediatric image data

    NASA Astrophysics Data System (ADS)

    Wörz, Stefan; Alrajab, Abdulsattar; Arnold, Raoul; Eichhorn, Joachim; von Tengg-Kobligk, Hendrik; Schenk, Jens-Peter; Rohr, Karl

    2014-03-01

    We introduce a new model-based approach for the segmentation of the thoracic aorta and its main branches from follow-up pediatric 3D MRA image data. For robust segmentation of vessels even in difficult cases (e.g., neighboring structures), we propose a new extended parametric cylinder model which requires only relatively few model parameters. The new model is used in conjunction with a two-step fitting scheme for refining the segmentation result yielding an accurate segmentation of the vascular shape. Moreover, we include a novel adaptive background masking scheme and we describe a spatial normalization scheme to align the segmentation results from follow-up examinations. We have evaluated our proposed approach using different 3D synthetic images and we have successfully applied the approach to follow-up pediatric 3D MRA image data.

  20. 3D Chemical and Elemental Imaging by STXM Spectrotomography

    NASA Astrophysics Data System (ADS)

    Wang, J.; Hitchcock, A. P.; Karunakaran, C.; Prange, A.; Franz, B.; Harkness, T.; Lu, Y.; Obst, M.; Hormes, J.

    2011-09-01

    Spectrotomography based on the scanning transmission x-ray microscope (STXM) at the 10ID-1 spectromicroscopy beamline of the Canadian Light Source was used to study two selected unicellular microorganisms. Spatial distributions of sulphur globules, calcium, protein, and polysaccharide in sulphur-metabolizing bacteria (Allochromatium vinosum) were determined at the S 2p, C 1s, and Ca 2p edges. 3D chemical mapping showed that the sulphur globules are located inside the bacteria with a strong spatial correlation with calcium ions (most probably calcium carbonate from the medium; STXM, however, makes the distribution and localization within the cell visible, which is of particular interest to biologists) and polysaccharide-rich polymers, suggesting an influence of the organic components on the formation of the sulphur and calcium deposits. A second study investigated copper accumulation in yeast cells (Saccharomyces cerevisiae) treated with copper sulphate. 3D elemental imaging at the Cu 2p edge showed that Cu(II) is reduced to Cu(I) on the yeast cell wall. A novel needle-like wet-cell sample holder for STXM spectrotomography studies of fully hydrated samples is discussed.

  2. 3D-Imaging of cardiac structures using 3D heart models for planning in heart surgery: a preliminary study.

    PubMed

    Jacobs, Stephan; Grunert, Ronny; Mohr, Friedrich W; Falk, Volkmar

    2008-02-01

    The aim of the study was to create an anatomically correct 3D rapid prototyping (RPT) model for patients with complex heart disease and altered geometry of the atria or ventricles, to facilitate planning and execution of the surgical procedure. Based on computed tomography (CT) and magnetic resonance imaging (MRI) images, regions of interest were segmented using the Mimics 9.0 software (Materialise, Leuven, Belgium). The segmented regions were the target volume and the structures at risk. After an STL (StereoLithography) file was generated from the patient's data set, the 3D printer Z™ 510 (4D Concepts, Gross-Gerau, Germany) created a 3D plaster model. The patient-individual 3D-printed RPT models were used to plan the resection of a left ventricular aneurysm and a right ventricular tumor. The surgeon was able to identify risk structures, assess the ideal resection lines and determine the residual shape after a reconstructive procedure (LV remodelling, infiltrating tumor resection). Using a 3D print of the LV aneurysm, reshaping of the left ventricle while ensuring sufficient LV volume was easily accomplished. The use of RPT models during resection of ventricular aneurysms and malignant cardiac tumors may facilitate the surgical procedure through better planning and improved orientation. PMID:17925319

  3. Adaptive-optics SLO imaging combined with widefield OCT and SLO enables precise 3D localization of fluorescent cells in the mouse retina

    PubMed Central

    Zawadzki, Robert J.; Zhang, Pengfei; Zam, Azhar; Miller, Eric B.; Goswami, Mayank; Wang, Xinlei; Jonnal, Ravi S.; Lee, Sang-Hyuck; Kim, Dae Yu; Flannery, John G.; Werner, John S.; Burns, Marie E.; Pugh, Edward N.

    2015-01-01

    Adaptive optics scanning laser ophthalmoscopy (AO-SLO) has recently been used to achieve exquisite subcellular resolution imaging of the mouse retina. Wavefront sensing-based AO typically restricts the field of view to a few degrees of visual angle. As a consequence the relationship between AO-SLO data and larger scale retinal structures and cellular patterns can be difficult to assess. The retinal vasculature affords a large-scale 3D map on which cells and structures can be located during in vivo imaging. Phase-variance OCT (pv-OCT) can efficiently image the vasculature with near-infrared light in a label-free manner, allowing 3D vascular reconstruction with high precision. We combined widefield pv-OCT and SLO imaging with AO-SLO reflection and fluorescence imaging to localize two types of fluorescent cells within the retinal layers: GFP-expressing microglia, the resident macrophages of the retina, and GFP-expressing cone photoreceptor cells. We describe in detail a reflective afocal AO-SLO retinal imaging system designed for high resolution retinal imaging in mice. The optical performance of this instrument is compared to other state-of-the-art AO-based mouse retinal imaging systems. The spatial and temporal resolution of the new AO instrumentation was characterized with angiography of retinal capillaries, including blood-flow velocity analysis. Depth-resolved AO-SLO fluorescent images of microglia and cone photoreceptors are visualized in parallel with 469 nm and 663 nm reflectance images of the microvasculature and other structures. Additional applications of the new instrumentation are discussed. PMID:26114038

  4. 3D x-ray reconstruction using lightfield imaging

    NASA Astrophysics Data System (ADS)

    Saha, Sajib; Tahtali, Murat; Lambert, Andrew; Pickering, Mark R.

    2014-09-01

    Existing computed tomography (CT) systems require projections over a full 360° rotation. Using the principles of lightfield imaging, as few as 4 projections can be sufficient under ideal conditions when the object is illuminated with multiple-point X-ray sources. The concept was presented in a previous work with synthetically sampled data from a synthetic phantom. Application to real data requires precise calibration of the physical setup. This current work presents the calibration procedures along with experimental findings for the reconstruction of a physical 3D phantom consisting of simple geometric shapes. The crucial part of this process is determining the effective distances of the X-ray paths, which are difficult or impossible to obtain by direct measurement. Instead, they are calculated by tracking the positions of fiducial markers under prescribed source and object movements. Iterative algorithms are used for the reconstruction, with a customized backprojection providing a better initial guess for them to start from.
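
    The abstract names iterative reconstruction seeded by a customized backprojection but gives no algorithm details; the following is a minimal sketch of that pattern using SIRT on a toy linear system, with the geometry, sizes and random seed all hypothetical.

```python
import numpy as np

# Toy system: 16-pixel object, 24 ray sums (all geometry hypothetical).
rng = np.random.default_rng(0)
x_true = rng.random(16)                               # unknown object
A = (rng.random((24, 16)) > 0.5).astype(float)        # toy ray geometry
b = A @ x_true                                        # measured projections

# Backprojection provides the initial guess for the iteration.
x = A.T @ b / A.sum()

# SIRT: simultaneous iterative reconstruction technique.
row_sums = A.sum(axis=1)                              # pixels hit per ray
col_sums = A.sum(axis=0)                              # rays hitting each pixel
err0 = np.linalg.norm(A @ x - b)
for _ in range(200):
    x += A.T @ ((b - A @ x) / row_sums) / col_sums
err = np.linalg.norm(A @ x - b)
```

    On a consistent system like this the projection residual shrinks steadily from the backprojection starting point; real scanner geometry would replace the random matrix `A`.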

  5. 3D imaging of semiconductor components by discrete laminography

    SciTech Connect

    Batenburg, K. J.; Palenstijn, W. J.; Sijbers, J.

    2014-06-19

    X-ray laminography is a powerful technique for quality control of semiconductor components. Despite the advantages of nondestructive 3D imaging over 2D techniques based on sectioning, the acquisition time is still a major obstacle for practical use of the technique. In this paper, we consider the application of Discrete Tomography to laminography data, which can potentially reduce the scanning time while still maintaining a high reconstruction quality. By incorporating prior knowledge in the reconstruction algorithm about the materials present in the scanned object, far more accurate reconstructions can be obtained from the same measured data compared to classical reconstruction methods. We present a series of simulation experiments that illustrate the potential of the approach.
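

    The paper's reconstruction algorithm is not given here; the sketch below illustrates the discrete-prior idea on a hypothetical toy system, with iterates periodically snapped to the known material values (loosely in the spirit of DART-style methods, not the authors' exact algorithm).

```python
import numpy as np

# Toy underdetermined system: 16 binary pixels, only 12 ray sums.
rng = np.random.default_rng(1)
x_true = (rng.random(16) > 0.5).astype(float)     # two known materials: 0 and 1
A = (rng.random((12, 16)) > 0.5).astype(float)    # toy ray geometry
b = A @ x_true

x = np.full(16, 0.5)                              # start between the two values
row = A.sum(axis=1)
col = A.sum(axis=0)
for it in range(300):
    x += A.T @ ((b - A @ x) / row) / col          # SIRT-style update
    if it % 10 == 9:                              # prior knowledge: snap the
        x = np.round(np.clip(x, 0.0, 1.0))        # iterate to the known values

is_discrete = bool(np.all(np.isin(x, (0.0, 1.0))))
```

    The discretization step is what lets far fewer measurements constrain the solution than a purely continuous reconstruction would need.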

  6. Quantitative Multiscale Cell Imaging in Controlled 3D Microenvironments.

    PubMed

    Welf, Erik S; Driscoll, Meghan K; Dean, Kevin M; Schäfer, Claudia; Chu, Jun; Davidson, Michael W; Lin, Michael Z; Danuser, Gaudenz; Fiolka, Reto

    2016-02-22

    The microenvironment determines cell behavior, but the underlying molecular mechanisms are poorly understood because quantitative studies of cell signaling and behavior have been challenging due to insufficient spatial and/or temporal resolution and limitations on microenvironmental control. Here we introduce microenvironmental selective plane illumination microscopy (meSPIM) for imaging and quantification of intracellular signaling and submicrometer cellular structures as well as large-scale cell morphological and environmental features. We demonstrate the utility of this approach by showing that the mechanical properties of the microenvironment regulate the transition of melanoma cells from actin-driven protrusion to blebbing, and we present tools to quantify how cells manipulate individual collagen fibers. We leverage the nearly isotropic resolution of meSPIM to quantify the local concentration of actin and phosphatidylinositol 3-kinase signaling on the surfaces of cells deep within 3D collagen matrices and track the many small membrane protrusions that appear in these more physiologically relevant environments. PMID:26906741

  7. An Efficient 3D Imaging using Structured Light Systems

    NASA Astrophysics Data System (ADS)

    Lee, Deokwoo

    Structured light 3D surface imaging has been crucial in the fields of image processing and computer vision, particularly in reconstruction, recognition and others. In this dissertation, we propose approaches to the development of an efficient 3D surface imaging system using structured light patterns, covering reconstruction, recognition and a sampling criterion. To achieve an efficient reconstruction system, we address the problem in its many dimensions. In the first, we extract the geometric 3D coordinates of an object which is illuminated by a set of concentric circular patterns and reflected onto a 2D image plane. The relationship between the original and the deformed shape of the light patterns due to the surface shape provides sufficient 3D coordinate information. In the second, we consider system efficiency. The efficiency, which can be quantified by the size of the data, is improved by reducing the number of circular patterns to be projected onto the object of interest. Akin to the Shannon-Nyquist sampling theorem, we derive the minimum number of circular patterns that sufficiently represents the target object with no considerable information loss. Specific geometric information (e.g. the highest curvature) of an object is key to deriving the minimum sampling density. In the third, the object, represented using the minimum number of patterns, has incomplete color information (i.e. color information is given a priori only along the curves). An interpolation is carried out to complete the photometric reconstruction. Because the minimum number of patterns may not exactly reconstruct the original object, the results are approximate; the approximation does not show considerable information loss, however, and its quality is evaluated by performing recognition or classification. In object recognition, we use facial curves, which are the circular patterns deformed by a target object. We simply carry out comparison between the facial curves of different faces or different expressions, and subsequently evaluate the performance of the reconstruction results. Since comparison between all pairs of curves can increase the computational complexity, we propose an approach to classification based on the shortest geodesic distances. Shape-based comparison is carried out because it is robust to scaling and to rotation due to varying viewpoints. Previously, linear and non-linear methods have been investigated for dimensionality reduction to achieve efficient recognition and classification algorithms, but existing approaches introduce many parameters, leading to optimization procedures that sometimes lack an explicit solution. The proposed approach to dimensionality reduction for recognition is based on the property of the Fourier transform that its magnitude response is symmetric and invariant to time shift, and the results are much more explicit without loss of the targets' intrinsic information. In practice, to reconstruct a larger object, we use a multiple-projector-viewpoint (MPV) system; the minimum number of cameras and projectors is a critical factor in achieving an efficient MPV system. As an alternative view of reconstruction, we apply concepts from system identification to the reconstruction problem: first, general system identification determined by the ratio of the output to the input, and second, modulation-demodulation theory used to estimate the input (transmitted) signal from the output (received, or observed) signal.
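
    The dimensionality-reduction argument rests on a standard Fourier property: the magnitude spectrum is invariant to a (circular) time shift. A quick numerical check with a hypothetical curve signature:

```python
import numpy as np

# A hypothetical curve signature and a circularly shifted copy of it.
rng = np.random.default_rng(2)
curve = rng.random(64)
shifted = np.roll(curve, 17)

# A shift only changes the phase of the spectrum; the magnitude is unchanged.
mag = np.abs(np.fft.fft(curve))
mag_shifted = np.abs(np.fft.fft(shifted))
max_diff = float(np.max(np.abs(mag - mag_shifted)))
```

    Comparing magnitude spectra therefore removes the shift nuisance parameter without any optimization step.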

  8. 3D reconstruction from a single image using geometric constraints

    NASA Astrophysics Data System (ADS)

    van den Heuvel, Frank A.

    Photogrammetry has many advantages as a technique for the acquisition of three-dimensional models for virtual reality, but the traditional photogrammetric process of extracting 3D geometry from multiple images is often considered too labour-intensive. In this paper a method is presented with which a polyhedral object model can be efficiently derived from measurements in a single image combined with geometric knowledge of the object. Man-made objects can often be described by a polyhedral model, and usually many geometric constraints are valid. These constraints are inferred during image interpretation or may even be extracted automatically. In this paper different types of geometric constraints and their use for object reconstruction are discussed. Applying more constraints than needed for reconstruction leads to redundancy and thereby to the need for an adjustment. The redundancy is the basis for the reliability that is introduced by testing for possible measurement errors. The adjusted observations are used for object reconstruction in a separate step. Of course the model obtained from a single image will not be complete, for instance due to occlusion. An arbitrary number of models can be combined using similarity transformations based on the coordinates of common points. The information gathered allows for a bundle adjustment if the highest accuracy is sought. In virtual reality applications this is generally not the case, as quality is mainly determined by visual perception. A visual aspect of major importance is the photo-realistic texture mapped to the faces of the object; this texture is extracted from the same (single) image. In this paper the measurement process, the different types of constraints, their adjustment and the object model reconstruction are treated. A practical application of the proposed method is discussed in which a texture-mapped model of a historic building is constructed and the repeatability of the method is assessed. The application shows the feasibility of the method and the potential of photogrammetry as an efficient tool for the production of 3D models for virtual reality applications.

  9. Needle placement for piriformis injection using 3-D imaging.

    PubMed

    Clendenen, Steven R; Candler, Shawn A; Osborne, Michael D; Palmer, Scott C; Duench, Stephanie; Glynn, Laura; Ghazi, Salim M

    2013-01-01

    Piriformis syndrome is a pain syndrome originating in the buttock and is attributed to 6% - 8% of patients referred for the treatment of back and leg pain. Treating piriformis syndrome under fluoroscopy, computed tomography (CT), electromyography (EMG), or ultrasound (US) guidance has become standard practice, and the approach has evolved to include fluoroscopy and EMG with CT guidance. We present a case study of 5 successful piriformis injections using 3-D computer-assisted electromagnetic needle tracking coupled with ultrasound. A 6-degree-of-freedom electromagnetic position tracker was attached to the ultrasound probe, allowing the system to detect the position and orientation of the probe in the magnetic field. The tracked ultrasound probe was used to find the posterior superior iliac spine. Subsequently, 3 points were captured to register the ultrasound image with the CT or magnetic resonance image scan. After the registration was obtained, the navigation system visualized the tracked needle relative to the CT scan in real time using 2 orthogonal multi-planar reconstructions centered at the tracked needle tip. By comparison, a recent study reported that fluoroscopically guided injections achieved only 30% accuracy, roughly one third that of ultrasound-guided injections. The present technique exhibited an accurate needle-guidance injection precision of 98% while advancing to the piriformis muscle and avoiding the sciatic nerve. The mean (± SD) procedure time was 19.08 (± 4.9) minutes. This technique allows for electromagnetic instrument tip tracking with real-time 3-D guidance to the selected target. As with any new technique, a learning curve is expected; however, this technique could offer an alternative that minimizes radiation exposure. PMID:23703429
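
    The 3-point registration step described above is not spelled out in the abstract; a common way to estimate such a rigid transform from paired landmarks is the Kabsch/Procrustes method, sketched here with hypothetical coordinates (not the navigation system's actual algorithm).

```python
import numpy as np

# Three hypothetical landmarks in ultrasound space (mm).
pts_us = np.array([[0.0, 0.0, 0.0],
                   [10.0, 0.0, 0.0],
                   [0.0, 5.0, 0.0]])

# Ground-truth rigid transform, used only to synthesize the "CT" landmarks.
theta = np.deg2rad(30.0)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
t_true = np.array([5.0, -2.0, 3.0])
pts_ct = pts_us @ R_true.T + t_true

# Kabsch: rotation from the SVD of the landmark cross-covariance.
c_us = pts_us.mean(axis=0)
c_ct = pts_ct.mean(axis=0)
H = (pts_us - c_us).T @ (pts_ct - c_ct)
U, _, Vt = np.linalg.svd(H)
d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against reflections
R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
t = c_ct - R @ c_us

# Residual registration error (should be ~0 for noise-free landmarks).
rms = float(np.sqrt(np.mean(np.sum((pts_us @ R.T + t - pts_ct) ** 2, axis=1))))
```

    With noisy real landmarks the same recipe gives the least-squares rigid fit rather than an exact match.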

  10. GPU-accelerated denoising of 3D magnetic resonance images

    SciTech Connect

    Howison, Mark; Wes Bethel, E.

    2014-05-29

    The raw computational power of GPU accelerators enables fast denoising of 3D MR images using bilateral filtering, anisotropic diffusion, and non-local means. In practice, applying these filtering operations requires setting multiple parameters. This study was designed to provide better guidance to practitioners for choosing the most appropriate parameters by answering two questions: what parameters yield the best denoising results in practice, and what tuning is necessary to achieve optimal performance on a modern GPU? To answer the first question, we use two different metrics, mean squared error (MSE) and mean structural similarity (MSSIM), to compare denoising quality against a reference image. Surprisingly, the best improvement in structural similarity with the bilateral filter is achieved with a small stencil size that lies within the range of real-time execution on an NVIDIA Tesla M2050 GPU. Moreover, inappropriate choices for parameters, especially scaling parameters, can yield very poor denoising performance. To answer the second question, we perform an autotuning study to empirically determine optimal memory tiling on the GPU. The variation in these results suggests that such tuning is an essential step in achieving real-time performance. These results have important implications for the real-time application of denoising to MR images in clinical settings that require fast turn-around times.
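
    As a toy illustration of the quality-metric side of the study, the sketch below compares MSE against a clean reference before and after denoising; a simple 3x3 box filter stands in for the bilateral filter, and all images are synthetic.

```python
import numpy as np

# Synthetic clean reference and a noisy observation of it.
rng = np.random.default_rng(3)
yy, xx = np.mgrid[0:64, 0:64]
clean = np.sin(xx / 8.0) + np.cos(yy / 8.0)
noisy = clean + rng.normal(0.0, 0.3, clean.shape)

# 3x3 box filter as a stand-in for a (more sophisticated) bilateral filter.
pad = np.pad(noisy, 1, mode="edge")
denoised = sum(pad[i:i + 64, j:j + 64] for i in range(3) for j in range(3)) / 9.0

# MSE against the reference, before and after denoising.
mse_noisy = float(np.mean((noisy - clean) ** 2))
mse_denoised = float(np.mean((denoised - clean) ** 2))
```

    An edge-preserving filter would do better still near sharp boundaries, which is exactly where the choice of scaling parameters the authors highlight starts to matter.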

  11. Spectral ladar: towards active 3D multispectral imaging

    NASA Astrophysics Data System (ADS)

    Powers, Michael A.; Davis, Christopher C.

    2010-04-01

    In this paper we present our Spectral LADAR concept, an augmented implementation of traditional LADAR. This sensor uses a polychromatic source to obtain range-resolved 3D spectral images which are used to identify objects based on combined spatial and spectral features, resolving positions in three dimensions at ranges of up to hundreds of meters. We report on a proof-of-concept Spectral LADAR demonstrator that generates spectral point clouds from static scenes. The demonstrator transmits nanosecond supercontinuum pulses generated in a photonic crystal fiber. Currently we use a rapidly tuned receiver with a high-speed InGaAs APD for 25 spectral bands, with the future expectation of implementing a linear APD array spectrograph. Each spectral band is independently range resolved with multiple return pulse recognition. This is a critical feature, enabling simultaneous spectral and spatial unmixing of partially obscured objects, which is not achievable using image fusion of monochromatic LADAR and passive spectral imagers. This enables higher identification confidence in highly cluttered environments such as forested or urban areas (e.g. vehicles behind camouflage or foliage). These environments present challenges for situational awareness and robotic perception which can benefit from the unique attributes of Spectral LADAR. Results from this demonstrator unit are presented for scenes typical of military operations and characterize the operation of the device. The results are discussed here in the context of autonomous vehicle navigation and target recognition.

  12. Micro-optical system based 3D imaging for full HD depth image capturing

    NASA Astrophysics Data System (ADS)

    Park, Yong-Hwa; Cho, Yong-Chul; You, Jang-Woo; Park, Chang-Young; Yoon, Heesun; Lee, Sang-Hun; Kwon, Jong-Oh; Lee, Seung-Wan

    2012-03-01

    A 20 MHz-switching, high-speed image shutter device for 3D image capturing and its application to a system prototype are presented. For 3D image capturing, the system utilizes the Time-of-Flight (TOF) principle by means of a 20 MHz high-speed micro-optical image modulator, a so-called 'optical shutter'. The high-speed image modulation is obtained using the electro-optic operation of a multi-layer stacked structure with diffractive mirrors and an optical resonance cavity that maximizes the magnitude of optical modulation. The optical shutter device is specially designed and fabricated with low-resistance-capacitance cell structures having a small RC time constant. The optical shutter is positioned in front of a standard high-resolution CMOS image sensor and modulates the IR image reflected from the object to capture a depth image. The novel optical shutter device enables the capture of a full HD depth image with mm-scale depth accuracy, the largest depth-image resolution among the state of the art, which has been limited to VGA. The 3D camera prototype realizes a color/depth concurrent-sensing optical architecture to capture 14 Mp color and full HD depth images simultaneously. The resulting high-definition color/depth imaging and its capturing device have a crucial impact on the 3D business ecosystem in the IT industry, especially as a 3D image sensing means in the fields of 3D cameras, gesture recognition, user interfaces, and 3D displays. This paper presents the MEMS-based optical shutter design, fabrication, characterization, 3D camera system prototype and image test results.
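
    For context, in continuous-wave TOF imaging the depth follows from the phase shift of the returned modulation envelope, d = c·φ/(4πf); at a 20 MHz modulation frequency the unambiguous range works out to about 7.5 m. A minimal sketch (illustrative values only, not the prototype's calibration):

```python
import numpy as np

# Continuous-wave TOF: depth from the phase shift of the returned modulation.
C = 299_792_458.0      # speed of light (m/s)
F_MOD = 20e6           # 20 MHz shutter modulation frequency

def depth_from_phase(phase_rad: float) -> float:
    """Depth for a measured phase shift, valid within the unambiguous range."""
    return C * phase_rad / (4.0 * np.pi * F_MOD)

max_range = C / (2.0 * F_MOD)        # unambiguous range: ~7.5 m at 20 MHz
d_half = depth_from_phase(np.pi)     # a half-cycle shift -> half of max range
```

    Raising the modulation frequency improves depth precision but shortens this unambiguous range proportionally.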

  13. A Simple Quality Assessment Index for Stereoscopic Images Based on 3D Gradient Magnitude

    PubMed Central

    Wang, Shanshan; Shao, Feng; Li, Fucui; Yu, Mei; Jiang, Gangyi

    2014-01-01

    We present a simple quality assessment index for stereoscopic images based on 3D gradient magnitude. Specifically, we construct a 3D volume from the stereoscopic images across different disparity spaces and calculate pointwise 3D gradient magnitude similarity (3D-GMS) along the horizontal, vertical, and viewpoint directions. The quality score is then obtained by averaging the 3D-GMS scores of all points in the 3D volume. Experimental results on four publicly available 3D image quality assessment databases demonstrate that, in comparison with the most closely related existing methods, the devised algorithm achieves high consistency with subjective assessment. PMID:25133265
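
    A sketch of the pointwise gradient-magnitude-similarity computation the index is built on, applied to synthetic volumes; the stability constant and volume construction are illustrative choices, not the paper's exact settings.

```python
import numpy as np

C_STAB = 170.0   # stability constant, a typical choice in GMS-style metrics

def gms_3d(vol_ref, vol_dist):
    """Average pointwise 3D gradient magnitude similarity of two volumes."""
    g_ref = np.sqrt(sum(g ** 2 for g in np.gradient(vol_ref)))
    g_dst = np.sqrt(sum(g ** 2 for g in np.gradient(vol_dist)))
    sim = (2.0 * g_ref * g_dst + C_STAB) / (g_ref ** 2 + g_dst ** 2 + C_STAB)
    return float(sim.mean())

# Synthetic stand-in for a volume built across disparity space.
rng = np.random.default_rng(4)
vol = rng.random((16, 16, 8))
score_identical = gms_3d(vol, vol)                    # perfect match -> 1.0
score_degraded = gms_3d(vol, vol + rng.normal(0.0, 0.5, vol.shape))
```

    The score is 1.0 only when the two gradient fields agree everywhere, and drops as distortion perturbs them.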

  14. Performance assessment of 3D surface imaging technique for medical imaging applications

    NASA Astrophysics Data System (ADS)

    Li, Tuotuo; Geng, Jason; Li, Shidong

    2013-03-01

    Recent developments in optical 3D surface imaging technologies provide better ways to digitize a 3D surface and its motion in real time. The non-invasive 3D surface imaging approach has great potential for many medical imaging applications, such as motion monitoring in radiotherapy and pre/post evaluation in plastic surgery and dermatology, to name a few. Various commercial 3D surface imaging systems have appeared on the market with different dimensions, speeds and accuracies. For clinical applications, accuracy, reproducibility and robustness across widely heterogeneous skin colors, tones, textures, shape properties and ambient lighting are crucial. Until now, however, no systematic approach for evaluating the performance of different 3D surface imaging systems has existed. In this paper, we present a systematic performance assessment approach for 3D surface imaging systems in medical applications. We use this approach to examine a new real-time surface imaging system we developed, dubbed the "Neo3D Camera", for image-guided radiotherapy (IGRT). The assessments include accuracy, field of view, coverage, repeatability, speed and sensitivity to environment, texture and color.

  15. Retinal Image Simulation of Subjective Refraction Techniques

    PubMed Central

    Perches, Sara; Collados, M. Victoria; Ares, Jorge

    2016-01-01

    Refraction techniques make it possible to determine the most appropriate sphero-cylindrical lens prescription to achieve the best possible visual quality. Among these techniques, subjective refraction (i.e., patient’s response-guided refraction) is the most commonly used approach. In this context, this paper’s main goal is to present simulation software that implements in a virtual manner various subjective-refraction techniques—including Jackson’s Cross-Cylinder test (JCC)—all relying on the observation of computer-generated retinal images. This software has also been used to evaluate visual quality when the JCC test is performed on multifocal-contact-lens wearers. The results reveal the software’s usefulness for simulating the retinal image quality that a particular visual compensation provides. Moreover, it can help to gain deeper insight into and improve existing refraction techniques, and it can be used for simulated training. PMID:26938648
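
    Computer-generated retinal images of the kind the simulator relies on are typically obtained by blurring a scene with the eye's point spread function. A minimal Fourier-optics sketch of how a defocus aberration degrades the PSF; the pupil sampling and defocus values are arbitrary, not the software's actual parameters.

```python
import numpy as np

# Circular pupil sampled on a grid; defocus enters as a quadratic phase.
N = 64
x = np.linspace(-1.0, 1.0, N)
X, Y = np.meshgrid(x, x)
R2 = X ** 2 + Y ** 2
pupil = (R2 <= 1.0).astype(float)

def psf(defocus_waves: float) -> np.ndarray:
    """Normalized PSF for a given amount of defocus (in waves)."""
    phase = 2.0 * np.pi * defocus_waves * (2.0 * R2 - 1.0)  # Zernike defocus
    field = pupil * np.exp(1j * phase)
    p = np.abs(np.fft.fftshift(np.fft.fft2(field))) ** 2
    return p / p.sum()

# Defocus spreads the PSF, lowering its peak (i.e. blurring the retinal image).
peak_focused = float(psf(0.0).max())
peak_blurred = float(psf(1.0).max())
```

    Convolving a test chart with `psf(w)` for various sphero-cylindrical errors is one way such a simulator can render the retinal image a given prescription produces.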

  17. Development and evaluation of a semiautomatic 3D segmentation technique of the carotid arteries from 3D ultrasound images

    NASA Astrophysics Data System (ADS)

    Gill, Jeremy D.; Ladak, Hanif M.; Steinman, David A.; Fenster, Aaron

    1999-05-01

    In this paper, we report on a semi-automatic approach to segmentation of carotid arteries from 3D ultrasound (US) images. Our method uses a deformable model which first is rapidly inflated to approximately find the boundary of the artery, then is further deformed using image-based forces to better localize the boundary. An operator is required to initialize the model by selecting a position in the 3D US image, which is within the carotid vessel. Since the choice of position is user-defined, and therefore arbitrary, there is an inherent variability in the position and shape of the final segmented boundary. We have assessed the performance of our segmentation method by examining the local variability in boundary shape as the initial selected position is varied in a freehand 3D US image of a human carotid bifurcation. Our results indicate that high variability in boundary position occurs in regions where either the segmented boundary is highly curved, or the 3D US image has poorly defined vessel edges.

  18. Effect of Refractive Status and Axial Length on Peripapillary Retinal Nerve Fibre Layer Thickness: An Analysis Using 3D OCT

    PubMed Central

    Sowmya, V.; Venkataramanan, V.R.

    2015-01-01

    Background Accurate measurement of the retinal nerve fiber layer (RNFL) is now possible with high-resolution optical coherence tomography (OCT). The effect of the refractive status of the eye on RNFL thickness may be relevant in the diagnosis of glaucoma and other optic nerve diseases. Aim To assess RNFL thickness and its correlation with the refractive status and axial length of the eye. Material and Methods Three hundred eyes of 150 patients who underwent RNFL analysis using the TOPCON 3D OCT 2000 were included in this study. Analysis of variance was used to assess the significance of study parameters between the study groups. Results The study showed that refractive status/axial length affected the peripapillary RNFL thickness significantly. Conclusion The study suggests that the diagnostic accuracy of OCT may be improved by considering the refractive status and axial length of the eye when RNFL is measured. PMID:26500931
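
    The analysis of variance used in the study can be illustrated with a hand-rolled one-way ANOVA F statistic on synthetic group data; the numbers below are invented for illustration, not the study's measurements.

```python
import numpy as np

# Synthetic RNFL thickness (um) for three refractive/axial-length groups.
groups = [
    np.array([98.0, 102.0, 100.0, 97.0, 103.0]),   # emmetropic eyes
    np.array([94.0, 96.0, 93.0, 95.0, 92.0]),      # moderately myopic eyes
    np.array([88.0, 90.0, 87.0, 89.0, 86.0]),      # highly myopic / long eyes
]

all_vals = np.concatenate(groups)
grand_mean = all_vals.mean()
k = len(groups)        # number of groups
n = all_vals.size      # total observations

# One-way ANOVA: between-group vs within-group mean squares.
ss_between = sum(g.size * (g.mean() - grand_mean) ** 2 for g in groups)
ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
f_stat = (ss_between / (k - 1)) / (ss_within / (n - k))
```

    A large F value, as here, indicates that group means differ far more than the within-group scatter would explain by chance.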

  19. High resolution 3D imaging of synchrotron generated microbeams

    SciTech Connect

    Gagliardi, Frank M.; Cornelius, Iwan; Blencowe, Anton; Franich, Rick D.; Geso, Moshi

    2015-12-15

    Purpose: Microbeam radiation therapy (MRT) techniques are under investigation at synchrotrons worldwide. Favourable outcomes from animal and cell culture studies have proven the efficacy of MRT. The aim of MRT researchers currently is to progress to human clinical trials in the near future. The purpose of this study was to demonstrate the high resolution and 3D imaging of synchrotron generated microbeams in PRESAGE® dosimeters using laser fluorescence confocal microscopy. Methods: Water equivalent PRESAGE® dosimeters were fabricated and irradiated with microbeams on the Imaging and Medical Beamline at the Australian Synchrotron. Microbeam arrays comprised of microbeams 25–50 μm wide with 200 or 400 μm peak-to-peak spacing were delivered as single, cross-fire, multidirectional, and interspersed arrays. Imaging of the dosimeters was performed using a NIKON A1 laser fluorescence confocal microscope. Results: The spatial fractionation of the MRT beams was clearly visible in 2D and up to 9 mm in depth. Individual microbeams were easily resolved with the full width at half maximum of microbeams measured on images with resolutions of as low as 0.09 μm/pixel. Profiles obtained demonstrated the change of the peak-to-valley dose ratio for interspersed MRT microbeam arrays and subtle variations in the sample positioning by the sample stage goniometer were measured. Conclusions: Laser fluorescence confocal microscopy of MRT irradiated PRESAGE® dosimeters has been validated in this study as a high resolution imaging tool for the independent spatial and geometrical verification of MRT beam delivery.
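
    The FWHM measurements described above can be illustrated by extracting the full width at half maximum from a sampled intensity profile; here a synthetic Gaussian microbeam stands in for an actual confocal image profile.

```python
import numpy as np

# Synthetic Gaussian microbeam profile: nominal 25 um FWHM, 0.5 um sampling.
fwhm_nominal = 25.0
sigma = fwhm_nominal / (2.0 * np.sqrt(2.0 * np.log(2.0)))
xs = np.arange(-100.0, 100.0, 0.5)                 # position (um)
profile = np.exp(-xs ** 2 / (2.0 * sigma ** 2))    # intensity

# FWHM as the extent of samples at or above half of the peak intensity.
half_max = profile.max() / 2.0
above = xs[profile >= half_max]
fwhm_measured = float(above[-1] - above[0])
```

    The estimate is quantized to the sampling step; finer pixel pitch (such as the 0.09 um/pixel images in the study) would tighten it accordingly.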

  20. 3D Slicer as an image computing platform for the Quantitative Imaging Network.

    PubMed

    Fedorov, Andriy; Beichel, Reinhard; Kalpathy-Cramer, Jayashree; Finet, Julien; Fillion-Robin, Jean-Christophe; Pujol, Sonia; Bauer, Christian; Jennings, Dominique; Fennessy, Fiona; Sonka, Milan; Buatti, John; Aylward, Stephen; Miller, James V; Pieper, Steve; Kikinis, Ron

    2012-11-01

    Quantitative analysis has tremendous but mostly unrealized potential in healthcare to support objective and accurate interpretation of clinical imaging. In 2008, the National Cancer Institute began building the Quantitative Imaging Network (QIN) initiative with the goal of advancing quantitative imaging in the context of personalized therapy and evaluation of treatment response. Computerized analysis is an important component contributing to reproducibility and efficiency of the quantitative imaging techniques. The success of quantitative imaging is contingent on robust analysis methods and software tools to bring these methods from bench to bedside. 3D Slicer is a free open-source software application for medical image computing. As a clinical research tool, 3D Slicer is similar to a radiology workstation that supports versatile visualizations but also provides advanced functionality such as automated segmentation and registration for a variety of application domains. Unlike a typical radiology workstation, 3D Slicer is free and is not tied to specific hardware. As a programming platform, 3D Slicer facilitates translation and evaluation of the new quantitative methods by allowing the biomedical researcher to focus on the implementation of the algorithm and providing abstractions for the common tasks of data communication, visualization and user interface development. Compared to other tools that provide aspects of this functionality, 3D Slicer is fully open source and can be readily extended and redistributed. In addition, 3D Slicer is designed to facilitate the development of new functionality in the form of 3D Slicer extensions. In this paper, we present an overview of 3D Slicer as a platform for prototyping, development and evaluation of image analysis tools for clinical research applications. 
To illustrate the utility of the platform in the scope of QIN, we discuss several use cases of 3D Slicer by the existing QIN teams, and we elaborate on the future directions that can further facilitate development and validation of imaging biomarkers using 3D Slicer. PMID:22770690

  1. 3D Slicer as an Image Computing Platform for the Quantitative Imaging Network

    PubMed Central

    Fedorov, Andriy; Beichel, Reinhard; Kalpathy-Cramer, Jayashree; Finet, Julien; Fillion-Robin, Jean-Christophe; Pujol, Sonia; Bauer, Christian; Jennings, Dominique; Fennessy, Fiona; Sonka, Milan; Buatti, John; Aylward, Stephen; Miller, James V.; Pieper, Steve; Kikinis, Ron

    2012-01-01

    Quantitative analysis has tremendous but mostly unrealized potential in healthcare to support objective and accurate interpretation of clinical imaging. In 2008, the National Cancer Institute began building the Quantitative Imaging Network (QIN) initiative with the goal of advancing quantitative imaging in the context of personalized therapy and evaluation of treatment response. Computerized analysis is an important component contributing to the reproducibility and efficiency of quantitative imaging techniques. The success of quantitative imaging is contingent on robust analysis methods and software tools to bring these methods from bench to bedside. 3D Slicer is a free, open-source software application for medical image computing. As a clinical research tool, 3D Slicer is similar to a radiology workstation that supports versatile visualizations but also provides advanced functionality such as automated segmentation and registration for a variety of application domains. Unlike a typical radiology workstation, 3D Slicer is free and is not tied to specific hardware. As a programming platform, 3D Slicer facilitates translation and evaluation of new quantitative methods by allowing the biomedical researcher to focus on the implementation of the algorithm, and by providing abstractions for the common tasks of data communication, visualization and user interface development. Compared to other tools that provide aspects of this functionality, 3D Slicer is fully open source and can be readily extended and redistributed. In addition, 3D Slicer is designed to facilitate the development of new functionality in the form of 3D Slicer extensions. In this paper, we present an overview of 3D Slicer as a platform for prototyping, development and evaluation of image analysis tools for clinical research applications.
To illustrate the utility of the platform in the scope of QIN, we discuss several use cases of 3D Slicer by the existing QIN teams, and we elaborate on the future directions that can further facilitate development and validation of imaging biomarkers using 3D Slicer. PMID:22770690

  2. ROIC for gated 3D imaging LADAR receiver

    NASA Astrophysics Data System (ADS)

    Chen, Guoqiang; Zhang, Junling; Wang, Pan; Zhou, Jie; Gao, Lei; Ding, Ruijun

    2013-09-01

    Time-of-flight laser range finding, deep-space communications and scanning video imaging are three applications requiring very-low-noise optical receivers to achieve detection of fast and weak optical signals. The HgCdTe electron-initiated avalanche photodiode (e-APD) in linear multiplication mode is the detector of choice thanks to its high quantum efficiency, high gain at low bias, high bandwidth and low noise factor. In this project, a readout integrated circuit (ROIC) for a hybrid e-APD focal plane array (FPA) with 100 µm pitch was designed for a gated 3D-LADAR optical receiver. The ROIC works at 77 K and comprises the unit cell circuit, column-level circuit, timing control, bias circuit and output driver. The unit cell circuit is the key component, consisting of a preamplifier, correlated double sampling (CDS), bias circuit and timing control module. Notably, the preamplifier uses a capacitor-feedback transimpedance amplifier (CTIA) structure with two capacitors offering switchable capacitance for passive/active dual-mode imaging. The core of the column-level circuit is a precision multiply-by-two circuit implemented with switched capacitors; switched-capacitor circuits are well suited to the signal processing of a ROIC because of their working characteristics. The output driver uses a simple unity-gain buffer; because the signal is amplified in the column-level circuit, the amplifier in the buffer is rail-to-rail. In active imaging mode, the integration time is 80 ns; for integrated currents from 200 nA to 4 µA, the circuit shows a nonlinearity of less than 1%. In passive imaging mode, the integration time is 150 ns; integrating currents from 1 nA to 20 nA likewise shows a nonlinearity of less than 1%.
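
    The reported sub-1% figure is an integral-nonlinearity measure: the deviation of the integrated output from a best-fit straight line over the current sweep. A minimal sketch of that computation follows; the ideal CTIA response model and the 10 fF feedback capacitance are illustrative assumptions, not values from the paper.

```python
import numpy as np

def nonlinearity_percent(currents, outputs):
    """Integral nonlinearity: maximum deviation of the measured outputs
    from a least-squares straight line, as a percentage of full scale."""
    currents = np.asarray(currents, float)
    outputs = np.asarray(outputs, float)
    slope, intercept = np.polyfit(currents, outputs, 1)  # best-fit line
    fit = slope * currents + intercept
    full_scale = outputs.max() - outputs.min()
    return 100.0 * np.max(np.abs(outputs - fit)) / full_scale

# Ideal CTIA response V = I * t_int / C_fb; t_int = 80 ns is the active-mode
# integration time from the abstract, C_fb = 10 fF is an assumed value.
t_int, c_fb = 80e-9, 10e-15
i = np.linspace(200e-9, 4e-6, 50)    # the 200 nA .. 4 uA sweep
v = i * t_int / c_fb                 # perfectly linear response
</imports>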

  3. Autostereoscopic 3D visualization and image processing system for neurosurgery.

    PubMed

    Meyer, Tobias; Kuß, Julia; Uhlemann, Falk; Wagner, Stefan; Kirsch, Matthias; Sobottka, Stephan B; Steinmeier, Ralf; Schackert, Gabriele; Morgenstern, Ute

    2013-06-01

    A demonstrator system for planning neurosurgical procedures was developed based on commercial hardware and software. The system combines an easy-to-use environment for surgical planning with high-end visualization and the opportunity to analyze data sets for research purposes. The demonstrator system is based on the software AMIRA. Specific algorithms for segmentation, elastic registration, and visualization have been implemented and adapted to the clinical workflow. Modules from AMIRA and the image processing library Insight Segmentation and Registration Toolkit (ITK) can be combined to solve various image processing tasks. Customized modules tailored to specific clinical problems can easily be implemented using the AMIRA application programming interface and a self-developed framework for ITK filters. Visualization is done via autostereoscopic displays, which provide a 3D impression without viewing aids. A Spaceball device allows a comfortable, intuitive way of navigation in the data sets. Via an interface to a neurosurgical navigation system, the demonstrator system can be used intraoperatively. The precision, applicability, and benefit of the demonstrator system for planning of neurosurgical interventions and for neurosurgical research were successfully evaluated by neurosurgeons using phantom and patient data sets. PMID:23740656

  4. Enhanced 3D fluorescence live cell imaging on nanoplasmonic substrate

    NASA Astrophysics Data System (ADS)

    Ranjan Gartia, Manas; Hsiao, Austin; Sivaguru, Mayandi; Chen, Yi; Logan Liu, G.

    2011-09-01

    We have created a randomly distributed nanocone substrate on silicon coated with silver for surface-plasmon-enhanced fluorescence detection and 3D cell imaging. Optical characterization of the nanocone substrate showed it can support several plasmonic modes (in the 300-800 nm wavelength range) that can be coupled to a fluorophore on the surface of the substrate, which gives rise to the enhanced fluorescence. Spectral analysis suggests that a nanocone substrate can create more excitons and shorter lifetime in the model fluorophore Rhodamine 6G (R6G) due to plasmon resonance energy transfer from the nanocone substrate to the nearby fluorophore. We observed three-dimensional fluorescence enhancement on our substrate, shown by confocal fluorescence imaging of Chinese hamster ovary (CHO) cells grown on the substrate. The fluorescence intensity from the fluorophores bound on the cell membrane was amplified more than 100-fold as compared to that on a glass substrate. We believe that strong scattering within the nanostructured area coupled with random scattering inside the cell resulted in the observed three-dimensional enhancement in fluorescence with higher photostability on the substrate surface.

  5. [3D virtual imaging of the upper airways].

    PubMed

    Ferretti, G; Coulomb, M

    2000-04-01

    The different three-dimensional reconstructions of the upper airways that can be obtained with spiral computed tomography (CT) are presented here. The parameters indispensable for achieving spiral CT images that are as realistic as possible are recalled, together with the advantages and disadvantages of the different techniques. Multislice reconstruction (MSR) produces slices in different planes of space with the high contrast of CT slices. These provide information similar to that obtained for the rare indications for thoracic MRI. Thick-slice reconstructions with maximum intensity projection (MIP) or minimum intensity projection (minIP) give projection views whose contrast can be modified by selecting the more dense (MIP) or less dense (minIP) voxels. They find their application in the exploration of the upper airways. Surface and volume external 3D reconstructions can be obtained; they give an overall view of the upper airways, similar to a bronchogram. Virtual endoscopy reproduces real endoscopic images but cannot provide information on the aspect of the mucosa or biopsy specimens. It offers possible applications for preparing, guiding and controlling interventional fibroscopy procedures. PMID:10810199
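
    The MIP and minIP projections described above reduce to per-ray maxima or minima over the volume. A minimal sketch with NumPy, using a toy Hounsfield-unit volume (not real CT data):

```python
import numpy as np

def mip(volume, axis=0):
    """Maximum intensity projection: keep the densest voxel along each ray."""
    return np.max(volume, axis=axis)

def minip(volume, axis=0):
    """Minimum intensity projection: keep the least dense voxel along each
    ray -- useful for air-filled structures such as the airways."""
    return np.min(volume, axis=axis)

# Toy volume in Hounsfield units: an air-filled tube (-1000 HU) running
# through soft tissue (40 HU).
vol = np.full((4, 5, 5), 40.0)
vol[:, 2, 2] = -1000.0               # simulated airway across all slices
airway_view = minip(vol, axis=0)     # the airway dominates the minIP
```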

  6. Multiframe image point matching and 3-d surface reconstruction.

    PubMed

    Tsai, R Y

    1983-02-01

    This paper presents two new methods, the Joint Moment Method (JMM) and the Window Variance Method (WVM), for image matching and 3-D object surface reconstruction using multiple perspective views. The viewing positions and orientations for these perspective views are known a priori, as is usually the case for such applications as robotics and industrial vision as well as close range photogrammetry. Like the conventional two-frame correlation method, the JMM and WVM require finding the extrema of 1-D curves, which are proved to theoretically approach a delta function exponentially as the number of frames increases for the JMM and are much sharper than the two-frame correlation function for both the JMM and the WVM, even when the image point to be matched cannot be easily distinguished from some of the other points. The theoretical findings have been supported by simulations. It is also proved that JMM and WVM are not sensitive to certain radiometric effects. If the same window size is used, the computational complexity for the proposed methods is about n - 1 times that for the two-frame method where n is the number of frames. Simulation results show that the JMM and WVM require smaller windows than the two-frame correlation method with better accuracy, and therefore may even be more computationally feasible than the latter since the computational complexity increases quadratically as a function of the window size. PMID:21869097
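
    The Window Variance Method can be sketched as follows: for a candidate correspondence across n frames, a low across-frame intensity variance inside a small window indicates a likely match. This is an illustrative reconstruction from the abstract, not Tsai's exact formulation:

```python
import numpy as np

def window_variance(frames, points, half=1):
    """WVM score for one candidate correspondence: extract a small window
    around the candidate pixel in each frame, then average the per-pixel
    variance computed across frames. Identical windows score 0; the true
    match is found by minimizing this score along the epipolar search."""
    patches = []
    for img, (r, c) in zip(frames, points):
        patches.append(img[r - half:r + half + 1, c - half:c + half + 1])
    stack = np.stack(patches).astype(float)   # shape: (n_frames, w, w)
    return float(np.mean(np.var(stack, axis=0)))

# Two identical frames: the correct correspondence scores exactly zero.
img = np.arange(25, dtype=float).reshape(5, 5)
score = window_variance([img, img], [(2, 2), (2, 2)])
```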

  7. Complex adaptation-based LDR image rendering for 3D image reconstruction

    NASA Astrophysics Data System (ADS)

    Lee, Sung-Hak; Kwon, Hyuk-Ju; Sohng, Kyu-Ik

    2014-07-01

    A low-dynamic tone-compression technique is developed for realistic image rendering that can make three-dimensional (3D) images similar to realistic scenes by overcoming brightness dimming in the 3D display mode. The 3D surround provides varying conditions for image quality, illuminant adaptation, contrast, gamma, color, sharpness, and so on. In general, gain/offset adjustment, gamma compensation, and histogram equalization have performed well in contrast compression; however, as a result of signal saturation and clipping effects, image details are removed and information is lost on bright and dark areas. Thus, an enhanced image mapping technique is proposed based on space-varying image compression. The performance of contrast compression is enhanced with complex adaptation in a 3D viewing surround combining global and local adaptation. Evaluating local image rendering in view of tone and color expression, noise reduction, and edge compensation confirms that the proposed 3D image-mapping model can compensate for the loss of image quality in the 3D mode.
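
    The combination of global and local adaptation for tone compression can be sketched as below. The gamma value, blend weight, and box-filter neighborhood mean are illustrative assumptions, not the authors' model: the global term applies power-law compression, while the local term rescales each pixel by its neighborhood mean to preserve detail in dark regions.

```python
import numpy as np

def tone_compress(lum, gamma=0.6, local_weight=0.5, ksize=7):
    """Blend a global power-law (gamma) compression of normalized luminance
    with a local term that divides each pixel by its box-filtered
    neighborhood mean (local adaptation)."""
    lum = np.asarray(lum, dtype=float)
    norm = lum / lum.max()
    global_term = norm ** gamma
    # box-blur neighborhood mean via shifted sums (no SciPy dependency)
    pad = ksize // 2
    padded = np.pad(norm, pad, mode='edge')
    local_mean = np.zeros_like(norm)
    for dr in range(-pad, pad + 1):
        for dc in range(-pad, pad + 1):
            local_mean += padded[pad + dr: pad + dr + norm.shape[0],
                                 pad + dc: pad + dc + norm.shape[1]]
    local_mean /= ksize * ksize
    local_term = norm / (local_mean + 1e-6)
    out = (1 - local_weight) * global_term + local_weight * local_term
    return np.clip(out / out.max(), 0.0, 1.0)

rng = np.random.default_rng(0)
lum = rng.random((16, 16)) + 0.05    # synthetic luminance map
out = tone_compress(lum)
```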

  8. 3D image processing architecture for camera phones

    NASA Astrophysics Data System (ADS)

    Atanassov, Kalin; Ramachandra, Vikas; Goma, Sergio R.; Aleksic, Milivoje

    2011-03-01

    Putting high-quality and easy-to-use 3D technology into the hands of regular consumers has become a recent challenge as interest in 3D technology has grown. Making 3D technology appealing to the average user requires that it be made fully automatic and foolproof. Designing a fully automatic 3D capture and display system requires: 1) identifying critical 3D technology issues like camera positioning, disparity control rationale, and screen geometry dependency, and 2) designing a methodology to control them automatically. Implementing 3D capture functionality on phone cameras necessitates designing algorithms to fit within the processing capabilities of the device. Various constraints like sensor position tolerances, sensor 3A tolerances, post-processing, 3D video resolution and frame rate should be carefully considered for their influence on the 3D experience. Issues with migrating functions such as zoom and pan from the 2D usage model (both during capture and display) to 3D need to be resolved to ensure the highest level of user experience. It is also very important that the 3D usage scenario (including interactions between the user and the capture/display device) is carefully considered. Finally, both the processing power of the device and the practicality of the scheme need to be taken into account while designing the calibration and processing methodology.
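
    The disparity-control rationale rests on basic stereo geometry: perceived depth scales as Z = f·B/d, so small errors in sensor position (and hence in disparity d) translate directly into depth errors, which is why the tolerances above matter. A minimal sketch (the numeric values in the usage are illustrative):

```python
def disparity_to_depth(disparity_px, focal_px, baseline_mm):
    """Depth of a point from stereo disparity: Z = f * B / d, with the
    focal length f and disparity d in pixels and the baseline B in mm.
    Zero disparity corresponds to a point at infinity."""
    if disparity_px == 0:
        return float('inf')
    return focal_px * baseline_mm / disparity_px

# Illustrative: f = 1000 px, B = 60 mm, d = 10 px -> Z = 6000 mm. A 1-px
# disparity error (d = 11 px) already shifts the depth by several hundred mm.
depth_mm = disparity_to_depth(10, 1000.0, 60.0)
```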

  9. Use of Low-cost 3-D Images in Teaching Gross Anatomy.

    ERIC Educational Resources Information Center

    Richards, Boyd F.; And Others

    1987-01-01

    With advances in computer technology, it has become possible to create three-dimensional (3-D) images of anatomical structures for use in teaching gross anatomy. Reported is a survey of attitudes of 91 first-year medical students toward the use of 3-D images in their anatomy course. Reactions to the 3-D images and suggestions for improvement are…

  10. 3D-3D registration of partial capitate bones using spin-images

    NASA Astrophysics Data System (ADS)

    Breighner, Ryan; Holmes, David R.; Leng, Shuai; An, Kai-Nan; McCollough, Cynthia; Zhao, Kristin

    2013-03-01

    It is often necessary to register partial objects in medical imaging. Due to the limited field of view (FOV), the entirety of an object cannot always be imaged. This study presents a novel application of an existing registration algorithm to this problem. The spin-image algorithm [1] creates pose-invariant representations of global shape with respect to individual mesh vertices. These 'spin-images' are then compared for two different poses of the same object to establish correspondences and subsequently determine the relative orientation of the poses. In this study, the spin-image algorithm is applied to 4DCT-derived capitate bone surfaces to assess the relative accuracy of registration with various amounts of geometry excluded. The limited longitudinal coverage of the 4DCT technique (38.4 mm [2]) results in partial views of the capitate when imaging wrist motions. This study assesses the ability of the spin-image algorithm to register partial bone surfaces by artificially restricting the capitate geometry available for registration. Under IRB approval, standard static CT and 4DCT scans were obtained on a patient. The capitate was segmented from the static CT and from one phase of 4DCT in which the whole bone was available. Spin-image registration was performed between the static CT and 4DCT. Distal portions of the 4DCT capitate (10-70%) were then progressively removed and registration was repeated. Registration accuracy was evaluated by angular errors and the percentage of sub-resolution fitting. It was determined that 60% of the distal capitate could be omitted without appreciable effect on registration accuracy using the spin-image algorithm (angular error < 1.5 degrees, sub-resolution fitting < 98.4%).

  11. Molecular Imaging of Retinal Disease

    PubMed Central

    Capozzi, Megan E.; Gordon, Andrew Y.; Penn, John S.

    2013-01-01

    Abstract Imaging of the eye plays an important role in ocular therapeutic discovery and evaluation in preclinical models and patients. Advances in ophthalmic imaging instrumentation have enabled visualization of the retina at an unprecedented resolution. These developments have contributed toward early detection of the disease, monitoring of disease progression, and assessment of the therapeutic response. These powerful technologies are being further harnessed for clinical applications by configuring instrumentation to detect disease biomarkers in the retina. These biomarkers can be detected either by measuring the intrinsic imaging contrast in tissue, or by the engineering of targeted injectable contrast agents for imaging of the retina at the cellular and molecular level. Such approaches have promise in providing a window on dynamic disease processes in the retina such as inflammation and apoptosis, enabling translation of biomarkers identified in preclinical and clinical studies into useful diagnostic targets. We discuss recently reported and emerging imaging strategies for visualizing diverse cell types and molecular mediators of the retina in vivo during health and disease, and the potential for clinical translation of these approaches. PMID:23421501

  12. 3D Soil Images Structure Quantification using Relative Entropy

    NASA Astrophysics Data System (ADS)

    Tarquis, A. M.; Gonzalez-Nieto, P. L.; Bird, N. R. A.

    2012-04-01

    Soil voids manifest the cumulative effect of local pedogenic processes and ultimately influence soil behavior - especially as it pertains to aeration and hydrophysical properties. Because of the relatively weak attenuation of X-rays by air, compared with liquids or solids, non-disruptive CT scanning has become a very attractive tool for generating three-dimensional imagery of soil voids. One of the main steps involved in this analysis is the thresholding required to transform the original (greyscale) images into the type of binary representation (e.g., pores in white, solids in black) needed for fractal analysis or simulation with Lattice-Boltzmann models (Baveye et al., 2010). The objective of the current work is to apply an innovative approach to quantifying soil voids and pore networks in original X-ray CT imagery using Relative Entropy (Bird et al., 2006; Tarquis et al., 2008). These will be illustrated using typical imagery representing contrasting soil structures. Particular attention will be given to the need to consider the full 3D context of the CT imagery, as well as scaling issues, in the application and interpretation of this index.
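
    The relative-entropy index rests on the Kullback-Leibler divergence between an observed distribution (for instance, how pore voxels are spread across subregions of the binarized volume) and a reference distribution. A minimal sketch of that idea; the block partition is an illustrative choice, not the exact index of Bird et al.:

```python
import numpy as np

def relative_entropy(p, q):
    """Kullback-Leibler divergence D(p || q) in bits between two discrete
    distributions; zero-probability bins in p contribute nothing. D = 0
    when the observed distribution matches the reference."""
    p = np.asarray(p, float); q = np.asarray(q, float)
    p = p / p.sum(); q = q / q.sum()
    mask = p > 0
    return float(np.sum(p[mask] * np.log2(p[mask] / q[mask])))

def void_fraction_distribution(binary_volume, grid=2):
    """Split a 3-D binary image (1 = pore) into grid**3 blocks and return
    the fraction of total pore voxels falling in each block."""
    v = np.asarray(binary_volume)
    zs, ys, xs = [np.array_split(range(s), grid) for s in v.shape]
    counts = np.array([v[np.ix_(z, y, x)].sum()
                       for z in zs for y in ys for x in xs], float)
    return counts / counts.sum()
```

A uniformly porous volume scores zero against a uniform reference; concentrating all pores in one block of eight gives the maximum log2(8) = 3 bits.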

  13. Adaptive Optics Retinal Imaging: Emerging Clinical Applications

    PubMed Central

    Godara, Pooja; Dubis, Adam M.; Roorda, Austin; Duncan, Jacque L.; Carroll, Joseph

    2010-01-01

    The human retina is a uniquely accessible tissue. Tools like scanning laser ophthalmoscopy (SLO) and spectral domain optical coherence tomography (SD-OCT) provide clinicians with remarkably clear pictures of the living retina. While the anterior optics of the eye permit such non-invasive visualization of the retina and associated pathology, these same optics induce significant aberrations that in most cases obviate cellular-resolution imaging. Adaptive optics (AO) imaging systems use active optical elements to compensate for aberrations in the optical path between the object and the camera. Applied to the human eye, AO allows direct visualization of individual rod and cone photoreceptor cells, RPE cells, and white blood cells. AO imaging has changed the way vision scientists and ophthalmologists see the retina, helping to clarify our understanding of retinal structure, function, and the etiology of various retinal pathologies. Here we review some of the advances made possible with AO imaging of the human retina, and discuss applications and future prospects for clinical imaging. PMID:21057346

  14. A joint estimation detection of Glaucoma progression in 3D spectral domain optical coherence tomography optic nerve head images

    PubMed Central

    Belghith, Akram; Bowd, Christopher; Weinreb, Robert N.; Zangwill, Linda M.

    2014-01-01

    Glaucoma is an ocular disease characterized by distinctive changes in the optic nerve head (ONH) and visual field. Glaucoma can strike without symptoms and causes blindness if it remains untreated. Therefore, early disease detection is important so that treatment can be initiated and blindness prevented. In this context, important advances in technology for non-invasive imaging of the eye have been made, providing quantitative tools to measure structural changes in ONH topography, an essential element for glaucoma detection and monitoring. 3D spectral domain optical coherence tomography (SD-OCT), an optical imaging technique, has been commonly used to discriminate glaucomatous from healthy subjects. In this paper, we present a new framework for detection of glaucoma progression using 3D SD-OCT images. In contrast to previous works, which used the retinal nerve fiber layer (RNFL) thickness measurement provided by commercially available spectral-domain optical coherence tomographs, we consider the whole 3D volume for change detection. To integrate a priori knowledge, and in particular the spatial voxel dependency, into the change detection map, we propose the use of a Markov Random Field to handle this dependency. To accommodate the presence of false-positive detections, the estimated change detection map is then used to classify a 3D SD-OCT image into the “non-progressing” and “progressing” glaucoma classes, based on a fuzzy logic classifier. We compared the diagnostic performance of the proposed framework to existing methods of progression detection. PMID:25606299
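
    The role of the Markov Random Field here is to encode spatial voxel dependency so that isolated, implausible detections are suppressed. A 2-D sketch of that idea, using a Potts prior optimized with iterated conditional modes (an illustrative stand-in, not the authors' exact model):

```python
import numpy as np

def icm_denoise(observed, beta=1.0, iters=5):
    """Potts-model MRF smoothing of a binary change map via synchronous
    Iterated Conditional Modes. Per-pixel energy for label L:
    [L disagrees with the observation] + beta * (number of 4-neighbors != L);
    each sweep assigns every pixel the lower-energy label."""
    obs = np.asarray(observed, int)
    labels = obs.copy()
    valid = np.pad(np.ones_like(labels), 1)   # counts in-bounds neighbors
    nn = valid[:-2, 1:-1] + valid[2:, 1:-1] + valid[1:-1, :-2] + valid[1:-1, 2:]
    for _ in range(iters):
        p = np.pad(labels, 1)
        ones = p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:]
        e0 = (obs == 1).astype(float) + beta * ones          # cost of label 0
        e1 = (obs == 0).astype(float) + beta * (nn - ones)   # cost of label 1
        labels = (e1 < e0).astype(int)
    return labels

# A noisy map: a true change region with one dropped voxel, plus one
# spurious isolated detection far from it.
clean = np.zeros((8, 8), int); clean[2:6, 2:6] = 1
noisy = clean.copy(); noisy[3, 3] = 0; noisy[0, 7] = 1
smoothed = icm_denoise(noisy)
```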

  15. A joint estimation detection of Glaucoma progression in 3D spectral domain optical coherence tomography optic nerve head images

    NASA Astrophysics Data System (ADS)

    Belghith, Akram; Bowd, Christopher; Weinreb, Robert N.; Zangwill, Linda M.

    2014-03-01

    Glaucoma is an ocular disease characterized by distinctive changes in the optic nerve head (ONH) and visual field. Glaucoma can strike without symptoms and causes blindness if it remains untreated. Therefore, early disease detection is important so that treatment can be initiated and blindness prevented. In this context, important advances in technology for non-invasive imaging of the eye have been made, providing quantitative tools to measure structural changes in ONH topography, an essential element for glaucoma detection and monitoring. 3D spectral domain optical coherence tomography (SD-OCT), an optical imaging technique, has been commonly used to discriminate glaucomatous from healthy subjects. In this paper, we present a new framework for detection of glaucoma progression using 3D SD-OCT images. In contrast to previous works, which used the retinal nerve fiber layer (RNFL) thickness measurement provided by commercially available spectral-domain optical coherence tomographs, we consider the whole 3D volume for change detection. To integrate a priori knowledge, and in particular the spatial voxel dependency, into the change detection map, we propose the use of a Markov Random Field to handle this dependency. To accommodate the presence of false-positive detections, the estimated change detection map is then used to classify a 3D SD-OCT image into the "non-progressing" and "progressing" glaucoma classes, based on a fuzzy logic classifier. We compared the diagnostic performance of the proposed framework to existing methods of progression detection.

  16. Adaptive Optics Technology for High-Resolution Retinal Imaging

    PubMed Central

    Lombardo, Marco; Serrao, Sebastiano; Devaney, Nicholas; Parravano, Mariacristina; Lombardo, Giuseppe

    2013-01-01

    Adaptive optics (AO) is a technology used to improve the performance of optical systems by reducing the effects of optical aberrations. The direct visualization of the photoreceptor cells, capillaries and nerve fiber bundles represents the major benefit of adding AO to retinal imaging. Adaptive optics is opening a new frontier for clinical research in ophthalmology, providing new information on the early pathological changes of the retinal microstructures in various retinal diseases. We have reviewed AO technology for retinal imaging, providing information on the core components of an AO retinal camera. The most commonly used wavefront sensing and correcting elements are discussed. Furthermore, we discuss current applications of AO imaging to a population of healthy adults and to the most frequent causes of blindness, including diabetic retinopathy, age-related macular degeneration and glaucoma. We conclude our work with a discussion on future clinical prospects for AO retinal imaging. PMID:23271600

  17. Snapshot retinal imaging Mueller matrix polarimeter

    NASA Astrophysics Data System (ADS)

    Wang, Yifan; Kudenov, Michael; Kashani, Amir; Schwiegerling, Jim; Escuti, Michael

    2015-09-01

    Early diagnosis of glaucoma, a leading cause of visual impairment, is critical for successful treatment. It has been shown that imaging polarimetry has advantages in the early detection of structural changes in the retina. Here, we theoretically and experimentally present a snapshot Mueller matrix polarimeter fundus camera, which has the potential to record the polarization-altering characteristics of the retina with a single snapshot. It is built by incorporating polarization gratings into a fundus camera design. Complete Mueller matrix data sets can be obtained by analyzing the polarization fringes projected onto the image plane. In this paper, we describe the experimental implementation of the snapshot retinal imaging Mueller matrix polarimeter (SRIMMP), highlight issues related to calibration, and provide preliminary images acquired from the camera.

  18. High-speed, digitally refocused retinal imaging with line-field parallel swept source OCT

    NASA Astrophysics Data System (ADS)

    Fechtig, Daniel J.; Kumar, Abhishek; Ginner, Laurin; Drexler, Wolfgang; Leitgeb, Rainer A.

    2015-03-01

    MHz OCT allows mitigating the undesired influence of motion artifacts during retinal assessment, but in state-of-the-art point-scanning OCT it comes at the price of increased system complexity. By changing the paradigm from scanning to parallel OCT for in vivo retinal imaging, the three-dimensional (3D) acquisition time is reduced without a trade-off between speed, sensitivity and technological requirements. Furthermore, the intrinsic phase stability allows for applying digital refocusing methods, increasing the in-focus imaging depth range. Line-field parallel interferometric imaging (LPSI) utilizes a commercially available swept source, a single-axis galvo scanner and a line-scan camera for recording 3D data with up to 1 MHz A-scan rate. Besides line-focus illumination and parallel detection, we mitigate the necessity for high-speed sensor and laser technology by holographic full-range imaging, which allows for increasing the imaging speed through low sampling of the optical spectrum. High B-scan rates of up to 1 kHz further allow for implementation of label-free optical angiography in 3D by calculating the inter-B-scan speckle variance. We achieve a detection sensitivity of 93.5 (96.5) dB at an equivalent A-scan rate of 1 (0.6) MHz and present 3D in vivo retinal structural and functional imaging utilizing digital refocusing. Our results demonstrate for the first time competitive imaging sensitivity, resolution and speed with a parallel OCT modality. LPSI is in fact currently the fastest OCT device applied to retinal imaging operating at a central wavelength window around 800 nm with a detection sensitivity higher than 93.5 dB.
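
    Inter-B-scan speckle variance, the label-free angiography contrast mentioned above, is simply the per-pixel variance over repeated B-scans of the same location: static tissue decorrelates little, while flowing blood decorrelates strongly and produces high variance. A minimal sketch on synthetic data:

```python
import numpy as np

def speckle_variance(bscans):
    """Per-pixel variance across N repeated B-scans acquired at the same
    position; high values indicate decorrelation, i.e. flow."""
    stack = np.stack([np.asarray(b, float) for b in bscans])
    return np.var(stack, axis=0)

# Four repeated B-scans: a static background plus one "vessel" pixel whose
# intensity decorrelates between acquisitions.
bscans = [np.zeros((4, 4)) for _ in range(4)]
for i, b in enumerate(bscans):
    b[1, 1] = i
sv = speckle_variance(bscans)
```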

  19. Assessing 3D tunnel position in ACL reconstruction using a novel single image 3D-2D registration

    NASA Astrophysics Data System (ADS)

    Kang, X.; Yau, W. P.; Otake, Y.; Cheung, P. Y. S.; Hu, Y.; Taylor, R. H.

    2012-02-01

    The routinely used procedure for evaluating tunnel positions following anterior cruciate ligament (ACL) reconstructions based on standard X-ray images is known to pose difficulties in terms of obtaining accurate measures, especially in providing three-dimensional tunnel positions. This is largely due to the variability in individual knee joint pose relative to X-ray plates. Accurate results were reported using postoperative CT. However, its extensive usage in clinical routine is hampered by its major requirement of having CT scans of individual patients, which is not available for most ACL reconstructions. These difficulties are addressed through the proposed method, which aligns a knee model to X-ray images using our novel single-image 3D-2D registration method and then estimates the 3D tunnel position. In the proposed method, the alignment is achieved by using a novel contour-based 3D-2D registration method wherein image contours are treated as a set of oriented points. However, instead of using some form of orientation weighting function and multiplying it with a distance function, we formulate the 3D-2D registration as a probability density estimation using a mixture of von Mises-Fisher-Gaussian (vMFG) distributions and solve it through an expectation maximization (EM) algorithm. Compared with the ground-truth established from postoperative CT, our registration method in an experiment using a plastic phantom showed accurate results with errors of (-0.43°+/-1.19°, 0.45°+/-2.17°, 0.23°+/-1.05°) and (0.03+/-0.55, -0.03+/-0.54, -2.73+/-1.64) mm. As for the entry point of the ACL tunnel, one of the key measurements, it was obtained with high accuracy of 0.53+/-0.30 mm distance errors.
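
    The directional component of the vMFG mixture is the von Mises-Fisher distribution, which plays the role of a Gaussian for unit vectors (here, contour orientations); the EM algorithm then alternates between computing responsibilities under such densities and re-estimating the mean directions and concentrations. A sketch of the density on the sphere (dimension p = 3), not the authors' full mixture:

```python
import numpy as np

def vmf_pdf(x, mu, kappa):
    """von Mises-Fisher density on the unit sphere (p = 3):
    f(x) = C3(kappa) * exp(kappa * mu.x), with
    C3(kappa) = kappa / (4*pi*sinh(kappa)). The density concentrates
    around the mean direction mu as kappa grows and tends to the uniform
    density 1/(4*pi) as kappa -> 0."""
    x = np.asarray(x, float)
    mu = np.asarray(mu, float)
    c3 = kappa / (4.0 * np.pi * np.sinh(kappa))
    return float(c3 * np.exp(kappa * (mu @ x)))
```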

  20. Automated lesion detectors in retinal fundus images.

    PubMed

    Figueiredo, I N; Kumar, S; Oliveira, C M; Ramos, J D; Engquist, B

    2015-11-01

    Diabetic retinopathy (DR) is a sight-threatening condition occurring in persons with diabetes, which causes progressive damage to the retina. The early detection and diagnosis of DR is vital for saving the vision of diabetic persons. The early signs of DR which appear on the surface of the retina are dark lesions such as microaneurysms (MAs) and hemorrhages (HEMs), and bright lesions (BLs) such as exudates. In this paper, we propose a novel automated system for the detection and diagnosis of these retinal lesions by processing retinal fundus images. We devise appropriate binary classifiers for these three different types of lesions. Some novel contextual/numerical features are derived for each lesion type, depending on its inherent properties. This is performed by analysing several wavelet bands (resulting from the isotropic undecimated wavelet transform decomposition of the retinal image green channel) and by using an appropriate combination of Hessian multiscale analysis, variational segmentation and cartoon+texture decomposition. The proposed methodology has been validated on several medical datasets, with a total of 45,770 images, using standard performance measures such as sensitivity and specificity. The individual per-frame performance of the MA detector is 93% sensitivity and 89% specificity, of the HEM detector 86% sensitivity and 90% specificity, and of the BL detector 90% sensitivity and 97% specificity. Regarding the collective performance of these binary detectors as an automated screening system for DR (a patient is considered to have DR if at least one of the detectors returns a positive result), it achieves on average 95-100% sensitivity and 70% specificity on a per-patient basis. Furthermore, evaluation conducted on publicly available datasets, for comparison with other existing techniques, shows the promising potential of the proposed detectors. PMID:26378502
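
    The per-detector and per-patient figures combine in a simple way: each detector is scored by sensitivity and specificity, and the screening decision is the OR of the three binary detector outputs. A minimal sketch of both computations:

```python
def sensitivity_specificity(y_true, y_pred):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP), from paired
    ground-truth and predicted binary labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t and p)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t and not p)
    tn = sum(1 for t, p in zip(y_true, y_pred) if not t and not p)
    fp = sum(1 for t, p in zip(y_true, y_pred) if not t and p)
    return tp / (tp + fn), tn / (tn + fp)

def dr_screening(ma_positive, hem_positive, bl_positive):
    """Per-patient OR rule from the paper: a patient is referred as
    DR-positive if at least one of the three detectors fires."""
    return bool(ma_positive or hem_positive or bl_positive)
```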

  1. 3-D Imaging Systems for Agricultural Applications-A Review.

    PubMed

    Vázquez-Arellano, Manuel; Griepentrog, Hans W; Reiser, David; Paraforos, Dimitris S

    2016-01-01

    Efficiency increase of resources through automation of agriculture requires more information about the production process, as well as about process and machinery status. Sensors are necessary for monitoring the status and condition of production by recognizing surrounding structures such as objects, field structures, natural or artificial markers, and obstacles. Currently, three-dimensional (3-D) sensors are economically affordable and technologically advanced to a great extent, so a breakthrough is already possible if enough research projects are commercialized. The aim of this review paper is to investigate the state-of-the-art of 3-D vision systems in agriculture, and the role and value that only 3-D data can have in providing information about environmental structures, based on the recent progress in optical 3-D sensors. The structure of this review consists of an overview of the different optical 3-D vision techniques, based on their basic principles. Afterwards, their applications in agriculture are reviewed. The main focus lies on vehicle navigation, and crop and animal husbandry. The depth dimension brought by 3-D sensors provides key information that greatly facilitates the implementation of automation and robotics in agriculture. PMID:27136560

  2. 3-D Imaging Systems for Agricultural Applications—A Review

    PubMed Central

    Vázquez-Arellano, Manuel; Griepentrog, Hans W.; Reiser, David; Paraforos, Dimitris S.

    2016-01-01

    Increasing resource efficiency through the automation of agriculture requires more information about the production process, as well as about process and machinery status. Sensors are necessary for monitoring the status and condition of production by recognizing surrounding structures such as objects, field structures, natural or artificial markers, and obstacles. Currently, three-dimensional (3-D) sensors are economically affordable and technologically advanced to a great extent, so a breakthrough is already possible if enough research projects are commercialized. The aim of this review paper is to investigate the state of the art of 3-D vision systems in agriculture, and the role and value that only 3-D data can have in providing information about environmental structures, based on the recent progress in optical 3-D sensors. The structure of this review consists of an overview of the different optical 3-D vision techniques, based on their underlying principles. Afterwards, their applications in agriculture are reviewed. The main focus lies on vehicle navigation, and crop and animal husbandry. The depth dimension brought by 3-D sensors provides key information that greatly facilitates the implementation of automation and robotics in agriculture. PMID:27136560

  3. Segmented images and 3D images for studying the anatomical structures in MRIs

    NASA Astrophysics Data System (ADS)

    Lee, Yong Sook; Chung, Min Suk; Cho, Jae Hyun

    2004-05-01

    For identifying pathological findings in MRIs, the anatomical structures in MRIs should be identified in advance. For studying the anatomical structures in MRIs, an educational tool that includes the horizontal, coronal, and sagittal MRIs of the entire body, corresponding segmented images, 3D images, and browsing software is necessary. Such an educational tool, however, is hard to obtain. Therefore, in this research, such an educational tool, which helps medical students and doctors study the anatomical structures in MRIs, was made as follows. A healthy, young Korean male adult with a standard body shape was selected. Six hundred thirteen horizontal MRIs of the entire body were scanned and input to a personal computer. Sixty anatomical structures in the horizontal MRIs were segmented to make horizontal segmented images. Coronal and sagittal MRIs and coronal and sagittal segmented images were made. 3D images of the anatomical structures in the segmented images were reconstructed by a surface-rendering method. Browsing software for the MRIs, segmented images, and 3D images was developed. This educational tool, which includes the horizontal, coronal, and sagittal MRIs of the entire body, corresponding segmented images, 3D images, and browsing software, is expected to help medical students and doctors study anatomical structures in MRIs.

  4. 3-D Reconstruction From 2-D Radiographic Images and Its Application to Clinical Veterinary Medicine

    NASA Astrophysics Data System (ADS)

    Hamamoto, Kazuhiko; Sato, Motoyoshi

    Three-dimensional (3D) imaging is very important and indispensable in diagnosis. In the mainstream approach, a 3D image is reconstructed from a set of slice images, as in X-ray CT and MRI. However, these systems require large space and high costs. On the other hand, a low-cost, small-sized 3D imaging system is needed in clinical veterinary medicine, for example, for diagnosis in an X-ray car or in pasture areas. We propose a novel 3D imaging technique using 2-D X-ray radiographic images. This system can be realized much more cheaply than X-ray CT and makes it possible to obtain 3D images using an X-ray car or portable X-ray equipment. In this paper, a 3D visualization technique from 2-D radiographic images is proposed and several reconstructions are shown. These reconstructions are evaluated by veterinarians.

  5. DSP based image processing for retinal prosthesis.

    PubMed

    Parikh, Neha J; Weiland, James D; Humayun, Mark S; Shah, Saloni S; Mohile, Gaurav S

    2004-01-01

    Real-time image processing in a retinal prosthesis consists of the implementation of various image processing algorithms such as edge detection, edge enhancement, and decimation. These real-time algorithmic computations may have a high level of computational complexity; hence, the use of digital signal processors (DSPs) for the implementation of such algorithms is proposed here. This application requires DSPs that are highly computationally efficient while operating at low power. DSPs offer computational capabilities of hundreds of millions of instructions per second (MIPS) or millions of floating-point operations per second (MFLOPS), with certain processor configurations operating at low power. The various image processing algorithms, the DSP requirements, and the capabilities of different platforms are discussed in this paper. PMID:17271974

  6. Implementation of wireless 3D stereo image capture system and 3D exaggeration algorithm for the region of interest

    NASA Astrophysics Data System (ADS)

    Ham, Woonchul; Song, Chulgyu; Lee, Kangsan; Badarch, Luubaatar

    2015-05-01

    In this paper, we introduce a mobile embedded system implemented for capturing stereo images based on two CMOS camera modules. We use WinCE as the operating system and capture the stereo images by using a device driver for the CMOS camera interface and DirectDraw API functions. We also comment on the GPU hardware and CUDA programming used to implement a 3D exaggeration algorithm for the region of interest (ROI) by adjusting and synthesizing the disparity values of the ROI in real time. We comment on the aperture pattern used for deblurring of the CMOS camera module, based on the Kirchhoff diffraction formula, and clarify why a sharper and clearer image can be obtained by blocking some portion of the aperture or by geometric sampling. The synthesized stereo image is monitored in real time on a shutter-glass-type three-dimensional LCD monitor, and the disparity values of each segment are analyzed to verify the validity of the ROI emphasis effect.

  7. Quantitative 3-D imaging topogrammetry for telemedicine applications

    NASA Technical Reports Server (NTRS)

    Altschuler, Bruce R.

    1994-01-01

    The technology to reliably transmit high-resolution visual imagery over short to medium distances in real time has led to serious consideration of the use of telemedicine, telepresence, and telerobotics in the delivery of health care. These concepts may involve, and evolve toward: consultation from remote expert teaching centers; diagnosis; triage; real-time remote advice to the surgeon; and real-time remote surgical instrument manipulation (telerobotics with virtual reality). Further extrapolation leads to teledesign and telereplication of spare surgical parts through quantitative teleimaging of 3-D surfaces tied to CAD/CAM devices and an artificially intelligent archival database of 'normal' shapes. The ability to generate 'topograms', or 3-D surface numerical tables of coordinate values capable of creating computer-generated virtual holographic-like displays, machine part replication, and statistical diagnostic shape assessment, is critical to the progression of telemedicine. Any virtual reality simulation will remain in the 'video-game' realm until realistic dimensional and spatial relational inputs from real measurements in vivo during surgeries are added to an ever-growing statistical data archive. The challenges of managing and interpreting this 3-D database, which would include radiographic and surface quantitative data, are considerable. As technology drives toward dynamic and continuous 3-D surface measurements, presenting millions of X, Y, Z data points per second of flexing, stretching, moving human organs, the knowledge base and interpretive capabilities of 'brilliant robots' working as a surgeon's tireless assistants become imaginable. The brilliant robot would 'see' what the surgeon sees--and more, for the robot could quantify its 3-D sensing, would 'see' in a wider spectral range than humans, and could zoom its 'eyes' from the macro world to long-distance microscopy. Unerring robot hands could rapidly perform machine-aided suturing with precision micro-sewing machines, splice neural connections with laser welds, micro-bore through constricted vessels, and combine ultrasound, microradiography, and 3-D mini-borescopes by computer to quickly assess and trace vascular problems in situ. The spatial relationships between organs, robotic arms, and end-effector diagnostic, manipulative, and surgical instruments would be constantly monitored by the robot 'brain' using inputs from its multiple 3-D quantitative 'eyes' for remote sensing, as well as by contact and proximity force-measuring devices. Methods to create accurate and quantitative 3-D topograms at continuous video data rates are described.

  8. 3D fingerprint imaging system based on full-field fringe projection profilometry

    NASA Astrophysics Data System (ADS)

    Huang, Shujun; Zhang, Zonghua; Zhao, Yan; Dai, Jie; Chen, Chao; Xu, Yongjia; Zhang, E.; Xie, Lili

    2014-01-01

    As a unique, unchangeable and easily acquired biometric, the fingerprint has been widely studied in academia and applied in many fields over the years. Traditional fingerprint recognition methods are based on 2D features of the fingerprint. However, the fingerprint is a 3D biological characteristic. The mapping from 3D to 2D loses one dimension of information and causes nonlinear distortion of the captured fingerprint. Therefore, it is becoming more and more important to obtain 3D fingerprint information for recognition. In this paper, a novel 3D fingerprint imaging system is presented, based on the fringe projection technique, to obtain 3D features and the corresponding color texture information. A series of color sinusoidal fringe patterns with optimum three-fringe numbers are projected onto a finger surface. The fringe patterns are deformed by the finger surface and captured, from another viewpoint, by a CCD camera. 3D shape data of the finger can be obtained from the captured fringe pattern images. This paper studies the prototype of the 3D fingerprint imaging system, including the principle of 3D fingerprint acquisition, the hardware design of the 3D imaging system, the 3D calibration of the system, and software development. Experiments are carried out by acquiring several 3D fingerprint data sets. The experimental results demonstrate the feasibility of the proposed 3D fingerprint imaging system.
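
    As a minimal sketch of how fringe projection systems recover shape information, the standard four-step phase-shifting formula (an assumption here; the paper's optimum three-fringe-number method differs in detail) recovers the wrapped phase from four sinusoidal fringe images shifted by 0, pi/2, pi, and 3pi/2:

```python
import numpy as np

def wrapped_phase(i1, i2, i3, i4):
    """Four-step phase shifting: I_k = A + B*cos(phi + k*pi/2),
    so I4 - I2 = 2B*sin(phi) and I1 - I3 = 2B*cos(phi)."""
    return np.arctan2(i4 - i2, i1 - i3)

# synthetic fringes over a phase ramp standing in for a finger surface
x = np.linspace(0, 4 * np.pi, 256)
phi = x % (2 * np.pi) - np.pi          # ground-truth wrapped phase
a, b = 0.5, 0.4                        # background and modulation
imgs = [a + b * np.cos(phi + k * np.pi / 2) for k in range(4)]
est = wrapped_phase(*imgs)
```

    The estimated phase matches the ground truth up to 2-pi wrapping; unwrapping and phase-to-height calibration would follow in a full profilometry pipeline.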

  9. 3D high-density localization microscopy using hybrid astigmatic/ biplane imaging and sparse image reconstruction.

    PubMed

    Min, Junhong; Holden, Seamus J; Carlini, Lina; Unser, Michael; Manley, Suliana; Ye, Jong Chul

    2014-11-01

    Localization microscopy achieves nanoscale spatial resolution by iterative localization of sparsely activated molecules, which generally leads to a long acquisition time. By implementing advanced algorithms to treat overlapping point spread functions (PSFs), imaging of densely activated molecules can improve the limited temporal resolution, as has been well demonstrated in two-dimensional imaging. However, three-dimensional (3D) localization of high-density data remains challenging, since PSFs are far more similar along the axial dimension than along the lateral dimensions. Here, we present a new, high-density 3D imaging system and algorithm. The hybrid system is implemented by combining astigmatic and biplane imaging. The proposed 3D reconstruction algorithm is extended from our state-of-the-art 2D high-density localization algorithm. Using mutual coherence analysis of model PSFs, we validated that the hybrid system is more suitable than astigmatic or biplane imaging alone for 3D localization of high-density data. The efficacy of the proposed method was confirmed via simulation and real data of microtubules. Furthermore, we also successfully demonstrated fluorescent-protein-based live-cell 3D localization microscopy with a temporal resolution of just 3 seconds, capturing fast dynamics of the endoplasmic reticulum. PMID:26526603
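
    The astigmatic half of the hybrid scheme can be illustrated with a toy calibration lookup: the measured x and y PSF widths of a molecule are matched against pre-measured calibration curves to estimate its axial position z (the quadratic width model and all numbers below are illustrative, not from the paper):

```python
import numpy as np

def z_from_widths(wx, wy, z_cal, wx_cal, wy_cal):
    """Astigmatic z-lookup: pick the calibration depth whose
    (wx, wy) pair is closest to the measured PSF widths."""
    d2 = (wx_cal - wx) ** 2 + (wy_cal - wy) ** 2
    return z_cal[np.argmin(d2)]

# toy calibration: widths grow quadratically with defocus, and the
# astigmatic lens displaces the x and y focal planes by +/- 250 nm
z_cal = np.linspace(-500, 500, 201)        # candidate depths (nm)
w0, c = 150.0, 4e-4
wx_cal = w0 * np.sqrt(1 + c * (z_cal - 250) ** 2)
wy_cal = w0 * np.sqrt(1 + c * (z_cal + 250) ** 2)

# a molecule whose widths match the calibration at z = 100 nm
z_est = z_from_widths(wx_cal[120], wy_cal[120], z_cal, wx_cal, wy_cal)
```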

  10. 3D high-density localization microscopy using hybrid astigmatic/ biplane imaging and sparse image reconstruction

    PubMed Central

    Min, Junhong; Holden, Seamus J.; Carlini, Lina; Unser, Michael; Manley, Suliana; Ye, Jong Chul

    2014-01-01

    Localization microscopy achieves nanoscale spatial resolution by iterative localization of sparsely activated molecules, which generally leads to a long acquisition time. By implementing advanced algorithms to treat overlapping point spread functions (PSFs), imaging of densely activated molecules can improve the limited temporal resolution, as has been well demonstrated in two-dimensional imaging. However, three-dimensional (3D) localization of high-density data remains challenging, since PSFs are far more similar along the axial dimension than along the lateral dimensions. Here, we present a new, high-density 3D imaging system and algorithm. The hybrid system is implemented by combining astigmatic and biplane imaging. The proposed 3D reconstruction algorithm is extended from our state-of-the-art 2D high-density localization algorithm. Using mutual coherence analysis of model PSFs, we validated that the hybrid system is more suitable than astigmatic or biplane imaging alone for 3D localization of high-density data. The efficacy of the proposed method was confirmed via simulation and real data of microtubules. Furthermore, we also successfully demonstrated fluorescent-protein-based live-cell 3D localization microscopy with a temporal resolution of just 3 seconds, capturing fast dynamics of the endoplasmic reticulum. PMID:26526603

  11. Image-based RSA: Roentgen stereophotogrammetric analysis based on 2D-3D image registration.

    PubMed

    de Bruin, P W; Kaptein, B L; Stoel, B C; Reiber, J H C; Rozing, P M; Valstar, E R

    2008-01-01

    Image-based Roentgen stereophotogrammetric analysis (IBRSA) integrates 2D-3D image registration and conventional RSA. Instead of radiopaque RSA bone markers, IBRSA uses 3D CT data, from which digitally reconstructed radiographs (DRRs) are generated. Using 2D-3D image registration, the 3D pose of the CT is iteratively adjusted such that the generated DRRs resemble the 2D RSA images as closely as possible, according to an image matching metric. Effectively, by registering all 2D follow-up moments to the same 3D CT, the CT volume functions as common ground. In two experiments, using RSA and using a micromanipulator as gold standard, IBRSA has been validated on cadaveric and sawbone scapula radiographs, and good matching results have been achieved. The accuracy was |μ| < 0.083 mm for translations and |μ| < 0.023° for rotations. The precision (σ) in the x-, y-, and z-directions was 0.090, 0.077, and 0.220 mm for translations and 0.155°, 0.243°, and 0.074° for rotations. Our results show that the accuracy and precision of in vitro IBRSA, performed under ideal laboratory conditions, are lower than in vitro standard RSA but higher than in vivo standard RSA. Because IBRSA does not require radiopaque markers, it adds functionality to the RSA method by opening new directions and possibilities for research, such as dynamic analyses using fluoroscopy on subjects without markers and computer navigation applications. PMID:17706656
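
    The core of 2D-3D registration is scoring how well a generated DRR matches the acquired radiograph and searching over poses to maximize that score. A minimal sketch, using normalized cross-correlation as the matching metric and a 1-D image shift as a stand-in for the 3D pose and DRR generation (both simplifications of ours, not the paper's pipeline):

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation, one common image-matching
    metric for comparing a DRR with an acquired radiograph."""
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    return float(np.mean(a * b))

rng = np.random.default_rng(0)
target = rng.random((32, 32))          # stand-in for the 2D RSA image

def render(shift):
    """Stand-in for DRR generation at a candidate 'pose' (a shift)."""
    return np.roll(target, shift, axis=1)

moving = render(5)                     # image acquired at unknown pose
# brute-force pose search: pick the shift whose 'DRR' matches best
best = max(range(32), key=lambda s: ncc(np.roll(moving, -s, axis=1), target))
```

    In the real method the search space is the six-parameter rigid pose of the CT volume and the rendering step is a full DRR projection, but the optimize-a-similarity-metric structure is the same.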

  12. Lensfree diffractive tomography for the imaging of 3D cell cultures

    PubMed Central

    Momey, F.; Berdeu, A.; Bordy, T.; Dinten, J.-M.; Marcel, F. Kermarrec; Picollet-D’hahan, N.; Gidrol, X.; Allier, C.

    2016-01-01

    New microscopes are needed to help realize the full potential of 3D organoid culture studies. In order to image large volumes of 3D organoid cultures while preserving the ability to catch every single cell, we propose a new imaging platform based on lensfree microscopy. We have built a lensfree diffractive tomography setup performing multi-angle acquisitions of 3D organoid cultures embedded in Matrigel and developed a dedicated 3D holographic reconstruction algorithm based on the Fourier diffraction theorem. With this new imaging platform, we have been able to reconstruct a 3D volume as large as 21.5 mm³ of a 3D organoid culture of prostatic RWPE1 cells, demonstrating the ability of these cells to assemble into an intricate 3D cellular network at the mesoscopic scale. Importantly, comparisons with 2D images show that it is possible to resolve single cells isolated from the main cellular structure with our lensfree diffractive tomography setup. PMID:27231600
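
    The Fourier-optics machinery behind such holographic reconstructions can be illustrated with the standard angular spectrum propagator (a generic FFT building block, not the paper's full Fourier-diffraction tomography algorithm; the wavelength and pixel size below are illustrative):

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, dx, z):
    """Propagate a complex optical field over a distance z with the
    angular spectrum method: filter the 2D spectrum with the
    free-space transfer function exp(i*kz*z)."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    fxx, fyy = np.meshgrid(fx, fx)
    arg = 1 - (wavelength * fxx) ** 2 - (wavelength * fyy) ** 2
    kz = 2 * np.pi / wavelength * np.sqrt(np.maximum(arg, 0.0))
    h = np.exp(1j * kz * z) * (arg > 0)   # drop evanescent components
    return np.fft.ifft2(np.fft.fft2(field) * h)

# toy object: a Gaussian transmission pattern, 0.5 um light, 2 um pixels
dx, wavelength = 2e-6, 0.5e-6
x = (np.arange(64) - 32) * dx
xx, yy = np.meshgrid(x, x)
field = np.exp(-(xx ** 2 + yy ** 2) / (2 * (10 * dx) ** 2)).astype(complex)
hologram_plane = angular_spectrum_propagate(field, wavelength, dx, 1e-4)
refocused = angular_spectrum_propagate(hologram_plane, wavelength, dx, -1e-4)
```

    Propagating forward to the sensor plane and back with the opposite sign of z recovers the original field, which is the basic refocusing step of holographic reconstruction.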

  13. Spectral domain optical coherence tomography for in-vivo three-dimensional retinal imaging of small animals

    NASA Astrophysics Data System (ADS)

    Ruggeri, Marco; Wehbe, Hassan; Jiao, Shuliang; Gregori, Giovanni; Jockovich, Maria E.; Hackam, Abigail; Duan, Yuanli; Puliafito, Carmen A.

    2007-02-01

    The purpose of this study is to demonstrate the application of ultrahigh-resolution spectral domain optical coherence tomography (SD-OCT) for non-contact in vivo imaging of the retina of small animals and quantitative retinal information extraction using 3D segmentation of the OCT images. An ultrahigh-resolution SD-OCT system was specifically designed for in vivo retinal imaging of small animals. An en face fundus image was constructed from the measured OCT data, which enables precise registration of the OCT images on the fundus. 3D segmentation algorithms were developed for the calculation of a retinal thickness map. High-quality OCT images of the retina of mice (B6/SJLF2 for normal retina, Rho -/- for photoreceptor degeneration and LHBETATAG for retinoblastoma) and rats (Wistar for normal retina) were acquired, in which all the retinal layers can be clearly recognized. The calculated retinal thickness map enables quantitative comparison of the retinal thickness distribution between normal and degenerative mouse retinas. The capabilities of the OCT system provide a valuable tool for longitudinal studies of small animal models of ocular diseases.
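
    Once the bounding layer surfaces have been segmented, the thickness map itself is a per-A-scan difference of axial indices scaled by the axial pixel size; a minimal sketch with illustrative numbers (the segmentation step, the hard part, is not shown):

```python
import numpy as np

def thickness_map(top, bottom, dz_um):
    """Thickness map from two segmented layer surfaces, given as
    axial pixel indices per A-scan, scaled by axial pixel size."""
    return (bottom - top) * dz_um

# toy segmentation result over a 4x4 en-face grid of A-scans:
# inner limiting membrane at index 10, RPE at index 75, 3 um/pixel
top = np.full((4, 4), 10)
bottom = np.full((4, 4), 75)
tmap = thickness_map(top, bottom, dz_um=3.0)
```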

  14. Three-dimensional pointwise comparison of human retinal optical property at 845 and 1060 nm using optical frequency domain imaging.

    PubMed

    Chen, Yueli; Burnes, Daina L; de Bruin, Martijn; Mujat, Mircea; de Boer, Johannes F

    2009-01-01

    To compare the optical properties of the human retina, 3-D volumetric images of the same eye are acquired with two nearly identical optical coherence tomography (OCT) systems at center wavelengths of 845 and 1060 nm using optical frequency domain imaging (OFDI). To characterize the contrast of individual tissue layers in the retina at these two wavelengths, the 3-D volumetric data sets are carefully spatially matched. The relative scattering intensities from different layers such as the nerve fiber, photoreceptor, pigment epithelium, and choroid are measured and a quantitative comparison is presented. OCT retinal imaging at 1060 nm is found to have a significantly better depth penetration but a reduced contrast between the retinal nerve fiber, the ganglion cell, and the inner plexiform layers compared to the OCT retinal imaging at 845 nm. PMID:19405746

  15. Three-dimensional pointwise comparison of human retinal optical property at 845 and 1060 nm using optical frequency domain imaging

    NASA Astrophysics Data System (ADS)

    Chen, Yueli; Burnes, Daina L.; de Bruin, Martijn; Mujat, Mircea; de Boer, Johannes F.

    2009-03-01

    To compare the optical properties of the human retina, 3-D volumetric images of the same eye are acquired with two nearly identical optical coherence tomography (OCT) systems at center wavelengths of 845 and 1060 nm using optical frequency domain imaging (OFDI). To characterize the contrast of individual tissue layers in the retina at these two wavelengths, the 3-D volumetric data sets are carefully spatially matched. The relative scattering intensities from different layers such as the nerve fiber, photoreceptor, pigment epithelium, and choroid are measured and a quantitative comparison is presented. OCT retinal imaging at 1060 nm is found to have a significantly better depth penetration but a reduced contrast between the retinal nerve fiber, the ganglion cell, and the inner plexiform layers compared to the OCT retinal imaging at 845 nm.

  16. Accuracy of volume measurement using 3D ultrasound and development of CT-3D US image fusion algorithm for prostate cancer radiotherapy

    SciTech Connect

    Baek, Jihye; Huh, Jangyoung; Hyun An, So; Oh, Yoonjin; Kim, Myungsoo; Kim, DongYoung; Chung, Kwangzoo; Cho, Sungho; Lee, Rena

    2013-02-15

    Purpose: To evaluate the accuracy of volume measurement using three-dimensional ultrasound (3D US), and to verify the feasibility of replacing CT-MR fusion images with CT-3D US images in radiotherapy treatment planning. Methods: Phantoms, consisting of water, contrast agent, and agarose, were manufactured. Their volumes were measured using 3D US, CT, and MR devices. CT-3D US and MR-3D US image fusion software was developed using the Insight Toolkit library in order to acquire three-dimensional fusion images. The quality of the image fusion was evaluated using the metric value and the fusion images. Results: Volume measurement using 3D US shows a 2.8 ± 1.5% error, compared with a 4.4 ± 3.0% error for CT and a 3.1 ± 2.0% error for MR. The results imply that volume measurement using 3D US devices has an accuracy similar to that of CT and MR. Three-dimensional image fusion of CT-3D US and MR-3D US was successfully performed using phantom images. Moreover, MR-3D US image fusion was performed using human bladder images. Conclusions: 3D US could be used in the volume measurement of human bladders and prostates. CT-3D US image fusion could be used in monitoring the target position in each fraction of external beam radiation therapy. Moreover, the feasibility of replacing CT-MR image fusion with CT-3D US in radiotherapy treatment planning was verified.

  17. Enhancing retinal images by extracting structural information

    NASA Astrophysics Data System (ADS)

    Molodij, G.; Ribak, E. N.; Glanc, M.; Chenegros, G.

    2014-02-01

    High-resolution imaging of the retina has significant importance for science: physics and optics, biology, and medicine. The enhancement of images with poor contrast and the detection of faint structures require objective methods for assessing perceptual image quality. Under the assumption that human visual perception is highly adapted for extracting structural information from a scene, we introduce a framework for quality assessment based on the degradation of structural information. We implemented a new processing technique on a long sequence of retinal images of subjects with normal vision. We were able to perform a precise shift-and-add at the sub-pixel level in order to resolve structures the size of single cells in the living human retina. Finally, we quantified the restoration reliability of the distorted images using an improved quality assessment. To that purpose, we used the single-image restoration method based on the ergodic principle, which originated in solar astronomy, to deconvolve aberrations after adaptive optics compensation.

  18. 3D Image Reconstructions and the Nyquist-Shannon Theorem

    NASA Astrophysics Data System (ADS)

    Ficker, T.; Martišek, D.

    2015-09-01

    Fracture surfaces are occasionally modelled by two-dimensional Fourier series that can be converted into digital 3D reliefs mapping the morphology of solid surfaces. Such digital replicas may suffer from various artefacts when processed improperly. Spatial aliasing is one of the artefacts that may devalue Fourier replicas. According to the Nyquist-Shannon sampling theorem, spatial aliasing occurs when Fourier frequencies exceed the Nyquist critical frequency. In the present paper it is shown that the Nyquist frequency is not the only critical limit determining aliasing artefacts; there are other frequencies that intensify aliasing phenomena and form an infinite set of points at which numerical results abruptly and dramatically change their values. This unusual type of spatial aliasing is explored and some consequences for 3D computer reconstructions are presented.
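
    The folding described by the sampling theorem is easy to demonstrate numerically: a frequency above the Nyquist limit produces exactly the same samples as its alias below it (a generic illustration, not the paper's unusual aliasing case):

```python
import numpy as np

# Sampling at fs = 100 Hz, the Nyquist critical frequency is fs/2 = 50 Hz.
# A 70 Hz component exceeds it and is folded down to |fs - 70| = 30 Hz:
# the two sampled sequences below are numerically indistinguishable.
fs = 100.0
t = np.arange(0, 1, 1 / fs)
x_high = np.cos(2 * np.pi * 70 * t)    # above Nyquist
x_alias = np.cos(2 * np.pi * 30 * t)   # its alias below Nyquist
max_diff = np.max(np.abs(x_high - x_alias))
```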

  19. Computation of optimized arrays for 3-D electrical imaging surveys

    NASA Astrophysics Data System (ADS)

    Loke, M. H.; Wilkinson, P. B.; Uhlemann, S. S.; Chambers, J. E.; Oxby, L. S.

    2014-12-01

    3-D electrical resistivity surveys and inversion models are required to accurately resolve structures in areas with very complex geology where 2-D models might suffer from artefacts. Many 3-D surveys use a grid where the number of electrodes along one direction (x) is much greater than in the perpendicular direction (y). Frequently, due to limitations in the number of independent electrodes in the multi-electrode system, the surveys use a roll-along system with a small number of parallel survey lines aligned along the x-direction. The `Compare R' array optimization method previously used for 2-D surveys is adapted for such 3-D surveys. Offset versions of the inline arrays used in 2-D surveys are included in the number of possible arrays (the comprehensive data set) to improve the sensitivity to structures in between the lines. The array geometric factor and its relative error are used to filter out potentially unstable arrays in the construction of the comprehensive data set. Comparisons of the conventional (consisting of dipole-dipole and Wenner-Schlumberger arrays) and optimized arrays are made using a synthetic model and experimental measurements in a tank. The tests show that structures located between the lines are better resolved with the optimized arrays. The optimized arrays also have significantly better depth resolution compared to the conventional arrays.
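
    The array geometric factor used in the filtering step is, for a homogeneous half-space with surface electrodes, k = 2*pi / (1/AM - 1/AN - 1/BM + 1/BN), where A, B are the current electrodes and M, N the potential electrodes; a minimal sketch (the paper's relative-error filter is not reproduced here):

```python
import numpy as np

def geometric_factor(a, b, m, n):
    """Geometric factor k of a four-electrode array on the surface of
    a homogeneous half-space (current electrodes a, b; potential
    electrodes m, n). Large |k| means small measured voltages and
    hence potentially unstable arrays."""
    d = lambda p, q: np.linalg.norm(np.asarray(p, float) - np.asarray(q, float))
    return 2 * np.pi / (1 / d(a, m) - 1 / d(a, n) - 1 / d(b, m) + 1 / d(b, n))

# Wenner alpha array A-M-N-B with unit spacing a = 1: k = 2*pi*a
k = geometric_factor((0, 0), (3, 0), (1, 0), (2, 0))
```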

  20. Seeing More Is Knowing More: V3D Enables Real-Time 3D Visualization and Quantitative Analysis of Large-Scale Biological Image Data Sets

    NASA Astrophysics Data System (ADS)

    Peng, Hanchuan; Long, Fuhui

    Everyone understands seeing more is knowing more. However, for large-scale 3D microscopic image analysis, it has not been an easy task to efficiently visualize, manipulate and understand high-dimensional data in 3D, 4D or 5D spaces. We developed a new 3D+ image visualization and analysis platform, V3D, to meet this need. The V3D system provides 3D visualization of gigabyte-sized microscopy image stacks in real time on current laptops and desktops. V3D streamlines the online analysis, measurement and proofreading of complicated image patterns by combining ergonomic functions for selecting a location in an image directly in 3D space and for displaying biological measurements, such as from fluorescent probes, using the overlaid surface objects. V3D runs on all major computer platforms and can be enhanced by software plug-ins to address specific biological problems. To demonstrate this extensibility, we built a V3D-based application, V3D-Neuron, to reconstruct complex 3D neuronal structures from high-resolution brain images. V3D-Neuron can precisely digitize the morphology of a single neuron in a fruitfly brain in minutes, with about a 17-fold improvement in reliability and tenfold savings in time compared with other neuron reconstruction tools. Using V3D-Neuron, we demonstrate the feasibility of building a high-resolution 3D digital atlas of neurite tracts in the fruitfly brain. V3D can be easily extended using a simple-to-use and comprehensive plugin interface.

  1. Imaging system for creating 3D block-face cryo-images of whole mice

    NASA Astrophysics Data System (ADS)

    Roy, Debashish; Breen, Michael; Salvado, Olivier; Heinzel, Meredith; McKinley, Eliot; Wilson, David

    2006-03-01

    We developed a cryomicrotome/imaging system that provides high-resolution, high-sensitivity block-face images of whole mice or excised organs, and applied it in a variety of biological studies. With this cryo-imaging system, we sectioned cryo-preserved tissues at 2-40 μm thickness and acquired high-resolution brightfield and fluorescence images with microscopic in-plane resolution (as good as 1.2 μm). Brightfield images of normal and pathological anatomy show exquisite detail, especially in the abdominal cavity. Multi-planar reformatting and 3D renderings allow one to interrogate 3D structures. In this report, we present brightfield images of mouse anatomy, as well as 3D renderings of organs. For the BPK mouse model of polycystic kidney disease, we compared brightfield cryo-images and kidney volumes with MRI. The color images provided greater contrast and resolution of cysts as compared to in vivo MRI. We note that color cryo-images are closer to what a researcher sees in dissection, making it easier to interpret the image data. The combination of field of view, depth of field, ultra-high resolution and color/fluorescence contrast enables cryo-image volumes to provide details that cannot be found through in vivo imaging or other ex vivo optical imaging approaches. We believe that this novel imaging system will have applications that include identification of mouse phenotypes; characterization of diseases like blood vessel disease, kidney disease, and cancer; assessment of drug and gene therapy delivery and efficacy; and validation of other imaging modalities.

  2. Holographic imaging of 3D objects on dichromated polymer systems

    NASA Astrophysics Data System (ADS)

    Lemelin, Guylain; Jourdain, Anne; Manivannan, Gurusamy; Lessard, Roger A.

    1996-01-01

    Conventional volume transmission holograms of a 3D scene were recorded on dichromated poly(acrylic acid) (DCPAA) films under 488 nm light. The holographic characterization and the quality of reconstruction have been studied by varying the influencing parameters, such as the concentrations of dichromate and electron donor and the molecular weight of the polymer matrix. Ammonium and potassium dichromate have been employed to sensitize the poly(acrylic acid) matrix. The recorded hologram can be efficiently reconstructed either with red light or with low-energy light in the blue region, without any post-exposure thermal or chemical processing.

  3. Flash trajectory imaging of target 3D motion

    NASA Astrophysics Data System (ADS)

    Wang, Xinwei; Zhou, Yan; Fan, Songtao; He, Jun; Liu, Yuliang

    2011-03-01

    We present a flash trajectory imaging technique that can directly obtain a target's trajectory and realize non-contact measurement of motion parameters by range-gated imaging and time delay integration. Range-gated imaging gives the range of targets and enables silhouette detection, which directly extracts targets from a complex background and decreases the complexity of moving-target image processing. Time delay integration increases the information in a single frame so that the motion trajectory can be obtained directly. In this paper, we study the flash trajectory imaging algorithm and report initial experiments that successfully obtained the trajectory of a falling badminton shuttlecock. Our research demonstrates that flash trajectory imaging is an effective approach to imaging target trajectories and can give the motion parameters of moving targets.

  4. Dual-color 3D superresolution microscopy by combined spectral-demixing and biplane imaging.

    PubMed

    Winterflood, Christian M; Platonova, Evgenia; Albrecht, David; Ewers, Helge

    2015-07-01

    Multicolor three-dimensional (3D) superresolution techniques allow important insight into the relative organization of cellular structures. While a number of innovative solutions have emerged, multicolor 3D techniques still face significant technical challenges. In this Letter we provide a straightforward approach to single-molecule localization microscopy imaging in three dimensions and two colors. We combine biplane imaging and spectral-demixing, which eliminates a number of problems, including color cross-talk, chromatic aberration effects, and problems with color registration. We present 3D dual-color images of nanoscopic structures in hippocampal neurons with a 3D compound resolution routinely achieved only in a single color. PMID:26153696

  5. Retinal area detector from scanning laser ophthalmoscope (SLO) images for diagnosing retinal diseases.

    PubMed

    Haleem, Muhammad Salman; Han, Liangxiu; van Hemert, Jano; Li, Baihua; Fleming, Alan

    2015-07-01

    Scanning laser ophthalmoscopes (SLOs) can be used for early detection of retinal diseases. The advantage of the latest SLO screening technology is its wide field of view, which can image a large part of the retina for better diagnosis of retinal diseases. During the imaging process, however, artefacts such as eyelashes and eyelids are imaged along with the retinal area, and excluding them is a significant challenge. In this paper, we propose a novel approach to automatically extract the true retinal area from an SLO image using image processing and machine learning. To reduce the complexity of the image processing tasks and provide a convenient primitive image pattern, we group pixels into regions, called superpixels, based on regional size and compactness. The framework then computes image-based features reflecting textural and structural information and classifies each region as retinal area or artefact. Experimental evaluation shows good performance, with an overall accuracy of 92%. PMID:25167560
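
    The region-wise feature extraction and classification pipeline can be sketched as follows (the features, threshold, and two-region label map below are illustrative stand-ins; the paper's superpixel generation and trained classifier are not reproduced here):

```python
import numpy as np

def region_features(img, labels):
    """Mean intensity and local variability per superpixel-like region
    (illustrative stand-ins for textural/structural features)."""
    feats = {}
    for r in np.unique(labels):
        pix = img[labels == r]
        feats[r] = (pix.mean(), pix.std())
    return feats

# Toy SLO image: bright textured retina on the left, dark eyelash
# artefact on the right, with a hand-made two-region label map.
rng = np.random.default_rng(0)
img = np.hstack([0.6 + 0.1 * rng.standard_normal((8, 4)),
                 0.05 * np.ones((8, 4))])
labels = np.hstack([np.zeros((8, 4), int), np.ones((8, 4), int)])

feats = region_features(img, labels)
retina = [r for r, (mean, std) in feats.items() if mean > 0.3]
print(retina)   # region 0 classified as true retinal area
```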

  6. 3D image reconstruction algorithms for cryo-electron-microscopy images of virus particles

    NASA Astrophysics Data System (ADS)

    Doerschuk, Peter C.; Johnson, John E.

    2000-11-01

    A statistical model of the object and the complete image formation process in cryo-electron microscopy of viruses is presented. Using this model, maximum-likelihood reconstructions of the 3D structure of viruses are computed with the expectation-maximization algorithm, and an example based on Cowpea mosaic virus is provided.
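
    The expectation-maximization approach to maximum-likelihood reconstruction can be illustrated with the classic MLEM update for a linear Poisson measurement model (a generic sketch; the authors' virus-specific statistical model is far richer):

```python
import numpy as np

def mlem(A, y, n_iter=500):
    """Maximum-likelihood EM (MLEM) update for y ~ Poisson(A @ x):
    x <- x * (A.T @ (y / (A @ x))) / (A.T @ 1)."""
    x = np.ones(A.shape[1])
    sens = A.T @ np.ones(A.shape[0])    # sensitivity (column sums of A)
    for _ in range(n_iter):
        x *= (A.T @ (y / (A @ x))) / sens
    return x

# Tiny toy system with a known object and noise-free "measurements".
A = np.array([[1.0, 0.5],
              [0.2, 1.0],
              [0.7, 0.3]])
x_true = np.array([2.0, 3.0])
y = A @ x_true
x_hat = mlem(A, y)
print(np.round(x_hat, 3))    # approaches [2. 3.]
```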

  7. 3D image display of fetal ultrasonic images by thin shell

    NASA Astrophysics Data System (ADS)

    Wang, Shyh-Roei; Sun, Yung-Nien; Chang, Fong-Ming; Jiang, Ching-Fen

    1999-05-01

    Owing to its convenience and non-invasiveness, ultrasound has become an essential tool in obstetrics for diagnosing fetal abnormalities during pregnancy. However, the noisy and blurry nature of ultrasound data makes rendering a challenge in comparison with MRI and CT images: besides speckle noise, unwanted objects often occlude the target to be observed. In this paper, we propose a new system that can effectively suppress speckle noise, extract the target object, and clearly render the 3D fetal image in near real time from 3D ultrasound data. The system is based on a deformable model that detects object contours according to the local image features of ultrasound. In addition, to accelerate rendering, a thin shell defined from the detected contours separates the observed organ from unrelated structures. In this way the system supports quick 3D display of ultrasound, making efficient visualization of 3D fetal ultrasound possible.

  8. Infrared imaging of the polymer 3D-printing process

    NASA Astrophysics Data System (ADS)

    Dinwiddie, Ralph B.; Kunc, Vlastimil; Lindal, John M.; Post, Brian; Smith, Rachel J.; Love, Lonnie; Duty, Chad E.

    2014-05-01

    Both mid-wave and long-wave IR cameras are used to measure various temperature profiles in thermoplastic parts as they are printed. Two significantly different 3D printers are used in this study. The first is a small-scale, commercially available Solidoodle 3 printer, which prints parts with layer thicknesses on the order of 125 μm. The second is a "Big Area Additive Manufacturing" (BAAM) 3D printer developed at Oak Ridge National Laboratory, which prints parts with a layer thickness of 4.06 mm. Of particular interest is the temperature of the previously deposited layer as the new hot layer is about to be extruded onto it: the two layers are expected to have a stronger bond if the temperature of the substrate layer is above the glass transition temperature. This paper describes the measurement technique and results of a study of temperature decay and substrate layer temperature for ABS thermoplastic with and without the addition of chopped carbon fibers.
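
    Whether the substrate layer is still above the glass transition temperature when the next layer arrives can be estimated with a simple Newtonian-cooling sketch (the extrusion temperature and time constant below are hypothetical, not measured values from the paper; the glass transition of ABS is roughly 105 °C):

```python
import numpy as np

T_ENV, T_EXTRUDE, T_G = 25.0, 230.0, 105.0  # degC; Tg of ABS ~105 degC
TAU = 40.0  # s, hypothetical lumped-capacitance cooling time constant

def layer_temp(t):
    """Newtonian (exponential) cooling of a deposited layer toward ambient."""
    return T_ENV + (T_EXTRUDE - T_ENV) * np.exp(-t / TAU)

# Is the substrate still above Tg when the next layer lands 30 s later?
print(layer_temp(30.0) > T_G)   # True with these toy parameters
```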

  9. V3D enables real-time 3D visualization and quantitative analysis of large-scale biological image data sets

    PubMed Central

    Peng, Hanchuan; Ruan, Zongcai; Long, Fuhui; Simpson, Julie H.; Myers, Eugene W.

    2010-01-01

    The V3D system provides three-dimensional (3D) visualization of gigabyte-sized microscopy image stacks in real time on current laptops and desktops. Combined with highly ergonomic features for selecting an X, Y, Z location of an image directly in 3D space and for visualizing overlays of a variety of surface objects, V3D streamlines the on-line analysis, measurement, and proofreading of complicated image patterns. V3D is cross-platform and can be enhanced by plug-ins. We built V3D-Neuron on top of V3D to reconstruct complex 3D neuronal structures from large brain images. V3D-Neuron enables us to precisely digitize the morphology of a single neuron in a fruit fly brain in minutes, with about 17-fold improvement in reliability and 10-fold savings in time compared to other neuron reconstruction tools. Using V3D-Neuron, we demonstrated the feasibility of building a 3D digital atlas of neurite tracts in the fruit fly brain. PMID:20231818

  10. Quality Prediction of Asymmetrically Distorted Stereoscopic 3D Images.

    PubMed

    Wang, Jiheng; Rehman, Abdul; Zeng, Kai; Wang, Shiqi; Wang, Zhou

    2015-11-01

    Objective quality assessment of distorted stereoscopic images is a challenging problem, especially when the distortions in the left and right views are asymmetric. Existing studies suggest that simply averaging the quality of the left and right views predicts the quality of symmetrically distorted stereoscopic images well, but generates substantial prediction bias when applied to asymmetrically distorted stereoscopic images. In this paper, we first build a database that contains single-view images as well as symmetrically and asymmetrically distorted stereoscopic images. We then carry out a subjective test, in which we find that the quality prediction bias of the asymmetrically distorted images can lean in opposite directions (overestimation or underestimation), depending on the distortion types and levels. Our subjective test also suggests that the eye dominance effect does not have a strong impact on the visual quality decisions for stereoscopic images. Furthermore, we develop an information content and divisive normalization-based pooling scheme that improves upon structural similarity in estimating the quality of single-view images. Finally, we propose a binocular rivalry-inspired multi-scale model to predict the quality of stereoscopic images from that of the single-view images. Our results show that the proposed model, without explicitly identifying image distortion types, successfully eliminates the prediction bias, leading to significantly improved quality prediction of stereoscopic images. PMID:26087491

  11. Real-time imaging of tissue optical properties and surface profile using 3D-SSOP

    NASA Astrophysics Data System (ADS)

    Van de Giessen, Martijn; Angelo, Joseph; Vargas, Christina; Gioux, Sylvain

    2015-03-01

    Wide-field optical tissue characterization has large clinical potential that is currently unexploited due to the lack of real-time imaging methods. In this work we propose 3D single-shot optical properties imaging (3D-SSOP), a new acquisition and processing method for obtaining surface-profile-corrected tissue absorption and reduced scattering coefficient maps from a single image. A pattern sensitive to both the optical properties and the surface profile is projected onto the tissue; through image processing, the two responses are separated, yielding surface-profile-corrected tissue optical properties comparable to those from profile-corrected spatial frequency domain imaging (3D-SFDI). Overall, 3D-SSOP estimates showed a small bias of -1.2% in both μa and μs' in comparison with 3D-SFDI, and standard deviations on flat surfaces that were 7% (μa) and 17% (μs') lower than for 3D-SFDI. However, 3D-SSOP showed significant artifacts near edges, where spatial averaging caused inaccuracies in the diffuse reflectance estimates as well as in the surface profile. In an in-vivo experiment on a hand, the optical property estimates were equivalent, but processing artifacts suppressed smaller details with 3D-SSOP. To our knowledge, this is the first method to estimate surface-profile-corrected tissue optical properties from a single image, and we therefore expect it to be an important step in bringing real-time wide-field tissue characterization to the operating room.

  12. Real-time computer-generated integral imaging and 3D image calibration for augmented reality surgical navigation.

    PubMed

    Wang, Junchen; Suenaga, Hideyuki; Liao, Hongen; Hoshi, Kazuto; Yang, Liangjing; Kobayashi, Etsuko; Sakuma, Ichiro

    2015-03-01

    Autostereoscopic 3D image overlay for augmented reality (AR) based surgical navigation has been studied and reported many times. For surgical overlay, the 3D image is expected to have the same geometric shape as the original organ and to be transformable to a specified location for overlay. However, how to generate a 3D image with high geometric fidelity, and how to quantitatively evaluate a 3D image's geometric accuracy, had not been addressed. This paper proposes a graphics processing unit (GPU) based computer-generated integral imaging pipeline for real-time autostereoscopic 3D display, and an automatic closed-loop 3D image calibration paradigm for displaying undistorted 3D images. Based on the proposed methods, a novel AR device for 3D image surgical overlay is presented, consisting mainly of a 3D display, an AR window, a stereo camera for 3D measurement, and a workstation for information processing. Evaluation of the 3D image rendering performance at 2560×1600 elemental image resolution shows rendering speeds of 50-60 frames per second (fps) for surface models and 5-8 fps for large medical volumes. Evaluation of the undistorted 3D image after calibration yields sub-millimeter geometric accuracy. A phantom experiment simulating oral and maxillofacial surgery was also performed to evaluate the proposed AR overlay device in terms of image registration accuracy, 3D image overlay accuracy, and the visual effects of the overlay. The experimental results show satisfactory image registration and overlay accuracy, and confirm the system's usability. PMID:25465067

  13. Realization of real-time interactive 3D image holographic display [Invited].

    PubMed

    Chen, Jhen-Si; Chu, Daping

    2016-01-20

    Realization of a 3D image holographic display supporting real-time interaction requires fast actions in data uploading, hologram calculation, and image projection. These three key elements are reviewed and discussed, and algorithms for rapid hologram calculation are presented with the corresponding results. We also present our vision of interactive holographic 3D displays. PMID:26835944

  14. Perceptual quality measurement of 3D images based on binocular vision.

    PubMed

    Zhou, Wujie; Yu, Lu

    2015-07-20

    Three-dimensional (3D) technology has become immensely popular in recent years and widely adopted in various applications. Hence, perceptual quality measurement of symmetrically and asymmetrically distorted 3D images has become an important, fundamental, and challenging issue in 3D imaging research. In this paper, we propose a binocular-vision-based 3D image-quality measurement (IQM) metric. Consideration of the 3D perceptual properties of the primary visual cortex (V1) and the higher visual areas (V2) for 3D-IQM is the major technical contribution to this research. To be more specific, first, the metric simulates the receptive fields of complex cells (V1) using binocular energy response and binocular rivalry response and the higher visual areas (V2) using local binary patterns features. Then, three similarity scores of 3D perceptual properties between the reference and distorted 3D images are measured. Finally, by using support vector regression, three similarity scores are integrated into an overall 3D quality score. Experimental results for two public benchmark databases demonstrate that, in comparison with most current 2D and 3D metrics, the proposed metric achieves significantly higher consistency in alignment with subjective fidelity ratings. PMID:26367842
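
    The local binary pattern features used to model the higher visual areas can be sketched with the basic 8-neighbour LBP code (a minimal version; the paper's full metric also uses binocular energy and rivalry responses plus support vector regression):

```python
import numpy as np

def lbp8(img):
    """Basic 8-neighbour local binary pattern code for each interior pixel."""
    c = img[1:-1, 1:-1]
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(shifts):
        nb = img[1 + dy:img.shape[0] - 1 + dy,
                 1 + dx:img.shape[1] - 1 + dx]
        code |= ((nb >= c).astype(np.uint8) << bit)  # set bit if neighbour >= centre
    return code

img = np.array([[5, 5, 5],
                [5, 4, 5],
                [5, 5, 5]], dtype=float)
print(lbp8(img))   # all 8 neighbours >= centre, so the code is 255
```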

  15. 3-D Target Location from Stereoscopic SAR Images

    SciTech Connect

    DOERRY,ARMIN W.

    1999-10-01

    SAR range-Doppler images are inherently 2-dimensional. Targets with a height offset lay over onto offset range and azimuth locations. Exactly which image locations are laid over depends on the imaging geometry, including depression angle, squint angle, and target bearing. This is the well-known layover phenomenon. Images formed with different aperture geometries will exhibit different layover characteristics, and these differences can be exploited to ascertain target height information in a stereoscopic manner. Depending on the imaging geometries, height accuracy can be on the order of horizontal position accuracy, thereby rivaling the best IFSAR capabilities in fine-resolution SAR images. All that is required for this to work are two distinct passes with suitably different geometries from any plain old SAR.

  16. Multiview image integration system for glassless 3D display

    NASA Astrophysics Data System (ADS)

    Ando, Takahisa; Mashitani, Ken; Higashino, Masahiro; Kanayama, Hideyuki; Murata, Haruhiko; Funazou, Yasuo; Sakamoto, Naohisa; Hazama, Hiroshi; Ebara, Yasuo; Koyamada, Koji

    2005-03-01

    We have developed a multi-view image integration system that combines seven parallax video images into a single video image so that it fits the parallax barrier. The apertures of this barrier are not stripes but tiny rectangles arranged in the shape of stairs. Commodity hardware is used to satisfy a specification requiring that each parallax video image have SXGA (1645×800 pixel) resolution, that the resulting integrated image have QUXGA-W (3840×2400 pixel) resolution, and that the frame rate be fifteen frames per second. The point is that the system can provide QUXGA-W video, which corresponds to 27 MB per frame, at 15 fps, that is, about 2 Gbps. Using the integration system and a liquid crystal display with the parallax barrier, we can enjoy an immersive live video image supporting seven viewpoints without special glasses. In addition, since the system can superimpose CG images from the corresponding seven viewpoints onto the live video, it is possible to communicate with remote users by sharing a virtual object.

  17. Intrinsic optical signal imaging of retinal physiology: a review

    NASA Astrophysics Data System (ADS)

    Yao, Xincheng; Wang, Benquan

    2015-09-01

    Intrinsic optical signal (IOS) imaging promises to be a noninvasive method for high-resolution examination of retinal physiology, which can advance the study and diagnosis of eye diseases. While specialized optical instruments are desirable for functional IOS imaging of retinal physiology, an in-depth understanding of the multiple IOS sources in the complex retinal neural network is essential for optimizing instrument designs. We provide a brief overview of IOS studies in rod outer segment suspensions, isolated retinas, and intact eyes, and of the relationships among them. Recent developments of line-scan confocal and functional optical coherence tomography (OCT) instruments have allowed in vivo IOS mapping of photoreceptor physiology, and further improvements to these systems may provide a feasible route toward functional IOS mapping of human photoreceptors. Some interesting IOSs have already been detected in inner retinal layers, but better development of the IOS instruments and software algorithms is required to achieve optimal physiological assessment of inner retinal neurons.

  18. 100-inch 3D real-image rear-projection display system based on Fresnel lens

    NASA Astrophysics Data System (ADS)

    Jang, Sun-Joo; Kim, Seung-Chul; Koo, Jung-Sik; Park, Jung-Il; Kim, Eun-Soo

    2004-11-01

    In this paper, as an approach to a wide 3D real-image display without special glasses, a 100-inch Fresnel-lens-based 3D real-image rear-projection display system is implemented; its physical size is 2800×2800×1600 mm in length, width, and depth. In this system, a conventional 2D video image is projected into the air through projection optics and a pair of Fresnel lenses, forming a floating video image with real depth. In experiments with test video images, the floated 3D video images were realistically viewed: the forward depth of the floated 3D image from the display screen was found to be 35-47 inches and the viewing angle 60 degrees. This feasibility test of the prototype 100-inch Fresnel-lens-based 3D real-image rear-projection display suggests the possibility of practical applications in 3D advertisements, 3D animations, 3D games, and so on.
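
    The floating of a real image in front of the screen follows the thin-lens relation; a minimal sketch with hypothetical distances (not the paper's actual optical design):

```python
def floated_image_distance(u_mm, f_mm):
    """Thin-lens sketch: an object at distance u in front of a Fresnel
    lens of focal length f (with u > f) forms a real image at
    v = 1 / (1/f - 1/u) beyond the lens, which the viewer perceives
    as floating in the air."""
    return 1.0 / (1.0 / f_mm - 1.0 / u_mm)

# Hypothetical numbers: projected image 2 m behind a 1 m focal-length lens.
v = floated_image_distance(2000.0, 1000.0)
print(v)   # 2000.0 mm in front of the lens
```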

  19. Light sheet adaptive optics microscope for 3D live imaging

    NASA Astrophysics Data System (ADS)

    Bourgenot, C.; Taylor, J. M.; Saunter, C. D.; Girkin, J. M.; Love, G. D.

    2013-02-01

    We report on the incorporation of adaptive optics (AO) into the imaging arm of a selective plane illumination microscope (SPIM). SPIM has recently emerged as an important tool for life science research due to its ability to deliver high-speed, optically sectioned, time-lapse microscope images from deep within selected in vivo samples. SPIM is a very interesting system for the incorporation of AO, as the illumination and imaging paths are decoupled and AO may be useful in both. In this paper, we report the use of AO in the imaging path of a SPIM, demonstrating significant improvement in the image quality of a live GFP-labeled transgenic zebrafish embryo heart using a modal, wavefront-sensorless approach and a heart synchronization method. These experimental results are linked to a computational model showing that significant aberrations are produced by the tube holding the sample, in addition to the aberration from the biological sample itself.

  20. Registration of multi-view apical 3D echocardiography images

    NASA Astrophysics Data System (ADS)

    Mulder, H. W.; van Stralen, M.; van der Zwaan, H. B.; Leung, K. Y. E.; Bosch, J. G.; Pluim, J. P. W.

    2011-03-01

    Real-time three-dimensional echocardiography (RT3DE) is a non-invasive method to visualize the heart; however, it suffers from non-uniform image quality and a limited field of view. Image quality can be improved by fusing multiple echocardiography images, and successful registration of the images is essential for successful fusion. This study therefore examines the performance of different methods for intrasubject registration of multi-view apical RT3DE images. A total of 14 data sets were annotated by two observers, who indicated the position of the apex and four points on the mitral valve ring; these annotations were used to evaluate registration. Multi-view end-diastolic (ED) as well as end-systolic (ES) images were rigidly registered in a multi-resolution strategy. The performance of single-frame and multi-frame registration was examined, where multi-frame registration optimizes the metric for several time frames simultaneously. Furthermore, the suitability of mutual information (MI) as the similarity measure was compared with normalized cross-correlation (NCC). To initialize the registration, a transformation describing the probe movement was obtained by manually registering five representative data sets. It was found that multi-frame registration can improve results with respect to single-frame registration, and that NCC outperformed MI as the similarity measure. When NCC was optimized in a multi-frame registration strategy including both ED and ES time frames, the performance of the automatic method was comparable to that of manual registration. In conclusion, automatic registration of RT3DE images performs as well as manual registration; as registration precedes image fusion, this method can contribute to improved quality of echocardiography images.
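
    The multi-frame strategy, optimizing one similarity value over several time frames at once, can be sketched by averaging NCC over corresponding ED and ES frame pairs (a simplified stand-in for the registration metric):

```python
import numpy as np

def ncc(a, b):
    """Normalised cross-correlation of two images of equal shape."""
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    return np.mean(a * b)

def multi_frame_ncc(fixed_frames, moving_frames):
    """Multi-frame similarity: average NCC over corresponding time
    frames (e.g. ED and ES), to be optimised jointly during registration."""
    return np.mean([ncc(f, m) for f, m in zip(fixed_frames, moving_frames)])

rng = np.random.default_rng(1)
ed, es = rng.random((8, 8)), rng.random((8, 8))
score = multi_frame_ncc([ed, es], [ed, es])
print(score)   # identical frame pairs give NCC = 1
```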

  1. ICER-3D: A Progressive Wavelet-Based Compressor for Hyperspectral Images

    NASA Technical Reports Server (NTRS)

    Kiely, A.; Klimesh, M.; Xie, H.; Aranki, N.

    2005-01-01

    ICER-3D is a progressive, wavelet-based compressor for hyperspectral images. ICER-3D is derived from the ICER image compressor. ICER-3D can provide lossless and lossy compression, and incorporates an error-containment scheme to limit the effects of data loss during transmission. The three-dimensional wavelet decomposition structure used by ICER-3D exploits correlations in all three dimensions of hyperspectral data sets, while facilitating elimination of spectral ringing artifacts. Correlation is further exploited by a context modeler that effectively exploits spectral dependencies in the wavelet-transformed hyperspectral data. Performance results illustrating the benefits of these features are presented.
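
    The three-dimensional wavelet decomposition idea can be illustrated with one level of a separable 3D Haar transform (a generic sketch; ICER-3D's actual filters, context modeler, and error-containment scheme are more elaborate):

```python
import numpy as np

def haar3d_level(cube):
    """One level of a separable 3D Haar transform: average/detail
    pairs taken along each of the three axes in turn."""
    for axis in range(3):
        lo = (cube.take(range(0, cube.shape[axis], 2), axis) +
              cube.take(range(1, cube.shape[axis], 2), axis)) / 2.0
        hi = (cube.take(range(0, cube.shape[axis], 2), axis) -
              cube.take(range(1, cube.shape[axis], 2), axis)) / 2.0
        cube = np.concatenate([lo, hi], axis=axis)
    return cube

# A constant hyperspectral cube keeps all its energy in the LLL band.
cube = np.full((4, 4, 4), 7.0)
out = haar3d_level(cube)
print(out[:2, :2, :2])   # approximation band stays at 7; detail bands are 0
```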

  2. ICER-3D: A Progressive Wavelet-Based Compressor for Hyperspectral Images

    NASA Astrophysics Data System (ADS)

    Kiely, A.; Klimesh, M.; Xie, H.; Aranki, N.

    2006-02-01

    ICER-3D is a progressive, wavelet-based compressor for hyperspectral images. ICER-3D is derived from the ICER image compressor. ICER-3D can provide lossless and lossy compression, and incorporates an error-containment scheme to limit the effects of data loss during transmission. The three-dimensional wavelet decomposition structure used by ICER-3D exploits correlations in all three dimensions of hyperspectral data sets, while facilitating elimination of spectral ringing artifacts. Correlation is further exploited by a context modeler that effectively exploits spectral dependencies in the wavelet-transformed hyperspectral data. Performance results illustrating the benefits of these features are presented.

  3. Real-time auto-stereoscopic visualization of 3D medical images

    NASA Astrophysics Data System (ADS)

    Portoni, Luisa; Patak, Alexandre; Noirard, Pierre; Grossetie, Jean-Claude; van Berkel, Cees

    2000-04-01

    The work described here concerns multi-viewer auto-stereoscopic visualization of 3D models of anatomical structures and organs of the human body. High-quality 3D models of more than 1600 anatomical structures have been reconstructed using the Visualization Toolkit, a freely available C++ class library for 3D graphics and visualization; the 2D images used for 3D reconstruction come from the Visible Human Data Set. Auto-stereoscopic 3D visualization is obtained using a prototype monitor developed at Philips Research Labs, UK. This special multiview 3D-LCD screen is connected directly to an SGI workstation, where the 3D reconstruction and medical imaging applications are executed, and dedicated software has been developed to implement the multiview capability. A number of static or animated contemporary views of the same object can be seen simultaneously on the 3D-LCD screen by several observers, who get a real 3D perception of the visualized scene without extra media such as dedicated glasses or head-mounted displays. The software applications allow real-time interaction with the visualized 3D models, and didactic animations and movies have been realized as well.

  4. Enhanced imaging colonoscopy facilitates dense motion-based 3D reconstruction.

    PubMed

    Alcantarilla, Pablo F; Bartoli, Adrien; Chadebecq, Francois; Tilmant, Christophe; Lepilliez, Vincent

    2013-01-01

    We propose a novel approach for estimating a dense 3D model of neoplasia in colonoscopy using enhanced imaging endoscopy modalities. Estimating a dense 3D model of neoplasia is important to make 3D measurements and to classify the superficial lesions in standard frameworks such as the Paris classification. However, it is challenging to obtain decent dense 3D models using computer vision techniques such as Structure-from-Motion due to the lack of texture in conventional (white light) colonoscopy. Therefore, we propose to use enhanced imaging endoscopy modalities such as Narrow Band Imaging and chromoendoscopy to facilitate the 3D reconstruction process. Thanks to the use of these enhanced endoscopy techniques, visualization is improved, resulting in more reliable feature tracks and 3D reconstruction results. We first build a sparse 3D model of neoplasia using Structure-from-Motion from enhanced endoscopy imagery. Then, the sparse reconstruction is densified using a Multi-View Stereo approach, and finally the dense 3D point cloud is transformed into a mesh by means of Poisson surface reconstruction. The obtained dense 3D models facilitate classification of neoplasia in the Paris classification, in which the 3D size and the shape of the neoplasia play a major role in the diagnosis. PMID:24111442

  5. Online reconstruction of 3D magnetic particle imaging data.

    PubMed

    Knopp, T; Hofmann, M

    2016-06-01

    Magnetic particle imaging is a quantitative functional imaging technique that allows imaging of the spatial distribution of super-paramagnetic iron oxide particles at high temporal resolution. Raw data acquisition can be performed at frame rates of more than 40 volumes per second. To date, however, image reconstruction is performed in an offline step, so no direct feedback is available during the experiment; for potential interventional applications such direct feedback would be mandatory. In this work, an online reconstruction framework is implemented that allows direct visualization of the particle distribution on the screen of the acquisition computer with a latency of about 2 s. The reconstruction process is adaptive and performs block-averaging in order to optimize the signal quality for a given amount of reconstruction time. PMID:27182668
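
    The block-averaging step can be sketched as a small buffer that averages the most recent raw frames before handing them to reconstruction, trading temporal resolution for signal quality (a simplified sketch of the idea; in the paper the block size is chosen adaptively against the reconstruction-time budget):

```python
import numpy as np

class BlockAverager:
    """Average the last `block` raw frames before reconstruction."""
    def __init__(self, block):
        self.block, self.buf = block, []

    def push(self, frame):
        self.buf.append(frame)
        if len(self.buf) == self.block:
            avg = np.mean(self.buf, axis=0)
            self.buf.clear()
            return avg          # averaged block, ready for reconstruction
        return None             # still accumulating

avgr = BlockAverager(block=3)
frames = [np.full((2, 2), v) for v in (1.0, 2.0, 3.0)]
outs = [avgr.push(f) for f in frames]
print(outs[-1])                 # mean of the three frames: all 2.0
```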

  6. Adaptive colour transformation of retinal images for stroke prediction.

    PubMed

    Unnikrishnan, Premith; Aliahmad, Behzad; Kawasaki, Ryo; Kumar, Dinesh

    2013-01-01

    Identifying lesions in the retinal vasculature from retinal imaging is most often done on the green channel; however, the effect of colour and of single-channel analysis on feature extraction has not yet been studied. In this paper an adaptive colour transformation is investigated and validated on retinal images associated with 10-year stroke prediction, using principal component analysis (PCA). Histogram analysis indicated that while each single colour channel had a uni-modal distribution, the second PCA component had a bimodal distribution and showed significantly improved separation between the retinal vasculature and the background. The experiments showed that with the adaptive colour transformation, sensitivity and specificity were both higher (AUC 0.73) than when the green channel alone was used (AUC 0.63) for the same database and image features. PMID:24111451
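
    The adaptive colour transformation can be sketched as a per-image PCA over RGB pixel values, taking the second principal component as the analysis channel (a minimal sketch of the idea, using random data rather than a fundus image):

```python
import numpy as np

def second_pc_image(rgb):
    """Project each pixel's RGB value onto the second principal
    component of the image's own colour distribution."""
    X = rgb.reshape(-1, 3).astype(float)
    X -= X.mean(axis=0)
    cov = np.cov(X, rowvar=False)
    vals, vecs = np.linalg.eigh(cov)     # eigenvalues in ascending order
    pc2 = vecs[:, -2]                    # second-largest component
    return (X @ pc2).reshape(rgb.shape[:2])

rng = np.random.default_rng(2)
rgb = rng.random((16, 16, 3))            # stand-in for a retinal image
img2 = second_pc_image(rgb)
print(img2.shape)                        # (16, 16) single-channel image
```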

  7. Adaptive optics with a micromachined membrane deformable mirror for high resolution retinal imaging

    NASA Astrophysics Data System (ADS)

    Zhu, Lijun

    The resolution of conventional retinal imaging technologies is limited by the optics of the human eye. In this dissertation, the aberrations of the eye and their compensation techniques are investigated for the purpose of high-resolution retinal imaging. Both computer modeling and adaptive optics experiments with the novel micromachined membrane deformable mirror (MMDM) device are performed. First, a new aspherical computer eye model is developed to study the aberrations of the eye and their effects on retinal imaging. The aberrations and point-spread functions of the eye are calculated and found to be pupil size dependent and space-variant. The aberration compensation is modeled using customized lens design techniques showing that high-resolution retinal images can be obtained with a dilated pupil through aberration compensation. Due to the space-variant nature and the individual variations of the eye aberrations, adaptive optics techniques are necessary for dynamic aberration compensation. Thus, an experimental adaptive optics retinal imaging system, based on a novel, low- cost, and compact MMDM, is constructed to investigate adaptive optics techniques for eye aberration compensation, where the aberrations are measured using a Hartmann-Shack wavefront sensor. Due to the difficulties in controlling the new MMDM device, a novel control algorithm is developed to generate the desired wavefront for aberration compensation of the eye. The MMDM is characterized and a closed-loop system algorithm is developed for eye aberration compensation in real-time. The system is tested with an artificial eye, showing that it can effectively compensate for low-order and to a certain extent for high-order aberrations of the eye. A diffraction-limited resolution is achieved when the aberrations are within the working range of the MMDM. Aberration compensation and retinal imaging experiments are also performed with real eyes, showing an improved imaging resolution. 
In addition, a preliminary investigation into a complementary adaptive optics approach, using image deconvolution techniques, is conducted to improve retinal image resolution when the aberrations of the eye cannot be completely compensated for by the MMDM. Future research based on this dissertation could lead to high-resolution 3-D retinal imaging.
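
    The closed-loop wavefront correction described above can be sketched as a textbook integrator loop that drives Hartmann-Shack slope measurements to zero through a pseudo-inverse of the mirror's influence matrix (a generic AO sketch, not the dissertation's MMDM-specific control algorithm; all matrices below are synthetic):

```python
import numpy as np

def ao_closed_loop(M, s0, gain=0.5, n_iter=20):
    """Integrator AO loop for the linear model s = s0 + M @ v:
    each step applies v <- v - gain * pinv(M) @ s to null the slopes."""
    ctrl = np.linalg.pinv(M)            # least-squares reconstructor
    v = np.zeros(M.shape[1])
    for _ in range(n_iter):
        s = s0 + M @ v                  # Hartmann-Shack measurement model
        v -= gain * ctrl @ s
    return v, s0 + M @ v

rng = np.random.default_rng(3)
M = rng.standard_normal((12, 4))        # influence matrix: 4 actuators
s0 = M @ rng.standard_normal(4)         # fully correctable aberration
v, residual = ao_closed_loop(M, s0)
print(np.linalg.norm(residual) < 1e-3)  # True: slopes driven to ~0
```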

  8. Imaging of human differentiated 3D neural aggregates using light sheet fluorescence microscopy

    PubMed Central

    Gualda, Emilio J.; Simão, Daniel; Pinto, Catarina; Alves, Paula M.; Brito, Catarina

    2014-01-01

    The development of three-dimensional (3D) cell cultures represents a big step toward a better understanding of cell behavior and disease in a more natural environment, providing not single but multiple cell-type interactions in a complex 3D matrix that closely resembles physiological conditions. Light sheet fluorescence microscopy (LSFM) is becoming an excellent tool for fast imaging of such 3D biological structures. We demonstrate the potential of this technique for imaging human differentiated 3D neural aggregates in fixed and live samples, namely calcium imaging and cell death processes, showing the power of this imaging modality compared with traditional microscopy. The combination of light sheet microscopy and 3D neural cultures will open the door to more challenging experiments involving large-scale drug testing, as well as to a better understanding of relevant biological processes in a more realistic environment. PMID:25161607

  9. Remote laboratory for phase-aided 3D microscopic imaging and metrology

    NASA Astrophysics Data System (ADS)

    Wang, Meng; Yin, Yongkai; Liu, Zeyi; He, Wenqi; Li, Boqun; Peng, Xiang

    2014-05-01

    In this paper, the establishment of a remote laboratory for phase-aided 3D microscopic imaging and metrology is presented. The proposed remote laboratory consists of three major components: the network-based infrastructure for remote control and data management, the identity verification scheme for user authentication and management, and the local experimental system for phase-aided 3D microscopic imaging and metrology. A virtual network computer (VNC) is introduced to remotely control the 3D microscopic imaging system, while data storage and management are handled through the open source project eSciDoc. To secure the remote laboratory, fingerprints are used for authentication with an optical joint transform correlation (JTC) system. The phase-aided fringe projection 3D microscope (FP-3DM), which can be remotely controlled, performs the 3D imaging and metrology of micro objects.

  10. Opti-acoustic stereo imaging: on system calibration and 3-D target reconstruction.

    PubMed

    Negahdaripour, Shahriar; Sekkati, Hicham; Pirsiavash, Hamed

    2009-06-01

    Utilization of an acoustic camera for range measurements is a key advantage for 3-D shape recovery of underwater targets by opti-acoustic stereo imaging, where the associated epipolar geometry of optical and acoustic image correspondences can be described in terms of conic sections. In this paper, we propose methods for system calibration and 3-D scene reconstruction by maximum likelihood estimation from noisy image measurements. The recursive 3-D reconstruction method uses as its initial condition a closed-form solution that integrates the advantages of two other closed-form solutions, referred to as the range and azimuth solutions. Synthetic data tests provide insight into the merits of the new target imaging and 3-D reconstruction paradigm, while experiments with real data confirm the findings based on computer simulations and demonstrate the merits of this novel 3-D reconstruction paradigm. PMID:19380272

  11. Image quality of a cone beam O-arm 3D imaging system

    NASA Astrophysics Data System (ADS)

    Zhang, Jie; Weir, Victor; Lin, Jingying; Hsiung, Hsiang; Ritenour, E. Russell

    2009-02-01

    The O-arm is a cone beam imaging system designed primarily to support orthopedic surgery; it is also used for image-guided and vascular surgery. Using a gantry that can be opened or closed, the O-arm can function as a 2-dimensional (2D) fluoroscopy device or collect 3-dimensional (3D) volumetric imaging data like a CT system. Clinical applications of the O-arm in spine surgical procedures, assessment of pedicle screw position, and kyphoplasty procedures show that the O-arm 3D mode provides enhanced imaging information compared to radiographs or fluoroscopy alone. In this study, the image quality of an O-arm system was quantitatively evaluated. A 20 cm diameter CATPHAN 424 phantom was scanned in 3D mode using the pre-programmed head protocols: small/medium (120 kVp, 100 mAs), large (120 kVp, 128 mAs), and extra-large (120 kVp, 160 mAs). High resolution reconstruction mode (512×512×0.83 mm) was used to reconstruct images for the analysis of low and high contrast resolution and noise power spectrum. The MTF was measured using the point spread function. The results show that the O-arm image is uniform but exhibits a noise pattern that cannot be removed by simply increasing the mAs. The high contrast resolution of the O-arm system was approximately 9 lp/cm, and the system has 10% MTF at 0.45 mm. The low-contrast resolution could not be determined due to the noise pattern. For surgery where the location of a structure matters more than a survey of all image details, the image quality of the O-arm is well accepted clinically.
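
    The 10% MTF figure above is derived from the measured point spread function: the MTF is the normalized magnitude of the Fourier transform of the PSF. A minimal 1-D sketch of that computation, using a synthetic Gaussian PSF and a hypothetical 0.1 mm sample pitch (not the O-arm's actual data):

```python
import numpy as np

def mtf_from_psf(psf, pitch_mm):
    """Compute a 1-D MTF as the normalized magnitude of the PSF's FFT.

    Returns (spatial frequencies in cycles/mm, MTF values), positive
    frequencies only.
    """
    mtf = np.abs(np.fft.rfft(psf))
    freqs = np.fft.rfftfreq(len(psf), d=pitch_mm)
    return freqs, mtf / mtf[0]          # normalize so MTF(0) = 1

# Synthetic Gaussian PSF sampled on a hypothetical 0.1 mm pitch.
pitch = 0.1                             # mm (assumed, for illustration)
x = np.arange(-32, 32) * pitch
sigma = 0.2                             # mm, assumed blur width
psf = np.exp(-x**2 / (2 * sigma**2))
freqs, mtf = mtf_from_psf(psf, pitch)
f10 = freqs[np.argmax(mtf < 0.1)]       # first frequency below 10% MTF
```

    For a Gaussian PSF the analytic 10% crossing is at sqrt(ln 10 / (2π²σ²)) ≈ 1.71 cycles/mm here, which the discrete estimate should reproduce to within one frequency bin.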

  12. Space Radar Image Isla Isabela in 3-D

    NASA Technical Reports Server (NTRS)

    1999-01-01

    This is a three-dimensional view of Isabela, one of the Galapagos Islands located off the western coast of Ecuador, South America. This view was constructed by overlaying a Spaceborne Imaging Radar-C/X-band Synthetic Aperture Radar (SIR-C/X-SAR) image on a digital elevation map produced by TOPSAR, a prototype airborne interferometric radar which produces simultaneous image and elevation data. The vertical scale in this image is exaggerated by a factor of 1.87. The SIR-C/X-SAR image was taken on the 40th orbit of space shuttle Endeavour. The image is centered at about 0.5 degree south latitude and 91 degrees west longitude and covers an area of 75 by 60 kilometers (47 by 37 miles). The radar incidence angle at the center of the image is about 20 degrees. The western Galapagos Islands, which lie about 1,200 kilometers (750 miles) west of Ecuador in the eastern Pacific, have six active volcanoes similar to the volcanoes found in Hawaii and reflect the volcanic processes that occur where the ocean floor is created. Since the time of Charles Darwin's visit to the area in 1835, there have been more than 60 recorded eruptions on these volcanoes. This SIR-C/X-SAR image of Alcedo and Sierra Negra volcanoes shows the rougher lava flows as bright features, while ash deposits and smooth pahoehoe lava flows appear dark. Vertical exaggeration of relief is a common tool scientists use to detect relationships between structure (for example, faults and fractures) and topography. Spaceborne Imaging Radar-C and X-Synthetic Aperture Radar (SIR-C/X-SAR) is part of NASA's Mission to Planet Earth. The radars illuminate Earth with microwaves allowing detailed observations at any time, regardless of weather or sunlight conditions. SIR-C/X-SAR uses three microwave wavelengths: L-band (24 cm), C-band (6 cm) and X-band (3 cm). The multi-frequency data will be used by the international scientific community to better understand the global environment and how it is changing.
The SIR-C/X-SAR data, complemented by aircraft and ground studies, will give scientists clearer insights into those environmental changes which are caused by nature and those changes which are induced by human activity. SIR-C was developed by NASA's Jet Propulsion Laboratory. X-SAR was developed by the Dornier and Alenia Spazio companies for the German space agency, Deutsche Agentur fuer Raumfahrtangelegenheiten (DARA), and the Italian space agency, Agenzia Spaziale Italiana (ASI).

  13. Radar Imaging of Spheres in 3D using MUSIC

    SciTech Connect

    Chambers, D H; Berryman, J G

    2003-01-21

    We have shown that multiple spheres can be imaged by linear and planar EM arrays using only one component of polarization. The imaging approach involves calculating the SVD of the scattering response matrix, selecting a subset of singular values that represents noise, and evaluating the MUSIC functional. The noise threshold applied to the spectrum of singular values for optimal performance is typically around 1%. The resulting signal subspace includes more than one singular value per sphere. The presence of reflections from the ground improves height localization, even for a linear array parallel to the ground. However, the interference between direct and reflected energy modulates the field, creating periodic nulls that can obscure targets in typical images. These nulls are largely eliminated by normalizing the MUSIC functional with the broadside beam pattern of the array. The resulting images show excellent localization for one and two spheres. The performance for the three-sphere configurations is complicated by shadowing effects and the greater range of the third sphere in case 2. Two of the three spheres are easily located by MUSIC, but the third is difficult to distinguish from other local maxima of the complex imaging functional. Improvement is seen when the linear array is replaced with a planar array, which increases the effective aperture height. Further analysis of the singular values and their relationship to modes of scattering from the spheres, as well as better ways to exploit polarization, should improve performance. Work along these lines is currently being pursued by the authors.
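
    The imaging recipe described (SVD of the response matrix, a ~1% noise threshold on the singular values, evaluation of the MUSIC functional) can be sketched as below; the response matrix, steering vectors and threshold here are illustrative, not the authors' actual array model:

```python
import numpy as np

def music_image(K, steering, noise_threshold=0.01):
    """Evaluate the MUSIC functional from a response matrix K.

    K: (M, M) array response matrix; steering: (P, M) candidate steering
    vectors, one per image point. Singular values below noise_threshold
    times the largest define the noise subspace; targets appear as peaks
    of 1 / ||projection onto the noise subspace||.
    """
    U, s, _ = np.linalg.svd(K)
    noise = U[:, s < noise_threshold * s[0]]   # noise-subspace basis
    g = steering / np.linalg.norm(steering, axis=1, keepdims=True)
    proj = np.abs(g.conj() @ noise)            # projections, (P, n_noise)
    return 1.0 / np.sqrt((proj**2).sum(axis=1) + 1e-12)

# One point target: rank-1 response K = a a^H for a steering vector a.
rng = np.random.default_rng(1)
M = 16
a = np.exp(1j * rng.uniform(0, 2 * np.pi, M))
K = np.outer(a, a.conj())
candidates = np.exp(1j * rng.uniform(0, 2 * np.pi, (50, M)))
candidates[7] = a                              # place the target steering vector at index 7
spectrum = music_image(K, candidates)
```

    The functional peaks sharply where a candidate steering vector is (nearly) orthogonal to the noise subspace, i.e. at the true target location.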

  14. [The value of autofluorescence imaging in diagnosis of retinal diseases].

    PubMed

    Avetisov, S É; Kiseleva, T N; Vorob'eva, M V; Budzinskaia, M V; Vorob'eva-Pereverzina, O K; Avetisov, K S; Sheremet, N L; Eliseeva, É G

    2011-01-01

    Results of fundus autofluorescence imaging using the confocal scanning laser ophthalmoscope HRA II ("Heidelberg Engineering", Heidelberg, Germany) are presented. A total of 106 patients with various retinal and optic nerve conditions were examined. The following conditions were diagnosed using autofluorescence imaging: early-stage age-related macular degeneration, hard and soft macular drusen, signs of retinitis pigmentosa, senile macular hole, central serous chorioretinopathy and optic disc drusen. PMID:22165102

  15. Realization of an aerial 3D image that occludes the background scenery.

    PubMed

    Kakeya, Hideki; Ishizuka, Shuta; Sato, Yuya

    2014-10-01

    In this paper we describe an aerial 3D image that occludes far background scenery, based on coarse integral volumetric imaging (CIVI) technology. Many volumetric display devices can present floating 3D images, but most have not reproduced visual occlusion. CIVI is a kind of multilayered integral imaging that realizes an aerial volumetric image with visual occlusion by combining multiview and volumetric display technologies. Conventional CIVI, however, cannot show a deep space, because the number of layered panels is limited by the low transmittance of each panel. To overcome this problem, we propose a novel optical design to attain an aerial 3D image that occludes far background scenery. In the proposed system, a translucent display panel with a 120 Hz refresh rate is located between the CIVI system and the aerial 3D image. The system alternates between an aerial image mode and a background image mode. In the aerial image mode, the elemental images are shown on the CIVI display and the inserted translucent display is uniformly translucent. In the background image mode, the black shadows of the elemental images on a white background are shown on the CIVI display and the background scenery is displayed on the inserted translucent panel. By alternating these two modes at 120 Hz, an aerial 3D image that visually occludes the far background scenery is perceived by the viewer. PMID:25322024

  16. Multithreaded real-time 3D image processing software architecture and implementation

    NASA Astrophysics Data System (ADS)

    Ramachandra, Vikas; Atanassov, Kalin; Aleksic, Milivoje; Goma, Sergio R.

    2011-03-01

    Recently, 3D displays and videos have generated a lot of interest in the consumer electronics industry. To make 3D capture and playback popular and practical, a user-friendly playback interface is desirable. Toward this end, we built a real-time software 3D video player. The 3D video player displays user-captured 3D videos, provides various 3D-specific image processing functions, and ensures a pleasant viewing experience. Moreover, the player enables user interactivity by providing digital zoom and pan functionalities. This real-time 3D player was implemented on the GPU using CUDA and OpenGL. Stereo images are first read by the player from a fast drive and rectified. Further processing of the images determines the optimal convergence point in the 3D scene to reduce eye strain. The rationale for this convergence point selection takes into account scene depth and display geometry. The first step in this processing chain is identifying keypoints by detecting vertical edges within the left image. Regions surrounding reliable keypoints are then located in the right image through block matching. The differences in position between corresponding regions in the left and right images are then used to calculate disparity. The extrema of the disparity histogram give the scene disparity range. The left and right images are shifted based upon the calculated range, in order to place the desired region of the 3D scene at convergence. All the above computations are performed on one CPU thread which calls CUDA functions. Image upsampling and shifting is performed in response to user zoom and pan. The player also contains a CPU display thread, which uses OpenGL rendering (quad buffers); this thread also gathers user input for digital zoom and pan and sends it to the processing thread.
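
    The disparity-estimation step described (block matching around keypoints, then taking the extrema of the disparity histogram) can be sketched as follows; the SAD cost, block size and search range are illustrative choices, not the player's actual parameters:

```python
import numpy as np

def disparity_range(left, right, keypoints, block=8, max_disp=32):
    """Block-match a set of keypoints and return the extrema of the
    resulting disparity histogram (the scene disparity range).

    left, right: 2-D grayscale arrays; keypoints: (row, col) positions
    in the left image.
    """
    h = block // 2
    disparities = []
    for r, c in keypoints:
        patch = left[r - h:r + h, c - h:c + h]
        best_cost, best_d = np.inf, 0
        for d in range(max_disp):
            if c - d - h < 0:           # candidate window leaves the image
                break
            cand = right[r - h:r + h, c - d - h:c - d + h]
            cost = np.abs(patch - cand).sum()   # sum of absolute differences
            if cost < best_cost:
                best_cost, best_d = cost, d
        disparities.append(best_d)
    hist, _ = np.histogram(disparities, bins=range(max_disp + 2))
    nonzero = np.nonzero(hist)[0]
    return int(nonzero[0]), int(nonzero[-1])

# Synthetic pair: the right view is the left view shifted by 5 pixels.
rng = np.random.default_rng(2)
left = rng.uniform(0.0, 255.0, (64, 64))
right = np.roll(left, -5, axis=1)
lo, hi = disparity_range(left, right, [(20, 30), (32, 40), (40, 25)])
```

    The returned range would then drive the horizontal image shift that places the desired depth at convergence.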

  17. 3D Modeling from Multi-views Images for Cultural Heritage in Wat-Pho, Thailand

    NASA Astrophysics Data System (ADS)

    Soontranon, N.; Srestasathiern, P.; Lawawirojwong, S.

    2015-08-01

    In Thailand, there are several types of (tangible) cultural heritage. This work focuses on 3D modeling of heritage objects from multi-view images. The images are acquired using a DSLR camera which costs around 1,500 (camera and lens). Compared with a 3D laser scanner, the camera is cheaper and lighter, and is therefore available to public users and convenient for accessing narrow areas. The acquired images consist of various sculptures and architectures in Wat-Pho, a Buddhist temple located behind the Grand Palace (Bangkok, Thailand). Wat-Pho is known as the temple of the reclining Buddha and the birthplace of traditional Thai massage. To compute the 3D models, the pipeline is separated into the following steps: data acquisition, image matching, image calibration and orientation, dense matching, and point cloud processing. For this initial work, small heritage objects less than 3 meters in height are considered for the experimental results. A set of multi-view images of an object of interest is used as input data for 3D modeling. In our experiments, 3D models are obtained from the MICMAC (open source) software developed by IGN, France. The output 3D models are represented using standard formats for 3D point clouds and triangulated surfaces such as .ply, .off, .obj, etc. To obtain efficient 3D models, post-processing techniques such as noise reduction, surface simplification and reconstruction are required for the final results. The reconstructed 3D models can be provided for public access via websites, DVDs and printed materials. The highly accurate 3D models can also serve as reference data for heritage objects that must be restored after deterioration over their lifetime, natural disasters, etc.

  18. 3-D capacitance density imaging of fluidized bed

    DOEpatents

    Fasching, George E.

    1990-01-01

    A three-dimensional capacitance density imaging of a gasified bed or the like in a containment vessel is achieved using a plurality of electrodes provided circumferentially about the bed in levels and along the bed in channels. The electrodes are individually and selectively excited electrically at each level to produce a plurality of current flux field patterns generated in the bed at each level. The current flux field patterns are suitably sensed and a density pattern of the bed at each level determined. By combining the determined density patterns at each level, a three-dimensional density image of the bed is achieved.

  19. An Image-Based Technique for 3d Building Reconstruction Using Multi-View Uav Images

    NASA Astrophysics Data System (ADS)

    Alidoost, F.; Arefi, H.

    2015-12-01

    Nowadays, with the development of urban areas, the automatic reconstruction of buildings, as important objects among complex city structures, has become a challenging topic in computer vision and photogrammetric research. In this paper, the capability of multi-view Unmanned Aerial Vehicle (UAV) images is examined to provide a 3D model of complex building façades using an efficient image-based modelling workflow. The main steps of this work include pose estimation, point cloud generation, and 3D modelling. After improving the initial values of the interior and exterior parameters in the first step, an efficient image matching technique such as Semi-Global Matching (SGM) is applied to the UAV images and a dense point cloud is generated. Then, a mesh model of the points is calculated using Delaunay 2.5D triangulation and refined to obtain an accurate model of the building. Finally, a texture is assigned to the mesh in order to create a realistic 3D model. The resulting model provides sufficient detail of the building based on visual assessment.

  20. Visualization of 3D images from multiple texel images created from fused LADAR/digital imagery

    NASA Astrophysics Data System (ADS)

    Killpack, Cody C.; Budge, Scott E.

    2015-05-01

    The ability to create 3D models, using registered texel images (fused ladar and digital imagery), is an important topic in remote sensing. These models are automatically generated by matching multiple texel images into a single common reference frame. However, rendering a sequence of independently registered texel images often provides challenges. Although accurately registered, the model textures are often incorrectly overlapped and interwoven when using standard rendering techniques. Consequently, corrections must be done after all the primitives have been rendered, by determining the best texture for any viewable fragment in the model. Determining the best texture is difficult, as each texel image remains independent after registration. The depth data is not merged to form a single 3D mesh, thus eliminating the possibility of generating a fused texture atlas. It is therefore necessary to determine which textures are overlapping and how to best combine them dynamically during the render process. The best texture for a particular pixel can be defined using 3D geometric criteria, in conjunction with a real-time, view-dependent ranking algorithm. As a result, overlapping texture fragments can now be hidden, exposed, or blended according to their computed measure of reliability.
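
    The abstract does not spell out its 3-D geometric ranking criteria, but a common view-dependent choice is to score each overlapping texture by how closely its capture direction aligns with the current view direction. A hypothetical sketch of that idea (not the authors' actual algorithm):

```python
import numpy as np

def rank_textures(view_dir, capture_dirs):
    """Rank overlapping textures for a fragment by the cosine similarity
    between each texel image's capture direction and the current view
    direction; indices are returned best-first.
    """
    v = view_dir / np.linalg.norm(view_dir)
    c = capture_dirs / np.linalg.norm(capture_dirs, axis=1, keepdims=True)
    return np.argsort(-(c @ v))        # higher cosine = more reliable

# Viewer looking down +z; three textures captured from different directions.
order = rank_textures(np.array([0.0, 0.0, 1.0]),
                      np.array([[0.0, 0.0, 1.0],
                                [1.0, 0.0, 0.0],
                                [0.0, 0.6, 0.8]]))
```

    Such a per-fragment score could then decide which overlapping fragments are hidden, exposed, or blended at render time.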

  1. A Featureless Approach to 3D Polyhedral Building Modeling from Aerial Images

    PubMed Central

    Hammoudi, Karim; Dornaika, Fadi

    2011-01-01

    This paper presents a model-based approach for reconstructing 3D polyhedral building models from aerial images. The proposed approach exploits some geometric and photometric properties resulting from the perspective projection of planar structures. Data are provided by calibrated aerial images. The novelty of the approach lies in its featurelessness and in its use of direct optimization based on image rawbrightness. The proposed framework avoids feature extraction and matching. The 3D polyhedral model is directly estimated by optimizing an objective function that combines an image-based dissimilarity measure and a gradient score over several aerial images. The optimization process is carried out by the Differential Evolution algorithm. The proposed approach is intended to provide more accurate 3D reconstruction than feature-based approaches. Fast 3D model rectification and updating can take advantage of the proposed method. Several results and evaluations of performance from real and synthetic images show the feasibility and robustness of the proposed approach. PMID:22346575

  3. 3D spectral imaging system for anterior chamber metrology

    NASA Astrophysics Data System (ADS)

    Anderson, Trevor; Segref, Armin; Frisken, Grant; Frisken, Steven

    2015-03-01

    Accurate metrology of the anterior chamber of the eye is useful for a number of diagnostic and clinical applications. In particular, accurate corneal topography and corneal thickness data is desirable for fitting contact lenses, screening for diseases and monitoring corneal changes. Anterior OCT systems can be used to measure anterior chamber surfaces; however, accurate curvature measurements from single point scanning systems are known to be very sensitive to patient movement. To overcome this problem we have developed a parallel 3D spectral metrology system that captures simultaneous A-scans on a 2D lateral grid. This approach enables estimates of the elevation and curvature of anterior and posterior corneal surfaces that are robust to sample movement. Furthermore, multiple simultaneous surface measurements greatly improve the ability to register consecutive frames and enable aggregate measurements over a finer lateral grid. A key element of our approach has been to exploit standard low cost optical components, including lenslet arrays and a 2D sensor, to provide a path towards low cost implementation. We demonstrate first prototypes based on a 6 Mpixel sensor using a 250 μm pitch lenslet array with 300 sample beams to achieve an RMS elevation accuracy of 1 μm with 95 dB sensitivity and a 7.0 mm range. Initial tests on porcine eyes, model eyes and calibration spheres demonstrate the validity of the concept. With the next iteration of designs we expect to achieve over 1000 simultaneous A-scans at more than 75 frames per second.

  4. Space Radar Image of Kilauea, Hawaii in 3-D

    NASA Technical Reports Server (NTRS)

    1999-01-01

    This is a three-dimensional perspective view of a false-color image of the eastern part of the Big Island of Hawaii. It was produced using all three radar frequencies -- X-band, C-band and L-band -- from the Spaceborne Imaging Radar-C/X-Band Synthetic Aperture Radar (SIR-C/X-SAR) flying on the space shuttle Endeavour, overlaid on a U.S. Geological Survey digital elevation map. Visible in the center of the image in blue are the summit crater (Kilauea Caldera) which contains the smaller Halemaumau Crater, and the line of collapse craters below them that form the Chain of Craters Road. The image was acquired on April 12, 1994 during orbit 52 of the space shuttle. The area shown is approximately 34 by 57 kilometers (21 by 35 miles) with the top of the image pointing toward northwest. The image is centered at about 155.25 degrees west longitude and 19.5 degrees north latitude. The false colors are created by displaying three radar channels of different frequency. Red areas correspond to high backscatter at L-HV polarization, while green areas exhibit high backscatter at C-HV polarization. Finally, blue shows high return at X-VV polarization. Using this color scheme, the rain forest appears bright on the image, while the green areas correspond to lower vegetation. The lava flows have different colors depending on their types and are easily recognizable due to their shapes. The flows at the top of the image originated from the Mauna Loa volcano. Kilauea volcano has been almost continuously active for more than 11 years. Field teams that were on the ground specifically to support these radar observations report that there was vigorous surface activity about 400 meters (one-quarter mile) inland from the coast. A moving lava flow about 200 meters (650 feet) in length was observed at the time of the shuttle overflight, raising the possibility that subsequent images taken during this mission will show changes in the landscape. 
Currently, most of the lava that is erupted travels the 8 kilometers (5 miles) from the Pu'u O'o crater (the active vent) just outside this image to the coast through a series of lava tubes, but in the past there have been many large lava flows that have traveled this distance, destroying houses and parts of the Hawaii Volcanoes National Park. This SIR-C/X-SAR image shows two types of lava flows that are common to Hawaiian volcanoes. Pahoehoe lava flows are relatively smooth, and appear very dark blue because much of the radar energy is reflected away from the radar. In contrast, other lava flows are relatively rough and bounce much of the radar energy back to the radar, making that part of the image bright blue. This radar image is valuable because it allows scientists to study an evolving lava flow field from the Pu'u O'o vent. Much of the area on the northeast side (right) of the volcano is covered with tropical rain forest, and because trees reflect a lot of the radar energy, the forest appears bright in this radar scene. The linear feature running from Kilauea Crater to the right of the image is Highway 11 leading to the city of Hilo, which is located just beyond the right edge of this image. Spaceborne Imaging Radar-C and X-Synthetic Aperture Radar (SIR-C/X-SAR) is part of NASA's Mission to Planet Earth. The radars illuminate Earth with microwaves allowing detailed observations at any time, regardless of weather or sunlight conditions. SIR-C/X-SAR uses three microwave wavelengths: L-band (24 cm), C-band (6 cm) and X-band (3 cm). The multi-frequency data will be used by the international scientific community to better understand the global environment and how it is changing. The SIR-C/X-SAR data, complemented by aircraft and ground studies, will give scientists clearer insights into those environmental changes which are caused by nature and those changes which are induced by human activity. SIR-C was developed by NASA's Jet Propulsion Laboratory. 
X-SAR was developed by the Dornier and Alenia Spazio companies for the German space agency, Deutsche Agentur fuer Raumfahrtangelegenheiten (DARA), and the Italian space agency, Agenzia Spaziale Italiana (ASI), with the Deutsche Forschungsanstalt fuer Luft- und Raumfahrt e.V. (DLR), the major partner in science, operations and data processing of X-SAR.

  5. Registration and 3D visualization of large microscopy images

    NASA Astrophysics Data System (ADS)

    Mosaliganti, Kishore; Pan, Tony; Sharp, Richard; Ridgway, Randall; Iyengar, Srivathsan; Gulacy, Alexandra; Wenzel, Pamela; de Bruin, Alain; Machiraju, Raghu; Huang, Kun; Leone, Gustavo; Saltz, Joel

    2006-03-01

    Inactivation of the retinoblastoma gene in mouse embryos causes tissue infiltrations into critical sections of the placenta, which has been shown to affect fetal survivability. Our collaborators in cancer genetics are extremely interested in examining the three dimensional nature of these infiltrations given a stack of two dimensional light microscopy images. Three sets of wildtype and mutant placentas were sectioned serially and digitized using a commercial light microscopy scanner. Each individual placenta dataset consisted of approximately 1000 images totaling 700 GB in size, which were registered into a volumetric dataset using the National Library of Medicine's (NIH/NLM) Insight Segmentation and Registration Toolkit (ITK). This paper describes our method for image registration to aid in volume visualization of tissue level intermixing for both wildtype and Rb - specimens. The registration process faces many challenges arising from the large image sizes, damage during sectioning, staining gradients both within and across sections, and background noise. These issues limit the direct application of standard registration techniques due to frequent convergence to local solutions. In this work, we develop a mixture of automated and semi-automated enhancements with ground-truth validation for the mutual information-based registration algorithm. Our final volume renderings clearly show tissue intermixing differences between wildtype and Rb - specimens which are not obvious prior to registration.
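
    The mutual-information similarity measure driving such a registration is computed from the joint intensity histogram of the two images. A minimal sketch (the bin count here is an arbitrary illustrative choice, not the paper's setting):

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Mutual information of two images from their joint intensity
    histogram; higher values indicate better alignment.
    """
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)      # marginal of image a
    py = pxy.sum(axis=0, keepdims=True)      # marginal of image b
    nz = pxy > 0                              # skip log(0) terms
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

# Perfectly aligned images score higher than a scrambled pair.
rng = np.random.default_rng(3)
a = rng.uniform(0.0, 1.0, (64, 64))
b = rng.permutation(a.ravel()).reshape(64, 64)   # scrambled copy of a
```

    A registration optimizer would maximize this quantity over the transform parameters; the convergence-to-local-solutions problem mentioned above arises because this surface is generally non-convex.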

  6. Determining 3-D motion and structure from image sequences

    NASA Technical Reports Server (NTRS)

    Huang, T. S.

    1982-01-01

    A method of determining three-dimensional motion and structure from two image frames is presented. The method requires eight point correspondences between the two frames, from which motion and structure parameters are determined by solving a set of eight linear equations and a singular value decomposition of a 3x3 matrix. It is shown that the solution thus obtained is unique.
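
    The eight-point procedure described can be sketched as follows: stack one linear equation per correspondence, take the null vector via SVD, and enforce the rank-2 constraint on the resulting 3×3 matrix. This is a standard formulation (here using a least-squares null vector over 20 noise-free points rather than exactly eight equations):

```python
import numpy as np

def eight_point(x1, x2):
    """Estimate the essential matrix E from >= 8 correspondences.

    x1, x2: (N, 3) homogeneous normalized image points in frames 1 and 2.
    Each pair gives one linear equation in the 9 entries of E
    (x2^T E x1 = 0); E is the null vector of the stacked system,
    then projected onto rank 2.
    """
    A = np.stack([np.outer(x2[i], x1[i]).ravel() for i in range(len(x1))])
    _, _, Vt = np.linalg.svd(A)
    E = Vt[-1].reshape(3, 3)                   # smallest right singular vector
    U, _, Vt2 = np.linalg.svd(E)
    return U @ np.diag([1.0, 1.0, 0.0]) @ Vt2  # enforce the rank-2 constraint

# Synthetic check: known rotation/translation, noise-free projections.
rng = np.random.default_rng(0)
th = 0.1
R = np.array([[np.cos(th), -np.sin(th), 0.0],
              [np.sin(th),  np.cos(th), 0.0],
              [0.0,         0.0,        1.0]])
t = np.array([1.0, 0.2, 0.1])
X = rng.uniform(-1.0, 1.0, (20, 3)) + np.array([0.0, 0.0, 5.0])
x1 = X / X[:, 2:3]                             # project into frame 1
x2 = (X @ R.T + t)
x2 = x2 / x2[:, 2:3]                           # project into frame 2
E = eight_point(x1, x2)
residuals = np.abs(np.einsum('ni,ij,nj->n', x2, E, x1))
```

    With exact correspondences the epipolar residuals x2^T E x1 vanish (up to numerical precision), and motion/structure can then be factored out of E.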

  7. 3D lidar imaging for detecting and understanding plant responses and canopy structure.

    PubMed

    Omasa, Kenji; Hosoi, Fumiki; Konishi, Atsumi

    2007-01-01

    Understanding and diagnosing plant responses to stress will benefit greatly from three-dimensional (3D) measurement and analysis of plant properties because plant responses are strongly related to their 3D structures. Light detection and ranging (lidar) has recently emerged as a powerful tool for direct 3D measurement of plant structure. Here the use of 3D lidar imaging to estimate plant properties such as canopy height, canopy structure, carbon stock, and species is demonstrated, and plant growth and shape responses are assessed by reviewing the development of lidar systems and their applications from the leaf level to canopy remote sensing. In addition, the recent creation of accurate 3D lidar images combined with natural colour, chlorophyll fluorescence, photochemical reflectance index, and leaf temperature images is demonstrated, thereby providing information on responses of pigments, photosynthesis, transpiration, stomatal opening, and shape to environmental stresses; these data can be integrated with 3D images of the plants using computer graphics techniques. Future lidar applications that provide more accurate dynamic estimation of various plant properties should improve our understanding of plant responses to stress and of interactions between plants and their environment. Moreover, combining 3D lidar with other passive and active imaging techniques will potentially improve the accuracy of airborne and satellite remote sensing, and make it possible to analyse 3D information on ecophysiological responses and levels of various substances in agricultural and ecological applications and in observations of the global biosphere. PMID:17030540

  8. Synthesis of image sequences for Korean sign language using 3D shape model

    NASA Astrophysics Data System (ADS)

    Hong, Mun-Ho; Choi, Chang-Seok; Kim, Chang-Seok; Jeon, Joon-Hyeon

    1995-05-01

    This paper proposes a method for conveying information and enabling communication for deaf people, who communicate by means of sign language that most hearing people are unfamiliar with. The proposed method converts text data into the corresponding image sequences for Korean Sign Language (KSL). A general 3D shape model of the upper body, constructed with consideration of the anatomical structure of the human body, is used to generate the 3D motions of KSL. To obtain a personal 3D shape model, the general model is adjusted to personal base images. Image synthesis for KSL consists of deforming the personal 3D shape model and texture-mapping the personal images onto the deformed model. The 3D motions for KSL comprise facial expressions and the 3D movements of the head, trunk, arms and hands, and are parameterized for easy deformation of the model. These motion parameters of the upper body are extracted from a skilled signer's motion for each KSL sign and stored in a database. Editing the parameters according to the input text data generates the image sequences of 3D motions.

  9. A new multi-planar reconstruction method using voxel based beamforming for 3D ultrasound imaging

    NASA Astrophysics Data System (ADS)

    Ju, Hyunseok; Kang, Jinbum; Song, Ilseob; Yoo, Yangmo

    2015-03-01

    For multi-planar reconstruction in 3D ultrasound imaging, direct and separable 3D scan conversion (SC) have been used to transform ultrasound data acquired in the 3D polar coordinate system to the 3D Cartesian coordinate system. These 3D SC methods can visualize an arbitrary plane of a 3D ultrasound volume. However, they suffer from blurring and blocking artifacts due to resampling during SC. In this paper, a new multi-planar reconstruction method based on voxel based beamforming (VBF) is proposed for reducing these artifacts. In VBF, unlike direct and separable 3D SC, each voxel on an arbitrary imaging plane is directly reconstructed by applying the focusing delay to the radio-frequency (RF) data, so that blurring and blocking artifacts are avoided. In the phantom study, the proposed VBF method showed higher contrast and less blurring than the separable and direct 3D SC methods, consistent with the measured information entropy contrast (IEC) values (98.9 vs. 42.0 vs. 47.9, respectively). In addition, the 3D SC and VBF methods were implemented on a high-end GPU using CUDA programming. The execution times for the three methods are 1656.1 ms, 1633.3 ms and 1631.4 ms, which are I/O bound. These results indicate that the proposed VBF method can improve the image quality of 3D ultrasound B-mode imaging by removing blurring and blocking artifacts associated with 3D scan conversion, and show the feasibility of pseudo-real-time operation.
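
    The core idea of VBF (apply the exact focusing delay per voxel directly to the RF channel data, rather than resampling beamformed scan lines) can be sketched for a single voxel as below; the transmit model, sound speed and sampling rate are simplifying assumptions, not the paper's system parameters:

```python
import numpy as np

def vbf_voxel(rf, elem_pos, voxel, c=1540.0, fs=40e6):
    """Delay-and-sum focus a single voxel directly from per-channel RF data.

    rf: (n_elem, n_samples) receive data; elem_pos: (n_elem, 3) element
    positions in meters; voxel: (3,) voxel position. Transmit is assumed
    to originate at t = 0 from the array origin (a simplification).
    """
    tx = np.linalg.norm(voxel)                      # origin -> voxel
    value = 0.0
    for e in range(rf.shape[0]):
        rx = np.linalg.norm(voxel - elem_pos[e])    # voxel -> element
        idx = int(round((tx + rx) / c * fs))        # round-trip delay sample
        if 0 <= idx < rf.shape[1]:
            value += rf[e, idx]                     # coherent sum
    return value

# Synthetic point scatterer at 20 mm depth on an 8-element aperture.
n_elem, n_samp = 8, 2000
elems = np.zeros((n_elem, 3))
elems[:, 0] = np.linspace(-3.5e-3, 3.5e-3, n_elem)
scatterer = np.array([0.0, 0.0, 20e-3])
rf = np.zeros((n_elem, n_samp))
for e in range(n_elem):
    tof = (np.linalg.norm(scatterer)
           + np.linalg.norm(scatterer - elems[e])) / 1540.0
    rf[e, int(round(tof * 40e6))] = 1.0             # unit echo per channel
```

    Evaluating every voxel of the requested plane this way reconstructs the plane without the intermediate resampling that causes the blurring and blocking artifacts.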

  10. A 3-D fluorescence imaging system incorporating structured illumination technology

    NASA Astrophysics Data System (ADS)

    Antos, L.; Emord, P.; Luquette, B.; McGee, B.; Nguyen, D.; Phipps, A.; Phillips, D.; Helguera, M.

    2010-02-01

    A currently available 2-D high-resolution optical molecular imaging system was modified by the addition of a structured illumination source, Optigrid™, to investigate the feasibility of providing depth resolution along the optical axis. The modification involved the insertion of the Optigrid™ and a lens in the path between the light source and the image plane, as well as control and signal processing software. Projection of the Optigrid™ onto the imaging surface at an angle was resolved by applying the Scheimpflug principle. The illumination system implements modulation of the light source and provides a framework for capturing depth-resolved images. The system is capable of in-focus projection of the Optigrid™ at different spatial frequencies and supports the use of different lenses. A calibration process was developed to achieve consistent phase shifts of the Optigrid™, and post-processing extracted depth information by depth modulation analysis, using a phantom block with fluorescent sheets at different depths. An important aspect of this effort was that it was carried out by a multidisciplinary team of engineering and science students as part of a capstone senior design program; the disciplines represented are mechanical engineering, electrical engineering, and imaging science. The project was sponsored by a financial grant from New York State, with equipment support from two industrial concerns. The students were provided with a basic imaging concept and charged with developing, implementing, testing, and validating a feasible proof-of-concept prototype system that was returned to the originator of the concept for further evaluation and characterization.
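    The depth-modulation analysis used to extract the in-focus (sectioned) component from grid-illuminated images is commonly implemented with the three-phase root-mean-square formula; a minimal sketch, with synthetic phase-shifted frames as assumptions:

```python
import numpy as np

# Standard three-phase structured-illumination demodulation: three images are
# captured with the grid shifted by 0, 2*pi/3, and 4*pi/3, and the in-focus
# component is the RMS of their pairwise differences. The synthetic scene and
# 0.5 modulation depth below are illustrative assumptions.
def sectioned_image(i1, i2, i3):
    """In-focus component from three phase-shifted structured-illumination images."""
    return np.sqrt((i1 - i2) ** 2 + (i2 - i3) ** 2 + (i3 - i1) ** 2)

# Synthetic test: uniform fluorescence modulated by a shifted sinusoidal grid.
x = np.linspace(0, 4 * np.pi, 256)
scene = np.ones_like(x)
frames = [scene * (1 + 0.5 * np.sin(x + p)) for p in (0, 2 * np.pi / 3, 4 * np.pi / 3)]
sectioned = sectioned_image(*frames)   # grid removed, modulation depth recovered
```

    For the in-focus plane the grid pattern cancels exactly, leaving a constant proportional to the modulation depth; out-of-focus contributions carry little grid modulation and are suppressed.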

  11. A new approach towards image based virtual 3D city modeling by using close range photogrammetry

    NASA Astrophysics Data System (ADS)

    Singh, S. P.; Jain, K.; Mandla, V. R.

    2014-05-01

    A 3D city model is a digital representation of the Earth's surface and related objects, such as buildings, trees, vegetation, and man-made features, belonging to an urban area. The demand for 3D city modeling is growing daily for various engineering and non-engineering applications. Three main image-based approaches are generally used to generate virtual 3D city models: sketch-based modeling, procedural-grammar-based modeling, and close-range-photogrammetry-based modeling. The literature shows that, to date, no complete solution is available for creating a full 3D city model from images alone, and these image-based methods have limitations. This paper presents a new approach to image-based virtual 3D city modeling using close range photogrammetry. The approach is divided into three stages: data acquisition, 3D data processing, and data combination. In the data acquisition stage, a multi-camera setup was developed and used for video recording of an area; image frames were extracted from the video data, and the minimum required, suitable frames were selected for 3D processing. In the second stage, a 3D model of the area was created based on close range photogrammetric principles and computer vision techniques. In the third stage, this 3D model was exported for adding and merging with other pieces of the larger area, and scaling and alignment of the 3D model were performed. After texturing and rendering, a final photo-realistic textured 3D model was created and converted into a walk-through model or movie form. Most of the processing steps are automatic, so the method is cost-effective and less laborious, and the accuracy of the model is good. The study area for this work is the campus of the Department of Civil Engineering, Indian Institute of Technology, Roorkee, which acts as a prototype for a city. Aerial photography is restricted in many countries, and high-resolution satellite images are costly; the proposed method, in contrast, is based only on simple video recording of the area and is therefore well suited to 3D city modeling. A photo-realistic, scalable, geo-referenced virtual 3D city model is useful for many kinds of applications, such as navigation planning, tourism, disaster management, transportation, municipal, urban, and environmental management, and the real-estate industry. This study thus provides a good roadmap for the geomatics community to create photo-realistic virtual 3D city models using close range photogrammetry.

  12. Space Radar Image of Mammoth, California in 3-D

    NASA Technical Reports Server (NTRS)

    1999-01-01

    This is a three-dimensional perspective of Mammoth Mountain, California. This view was constructed by overlaying a Spaceborne Imaging Radar-C (SIR-C) radar image on a U.S. Geological Survey digital elevation map. Vertical exaggeration is 1.87 times. The image is centered at 37.6 degrees north, 119.0 degrees west. It was acquired from the Spaceborne Imaging Radar C/X-Band Synthetic Aperture Radar (SIR-C/X-SAR) onboard space shuttle Endeavour on its 67th orbit on April 13, 1994. In this color representation, red is C-band HV-polarization, green is C-band VV-polarization and blue is the ratio of C-band VV to C-band HV. Blue areas are smooth, and yellow areas are rock outcrops with varying amounts of snow and vegetation. Crowley Lake is in the foreground, and Highway 395 crosses in the middle of the image. Mammoth Mountain is shown in the upper right. Spaceborne Imaging Radar-C and X-Synthetic Aperture Radar (SIR-C/X-SAR) is part of NASA's Mission to Planet Earth. The radars illuminate Earth with microwaves allowing detailed observations at any time, regardless of weather or sunlight conditions. SIR-C/X-SAR uses three microwave wavelengths: L-band (24 cm), C-band (6 cm) and X-band (3 cm). The multi-frequency data will be used by the international scientific community to better understand the global environment and how it is changing. The SIR-C/X-SAR data, complemented by aircraft and ground studies, will give scientists clearer insights into those environmental changes which are caused by nature and those changes which are induced by human activity. SIR-C was developed by NASA's Jet Propulsion Laboratory. X-SAR was developed by the Dornier and Alenia Spazio companies for the German space agency, Deutsche Agentur fuer Raumfahrtangelegenheiten (DARA), and the Italian space agency, Agenzia Spaziale Italiana (ASI).

  13. Space Radar Image of Long Valley, California in 3-D

    NASA Technical Reports Server (NTRS)

    1994-01-01

    This three-dimensional perspective view of Long Valley, California was created from data taken by the Spaceborne Imaging Radar-C/X-band Synthetic Aperture Radar on board the space shuttle Endeavour. This image was constructed by overlaying a color composite SIR-C radar image on a digital elevation map. The digital elevation map was produced using radar interferometry, a process by which radar data are acquired on different passes of the space shuttle. The two data passes are compared to obtain elevation information. The interferometry data were acquired on April 13, 1994 and on October 3, 1994, during the first and second flights of the SIR-C/X-SAR instrument. The color composite radar image was taken in October and was produced by assigning red to the C-band (horizontally transmitted and vertically received) polarization; green to the C-band (vertically transmitted and received) polarization; and blue to the ratio of the two data sets. Blue areas in the image are smooth and yellow areas are rock outcrops with varying amounts of snow and vegetation. The view is looking north along the northeastern edge of the Long Valley caldera, a volcanic collapse feature created 750,000 years ago and the site of continued subsurface activity. Crowley Lake is the large dark feature in the foreground. Spaceborne Imaging Radar-C and X-band Synthetic Aperture Radar (SIR-C/X-SAR) is part of NASA's Mission to Planet Earth. The radars illuminate Earth with microwaves, allowing detailed observations at any time, regardless of weather or sunlight conditions. SIR-C/X-SAR uses three microwave wavelengths: L-band (24 cm), C-band (6 cm) and X-band (3 cm). The multi-frequency data will be used by the international scientific community to better understand the global environment and how it is changing. The SIR-C/X-SAR data, complemented by aircraft and ground studies, will give scientists clearer insights into those environmental changes which are caused by nature and those changes which are induced by human activity. SIR-C was developed by NASA's Jet Propulsion Laboratory. X-SAR was developed by the Dornier and Alenia Spazio companies for the German space agency, Deutsche Agentur fuer Raumfahrtangelegenheiten (DARA), and the Italian space agency, Agenzia Spaziale Italiana (ASI), with the Deutsche Forschungsanstalt fuer Luft und Raumfahrt e.V. (DLR), the major partner in science, operations and data processing of X-SAR.

  14. Space Radar Image of Long Valley, California - 3D view

    NASA Technical Reports Server (NTRS)

    1994-01-01

    This is a three-dimensional perspective view of Long Valley, California, as seen by the Spaceborne Imaging Radar-C/X-band Synthetic Aperture Radar on board the space shuttle Endeavour. This view was constructed by overlaying a color composite SIR-C image on a digital elevation map. The digital elevation map was produced using radar interferometry, a process by which radar data are acquired on different passes of the space shuttle and then compared to obtain elevation information. The data were acquired on April 13, 1994 and on October 3, 1994, during the first and second flights of the SIR-C/X-SAR radar instrument. The color composite radar image was produced by assigning red to the C-band (horizontally transmitted and vertically received) polarization; green to the C-band (vertically transmitted and received) polarization; and blue to the ratio of the two data sets. Blue areas in the image are smooth and yellow areas are rock outcrops with varying amounts of snow and vegetation. The view is looking north along the northeastern edge of the Long Valley caldera, a volcanic collapse feature created 750,000 years ago and the site of continued subsurface activity. Crowley Lake is off the image to the left. Spaceborne Imaging Radar-C and X-band Synthetic Aperture Radar (SIR-C/X-SAR) is part of NASA's Mission to Planet Earth. The radars illuminate Earth with microwaves, allowing detailed observations at any time, regardless of weather or sunlight conditions. SIR-C/X-SAR uses three microwave wavelengths: L-band (24 cm), C-band (6 cm) and X-band (3 cm). The multi-frequency data will be used by the international scientific community to better understand the global environment and how it is changing. The SIR-C/X-SAR data, complemented by aircraft and ground studies, will give scientists clearer insights into those environmental changes which are caused by nature and those changes which are induced by human activity. SIR-C was developed by NASA's Jet Propulsion Laboratory. X-SAR was developed by the Dornier and Alenia Spazio companies for the German space agency, Deutsche Agentur fuer Raumfahrtangelegenheiten (DARA), and the Italian space agency, Agenzia Spaziale Italiana (ASI), with the Deutsche Forschungsanstalt fuer Luft und Raumfahrt e.V. (DLR), the major partner in science, operations and data processing of X-SAR.

  15. Space Radar Image of Missoula, Montana in 3-D

    NASA Technical Reports Server (NTRS)

    1994-01-01

    This is a three-dimensional perspective view of Missoula, Montana, created by combining two spaceborne radar images using a technique known as interferometry. Visualizations like this are useful because they show scientists the shapes of topographic features such as mountains and valleys. This technique helps to clarify the relationships of the different types of materials on the surface detected by the radar. The view is looking north-northeast. The blue circular area at the lower left corner is a bend of the Bitterroot River just before it joins the Clark Fork, which runs through the city. Crossing the Bitterroot River is the bridge of U.S. Highway 93. The highest mountains in this image are at elevations of 2,200 meters (7,200 feet). The city is about 975 meters (3,200 feet) above sea level. The bright yellow areas are urban and suburban zones, dark brown and blue-green areas are grasslands, bright green areas are farms, light brown and purple areas are scrub and forest, and bright white and blue areas are steep rocky slopes. The two radar images were taken on successive days by the Spaceborne Imaging Radar-C/X-band Synthetic Aperture Radar (SIR-C/X-SAR) onboard the space shuttle Endeavour in October 1994. The digital elevation map was produced using radar interferometry, a process in which radar data are acquired on different passes of the space shuttle. The two data passes are compared to obtain elevation information. Radar image data are draped over the topography to provide the color with the following assignments: red is L-band vertically transmitted, vertically received; green is C-band vertically transmitted, vertically received; and blue shows differences seen in the L-band data between the two days. This image is centered near 46.9 degrees north latitude and 114.1 degrees west longitude. No vertical exaggeration factor has been applied to the data.
SIR-C/X-SAR, a joint mission of the German, Italian and United States space agencies, is part of NASA's Mission to Planet Earth program.

  16. Determining 3D flow fields via multi-camera light field imaging.

    PubMed

    Truscott, Tadd T; Belden, Jesse; Nielson, Joseph R; Daily, David J; Thomson, Scott L

    2013-01-01

    In the field of fluid mechanics, the resolution of computational schemes has outpaced experimental methods and widened the gap between predicted and observed phenomena in fluid flows. Thus, a need exists for an accessible method capable of resolving three-dimensional (3D) data sets for a range of problems. We present a novel technique for performing quantitative 3D imaging of many types of flow fields. The 3D technique enables investigation of complicated velocity fields and bubbly flows. Measurements of these types present a variety of challenges to the instrument. For instance, optically dense bubbly multiphase flows cannot be readily imaged by traditional, non-invasive flow measurement techniques due to the bubbles occluding optical access to the interior regions of the volume of interest. By using Light Field Imaging we are able to reparameterize images captured by an array of cameras to reconstruct a 3D volumetric map for every time instance, despite partial occlusions in the volume. The technique makes use of an algorithm known as synthetic aperture (SA) refocusing, whereby a 3D focal stack is generated by combining images from several cameras post-capture (1). Light Field Imaging allows for the capture of angular as well as spatial information about the light rays, and hence enables 3D scene reconstruction. Quantitative information can then be extracted from the 3D reconstructions using a variety of processing algorithms. In particular, we have developed measurement methods based on Light Field Imaging for performing 3D particle image velocimetry (PIV), extracting bubbles in a 3D field and tracking the boundary of a flickering flame. We present the fundamentals of the Light Field Imaging methodology in the context of our setup for performing 3DPIV of the airflow passing over a set of synthetic vocal folds, and show representative results from application of the technique to a bubble-entraining plunging jet. PMID:23486112
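    The shift-and-add form of synthetic aperture refocusing can be sketched as follows; the five-camera geometry and the 2 px-per-baseline disparity of the synthetic scene are illustrative assumptions.

```python
import numpy as np

# Shift-and-add synthetic aperture (SA) refocusing: images from a camera array
# are shifted in proportion to each camera's baseline and averaged, so objects
# on the chosen focal plane align (sharp) while occluders off that plane blur
# out across the stack.
def sa_refocus(images, baselines, slope):
    """Refocus by shifting each image by slope * baseline pixels, then averaging."""
    stack = [np.roll(img, int(round(slope * b)), axis=1)
             for img, b in zip(images, baselines)]
    return np.mean(stack, axis=0)

# Synthetic scene: a bright column whose apparent position shifts with baseline
# (2 px of disparity per unit baseline), as seen by a five-camera linear array.
baselines = np.arange(-2, 3)
images = []
for b in baselines:
    img = np.zeros((8, 32))
    img[:, 16 + 2 * b] = 1.0
    images.append(img)

refocused = sa_refocus(images, baselines, slope=-2)  # undo the 2 px/baseline disparity
```

    Choosing a different `slope` focuses on a different depth plane; sweeping it generates the 3D focal stack from which particles or bubbles are extracted.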

  17. Snapshot 3D optical coherence tomography system using image mapping spectrometry

    PubMed Central

    Nguyen, Thuc-Uyen; Pierce, Mark C; Higgins, Laura; Tkaczyk, Tomasz S

    2013-01-01

    A snapshot 3-Dimensional Optical Coherence Tomography system was developed using Image Mapping Spectrometry. This system can give depth information (Z) at different spatial positions (X, Y) within one camera integration time to potentially reduce motion artifact and enhance throughput. The current (x, y, λ) datacube of (85×356×117) provides a 3D visualization of the sample with 400 μm depth and 13.4 μm transverse resolution. An axial resolution of 16.0 μm can also be achieved in this proof-of-concept system. We present an analysis of the theoretical constraints which will guide development of future systems with increased imaging depth and improved axial and lateral resolutions. PMID:23736629

  18. A Workstation for Interactive Display and Quantitative Analysis of 3-D and 4-D Biomedical Images

    PubMed Central

    Robb, R.A.; Heffeman, P.B.; Camp, J.J.; Hanson, D.P.

    1986-01-01

    The capability to extract objective and quantitatively accurate information from 3-D radiographic biomedical images has not kept pace with the capabilities to produce the images themselves. This is rather an ironic paradox, since on the one hand the new 3-D and 4-D imaging capabilities promise significant potential for providing greater specificity and sensitivity (i.e., precise objective discrimination and accurate quantitative measurement of body tissue characteristics and function) in clinical diagnostic and basic investigative imaging procedures than ever possible before, but on the other hand, the momentous advances in computer and associated electronic imaging technology which have made these 3-D imaging capabilities possible have not been concomitantly developed for full exploitation of these capabilities. Therefore, we have developed a powerful new microcomputer-based system which permits detailed investigations and evaluation of 3-D and 4-D (dynamic 3-D) biomedical images. The system comprises a special workstation to which all the information in a large 3-D image data base is accessible for rapid display, manipulation, and measurement. The system provides important capabilities for simultaneously representing and analyzing both structural and functional data and their relationships in various organs of the body. This paper provides a detailed description of this system, as well as some of the rationale, background, theoretical concepts, and practical considerations related to system implementation.

  19. Evaluation of a new method for stenosis quantification from 3D x-ray angiography images

    NASA Astrophysics Data System (ADS)

    Betting, Fabienne; Moris, Gilles; Knoplioch, Jerome; Trousset, Yves L.; Sureda, Francisco; Launay, Laurent

    2001-05-01

    A new method for stenosis quantification from 3D X-ray angiography images has been evaluated on both phantom and clinical data. On phantoms, for vessel parts 3 mm or larger, the standard deviation of the measurement error was always found to be less than or equal to 0.4 mm, and the maximum measurement error less than 0.17 mm. No clear relationship was observed between the performance of the quantification method and the acquisition FoV. On clinical data, the 3D quantification method proved more robust to vessel bifurcations than its 2D equivalent. Over a total of 15 clinical cases, the differences between 2D and 3D quantification were always less than 0.7 mm. The conclusion is that stenosis quantification from 3D X-ray angiography images is an attractive alternative to quantification from 2D X-ray images.

  20. Retinal image restoration by means of blind deconvolution

    NASA Astrophysics Data System (ADS)

    Marrugo, Andrés G.; Šorel, Michal; Šroubek, Filip; Millán, María S.

    2011-11-01

    Retinal imaging plays a key role in the diagnosis and management of ophthalmologic disorders, such as diabetic retinopathy, glaucoma, and age-related macular degeneration. Because of the acquisition process, retinal images often suffer from blurring and uneven illumination. This problem may seriously affect disease diagnosis and progression assessment. Here we present a method for color retinal image restoration by means of multichannel blind deconvolution. The method is applied to a pair of retinal images acquired within a lapse of time, ranging from several minutes to months. It consists of a series of preprocessing steps to adjust the images so they comply with the considered degradation model, followed by the estimation of the point-spread function and, ultimately, image deconvolution. The preprocessing is mainly composed of image registration, uneven illumination compensation, and segmentation of areas with structural changes. In addition, we have developed a procedure for the detection and visualization of structural changes. This enables the identification of subtle developments in the retina not caused by variation in illumination or blur. The method was tested on synthetic and real images. Encouraging experimental results show that the method is capable of significant restoration of degraded retinal images.
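    The final, non-blind deconvolution step (once a PSF estimate is available) can be sketched with Richardson-Lucy iterations; this is a common choice shown for illustration, not the paper's multichannel blind estimator, and the point-source scene below is synthetic.

```python
import numpy as np

# Richardson-Lucy deconvolution with FFT-based circular convolution.
# The PSF is defined on the full image grid, centred at index (0, 0)
# in the wrap-around sense, so no extra shift handling is needed.
def fft_conv(img, otf):
    return np.real(np.fft.ifft2(np.fft.fft2(img) * otf))

def richardson_lucy(blurred, psf, n_iter=100):
    otf = np.fft.fft2(psf / psf.sum())
    estimate = np.full_like(blurred, blurred.mean())  # flat positive start
    for _ in range(n_iter):
        ratio = blurred / (fft_conv(estimate, otf) + 1e-12)
        estimate = estimate * fft_conv(ratio, np.conj(otf))  # correlate with PSF
    return estimate

# Synthetic test: a point source blurred by a 3x3 box PSF.
sharp = np.zeros((32, 32)); sharp[16, 16] = 1.0
psf = np.zeros((32, 32)); psf[np.ix_([-1, 0, 1], [-1, 0, 1])] = 1.0 / 9.0
blurred = fft_conv(sharp, np.fft.fft2(psf))
restored = richardson_lucy(blurred, psf)
```

    The multiplicative update preserves non-negativity and approximately conserves total intensity, both desirable for fundus images.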

  1. 3D image-based scatter estimation and correction for multi-detector CT imaging

    NASA Astrophysics Data System (ADS)

    Petersilka, M.; Allmendinger, T.; Stierstorfer, K.

    2014-03-01

    The aim of this work is to implement and evaluate a 3D image-based approach for the estimation of scattered radiation in multi-detector CT. Based on a reconstructed CT image volume, the scattered radiation contribution is calculated in 3D fan-beam geometry in the framework of an extended point-scatter kernel (PSK) model of scattered radiation. The PSK model is based on the calculation of elemental scatter contributions propagating the rays from the focal spot to the detector across the object for defined interaction points on a 3D fan beam grid. Each interaction point in 3D leads to an individual elemental 2D scatter distribution on the detector. The sum of all elemental contributions represents the total scatter intensity distribution on the detector. Our proposed extended PSK depends on the scattering angle (defined by the interaction point and the considered detector channel) and the line integral between the interaction point on a 3D fan beam ray and the intersection of the same ray with the detector. The PSK comprises single- and multiple scattering as well as the angular selectivity characteristics of the anti-scatter grid on detector. Our point-scatter kernels were obtained from a low-noise Monte-Carlo simulation of water-equivalent spheres with different radii for a particular CT scanner geometry. The model allows obtaining noise-free scatter intensity distribution estimates with a lower computational load compared to Monte-Carlo methods. In this work, we give a description of the algorithm and the proposed PSK. Furthermore, we compare resulting scatter intensity distributions (obtained for numerical phantoms) to Monte-Carlo results.
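    The PSK summation itself can be sketched schematically: each interaction point deposits an elemental distribution on the detector, and the total scatter estimate is the sum of those contributions. The 1D Gaussian kernel and the weights below are placeholders for the Monte-Carlo-derived kernels, not the paper's model.

```python
import numpy as np

# Schematic point-scatter-kernel (PSK) summation: every interaction point on
# the 3D fan-beam grid contributes a smooth elemental scatter distribution on
# the detector; the total scatter intensity is the sum over all points.
det = np.linspace(-1.0, 1.0, 128)            # detector channel coordinates

def elemental_scatter(center, weight, width=0.3):
    """Placeholder elemental kernel: Gaussian of given amplitude on the detector."""
    return weight * np.exp(-0.5 * ((det - center) / width) ** 2)

# (projected position, weight) pairs standing in for interaction points whose
# weights would come from line integrals and the scattering-angle-dependent PSK.
interaction_points = [(-0.4, 1.0), (0.0, 2.0), (0.5, 0.7)]
scatter = sum(elemental_scatter(c, w) for c, w in interaction_points)
```

    In the real model the kernel shape depends on the scattering angle and the intersected line integral, and includes the anti-scatter grid's angular selectivity; the summation structure is the same.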

  2. Intensity-based segmentation and visualization of cells in 3D microscopic images using the GPU

    NASA Astrophysics Data System (ADS)

    Kang, Mi-Sun; Lee, Jeong-Eom; Jeon, Woong-ki; Choi, Heung-Kook; Kim, Myoung-Hee

    2013-02-01

    3D microscopy images contain enormous amounts of data, rendering 3D microscopy image processing time-consuming and laborious on a central processing unit (CPU). A common workaround is to crop a region of interest (ROI) of the input image to a small size. Although this reduces cost and time, it has drawbacks at the image processing level: the selected ROI strongly depends on the user, and original image information is lost. To mitigate these problems, we developed a 3D microscopy image processing tool on a graphics processing unit (GPU). Our tool provides several efficient automatic thresholding methods for intensity-based segmentation of 3D microscopy images, and users can select the algorithm to be applied. Further, the tool provides visualization of segmented volume data, with scale, translation, and other view settings controlled by keyboard and mouse. However, fast visualization alone is not sufficient; the 3D objects must also be analyzed to yield information useful to biologists. To analyze 3D microscopic images, quantitative data are needed, so we label the segmented 3D objects within all 3D microscopic images and obtain quantitative information on each labeled object, which can serve as classification features. A user can select an object to be analyzed; our tool displays the selected object in a new window so that its details can be observed. Finally, we validate the effectiveness of our tool by comparing CPU and GPU processing times under matched specifications and configurations.

  3. Algorithm of pulmonary emphysema extraction using thoracic 3D CT images

    NASA Astrophysics Data System (ADS)

    Saita, Shinsuke; Kubo, Mitsuru; Kawata, Yoshiki; Niki, Noboru; Nakano, Yasutaka; Ohmatsu, Hironobu; Tominaga, Keigo; Eguchi, Kenji; Moriyama, Noriyuki

    2007-03-01

    Recently, due to aging populations and smoking, the number of emphysema patients has been increasing. Alveoli destroyed by emphysema cannot be restored, so early detection of emphysema is desired. We describe a quantitative algorithm for extracting emphysematous lesions and quantitatively evaluating their distribution patterns using low-dose thoracic 3-D CT images. The algorithm identifies lung anatomies and extracts low attenuation areas (LAA) as emphysematous lesion candidates. By applying the algorithm to thoracic 3-D CT images and follow-up 3-D CT images, we demonstrate its potential effectiveness in assisting radiologists and physicians to quantitatively evaluate the distribution of emphysematous lesions and their evolution over time.

  4. Algorithm of pulmonary emphysema extraction using low dose thoracic 3D CT images

    NASA Astrophysics Data System (ADS)

    Saita, S.; Kubo, M.; Kawata, Y.; Niki, N.; Nakano, Y.; Omatsu, H.; Tominaga, K.; Eguchi, K.; Moriyama, N.

    2006-03-01

    Recently, due to aging populations and smoking, the number of emphysema patients has been increasing. Alveoli destroyed by emphysema cannot be restored, so early detection of emphysema is desired. We describe a quantitative algorithm for extracting emphysematous lesions and quantitatively evaluating their distribution patterns using low dose thoracic 3-D CT images. The algorithm identifies lung anatomies and extracts low attenuation areas (LAA) as emphysematous lesion candidates. By applying the algorithm to 100 thoracic 3-D CT images and follow-up 3-D CT images, we demonstrate its potential effectiveness in assisting radiologists and physicians to quantitatively evaluate the distribution of emphysematous lesions and their evolution over time.
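    The LAA extraction step common to both papers can be sketched as thresholding lung voxels at a fixed CT number; the -950 HU cutoff and the synthetic volume below are conventional illustrative choices, not necessarily the authors' values.

```python
import numpy as np

# Low attenuation area (LAA) scoring: the fraction of lung voxels whose CT
# number falls below a fixed Hounsfield-unit threshold, a standard way to
# quantify emphysematous destruction.
def laa_percent(ct_volume, lung_mask, threshold_hu=-950):
    """Percentage of lung voxels below the LAA threshold (LAA%)."""
    lung_voxels = ct_volume[lung_mask]
    return 100.0 * np.count_nonzero(lung_voxels < threshold_hu) / lung_voxels.size

# Synthetic lung: normal parenchyma around -860 HU, one emphysematous pocket.
vol = np.full((20, 20, 20), -860.0)
mask = np.ones_like(vol, dtype=bool)
vol[0:5, 0:5, 0:5] = -980.0                 # 125 destroyed voxels
score = laa_percent(vol, mask)              # 125 of 8000 voxels below -950 HU
```

    Repeating the measurement on follow-up scans and comparing LAA% over time gives the interval-change evaluation described above.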

  5. Hyperspectral image compression based on the framework of DSC using 3D-wavelet and LDPC

    NASA Astrophysics Data System (ADS)

    Wu, Jiaji; Jiang, Kun; Fang, Yong; Jiao, Licheng

    2009-08-01

    In this paper, we propose a method based on both the 3D wavelet transform and low-density parity-check (LDPC) codes to compress hyperspectral images within the framework of distributed source coding (DSC). The new approach, which combines DSC with the 3D wavelet transform, achieves low complexity at the encoder while delivering efficient compression of hyperspectral images. Experimental results for hyperspectral image coding show that the new method performs better than 3D-SPIHT and outperforms 2D-SPIHT and JPEG2000.
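    One level of a separable 3D wavelet analysis, here with the simple Haar filter standing in for the paper's filter bank, can be sketched as:

```python
import numpy as np

# One level of a separable 3D wavelet transform: a 1D analysis step is applied
# along each of the three axes in turn, producing 8 subbands (LLL ... HHH).
# The Haar filter is used for brevity; it is a stand-in, not the paper's bank.
def haar_1d(a, axis):
    """One orthonormal Haar analysis level along one axis: (average, detail) halves."""
    a = np.moveaxis(a, axis, 0)
    lo = (a[0::2] + a[1::2]) / np.sqrt(2)
    hi = (a[0::2] - a[1::2]) / np.sqrt(2)
    return np.moveaxis(np.concatenate([lo, hi]), 0, axis)

def haar_3d(cube):
    for ax in range(3):
        cube = haar_1d(cube, ax)
    return cube

vol = np.random.default_rng(1).normal(size=(8, 8, 8))
coeffs = haar_3d(vol)   # same shape, energy redistributed into subbands
```

    Because the transform is orthonormal, it preserves energy exactly; the coder then exploits the resulting concentration of energy in the low-pass subband.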

  6. Applications of azo-based probes for imaging retinal hypoxia.

    PubMed

    Uddin, Md Imam; Evans, Stephanie M; Craft, Jason R; Marnett, Lawrence J; Uddin, Md Jashim; Jayagopal, Ashwath

    2015-04-01

    We report the design and synthesis of an activatable molecular imaging probe to detect hypoxia in mouse models of retinal vascular diseases. Hypoxia of the retina has been associated with the initiation and progression of blinding retinal vascular diseases, including age-related macular degeneration, diabetic retinopathy, and retinopathy of prematurity. In vivo retinal imaging of hypoxia may therefore be useful for early detection and timely treatment of retinal diseases. To achieve this goal, we synthesized HYPOX-3, a near-infrared (NIR) imaging agent coupled to a dark quencher, Black Hole Quencher 3 (BHQ3), which has previously been reported to contain a hypoxia-sensitive cleavable azo bond. HYPOX-3 was cleaved in hypoxic retinal cell culture and animal models, enabling detection of hypoxia with high signal-to-noise ratios and without acute toxicity: it fluoresced in hypoxic cells and tissues and was undetectable under normoxia. These imaging agents are promising candidates for imaging retinal hypoxia in preclinical disease models and patients. PMID:25893047

  7. 3D face reconstruction from limited images based on differential evolution

    NASA Astrophysics Data System (ADS)

    Wang, Qun; Li, Jiang; Asari, Vijayan K.; Karim, Mohammad A.

    2011-09-01

    3D face modeling has been one of the greatest challenges for researchers in computer graphics for many years. Various methods have been used to model the shape and texture of faces under varying illumination and pose conditions from a single given image. In this paper, we propose a novel method for 3D face synthesis and reconstruction using a simple and efficient global optimizer: a 3D-2D matching algorithm that integrates the 3D morphable model (3DMM) with the differential evolution (DE) algorithm. In 3DMM, the process of fitting shape and texture information to 2D images is treated as a search for the global minimum in a high-dimensional feature space, where optimization is prone to becoming trapped in local minima. Unlike the traditional scheme used in 3DMM, DE is robust against stagnation in local minima and insensitive to initial values in face reconstruction. Benefiting from DE's performance, 3D face models can be created from a single 2D image under various illumination and pose contexts. Preliminary results demonstrate that we are able to automatically create a virtual 3D face from a single 2D image with high performance, and validation shows only an insignificant difference between the input image and the 2D face image projected from the 3D model.
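    A minimal DE/rand/1/bin optimizer of the kind used for the 3DMM fitting can be sketched as follows; the toy sphere objective replaces the 3DMM cost function, and the population size and F/CR settings are common defaults, not the paper's.

```python
import numpy as np

# Differential evolution, DE/rand/1/bin: each target vector is challenged by a
# trial vector built from the scaled difference of two random population
# members added to a third, with binomial crossover; the better vector survives.
def differential_evolution(cost, bounds, pop_size=20, f=0.8, cr=0.9, n_gen=200, seed=0):
    rng = np.random.default_rng(seed)
    dim = len(bounds)
    lo, hi = np.array(bounds, dtype=float).T
    pop = rng.uniform(lo, hi, (pop_size, dim))
    fitness = np.array([cost(x) for x in pop])
    for _ in range(n_gen):
        for i in range(pop_size):
            others = [j for j in range(pop_size) if j != i]
            a, b, c = pop[rng.choice(others, 3, replace=False)]
            mutant = np.clip(a + f * (b - c), lo, hi)
            cross = rng.random(dim) < cr
            cross[rng.integers(dim)] = True        # at least one mutant gene
            trial = np.where(cross, mutant, pop[i])
            trial_cost = cost(trial)
            if trial_cost < fitness[i]:
                pop[i], fitness[i] = trial, trial_cost
    return pop[np.argmin(fitness)]

# Toy objective: minimize the sphere function over a 3D search space.
best = differential_evolution(lambda x: np.sum(x ** 2), bounds=[(-5, 5)] * 3)
```

    Because DE only needs cost evaluations (no gradients), the same loop applies unchanged when the objective is the 3DMM image-matching error over shape, texture, pose, and illumination parameters.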

  8. Mutual information as a measure of image quality for 3D dynamic lung imaging with EIT

    PubMed Central

    Crabb, M G; Davidson, J L; Little, R; Wright, P; Morgan, A R; Miller, C A; Naish, J H; Parker, G J M; Kikinis, R; McCann, H; Lionheart, W R B

    2014-01-01

    We report on a pilot study of dynamic lung electrical impedance tomography (EIT) at the University of Manchester. Low-noise EIT data at 100 frames per second (fps) were obtained from healthy male subjects during controlled breathing, followed by magnetic resonance imaging (MRI) subsequently used for spatial validation of the EIT reconstruction. The torso surface in the MR image and electrode positions obtained using MRI fiducial markers informed the construction of a 3D finite element model extruded along the caudal-distal axis of the subject. Small changes in the boundary that occur during respiration were accounted for by incorporating the sensitivity with respect to boundary shape into a robust temporal difference reconstruction algorithm. EIT and MRI images were co-registered using the open source medical imaging software, 3D Slicer. A quantitative comparison of quality of different EIT reconstructions was achieved through calculation of the mutual information with a lung-segmented MR image. EIT reconstructions using a linear shape correction algorithm reduced boundary image artefacts, yielding better contrast of the lungs, and had 10% greater mutual information compared with a standard linear EIT reconstruction. PMID:24710978
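    The mutual-information image-quality measure can be sketched from a joint histogram of two co-registered images; the bin count and random test images below are illustrative assumptions.

```python
import numpy as np

# Mutual information between two images from their joint intensity histogram:
# MI = sum p(x,y) * log(p(x,y) / (p(x) p(y))), in nats. Higher MI means one
# image better predicts the other, as used here to rank EIT reconstructions
# against a lung-segmented MR image.
def mutual_information(img_a, img_b, bins=16):
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

rng = np.random.default_rng(0)
img = rng.random((64, 64))
noise = rng.random((64, 64))
mi_self = mutual_information(img, img)        # high: an image predicts itself
mi_indep = mutual_information(img, noise)     # near zero for independent images
```

    Note the finite-sample bias: even independent images yield a small positive MI that grows with the bin count, so comparisons should use a fixed binning.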

  9. Anesthesiology training using 3D imaging and virtual reality

    NASA Astrophysics Data System (ADS)

    Blezek, Daniel J.; Robb, Richard A.; Camp, Jon J.; Nauss, Lee A.

    1996-04-01

    Current training for regional nerve block procedures by anesthesiology residents requires expert supervision and the use of cadavers, both of which are relatively expensive commodities in today's cost-conscious medical environment. We are developing methods to augment and eventually replace these training procedures with real-time, realistic computer visualizations and manipulations of the anatomical structures involved in anesthesiology procedures, such as nerve plexus injections (e.g., celiac blocks). The initial work is focused on visualizations: both static images and rotational renderings. From the initial results, a coherent paradigm for virtual patient and scene representation will be developed.

  10. 3D Synchrotron Imaging of a Directionally Solidified Ternary Eutectic

    NASA Astrophysics Data System (ADS)

    Dennstedt, Anne; Helfen, Lukas; Steinmetz, Philipp; Nestler, Britta; Ratke, Lorenz

    2016-03-01

    For the first time, the microstructure of a directionally solidified ternary eutectic is visualized in three dimensions, using high-resolution X-ray tomography at the ESRF. The microstructure characterization is conducted at a photon energy that allows the three phases Ag2Al, Al2Cu, and α-aluminum solid solution to be clearly discriminated. The reconstructed images illustrate the three-dimensional arrangement of the phases. The Ag2Al lamellae undergo splitting and merging as well as nucleation and disappearance events during directional solidification.

  11. 3D imaging of cone photoreceptors over extended time periods using optical coherence tomography with adaptive optics

    NASA Astrophysics Data System (ADS)

    Kocaoglu, Omer P.; Lee, Sangyeol; Jonnal, Ravi S.; Wang, Qiang; Herde, Ashley E.; Besecker, Jason; Gao, Weihua; Miller, Donald T.

    2011-03-01

    Optical coherence tomography with adaptive optics (AO-OCT) is a highly sensitive, noninvasive method for 3D imaging of the microscopic retina. The purpose of this study is to advance AO-OCT technology by enabling repeated imaging of cone photoreceptors over extended periods of time (days). This sort of longitudinal imaging permits monitoring of 3D cone dynamics in both normal and diseased eyes, in particular the physiological processes of disc renewal and phagocytosis, which are disrupted by retinal diseases such as age-related macular degeneration and retinitis pigmentosa. For this study, the existing AO-OCT system at Indiana underwent several major hardware and software improvements to optimize system performance for 4D cone imaging. First, ultrahigh-speed imaging was realized using a Basler Sprint camera. Second, a light source with adjustable spectrum was realized by integrating an Integral laser (Femto Lasers, λc=800 nm, Δλ=160 nm) and spectral filters in the source arm. For cone imaging, we used a bandpass filter with λc=809 nm and Δλ=81 nm (2.6 µm nominal axial resolution in tissue, and 167 kHz A-line rate using 1,408 px), which reduced the impact of eye motion compared to previous AO-OCT implementations. Third, eye motion artifacts were further reduced by custom ImageJ plugins that registered (axially and laterally) the volume videos. In two subjects, cone photoreceptors were imaged and tracked over a ten-day period and their reflectance and outer segment (OS) lengths measured. High-speed imaging and image registration/dewarping were found to reduce eye motion to a fraction of a cone width (1 µm root mean square). The pattern of reflections in the cones was found to change dramatically and occurred on a spatial scale well below the resolution of clinical instruments. Normalized reflectance of the connecting cilium (CC) and OS posterior tip (PT) of an exemplary cone was 54±4, 47±4, 48±6, 50±5, 56±1% and 46±4, 53±4, 52±6, 50±5, 44±1% for days 1, 3, 6, 8, and 10, respectively. OS length of the same cone was 28.9, 26.4, 26.4, 30.6, and 28.1 µm for the same days. It is plausible that these changes are an optical correlate of the natural process of OS renewal and shedding.

  12. Computer-assisted 3D design software for teaching neuro-ophthalmology of the oculomotor system and training new retinal surgery techniques

    NASA Astrophysics Data System (ADS)

    Glittenberg, Carl; Binder, Susanne

    2004-07-01

    Purpose: To create a more effective method of demonstrating complex subject matter in ophthalmology using high-end 3-D computer-aided animation and interactive multimedia technologies. Specifically, to explore the possibilities of demonstrating the complex neuro-ophthalmological basics of the human oculomotor system in a clear and unambiguous way, and to demonstrate new forms of retinal surgery in a manner that makes the procedures easier for other retinal surgeons to understand. Methods and Materials: Using Reflektions 4.3, Monzoom Pro 4.5, Cinema 4D XL 5.03, Cinema 4D XL 8 Studio Bundle, Mediator 4.0, Mediator Pro 5.03, Fujitsu-Siemens Pentium III and IV, Gericom Webgine laptop, M.G.I. Video Wave 1.0 and 5, Micrografix Picture Publisher 6.0 and 8, Amorphium 1.0, and Blobs for Windows, we created 3-D animations showing the origin, insertion, course, main direction of pull, and auxiliary direction of pull of the six extraocular eye muscles. We created 3-D animations that (a) show the intracranial path of the relevant oculomotor cranial nerves and the muscles they supply, (b) show which muscles are active in each of the ten lines of sight, (c) demonstrate the various malfunctions of the oculomotor system, and (d) show the surgical techniques and challenges in radial optic neurotomies and subretinal surgeries. Most of the 3-D animations were integrated into interactive multimedia teaching programs. Their effectiveness was compared to conventional teaching methods in a comparative study performed at the University of Vienna. We also performed a survey to examine the response of students taught with the interactive programs. We are currently in the process of placing most of the animations on an interactive web site in order to make them freely available to everyone who is interested.
Results: Although learning how to use complex 3-D computer animation and multimedia authoring software can be very time consuming and frustrating, we found that once the programs are mastered they can be used to create 3-D animations that drastically improve the quality of medical demonstrations. The comparative study showed a significant advantage of using these technologies over conventional teaching methods. The feedback from medical students, doctors, and retinal surgeons was overwhelmingly positive. A strong interest was expressed to have more subjects and techniques demonstrated in this fashion. Conclusion: 3-D computer technologies should be used in the demonstration of all complex medical subjects. More effort and resources need to be given to the development of these technologies that can improve the understanding of medicine for students, doctors, and patients alike.

  13. Image quality improvement for a 3D structure exhibiting multiple 2D patterns and its implementation.

    PubMed

    Hirayama, Ryuji; Nakayama, Hirotaka; Shiraki, Atsushi; Kakue, Takashi; Shimobaba, Tomoyoshi; Ito, Tomoyoshi

    2016-04-01

    A three-dimensional (3D) structure designed by our proposed algorithm can simultaneously exhibit multiple two-dimensional patterns. The 3D structure provides multiple patterns with directional characteristics by distributing the effects of the artefacts. In this study, we propose an iterative algorithm to improve the image quality of the exhibited patterns and verify its effectiveness using numerical simulations. Moreover, we fabricated different 3D glass structures (an octagonal prism, a cube, and a sphere) using the proposed algorithm. All 3D structures exhibit four patterns, and different patterns can be observed depending on the viewing direction. PMID:27137021

  14. Snapshot hyperspectral retinal camera with the Image Mapping Spectrometer (IMS).

    PubMed

    Gao, Liang; Smith, R Theodore; Tkaczyk, Tomasz S

    2012-01-01

    We present a snapshot hyperspectral retinal camera with the Image Mapping Spectrometer (IMS) for eye imaging applications. The system is capable of simultaneously acquiring 48 spectral channel images in the range 470-650 nm at a frame rate of 5.2 fps. The spatial sampling of each measured spectral scene is 350 × 350 pixels. The advantages of this snapshot device are the elimination of the eye-motion artifacts and pixel misregistration problems of traditional scanning-based hyperspectral retinal cameras, and real-time imaging of oxygen saturation dynamics with sub-second temporal resolution. The spectral imaging performance is demonstrated in an in vivo human retinal imaging experiment. The absorption spectral signatures of oxy-hemoglobin and macular pigments were successfully acquired using this device. PMID:22254167

  15. Terahertz Lasers Reveal Information for 3D Images

    NASA Technical Reports Server (NTRS)

    2013-01-01

    After taking off her shoes and jacket, she places them in a bin. She then takes her laptop out of its case and places it in a separate bin. As the items move through the x-ray machine, the woman waits for a sign from security personnel to pass through the metal detector. Today, she was lucky; she did not encounter any delays. The man behind her, however, was asked to step inside a large circular tube, raise his hands above his head, and have his whole body scanned. If you have ever witnessed a full-body scan at the airport, you may have witnessed terahertz imaging. Terahertz wavelengths are located between microwave and infrared on the electromagnetic spectrum. When exposed to these wavelengths, certain materials such as clothing, thin metal, sheet rock, and insulation become transparent. At airports, terahertz radiation can illuminate guns, knives, or explosives hidden underneath a passenger's clothing. At NASA's Kennedy Space Center, terahertz wavelengths have assisted in the inspection of materials like insulating foam on the external tanks of the now-retired space shuttle. "The foam we used on the external tank was a little denser than Styrofoam, but not much," says Robert Youngquist, a physicist at Kennedy. The problem, he explains, was that "we lost a space shuttle by having a chunk of foam fall off from the external fuel tank and hit the orbiter." To uncover any potential defects in the foam covering, such as voids or air pockets, that could keep the material from staying in place, NASA employed terahertz imaging to see through the foam. For many years, the technique ensured the integrity of the material on the external tanks.

  16. 2D and 3D MALDI-imaging: conceptual strategies for visualization and data mining.

    PubMed

    Thiele, Herbert; Heldmann, Stefan; Trede, Dennis; Strehlow, Jan; Wirtz, Stefan; Dreher, Wolfgang; Berger, Judith; Oetjen, Janina; Kobarg, Jan Hendrik; Fischer, Bernd; Maass, Peter

    2014-01-01

    3D imaging has a significant impact on many challenges in the life sciences, because biology is a three-dimensional phenomenon. Current 3D imaging technologies (various types of MRI, PET, SPECT) are labeled, i.e. they trace the localization of a specific compound in the body. In contrast, 3D MALDI mass spectrometry imaging (MALDI-MSI) is a label-free method that images the spatial distribution of molecular compounds. It complements labeled 3D imaging methods, immunohistochemistry, and genetics-based methods. However, 3D MALDI-MSI cannot tap its full potential due to the lack of statistical methods for the analysis and interpretation of large and complex 3D datasets. To overcome this, we established a complete and robust 3D MALDI-MSI pipeline combined with efficient computational data analysis methods for 3D edge-preserving image denoising, 3D spatial segmentation, and finding colocalized m/z values, which are reviewed here in detail. Furthermore, we explain why integrating and correlating the MALDI imaging data with other imaging modalities enhances the interpretation of the molecular data and provides visualization of molecular patterns that may otherwise not be apparent. Therefore, a 3D data acquisition workflow is described that generates a set of three different 3D image modalities representing the same anatomies. First, an in-vitro MRI measurement is performed, which results in a three-dimensional image modality representing the 3D structure of the measured object. After sectioning the 3D object into N consecutive slices, all N slices are scanned using an optical digital scanner before the MS measurements are performed. Scanning the individual sections yields low-resolution images, which define the base coordinate system for the whole pipeline. The scanned images link the information from the spatial (MRI) and the mass spectrometric (MALDI-MSI) dimensions and are used for the spatial three-dimensional reconstruction of the object, performed by image registration techniques. Different strategies for automatic serial image registration applied to MS datasets are outlined in detail. The third image modality is histology-driven, i.e. a high-resolution digital scan of the histologically stained slices. After fusion of the reconstructed scan images and the MRI, the slice-related coordinates of the mass spectra can be propagated into 3D space. After image registration of the scan images and the histologically stained images, the anatomical information from histology is fused with the mass spectra from MALDI-MSI. As a result of the described pipeline, we have a set of three-dimensional images representing the same anatomies: the reconstructed slice scans, the spectral images with corresponding clustering results, and the acquired MRI. Great emphasis is put on the fact that the co-registered MRI, providing anatomical details, improves the interpretation of 3D MALDI images. The ability to relate mass-spectrometry-derived molecular information to in vivo and in vitro imaging has potentially important implications. This article is part of a Special Issue entitled: Computational Proteomics in the Post-Identification Era. Guest Editors: Martin Eisenacher and Christian Stephan. PMID:23467008
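
    One common building block for the automatic serial image registration mentioned above is phase correlation, which recovers a translation between two images from the normalized cross-power spectrum. A generic sketch, not tied to the authors' pipeline:

```python
import numpy as np

def phase_correlation(ref, mov):
    """Estimate the integer shift (dy, dx) such that
    np.roll(mov, (dy, dx), axis=(0, 1)) best aligns with ref."""
    cross = np.fft.fft2(ref) * np.conj(np.fft.fft2(mov))
    cross /= np.abs(cross) + 1e-12                 # keep only the phase
    corr = np.fft.ifft2(cross).real                # impulse at the shift
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    shift = np.array(peak, dtype=int)
    dims = np.array(corr.shape)
    wrap = shift > dims // 2                       # map to signed shifts
    shift[wrap] -= dims[wrap]
    return int(shift[0]), int(shift[1])

# Toy check: recover a known circular shift between two "slice scans"
rng = np.random.default_rng(2)
ref = rng.random((128, 128))
mov = np.roll(ref, shift=(5, -9), axis=(0, 1))
dy, dx = phase_correlation(ref, mov)
realigned = np.roll(mov, shift=(dy, dx), axis=(0, 1))
```

    Because only the spectral phase is kept, the estimate is insensitive to the global intensity differences between consecutive stained or scanned sections; real pipelines extend this with rotation handling and subpixel peak interpolation.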

  17. 3D printing of intracranial artery stenosis based on the source images of magnetic resonance angiograph

    PubMed Central

    Liu, Jia; Li, Ming-Li; Sun, Zhao-Yong; Chen, Jie

    2014-01-01

    Background and purpose: Three-dimensional (3D) printing techniques for brain diseases have not been widely studied. We attempted to ‘print’ segments of intracranial arteries based on magnetic resonance imaging. Methods: Three-dimensional magnetic resonance angiography (MRA) was performed on two patients with middle cerebral artery (MCA) stenosis. Using scale-adaptive vascular modeling, 3D vascular models were constructed from the MRA source images. The regions of interest (ROI) of the stenotic segments were magnified ten times and fabricated by a 3D printer with a resolution of 30 µm. A survey of 8 clinicians was performed to evaluate the accuracy of the 3D printing results compared with the MRA findings (4 grades; grade 1: consistent with MRA and provides additional visual information; grade 2: consistent with MRA; grade 3: not consistent with MRA; grade 4: not consistent with MRA and provides probably misleading information). A 3D printing was defined as successful if the printed vessel segment matched the MRA findings (grade 1 or 2). Results: Seven responders assigned "grade 1" to the 3D printing results, while one assigned "grade 4". Therefore, 87.5% of the clinicians considered the 3D printing successful. Conclusions: Our pilot study confirms the feasibility of using 3D printing techniques in the research field of intracranial artery diseases. Further investigations are warranted to optimize this technique and translate it into clinical practice. PMID:25333049

  18. Live imaging and analysis of postnatal mouse retinal development

    PubMed Central

    2013-01-01

    Background: The explanted, developing rodent retina provides an efficient and accessible preparation for use in gene transfer and pharmacological experimentation. Many features of normal development are retained in the explanted retina, including retinal progenitor cell proliferation, heterochronic cell production, interkinetic nuclear migration, and connectivity. To date, live imaging in the developing retina has been reported in non-mammalian and mammalian whole-mount samples. An integrated approach to rodent retinal culture/transfection, live imaging, cell tracking, and analysis in structurally intact explants greatly improves our ability to assess the kinetics of cell production. Results: In this report, we describe the assembly and maintenance of an in vitro, CO2-independent, live mouse retinal preparation that is accessible by both upright and inverted 2-photon or confocal microscopes. The optics of this preparation permit high-quality, multi-channel imaging of retinal cells expressing fluorescent reporters for up to 48 h. Tracking of interkinetic nuclear migration within individual cells and changes in retinal progenitor cell morphology are described. Follow-up hierarchical cluster screening revealed that several different dependent-variable measures can be used to identify and group movement kinetics in experimental and control samples. Conclusions: Collectively, these methods provide a robust approach for assaying multiple features of rodent retinal development using live imaging. PMID:23758927

  19. Detection of retinal nerve fiber layer defects in retinal fundus images using Gabor filtering

    NASA Astrophysics Data System (ADS)

    Hayashi, Yoshinori; Nakagawa, Toshiaki; Hatanaka, Yuji; Aoyama, Akira; Kakogawa, Masakatsu; Hara, Takeshi; Fujita, Hiroshi; Yamamoto, Tetsuya

    2007-03-01

    Retinal nerve fiber layer defects (NFLDs) are among the most important findings for the diagnosis of glaucoma reported by ophthalmologists. However, such changes can be overlooked, especially in mass screenings, because ophthalmologists have limited time to search for the many different changes relevant to diagnosing diseases such as diabetes, hypertension, and glaucoma. The use of a computer-aided detection (CAD) system can therefore improve diagnostic results. In this work, a technique for the detection of NFLDs in retinal fundus images is proposed. In the preprocessing step, blood vessels are "erased" from the original retinal fundus image using morphological filtering. The preprocessed image is then transformed into a rectangular array, in which NFLD regions appear as vertical dark bands. Gabor filtering is then applied to enhance these vertical dark bands. False positives (FPs) are reduced by a rule-based method that uses the location and width of each candidate region. The detected regions are back-transformed into the original configuration. In this preliminary study, 71% of NFLD regions were detected, with an average of 3.2 FPs per image. In conclusion, we have developed a technique for the detection of NFLDs in retinal fundus images, with promising results in this initial study.
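
    Enhancing vertical dark bands with a Gabor filter, as in the proposed method, can be sketched generically. The kernel parameters and the toy "transformed fundus" image below are illustrative, not the paper's values:

```python
import numpy as np

def gabor_kernel(size=21, wavelength=8.0, sigma=4.0, theta=0.0):
    """Real (even) Gabor kernel; theta=0 makes it respond to vertical
    stripes (intensity varying along x)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    g = np.exp(-(xr**2 + yr**2) / (2 * sigma**2)) \
        * np.cos(2 * np.pi * xr / wavelength)
    return g - g.mean()          # zero mean: flat regions give no response

def convolve2d(img, kernel):
    """Naive 'valid' 2D convolution (no SciPy dependency)."""
    kh, kw = kernel.shape
    h, w = img.shape[0] - kh + 1, img.shape[1] - kw + 1
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

# Bright background with a single NFLD-like vertical dark band
img = np.ones((64, 64))
img[:, 30:34] = 0.3
response = convolve2d(img, gabor_kernel())
```

    The filter's response magnitude peaks where the dark band is centred on the kernel (output column ≈ 21 for a kernel half-width of 10), and thresholding this response yields the candidate NFLD regions that the rule-based step then prunes.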

  20. Automatic 3D Ultrasound Calibration for Image Guided Therapy Using Intramodality Image Registration

    PubMed Central

    Schlosser, Jeffrey; Kirmizibayrak, Can; Shamdasani, Vijay; Metz, Steve; Hristov, Dimitre

    2013-01-01

    Many real-time ultrasound (US) guided therapies can benefit from management of motion-induced anatomical changes with respect to a previously acquired computerized anatomy model. Spatial calibration is a prerequisite to transforming US image information to the reference frame of the anatomy model. We present a new method for calibrating 3D US volumes using intramodality image registration, derived from the hand-eye calibration technique. The method is fully automated by implementing data rejection based on sensor displacements, automatic registration over overlapping image regions, and a self-consistency error metric evaluated continuously during calibration. We also present a novel method for validating US calibrations based on measurement of physical phantom displacements within US images. Both calibration and validation can be performed on arbitrary phantoms. Results indicate that normalized mutual information and localized cross-correlation produce the most accurate 3D US registrations for calibration. Volumetric image alignment is more accurate and reproducible than point selection for validating the calibrations, yielding <1.5 mm root mean square error, a significant improvement over previously reported hand-eye US calibration results. Comparison of two different phantoms for calibration and for validation revealed significant differences for validation (p=0.003) but not for calibration (p=0.795). PMID:24099806
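
    The localized cross-correlation metric named above builds on zero-normalized cross-correlation (NCC), which scores identical intensity patterns as 1 regardless of gain and offset. A minimal global-NCC sketch (the localized variant computes the same quantity over small windows; the toy volumes are illustrative):

```python
import numpy as np

def normalized_cross_correlation(a, b):
    """Zero-normalized cross-correlation of two equally sized volumes:
    1 for identical patterns (up to gain/offset), ~0 for unrelated ones."""
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    return float(np.mean(a * b))

rng = np.random.default_rng(3)
vol = rng.random((16, 16, 16))
rescaled = 2.0 * vol + 5.0          # same pattern, different gain/offset
unrelated = rng.random((16, 16, 16))
ncc_same = normalized_cross_correlation(vol, rescaled)
ncc_diff = normalized_cross_correlation(vol, unrelated)
```

    Invariance to gain and offset is what makes NCC-style metrics suitable for intramodality US-to-US registration, where speckle patterns repeat between overlapping volumes but overall brightness does not.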

  1. OVERALL PROCEDURES PROTOCOL AND PATIENT ENROLLMENT PROTOCOL: TESTING FEASIBILITY OF 3D ULTRASOUND DATA ACQUISITION AND RELIABILITY OF DATA RETRIEVAL FROM STORED 3D IMAGES

    EPA Science Inventory

    The purpose of this study is to examine the feasibility of collecting, transmitting, and analyzing 3-D ultrasound data in the context of a multi-center study of pregnant women. The study will also examine the reliability of measurements obtained from 3-D imag ...

  2. Software for browsing sectioned images of a dog body and generating a 3D model.

    PubMed

    Seo Park, Jin; Wook Jung, Yong

    2016-01-01

    The goals of this study were (1) to provide accessible and instructive browsing software for sectioned images, together with a portable document format (PDF) file that includes three-dimensional (3D) models of an entire dog body, and (2) to develop techniques for segmentation and 3D modeling that enable an investigator to perform these tasks without the aid of a computer engineer. To achieve these goals, relatively important or large structures in the sectioned images were outlined to generate segmented images. The sectioned and segmented images were then packaged into browsing software, in which structures in the sectioned images are shown in detail and in real color. After 3D models were made from the segmented images, they were exported into a PDF file, in which the 3D models can be manipulated freely. The browsing software and PDF file are available for study by students, for lectures by teachers, and for training of clinicians. These files will be helpful for the anatomical study and clinical training of veterinary students and clinicians. Furthermore, these techniques will be useful for researchers who study two-dimensional images and 3D models. Anat Rec, 299:81-87, 2016. © 2015 Wiley Periodicals, Inc. PMID:26219434

  3. Space Radar Image of Death Valley in 3-D

    NASA Technical Reports Server (NTRS)

    1999-01-01

    This picture is a three-dimensional perspective view of Death Valley, California. This view was constructed by overlaying a SIR-C radar image on a U.S. Geological Survey digital elevation map. The SIR-C image is centered at 36.629 degrees north latitude and 117.069 degrees west longitude. We are looking at Stove Pipe Wells, which is the bright rectangle located in the center of the picture frame. Our vantage point is located atop a large alluvial fan centered at the mouth of Cottonwood Canyon. In the foreground on the left, we can see the sand dunes near Stove Pipe Wells. In the background on the left, the Valley floor gradually falls in elevation toward Badwater, the lowest spot in the United States. In the background on the right we can see Tucki Mountain. This SIR-C/X-SAR supersite is an area of extensive field investigations and has been visited by both Space Radar Lab astronaut crews. Elevations in the Valley range from 70 meters (230 feet) below sea level, the lowest in the United States, to more than 3,300 meters (10,800 feet) above sea level. Scientists are using SIR-C/X-SAR data from Death Valley to help answer a number of different questions about Earth's geology. One question concerns how alluvial fans are formed and change through time under the influence of climatic changes and earthquakes. Alluvial fans are gravel deposits that wash down from the mountains over time. They are visible in the image as circular, fan-shaped bright areas extending into the darker valley floor from the mountains. Information about the alluvial fans helps scientists study Earth's ancient climate. Scientists know the fans are built up through climatic and tectonic processes and they will use the SIR-C/X-SAR data to understand the nature and rates of weathering processes on the fans, soil formation and the transport of sand and dust by the wind. SIR-C/X-SAR's sensitivity to centimeter-scale (inch-scale) roughness provides detailed maps of surface texture. 
Such information can be used to study the occurrence and movement of dust storms and sand dunes. The goal of these studies is to gain a better understanding of the record of past climatic changes and the effects of those changes on a sensitive environment. This may lead to a better ability to predict future response of the land to different potential global climate-change scenarios. Vertical exaggeration is 1.87 times; exaggeration of relief is a common tool scientists use to detect relationships between structure (for example, faults and fractures) and topography. Death Valley is also one of the primary calibration sites for SIR-C/X-SAR. In the lower right quadrant of the picture frame two bright dots can be seen which form a line extending to Stove Pipe Wells. These dots are corner reflectors that have been set up to calibrate the radar as the shuttle passes overhead. Thirty triangular-shaped reflectors (they look like aluminum pyramids) have been deployed by the calibration team from JPL over a 40- by 40-kilometer (25- by 25-mile) area in and around Death Valley. The signatures of these reflectors were analyzed by JPL scientists to calibrate the image used in this picture. The calibration team here also deployed transponders (electronic reflectors) and receivers to measure the radar signals from SIR-C/X-SAR on the ground. SIR-C/X-SAR radars illuminate Earth with microwaves allowing detailed observations at any time, regardless of weather or sunlight conditions. SIR-C/X-SAR uses three microwave wavelengths: L-band (24 cm), C-band (6 cm) and X-band (3 cm). The multi-frequency data will be used by the international scientific community to better understand the global environment and how it is changing. The SIR-C/X-SAR data, in conjunction with aircraft and ground studies, will give scientists clearer insights into those environmental changes which are caused by nature and those changes which are induced by human activity. 
SIR-C was developed by NASA's Jet Propulsion Laboratory. X-SAR was developed by the Dornier and Alenia Spazio companies for the German space agency, Deutsche Agentur fur Raumfahrtangelegenheiten (DARA), and the Italian space agency, Agenzia Spaziale Italiana (ASI).

  4. Hands-on guide for 3D image creation for geological purposes

    NASA Astrophysics Data System (ADS)

    Frehner, Marcel; Tisato, Nicola

    2013-04-01

    Geological structures in outcrops or hand specimens are inherently three-dimensional (3D), and therefore better understood if viewed in 3D. While 3D models can easily be created, manipulated, and looked at from all sides on the computer screen (e.g., using photogrammetry or laser scanning data), 3D visualizations for publications or conference posters are much more challenging as they have to live in a 2D world (i.e., on a sheet of paper). Perspective 2D visualizations of 3D models do not fully transmit the "feeling and depth of the third dimension" to the audience; but this feeling is desirable for better examination and understanding of the structure under consideration in 3D. One of the very few possibilities to generate real 3D images that work on a 2D display is to use so-called stereoscopic images. Stereoscopic images are two images of the same object recorded from two slightly offset viewpoints. Special glasses and techniques have to be used to make sure that one image is seen only by one eye and the other image by the other eye, which together produce the "3D effect". Geoscientists are often familiar with such 3D images. For example, geomorphologists traditionally view stereoscopic orthophotos using a mirror stereoscope. Nowadays, petroleum geoscientists examine high-resolution 3D seismic data sets in special 3D visualization rooms. One method for generating and viewing a stereoscopic image that does not require a high-tech viewing device is to create a so-called anaglyph. The principle is to overlay two images saturated in red and cyan, respectively. The two images are then viewed through red-cyan stereoscopic glasses. This method is simple and cost-effective, but has some drawbacks in preserving colors accurately. A similar method is used in 3D movies, where polarized light or shuttering techniques are used to separate the left from the right image, which allows preserving the original colors. 
The advantage of red-cyan anaglyphs is their simplicity and the possibility of printing them on normal paper or projecting them with a conventional projector. Producing 3D stereoscopic images is much easier than commonly thought. Our hands-on poster provides an easy-to-use guide for producing 3D stereoscopic images. A few simple rules of thumb are presented that define how photographs of any scene or object have to be shot to produce good-looking 3D images. We use the free software Stereophotomaker (http://stereo.jpn.org/eng/stphmkr) to produce anaglyphs and provide red-cyan 3D glasses for viewing them. Our hands-on poster is easy to adapt and helps any geologist to present his or her field or hand-specimen photographs in a much more fashionable 3D way for future publications or conference posters.
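
    The red-cyan overlay described above amounts to taking the red channel from the left image and the green and blue channels from the right image. A minimal sketch (the array shapes and toy stereo pair are illustrative):

```python
import numpy as np

def make_anaglyph(left_rgb, right_rgb):
    """Red-cyan anaglyph: red channel from the left view, green and blue
    (together forming cyan) from the right view, so red-cyan glasses
    route one view to each eye."""
    anaglyph = np.empty_like(left_rgb)
    anaglyph[..., 0] = left_rgb[..., 0]      # red   <- left eye
    anaglyph[..., 1:] = right_rgb[..., 1:]   # green+blue <- right eye
    return anaglyph

# Toy stereo pair: the right view is the left view shifted by a small parallax
rng = np.random.default_rng(4)
left = rng.random((32, 48, 3))
right = np.roll(left, shift=3, axis=1)
ana = make_anaglyph(left, right)
```

    This channel mixing is the reason plain anaglyphs distort color: saturated reds and cyans in the scene end up visible to only one eye, which is the drawback the polarized and shutter techniques avoid.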

  5. New views of male pelvic anatomy: role of computer-generated 3D images.

    PubMed

    Venuti, Judith M; Imielinska, Celina; Molholt, Pat

    2004-04-01

    There is considerable controversy concerning the role of cadaveric dissection in teaching gross anatomy and the potential of using 3D computer-generated images to substitute for actual laboratory dissections. There are currently few high-quality 3D virtual models of anatomy available with which to evaluate the utility of computer-generated images. Existing 3D models are frequently of structures that are easily examined in three dimensions by removal from the cadaver, e.g., the heart, skull, and brain. We have focused on developing a 3D model of the pelvis, a region that is conceptually difficult and relatively inaccessible for student dissection, and we feel students will benefit tremendously from 3D views of the pelvic anatomy. We generated 3D models of the male pelvic anatomy from hand-segmented color Visible Human Male cryosection data, reconstructed and visualized by Columbia University's in-house 3D Vesalius™ Visualizer.(1) These 3D models depict the anatomy of the region in realistic, true-to-life color and texture. They can be used to create 3D anatomical scenes of arbitrary complexity, in which the component anatomical structures are displayed in correct 3D anatomical relationships. Moreover, a sequence of 3D scenes can be defined to simulate actual dissection. Structures can be added in a layered sequence from the bony framework to build from the "inside-out", or disassembled much like a true laboratory dissection from the "outside-in." These 3D reconstructed anatomical models can provide views of the structures from new perspectives and have the potential to improve understanding of the anatomical relationships of the pelvic region (http://www.cellbiology.lsuhsc.edu/People/Faculty/Venuti_Figures/movie_index.html). PMID:15042576

  6. Imaging topological radar for 3D imaging in cultural heritage reproduction and restoration

    NASA Astrophysics Data System (ADS)

    Poggi, Claudio; Guarneri, Massimiliano; Fornetti, Giorgio; Ferri de Collibus, Mario; De Dominicis, Luigi; Paglia, Emiliano; Ricci, Roberto

    2005-10-01

    We present the latest results obtained with our Imaging Topological Radar (ITR), a high-resolution laser scanner aimed at reconstructing 3D digital models of real targets, either single objects or complex scenes. The system, based on an amplitude-modulation ranging technique, simultaneously yields a shade-free, high-resolution, photographic-like picture and accurate range data in the form of a range image, with resolution depending mainly on the laser modulation frequency (current best performance is ~100 μm). The complete target surface is reconstructed from the sampled points using specifically developed software tools. The system has been successfully applied to scan different types of real surfaces (stone, wood, alloy, bone) and is suitable for relevant applications in different fields, ranging from industrial machining to medical diagnostics. We present some relevant examples of 3D reconstruction in the heritage field. These results were obtained during recent campaigns carried out in situ in various Italian historical and archaeological sites (S. Maria Antiqua in the Roman Forum; "Grotta dei cervi", Porto Badisco, Lecce, South Italy). The presented 3D models will be used by cultural heritage conservation authorities for restoration purposes and will be available on the Internet for remote inspection.
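
The amplitude-modulation ranging principle underlying the ITR recovers distance from the phase shift of the returned modulation envelope. A minimal sketch of the generic formula (assumed parameters; not the ITR's actual implementation):

```python
import math

C = 299_792_458.0  # speed of light, m/s

def am_range(phase_shift_rad: float, mod_freq_hz: float) -> float:
    """Distance from the phase shift of the returned modulation envelope.

    The light travels to the target and back, so the one-way range is
    d = c * phi / (4 * pi * f_mod), unambiguous within c / (2 * f_mod).
    """
    return C * phase_shift_rad / (4.0 * math.pi * mod_freq_hz)

# Example: a 10 MHz modulation and a phase shift of pi/2
d = am_range(math.pi / 2, 10e6)  # -> ~3.75 m
```

Because range resolution scales inversely with the modulation frequency, higher modulation frequencies give finer resolution, consistent with the ~100 μm figure quoted above.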

  7. High-throughput imaging: Focusing in on drug discovery in 3D.

    PubMed

    Li, Linfeng; Zhou, Qiong; Voss, Ty C; Quick, Kevin L; LaBarbera, Daniel V

    2016-03-01

    3D organotypic culture models such as organoids and multicellular tumor spheroids (MCTS) are becoming more widely used for drug discovery and toxicology screening. As a result, 3D culture technologies adapted for high-throughput screening formats are prevalent. While a multitude of assays have been reported and validated for high-throughput imaging (HTI) and high-content screening (HCS) for novel drug discovery and toxicology, limited HTI/HCS with large compound libraries have been reported. Nonetheless, 3D HTI instrumentation technology is advancing and this technology is now on the verge of allowing for 3D HCS of thousands of samples. This review focuses on the state-of-the-art high-throughput imaging systems, including hardware and software, and recent literature examples of 3D organotypic culture models employing this technology for drug discovery and toxicology screening. PMID:26608110

  8. Reducing Non-Uniqueness in Satellite Gravity Inversion using 3D Object Oriented Image Analysis Techniques

    NASA Astrophysics Data System (ADS)

    Fadel, I.; van der Meijde, M.; Kerle, N.

    2013-12-01

    Non-uniqueness in satellite gravity interpretation has usually been reduced by using a priori information from various sources, e.g. seismic tomography models. The reduction in non-uniqueness has been based on velocity-density conversion formulas or on user interpretation of 3D subsurface structures (objects) in seismic tomography models. However, these processes introduce additional uncertainty, through the conversion relations' dependency on other physical parameters such as temperature and pressure, or through bias in the interpretation due to user choices and experience. In this research, a new methodology is introduced to extract 3D subsurface structures from 3D geophysical data using a state-of-the-art 3D Object Oriented Image Analysis (OOA) technique. 3D OOA is tested using a set of synthetic models that simulate the real situation in the study area of this research. Then, 3D OOA is used to extract 3D subsurface objects from a real 3D seismic tomography model. The extracted 3D objects are used to reconstruct a forward model, and its response is compared with the measured satellite gravity. Finally, the result of the forward modelling, based on the extracted 3D objects, is used to constrain the inversion of the satellite gravity data. Through this work, a new object-based approach is introduced to interpret and extract 3D subsurface objects from 3D geophysical data, which can be used to constrain modelling and inversion of potential field data using 3D subsurface structures extracted by other methods. In summary, a new approach is introduced to constrain the inversion of satellite gravity measurements and enhance interpretation capabilities.

  9. An image encryption algorithm based on 3D cellular automata and chaotic maps

    NASA Astrophysics Data System (ADS)

    Del Rey, A. Martín; Sánchez, G. Rodríguez

    2015-05-01

    A novel encryption algorithm to cipher digital images is presented in this work. The digital image is rendered into a three-dimensional (3D) lattice and the protocol consists of two phases: a confusion phase, where 24 chaotic cat maps are applied, and a diffusion phase, where a 3D cellular automaton is evolved. The encryption method is shown to be secure against the most important cryptanalytic attacks.
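
As an illustration of the confusion idea, here is a minimal 2D Arnold cat map pixel permutation; the paper itself applies twenty-four chaotic cat maps over a 3D lattice, so this is only a hedged sketch of the map family, not the proposed cipher:

```python
import numpy as np

def arnold_cat(img: np.ndarray, iterations: int = 1) -> np.ndarray:
    """Permute pixels of a square image with the Arnold cat map
    (x, y) -> (x + y, x + 2y) mod N. The map is area-preserving and
    periodic, so iterating the inverse map restores the image."""
    n = img.shape[0]
    assert img.shape[0] == img.shape[1], "cat map needs a square image"
    out = img
    for _ in range(iterations):
        x, y = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
        nx = (x + y) % n
        ny = (x + 2 * y) % n
        scrambled = np.empty_like(out)
        scrambled[nx, ny] = out[x, y]  # bijective pixel shuffle
        out = scrambled
    return out

# The map is invertible: (x, y) -> (2x - y, -x + y) mod N undoes one step.
```

A confusion phase scrambles pixel positions only; the histogram of the image is unchanged, which is why a diffusion phase (here, the cellular automaton) is still needed.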

  10. [Content-based automatic retinal image recognition and retrieval system].

    PubMed

    Zhang, Jiumei; Du, Jianjun; Cheng, Xia; Cao, Hongliang

    2013-04-01

    This paper aims to realize a prototype system for automatic classification and retrieval of retinal images. Using content-based image retrieval (CBIR) technology, a method is proposed to represent retinal characteristics by combining the fundus image color (gray) histogram with bright- and dark-region features and other local comprehensive information. The method uses kernel principal component analysis (KPCA) to extract nonlinear features and reduce dimensionality. For similarity measurement, it puts forward a support vector machine (SVM) measure based on KPCA-weighted distance. Testing 300 randomly chosen samples with this prototype system, 32 images were wrongly retrieved, giving a retrieval rate of 89.33%. This shows that the system's identification rate for retinal images is high. PMID:23858770

  11. Double depth-enhanced 3D integral imaging in projection-type system without diffuser

    NASA Astrophysics Data System (ADS)

    Zhang, Lei; Jiao, Xiao-xue; Sun, Yu; Xie, Yan; Liu, Shao-peng

    2015-05-01

    Integral imaging is a three-dimensional (3D) display technology that requires no additional viewing equipment. A new system is proposed in this paper which combines the elemental images of real images in real mode (RIRM) with those of virtual images in real mode (VIRM). The real images in real mode are the same as conventional integral images. The virtual images in real mode are obtained by changing the coordinates of the corresponding points in the elemental images, which can be reconstructed by the lens array in virtual space. To reduce the spot size of the reconstructed images, the diffuser used in conventional integral imaging is omitted in the proposed method; the spot size is then nearly 1/20 of that in the conventional system. An optical integral imaging system is constructed to confirm that the proposed method opens a new way for the application of passive 3D display technology.

  12. Real Time Quantitative 3-D Imaging of Diffusion Flame Species

    NASA Technical Reports Server (NTRS)

    Kane, Daniel J.; Silver, Joel A.

    1997-01-01

    A low-gravity environment, in space or ground-based facilities such as drop towers, provides a unique setting for study of combustion mechanisms. Understanding the physical phenomena controlling the ignition and spread of flames in microgravity has importance for space safety as well as better characterization of dynamical and chemical combustion processes which are normally masked by buoyancy and other gravity-related effects. Even the use of so-called 'limiting cases' or the construction of 1-D or 2-D models and experiments fails to make the analysis of combustion simultaneously simple and accurate. Ideally, to bridge the gap between chemistry and fluid mechanics in microgravity combustion, species concentrations and temperature profiles are needed throughout the flame. However, restrictions associated with performing measurements in reduced gravity, especially size and weight considerations, have generally limited microgravity combustion studies to the capture of flame emissions on film or video, laser Schlieren imaging, and (intrusive) temperature measurements using thermocouples. Given the development of detailed theoretical models, more sophisticated studies are needed to provide the kind of quantitative data necessary to characterize the properties of microgravity combustion processes as well as provide accurate feedback to improve the predictive capabilities of the computational models. While there have been a myriad of fluid mechanical visualization studies in microgravity combustion, little experimental work has been completed to obtain reactant and product concentrations within a microgravity flame. This is largely because traditional sampling methods (quenching microprobes with GC and/or mass spectrometric analysis) are too heavy, slow, and cumbersome for microgravity experiments. Non-intrusive optical spectroscopic techniques have, until now, also required excessively bulky, power-hungry equipment.
However, with the advent of near-IR diode lasers, the possibility now exists to obtain reactant and product concentrations and temperatures non-intrusively in microgravity combustion studies. Over the past ten years, Southwest Sciences has focused its research on the high sensitivity, quantitative detection of gas phase species using diode lasers. Our research approach combines three innovations in an experimental system resulting in a new capability for nonintrusive measurement of major combustion species. FM spectroscopy or high frequency Wavelength Modulation Spectroscopy (WMS) have recently been applied to sensitive absorption measurements at Southwest Sciences and in other laboratories using GaAlAs or InGaAsP diode lasers in the visible or near-infrared as well as lead-salt lasers in the mid-infrared spectral region. Because these lasers exhibit essentially no source noise at the high detection frequencies employed with this technique, the achievement of sensitivity approaching the detector shot noise limit is possible.

  13. Orthogonal moments for determining correspondence between vessel bifurcations for retinal image registration.

    PubMed

    Patankar, Sanika S; Kulkarni, Jayant V

    2015-05-01

    Retinal image registration is a necessary step in the diagnosis and monitoring of diabetic retinopathy (DR), one of the leading causes of blindness. Long-term diabetes affects the retinal blood vessels and capillaries, eventually causing blindness. This progressive damage to the retina, and the subsequent blindness, can be prevented by periodic retinal screening. The extent of damage caused by DR can be assessed by comparing retinal images captured during periodic retinal screenings. During image acquisition at the time of periodic screenings, translation, rotation and scale (TRS) differences are introduced into the retinal images. Therefore retinal image registration is an essential step in an automated system for screening, diagnosis, treatment and evaluation of DR. This paper presents an algorithm for registration of retinal images using orthogonal moment invariants as features for determining the correspondence between dominant points (vessel bifurcations) in the reference and test retinal images. As orthogonal moments are invariant to TRS, moment-invariant features around a vessel bifurcation are unaltered by TRS and can be used to determine the correspondence between reference and test retinal images. The vessel bifurcation points are located in segmented, thinned (mono-pixel vessel width) retinal images and labeled in the corresponding grayscale retinal images. The correspondence between vessel bifurcations in the reference and test retinal images is established based on moment-invariant features. Further, the TRS of the test retinal image with respect to the reference retinal image is estimated using a similarity transformation. The test retinal image is aligned with the reference retinal image using the estimated registration parameters. The accuracy of registration is evaluated in terms of the mean error and standard deviation of the labeled vessel bifurcation points in the aligned images.
The experiments were carried out on the DRIVE, STARE and VARIA databases and on a database provided by a local government hospital in Pune, India. The experimental results demonstrate the effectiveness of the proposed algorithm for registration of retinal images. PMID:25837489
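
Once bifurcation correspondences are fixed, the TRS parameters can be recovered by a least-squares similarity fit. A generic Umeyama-style sketch (not the authors' code; the point arrays are hypothetical matched bifurcation coordinates):

```python
import numpy as np

def fit_similarity(src: np.ndarray, dst: np.ndarray):
    """Least-squares scale s, rotation R, translation t with dst ~ s*R*src + t.
    src, dst: (N, 2) arrays of matched bifurcation coordinates."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    src_c, dst_c = src - mu_s, dst - mu_d
    cov = dst_c.T @ src_c / len(src)          # cross-covariance
    U, S, Vt = np.linalg.svd(cov)
    d = np.sign(np.linalg.det(U @ Vt))        # guard against reflection
    D = np.diag([1.0, d])
    R = U @ D @ Vt
    s = np.trace(np.diag(S) @ D) / src_c.var(axis=0).sum()
    t = mu_d - s * R @ mu_s
    return s, R, t
```

With three or more non-collinear correspondences the fit is overdetermined, so outlier matches can be rejected by their residuals before re-fitting.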

  14. 3-D scalable medical image compression with optimized volume of interest coding.

    PubMed

    Sanchez, Victor; Abugharbieh, Rafeef; Nasiopoulos, Panos

    2010-10-01

    We present a novel 3-D scalable compression method for medical images with optimized volume of interest (VOI) coding. The method is presented within the framework of interactive telemedicine applications, where different remote clients may access the compressed 3-D medical imaging data stored on a central server and request the transmission of different VOIs from an initial lossy to a final lossless representation. The method employs the 3-D integer wavelet transform and a modified EBCOT with 3-D contexts to create a scalable bit-stream. Optimized VOI coding is attained by an optimization technique that reorders the output bit-stream after encoding, so that those bits belonging to a VOI are decoded at the highest quality possible at any bit-rate, while allowing for the decoding of background information with peripherally increasing quality around the VOI. The bit-stream reordering procedure is based on a weighting model that incorporates the position of the VOI and the mean energy of the wavelet coefficients. The background information with peripherally increasing quality around the VOI allows for placement of the VOI into the context of the 3-D image. Performance evaluations based on real 3-D medical imaging data showed that the proposed method achieves a higher reconstruction quality, in terms of the peak signal-to-noise ratio, than that achieved by 3D-JPEG2000 with VOI coding, when using the MAXSHIFT and general scaling-based methods. PMID:20562038
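
The reordering idea can be sketched generically: assign each coded block a weight that combines proximity to the VOI with mean coefficient energy, then emit blocks in descending weight. The weighting function below is illustrative only, not the paper's exact model:

```python
import numpy as np

def reorder_blocks(blocks, voi_center, alpha=0.7):
    """Order coded blocks so VOI bits come first and background bits follow
    with weight decaying with distance from the VOI (illustrative weighting).

    blocks: list of dicts with 'pos' (3D block center) and 'energy'
            (normalized mean wavelet-coefficient energy)."""
    center = np.asarray(voi_center, float)
    def weight(b):
        dist = np.linalg.norm(np.asarray(b["pos"], float) - center)
        proximity = 1.0 / (1.0 + dist)          # peaks inside the VOI
        return alpha * proximity + (1 - alpha) * b["energy"]
    return sorted(blocks, key=weight, reverse=True)
```

Decoding a prefix of such a stream then yields the VOI at the highest quality for the bit budget, with quality tapering peripherally, which matches the behavior the abstract describes.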

  15. Dicom Color Medical Image Compression using 3D-SPIHT for Pacs Application.

    PubMed

    Kesavamurthy, T; Rani, Subha

    2008-06-01

    This work presents an application of the 3D-SPIHT algorithm to color volumetric DICOM medical images using 3D wavelet decomposition and a 3D spatial dependence tree. The wavelet decomposition is accomplished with biorthogonal 9/7 filters. 3D-SPIHT is the modern-day benchmark for three-dimensional image compression. The three-dimensional coding is based on the observation that the sequence of images is contiguous in the temporal axis and there is no motion between slices; therefore, the 3D discrete wavelet transform can fully exploit the inter-slice correlations. The set-partitioning technique involves progressive coding of the wavelet coefficients. 3D-SPIHT is implemented and its rate-distortion (peak signal-to-noise ratio (PSNR) vs. bit rate) performance is presented for volumetric medical datasets using the biorthogonal 9/7 filters. The results are compared with previous results for the JPEG 2000 standard. They show that the 3D-SPIHT method exploits the color-space relationships while maintaining the full embeddedness required for color image sequence compression, and gives better performance than JPEG 2000 in terms of PSNR and compression ratio. The results suggest an effective practical implementation for PACS applications. PMID:23675076
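
Rate-distortion performance in such studies is reported as PSNR versus bit rate; the PSNR itself has a standard definition, sketched here for reference:

```python
import numpy as np

def psnr(original: np.ndarray, reconstructed: np.ndarray, peak: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB: 10 * log10(peak^2 / MSE)."""
    mse = np.mean((original.astype(np.float64) - reconstructed.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(peak**2 / mse)
```

For 8-bit data the peak is 255; higher bit depths (common in DICOM) use the corresponding maximum value.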

  16. Integration of virtual and real scenes within an integral 3D imaging environment

    NASA Astrophysics Data System (ADS)

    Ren, Jinsong; Aggoun, Amar; McCormick, Malcolm

    2002-11-01

    The Imaging Technologies group at De Montfort University has developed an integral 3D imaging system, which is seen as the most likely vehicle for 3D television because it avoids the adverse psychological effects of stereoscopic viewing. To create truly engaging three-dimensional television programs, a virtual studio is required that generates, edits and integrates 3D content involving both virtual and real scenes. The paper presents, for the first time, the procedures, factors and methods for integrating computer-generated virtual scenes with real objects captured using the 3D integral imaging camera system. The method for computer generation of 3D integral images, in which the lens array is modelled instead of the physical camera, is described. In the model, each micro-lens that captures a different elemental image of the virtual scene is treated as an extended pinhole camera. An integration process named integrated rendering is illustrated. The discussion then focuses on depth extraction from captured integral 3D images. The depth calculation method based on disparity, and the multiple-baseline method used to improve the precision of depth estimation, are also presented. The concept of colour SSD, and its further improvement in precision, is proposed and verified.
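
The depth calculation from disparity follows the standard pinhole-stereo relation Z = fB/d, and the multiple-baseline idea combines several such estimates. A minimal sketch with assumed units (not the authors' implementation):

```python
def depth_from_disparity(disparity_px: float, focal_px: float, baseline_mm: float) -> float:
    """Pinhole-stereo depth Z = f * B / d (focal length in pixels,
    baseline in mm, so the result is in mm)."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_mm / disparity_px

def multi_baseline_depth(measurements, focal_px: float) -> float:
    """Multiple-baseline refinement, here a simple average of the
    per-baseline depth estimates; measurements: (disparity_px, baseline_mm) pairs."""
    depths = [depth_from_disparity(d, focal_px, b) for d, b in measurements]
    return sum(depths) / len(depths)
```

Longer baselines give larger disparities and hence finer depth resolution, which is the motivation for combining several baselines.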

  17. 3D segmentation and image annotation for quantitative diagnosis in lung CT images with pulmonary lesions

    NASA Astrophysics Data System (ADS)

    Li, Suo; Zhu, Yanjie; Sun, Jianyong; Zhang, Jianguo

    2013-03-01

    Pulmonary nodules and ground-glass opacities are highly significant findings in high-resolution computed tomography (HRCT) of patients with pulmonary lesions. The appearances of pulmonary nodules and ground-glass opacities are related to different lung diseases. According to the corresponding characteristics of a lesion, pertinent segmentation methods and quantitative analysis are helpful for controlling and treating diseases at an earlier and potentially more curable stage. Currently, most studies have focused on two-dimensional quantitative analysis of these kinds of diseases. Compared to two-dimensional images, three-dimensional quantitative analysis can take full advantage of the isotropic image data acquired by thin-slice HRCT and has better quantitative precision for clinical diagnosis. This work presents a computer-aided diagnosis component to segment 3D disease areas of nodules and ground-glass opacities in lung CT images, and uses AIML (Annotation and Image Markup Language) to annotate the segmented 3D pulmonary lesions with quantitative measurement information, which may provide more features and information to radiologists in clinical diagnosis.
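
A toy version of 3D lesion segmentation can be sketched as thresholding followed by 3D connected-component labeling with a minimum-volume filter (a generic illustration using scipy, not the authors' pipeline; the threshold and size values are assumptions):

```python
import numpy as np
from scipy import ndimage

def segment_lesions_3d(volume_hu, threshold=-400, min_voxels=50):
    """Threshold HU values, label 26-connected components, and keep those
    above a minimum volume. Returns the label map and per-lesion voxel
    counts (a sketch, not a clinical segmentation method)."""
    mask = volume_hu > threshold
    structure = np.ones((3, 3, 3), dtype=bool)        # 26-connectivity
    labels, n = ndimage.label(mask, structure=structure)
    counts = np.bincount(labels.ravel())[1:]          # skip background 0
    keep = {i + 1 for i, c in enumerate(counts) if c >= min_voxels}
    labels[~np.isin(labels, list(keep))] = 0          # drop small components
    return labels, {i: int(counts[i - 1]) for i in keep}
```

With isotropic voxels, the retained voxel counts convert directly into lesion volumes, which is the kind of 3D quantitative measurement the abstract has in mind.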

  18. 3D reconstructions with pixel-based images are made possible by digitally clearing plant and animal tissue

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Reconstruction of 3D images from a series of 2D images has been restricted by the limited capacity to decrease the opacity of surrounding tissue. Commercial software that allows color-keying and manipulation of 2D images in true 3D space allowed us to produce 3D reconstructions from pixel based imag...

  19. Ridge-based retinal image registration algorithm involving OCT fundus images

    NASA Astrophysics Data System (ADS)

    Li, Ying; Gregori, Giovanni; Knighton, Robert W.; Lujan, Brandon J.; Rosenfeld, Philip J.; Lam, Byron L.

    2011-03-01

    This paper proposes an algorithm for retinal image registration involving OCT fundus images (OFIs). The first application of the algorithm is to register OFIs with color fundus photographs; such registration between multimodal retinal images can help correlate features across imaging modalities, which is important for both clinical and research purposes. The second application is to perform a montage of several OFIs, which allows us to construct 3D OCT images over a large field of view out of separate OCT datasets. We use blood vessel ridges as registration features. A brute-force search and an Iterative Closest Point (ICP) algorithm are employed for image pair registration. Global alignment minimizing the distance between matching pixel pairs is used to obtain the montage of OFIs. The quality of the OFIs is the main limiting factor for the registration algorithm. In the first experiment, the effect of manual OFI enhancement on registration was evaluated for the affine model on 11 image pairs from diseased eyes. The average root mean square error (RMSE) decreased from 58 μm to 40 μm. This indicates that the registration algorithm is robust to manual enhancement. In the second experiment, on the montage of OFIs, the algorithm was tested on 6 sets from healthy eyes and 6 sets from diseased eyes, each set having 8 partially overlapping SD-OCT images. Visual evaluation showed that the montage performance was acceptable for normal cases but poor for abnormal cases, due to low visibility of the blood vessels. The average RMSE for a typical montage case from a healthy eye is 2.3 pixels (69 μm).
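
The ICP step used for pair registration can be sketched generically for 2D ridge points: alternate nearest-neighbor matching with a closed-form rigid fit (a textbook variant, not the authors' exact implementation):

```python
import numpy as np

def icp_2d(src, dst, iterations=20):
    """Basic rigid ICP: match each src point to its nearest dst point,
    fit R, t by SVD (Kabsch), apply, and repeat.
    src: (N, 2) and dst: (M, 2) point arrays; returns the accumulated R, t."""
    cur = src.copy()
    R_total, t_total = np.eye(2), np.zeros(2)
    for _ in range(iterations):
        # brute-force nearest neighbors (fine for small point sets)
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        matched = dst[d2.argmin(axis=1)]
        mu_c, mu_m = cur.mean(0), matched.mean(0)
        H = (cur - mu_c).T @ (matched - mu_m)
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # no reflection
        R = Vt.T @ D @ U.T
        t = mu_m - R @ mu_c
        cur = cur @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total
```

ICP converges only locally, which is why the paper pairs it with a brute-force search to supply a reasonable initial alignment.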

  20. The application of camera calibration in range-gated 3D imaging technology

    NASA Astrophysics Data System (ADS)

    Liu, Xiao-quan; Wang, Xian-wei; Zhou, Yan

    2013-09-01

    Range-gated laser imaging technology was proposed in 1966 by L. F. Gillespie of the U.S. Army Night Vision Laboratory (NVL). Using a pulsed laser and an intensified charge-coupled device (ICCD) as light source and detector, respectively, range-gated laser imaging can realize space-slice imaging while suppressing atmospheric backscatter, and in turn detect the target effectively, by controlling the delay between the laser pulse and the strobe. Owing to constraints on the development of key components such as narrow-pulse lasers and gated imaging devices, research progressed slowly over the following decades. From the beginning of this century, as the hardware technology matured, this technology has developed rapidly in fields such as night vision, underwater imaging, biomedical imaging and three-dimensional imaging, especially range-gated three-dimensional (3-D) laser imaging, whose purpose is access to target spatial information. 3-D reconstruction is the process of restoring the visible surface geometry of 3-D objects from two-dimensional (2-D) images. Range-gated laser imaging can achieve gated imaging of a slice of space to form a slice image, and in turn provide the distance information corresponding to that slice image. But to invert this into 3-D spatial information, we need to know the imaging field of view of the system, that is, its focal length. Then, based on the distance information of the space slice, the spatial information of each unit of space corresponding to each pixel can be inverted. Camera calibration is an indispensable step in 3-D reconstruction, comprising the estimation of the camera's internal and external parameters. In order to meet the technical requirements of range-gated 3-D imaging, this paper studies the calibration of a zoom lens system.
After a comprehensive summary of camera calibration techniques, a classic line-based calibration method is selected. A one-to-one correspondence between the visual field and the focal length of the system is obtained, offering effective field-of-view information for matching the imaging field and the illumination field in range-gated 3-D imaging. On the basis of the experimental results, combined with depth-of-field theory, the application of camera calibration in range-gated 3-D imaging technology is further studied.

  1. Applications of Panoramic Images: from 720° Panorama to Interior 3d Models of Augmented Reality

    NASA Astrophysics Data System (ADS)

    Lee, I.-C.; Tsai, F.

    2015-05-01

    A series of panoramic images is usually used to generate a 720° panorama image. Although panoramic images are typically used for establishing tour guiding systems, in this research we demonstrate the potential of using panoramic images acquired from multiple sites to create not only a 720° panorama, but also three-dimensional (3D) point clouds and 3D indoor models. Since 3D modeling is one of the goals of this research, the locations of the panoramic sites needed to be carefully planned in order to maintain a robust result for close-range photogrammetry. After the images are acquired, they are processed into 720° panoramas, and these panoramas can be used directly in panorama guiding systems or other applications. In addition to these straightforward applications, interior orientation parameters can also be estimated while generating the 720° panorama. These parameters are the focal length, the principal point, and the lens radial distortion. The panoramic images can then be processed with close-range photogrammetry procedures to extract the exterior orientation parameters and generate 3D point clouds. In this research, VisualSFM, a structure-from-motion software package, is used to estimate the exterior orientation, and the CMVS toolkit is used to generate 3D point clouds. Next, the 3D point clouds are used as references to create building interior models. Trimble SketchUp was used to build the model, with the 3D point cloud assisting in determining the locations of building objects via a plane-finding procedure. In the texturing process, the panorama images are used as the data source for creating model textures. This 3D indoor model was used as an Augmented Reality model replacing the guide map or floor plan commonly used in an on-line touring guide system. The 3D indoor model generation procedure has been utilized in two research projects: a cultural heritage site at Kinmen, and the Taipei Main Station pedestrian zone guidance and navigation system.
The results presented in this paper demonstrate the potential of using panoramic images to generate 3D point clouds and 3D models. However, it is currently a manual and labor-intensive process, and research is being carried out to increase the degree of automation of these procedures.

  2. Reconstructing photorealistic 3D models from image sequence using domain decomposition method

    NASA Astrophysics Data System (ADS)

    Xiong, Hanwei; Pan, Ming; Zhang, Xiangwei

    2009-11-01

    In the fields of industrial design, artistic design and heritage conservation, physical objects are usually digitalized by reverse engineering through some 3D scanning method. Structured light and photogrammetry are two main methods of acquiring 3D information, and both are expensive. Even when these expensive instruments are used, photorealistic 3D models are seldom obtained. In this paper, a new method to reconstruct photorealistic 3D models using a single camera is proposed. A square plate glued with coded marks is used to hold the object, and a sequence of about 20 images is taken. From the coded marks, the images are calibrated, and a snake algorithm is used to segment the object from the background. A rough 3D model is obtained using a shape-from-silhouettes algorithm. The silhouettes are decomposed into a combination of convex curves, which are used to partition the rough 3D model into convex mesh patches. For each patch, the multi-view photo-consistency constraints and smoothness regularizations are expressed as a finite element formulation, which can be resolved locally, with information exchanged along the patch boundaries. The rough model is deformed into a fine 3D model through this domain decomposition finite element method. Textures are assigned to each element mesh, and a photorealistic 3D model is finally obtained. A toy pig is used to verify the algorithm, and the result is encouraging.

  3. 3D reconstruction of SEM images by use of optical photogrammetry software.

    PubMed

    Eulitz, Mona; Reiss, Gebhard

    2015-08-01

    Reconstruction of the three-dimensional (3D) surface of an object to be examined is widely used for structure analysis in science, and many biological questions require information about true 3D structure. For Scanning Electron Microscopy (SEM) there has been no efficient non-destructive solution for reconstruction of the surface morphology to date. The well-known method of recording stereo pair images generates a 3D stereoscopic reconstruction of a section, but not of the complete sample surface. We present a simple and non-destructive method of 3D surface reconstruction from SEM samples based on the principles of optical close-range photogrammetry, in which a series of overlapping photos is used to generate a 3D model of the surface of an object. We adapted this method to the special SEM requirements: instead of moving a detector around the object, the object itself was rotated. A series of overlapping photos was stitched and converted into a 3D model using software commonly used for optical photogrammetry. A rabbit kidney glomerulus was used to demonstrate the workflow of this adaptation. The reconstruction produced a realistic and high-resolution 3D mesh model of the glomerular surface. The study showed that SEM micrographs are suitable for 3D reconstruction by optical photogrammetry. This new approach is a simple and useful method of 3D surface reconstruction, suitable for various applications in research and teaching. PMID:26073969

  4. Digital holography particle image velocimetry for the measurement of 3D t-3c flows

    NASA Astrophysics Data System (ADS)

    Shen, Gongxin; Wei, Runjie

    2005-10-01

    In this paper, a digital in-line holographic recording and reconstruction system was set up and used for particle image velocimetry in 3D t-3c flow measurements (three-component (3c) velocity vector fields measured over a three-dimensional (3D) space with time history (t)), making up a new full-flow-field experimental technique: digital holographic particle image velocimetry (DHPIV). The traditional holographic film is replaced by a CCD chip that records the interference fringes instantaneously and directly, without darkroom processing, and virtual image slices at different positions are reconstructed by computation from the digital holographic image using the Fresnel-Kirchhoff integral method. A complex-field signal filter (an analyzing image calculated from the intensity and phase of the real and imaginary parts in the fast Fourier transform (FFT)) is applied in image reconstruction to achieve a thin focal depth of the image field, which strongly affects the resolution of the vertical velocity component. Using frame-straddling CCD techniques, the 3c velocity vector is computed by 3D cross-correlation, matching spatial interrogation blocks across the reconstructed image slices with the digital complex-field signal filter. The 3D-3c velocity field (about 20 000 vectors), 3D streamline and 3D vorticity fields, and time-evolution movies (30 fields/s) for the 3D t-3c flows were then obtained and displayed using this DHPIV method.
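
Numerical refocusing of a digital hologram to a given slice can be sketched with the angular spectrum propagator (a common FFT-based alternative to the Fresnel-Kirchhoff integral used in the paper; all parameters are assumptions):

```python
import numpy as np

def angular_spectrum(field, wavelength, dx, z):
    """Propagate a complex field a distance z by the angular spectrum method,
    as used to refocus digital in-line holograms slice by slice.
    field: (N, N) complex array; dx: pixel pitch; all lengths in meters."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx, indexing="ij")
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))   # drop evanescent waves
    H = np.exp(1j * kz * z)                          # transfer function
    return np.fft.ifft2(np.fft.fft2(field) * H)
```

Evaluating the hologram at a stack of z values yields the reconstructed image slices from which the interrogation blocks are cross-correlated.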

  5. Mixed reality orthognathic surgical simulation by entity model manipulation and 3D-image display

    NASA Astrophysics Data System (ADS)

    Shimonagayoshi, Tatsunari; Aoki, Yoshimitsu; Fushima, Kenji; Kobayashi, Masaru

    2005-12-01

    In orthognathic surgery, the framing of a 3D surgical plan that considers the balance between the front and back positions and the symmetry of the jawbone, as well as the dental occlusion of the teeth, is essential. In this study, a support system for orthognathic surgery has been developed to visualize the changes in the mandible and the occlusal condition and to determine the optimum position in mandibular osteotomy. By integrating the physical tooth model, which is manipulated to determine the optimum occlusal position, with the 3D-CT skeletal images (the 3D image display portion) displayed simultaneously in real time, the mandibular position and posture can be determined while taking into account the improvement of skeletal morphology and occlusal condition. The realistic operation of the physical model combined with the virtual 3D image display enabled the construction of a surgical simulation system that involves augmented reality.

  6. Evaluation of image compression for computer-aided diagnosis of breast tumors in 3D sonography

    NASA Astrophysics Data System (ADS)

    Chen, We-Min; Huang, Yu-Len; Tao, Chi-Chuan; Chen, Dar-Ren; Moon, Woo-Kyung

    2006-03-01

    Medical imaging examinations form the basis for physicians diagnosing diseases, as evidenced by the increasing use of digital medical images in picture archiving and communication systems (PACS). However, with growing medical image databases and the rapid growth of patient case reports, PACS requires image compression to accelerate image transmission and conserve disk space, reducing implementation costs. For this purpose, JPEG and JPEG2000 have been accepted as valid formats for Digital Imaging and Communications in Medicine (DICOM). High compression ratios are considered useful for medical imagery. This study therefore evaluates the compression ratios of the JPEG and JPEG2000 standards for computer-aided diagnosis (CAD) of breast tumors in 3-D medical ultrasound (US) images. The 3-D US data sets were compressed at various compression ratios using the two standards. The reconstructed data sets were then diagnosed by a previously proposed CAD system, and diagnostic accuracy was measured using receiver operating characteristic (ROC) analysis; that is, ROC curves were used to compare the diagnostic performance on the reconstructed images. The analysis provides a comparison of JPEG and JPEG2000 compression ratios for 3-D US images, and the results suggest feasible bit rates for 3-D breast US images.
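    The ROC comparison at the heart of this evaluation can be illustrated with a small rank-based AUC computation (a sketch: ties are not rank-averaged, and the label/score arrays are hypothetical CAD outputs, not the study's data).

```python
import numpy as np

def roc_auc(labels, scores):
    """Area under the ROC curve via the rank-sum (Mann-Whitney) identity:
    AUC = P(score of a random positive > score of a random negative)."""
    labels = np.asarray(labels, bool)
    order = np.argsort(scores)
    ranks = np.empty(len(order))
    ranks[order] = np.arange(1, len(order) + 1)
    n_pos, n_neg = labels.sum(), (~labels).sum()
    return (ranks[labels].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)
```

Computing AUC on the CAD scores for each compression ratio gives the scalar summary that the ROC curves are compared on.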

  7. Development of goniophotometric imaging system for recording reflectance spectra of 3D objects

    NASA Astrophysics Data System (ADS)

    Tonsho, Kazutaka; Akao, Y.; Tsumura, Norimichi; Miyake, Yoichi

    2001-12-01

    In recent years, there has been growing demand for systems for 3D capture of archives in museums and galleries. In visualizing a 3D object, it is important to reproduce both color and glossiness accurately. Our final goal is to construct digital archival systems for museums and a virtual museum on the Internet via the World Wide Web. To achieve this goal, we have developed a gonio-photometric imaging system using a highly accurate multi-spectral camera and a 3D digitizer. In this paper, the gonio-photometric imaging method is introduced for recording 3D objects. Five-band images of the object are taken under seven different illumination angles. The five-band image sequences are then analyzed on the basis of both the dichromatic reflection model and the Phong model to extract the gonio-photometric properties of the object. Images of the 3D object under illuminants with arbitrary spectral radiant distributions, illumination angles, and viewpoints are rendered using OpenGL together with the 3D shape and gonio-photometric properties.
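    The Phong term used in the analysis can be sketched as a scalar shading function. This is a minimal illustration; the function name and coefficients are assumptions, and the dichromatic diffuse/specular split is reduced to two scalar weights.

```python
import numpy as np

def phong_intensity(normal, light_dir, view_dir, kd, ks, shininess):
    """Scalar reflected intensity under the Phong model: a Lambertian
    diffuse term plus a specular lobe around the mirror direction."""
    n = normal / np.linalg.norm(normal)
    l = light_dir / np.linalg.norm(light_dir)
    v = view_dir / np.linalg.norm(view_dir)
    diffuse = max(n @ l, 0.0)
    r = 2.0 * (n @ l) * n - l          # mirror reflection of the light
    specular = max(r @ v, 0.0) ** shininess
    return kd * diffuse + ks * specular
```

Fitting kd, ks, and shininess per pixel across the seven illumination angles is what separates the matte (body) and glossy (interface) reflection components.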

  8. 3D and 4D magnetic susceptibility tomography based on complex MR images

    DOEpatents

    Chen, Zikuan; Calhoun, Vince D

    2014-11-11

    Magnetic susceptibility is the physical property underlying T2*-weighted magnetic resonance imaging (T2*MRI). The invention relates to methods for reconstructing the internal distribution (3D map) of magnetic susceptibility values, χ(x,y,z), of an object from 3D T2*MRI phase images, using Computed Inverse Magnetic Resonance Imaging (CIMRI) tomography. The CIMRI technique solves the inverse problem of the 3D convolution by executing a 3D total variation (TV) regularized iterative deconvolution scheme, using a split Bregman iteration algorithm. The reconstruction of χ(x,y,z) can be designed for low-pass, band-pass, and high-pass features by using a convolution kernel modified from the standard dipole kernel. Multiple reconstructions can be carried out in parallel, and averaging them suppresses noise. 4D dynamic magnetic susceptibility tomography can be implemented by reconstructing a 3D susceptibility volume from the 3D phase volume at each snapshot time via 3D CIMRI magnetic susceptibility tomography.
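    The 3D convolution whose inverse CIMRI solves uses the standard dipole kernel; a minimal forward-model sketch in k-space is shown below. Unit conventions, scaling, and function names are assumptions; only the kernel form D(k) = 1/3 - kz²/|k|² is standard.

```python
import numpy as np

def dipole_kernel(shape):
    """Standard k-space dipole kernel D(k) = 1/3 - kz^2 / |k|^2
    relating a susceptibility map to the measured field/phase."""
    kx, ky, kz = np.meshgrid(*(np.fft.fftfreq(n) for n in shape),
                             indexing="ij")
    k2 = kx**2 + ky**2 + kz**2
    with np.errstate(invalid="ignore", divide="ignore"):
        d = 1.0 / 3.0 - kz**2 / k2
    d[0, 0, 0] = 0.0   # undefined at k = 0; conventionally set to zero
    return d

def chi_to_field(chi):
    """Forward model of the T2* phase: field = IFFT( D(k) * FFT(chi) )."""
    return np.real(np.fft.ifftn(dipole_kernel(chi.shape) * np.fft.fftn(chi)))
```

CIMRI inverts this mapping; because D(k) vanishes on a conical surface, plain division is ill-posed, which is why the TV-regularized split Bregman iteration is needed.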

  9. A high resolution and high speed 3D imaging system and its application on ATR

    NASA Astrophysics Data System (ADS)

    Lu, Thomas T.; Chao, Tien-Hsin

    2006-04-01

    The paper presents an advanced 3D imaging system based on a combination of stereo vision and light projection methods. A single digital camera takes only one shot of the object, from which the 3D model is reconstructed. Stereo vision is achieved with a prism-and-mirror setup that splits the views and combines them side by side in the camera. The advantages of this setup are its simple system architecture, easy synchronization, fast 3D imaging speed, and high accuracy. The 3D imaging algorithms and potential applications are discussed. For ATR applications, it is critically important to extract maximum information about potential targets and to separate the targets from background and clutter noise. The added dimension of a 3D model provides additional features, such as the surface profile and range information of the target. It is capable of removing false shadows from camouflage and revealing the 3D profile of the object. It also provides arbitrary viewing angles and distances for training the filter bank for invariant ATR. The system architecture can be scaled to handle large objects and to perform area 3D modeling onboard a UAV.

  10. Label free cell tracking in 3D tissue engineering constructs with high resolution imaging

    NASA Astrophysics Data System (ADS)

    Smith, W. A.; Lam, K.-P.; Dempsey, K. P.; Mazzocchi-Jones, D.; Richardson, J. B.; Yang, Y.

    2014-02-01

    Within the field of tissue engineering there is an emphasis on studying 3-D live tissue structures. Consequently, to investigate and identify cellular activities and phenotypes in a 3-D environment in vitro, including shape, migration/proliferation, and axon projection, it is necessary to adopt an optical imaging system that enables monitoring of 3-D cellular activities and morphology through the thickness of the construct over an extended culture period without cell labeling. This paper describes a new 3-D tracking algorithm developed for Cell-IQ®, an automated cell imaging platform equipped with an environmental chamber optimized for capturing time-lapse sequences of live cell images over long periods without labeling. As an integral part of the algorithm, a novel auto-focusing procedure was developed for phase-contrast microscopy with 20x and 40x objectives, providing a more accurate estimation of cell growth and trajectories by allowing 3-D voxels to be computed at high spatiotemporal resolution and cell density. A pilot study was carried out in a phantom system consisting of horizontally aligned nanofiber layers with precise spacing between them, mimicking features typical of neuronal growth in a 3-D environment. This was followed by detailed investigations of axonal projections and dendritic circuitry formation in a 3-D tissue engineering construct. Preliminary work on primary animal neuronal cells responding to chemoattractant and topographic cues within the scaffolds has produced encouraging results.
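    The auto-focusing idea, scoring each candidate focal plane by sharpness and picking the best, can be sketched with a variance-of-Laplacian focus measure. This is an illustrative stand-in: the actual Cell-IQ® procedure is not described in detail here, and the function names are assumptions.

```python
import numpy as np

def laplacian_focus_measure(img):
    """Variance of a discrete Laplacian -- a common sharpness score;
    blurred (out-of-focus) images score lower than sharp ones."""
    lap = (-4.0 * img
           + np.roll(img, 1, 0) + np.roll(img, -1, 0)
           + np.roll(img, 1, 1) + np.roll(img, -1, 1))
    return lap.var()

def best_focus_index(z_stack):
    """Index of the sharpest slice in a (z, y, x) image stack."""
    return int(np.argmax([laplacian_focus_measure(s) for s in z_stack]))
```

Applied to a through-focus stack at each time point, this selects the plane from which the 3-D voxel estimates are built.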

  11. Building 3D aerial image in photoresist with reconstructed mask image acquired with optical microscope

    NASA Astrophysics Data System (ADS)

    Chou, C. S.; Tang, Y. P.; Chu, F. S.; Huang, W. C.; Liu, R. G.; Gau, T. S.

    2012-03-01

    Calibration of mask images on the wafer becomes more important as features shrink. Two major types of metrology are commonly adopted. One is to measure the mask with a scanning electron microscope (SEM) to obtain the contours on the mask and then simulate the wafer image with an optical simulator. The other is to use an optical imaging tool, the Aerial Image Measurement System (AIMS™), to emulate the image on the wafer. However, the SEM method is indirect: it gathers only planar contours on the mask, with no consideration of optical characteristics such as 3D topography, so the image on the wafer is not predicted precisely. Although the AIMS™ method directly measures the intensity at the near field of a mask, the image measured this way differs from that on the wafer because of reflections and refractions in the films on the wafer. Here, a new approach is proposed to emulate the image on the wafer more precisely. The behavior of plane waves at different oblique angles is well understood inside and between planar film stacks. In an optical microscope imaging system, plane waves can be extracted from the pupil plane with a coherent point source of illumination. Once the plane waves for a specific coherent illumination are analyzed, the partially coherent component of the waves can be reconstructed with a proper transfer function that includes lens aberration, polarization, and reflection and refraction in the films. This new method transfers the near light field of a mask into an image on the wafer without the disadvantages of indirect SEM measurement, such as neglecting the effects of mask topography and of reflections and refractions in the wafer film stacks. Furthermore, with this precise latent image, a separated resist model also becomes more achievable.

  12. 3D fluoroscopic image estimation using patient-specific 4DCBCT-based motion models

    NASA Astrophysics Data System (ADS)

    Dhou, S.; Hurwitz, M.; Mishra, P.; Cai, W.; Rottmann, J.; Li, R.; Williams, C.; Wagar, M.; Berbeco, R.; Ionascu, D.; Lewis, J. H.

    2015-05-01

    3D fluoroscopic images represent volumetric patient anatomy during treatment with high spatial and temporal resolution. However, 3D fluoroscopic images estimated using motion models built from 4DCT images taken days or weeks before treatment do not reliably represent patient anatomy during treatment. In this study we developed and performed an initial evaluation of techniques that build patient-specific motion models from 4D cone-beam CT (4DCBCT) images taken immediately before treatment, and used these models to estimate 3D fluoroscopic images from 2D kV projections captured during treatment. We evaluated the accuracy of the 3D fluoroscopic images by comparison to ground-truth digital and physical phantom images. The performance of 4DCBCT-based and 4DCT-based motion models was compared in simulated clinical situations representing tumor baseline shift or initial patient positioning errors. The results demonstrate the ability of 4DCBCT imaging to generate motion models that account for changes that 4DCT-based motion models cannot. When simulating tumor baseline shifts and patient positioning errors of up to 5 mm, the average tumor localization error and the 95th percentile error in six datasets were 1.20 and 2.2 mm, respectively, for 4DCBCT-based motion models; 4DCT-based motion models applied to the same six datasets gave 4.18 and 5.4 mm, respectively. Analysis of voxel-wise intensity differences was also conducted for all experiments. In summary, this study demonstrates the feasibility of 4DCBCT-based 3D fluoroscopic image generation in digital and physical phantoms and shows its potential advantage when anatomy changes between the time of 4DCT imaging and treatment delivery.

  13. 3D X-ray imaging methods in support catheter ablations of cardiac arrhythmias.

    PubMed

    Stárek, Zdeněk; Lehar, František; Jež, Jiří; Wolf, Jiří; Novák, Miroslav

    2014-10-01

    Cardiac arrhythmias are a very frequent illness. Pharmacotherapy is not very effective in persistent arrhythmias and carries a number of risks. Catheter ablation has become an effective and curative treatment method over the past 20 years. To support complex arrhythmia ablations, 3D X-ray imaging of the cardiac cavities is used, most frequently 3D reconstruction of CT images. 3D cardiac rotational angiography (3DRA) is a modern method for creating CT-like 3D images on a standard X-ray machine equipped with special software. Its advantages are the possibility of obtaining images during the procedure, a decreased radiation dose, and a reduced amount of contrast agent. The left atrium model is the one most frequently used for complex atrial arrhythmia ablations, particularly for atrial fibrillation. CT data allow for the creation and segmentation of 3D models of all cardiac cavities. Recently, research has demonstrated the use of 3DRA to create 3D models of other cardiac structures (right ventricle, left ventricle, aorta) and non-cardiac structures (oesophagus). These models can be used during catheter ablation of complex arrhythmias to improve orientation during the construction of 3D electroanatomic maps, fused directly with 3D electroanatomic systems and/or with fluoroscopy. Intensive development in the creation and use of 3D models has taken place over the past years, and they are now routinely used during catheter ablations of arrhythmias, mainly atrial fibrillation ablation procedures. Further development of both the creation and use of these models may be anticipated. PMID:24964905

  14. Study on Construction of 3D Building Based on UAV Images

    NASA Astrophysics Data System (ADS)

    Xie, F.; Lin, Z.; Gui, D.; Lin, H.

    2012-07-01

    Based on the characteristics of an Unmanned Aerial Vehicle (UAV) system for low-altitude aerial photogrammetry and the needs of three-dimensional (3D) city modeling, a method of fast 3D building modeling using images from a UAV carrying a four-combined camera is studied. First, by contrasting and analyzing the mosaic structures of existing four-combined cameras, a new four-combined camera with specially designed image overlap is presented; it improves the self-calibration function to achieve high-precision imaging by automatically eliminating the errors of mechanical deformation and of the time lag between exposures, and further reduces the weight of the imaging system. Second, multi-angle images, including vertical and oblique images obtained by the UAV system, are used for detailed measurement of building surfaces and for texture extraction. Finally, two tests, aerial photography with large-scale mapping at 1:1000 and 3D building construction at Shandong University of Science and Technology, and aerial photography with large-scale mapping at 1:500 and 3D building construction at Henan University of Urban Construction, validate the construction of 3D buildings from the combined wide-angle camera images of the UAV system. It is demonstrated that the UAV system for low-altitude aerial photogrammetry can be used for 3D building construction, and the technical solution in this paper offers a new and fast plan for 3D expression of the city landscape, fine modeling, and visualization.

  15. Gonio photometric imaging for recording of reflectance spectra of 3D objects

    NASA Astrophysics Data System (ADS)

    Miyake, Yoichi; Tsumura, Norimichi; Haneishi, Hideaki; Hayashi, Junichiro

    2002-06-01

    In recent years, there has been growing demand for systems for 3D capture of archives in museums and galleries. In visualizing a 3D object, it is important to reproduce both color and glossiness accurately. Our final goal is to construct digital archival systems for museums and a virtual museum on the Internet via the World Wide Web. To achieve this goal, we have developed multi-spectral imaging systems to record and estimate the reflectance spectra of art paintings based on principal component analysis and the Wiener estimation method. In this paper, a gonio-photometric imaging method is introduced for recording 3D objects. Five-band images of the object are taken under seven different illumination angles. The set of five-band images is then analyzed on the basis of both the dichromatic reflection model and the Phong model to extract the gonio-photometric information of the object. Prediction of reproduced images of the object under several illuminants and illumination angles is demonstrated, and images synthesized with a 3D wire-frame model captured by a 3D digitizer are also presented.

  16. In vivo 3D PIXE-micron-CT imaging of Drosophila melanogaster using a contrast agent

    NASA Astrophysics Data System (ADS)

    Matsuyama, Shigeo; Hamada, Naoki; Ishii, Keizo; Nozawa, Yuichiro; Ohkura, Satoru; Terakawa, Atsuki; Hatori, Yoshinobu; Fujiki, Kota; Fujiwara, Mitsuhiro; Toyama, Sho

    2015-04-01

    In this study, we developed a three-dimensional (3D) computed tomography (CT) in vivo imaging system for imaging small insects at micrometer resolution. The system, referred to as 3D PIXE-micron-CT (PIXEμCT), uses characteristic X-rays produced by ion microbeam bombardment of a metal target. PIXEμCT was used to observe the body organs and internal structure of a living Drosophila melanogaster. Although the organs of the thorax were clearly imaged, the digestive organs in the abdominal cavity could not initially be discerned clearly, with the exception of the rectum and the Malpighian tubule. To enhance the abdominal images, a barium sulfate powder radiocontrast agent was added. For the first time, 3D images of the ventriculus of a living D. melanogaster were obtained. Our results show that PIXEμCT can provide in vivo 3D-CT images that correctly reflect the structure of individual living organs, which is expected to be very useful in biological research.

  17. Free-Breathing 3D Whole Heart Black Blood Imaging with Motion Sensitized Driven Equilibrium

    PubMed Central

    Srinivasan, Subashini; Hu, Peng; Kissinger, Kraig V.; Goddu, Beth; Goepfert, Lois; Schmidt, Ehud J.; Kozerke, Sebastian; Nezafat, Reza

    2012-01-01

    Purpose: To assess the efficacy and robustness of motion-sensitized driven equilibrium (MSDE) for blood suppression in volumetric 3D whole-heart cardiac MR. Materials and Methods: To investigate the efficacy of MSDE for blood suppression and its myocardial SNR penalty in different imaging sequences, 7 healthy adult subjects were imaged using 3D ECG-triggered MSDE-prep T1-weighted turbo spin echo (TSE) and spoiled gradient echo (GRE), after optimization of the MSDE parameters in a pilot study of 5 subjects. Imaging artifacts and myocardial and blood SNR were assessed. Subsequently, the feasibility of isotropic-resolution MSDE-prep black-blood imaging was assessed in 6 subjects. Finally, 15 patients with known or suspected cardiovascular disease were recruited and imaged with a conventional multi-slice 2D DIR TSE sequence and 3D MSDE-prep spoiled GRE. Results: The MSDE-prep yields significant blood suppression (75-92%), enabling volumetric 3D black-blood assessment of the whole heart with significantly improved visualization of the chamber walls. The MSDE-prep also allowed successful acquisition of black-blood images with isotropic spatial resolution. In the patient study, 3D black-blood MSDE-prep and DIR resulted in similar blood suppression in the LV and RV walls, but the MSDE-prep had superior myocardial signal and wall sharpness. Conclusion: MSDE-prep allows volumetric black-blood imaging of the heart. PMID:22517477

  18. Small SWAP 3D imaging flash ladar for small tactical unmanned air systems

    NASA Astrophysics Data System (ADS)

    Bird, Alan; Anderson, Scott A.; Wojcik, Michael; Budge, Scott E.

    2015-05-01

    The Space Dynamics Laboratory (SDL), working with the Naval Research Laboratory (NRL) and industry leaders Advanced Scientific Concepts (ASC) and Hood Technology Corporation, has developed a small-SWAP (size, weight, and power) 3D imaging flash ladar (LAser Detection And Ranging) sensor system concept design for small tactical unmanned air systems (STUAS). The design places an ASC 3D flash ladar camera and laser in a Hood Technology gyro-stabilized gimbal system. It is an autonomous, intelligent, geo-aware sensor system that supplies real-time 3D terrain and target images. Flash ladar and visible camera data are processed at the sensor using a custom digitizer/frame grabber with compression. Mounted in the aft housing are the power, controls, processing computers, and GPS/INS. The onboard processor controls pointing and handles image data, detection algorithms, and queuing. The sensor system generates georeferenced terrain and target images with a low probability of false return and <10 cm range accuracy through foliage in real time. The 3D imaging flash ladar is designed for a STUAS, with a complete system SWAP estimate of <9 kg, <0.2 m³, and <350 W. The system is modeled using LadarSIM, a MATLAB®- and Simulink®-based ladar system simulator designed and developed by the Center for Advanced Imaging Ladar (CAIL) at Utah State University. We will present the concept design and modeled performance predictions.

  19. 3D city models completion by fusing lidar and image data

    NASA Astrophysics Data System (ADS)

    Grammatikopoulos, L.; Kalisperakis, I.; Petsa, E.; Stentoumis, C.

    2015-05-01

    A fundamental step in the generation of visually detailed 3D city models is the acquisition of high-fidelity 3D data. Typical approaches employ DSM representations, usually derived from airborne Lidar (Light Detection and Ranging) scanning or image-based procedures. In this contribution, we focus on fusing data from both methods in order to enhance or complete them. In particular, we combine an existing Lidar and orthomosaic dataset (used as reference) with a new, higher-resolution aerial image acquisition (including both vertical and oblique imagery) carried out in the area of Kallithea, Athens, Greece. In a preliminary step, a digital orthophoto and a DSM are generated from the aerial images in an arbitrary reference system using a Structure-from-Motion and dense stereo-matching framework. Image-to-Lidar registration is performed by extracting and matching 2D features (SIFT and SURF) between the two orthophotos. The established point correspondences are assigned 3D coordinates through interpolation on the reference Lidar surface, then backprojected onto the aerial images and matched with 2D image features located in the vicinity of the backprojected 3D points. These points serve as Ground Control Points, with appropriate weights, for the final orientation and calibration of the images through a bundle adjustment solution. By these means, the aerial imagery, now optimally aligned to the reference dataset, can be used to generate an enhanced and more accurately textured 3D city model.
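    The 2D feature-matching step between the two orthophotos can be sketched as nearest-neighbour descriptor matching with Lowe's ratio test. This is a sketch with toy descriptors; the real pipeline matches SIFT/SURF descriptors extracted from the images.

```python
import numpy as np

def match_descriptors(desc_a, desc_b, ratio=0.75):
    """Nearest-neighbour descriptor matching with Lowe's ratio test:
    accept a match only when the best candidate is clearly closer
    than the second best, rejecting ambiguous correspondences."""
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)
        j, k = np.argsort(dists)[:2]
        if dists[j] < ratio * dists[k]:
            matches.append((i, int(j)))
    return matches
```

The surviving correspondences are the ones lifted to 3D on the Lidar surface and used as weighted Ground Control Points.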

  20. Sparse multipass 3D SAR imaging: applications to the GOTCHA data set

    NASA Astrophysics Data System (ADS)

    Austin, Christian D.; Ertin, Emre; Moses, Randolph L.

    2009-05-01

    Typically in SAR imaging there are insufficient data to form well-resolved three-dimensional (3D) images using traditional Fourier image reconstruction; furthermore, scattering centers do not persist over wide angles. In this work, we examine 3D non-coherent wide-angle imaging on the GOTCHA Air Force Research Laboratory (AFRL) data set, which consists of multipass complete circular-aperture radar data from a scene at AFRL, with each pass varying in elevation as a result of aircraft flight dynamics. We compare two algorithms capable of forming well-resolved 3D images from this data set: regularized lp least-squares inversion and non-uniform multipass interferometric SAR (IFSAR).

  1. 3D building reconstruction from aerial CCD image and sparse laser sample data

    NASA Astrophysics Data System (ADS)

    Hongjian, You; Shiqiang, Zhang

    2006-06-01

    An approach for automatic 3D building reconstruction based on aerial CCD imagery and sparse laser scanning sample points is presented in this paper. The geometric shape of a building is shown very clearly in a high-resolution aerial CCD image, so we first use a Laplacian sharpening operator and threshold segmentation to extract edges from the CCD image, and then use pixel connectivity to extract linear features. Bi-directional projection histograms and line matching are proposed to extract the contours of buildings. The height of each building is determined from the sparse laser sample points that fall within the contour extracted from the CCD image; the 3D information of each building is thereby reconstructed. We correctly reconstruct 3D buildings with this approach using real aerial CCD and sparse laser rangefinder data.
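    The bi-directional projection histogram used to localize building contours can be sketched as follows: sum a binary edge map along rows and along columns, and take the extent of the non-zero bins as the contour's bounding box. The function name and the box-shaped output are illustrative assumptions; the paper combines this with line matching.

```python
import numpy as np

def contour_bounds(edge_map):
    """Bi-directional projection histograms of a binary edge map:
    row sums and column sums; the span of non-zero bins gives the
    bounding extent (r_min, r_max, c_min, c_max) of the contour."""
    rows = edge_map.sum(axis=1)
    cols = edge_map.sum(axis=0)
    r = np.nonzero(rows)[0]
    c = np.nonzero(cols)[0]
    return (int(r[0]), int(r[-1]), int(c[0]), int(c[-1]))
```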

  2. Radon transform based automatic metal artefacts generation for 3D threat image projection

    NASA Astrophysics Data System (ADS)

    Megherbi, Najla; Breckon, Toby P.; Flitton, Greg T.; Mouton, Andre

    2013-10-01

    Threat Image Projection (TIP) plays an important role in aviation security. In order to evaluate how well human security screeners detect threats, TIP systems project images of realistic threat items into the images of the passenger baggage being scanned. In this proof-of-concept paper, we propose a 3D TIP method that can be integrated within new 3D Computed Tomography (CT) screening systems. To make the threat items appear as if they were genuinely located in the scanned bag, appropriate CT metal artefacts are generated in the resulting TIP images according to the scan orientation, the bag content, and the material of the inserted threat items. This process is performed in the projection domain using a novel methodology based on the Radon transform. The results obtained on challenging 3D CT baggage images are very promising in terms of plausibility and realism.
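    Projection-domain insertion relies on the linearity of line-integral projections (the Radon transform): projecting the bag and the threat separately and summing is equivalent to projecting the merged scene. A minimal sketch of that property, using a simple axis-aligned parallel projection as a stand-in for the full Radon transform:

```python
import numpy as np

def parallel_projection(vol, axis=0):
    """One parallel-beam projection (line integrals) of a volume --
    a stand-in for a single view of the Radon/sinogram domain."""
    return vol.sum(axis=axis)

# Projections are linear, so a threat item can be composited into the
# sinogram rather than the reconstructed image:
#   proj(bag) + proj(threat) == proj(bag with threat inserted)
```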

  3. Technical Note: Characterization of custom 3D printed multimodality imaging phantoms

    SciTech Connect

    Bieniosek, Matthew F.; Lee, Brian J.; Levin, Craig S.

    2015-10-15

    Purpose: Imaging phantoms are important tools for researchers and technicians, but they can be costly and difficult to customize. Three-dimensional (3D) printing is a widely available rapid prototyping technique that enables the fabrication of objects with 3D computer-generated geometries. It is ideal for quickly producing customized, low-cost, multimodal, reusable imaging phantoms. This work validates the use of 3D printed phantoms by comparing CT and PET scans of a 3D printed phantom and a commercial “Micro Deluxe” phantom. It also presents results from a customized 3D printed PET/MRI phantom and a customized high-resolution imaging phantom with sub-mm features. Methods: CT and PET scans of a 3D printed phantom and a commercial Micro Deluxe (Data Spectrum Corporation, USA) phantom with 1.2, 1.6, 2.4, 3.2, 4.0, and 4.8 mm diameter hot rods were acquired. The measured PET and CT rod sizes, activities, and attenuation coefficients were compared. A PET/MRI scan of a custom 3D printed phantom with hot and cold rods was performed, with photon attenuation and normalization measurements made with a separate 3D printed normalization phantom. X-ray transmission scans of a customized two-level high-resolution 3D printed phantom with sub-mm features were also performed. Results: Results show very good agreement between the commercial and 3D printed Micro Deluxe phantoms, with less than 3% difference in CT-measured rod diameter, less than 5% difference in PET-measured rod diameter, and at most 6.2% difference in average rod activity from a 10 min, 333 kBq/ml (9 μCi/ml) Siemens Inveon (Siemens Healthcare, Germany) PET scan. In all cases, these differences were within the measurement uncertainties of our setups. PET/MRI scans successfully identified 3D printed hot and cold rods on both PET and MRI. X-ray projection images of a 3D printed high-resolution phantom resolved features as small as 350 μm wide. Conclusions: This work shows that 3D printed phantoms can be functionally equivalent to commercially available phantoms. They are a viable option for quickly distributing and fabricating low-cost, customized phantoms.

  4. An effective automated system for grading severity of retinal arteriovenous nicking in colour retinal images.

    PubMed

    Roy, Pallab Kanti; Nguyen, Uyen T V; Bhuiyan, Alauddin; Ramamohanarao, Kotagiri

    2014-01-01

    Retinal arteriovenous (AV) nicking is a precursor of hypertension, stroke, and other cardiovascular diseases. In this paper, an effective method is proposed for analyzing retinal venular widths to automatically classify the severity of AV nicking. We use a combination of intensity and edge information from the vein to compute its widths. The widths at various sections of the vein near the crossover point are then used to train a random forest classifier that grades the severity of AV nicking. We analyzed 47 color retinal images from two population-based studies for quantitative evaluation of the proposed method, comparing its detection accuracy with a recently published four-class AV nicking classification method. Our method achieves 64.51% classification accuracy, in contrast to the 49.46% reported for the state-of-the-art method. PMID:25571443
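    The width measurements feeding the random forest can be summarized into a feature vector; the sketch below is a hedged illustration, not the paper's actual feature set — the specific statistics, the narrowing ratio, and the function name are all assumptions.

```python
import numpy as np

def nicking_features(widths):
    """Summary features of venular widths measured at sections
    approaching the crossover point: mean, minimum, spread, and a
    fractional-constriction score relative to the median width."""
    w = np.asarray(widths, float)
    baseline = np.median(w)
    narrowing = (baseline - w.min()) / baseline   # fractional constriction
    return np.array([w.mean(), w.min(), w.std(), narrowing])
```

A vector like this, computed per crossover, is the kind of input one would hand to a random forest (e.g. scikit-learn's RandomForestClassifier) to grade severity.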

  5. Resolution modification and context based image processing for retinal prosthesis

    NASA Astrophysics Data System (ADS)

    Naghdy, Golshah; Beston, Chris; Seo, Jong-Mo; Chung, Hum

    2006-08-01

    This paper focuses on simulating image processing algorithms and exploring issues related to reducing high-resolution images to the 25 x 25 pixels suitable for the retinal implant. The field of view (FoV) is explored, and a novel method of virtual eye movement is discussed. Several issues beyond the normal model of human vision are addressed through context-based processing.
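    Reducing a high-resolution frame to the implant's 25 x 25 grid can be sketched by block averaging. This is a simple baseline only; the paper's actual resolution-modification and context-based algorithms are more involved, and the function name and divisibility assumption are ours.

```python
import numpy as np

def downsample_mean(img, out=(25, 25)):
    """Reduce a grayscale image to an out[0] x out[1] grid by averaging
    non-overlapping pixel blocks (dimensions assumed divisible)."""
    h, w = img.shape
    bh, bw = h // out[0], w // out[1]
    cropped = img[:bh * out[0], :bw * out[1]]
    return cropped.reshape(out[0], bh, out[1], bw).mean(axis=(1, 3))
```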

  6. Near-infrared optical imaging of human brain based on the semi-3D reconstruction algorithm

    NASA Astrophysics Data System (ADS)

    Liu, Ming; Meng, Wei; Qin, Zhuanping; Zhou, Xiaoqing; Zhao, Huijuan; Gao, Feng

    2013-03-01

    In non-invasive brain imaging with near-infrared light, a precise head model is of great significance to the forward model and the image reconstruction. To deal with individual differences in head tissues and the irregular curvature of the head, we extracted the head structure from an MRI image of a volunteer using the Mimics software. This scheme makes it possible to assign optical parameters to every layer of the head tissues reasonably and to solve the diffusion equation with finite-element analysis. For the inverse problem, a semi-3D reconstruction algorithm is adopted to trade off computation cost against accuracy between full 3-D and 2-D reconstruction. In this scheme, the changes in the optical properties of the inclusions are assumed to be either axially invariant or confined to the imaging plane, while the 3-D nature of photon migration is retained. This leads to a 2-D inverse problem with a matched 3-D forward model. Simulation results show that, compared to the full 3-D reconstruction algorithm, the semi-3D algorithm cuts computation time by 27%.

  7. A Novel 3D Building Damage Detection Method Using Multiple Overlapping UAV Images

    NASA Astrophysics Data System (ADS)

    Sui, H.; Tu, J.; Song, Z.; Chen, G.; Li, Q.

    2014-09-01

    In this paper, a novel approach is presented that applies multiple overlapping UAV images to building damage detection. Traditional building damage detection methods focus on 2D change detection (i.e., changes in image appearance only), but the 2D information delivered by images is often neither sufficient nor accurate enough for building damage detection. Detection of building damage from the 3D features of a scene is therefore desirable. The key idea of 3D building damage detection is 3D change detection using a 3D point cloud obtained from aerial images through Structure-from-Motion (SfM) techniques. The approach discussed in this paper uses not only the height changes of the 3D scene but also the shape and texture features of the images, thus fully combining the 2D and 3D information of the real world to detect building damage. The results, tested through a field study, demonstrate that this method is feasible and effective for building damage detection. It is also easily applicable and well suited for rapid damage assessment after natural disasters.

  8. Real-time 3D surface-image-guided beam setup in radiotherapy of breast cancer

    SciTech Connect

    Djajaputra, David; Li Shidong

    2005-01-01

    We describe an approach for external-beam radiotherapy of breast cancer that utilizes three-dimensional (3D) surface information of the breast. The surface data are obtained from a 3D optical camera rigidly mounted on the ceiling of the treatment vault. This camera utilizes light in the visible range and therefore introduces no ionizing radiation to the patient. In addition to the surface topography of the treated area, the camera captures gray-scale information that is overlaid on the 3D surface image. This allows us to visualize the skin markers and automatically determine the isocenter position and the beam angles for the breast tangential fields. The field sizes and shapes of the tangential, supraclavicular, and internal mammary gland fields can all be determined from the 3D surface image of the target. A least-squares method is first introduced for the tangential-field setup, which is useful for compensating for changes in the target shape. The entire process of capturing the 3D surface data and calculating the beam parameters typically requires less than 1 min. Our tests on phantoms and patient images achieved an accuracy of 1 mm in shift and 0.5 deg in rotation. Importantly, changes in target shape and position in each treatment session can both be corrected through this real-time image-guided system.
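
The abstract does not spell out the least-squares setup step; a generic least-squares rigid alignment (the Kabsch/Procrustes solution) on synthetic surface points might look like this. It is a sketch under assumed inputs, not the authors' implementation:

```python
import numpy as np

def rigid_fit(P, Q):
    """Least-squares rotation R and translation t such that Q ~ R @ P + t."""
    cp, cq = P.mean(axis=1, keepdims=True), Q.mean(axis=1, keepdims=True)
    U, _, Vt = np.linalg.svd((Q - cq) @ (P - cp).T)
    D = np.diag([1.0, 1.0, np.linalg.det(U @ Vt)])  # guard against reflection
    R = U @ D @ Vt
    t = cq - R @ cp
    return R, t

rng = np.random.default_rng(1)
P = rng.normal(size=(3, 20))                       # reference surface points
theta = np.deg2rad(5.0)                            # small setup-like rotation
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([[1.0], [-2.0], [0.5]])
Q = R_true @ P + t_true                            # observed surface points

R, t = rigid_fit(P, Q)
print(np.allclose(R, R_true), np.allclose(t, t_true))  # → True True
```

The recovered rotation and shift are exactly the quantities the system reports to 0.5 deg and 1 mm accuracy.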

  9. Wearable 3-D Photoacoustic Tomography for Functional Brain Imaging in Behaving Rats

    PubMed Central

    Tang, Jianbo; Coleman, Jason E.; Dai, Xianjin; Jiang, Huabei

    2016-01-01

    Understanding the relationship between brain function and behavior remains a major challenge in neuroscience. Photoacoustic tomography (PAT) is an emerging technique that allows for noninvasive in vivo brain imaging at micrometer-millisecond spatiotemporal resolution. In this article, a novel, miniaturized 3D wearable PAT (3D-wPAT) technique is described for brain imaging in behaving rats. 3D-wPAT has three layers of fully functional acoustic transducer arrays. Phantom imaging experiments revealed that the in-plane X-Y spatial resolutions were ~200 μm for each acoustic detection layer. The functional imaging capacity of 3D-wPAT was demonstrated by mapping the cerebral oxygen saturation via multi-wavelength irradiation in behaving hyperoxic rats. In addition, we demonstrated that 3D-wPAT could be used for monitoring sensory stimulus-evoked responses in behaving rats by measuring hemodynamic responses in the primary visual cortex during visual stimulation. Together, these results show the potential of 3D-wPAT for brain study in behaving rodents. PMID:27146026

  10. Dynamic visual image modeling for 3D synthetic scenes in agricultural engineering

    NASA Astrophysics Data System (ADS)

    Gao, Li; Yan, Juntao; Li, Xiaobo; Ji, Yatai; Li, Xin

    This paper addresses dynamic visual image modeling of 3D synthetic scenes using dynamic multichannel binocular visual images transmitted over a mobile self-organizing network. Technologies for modeling 3D synthetic scenes have been widely used in many industries. The main purpose of this paper is to use networks of dynamic visual monitors and sensors to observe an unattended area, exploiting the advantages of mobile networks in rural areas to further improve existing mobile information services and to provide personalized information services. The goal of the display is a faithful representation of the synthetic scene. Low-power dynamic visual monitors, temperature/humidity sensors, and GPS units installed in the node equipment send monitoring data at scheduled times. The 3D model is then rebuilt by synthesizing the images returned through the mobile self-organizing network. On this basis, we formalize a novel algorithm for multichannel binocular visual 3D imaging based on fast 3D modeling. Taking advantage of low-priced mobile devices, mobile self-organizing networks can collect large amounts of video from locations unsuitable for human observation or impossible to reach, and accurately synthesize 3D scenes. This application will play a great role in promoting 3D modeling in agriculture.

  11. Wearable 3-D Photoacoustic Tomography for Functional Brain Imaging in Behaving Rats.

    PubMed

    Tang, Jianbo; Coleman, Jason E; Dai, Xianjin; Jiang, Huabei

    2016-01-01

    Understanding the relationship between brain function and behavior remains a major challenge in neuroscience. Photoacoustic tomography (PAT) is an emerging technique that allows for noninvasive in vivo brain imaging at micrometer-millisecond spatiotemporal resolution. In this article, a novel, miniaturized 3D wearable PAT (3D-wPAT) technique is described for brain imaging in behaving rats. 3D-wPAT has three layers of fully functional acoustic transducer arrays. Phantom imaging experiments revealed that the in-plane X-Y spatial resolutions were ~200 μm for each acoustic detection layer. The functional imaging capacity of 3D-wPAT was demonstrated by mapping the cerebral oxygen saturation via multi-wavelength irradiation in behaving hyperoxic rats. In addition, we demonstrated that 3D-wPAT could be used for monitoring sensory stimulus-evoked responses in behaving rats by measuring hemodynamic responses in the primary visual cortex during visual stimulation. Together, these results show the potential of 3D-wPAT for brain study in behaving rodents. PMID:27146026

  12. A practical salient region feature based 3D multi-modality registration method for medical images

    NASA Astrophysics Data System (ADS)

    Hahn, Dieter A.; Wolz, Gabriele; Sun, Yiyong; Hornegger, Joachim; Sauer, Frank; Kuwert, Torsten; Xu, Chenyang

    2006-03-01

    We present a novel representation of 3D salient region features and its integration into a hybrid rigid-body registration framework. We adopt the scale, translation, and rotation invariance properties of these intrinsic 3D features to estimate a transform between the underlying mono- or multi-modal 3D medical images. Our method combines advantageous aspects of both feature- and intensity-based approaches and consists of three steps: automatic extraction of a set of 3D salient region features on each image, robust estimation of correspondences, and their sub-pixel-accurate refinement with outlier elimination. We propose a region-growing-based approach for the extraction of 3D salient region features, a solution to the problem of feature clustering, and a reduction of the correspondence search-space complexity. Results of the developed algorithm are presented for both mono- and multi-modal intra-patient 3D image pairs (CT, PET and SPECT) acquired for change detection, tumor localization, and time-based intra-person studies. The accuracy of the method is clinically evaluated by a medical expert with an approach that measures the distance between a set of selected corresponding points consisting of both anatomical and functional structures or lesion sites. This demonstrates the robustness of the proposed method to image overlap, missing information, and artefacts. We conclude by discussing potential medical applications and possibilities for integration into a non-rigid registration framework.

  13. Flatbed-type 3D display systems using integral imaging method

    NASA Astrophysics Data System (ADS)

    Hirayama, Yuzo; Nagatani, Hiroyuki; Saishu, Tatsuo; Fukushima, Rieko; Taira, Kazuki

    2006-10-01

    We have developed prototypes of flatbed-type autostereoscopic display systems using a one-dimensional integral imaging method. The integral imaging system reproduces light beams similar to those produced by a real object. Our display architecture is suitable for flatbed configurations because it has a large margin for viewing distance and angle and provides continuous motion parallax. We have applied our technology to 15.4-inch displays, realizing a horizontal resolution of 480 with 12 parallaxes through the adoption of a mosaic pixel arrangement on the display panel. This allows viewers to see high-quality autostereoscopic images. Viewing the display from an angle allows the viewer to experience 3-D images that stand out several centimeters from the surface of the display. Mixed reality of virtual 3-D objects and real objects is also realized on a flatbed display. In seeking the reproduction of natural 3-D images on the flatbed display, we developed proprietary software; fast playback of CG movie content and real-time interaction are realized with the aid of a graphics card. Ensuring that 3-D images are safe for viewers is very important, so we have measured their effects on visual function and evaluated their biological effects. For example, accommodation and convergence were measured simultaneously, and various biological effects were measured before and after the task of watching 3-D images. We have found that our displays yield better results than a conventional stereoscopic display. The new technology opens up new areas of application for 3-D displays, including arcade games, e-learning, simulations of buildings and landscapes, and even 3-D menus in restaurants.

  14. Imaging of retinal ganglion cells in glaucoma: pitfalls and challenges.

    PubMed

    Werkmeister, R M; Cherecheanu, A Popa; Garhofer, G; Schmidl, D; Schmetterer, L

    2013-08-01

    Imaging has gained a key role in modern glaucoma management. Traditionally, interest was directed toward the appearance of the optic nerve head and the retinal nerve fiber layer. With the improvement of the resolution of optical coherence tomography, the ganglion cell complex has also become routinely accessible in the clinic. Further advances have been made in understanding the structure-function relationship in glaucoma. Nevertheless, direct imaging of the retinal ganglion cells in glaucoma would be advantageous. With the currently used techniques, this goal cannot be achieved, because the transversal resolution is limited by aberrations of the eye. The use of adaptive optics has significantly improved transversal resolution, and the imaging of several cell types including cones and astrocytes has become possible. Imaging of retinal ganglion cells, however, still remains a problem, because of the transparency of these cells. However, the visualization of retinal ganglion cells and their dendrites has been achieved in animal models. Furthermore, attempts have been made to visualize the apoptosis of retinal ganglion cells in vivo. Implementation of these techniques in clinical practice will probably improve glaucoma care and facilitate the development of neuroprotective strategies. PMID:23512142

  15. Influence of Retinal Image Shifts and Extra-Retinal Eye Movement Signals on Binocular Rivalry Alternations

    PubMed Central

    Kalisvaart, Joke P.; Goossens, Jeroen

    2013-01-01

    Previous studies have indicated that saccadic eye movements correlate positively with perceptual alternations in binocular rivalry, presumably because the foveal image changes resulting from saccades, rather than the eye movements themselves, cause switches in awareness. Recently, however, we found evidence that retinal image shifts elicit so-called onset rivalry and not percept switches as such. These findings raise the interesting question whether onset rivalry may account for correlations between saccades and percept switches. We therefore studied binocular rivalry when subjects made eye movements across a visual stimulus and compared it with rivalry in a replay condition in which subjects maintained fixation while the same retinal displacements were reproduced by stimulus displacements on the screen. We used dichoptic random-dot motion stimuli viewed through a stereoscope, and measured eye and eyelid movements with scleral search coils. Positive correlations between retinal image shifts and perceptual switches were observed for both saccades and stimulus jumps, but only for switches towards the subjects' preferred eye at stimulus onset. A similar asymmetry was observed for blink-induced stimulus interruptions. Moreover, for saccades, amplitude appeared crucial, as the positive correlation persisted for small stimulus jumps but not for small saccades (amplitudes < 1). These findings corroborate our tenet that saccades elicit a form of onset rivalry, and that rivalry is modulated by extra-retinal eye movement signals. PMID:23593494

  16. New techniques of determining focus position in gamma knife operation using 3D image reconstruction

    NASA Astrophysics Data System (ADS)

    Xiong, Yingen; Wang, Dezong; Zhou, Quan

    1994-09-01

    In this paper, new techniques for determining the position of a disease focus in a gamma knife operation are presented. In these techniques, a transparent 3D color image of the human body organ is reconstructed using a new three-dimensional reconstruction method, and then the position, area, and volume of the disease focus, such as a cancer or tumor, are calculated for use in the gamma knife operation. The CT pictures are input into a digital image processing system, the useful information is extracted, and the original data are obtained. The transparent 3D color image is then reconstructed from these original data. Using this transparent 3D color image, the positions of the human body organ and the disease focus are determined in a coordinate system. While the 3D image is reconstructed, the area and volume of the organ and the disease focus can be calculated at the same time. Practical application shows that the positions of a body organ and a disease focus can be determined exactly using the transparent 3D color image, which is very useful in gamma knife or other surgical operations. The techniques presented in this paper have great application value.
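
Once the focus is segmented in the CT stack, its volume and position follow from voxel counting, as a minimal sketch shows. The segmentation mask, slice spacing, and pixel size below are illustrative values, not the paper's data:

```python
import numpy as np

seg = np.zeros((10, 64, 64), dtype=bool)   # slices x rows x cols
seg[4:6, 30:34, 30:34] = True              # segmented disease focus

pixel_mm, slice_mm = 0.5, 2.0              # assumed in-plane and slice sizes
voxel_mm3 = pixel_mm * pixel_mm * slice_mm
volume_mm3 = seg.sum() * voxel_mm3         # volume = voxel count x voxel volume
centroid = np.array(np.nonzero(seg)).mean(axis=1)  # focus position, voxel units
print(volume_mm3)  # → 16.0
```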

  17. 3D weighting in cone beam image reconstruction algorithms: ray-driven vs. pixel-driven.

    PubMed

    Tang, Xiangyang; Nilsen, Roy A; Smolin, Alex; Lifland, Ilya; Samsonov, Dmitry; Taha, Basel

    2008-01-01

    A 3D weighting scheme has been proposed previously to reconstruct images from both helical and axial scans in state-of-the-art volumetric CT scanners for diagnostic imaging. Such 3D weighting can be implemented in either a ray-driven or a pixel-driven manner, depending on the available computational resources. An experimental study is conducted in this paper to evaluate the difference between the ray-driven and pixel-driven implementations of the 3D weighting from the perspective of image quality, while their computational complexity is analyzed theoretically. Computer-simulated data and several phantoms, such as the helical body phantom and a humanoid chest phantom, are employed in the experimental study, showing that both the ray-driven and pixel-driven 3D weighting provide image quality suitable for diagnostic imaging in clinical applications. With image reconstruction engines of increasing computational power becoming available, it is believed that pixel-driven 3D weighting will be dominantly employed in state-of-the-art volumetric CT scanners for clinical applications. PMID:19163264

  18. High speed spectral domain optical coherence tomography for retinal imaging at 500,000 A‑lines per second

    PubMed Central

    An, Lin; Li, Peng; Shen, Tueng T.; Wang, Ruikang

    2011-01-01

    We present a new development of ultrahigh-speed spectral domain optical coherence tomography (SDOCT) for human retinal imaging at 850 nm central wavelength, employing two high-speed line-scan CMOS cameras, each running at 250 kHz. By precisely controlling the recording and reading periods of the two cameras, the SDOCT system achieves an imaging speed of 500,000 A-lines per second while maintaining both high axial resolution (~8 μm) and acceptable depth range (~2.5 mm). With this system, we propose two scanning protocols for human retinal imaging. The first aims at isotropic dense sampling and fast scanning, enabling 3D imaging within 0.72 s for a region covering 4 × 4 mm². In this case, the B-frame rate is 700 Hz and the isotropic dense sampling is 500 A-lines along both the fast and slow axes. This scanning protocol minimizes motion artifacts, making it possible to average along both directions so that the signal-to-noise ratio of the system is enhanced while the degradation of its resolution is minimized. The second protocol is designed to scan the retina over a large field of view, in which 1200 A-lines are captured along both the fast and slow axes, covering 10 mm², to provide overall information about the retinal status. Because of the relatively long imaging time (4 s for a 3D scan), motion artifacts are inevitable, making it difficult to interpret the 3D data set, particularly as depth-resolved en face fundus images. To mitigate this difficulty, we propose using the relatively highly reflecting retinal pigment epithelium layer as the reference to flatten the original 3D data set along both the fast and slow axes. We show that the proposed system delivers superb performance for human retinal imaging. PMID:22025983

  19. High speed spectral domain optical coherence tomography for retinal imaging at 500,000 A‑lines per second.

    PubMed

    An, Lin; Li, Peng; Shen, Tueng T; Wang, Ruikang

    2011-10-01

    We present a new development of ultrahigh-speed spectral domain optical coherence tomography (SDOCT) for human retinal imaging at 850 nm central wavelength, employing two high-speed line-scan CMOS cameras, each running at 250 kHz. By precisely controlling the recording and reading periods of the two cameras, the SDOCT system achieves an imaging speed of 500,000 A-lines per second while maintaining both high axial resolution (~8 μm) and acceptable depth range (~2.5 mm). With this system, we propose two scanning protocols for human retinal imaging. The first aims at isotropic dense sampling and fast scanning, enabling 3D imaging within 0.72 s for a region covering 4 × 4 mm². In this case, the B-frame rate is 700 Hz and the isotropic dense sampling is 500 A-lines along both the fast and slow axes. This scanning protocol minimizes motion artifacts, making it possible to average along both directions so that the signal-to-noise ratio of the system is enhanced while the degradation of its resolution is minimized. The second protocol is designed to scan the retina over a large field of view, in which 1200 A-lines are captured along both the fast and slow axes, covering 10 mm², to provide overall information about the retinal status. Because of the relatively long imaging time (4 s for a 3D scan), motion artifacts are inevitable, making it difficult to interpret the 3D data set, particularly as depth-resolved en face fundus images. To mitigate this difficulty, we propose using the relatively highly reflecting retinal pigment epithelium layer as the reference to flatten the original 3D data set along both the fast and slow axes. We show that the proposed system delivers superb performance for human retinal imaging. PMID:22025983

  20. Constraining 3D Process Sedimentological Models to Geophysical Data Using Image Quilting

    NASA Astrophysics Data System (ADS)

    Tahmasebi, P.; Da Pra, A.; Pontiggia, M.; Caers, J.

    2014-12-01

    3D process geological models, whether for carbonate or sedimentological systems, have been proposed for modeling realistic subsurface heterogeneity. The problem with such forward process models is that they are not constrained to any subsurface data, whether wells or geophysical surveys. We propose a new method for realistic geological modeling of complex heterogeneity by hybridizing 3D process modeling of geological deposition with conditioning by means of a novel multiple-point geostatistics (MPS) technique termed image quilting (IQ). Image quilting is a pattern-based technique that stitches together patterns extracted from training images to generate stochastic realizations that look like the training image. In this paper, we illustrate how 3D process-model realizations can be used as training images in image quilting. To constrain the realization to seismic data, we first interpret each facies in the geophysical data. These interpretations, while overly smooth and not reflecting finer-scale variation, are used as auxiliary variables in the generation of the image quilting realizations. To condition to well data, we first perform kriging of the well data to generate a kriging map and kriging variance. The kriging map is used as an additional auxiliary variable, while the kriging variance is used as a weight given to the kriging-derived auxiliary variable. We present an application to a giant offshore reservoir. Starting from advanced seismic attribute analysis and sedimentological interpretation, we build the 3D sedimentological process-based model and use it as a non-stationary training image for conditional image quilting.
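
A heavily simplified, 1-D sketch of the quilting step: each new patch is taken from the training image so as to minimise the sum of squared differences (SSD) in the overlap with what has been stitched so far. Real IQ works on 2-D/3-D patches, cuts a minimum-error boundary through the overlap, and conditions on the auxiliary variables described above; the toy training image and sizes here are assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)
train = rng.random((1, 200))          # toy 1-row "training image"
patch, overlap, n_patches = 20, 5, 6

def best_patch(target_overlap):
    """Return the training patch whose leading overlap best matches the target."""
    starts = np.arange(train.shape[1] - patch)
    ssd = [np.sum((train[0, s:s + overlap] - target_overlap) ** 2)
           for s in starts]
    s = starts[int(np.argmin(ssd))]
    return train[0, s:s + patch]

out = list(train[0, :patch])          # seed the realization with one patch
for _ in range(n_patches - 1):
    p = best_patch(np.array(out[-overlap:]))
    out.extend(p[overlap:])           # append only the non-overlapping part
out = np.array(out)
print(out.shape)  # → (95,)
```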

  1. A new gold-standard dataset for 2D/3D image registration evaluation

    NASA Astrophysics Data System (ADS)

    Pawiro, Supriyanto; Markelj, Primoz; Gendrin, Christelle; Figl, Michael; Stock, Markus; Bloch, Christoph; Weber, Christoph; Unger, Ewald; Nöbauer, Iris; Kainberger, Franz; Bergmeister, Helga; Georg, Dietmar; Bergmann, Helmar; Birkfellner, Wolfgang

    2010-02-01

    In this paper, we propose a new gold-standard data set for the validation of 2D/3D image registration algorithms for image-guided radiotherapy. The gold standard was computed using a pig head with attached fiducial markers. We used several imaging modalities common in diagnostic imaging or radiotherapy, including 64-slice computed tomography (CT), magnetic resonance imaging (MRI) using T1, T2 and proton density (PD) sequences, and cone-beam CT (CBCT) imaging data. Radiographic data were acquired using kilovoltage (kV) and megavoltage (MV) imaging techniques. The image information reflects both anatomy and reliable fiducial marker information, and improves over existing data sets in the level of anatomical detail and image data quality. The markers in the three-dimensional (3D) and two-dimensional (2D) images were segmented using Analyze 9.0 (AnalyzeDirect, Inc.) and in-house software. The projection distance errors (PDE) and the expected target registration errors (TRE) over all the image data sets were found to be less than 1.7 mm and 1.3 mm, respectively. The gold-standard data set, obtained with state-of-the-art imaging technology, has the potential to improve the validation of 2D/3D registration algorithms for image-guided therapy.
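
The target registration error quoted above is, in the generic marker-based form, the distance between points mapped by the algorithm's transform and by the gold-standard transform. A sketch with synthetic stand-in points and transforms (not the data set's actual geometry):

```python
import numpy as np

def tre(points, R, t, R_gold, t_gold):
    """Per-point target registration error between two rigid mappings."""
    a = points @ R.T + t             # algorithm's mapping of the targets
    b = points @ R_gold.T + t_gold   # gold-standard mapping
    return np.linalg.norm(a - b, axis=1)

pts = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0], [0.0, 10.0, 10.0]])
R_gold, t_gold = np.eye(3), np.array([1.0, 2.0, 3.0])
R, t = np.eye(3), t_gold + np.array([0.5, 0.0, 0.0])  # 0.5 mm shift error

errs = tre(pts, R, t, R_gold, t_gold)
print(errs)  # → [0.5 0.5 0.5]
```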

  2. F3D Image Processing and Analysis for Many - and Multi-core Platforms

    SciTech Connect

    2014-10-01

    F3D is written in OpenCL, so it achieves platform-portable parallelism on modern multi-core CPUs and many-core GPUs. The interface and mechanisms to access the F3D core are written in Java as a plugin for Fiji/ImageJ, delivering several key image-processing algorithms needed to remove artifacts from micro-tomography data. The algorithms consist of data-parallel-aware filters that utilize resources efficiently, work on out-of-core datasets, and scale efficiently across multiple accelerators. Optimizing for data-parallel filters, streaming of out-of-core datasets, and efficient resource, memory, and data management over complex execution sequences of filters greatly expedites any scientific workflow with image-processing requirements. F3D performs several different types of 3D image-processing operations, such as non-linear filtering using bilateral filtering, median filtering, and/or morphological operators (MM). The F3D gray-level MM operators are one-pass, constant-time methods that can perform morphological transformations with a line structuring element oriented in discrete directions. MM operators can be applied to gray-scale images and consist of two parts: (a) a reference shape, or structuring element, which is translated over the image, and (b) a mechanism, or operation, that defines the comparisons to be performed between the image and the structuring element. This tool provides a critical component within many complex pipelines, such as those for automated segmentation of image stacks. F3D is a descendant of Quant-CT, another software package we developed previously; the two modules are to be integrated in a future version. Further details were reported in: D. M. Ushizima, T. Perciano, H. Krishnan, B. Loring, H. Bale, D. Parkinson, and J. Sethian. Structure recognition from high-resolution images of ceramic composites. IEEE International Conference on Big Data, October 2014.
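
One-pass, constant-time gray-level morphology with a line structuring element is classically done with two running-minimum passes per block (the van Herk/Gil-Werman scheme); a 1-D erosion sketch follows. This is a generic illustration of that family of methods, not F3D's OpenCL kernels, and the window length is arbitrary:

```python
import numpy as np

def erode_line(f, k):
    """Gray-level erosion of 1-D signal f by a centred line SE of odd length k,
    using two running-minimum passes (van Herk/Gil-Werman style)."""
    half, n = k // 2, len(f)
    fp = np.concatenate([np.full(half, np.inf), f, np.full(half, np.inf)])
    fp = np.concatenate([fp, np.full((-len(fp)) % k, np.inf)])  # pad to block size
    g, h = fp.copy(), fp.copy()
    for i in range(1, len(fp)):              # forward running minima per block
        if i % k:
            g[i] = min(g[i - 1], fp[i])
    for i in range(len(fp) - 2, -1, -1):     # backward running minima per block
        if (i + 1) % k:
            h[i] = min(h[i + 1], fp[i])
    # each length-k window spans at most two blocks: merge the two passes
    return np.array([min(h[x], g[x + k - 1]) for x in range(n)])

f = np.array([5, 3, 8, 1, 9, 2, 7, 4, 6], dtype=float)
print(erode_line(f, 3).astype(int))  # → [3 3 1 1 1 2 2 4 4]
```

The cost is three comparisons per pixel regardless of k, which is what makes the one-pass operators constant-time.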

  3. Novel time- and depth-stamped imaging for 3D-PIV (particle image velocimetry) using correlation image sensor

    NASA Astrophysics Data System (ADS)

    Komiya, Kenji; Kurihara, Toru; Ando, Shigeru

    2012-03-01

    We propose a novel and extremely efficient scheme for 3D particle image velocimetry (3D-PIV), simultaneous time-stamped and depth-stamped imaging, using a correlation image sensor (CIS) and structured illumination. In conventional PIV measurements, the 3-D positions of numerous tiny particles inserted in a fluid field must be detected using multiple high-speed cameras. The resulting huge data volume increases the computational cost and reduces the reliability of velocity-field estimation. These problems can be solved if single-frame 4D (3D position and time) trajectory imaging can be realized. The CIS developed by us is a device that outputs the temporal correlation between the incident light intensity and two sets of three-phase (3P) reference signals common to all pixels. When particles are imaged in a frame using a 3P reference signal, the CIS records the time information as a phase distribution along their trajectories. The CIS can also capture depth information by exploiting the structured illumination and another 3P reference signal. The combination of these methods provides time- and depth-stamped imaging. We describe the principle, theoretical foundations, and analysis algorithms. Several experimental results evaluating accuracy and resolution are shown.
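
Recovering a phase "stamp" from three correlation outputs follows the generic three-bucket demodulation formula; the signal model g_k = B + A·cos(φ − 2πk/3) and all numbers below are synthetic assumptions, not sensor data:

```python
import numpy as np

A, B, phi = 2.0, 5.0, 1.0               # amplitude, offset, encoded phase
g = np.array([B + A * np.cos(phi - 2 * np.pi * k / 3) for k in range(3)])

# Three-bucket demodulation: sin and cos terms from the three samples
phi_hat = np.arctan2(np.sqrt(3) * (g[1] - g[2]), 2 * g[0] - g[1] - g[2])
amp_hat = np.sqrt(3 * (g[1] - g[2]) ** 2 + (2 * g[0] - g[1] - g[2]) ** 2) / 3
print(round(float(phi_hat), 6), round(float(amp_hat), 6))  # → 1.0 2.0
```

With one 3P reference encoding time and another encoding the structured-light depth code, the same arithmetic yields the time stamp and the depth stamp per pixel.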

  4. Image selection in photogrammetric multi-view stereo methods for metric and complete 3D reconstruction

    NASA Astrophysics Data System (ADS)

    Hosseininaveh Ahmadabadian, Ali; Robson, Stuart; Boehm, Jan; Shortis, Mark

    2013-04-01

    Multi-view stereo (MVS), as a low-cost technique for precise 3D reconstruction, can rival laser scanners if the scale of the model is resolved. A fusion of stereo imaging equipment with photogrammetric bundle adjustment and MVS methods, known as photogrammetric MVS, can generate correctly scaled 3D models without using any known object distances. Although a huge set of stereo images captured of the object (e.g. 200 high-resolution images of a small object) contains redundant data that allows detailed and accurate 3D reconstruction, capture and processing time increases when a vast number of high-resolution images is employed. Moreover, some parts of the object are often missing due to incomplete coverage. These problems demand a logical selection of the most suitable stereo camera views from the large image dataset. This paper presents a method for clustering and choosing optimal stereo, or optionally single, images from a large image dataset. The approach focuses on the two key steps of image clustering and iterative image selection. The method is implemented in a software application called Imaging Network Designer (IND) and tested by the 3D recording of a gearbox and three metric reference objects. A comparison is made between IND and CMVS, a free package for selecting vantage images. The final 3D models obtained with the IND and CMVS approaches are compared with datasets generated with an MMDx Nikon laser scanner. Results demonstrate that IND can provide a better image selection for MVS than CMVS in terms of surface-coordinate uncertainty and completeness.
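
The abstract does not give IND's clustering criteria, but the core idea of picking a well-spread subset of views can be sketched with greedy farthest-point sampling on camera centres; the random centres and subset size are assumptions, and real view selection would add visibility and baseline criteria:

```python
import numpy as np

def select_views(centres, k):
    """Greedily pick k camera indices that are maximally spread out."""
    chosen = [0]                              # start from the first camera
    d = np.linalg.norm(centres - centres[0], axis=1)
    while len(chosen) < k:
        nxt = int(np.argmax(d))               # camera farthest from chosen set
        chosen.append(nxt)
        d = np.minimum(d, np.linalg.norm(centres - centres[nxt], axis=1))
    return chosen

rng = np.random.default_rng(3)
centres = rng.random((200, 3))                # 200 hypothetical camera positions
subset = select_views(centres, 12)
print(len(subset), len(set(subset)))  # → 12 12
```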

  5. Analysis and Processing the 3D-Range-Image-Data for Robot Monitoring

    NASA Astrophysics Data System (ADS)

    Kohoutek, Tobias

    2008-09-01

    Industrial robots are commonly used for physically stressful jobs in complex environments, where collisions with heavy, highly dynamic machines must be prevented. For this reason the operational range has to be monitored precisely and reliably. The advantage of the SwissRanger® SR-3000 is that it delivers intensity images and 3D information of the same scene simultaneously, which conveniently allows 3D monitoring. Automatic real-time collision prevention within the robot's working space therefore becomes possible by working directly with 3D coordinates.
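
The monitoring step reduces to checking whether any 3-D point from the range camera falls inside a protected volume around the robot; a toy sketch with an assumed axis-aligned safety zone and a synthetic point cloud (a real system would use the calibrated robot workspace):

```python
import numpy as np

zone_min = np.array([0.0, 0.0, 0.0])          # assumed safety-zone corners (m)
zone_max = np.array([1.0, 1.0, 1.0])

rng = np.random.default_rng(4)
cloud = rng.uniform(-2, 2, size=(1000, 3))    # points from the 3-D camera

# A point is an intrusion if all three coordinates lie within the zone
inside = np.all((cloud >= zone_min) & (cloud <= zone_max), axis=1)
print(bool(inside.any()))                      # intrusion detected?
```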

  6. Integration of Imaging Analysis and 3D Laser Relief of Artworks: A Powerful Diagnostic Tool

    NASA Astrophysics Data System (ADS)

    Marras, L.; Fontana, R.; Gambino, M. C.; Greco, M.; Materazzi, M.; Pampaloni, E.; Pelagotti, A.; Pezzati, L.; Poggi, P.

    When analysing a work of art, imaging data from multiple sources can be effectively integrated and added to a 3D digital model, in order to form an improved multi-dimensional dataset. Herein we present the IR-colour reflectography, the UV fluorescence and the 3D microprofilometry diagnostic devices, developed at INOA, and we discuss the integration of 2D and 3D datasets, the former giving information of the surface (or nearly sub-surface) properties of a painting, the latter giving shape information of the surface itself.

  7. Real-time 3D adaptive filtering for portable imaging systems

    NASA Astrophysics Data System (ADS)

    Bockenbach, Olivier; Ali, Murtaza; Wainwright, Ian; Nadeski, Mark

    2015-03-01

    Portable imaging devices have proven valuable for emergency medical services, both in the field and in hospital environments, and are becoming more prevalent in clinical settings where the use of larger imaging machines is impractical. 3D adaptive filtering is one of the most advanced techniques for noise reduction and feature enhancement, but it is computationally very demanding and hence often cannot run with sufficient performance on a portable platform. In recent years, advanced multicore DSPs have been introduced that attain high processing performance while maintaining low power dissipation. These processors enable the implementation of complex algorithms such as 3D adaptive filtering, improving the image quality of portable medical imaging devices. In this study, the performance of a 3D adaptive filtering algorithm on a digital signal processor (DSP) is investigated. The performance is assessed by filtering a volume of 512×256×128 voxels at a rate of 10 MVoxels/s.
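
A compact 3-D bilateral filter is a representative member of the adaptive-filter family benchmarked here; the window, sigmas, and the noisy step-edge test volume below are assumptions, and a DSP implementation would of course be fixed-point and vectorised:

```python
import numpy as np

def bilateral3d(v, radius=1, sigma_s=1.0, sigma_r=0.1):
    """3-D bilateral filter: spatial Gaussian times intensity-range Gaussian."""
    acc = np.zeros_like(v)
    wsum = np.zeros_like(v)
    pad = np.pad(v, radius, mode='edge')
    for dz in range(-radius, radius + 1):
        for dy in range(-radius, radius + 1):
            for dx in range(-radius, radius + 1):
                shifted = pad[radius + dz:radius + dz + v.shape[0],
                              radius + dy:radius + dy + v.shape[1],
                              radius + dx:radius + dx + v.shape[2]]
                w = np.exp(-(dz * dz + dy * dy + dx * dx) / (2 * sigma_s ** 2)
                           - (shifted - v) ** 2 / (2 * sigma_r ** 2))
                acc += w * shifted
                wsum += w
    return acc / wsum

vol = np.zeros((16, 16, 16))
vol[8:, :, :] = 1.0                               # step edge to be preserved
rng = np.random.default_rng(5)
noisy = vol + 0.05 * rng.normal(size=vol.shape)   # additive noise
out = bilateral3d(noisy)
print(float(np.abs(out - vol).mean()) < float(np.abs(noisy - vol).mean()))  # → True
```

The range term suppresses averaging across the step, which is the "feature enhancement while denoising" behaviour the abstract refers to.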

  8. 2D imaging and 3D sensing data acquisition and mutual registration for painting conservation

    NASA Astrophysics Data System (ADS)

    Fontana, Raffaella; Gambino, Maria Chiara; Greco, Marinella; Marras, Luciano; Pampaloni, Enrico M.; Pelagotti, Anna; Pezzati, Luca; Poggi, Pasquale

    2004-12-01

    We describe the application of 2D and 3D data acquisition and mutual registration to the conservation of paintings. RGB color image acquisition, IR and UV fluorescence imaging, together with the more recent hyperspectral imaging (32 bands) are among the most useful techniques in this field. They generally are meant to provide information on the painting materials, on the employed techniques and on the object state of conservation. However, only when the various images are perfectly registered on each other and on the 3D model, no ambiguity is possible and safe conclusions may be drawn. We present the integration of 2D and 3D measurements carried out on two different paintings: "Madonna of the Yarnwinder" by Leonardo da Vinci, and "Portrait of Lionello d'Este", by Pisanello, both painted in the XV century.

  10. Synthesis of 3D Model of a Magnetic Field-Influenced Body from a Single Image

    NASA Technical Reports Server (NTRS)

    Wang, Cuilan; Newman, Timothy; Gallagher, Dennis

    2006-01-01

    A method for recovery of a 3D model of a cloud-like structure that is in motion and deforming but approximately governed by magnetic field properties is described. The method allows recovery of the model from a single intensity image in which the structure's silhouette can be observed. The method exploits envelope theory and a magnetic field model. Given one intensity image and the segmented silhouette in the image, the method proceeds without human intervention to produce the 3D model. In addition to allowing 3D model synthesis, the method's capability to yield a very compact description offers further utility. Application of the method to several real-world images is demonstrated.

  11. Operator guidance in 2D echocardiography via 3D model to image registration

    NASA Astrophysics Data System (ADS)

    Bergmeir, Christoph; Subramanian, Navneeth

    2009-02-01

    Ubiquitous use of 2D ultrasound (US) is limited by the difficulty an untrained operator has in interpreting the images. We present a solution for operator guidance through visual cues via registration of US to a 3D model. The method is demonstrated on 2D echocardiography data, where we are able to localize the scan plane in relation to the standard planes on the 3D model. Our algorithm operates by pre-processing both the US and CT images down to their most basic information (muscle, blood pool) using classification. Subsequently, these labels are registered using the match cardinality metric for binary labeled images. We evaluated our method on four parasternal long-axis and three parasternal short-axis images from different patients. Results show that our system is able to correctly distinguish between the different US standard views and is able to localize the scan on the 3D model correctly in five out of seven cases.
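The match cardinality metric for binary labeled images essentially counts agreeing labels. A toy sketch, assuming a simple fraction-of-agreeing-pixels score and an exhaustive translation search (the actual registration optimizes a richer transform):

```python
import numpy as np

def match_cardinality(fixed, moving):
    """Fraction of pixels whose binary labels agree -- a simple similarity
    score for registering label maps (e.g. muscle / blood-pool masks)."""
    return float(np.mean(fixed == moving))

def best_shift(fixed, moving, max_shift=3):
    """Exhaustive translation search maximizing label agreement."""
    best_score, best_d = -1.0, (0, 0)
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(moving, (dy, dx), axis=(0, 1))
            score = match_cardinality(fixed, shifted)
            if score > best_score:
                best_score, best_d = score, (dy, dx)
    return best_d, best_score

fixed = np.zeros((16, 16), dtype=int)
fixed[4:10, 4:10] = 1                         # a labeled structure
moving = np.roll(fixed, (1, 2), axis=(0, 1))  # same structure, shifted
shift, score = best_shift(fixed, moving)      # recovers the inverse shift
```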

  12. 3D image copyright protection based on cellular automata transform and direct smart pixel mapping

    NASA Astrophysics Data System (ADS)

    Li, Xiao-Wei; Kim, Seok-Tae; Lee, In-Kwon

    2014-10-01

    We propose a three-dimensional (3D) watermarking system with a direct smart pixel mapping algorithm to improve the resolution of the reconstructed 3D watermark plane images. The depth-converted elemental image array (EIA) is obtained through the computational pixel mapping method. In the watermark embedding process, the depth-converted EIA is first scrambled using the Arnold transform and then embedded in the middle-frequency band of the cellular automata (CA) transform domain. Compared with conventional computational integral imaging reconstruction (CIIR) methods, the proposed scheme gives a higher resolution of the reconstructed 3D plane images by using the quality-enhanced depth-converted EIA. The proposed method uses CA transforms with various gateway values and can therefore obtain many transform planes for embedding watermark data. To prove the effectiveness of the proposed method, we present the results of our preliminary experiments.
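The Arnold transform used for scrambling is the classic cat map on a square image: it is invertible, so the watermark can be descrambled exactly at extraction. A minimal sketch (the iteration count is illustrative):

```python
import numpy as np

def arnold(img, iterations=1, inverse=False):
    """Arnold cat map scrambling of a square NxN image.
    Forward: (x, y) -> (x + y, x + 2y) mod N; inverse: (x, y) -> (2x - y, y - x)."""
    n = img.shape[0]
    assert img.shape[0] == img.shape[1]
    x, y = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    out = img
    for _ in range(iterations):
        dest = np.empty_like(out)
        if not inverse:
            dest[(x + y) % n, (x + 2 * y) % n] = out[x, y]
        else:
            dest[(2 * x - y) % n, (y - x) % n] = out[x, y]
        out = dest
    return out

img = np.arange(64).reshape(8, 8)
scrambled = arnold(img, iterations=3)                 # pixel positions mixed
restored = arnold(scrambled, iterations=3, inverse=True)  # exact recovery
```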

  13. 3D topography of biologic tissue by multiview imaging and structured light illumination

    NASA Astrophysics Data System (ADS)

    Liu, Peng; Zhang, Shiwu; Xu, Ronald

    2014-02-01

    Obtaining three-dimensional (3D) information of biologic tissue is important in many medical applications. This paper presents two methods for reconstructing 3D topography of biologic tissue: multiview imaging and structured light illumination. For each method, the working principle is introduced, followed by experimental validation on a diabetic foot model. To compare the performance characteristics of these two imaging methods, a coordinate measuring machine (CMM) is used as a standard control. The wound surface topography of the diabetic foot model is measured by the multiview imaging and structured light illumination methods, respectively, and compared with the CMM measurements. The comparison results show that the structured light illumination method is a promising technique for 3D topographic imaging of biologic tissue.
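A common way to realize structured light illumination is sinusoidal phase shifting; the abstract does not state which pattern type is used, so the following three-step phase-shifting sketch is an illustrative assumption. The recovered wrapped phase encodes surface height after triangulation and unwrapping:

```python
import numpy as np

def three_step_phase(i1, i2, i3):
    """Wrapped phase from three fringe images with -120/0/+120 degree shifts:
    I_k = A + B*cos(phi + delta_k)."""
    return np.arctan2(np.sqrt(3.0) * (i1 - i3), 2.0 * i2 - i1 - i3)

# Synthetic check: a known phase ramp is recovered (no wrapping in this range).
phi = np.linspace(-1.0, 1.0, 50)      # ground-truth phase
a, b = 0.5, 0.4                       # background level and fringe modulation
i1 = a + b * np.cos(phi - 2 * np.pi / 3)
i2 = a + b * np.cos(phi)
i3 = a + b * np.cos(phi + 2 * np.pi / 3)
rec = three_step_phase(i1, i2, i3)
```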

  14. Nonrigid registration and classification of the kidneys in 3D dynamic contrast enhanced (DCE) MR images

    NASA Astrophysics Data System (ADS)

    Yang, Xiaofeng; Ghafourian, Pegah; Sharma, Puneet; Salman, Khalil; Martin, Diego; Fei, Baowei

    2012-02-01

    We have applied image analysis methods in the assessment of human kidney perfusion based on 3D dynamic contrast-enhanced (DCE) MRI data. This approach consists of 3D non-rigid image registration of the kidneys and fuzzy C-means classification of kidney tissues. The proposed registration method reduced motion artifacts in the dynamic images and improved the analysis of kidney compartments (cortex, medulla, and cavities). The dynamic intensity curves show the successive transition of the contrast agent through kidney compartments. The proposed method for motion correction and kidney compartment classification may be used to improve the validity and usefulness of further model-based pharmacokinetic analysis of kidney function.
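Fuzzy C-means assigns each voxel a graded membership in every tissue class rather than a hard label. A minimal 1D-intensity sketch (three clusters standing in for cortex, medulla, and cavities; all parameters are illustrative):

```python
import numpy as np

def fuzzy_cmeans(x, c=3, m=2.0, iters=100, seed=0):
    """Minimal fuzzy C-means for 1D intensity data x of shape (n,).
    Returns cluster centers and a (c, n) membership matrix."""
    rng = np.random.default_rng(seed)
    u = rng.random((c, x.size))
    u /= u.sum(axis=0)                          # memberships sum to 1 per sample
    for _ in range(iters):
        um = u ** m                             # fuzzified memberships
        centers = um @ x / um.sum(axis=1)       # weighted cluster means
        d = np.abs(x[None, :] - centers[:, None]) + 1e-12
        inv = d ** (-2.0 / (m - 1.0))           # standard FCM membership update
        u = inv / inv.sum(axis=0)
    return centers, u

# Three well-separated intensity clusters as a stand-in for tissue classes.
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(mu, 0.05, 100) for mu in (0.2, 0.5, 0.8)])
centers, u = fuzzy_cmeans(x, c=3)
```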

  15. Detection of retinal nerve fiber layer defects on retinal fundus images for early diagnosis of glaucoma

    NASA Astrophysics Data System (ADS)

    Muramatsu, Chisako; Hayashi, Yoshinori; Sawada, Akira; Hatanaka, Yuji; Hara, Takeshi; Yamamoto, Tetsuya; Fujita, Hiroshi

    2010-01-01

    Retinal nerve fiber layer defect (NFLD) is a major sign of glaucoma, the second leading cause of blindness in the world. Early detection of NFLDs is critical for improved prognosis of this progressive, blinding disease. We have investigated a computerized scheme for the detection of NFLDs on retinal fundus images. In this study, 162 images, including 81 images with 99 NFLDs, were used. After the major blood vessels were removed, the images were transformed, on the basis of ellipses, so that the curved paths of the retinal nerve fibers became approximately straight, and Gabor filters were applied to enhance the NFLDs. Band-like regions darker than the surrounding pixels were detected as NFLD candidates. For each candidate, image features were determined, and the likelihood of a true NFLD was estimated by using linear discriminant analysis and an artificial neural network (ANN). The sensitivity for detecting NFLDs was 91% at 1.0 false positive per image when using the ANN. The proposed computerized system for the detection of NFLDs can be useful to physicians in the diagnosis of glaucoma in mass screening.
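The Gabor filters used for enhancement are oriented sinusoids under a Gaussian envelope; an oriented bank responds strongly to band-like structures such as NFLDs. A sketch of a small zero-mean Gabor bank (kernel size, wavelength, and orientation count are illustrative assumptions):

```python
import numpy as np

def gabor_kernel(size=15, sigma=3.0, wavelength=8.0, theta=0.0):
    """Real Gabor kernel: an oriented sinusoid under a Gaussian envelope,
    tuned to band-like structures at orientation theta."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)   # rotate coordinates
    yr = -x * np.sin(theta) + y * np.cos(theta)
    g = (np.exp(-(xr ** 2 + yr ** 2) / (2.0 * sigma ** 2))
         * np.cos(2.0 * np.pi * xr / wavelength))
    return g - g.mean()    # zero DC: flat image regions give no response

# A bank of 8 orientations covering 0..180 degrees.
bank = [gabor_kernel(theta=t) for t in np.linspace(0, np.pi, 8, endpoint=False)]
```

Convolving the straightened image with each kernel and taking the maximum response across orientations enhances dark band-like candidates regardless of their local direction.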

  16. Photon-counting compressive sensing laser radar for 3D imaging.

    PubMed

    Howland, G A; Dixon, P B; Howell, J C

    2011-11-01

    We experimentally demonstrate a photon-counting, single-pixel, laser radar camera for 3D imaging where transverse spatial resolution is obtained through compressive sensing without scanning. We use this technique to image through partially obscuring objects, such as camouflage netting. Our implementation improves upon pixel-array based designs with a compact, resource-efficient design and highly scalable resolution. PMID:22086015
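In single-pixel compressive imaging, the scene is measured against a series of random patterns and reconstructed by exploiting sparsity, so far fewer measurements than pixels are needed. A toy sketch using a random +/-1 pattern matrix and orthogonal matching pursuit (the paper's actual patterns and solver may differ):

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: greedily recover a k-sparse x from y = A @ x."""
    residual, support = y.copy(), []
    coef = np.zeros(0)
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))  # pattern most correlated with residual
        support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(0)
n, m, k = 64, 48, 3                                  # 64-pixel scene, 48 measurements
x_true = np.zeros(n)
x_true[rng.choice(n, size=k, replace=False)] = rng.random(k) + 0.5
A = rng.choice([-1.0, 1.0], size=(m, n)) / np.sqrt(m)  # random measurement patterns
y = A @ x_true                                       # single-pixel "bucket" readings
x_rec = omp(A, y, k)                                 # sparse scene recovered
```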

  17. Weakly supervised automatic segmentation and 3D modeling of the knee joint from MR images

    NASA Astrophysics Data System (ADS)

    Amami, Amal; Ben Azouz, Zouhour

    2013-12-01

    Automatic segmentation and 3D modeling of the knee joint from MR images, is a challenging task. Most of the existing techniques require the tedious manual segmentation of a training set of MRIs. We present an approach that necessitates the manual segmentation of one MR image. It is based on a volumetric active appearance model. First, a dense tetrahedral mesh is automatically created on a reference MR image that is arbitrary selected. Second, a pairwise non-rigid registration between each MRI from a training set and the reference MRI is computed. The non-rigid registration is based on a piece-wise affine deformation using the created tetrahedral mesh. The minimum description length is then used to bring all the MR images into a correspondence. An average image and tetrahedral mesh, as well as a set of main modes of variations, are generated using the established correspondence. Any manual segmentation of the average MRI can be mapped to other MR images using the AAM. The proposed approach has the advantage of simultaneously generating 3D reconstructions of the surface as well as a 3D solid model of the knee joint. The generated surfaces and tetrahedral meshes present the interesting property of fulfilling a correspondence between different MR images. This paper shows preliminary results of the proposed approach. It demonstrates the automatic segmentation and 3D reconstruction of a knee joint obtained by mapping a manual segmentation of a reference image.

  18. Characterizing and reducing crosstalk in printed anaglyph stereoscopic 3D images

    NASA Astrophysics Data System (ADS)

    Woods, Andrew J.; Harris, Chris R.; Leggo, Dean B.; Rourke, Tegan M.

    2013-04-01

    The anaglyph three-dimensional (3D) method is a widely used technique for presenting stereoscopic 3D images. Its primary advantages are that it will work on any full-color display and only requires that the user view the anaglyph image using a pair of anaglyph 3D glasses with usually one lens tinted red and the other lens tinted cyan. A common image quality problem of anaglyph 3D images is high levels of crosstalk: the incomplete isolation of the left and right image channels such that each eye sees a "ghost" of the opposite perspective view. In printed anaglyph images, the crosstalk levels are often very high, much higher than when anaglyph images are presented on emissive displays. The sources of crosstalk in printed anaglyph images are described and a simulation model is developed that allows the amount of printed anaglyph crosstalk to be estimated based on the spectral characteristics of the light source, paper, ink set, and anaglyph glasses. The model is validated using a visual crosstalk ranking test, which indicates good agreement. The model is then used to consider scenarios for the reduction of crosstalk in printed anaglyph systems and finds a number of options that are likely to reduce crosstalk considerably.
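The simulation model integrates, over wavelength, the product of light-source spectrum, ink reflectance, and lens transmission. A toy sketch of that structure with entirely hypothetical Gaussian-shaped spectra (the paper uses measured spectral data):

```python
import numpy as np

wl = np.arange(400.0, 701.0, 5.0)               # wavelength samples [nm]

def band(center, width):
    """Hypothetical smooth spectral band (Gaussian shaped)."""
    return np.exp(-0.5 * ((wl - center) / width) ** 2)

source   = np.ones_like(wl)                     # idealized flat illuminant
paper    = np.ones_like(wl)                     # idealized white paper
cyan_ink = 1.0 - 0.9 * band(610.0, 40.0)        # absorbs red: carries the left image
red_ink  = 1.0 - 0.9 * band(500.0, 60.0)        # absorbs green/blue: right image
red_lens = band(620.0, 35.0)                    # left-eye filter passes red only

def lum(refl, lens):
    """Relative luminous power reaching the eye (uniform-grid spectral sum)."""
    return float(np.sum(source * refl * lens))

def contrast(ink, lens):
    """How dark the ink appears against white paper through this lens (0..1)."""
    return 1.0 - lum(ink, lens) / lum(paper, lens)

# Left-eye crosstalk: visibility of the right image's red ink through the
# red lens, relative to the intended (cyan-ink) left image.
xtalk_left = contrast(red_ink, red_lens) / contrast(cyan_ink, red_lens)
```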

  19. Fusion of laser and image sensory data for 3-D modeling of the free navigation space

    NASA Technical Reports Server (NTRS)

    Mass, M.; Moghaddamzadeh, A.; Bourbakis, N.

    1994-01-01

    A fusion technique which combines two different types of sensory data for 3-D modeling of a navigation space is presented. The sensory data is generated by a vision camera and a laser scanner. The problem of different resolutions for these sensory data was solved by reduced image resolution, fusion of different data, and use of a fuzzy image segmentation technique.
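Matching the camera image to the coarser laser range grid by reducing image resolution can be sketched as block averaging (the factor and array sizes here are illustrative):

```python
import numpy as np

def downsample(img, f):
    """Reduce camera-image resolution by block averaging so it matches a
    coarser range-scanner grid (factor f per axis)."""
    h, w = img.shape
    img = img[:h - h % f, :w - w % f]           # crop to a multiple of f
    return img.reshape(h // f, f, w // f, f).mean(axis=(1, 3))

cam = np.arange(64, dtype=float).reshape(8, 8)  # hypothetical 8x8 camera frame
coarse = downsample(cam, 2)                     # 4x4 grid like the laser scan
```

Once both modalities share one grid, per-cell fusion (e.g. attaching range values to segmented image regions) becomes a simple element-wise operation.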

  20. Parallax barrier engineering for image quality improvement in an autostereoscopic 3D display.

    PubMed

    Kim, Sung-Kyu; Yoon, Ki-Hyuk; Yoon, Seon Kyu; Ju, Heongkyu

    2015-05-18

    We present an image quality improvement in a parallax barrier (PB)-based multiview autostereoscopic 3D display system under real-time tracking of the positions of a viewer's eyes. The system exploits a parallax barrier engineered to offer significantly improved quality of three-dimensional images for a moving viewer without eyewear under dynamic eye tracking. The improved image quality includes enhanced uniformity of image brightness, reduced point crosstalk, and no pseudoscopic effects. We control the ratio between two parameters, the pixel size and the aperture of the parallax barrier slit, to improve the uniformity of image brightness at a viewing zone. The eye tracking that monitors the positions of the viewer's eyes enables the pixel data control software to turn on only the pixels for view images near the viewer's eyes (with the other pixels turned off), thus reducing point crosstalk. The software, combined with eye tracking, provides the right images for the respective eyes, therefore producing no pseudoscopic effects at zone boundaries. The viewing zone can be spanned over an area larger than the central viewing zone offered by a conventional PB-based multiview autostereoscopic 3D display (with no eye tracking). Our 3D display system also provides multiple views for motion parallax under eye tracking. More importantly, we demonstrate a substantial reduction of point crosstalk at the viewing zone, its level being comparable to that of a commercialized eyewear-assisted 3D display system. The multiview autostereoscopic 3D display presented can greatly resolve the point crosstalk problem, which is one of the critical factors that have made it difficult for previous multiview autostereoscopic 3D display technologies to replace their eyewear-assisted counterparts. PMID:26074575
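The pixel-size-to-slit-aperture tuning described above builds on standard parallax barrier geometry. A sketch of the textbook two-view relations (all numeric values are assumptions for illustration, not the paper's engineered design):

```python
# Two-view parallax barrier geometry: the barrier sits a gap g in front of
# the pixel plane so that, from the design distance d, each eye sees only
# alternate pixel columns through the slits.
p = 0.3e-3        # pixel pitch [m]            (assumed)
e = 65e-3         # interocular distance [m]   (typical adult value)
d = 0.6           # design viewing distance [m] (assumed)

# Similar triangles: adjacent pixels (p apart) must project to the two
# eyes (e apart), giving the barrier-to-pixel gap.
g = d * p / e

# The slit pattern repeats every two pixels as seen from the viewer, so the
# barrier pitch is slightly less than two pixel pitches.
pitch = 2.0 * p * d / (d + g)

print(f"gap = {g * 1e3:.2f} mm, barrier pitch = {pitch * 1e3:.4f} mm")
```

With these example numbers the gap is a few millimetres and the pitch is just under two pixel widths; widening or narrowing the slit aperture relative to the pixel size then trades brightness uniformity against crosstalk, which is the parameter ratio the paper optimizes.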