These are representative sample records from Science.gov related to your search topic.
For comprehensive and current results, perform a real-time search at Science.gov.
1

3D Reconstruction of the Retinal Arterial Tree Using Subject-Specific Fundus Images  

NASA Astrophysics Data System (ADS)

Systemic diseases, such as hypertension and diabetes, are associated with changes in the retinal microvasculature. Although a number of studies have been performed on the quantitative assessment of the geometrical patterns of the retinal vasculature, previous work has been confined to two-dimensional (2D) analyses. In this paper, we present an approach to obtain a 3D reconstruction of the retinal arteries from a pair of 2D retinal images acquired in vivo. A simple essential-matrix-based self-calibration approach was employed for the "fundus camera-eye" system. Vessel segmentation was performed using a semi-automatic approach, and correspondence between points from different images was calculated. The results of 3D reconstruction clearly show the centreline of the retinal vessels and their 3D curvature. Three-dimensional reconstruction of the retinal vessels is feasible and may be useful in future studies of the retinal vasculature in disease.

Liu, D.; Wood, N. B.; Xu, X. Y.; Witt, N.; Hughes, A. D.; Thom, S. A. McG.
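The reconstruction step described above, recovering a 3D point from its projections in two calibrated views, can be sketched with linear (DLT) triangulation. This is an illustrative sketch with synthetic camera matrices, not the authors' implementation; `triangulate_point` and the toy cameras are assumptions for demonstration.

```python
import numpy as np

def triangulate_point(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two views.

    P1, P2 : 3x4 camera projection matrices.
    x1, x2 : (u, v) image coordinates of the same point in each view.
    Returns the 3D point in inhomogeneous coordinates.
    """
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The solution is the right singular vector with the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

def project(P, X):
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

# Synthetic check: two cameras, one known point.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])              # reference camera
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0], [0]])])  # translated camera
X_true = np.array([0.2, -0.1, 4.0])
X_est = triangulate_point(P1, P2, project(P1, X_true), project(P2, X_true))
print(X_est)  # close to X_true
```

In practice the projection matrices would come from the self-calibration of the "fundus camera-eye" system, and the image points from the computed correspondences.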

2

Adaptive-optics optical coherence tomography for high-resolution and high-speed 3D retinal in vivo imaging  

PubMed Central

We have combined Fourier-domain optical coherence tomography (FD-OCT) with a closed-loop adaptive optics (AO) system using a Hartmann-Shack wavefront sensor and a bimorph deformable mirror. The adaptive optics system measures and corrects the wavefront aberration of the human eye for improved lateral resolution (~4 µm) of retinal images, while maintaining the high axial resolution (~6 µm) of stand-alone OCT. The AO-OCT instrument enables the three-dimensional (3D) visualization of different retinal structures in vivo with high 3D resolution (4×4×6 µm). Using this system, we have demonstrated the ability to image microscopic blood vessels and the cone photoreceptor mosaic. PMID:19096728

Zawadzki, Robert J.; Jones, Steven M.; Olivier, Scot S.; Zhao, Mingtao; Bower, Bradley A.; Izatt, Joseph A.; Choi, Stacey; Laut, Sophie; Werner, John S.

2008-01-01

3

Probabilistic intra-retinal layer segmentation in 3-D OCT images using global shape regularization.  

PubMed

With the introduction of spectral-domain optical coherence tomography (OCT), resulting in a significant increase in acquisition speed, the fast and accurate segmentation of 3-D OCT scans has become ever more important. This paper presents a novel probabilistic approach that models the appearance of retinal layers as well as the global shape variations of layer boundaries. Given an OCT scan, the full posterior distribution over segmentations is approximately inferred using a variational method, enabling efficient probabilistic inference in terms of computationally tractable model components: segmenting a full 3-D volume takes around a minute. Accurate segmentations demonstrate the benefit of using global shape regularization: we segmented 35 fovea-centered 3-D volumes with an average unsigned error of 2.46 ± 0.22 µm, as well as 80 normal and 66 glaucomatous 2-D circular scans with errors of 2.92 ± 0.5 µm and 4.09 ± 0.98 µm, respectively. Furthermore, we utilized the inferred posterior distribution to rate the quality of the segmentation, point out potentially erroneous regions, and discriminate normal from pathological scans. No pre- or postprocessing was required, and we used the same set of parameters for all data sets, underlining the robustness and out-of-the-box nature of our approach. PMID:24835184

Rathke, Fabian; Schmidt, Stefan; Schnörr, Christoph

2014-07-01

4

A statistical model for 3D segmentation of retinal choroid in optical coherence tomography images  

NASA Astrophysics Data System (ADS)

The choroid is a densely vascularized layer under the retinal pigment epithelium (RPE). Its deeper boundary is formed by the sclera, the outer fibrous shell of the eye. However, the inhomogeneity within the layers of choroidal Optical Coherence Tomography (OCT) tomograms presents a significant challenge to existing segmentation algorithms. In this paper, we performed a statistical study of retinal OCT data to extract the choroid. The model fits a Gaussian mixture model (GMM) to image intensities with the Expectation-Maximization (EM) algorithm. The goodness of fit of the proposed GMM, computed using a chi-square measure, is below 0.04 for our dataset. After fitting the GMM to the OCT data, a Bayesian classification method is employed to segment the upper and lower boundaries of the retinal choroid. Our simulations show signed and unsigned errors of -1.44 ± 0.5 and 1.6 ± 0.53 for the upper boundary, and -5.7 ± 13.76 and 6.3 ± 13.4 for the lower boundary, respectively.

Ghasemi, F.; Rabbani, H.

2014-03-01
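The core fitting step above, a Gaussian mixture estimated by EM followed by Bayesian (MAP) classification, can be sketched in one dimension. This is a minimal two-component sketch on synthetic intensities, not the paper's actual model, parameters, or data.

```python
import math
import random

def em_gmm_1d(data, iters=200):
    """Fit a two-component 1D Gaussian mixture by Expectation-Maximization."""
    # Crude initialization from the data spread.
    mu = [min(data), max(data)]
    var = [1.0, 1.0]
    w = [0.5, 0.5]
    for _ in range(iters):
        # E-step: posterior responsibility of each component for each sample.
        resp = []
        for x in data:
            p = [w[k] / math.sqrt(2 * math.pi * var[k])
                 * math.exp(-(x - mu[k]) ** 2 / (2 * var[k])) for k in range(2)]
            s = p[0] + p[1]
            resp.append([p[0] / s, p[1] / s])
        # M-step: update weights, means, and variances.
        for k in range(2):
            nk = sum(r[k] for r in resp)
            w[k] = nk / len(data)
            mu[k] = sum(r[k] * x for r, x in zip(resp, data)) / nk
            var[k] = sum(r[k] * (x - mu[k]) ** 2 for r, x in zip(resp, data)) / nk
            var[k] = max(var[k], 1e-6)   # guard against variance collapse
    return w, mu, var

def classify(x, w, mu, var):
    """Bayesian (MAP) classification: pick the component with higher posterior."""
    p = [w[k] / math.sqrt(2 * math.pi * var[k])
         * math.exp(-(x - mu[k]) ** 2 / (2 * var[k])) for k in range(2)]
    return 0 if p[0] >= p[1] else 1

random.seed(0)
# Synthetic "intensities": a darker choroid-like class and a brighter class.
data = [random.gauss(2.0, 0.5) for _ in range(300)] + \
       [random.gauss(6.0, 0.8) for _ in range(300)]
w, mu, var = em_gmm_1d(data)
print(sorted(round(m, 1) for m in mu))   # means near 2.0 and 6.0
```

A boundary voxel is then assigned to whichever class has the higher posterior, which is the 1-D analogue of the segmentation step described above.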

5

Retinal Imaging and Image Analysis  

PubMed Central

Many important eye diseases as well as systemic diseases manifest themselves in the retina. While a number of other anatomical structures contribute to the process of vision, this review focuses on retinal imaging and image analysis. Following a brief overview of the most prevalent causes of blindness in the industrialized world that includes age-related macular degeneration, diabetic retinopathy, and glaucoma, the review is devoted to retinal imaging and image analysis methods and their clinical implications. Methods for 2-D fundus imaging and techniques for 3-D optical coherence tomography (OCT) imaging are reviewed. Special attention is given to quantitative techniques for analysis of fundus photographs with a focus on clinically relevant assessment of retinal vasculature, identification of retinal lesions, assessment of optic nerve head (ONH) shape, building retinal atlases, and to automated methods for population screening for retinal diseases. A separate section is devoted to 3-D analysis of OCT images, describing methods for segmentation and analysis of retinal layers, retinal vasculature, and 2-D/3-D detection of symptomatic exudate-associated derangements, as well as to OCT-based analysis of ONH morphology and shape. Throughout the paper, aspects of image acquisition, image analysis, and clinical relevance are treated together considering their mutually interlinked relationships. PMID:21743764

Abràmoff, Michael D.; Garvin, Mona K.; Sonka, Milan

2011-01-01

6

Adaptive-optics optical coherence tomography for high-resolution and high-speed 3D retinal in vivo imaging  

Microsoft Academic Search

We have combined Fourier-domain optical coherence tomography (FD-OCT) with a closed-loop adaptive optics (AO) system using a Hartmann-Shack wavefront sensor and a bimorph deformable mirror. The adaptive optics system measures and corrects the wavefront aberration of the human eye for improved lateral resolution (~4 µm) of retinal images, while maintaining the high axial resolution (~6 µm) of stand-alone OCT.

Robert J. Zawadzki; Steven M. Jones; Scot S. Olivier; Mingtao Zhao; Bradley A. Bower; Joseph A. Izatt; Stacey Choi; Sophie Laut; John S. Werner

2005-01-01

7

A framework for retinal layer intensity analysis for retinal artery occlusion patient based on 3D OCT  

NASA Astrophysics Data System (ADS)

Occlusion of the retinal artery leads to severe ischemia and dysfunction of the retina. Quantitative analysis of reflectivity in the retina is needed for quantitative assessment of the severity of retinal ischemia. In this paper, we propose a framework for retinal layer intensity analysis for retinal artery occlusion (RAO) patients based on 3D OCT images. The proposed framework consists of five main steps. First, a pre-processing step is applied to the input OCT images. Second, a graph search method is applied to segment multiple surfaces in the OCT images. Third, the RAO region is detected based on a texture classification method. Fourth, the layer segmentation is refined using the detected RAO regions. Finally, the retinal layer intensity analysis is performed. The proposed method was tested on 27 clinical spectral-domain OCT images. The preliminary results show the feasibility and efficacy of the proposed method.

Liao, Jianping; Chen, Haoyu; Zhou, Chunlei; Chen, Xinjian

2014-03-01

8

3-D threat image projection  

NASA Astrophysics Data System (ADS)

Automated Explosive Detection Systems utilizing Computed Tomography perform a series of X-ray scans of passenger bags being checked in at the airport, and produce various 2-D projection images and 3-D volumetric images of the bag. The determination as to whether the passenger bag contains an explosive and needs to be searched manually is performed by trained Transportation Security Administration screeners following an approved protocol. In order to keep the screeners vigilant with regard to screening quality, the Transportation Security Administration has mandated the use of Threat Image Projection on 2-D projection X-ray screening equipment used at all US airports. These algorithms insert artificial visual threats into images of normal passenger bags in order to test the screeners' efficiency and quality at detecting threats. This technology for 2-D X-ray systems is proven and is widespread amongst multiple manufacturers of X-ray projection systems. Until now, Threat Image Projection has been unsuccessful at being introduced into 3-D Automated Explosive Detection Systems for numerous reasons. The failure of these prior attempts is mainly due to imaging cues that the screeners pick up on, which make it easy for them to discern the presence of the threat image, thus defeating the intended purpose. This paper presents a novel approach to 3-D Threat Image Projection for 3-D Automated Explosive Detection Systems. The method presented here is a projection-based approach in which both the threat object and the bag remain in projection sinogram space. Novel approaches have been developed for projection-based object segmentation, projection-based streak reduction used for threat object isolation with scan-orientation independence, and projection-based streak generation for an overall realistic 3-D image.
The algorithms are prototyped in MATLAB and C++ and demonstrate non-discernible 3-D threat image insertion into various luggage, and non-discernible streak patterns for 3-D images when compared to actual scanned images.

Yildiz, Yesna O.; Abraham, Douglas Q.; Agaian, Sos; Panetta, Karen

2008-02-01

9

3-D segmentation of retinal blood vessels in spectral-domain OCT volumes of the optic nerve head  

NASA Astrophysics Data System (ADS)

Segmentation of retinal blood vessels can provide important information for detecting and tracking retinal vascular diseases including diabetic retinopathy, arterial hypertension, arteriosclerosis and retinopathy of prematurity (ROP). Many studies on 2-D segmentation of retinal blood vessels from a variety of medical images have been performed. However, 3-D segmentation of retinal blood vessels from spectral-domain optical coherence tomography (OCT) volumes, which is capable of providing geometrically accurate vessel models, to the best of our knowledge, has not been previously studied. The purpose of this study is to develop and evaluate a method that can automatically detect 3-D retinal blood vessels from spectral-domain OCT scans centered on the optic nerve head (ONH). The proposed method utilized a fast multiscale 3-D graph search to segment retinal surfaces as well as a triangular mesh-based 3-D graph search to detect retinal blood vessels. An experiment on 30 ONH-centered OCT scans (15 right eye scans and 15 left eye scans) from 15 subjects was performed, and the mean unsigned error in 3-D of the computer segmentations compared with the independent standard obtained from a retinal specialist was 3.4 +/- 2.5 voxels (0.10 +/- 0.07 mm).

Lee, Kyungmoo; Abràmoff, Michael D.; Niemeijer, Meindert; Garvin, Mona K.; Sonka, Milan

2010-03-01

10

3D computational ghost imaging  

NASA Astrophysics Data System (ADS)

Computational ghost imaging is a technique that enables lensless single-pixel detectors to produce images. By illuminating a scene with a series of patterns from a digital light projector (DLP) and measuring the reflected or transmitted intensity, it is possible to retrieve a two-dimensional (2D) image when using a suitable computer algorithm. An important feature of this approach is that although the light travels from the DLP and is measured by the detector, the images produced reveal that the detector behaves like a source of light and the DLP behaves like a camera. By placing multiple single-pixel detectors in different locations it is possible to obtain multiple ghost images with different shading profiles, which together can be used to accurately calculate the three-dimensional (3D) surface geometry through photometric stereo techniques. In this work we show that using four photodiodes and an 850 nm source of illumination, high quality 3D images of a large toy soldier can be retrieved. The use of simplified lensless detectors in 3D imaging allows different detector materials and architectures to be used whose sensitivity may extend beyond the visible spectrum, at wavelengths where existing camera-based technology can become expensive or unsuitable.

Edgar, Matthew P.; Sun, Baoqing; Bowman, Richard; Welsh, Stephen S.; Padgett, Miles J.

2013-10-01
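The reconstruction principle behind computational ghost imaging, correlating the fluctuations of the single-pixel signal with the known illumination patterns, can be sketched as follows. The 4×4 scene and random patterns are synthetic stand-ins, not the experiment's data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Ground-truth scene the single-pixel "bucket" detector sees (4x4 toy object).
scene = np.zeros((4, 4))
scene[1:3, 1:3] = 1.0

n_patterns = 20000
patterns = rng.random((n_patterns, 4, 4))        # projected illumination patterns
signals = (patterns * scene).sum(axis=(1, 2))    # bucket-detector measurements

# Correlate signal fluctuations with the known patterns to recover the scene.
ghost = ((signals - signals.mean())[:, None, None] * patterns).mean(axis=0)
ghost = (ghost - ghost.min()) / (ghost.max() - ghost.min())  # normalize to [0, 1]

print(np.round(ghost, 2))  # bright where the object is, dark elsewhere
```

With several detectors at different positions, each such reconstruction carries a different shading profile, which is what the photometric stereo step exploits.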

11

Retinal imaging in uveitis  

PubMed Central

Ancillary investigations are the backbone of the uveitis workup for posterior segment inflammations. They help in establishing the differential diagnosis, aid in making a definitive diagnosis by ruling out certain pathologies, and are useful in monitoring response to therapy during follow-up. These investigations include fundus photography including ultra-wide-field angiography, fundus autofluorescence imaging, fluorescein angiography, optical coherence tomography, and multimodal imaging. This review aims to provide an overview of the role of these retinal investigations in posterior uveitis. PMID:24843301

Gupta, Vishali; Al-Dhibi, Hassan A.; Arevalo, J. Fernando

2014-01-01

12

Quantitative analysis of retinal layers' optical intensities on 3D optical coherence tomography for central retinal artery occlusion  

PubMed Central

Optical coherence tomography (OCT) provides not only morphological information but also information about layer-specific optical intensities, which may represent the underlying tissue properties. The purpose of this study is to quantitatively investigate the optical intensity of each retinal layer in central retinal artery occlusion (CRAO). Twenty-nine CRAO cases in the acute phase and 33 normal controls were included. Macula-centered 3D OCT images were segmented into 10 layers with the fully automated Iowa Reference Algorithm. Layer-specific mean intensities were determined and compared between the patient and control groups using multiple regression analysis while adjusting for age and the optical intensity of the entire region. The optical intensities were higher in CRAO than in controls in layers spanning from the retinal ganglion cell layer to the outer plexiform layer (standardized beta = 0.657 to 0.777, all p < 0.001), possibly due to ischemia. Optical intensities were lower at the photoreceptor, retinal pigment epithelium (RPE), and choroid layers (standardized beta = −0.412 to −0.611, all p < 0.01), possibly due to shadowing effects. Among the intraretinal layers, the inner nuclear layer was identified as the best indicator of CRAO. Our study provides in vivo information on the optical intensity changes in each retinal layer in CRAO patients. PMID:25784298

Chen, Haoyu; Chen, Xinjian; Qiu, Zhiqiao; Xiang, Dehui; Chen, Weiqi; Shi, Fei; Zheng, Jianlong; Zhu, Weifang; Sonka, Milan

2015-01-01
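The comparison above is reported as standardized betas from multiple regression. A standardized beta is simply the OLS slope after z-scoring the response and the predictors; the sketch below illustrates this on synthetic data (illustrative values, not the study's).

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic example: does group (CRAO vs. control) predict layer intensity,
# adjusting for age?  Data are made up for illustration only.
n = 60
group = rng.integers(0, 2, n).astype(float)    # 0 = control, 1 = CRAO
age = rng.normal(60.0, 10.0, n)
intensity = 5.0 + 2.0 * group + 0.01 * age + rng.normal(0.0, 0.5, n)

def standardized_betas(y, *predictors):
    """OLS on z-scored variables; the slopes are then standardized betas."""
    z = lambda v: (v - v.mean()) / v.std()
    X = np.column_stack([np.ones(len(y))] + [z(p) for p in predictors])
    coef, *_ = np.linalg.lstsq(X, z(y), rcond=None)
    return coef[1:]            # drop the intercept

betas = standardized_betas(intensity, group, age)
print(np.round(betas, 2))      # group effect dominates the age effect
```

Because both sides are z-scored, each beta is unit-free and comparable across predictors, which is why the study can compare effect sizes across retinal layers.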

13

Transplantation of Embryonic and Induced Pluripotent Stem Cell-Derived 3D Retinal Sheets into Retinal Degenerative Mice  

PubMed Central

Summary In this article, we show that mouse embryonic stem cell- or induced pluripotent stem cell-derived 3D retinal tissue developed a structured outer nuclear layer (ONL) with complete inner and outer segments even in an advanced retinal degeneration model (rd1) that lacked the ONL. We also observed host-graft synaptic connections by immunohistochemistry. This study provides a "proof of concept" for retinal sheet transplantation therapy for advanced retinal degenerative diseases. PMID:24936453

Assawachananont, Juthaporn; Mandai, Michiko; Okamoto, Satoshi; Yamada, Chikako; Eiraku, Mototsugu; Yonemura, Shigenobu; Sasai, Yoshiki; Takahashi, Masayo

2014-01-01

14

Consistent stylization of stereoscopic 3D images  

Microsoft Academic Search

The application of stylization filters to photographs is common, Instagram being a popular recent example. These image manipulation applications work well for 2D images. However, stereoscopic 3D cameras are increasingly available to consumers (Nintendo 3DS, Fuji W3 3D, HTC Evo 3D). How will users apply these same stylizations to stereoscopic images?

Lesley Northam; Paul Asente; Craig S. Kaplan

2012-01-01

15

3D Modeling From 2D Images  

Microsoft Academic Search

This article gives an overview of methods for the transition from a set of images to a 3D model. A direct method of creating a 3D model using 3D software is described. Creating photorealistic 3D models from a set of photographs is a challenging problem in computer vision because the technology is still in its development stage while the demands for 3D

Lana Madracevic; Stjepan Sogoric

2010-01-01

16

3D Cardiac Deformation from Ultrasound Images  

Microsoft Academic Search

The quantitative estimation of regional cardiac deformation from 3D image sequences has important clinical implications for the assessment of viability in the heart wall. Such estimates have so far been obtained almost exclusively from Magnetic Resonance (MR) images, specifically MR tagging. In this paper we describe a methodology for estimating cardiac deformations from 3D ultrasound images. The images are

Xenophon Papademetris; Albert J. Sinusas; Donald P. Dione; James S. Duncan

1999-01-01

17

Analysis of posterior retinal layers in spectral optical coherence tomography images of the normal retina and retinal pathologies  

E-print Network

We present a computationally efficient, semiautomated method for analysis of posterior retinal layers in three-dimensional (3-D) images obtained by spectral optical coherence tomography (SOCT). The method consists of two ...

Szkulmowski, Maciej

18

3D ultrafast ultrasound imaging in vivo  

NASA Astrophysics Data System (ADS)

Very high frame rate ultrasound imaging has recently allowed for the extension of the applications of echography to new fields of study such as the functional imaging of the brain, cardiac electrophysiology, and the quantitative imaging of the intrinsic mechanical properties of tumors, to name a few, non-invasively and in real time. In this study, we present the first implementation of Ultrafast Ultrasound Imaging in 3D based on the use of either diverging or plane waves emanating from a sparse virtual array located behind the probe. It achieves high contrast and resolution while maintaining imaging rates of thousands of volumes per second. A customized portable ultrasound system was developed to sample 1024 independent channels and to drive a 32 × 32 matrix-array probe. Its ability to track in 3D transient phenomena occurring in the millisecond range within a single ultrafast acquisition was demonstrated for 3D Shear-Wave Imaging, 3D Ultrafast Doppler Imaging, and, finally, 3D Ultrafast combined Tissue and Flow Doppler Imaging. The propagation of shear waves was tracked in a phantom and used to characterize its stiffness. 3D Ultrafast Doppler was used to obtain 3D maps of Pulsed Doppler, Color Doppler, and Power Doppler quantities in a single acquisition and revealed, at thousands of volumes per second, the complex 3D flow patterns occurring in the ventricles of the human heart during an entire cardiac cycle, as well as the 3D in vivo interaction of blood flow and wall motion during the pulse wave in the carotid at the bifurcation. This study demonstrates the potential of 3D Ultrafast Ultrasound Imaging for the 3D mapping of stiffness, tissue motion, and flow in humans in vivo and promises new clinical applications of ultrasound with reduced intra- and inter-observer variability.

Provost, Jean; Papadacci, Clement; Esteban Arango, Juan; Imbault, Marion; Fink, Mathias; Gennisson, Jean-Luc; Tanter, Mickael; Pernot, Mathieu

2014-10-01

19

Retinal Image Quality During Accommodation  

PubMed Central

Purpose We asked if retinal image quality is maximal during accommodation, or sub-optimal due to accommodative error, when subjects perform an acuity task. Methods Subjects viewed a monochromatic (552 nm), high-contrast letter target placed at various viewing distances. Wavefront aberrations of the accommodating eye were measured near the endpoint of an acuity staircase paradigm. Refractive state, defined as the optimum target vergence for maximizing retinal image quality, was computed by through-focus wavefront analysis to find the power of the virtual correcting lens that maximizes the visual Strehl ratio. Results Despite changes in ocular aberrations and pupil size during binocular viewing, retinal image quality and visual acuity typically remain high for all target vergences. When accommodative errors lead to sub-optimal retinal image quality, acuity and measured image quality both decline. However, the effect of accommodative errors on visual acuity is mitigated by the pupillary constriction associated with accommodation and binocular convergence, and by binocular summation of dissimilar retinal image blur. Under monocular viewing conditions some subjects displayed significant accommodative lag that reduced visual performance, an effect that was exacerbated by pharmacological dilation of the pupil. Conclusions Spurious measurement of accommodative error can be avoided when the image quality metric used to determine refractive state is compatible with the focusing criteria used by the visual system to control accommodation. Real focusing errors of the accommodating eye do not necessarily produce a reliably measurable loss of image quality or clinically significant loss of visual performance, probably because of increased depth-of-focus due to pupil constriction. When retinal image quality is close to the maximum achievable (given the eye's higher-order aberrations), acuity is also near maximum.
A combination of accommodative lag, reduced image quality, and reduced visual function may be a useful sign for diagnosing functionally significant accommodative errors indicating the need for therapeutic intervention. PMID:23786386

López-Gil, N.; Martin, J.; Liu, T.; Bradley, A.; Díaz-Muñoz, D.; Thibos, L.

2013-01-01
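The through-focus analysis above finds the virtual correcting-lens power that maximizes an image-quality metric. A minimal sketch follows, using the simple Marechal approximation to the Strehl ratio in place of the study's visual Strehl metric; the pupil radius is an assumed value, and the 552 nm wavelength comes from the record above.

```python
import math

PUPIL_RADIUS_MM = 2.5          # assumed pupil radius
WAVELENGTH_UM = 0.552          # 552 nm target, as in the study

def residual_defocus_rms_um(eye_defocus_d, lens_power_d):
    """Zernike defocus coefficient (um RMS) left after a virtual trial lens.

    Uses the standard conversion M = 4*sqrt(3)*c20 / r^2 between dioptric
    defocus M and the Zernike defocus coefficient c20 (r in mm, c20 in um).
    """
    m = eye_defocus_d - lens_power_d
    return abs(m) * PUPIL_RADIUS_MM ** 2 / (4 * math.sqrt(3))

def strehl(eye_defocus_d, lens_power_d):
    """Marechal approximation to the Strehl ratio for the residual defocus."""
    sigma = residual_defocus_rms_um(eye_defocus_d, lens_power_d)
    return math.exp(-(2 * math.pi * sigma / WAVELENGTH_UM) ** 2)

def refractive_state(eye_defocus_d, powers):
    """Through-focus search: the lens power maximizing image quality."""
    return max(powers, key=lambda p: strehl(eye_defocus_d, p))

powers = [p / 8 for p in range(-32, 33)]   # -4 D .. +4 D in 0.125 D steps
best = refractive_state(1.5, powers)       # eye with +1.5 D accommodative lag
print(best)  # 1.5
```

The study's point is that when the metric scanned here matches the visual system's own focusing criterion, the recovered refractive state does not report spurious accommodative error.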

20

Semaphorin3D Guides Retinal Axons along the Dorsoventral Axis of the Tectum  

Microsoft Academic Search

We examined the role of Sema3D, a semaphorin of previously unknown function, in guiding retinal ganglion cell (RGC) axons to the optic tectum in the developing zebrafish. Sema3D is expressed more strongly in the ventral versus dorsal tectum, suggesting that it may participate in guiding RGC axons along the dorsoventral axis of the tectum. Ubiquitous misexpression of Sema3D in transgenic

Yan Liu; Jason Berndt; Fengyun Su; Hiroshi Tawarayama; Wataru Shoji; John Y. Kuwada; Mary C. Halloran

2004-01-01

21

Comparison of reflectivity maps and outer retinal topography in retinal disease by 3-D Fourier domain optical coherence tomography  

PubMed Central

We demonstrate and compare two image processing methods for visualization and analysis of three-dimensional optical coherence tomography (OCT) data acquired in eyes with different retinal pathologies. A method of retinal layer segmentation based on a multiple intensity thresholding algorithm was implemented in order to generate simultaneously outer retinal topography maps and reflectivity maps. We compare the applicability of the two methods to the diagnosis of retinal diseases and their progression. The data presented in this contribution were acquired with a high speed (25,000 A-scans/s), high resolution (4.5 µm) spectral OCT prototype instrument operating in the ophthalmology clinic. PMID:19259255

Wojtkowski, Maciej; Sikorski, Bartosz L.; Gorczynska, Iwona; Gora, Michalina; Szkulmowski, Maciej; Bukowska, Danuta; Kałużny, Jakub; Fujimoto, James G.; Kowalczyk, Andrzej

2009-01-01

22

Fundamentals of 3D Laplacian Image pyramids  

E-print Network

Lecture-slide excerpt (INF555, Fundamentals of 3D, Lecture 9): Laplacian image pyramids; Expectation-Maximization; interpreting Fourier spectra; interpolation and estimation of Laplacian pyramid levels; residual reconstruction; precursors of wavelets.

Nielsen, Frank

23

3D imaging in forensic odontology.  

PubMed

This paper describes the investigation of a new 3D capture method for acquiring bite mark injuries on human skin and for their subsequent forensic analysis. When documenting bite marks with standard 2D cameras, errors in photographic technique can occur if best practice is not followed. Subsequent forensic analysis of the mark is problematic when a 3D structure is recorded in a 2D space. Although strict guidelines (BAFO) exist, these are time-consuming to follow and, due to their complexity, may produce errors. A 3D image capture and processing system might avoid the problems resulting from the 2D reduction process, simplifying the guidelines and reducing errors. Proposed solution: a series of experiments is described in this paper to demonstrate that a 3D system has the potential to produce suitable results. The experiments tested the precision and accuracy of the traditional 2D and 3D methods. A 3D image capture device minimises the amount of angular distortion, so such a system has the potential to create more robust forensic evidence for use in courts. A first set of experiments tested and demonstrated which method of forensic analysis creates the least intra-operator error. A second set tested and demonstrated which method of image capture creates the least inter-operator error and visual distortion. In a third set, the effects of angular distortion on 2D and 3D methods of image capture were evaluated. PMID:20557154

Evans, Sam; Jones, Carl; Plassmann, Peter

2010-06-16

24

3D holoscopic video imaging system  

NASA Astrophysics Data System (ADS)

For many years, integral imaging has been discussed as a technique to overcome the limitations of standard still photography imaging systems, where a three-dimensional scene is irrevocably projected onto two dimensions. With the success of 3D stereoscopic movies, a huge interest in capturing three-dimensional motion picture scenes has been generated. In this paper, we present a test bench integral imaging camera system aiming to tailor the methods of light field imaging towards capturing integral 3D motion picture content. We estimate the hardware requirements needed to generate high quality 3D holoscopic images and show a prototype camera setup that allows us to study these requirements using existing technology. The necessary steps that are involved in the calibration of the system as well as the technique of generating human readable holoscopic images from the recorded data are discussed.

Steurer, Johannes H.; Pesch, Matthias; Hahne, Christopher

2012-03-01

25

Reversible 3-D decorrelation of medical images.  

PubMed

Two methods, namely, differential pulse code modulation (DPCM) and hierarchical interpolation (HINT), are considered. It is shown that HINT cannot be extended straightforwardly to 3-D images as contrasted with DPCM. A 3-D HINT is therefore proposed which is based on a combination of 2-D and 3-D filters. Both decorrelation methods were applied to three-dimensional computed tomography (CT), magnetic resonance (MR), and single-photon-emission CT (SPECT) images. It was found that a 3-D approach is optimal for some studies, while for other studies 2-D or even 1-D decorrelation performs better. The optimal dimensionality of DPCM is related to the magnitudes of the local correlation coefficients (CCs). However, the nonlocal nature of HINT makes the local correlation coefficients useless as indicators of the dimensionality; a better candidate is the image voxel size. For images with cubic or nearly cubic voxels 3-D HINT is generally optimal. For images in which the slice thickness is large compared to the pixel size a 2-D (intraslice) HINT is best. In general, the increase in efficiency obtained by extending 2-D decorrelation method to 3-D is small. PMID:18218433

Roos, P; Viergever, M A

1993-01-01
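The DPCM decorrelation discussed above can be sketched in 1-D: each sample is replaced by its difference from its predecessor, which concentrates correlated data near zero and lowers its empirical entropy. A minimal reversible sketch (not the paper's 3-D variant):

```python
import math
from collections import Counter

def dpcm_encode(samples):
    """1-D DPCM: transmit each sample as its difference from the previous one."""
    out = [samples[0]]
    for prev, cur in zip(samples, samples[1:]):
        out.append(cur - prev)
    return out

def dpcm_decode(residuals):
    out = [residuals[0]]
    for r in residuals[1:]:
        out.append(out[-1] + r)
    return out

def entropy(values):
    """Empirical zeroth-order entropy in bits/sample."""
    counts = Counter(values)
    n = len(values)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

# A smooth, correlated signal: DPCM residuals cluster near zero,
# so their entropy drops and the data compress better.
signal = [round(100 * math.sin(i / 20)) for i in range(500)]
residuals = dpcm_encode(signal)
assert dpcm_decode(residuals) == signal      # perfectly reversible
print(entropy(signal), '->', entropy(residuals))
```

The paper's observation that the optimal dimensionality depends on the local correlation coefficients corresponds here to how strongly neighboring samples predict each other: the weaker the correlation, the less the residual entropy drops.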

26

Intra-retinal layer segmentation of 3D optical coherence tomography using coarse grained diffusion map.  

PubMed

Optical coherence tomography (OCT) is a powerful and noninvasive method for retinal imaging. In this paper, we introduce a fast segmentation method based on a new variant of spectral graph theory named diffusion maps. The research is performed on spectral domain (SD) OCT images depicting macular and optic nerve head appearance. The presented approach does not require edge-based image information to localize most of the boundaries and relies on regional image texture. Consequently, the proposed method demonstrates robustness in situations of low image contrast or poor layer-to-layer image gradients. Diffusion mapping applied to 2D and 3D OCT datasets is composed of two steps, one for partitioning the data into important and less important sections, and another for localization of internal layers. In the first step, the pixels/voxels are grouped in rectangular/cubic sets to form graph nodes. The weights of the graph are calculated based on geometric distances between pixels/voxels and differences of their mean intensity. The first diffusion map clusters the data into three parts, the second of which is the area of interest. The other two sections are eliminated from the remaining calculations. In the second step, the remaining area is subjected to another diffusion map assessment and the internal layers are localized based on their textural similarities. The proposed method was tested on 23 datasets from two patient groups (glaucoma and normal). The mean unsigned border positioning errors (mean ± SD) were 8.52 ± 3.13 and 7.56 ± 2.95 µm for the 2D and 3D methods, respectively. PMID:23837966

Kafieh, Raheleh; Rabbani, Hossein; Abramoff, Michael D; Sonka, Milan

2013-12-01
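The diffusion-map step above can be sketched as: build a Gaussian affinity graph over the data, row-normalize it into a Markov (diffusion) operator, and embed each node by the leading non-trivial eigenvectors. This minimal sketch uses synthetic 2-D points, and the `eps` kernel width is an assumed parameter; it is not the authors' coarse-grained implementation.

```python
import numpy as np

def diffusion_map(points, eps=4.0, n_components=2):
    """Minimal diffusion-map embedding of a point set."""
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / eps)                   # Gaussian affinity graph
    P = W / W.sum(axis=1, keepdims=True)    # row-stochastic diffusion operator
    vals, vecs = np.linalg.eig(P)
    order = np.argsort(-vals.real)
    # order[0] is the trivial constant eigenvector (eigenvalue 1); skip it.
    return vecs.real[:, order[1:n_components + 1]]

rng = np.random.default_rng(0)
# Two "texture" clusters standing in for bright and dark OCT regions.
a = rng.normal([0.0, 0.0], 0.2, (30, 2))
b = rng.normal([3.0, 3.0], 0.2, (30, 2))
emb = diffusion_map(np.vstack([a, b]))

# The leading non-trivial diffusion coordinate separates the two groups.
print(emb[:30, 0].mean(), emb[30:, 0].mean())
```

In the paper the graph nodes are blocks of pixels/voxels and the edge weights combine geometric distance with mean-intensity difference, but the spectral embedding and clustering machinery is the same.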

27

Intra-retinal layer segmentation of 3D optical coherence tomography using coarse grained diffusion map  

PubMed Central

Optical coherence tomography (OCT) is a powerful and noninvasive method for retinal imaging. In this paper, we introduce a fast segmentation method based on a new variant of spectral graph theory named diffusion maps. The research is performed on spectral domain (SD) OCT images depicting macular and optic nerve head appearance. The presented approach does not require edge-based image information to localize most of the boundaries and relies on regional image texture. Consequently, the proposed method demonstrates robustness in situations of low image contrast or poor layer-to-layer image gradients. Diffusion mapping applied to 2D and 3D OCT datasets is composed of two steps, one for partitioning the data into important and less important sections, and another for localization of internal layers. In the first step, the pixels/voxels are grouped in rectangular/cubic sets to form graph nodes. The weights of the graph are calculated based on geometric distances between pixels/voxels and differences of their mean intensity. The first diffusion map clusters the data into three parts, the second of which is the area of interest. The other two sections are eliminated from the remaining calculations. In the second step, the remaining area is subjected to another diffusion map assessment and the internal layers are localized based on their textural similarities. The proposed method was tested on 23 datasets from two patient groups (glaucoma and normal). The mean unsigned border positioning errors (mean ± SD) were 8.52 ± 3.13 and 7.56 ± 2.95 µm for the 2D and 3D methods, respectively. PMID:23837966

Kafieh, Raheleh; Rabbani, Hossein; Abramoff, Michael D.; Sonka, Milan

2013-01-01

28

ICER-3D Hyperspectral Image Compression Software  

NASA Technical Reports Server (NTRS)

Software has been developed to implement the ICER-3D algorithm. ICER-3D effects progressive, three-dimensional (3D), wavelet-based compression of hyperspectral images. If a compressed data stream is truncated, the progressive nature of the algorithm enables reconstruction of hyperspectral data at fidelity commensurate with the given data volume. The ICER-3D software is capable of providing either lossless or lossy compression, and incorporates an error-containment scheme to limit the effects of data loss during transmission. The compression algorithm, which was derived from the ICER image compression algorithm, includes wavelet-transform, context-modeling, and entropy coding subalgorithms. The 3D wavelet decomposition structure used by ICER-3D exploits correlations in all three dimensions of sets of hyperspectral image data, while facilitating elimination of spectral ringing artifacts, using a technique summarized in "Improving 3D Wavelet-Based Compression of Spectral Images" (NPO-41381), NASA Tech Briefs, Vol. 33, No. 3 (March 2009), page 7a. Correlation is further exploited by a context-modeling subalgorithm, which exploits spectral dependencies in the wavelet-transformed hyperspectral data, using an algorithm that is summarized in "Context Modeler for Wavelet Compression of Hyperspectral Images" (NPO-43239), which follows this article. An important feature of ICER-3D is a scheme for limiting the adverse effects of loss of data during transmission. In this scheme, as in the similar scheme used by ICER, the spatial-frequency domain is partitioned into rectangular error-containment regions. In ICER-3D, the partitions extend through all the wavelength bands. The data in each partition are compressed independently of those in the other partitions, so that loss or corruption of data from any partition does not affect the other partitions. 
Furthermore, because compression is progressive within each partition, when data are lost, any data from that partition received prior to the loss can be used to reconstruct that partition at lower fidelity. By virtue of the compression improvement it achieves relative to previous means of onboard data compression, this software enables (1) increased return of hyperspectral scientific data in the presence of limits on the rates of transmission of data from spacecraft to Earth via radio communication links and/or (2) reduction in spacecraft radio-communication power and/or cost through reduction in the amounts of data required to be downlinked and stored onboard prior to downlink. The software is also suitable for compressing hyperspectral images for ground storage or archival purposes.
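ICER-3D's actual wavelet filters and entropy coder are NASA flight software and are not reproduced here, but the central idea of decorrelating along both spatial axes and the spectral axis can be illustrated with a single-level 3D Haar transform, a simplified stand-in for the real decomposition:

```python
import numpy as np

def haar_fwd(x, axis):
    """One Haar level along one axis: averages first, then details."""
    x = np.moveaxis(x, axis, 0)
    avg = (x[0::2] + x[1::2]) / 2.0
    dif = (x[0::2] - x[1::2]) / 2.0
    return np.moveaxis(np.concatenate([avg, dif]), 0, axis)

def haar_inv(y, axis):
    """Invert haar_fwd along one axis."""
    y = np.moveaxis(y, axis, 0)
    n = y.shape[0] // 2
    avg, dif = y[:n], y[n:]
    x = np.empty_like(y)
    x[0::2] = avg + dif
    x[1::2] = avg - dif
    return np.moveaxis(x, 0, axis)

def haar3d(cube):
    """Transform a (bands, rows, cols) hyperspectral cube along all three axes."""
    for ax in range(3):
        cube = haar_fwd(cube, ax)
    return cube

def ihaar3d(coef):
    for ax in reversed(range(3)):
        coef = haar_inv(coef, ax)
    return coef
```

Progressive, lossy behavior follows from transmitting coefficients coarsest-first: zeroing the smallest detail coefficients before the inverse transform still yields a complete, lower-fidelity cube.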

Xie, Hua; Kiely, Aaron; Klimesh, Matthew; Aranki, Nazeeh

2010-01-01

29

Acquisition and applications of 3D images  

NASA Astrophysics Data System (ADS)

The moiré fringe method and its analysis, with applications ranging from medicine to entertainment, are discussed in this paper. We describe the procedure of capturing 3D images with an Inspeck Camera, a real-time 3D shape acquisition system based on structured light techniques. The method is a high-resolution one. After processing the images on a computer, the data can be used to create fashionable objects by engraving them with a Q-switched Nd:YAG laser. In the medical field we mention plastic surgery and the replacement of X-ray imaging, especially in pediatric use.

Sterian, Paul; Mocanu, Elena

2007-08-01

30

3D camera tracking from disparity images  

NASA Astrophysics Data System (ADS)

In this paper, we propose a robust camera tracking method that uses disparity images computed from the known parameters of a 3D camera and multiple epipolar constraints. We assume that the baselines between the lenses of the 3D camera and the intrinsic parameters are known. The proposed method reduces the camera motion uncertainty encountered during tracking. Specifically, we first obtain corresponding feature points between the initial lenses using a normalized correlation method; together with the matched features, we obtain disparity images. When the camera moves, the corresponding feature points obtained from each lens of the 3D camera are robustly tracked via the Kanade-Lucas-Tomasi (KLT) tracking algorithm. Secondly, the relative pose parameters of each lens are calculated via essential matrices, which are computed from the fundamental matrix estimated with the normalized 8-point algorithm within a RANSAC scheme. Then, we determine the scale factor of the translation by d-motion; this is required because the camera motion obtained from the essential matrix is only up to scale. Finally, we optimize the camera motion using multiple epipolar constraints between lenses and d-motion constraints computed from the disparity images. The proposed method can be widely adopted in Augmented Reality (AR) applications, 3D reconstruction using a 3D camera, and surveillance systems that need not only depth information but also camera motion parameters in real time.
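The fundamental-matrix step named above is the classic normalized 8-point algorithm; a minimal numpy version (without the RANSAC outlier loop, and with function names of our choosing) can look like this:

```python
import numpy as np

def normalize_pts(pts):
    """Hartley normalization: zero centroid, mean distance sqrt(2)."""
    c = pts.mean(axis=0)
    s = np.sqrt(2) / np.mean(np.linalg.norm(pts - c, axis=1))
    T = np.array([[s, 0, -s * c[0]], [0, s, -s * c[1]], [0, 0, 1]])
    return np.c_[pts, np.ones(len(pts))] @ T.T, T

def fundamental_8pt(x1, x2):
    """Estimate F with x2_h^T F x1_h = 0 from >= 8 point correspondences."""
    p1, T1 = normalize_pts(x1)
    p2, T2 = normalize_pts(x2)
    # Each correspondence gives one linear constraint on the 9 entries of F
    A = np.stack([np.kron(b, a) for a, b in zip(p1, p2)])
    F = np.linalg.svd(A)[2][-1].reshape(3, 3)
    U, S, Vt = np.linalg.svd(F)
    F = U @ np.diag([S[0], S[1], 0.0]) @ Vt          # enforce rank 2
    F = T2.T @ F @ T1                                # undo normalization
    return F / np.linalg.norm(F)
```

With known intrinsics K, the essential matrix then follows as E = K.T @ F @ K, from which the relative rotation and up-to-scale translation are factored, as the abstract describes.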

Kim, Kiyoung; Woo, Woontack

2005-07-01

31

3D Face Mesh Modeling from Range Images for 3D Face Recognition  

Microsoft Academic Search

We present an algorithm for 3D face deformation and modeling using range data captured by a 3D scanner. Using only three facial feature points extracted from the range images and a 3D generic face model, the algorithm first aligns the 3D model to the entire range data of a given subject's face. Then each aligned triangle of the mesh model,

A-nasser Ansari; Mohamed Abdel-mottaleb; Mohammad H. Mahoor

2007-01-01

32

Integration of retinal image sequences  

NASA Astrophysics Data System (ADS)

In this paper a method for noise reduction in ocular fundus image sequences is described. The eye is the only part of the human body where the capillary network, along with the arterial and venous circulation, can be observed using a noninvasive technique. The study of the retinal vessels is very important both for the study of local pathology (retinal disease) and for the large amount of information it offers on systemic haemodynamics, as in hypertension, arteriosclerosis, and diabetes. The procedure can be divided into two steps: registration and fusion. First we describe an automatic alignment algorithm for registration of ocular fundus images. In order to enhance vessel structures, we used a spatially oriented bank of filters designed to match the properties of the objects of interest. To evaluate interframe misalignment we adopted a fast cross-correlation algorithm. The performance of the alignment method has been estimated by simulating shifts between image pairs and by using a cross-validation approach. We then propose a temporal integration technique for image sequences so as to compute enhanced pictures of the overall capillary network. Image registration is combined with image enhancement by fusing subsequent frames of the same region. To evaluate the attainable results, the signal-to-noise ratio was estimated before and after integration. Experimental results on synthetic images of vessel-like structures with different kinds of additive Gaussian noise, as well as on real fundus images, are reported.
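The register-then-fuse idea can be sketched with a plain FFT-based correlation shift estimator and frame averaging. This is illustrative only: the paper additionally uses oriented matched filters for vessel enhancement and cross-validation of the alignment, which this sketch omits:

```python
import numpy as np

def estimate_shift(ref, img):
    """Phase correlation: returns integer (dy, dx) such that
    np.roll(img, (dy, dx)) aligns img with ref (cyclic shift model)."""
    x = np.fft.fft2(ref) * np.conj(np.fft.fft2(img))
    corr = np.fft.ifft2(x / (np.abs(x) + 1e-12)).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap to the signed range [-N/2, N/2]
    if dy > ref.shape[0] // 2:
        dy -= ref.shape[0]
    if dx > ref.shape[1] // 2:
        dx -= ref.shape[1]
    return int(dy), int(dx)

def integrate(frames):
    """Align every frame to the first, then average to raise SNR."""
    ref = frames[0]
    aligned = [ref]
    for f in frames[1:]:
        dy, dx = estimate_shift(ref, f)
        aligned.append(np.roll(f, (dy, dx), axis=(0, 1)))
    return np.mean(aligned, axis=0)
```

Averaging N correctly aligned frames reduces the standard deviation of additive, uncorrelated noise by a factor of sqrt(N), which is the source of the SNR gain the abstract reports.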

Ballerini, Lucia

1998-10-01

33

Texture anisotropy in 3-D images  

Microsoft Academic Search

Two approaches to the characterization of three-dimensional (3-D) textures are presented: one based on gradient vectors and one on generalized co-occurrence matrices. They are investigated with the help of simulated data for their behavior in the presence of noise and for various values of the parameters they depend on. They are also applied to several medical volume images characterized by

Vassili A. Kovalev; Maria Petrou; Yaroslav S. Bondar

1999-01-01

34

3D goes digital: from stereoscopy to modern 3D imaging techniques  

NASA Astrophysics Data System (ADS)

In the 19th century, English physicist Charles Wheatstone discovered stereopsis, the basis for 3D perception. His construction of the first stereoscope established the foundation for stereoscopic 3D imaging. Since then, many optical instruments were influenced by these basic ideas. In recent decades, the advent of digital technologies revolutionized 3D imaging. Powerful readily available sensors and displays combined with efficient pre- or post-processing enable new methods for 3D imaging and applications. This paper draws an arc from basic concepts of 3D imaging to modern digital implementations, highlighting instructive examples from its 175 years of history.

Kerwien, N.

2014-11-01

35

Imaging retinal progenitor lineages in developing zebrafish embryos.  

PubMed

In this protocol, we describe how to make and analyze four dimensional (4D) movies of retinal lineage in the zebrafish embryo in vivo. 4D consists of three spatial dimensions (3D) reconstructed from stacks of confocal planes plus one time dimension. Our imaging is performed on transgenic cells that express fluorescent proteins under the control of cell-specific promoters or on cells that transiently express such reporters in specific retinal cell progenitors. An important aspect of lineage tracing is the ability to follow individual cells as they undergo multiple cell divisions, final migration, and differentiation. This may mean many hours of 4D imaging, requiring that cells be kept healthy and maintained under conditions suitable for normal development. The longest movies we have made are ~50 h. By analyzing these movies, we can see when a specific cell was born and who its sister was, allowing us to reconstruct its retinal lineages in vivo. PMID:23457345

Jusuf, Patricia; Harris, William A; Poggi, Lucia

2013-03-01

36

Novel 3D stereoscopic imaging technology  

NASA Astrophysics Data System (ADS)

Numerous 3-D stereoscopic techniques have been explored. These previous techniques have had shortcomings precluding them from making stereoscopic imaging pervasive in mainstream applications. In the last decade, several enabling technologies have emerged and have become available and affordable. They make it possible now to realize the near-ideal stereoscopic imaging technology that can be made available to the masses making possible the inevitable transition from flat imaging to stereoscopic imaging. The ideal stereoscopic technology must meet four important criteria: (1) high stereoscopic image quality; (2) affordability; (3) compatibility with existing infrastructure, e.g., NTSC video, PC, and other devices; and (4) general purpose characteristics, e.g., the ability to produce electronic displays, hard-copy printing and capturing stereoscopic images on film and stored electronically. In section 2, an overview of prior art technologies is given highlighting their advantages and disadvantages. In section 3, the novel μPol™ stereoscopic technology is described making the case that it meets the four criteria for realizing the inevitable transition from flat to stereoscopic imaging for mass applications.

Faris, Sadeg M.

1994-04-01

37

Ames Lab 101: Real-Time 3D Imaging  

ScienceCinema

Ames Laboratory scientist Song Zhang explains his real-time 3-D imaging technology. The technique can be used to create high-resolution, real-time, precise, 3-D images for use in healthcare, security, and entertainment applications.

Zhang, Song

2012-08-29

38

Ames Lab 101: Real-Time 3D Imaging  

SciTech Connect

Ames Laboratory scientist Song Zhang explains his real-time 3-D imaging technology. The technique can be used to create high-resolution, real-time, precise, 3-D images for use in healthcare, security, and entertainment applications.

Zhang, Song

2010-01-01

39

REALISTIC 3D SCENE RECONSTRUCTION FROM UNCONSTRAINED AND UNCALIBRATED IMAGES  

E-print Network

Traditional modelling systems such as Maya, 3D Max or Blender are used in fields such as architecture, engineering, education and the arts. This work addresses the problem of reconstructing 3D scenes from a set of unconstrained and uncalibrated images; these image sequences can be acquired by a video …

Sun, Jing

40

3-D Images  

E-print Network

… a scatter of dots, but when you relax your eyes in just the right way a 3-D image pops out. Students who "crossed" their eyes saw everything in front of the page and saw the white flowers as being in front; students who saw it one way were unable to see the other. A few could flip back and forth. Then I gave them …

Taylor, Peter

41

3-D Volume Imaging for Dentistry: A New Dimension  

Microsoft Academic Search

The use of computed tomography for dental imaging procedures has in- creased recently. Use of CT for even seemingly routine diagnosis and treatment procedures suggests that the desire for 3-D imaging is more than a current trend but rather a shift toward a future of dimensional volume imaging. Recognizing this shift, several imaging manufacturers recently have developed 3-D imaging devices

Robert A. Danforth; Ivan Dus; James Mah

2003-01-01

42

Imaging a Sustainable Future in 3D  

NASA Astrophysics Data System (ADS)

It is the intention of this paper to contribute to a sustainable future by providing objective object information based on 3D photography, as well as by promoting 3D photography not only to scientists but also to amateurs. Because this article is presented by CIPA Task Group 3 on "3D Photographs in Cultural Heritage", the samples shown are masterpieces of historic as well as current 3D photography concentrating on cultural heritage. In addition to a report on exemplary access to international archives of 3D photographs, we cover new 3D photographs taken with modern 3D cameras, by means of a ground-based high-resolution XLITE staff camera, from a captive balloon, and from civil drone platforms. To advise on optimally suited 3D methodology, as well as to catch new trends in 3D, an updated synoptic overview of 3D visualization technology, without claiming completeness, has been carried out as a result of a systematic survey. In this respect, for example, today's lasered crystals might be "early bird" products in 3D which, due to their lack of resolution, contrast and color, recall the stage of the invention of photography.

Schuhr, W.; Lee, J. D.; Kanngieser, E.

2012-07-01

43

Toward a compact underwater structured light 3-D imaging system  

E-print Network

A compact underwater 3-D imaging system based on the principles of structured light was created for classroom demonstration and laboratory research purposes. The 3-D scanner design was based on research by the Hackengineer ...

Dawson, Geoffrey E

2013-01-01

44

Digital tracking and control of retinal images  

NASA Astrophysics Data System (ADS)

Laser induced retinal lesions are used to treat a variety of eye diseases such as diabetic retinopathy and retinal detachment. An instrumentation system has been developed to track a specific lesion coordinate on the retinal surface and provide corrective signals to maintain laser position on the coordinate. High resolution retinal images are acquired via a CCD camera coupled to a fundus camera and video frame grabber. Optical filtering and histogram modification are used to enhance the retinal vessel network against the lighter retinal background. Six distinct retinal landmarks are tracked on the high contrast image obtained from the frame grabber using two-dimensional blood vessel templates. The frame grabber is hosted on a 486 PC. The PC performs correction signal calculations using an exhaustive search on selected image portions. An X and Y laser correction signal is derived from the landmark tracking information and provided to a pair of galvanometer steered mirrors via a data acquisition and control subsystem. This subsystem also responds to patient inputs and monitors lesion growth. This paper begins with an overview of the robotic laser system design, followed by implementation and testing of a development system for proof of concept. The paper concludes with specifications for a real time system.
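The exhaustive landmark search over a selected image portion can be sketched with normalized cross-correlation template matching; the function names and the search-window parameter below are ours, not the authors':

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation of two equally sized patches."""
    a = a - a.mean()
    b = b - b.mean()
    d = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / d if d > 0 else 0.0

def track_landmark(image, template, prev_yx, search=8):
    """Exhaustive NCC search for the vessel template around the previous position."""
    th, tw = template.shape
    py, px = prev_yx
    best, best_yx = -2.0, prev_yx
    for y in range(max(0, py - search), min(image.shape[0] - th, py + search) + 1):
        for x in range(max(0, px - search), min(image.shape[1] - tw, px + search) + 1):
            s = ncc(image[y:y + th, x:x + tw], template)
            if s > best:
                best, best_yx = s, (y, x)
    return best_yx, best
```

In a real-time system such a search would run per landmark per frame; restricting it to a small window around the previous position is what keeps the exhaustive search tractable.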

Barrett, Steven F.; Jerath, Maya R.; Rylander, Henry G., III; Welch, Ashley J.

1993-06-01

45

Human body 3D imaging by speckle texture projection photogrammetry  

Microsoft Academic Search

Describes a non-contact optical sensing technology called C3D that is based on speckle texture projection photogrammetry. C3D has been applied to capturing all-round 3D models of the human body of high dimensional accuracy and photorealistic appearance. The essential strengths and limitations of the C3D approach are presented and the basic principles of this stereo-imaging approach are outlined, from image capture

J. Paul Siebert; Stephen J. Marshall

2000-01-01

46

A personal identification system using retinal vasculature in retinal fundus images  

Microsoft Academic Search

The characteristics of the human body such as fingerprint, face, hand palm and iris are measured, recorded and identified by comparison using biometric devices. Even though it has not yet seen widespread acceptance, retinal identification based on the vasculature of the retina provides the most secure and accurate authentication among biometric systems. Using retinal images taken from individuals, retinal identification

Cemal Köse

2011-01-01

47

Retinal image analysis: preprocessing and feature extraction  

NASA Astrophysics Data System (ADS)

Image processing, analysis and computer vision techniques are found today in all fields of medical science. These techniques are especially relevant to modern ophthalmology, a field heavily dependent on visual data. Retinal images are widely used for diagnostic purposes by ophthalmologists. However, these images often need visual enhancement prior to applying digital analysis for pathological risk or damage detection. In this work we propose the use of an image enhancement technique to compensate for non-uniform contrast and luminosity distribution in retinal images. We also explore optic nerve head segmentation by means of color mathematical morphology and the use of active contours.
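One common way to compensate non-uniform luminosity, consistent with the description above though not necessarily the authors' exact operator, is to subtract a large-scale background estimate from the image:

```python
import numpy as np

def box_blur(img, k):
    """Separable k x k moving average with edge padding."""
    ker = np.ones(k) / k
    pad = k // 2
    p = np.pad(img, pad, mode='edge')
    p = np.apply_along_axis(np.convolve, 1, p, ker, mode='same')
    p = np.apply_along_axis(np.convolve, 0, p, ker, mode='same')
    return p[pad:-pad, pad:-pad]

def compensate(img, k=31):
    """Remove the slowly varying background so vessels sit on a flat base."""
    return img - box_blur(img, k)
```

The kernel size k is a free parameter: it must be large relative to vessel width so that vessels survive the subtraction while the illumination gradient does not.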

Marrugo, Andrés G.; Millán, María S.

2011-01-01

48

Automatic detection, segmentation and characterization of retinal horizontal neurons in large-scale 3D confocal imagery  

NASA Astrophysics Data System (ADS)

Automatic analysis of neuronal structure from wide-field-of-view 3D image stacks of retinal neurons is essential for statistically characterizing neuronal abnormalities that may be causally related to neural malfunctions or may be early indicators for a variety of neuropathies. In this paper, we study classification of neuron fields in large-scale 3D confocal image stacks, a challenging neurobiological problem because of the low spatial resolution imagery and presence of intertwined dendrites from different neurons. We present a fully automated, four-step processing approach for neuron classification with respect to the morphological structure of their dendrites. In our approach, we first localize each individual soma in the image by using morphological operators and active contours. By using each soma position as a seed point, we automatically determine an appropriate threshold to segment dendrites of each neuron. We then use skeletonization and network analysis to generate the morphological structures of segmented dendrites, and shape-based features are extracted from network representations of each neuron to characterize the neuron. Based on qualitative results and quantitative comparisons, we show that we are able to automatically compute relevant features that clearly distinguish between normal and abnormal cases for postnatal day 6 (P6) horizontal neurons.

Karakaya, Mahmut; Kerekes, Ryan A.; Gleason, Shaun S.; Martins, Rodrigo A. P.; Dyer, Michael A.

2011-03-01

49

Automatic Detection, Segmentation and Classification of Retinal Horizontal Neurons in Large-scale 3D Confocal Imagery  

SciTech Connect

Automatic analysis of neuronal structure from wide-field-of-view 3D image stacks of retinal neurons is essential for statistically characterizing neuronal abnormalities that may be causally related to neural malfunctions or may be early indicators for a variety of neuropathies. In this paper, we study classification of neuron fields in large-scale 3D confocal image stacks, a challenging neurobiological problem because of the low spatial resolution imagery and presence of intertwined dendrites from different neurons. We present a fully automated, four-step processing approach for neuron classification with respect to the morphological structure of their dendrites. In our approach, we first localize each individual soma in the image by using morphological operators and active contours. By using each soma position as a seed point, we automatically determine an appropriate threshold to segment dendrites of each neuron. We then use skeletonization and network analysis to generate the morphological structures of segmented dendrites, and shape-based features are extracted from network representations of each neuron to characterize the neuron. Based on qualitative results and quantitative comparisons, we show that we are able to automatically compute relevant features that clearly distinguish between normal and abnormal cases for postnatal day 6 (P6) horizontal neurons.

Karakaya, Mahmut [ORNL]; Kerekes, Ryan A. [ORNL]; Gleason, Shaun Scott [ORNL]; Martins, Rodrigo [St. Jude Children's Research Hospital]; Dyer, Michael [St. Jude Children's Research Hospital]

2011-01-01

50

Retinal imaging using adaptive optics technology  

PubMed Central

Adaptive optics (AO) is a technology used to improve the performance of optical systems by reducing the effect of wavefront distortions. Retinal imaging using AO aims to compensate for higher order aberrations originating from the cornea and the lens by using a deformable mirror. The main application of AO retinal imaging has been to assess photoreceptor cell density, spacing, and mosaic regularity in normal and diseased eyes. Apart from photoreceptors, the retinal pigment epithelium, retinal nerve fiber layer, retinal vessel wall and lamina cribrosa can also be visualized with AO technology. Recent interest in AO technology in eye research has resulted in a growing number of reports and publications utilizing this technology in both animals and humans. With the first instruments now commercially available, AO technology is making the transformation from a research tool to a diagnostic instrument. The current challenges include imaging eyes with less than perfect optical media, formation of normative databases for acquired images such as cone mosaics, and the cost of the technology. The opportunities for AO will include more detailed diagnosis with description of new findings in retinal diseases and glaucoma, as well as expansion of AO into clinical trials, which has already started. PMID:24843304

Kozak, Igor

2014-01-01

51

Screening Diabetic Retinopathy Through Color Retinal Images  

NASA Astrophysics Data System (ADS)

Diabetic Retinopathy (DR) is a common complication of diabetes that damages the eye's retina. Recognizing DR as early as possible is very important to protect patients' vision. We propose a method for screening DR and distinguishing Proliferative Diabetic Retinopathy (PDR) from Non-Proliferative Diabetic Retinopathy (NPDR) automatically from color retinal images. This method evaluates the severity of DR by analyzing the appearance of bright lesions and retinal vessel patterns. The bright lesions are extracted through morphological reconstruction. After that, the retinal vessels are automatically extracted using multiscale matched filters, and the vessel patterns are analyzed by extracting the vessel net density. The experimental results demonstrate that it is an effective solution to screen DR and distinguish PDR from NPDR using only color retinal images.
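Extracting bright lesions through morphological reconstruction is commonly done with an h-dome transform: reconstruct img − h under img by iterated geodesic dilation, then subtract. A compact (if slow) numpy sketch, where h is a hypothetical contrast parameter rather than a value from the paper:

```python
import numpy as np

def dilate3(a):
    """Grayscale dilation with a flat 3x3 structuring element."""
    p = np.pad(a, 1, mode='edge')
    h, w = a.shape
    return np.max([p[i:i + h, j:j + w] for i in range(3) for j in range(3)], axis=0)

def reconstruct(marker, mask, max_iter=1000):
    """Morphological reconstruction by iterated geodesic dilation under mask."""
    cur = np.minimum(marker, mask)
    for _ in range(max_iter):
        nxt = np.minimum(dilate3(cur), mask)
        if np.array_equal(nxt, cur):
            break
        cur = nxt
    return cur

def h_dome(img, h):
    """Bright structures that rise more than h above their surroundings."""
    return img - reconstruct(img - h, img)
```

The dome image is near zero on the background and positive on bright lesions, so a simple threshold on it yields the lesion mask.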

Li, Qin; Jin, Xue-Min; Gao, Quan-Xue; You, Jane; Bhattacharya, Prabir

52

3D ultrasound imaging for prosthesis fabrication and diagnostic imaging  

SciTech Connect

The fabrication of a prosthetic socket for a below-the-knee amputee requires knowledge of the underlying bone structure in order to provide pressure relief for sensitive areas and support for load bearing areas. The goal is to enable the residual limb to bear pressure with greater ease and utility. Conventional methods of prosthesis fabrication are based on limited knowledge about the patient's underlying bone structure. A 3D ultrasound imaging system was developed at Sandia National Laboratories. The imaging system provides information about the location of the bones in the residual limb along with the shape of the skin surface. Computer assisted design (CAD) software can use this data to design prosthetic sockets for amputees. Ultrasound was selected as the imaging modality. A computer model was developed to analyze the effect of the various scanning parameters and to assist in the design of the overall system. The 3D ultrasound imaging system combines off-the-shelf technology for image capturing, custom hardware, and control and image processing software to generate two types of image data -- volumetric and planar. Both volumetric and planar images reveal definition of skin and bone geometry with planar images providing details on muscle fascial planes, muscle/fat interfaces, and blood vessel definition. The 3D ultrasound imaging system was tested on 9 unilateral below-the-knee amputees. Image data was acquired from both the sound limb and the residual limb. The imaging system was operated in both volumetric and planar formats. An x-ray CT (Computed Tomography) scan was performed on each amputee for comparison. Results of the test indicate beneficial use of ultrasound to generate databases for fabrication of prostheses at a lower cost and with better initial fit as compared to manually fabricated prostheses.

Morimoto, A.K.; Bow, W.J.; Strong, D.S. [and others]

1995-06-01

53

Enhancing retinal images by nonlinear registration  

E-print Network

Being able to image the human retina at high resolution opens a new era in many important fields, such as pharmacological research on retinal diseases, and research on human cognition, the nervous system, metabolism and the blood stream, to name a few. In this paper, we propose to share knowledge acquired in the fields of optics and imaging in solar astrophysics in order to improve retinal imaging at very high spatial resolution, with the prospect of performing medical diagnosis. The main purpose would be to assist health care practitioners by enhancing retinal images and detecting abnormal features. We apply a nonlinear registration method using local correlation tracking to increase the field of view and follow structure evolutions using correlation techniques borrowed from solar astronomy expertise. Another purpose is to define a tracer of movements after analyzing local correlations, to follow the proper motions of an image from one moment to another, such as changes in optical flows that would be o...

Molodij, Guillaume; Glanc, Marie; Chenegros, Guillaume

2014-01-01

54

Progress in 3-D Multiperspective Display by Integral Imaging  

Microsoft Academic Search

Three-dimensional (3-D) imaging techniques have the potential to establish a future mass-market in the fields of entertainment and communications. Integral imaging (InI), which can capture and display true 3-D color images, has been seen as the right technology for 3-D viewing for audiences of more than one person. Due to the advanced degree of its development, InI technology could be

Raúl Martinez-Cuenca; Genaro Saavedra; Manuel Martinez-Corral; Bahram Javidi

2009-01-01

55

Constructing Complex 3D Biological Environments from Medical Imaging Using  

E-print Network

… in humans, from histology images, to create a unique but realistic 3D virtual organ. Histology images were … the tissue. This information was then related back to the histology images, linking the 2D cross sections … Keywords: reconstruction, biological tissue, histology.

Romano, Daniela

56

Reconstruction-based 3D/2D image registration.  

PubMed

In this paper we present a novel 3D/2D registration method, where first, a 3D image is reconstructed from a few 2D X-ray images and next, the preoperative 3D image is brought into the best possible spatial correspondence with the reconstructed image by optimizing a similarity measure. Because the quality of the reconstructed image is generally low, we introduce a novel asymmetric mutual information similarity measure, which is able to cope with low image quality as well as with different imaging modalities. The novel 3D/2D registration method has been evaluated using standardized evaluation methodology and publicly available 3D CT, 3DRX, and MR and 2D X-ray images of two spine phantoms, for which gold standard registrations were known. In terms of robustness, reliability and capture range the proposed method outperformed the gradient-based method and the method based on digitally reconstructed radiographs (DRRs). PMID:16685964
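The registration is driven by a mutual-information similarity measure; the asymmetric variant is the paper's contribution, but the standard histogram-based MI that it modifies can be sketched as follows (a symmetric baseline, not the authors' measure):

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Histogram estimate of MI between two co-registered intensity arrays."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p = joint / joint.sum()
    px = p.sum(axis=1, keepdims=True)   # marginal of a
    py = p.sum(axis=0, keepdims=True)   # marginal of b
    nz = p > 0
    return float((p[nz] * np.log(p[nz] / (px @ py)[nz])).sum())
```

Registration then amounts to maximizing this value over the rigid-body parameters that map the preoperative image onto the reconstructed one; MI's strength, as the abstract notes, is that it tolerates different imaging modalities.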

Tomazevic, Dejan; Likar, Bostjan; Pernus, Franjo

2005-01-01

57

Retinal detachment repair - series (image)  

MedlinePLUS

... floaters", or loss of part of the visual field. Emergency retinal detachment surgery is necessary to prevent ... buckle is applied. This consists of a silicone patch wrapped around the eye, compressing the globe and ...

58

3D imaging for glasses-free multiview 3D displays  

Microsoft Academic Search

The multi-view three-dimensional (3D) visualization by means of a 3D display requires reproduction of scene light fields. The complete light field of a scene can be reproduced from the images of a scene ideally taken from infinite viewpoints. However, capturing the images of a scene from infinite viewpoints is not feasible for practical applications. Therefore, in this work, we propose

Sabri Gurbuz; Masahiro Kawakita; Sumio Yano; Shoichiro Iwasawa; Hiroshi Ando

2011-01-01

59

Automatic 2D-to-3D image conversion using 3D examples from the internet  

NASA Astrophysics Data System (ADS)

The availability of 3D hardware has so far outpaced the production of 3D content. Although to date many methods have been proposed to convert 2D images to 3D stereopairs, the most successful ones involve human operators and, therefore, are time-consuming and costly, while the fully-automatic ones have not yet achieved the same level of quality. This subpar performance is due to the fact that automatic methods usually rely on assumptions about the captured 3D scene that are often violated in practice. In this paper, we explore a radically different approach inspired by our work on saliency detection in images. Instead of relying on a deterministic scene model for the input 2D image, we propose to "learn" the model from a large dictionary of stereopairs, such as YouTube 3D. Our new approach is built upon a key observation and an assumption. The key observation is that among millions of stereopairs available on-line, there likely exist many stereopairs whose 3D content matches that of the 2D input (query). We assume that two stereopairs whose left images are photometrically similar are likely to have similar disparity fields. Our approach first finds a number of on-line stereopairs whose left image is a close photometric match to the 2D query and then extracts depth information from these stereopairs. Since disparities for the selected stereopairs differ due to differences in underlying image content, level of noise, distortions, etc., we combine them by using the median. We apply the resulting median disparity field to the 2D query to obtain the corresponding right image, while handling occlusions and newly-exposed areas in the usual way. We have applied our method in two scenarios. First, we used YouTube 3D videos in search of the most similar frames. Then, we repeated the experiments on a small, but carefully-selected, dictionary of stereopairs closely matching the query. 
This, to a degree, emulates the results one would expect from the use of an extremely large 3D repository. While far from perfect, the presented results demonstrate that on-line repositories of 3D content can be used for effective 2D-to-3D image conversion. With the continuously increasing amount of 3D data on-line and with the rapidly growing computing power in the cloud, the proposed framework seems a promising alternative to operator-assisted 2D-to-3D conversion.
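
The median-based fusion step described in this abstract can be sketched in Python. This is a minimal illustration, not the authors' implementation: the tiny disparity maps and the helper name `fuse_disparities` are hypothetical stand-ins for the disparity fields extracted from matched stereopairs.

```python
from statistics import median

def fuse_disparities(disparity_maps):
    """Fuse per-pixel disparity estimates from several matched
    stereopairs by taking the pixel-wise median, which is robust
    to outliers caused by noise, distortions, or mismatched content."""
    rows, cols = len(disparity_maps[0]), len(disparity_maps[0][0])
    return [[median(d[r][c] for d in disparity_maps)
             for c in range(cols)] for r in range(rows)]

# Three 2x2 disparity fields from hypothetical matched stereopairs;
# the middle value wins at every pixel.
maps = [
    [[1.0, 2.0], [3.0, 4.0]],
    [[1.2, 2.1], [2.9, 9.0]],   # 9.0 is an outlier
    [[0.9, 1.9], [3.1, 4.2]],
]
fused = fuse_disparities(maps)
print(fused)  # [[1.0, 2.0], [3.0, 4.2]]
```

The median discards the outlying 9.0 estimate at the lower-right pixel, which is the property the abstract relies on when combining disparities from stereopairs with differing content and noise levels.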

Konrad, J.; Brown, G.; Wang, M.; Ishwar, P.; Wu, C.; Mukherjee, D.

2012-03-01

60

DSP based image processing for retinal prosthesis  

Microsoft Academic Search

The real-time image processing in a retinal prosthesis consists of the implementation of various image processing algorithms such as edge detection, edge enhancement, and decimation. These real-time algorithmic computations may have a high level of computational complexity, and hence the use of digital signal processors (DSPs) for the implementation of such algorithms is proposed here. This application requires that the DSPs be

Neha J. Parikh; James D. Weiland; Mark S. Humayun; Saloni S. Shah; Gaurav S. Mohile

2004-01-01

61

Disparity guided exhibition watermarking for 3D stereo images  

Microsoft Academic Search

In this paper, a watermarking scheme for 3D stereo images is presented. The target application is 3D Digital Cinema. The watermarking is based on a dependent stereo image coding scheme, where the watermark is embedded in the JPEG2000 decoding pipeline after the inverse quantization and prior to the inverse discrete wavelet transform (IDWT). A perceptual mask is designed to cope

Rony Darazi; Mireia Montanola Sales; Li Weng; Benoît Macq; Bart Preneel

2011-01-01

62

Grain Segmentation of 3D Superalloy Images Using Multichannel EWCVT  

E-print Network

Grain segmentation of 3D superalloy images provides the superalloy's micro-structures. The task is challenging because (1) the number of grains in a superalloy sample could be thousands or even more; (2) the intensity

Wang, Song

63

Shade Face: Multiple Image-based 3D Face Recognition  

Microsoft Academic Search

Three-dimensional face recognition is illumination invariant; however, the acquisition process itself is not. In active 3D recognition, multiple images are captured while the face is actively illuminated with different patterns. We propose a 3D face recognition paradigm that bypasses reconstruction and exploits the plethora of information available in multiple images of a person acquired while varying

Ajmal S. Mian

64

Incremental Reconstruction of 3D Scenes from Multiple, Complex Images  

Microsoft Academic Search

The 3D Mosaic system is a vision system that incrementally reconstructs complex 3D scenes from a sequence of images obtained from multiple viewpoints. The system encompasses several levels of the vision process, starting with images and ending with symbolic scene descriptions. This paper describes the various components of the system, including stereo analysis, monocular analysis, and constructing and

Martin Herman; Takeo Kanade

1986-01-01

65

3D Image Viz-Analysis Tools and V3D Development Hackathon, July 26 -August 8, 2010  

E-print Network

3D Image Viz-Analysis Tools and V3D Development Hackathon, July 26 - August 8, 2010, Janelia Farm Research Campus.

Peng, Hanchuan

66

Retinal image quality assessment using generic features  

NASA Astrophysics Data System (ADS)

Retinal image quality assessment is an important step in automated eye disease diagnosis. Diagnosis accuracy is highly dependent on the quality of retinal images, because poor image quality might prevent the observation of significant eye features and disease manifestations. A robust algorithm is therefore required to evaluate the quality of images in a large database. We developed an algorithm for retinal image quality assessment based on generic features that is independent of segmentation methods. It exploits local sharpness and texture features by applying the cumulative probability of blur detection metric and the run-length encoding algorithm, respectively. The quality features are combined to evaluate an image's suitability for diagnosis purposes. Based on the recommendations of medical experts and our experience, we compared a global and a local approach. A support vector machine with radial basis functions was used as a nonlinear classifier to classify images into gradable and ungradable groups. We applied our methodology to 65 images of size 2592 × 1944 pixels that had been graded by a medical expert. The expert evaluated 38 images as gradable and 27 as ungradable. The results indicate very good agreement between the proposed algorithm's predictions and the medical expert's judgment: the sensitivity and specificity for the local approach are 92% and 94%, respectively. The algorithm demonstrates sufficient robustness to identify relevant images for automated diagnosis.
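
The reported sensitivity and specificity can be computed from confusion-matrix counts as in this minimal Python sketch; the labels, predictions, and helper name are hypothetical, not the paper's data.

```python
def sensitivity_specificity(y_true, y_pred):
    """Sensitivity = TP/(TP+FN), specificity = TN/(TN+FP) for a
    binary gradable (1) / ungradable (0) image-quality classifier."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical expert labels vs. classifier output: 1 = gradable, 0 = ungradable.
truth = [1, 1, 1, 1, 0, 0, 0, 0]
pred  = [1, 1, 1, 0, 0, 0, 0, 1]
sens, spec = sensitivity_specificity(truth, pred)
print(sens, spec)  # 0.75 0.75
```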

Fasih, Mahnaz; Langlois, J. M. Pierre; Ben Tahar, Houssem; Cheriet, Farida

2014-03-01

67

3D laser scanner system using high dynamic range imaging  

NASA Astrophysics Data System (ADS)

Because of its high measuring speed, moderate accuracy, low cost and robustness in the industrial field, 3D laser scanning has been widely used in a variety of applications. However, measuring the 3D profile of a surface with high dynamic range (HDR) brightness, such as a partially highlighted object or a partially specular reflection, remains one of the most challenging problems. This difficulty has limited the adoption of such scanner systems. In this paper, an optical imaging system based on a high-resolution liquid crystal on silicon (LCoS) device and an image sensor (CCD or CMOS) was built to adjust the image's brightness pixel by pixel as required. An adaptive algorithm maps pixels between the LCoS mask plane and the image plane so that the radiance value captured by the image sensor is constrained to lie within the sensor's dynamic range. An HDR image is then reconstructed from the LCoS mask and the CCD image. The significant difference between the proposed system and a traditional 3D laser scanner system is that the HDR image is used to calibrate and calculate the 3D profile coordinates. Experimental results show that HDR imaging can enhance the environmental adaptability of a 3D laser scanner system and improve the accuracy of 3D profile measurement.
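
The per-pixel radiance recovery implied by the LCoS masking scheme can be sketched as follows, assuming each pixel's transmittance is known: the scene radiance is the sensor reading divided by the transmittance that kept it in range. The function name and all values are illustrative, not from the paper.

```python
def reconstruct_hdr(ccd, mask):
    """Recover per-pixel radiance from a CCD reading and the LCoS
    transmittance mask that kept that reading within sensor range:
    radiance = reading / transmittance."""
    return [[c / m for c, m in zip(crow, mrow)]
            for crow, mrow in zip(ccd, mask)]

# A specular highlight (true radiance 800) was attenuated to 25%
# so the 8-bit sensor (max 255) could measure it without clipping.
ccd_reading   = [[120.0, 200.0]]
transmittance = [[1.0, 0.25]]
hdr = reconstruct_hdr(ccd_reading, transmittance)
print(hdr)  # [[120.0, 800.0]]
```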

Zhongdong, Yang; Peng, Wang; Xiaohui, Li; Changku, Sun

2014-03-01

68

Visualization and Analysis of 3D Microscopic Images  

Microsoft Academic Search

In a wide range of biological studies, it is highly desirable to visualize and analyze three-dimensional (3D) microscopic images. In this primer, we first introduce several major methods for visualizing typical 3D images and related multi-scale, multi-time-point, multi-color data sets. Then, we discuss three key categories of image analysis tasks, namely segmentation, registration, and annotation. We demonstrate how to pipeline

Fuhui Long; Jianlong Zhou; Hanchuan Peng

2012-01-01

69

The Interpretation of a Moving Retinal Image  

Microsoft Academic Search

It is shown that from a monocular view of a rigid, textured, curved surface it is possible, in principle, to determine the gradient of the surface at any point, and the motion of the eye relative to it, from the velocity field of the changing retinal image, and its first and second spatial derivatives. The relevant equations are redundant, thus

H. C. Longuet-Higgins; K. Prazdny

1980-01-01

70

MR image denoising method for brain surface 3D modeling  

NASA Astrophysics Data System (ADS)

Three-dimensional (3D) modeling of medical images is a critical part of surgical simulation. In this paper, we focus on denoising magnetic resonance (MR) images for brain model reconstruction and develop a practical solution. We attempt to remove the noise in the MR imaging signal while preserving the image characteristics. A wavelet-based adaptive curve shrinkage function is presented in a spherical coordinate system. Comparative experiments show that the denoising method preserves image details better and enhances the coefficients of contours. Using these denoised images, a 3D visualization of the brain is generated through a surface triangle mesh model, which demonstrates the effectiveness of the proposed method.
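
A one-level Haar transform with soft thresholding is a minimal 1D stand-in for the wavelet-shrinkage idea the abstract describes; the paper's method is an adaptive curve shrinkage in spherical coordinates, which this sketch does not reproduce. The function name, threshold, and signal are illustrative.

```python
def haar_denoise(signal, threshold):
    """One-level Haar wavelet denoising of an even-length signal:
    transform, soft-threshold the detail coefficients, invert."""
    approx = [(a + b) / 2 for a, b in zip(signal[::2], signal[1::2])]
    detail = [(a - b) / 2 for a, b in zip(signal[::2], signal[1::2])]
    # Soft thresholding shrinks small (noise-dominated) details to zero
    # while larger, edge-carrying coefficients survive in the approximation.
    shrunk = [max(abs(d) - threshold, 0.0) * (1 if d >= 0 else -1)
              for d in detail]
    out = []
    for a, d in zip(approx, shrunk):
        out += [a + d, a - d]   # inverse Haar: x = a + d, y = a - d
    return out

noisy = [10.0, 10.4, 10.2, 9.8, 30.0, 30.2, 10.1, 9.9]
print(haar_denoise(noisy, 0.3))
```

Small fluctuations around 10 are flattened while the jump to 30 (an "edge") is preserved, which mirrors the detail-preservation goal stated in the abstract.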

Zhao, De-xin; Liu, Peng-jie; Zhang, De-gan

2014-11-01

71

ASTM E57 3D imaging systems committee: an update  

NASA Astrophysics Data System (ADS)

In 2006, ASTM committee E57 was established to develop standards for the performance evaluation of 3D imaging systems. The committee's initial focus is on standards for 3D imaging systems typically used for applications including, but not limited to, construction and maintenance, surveying, mapping and terrain characterization, manufacturing (e.g., aerospace, shipbuilding), transportation, mining, mobility, historic preservation, and forensics. ASTM E57 consists of four subcommittees: Terminology, Test Methods, Best Practices, and Data Interoperability. This paper reports the accomplishments of the ASTM E57 3D Imaging Systems committee in 2007.

Cheok, Geraldine S.; Lytle, Alan M.; Saidi, Kamel S.

2008-04-01

72

Segmentation of Retinal Arteries in Adaptive Optics Images  

E-print Network

Segmentation of Retinal Arteries in Adaptive Optics Images. Nicolas Lermé, Florence Rossant. A method is presented for automatically segmenting the walls of retinal arteries in adaptive optics images. The diseases affecting the retinal blood vessels of small diameter (150 µm) such as arterial

Université de Paris-Sud XI

73

Accurate segmentation of blood vessels from 3D medical images  

Microsoft Academic Search

The authors' work contributes to the accurate segmentation of blood vessels from 3D medical images. The blood vessel axis and surface are optimized in an alternating way. Starting from an initial blood vessel axis estimate, slices are resampled in the 3D data volume perpendicular to this axis. In these slices, blood vessel contour candidate points are extracted at maximum gradient

B. Verdonck; L. Bloch; H. Maitre; D. Vandermeulen; P. Suetens; G. Marchal

1996-01-01

74

Adaptive Metamorphs Model for 3D Medical Image Segmentation  

E-print Network

Adaptive Metamorphs Model for 3D Medical Image Segmentation. Junzhou Huang, Xiaolei Huang. A solid model deforms toward the object boundary. Our 3D segmentation method stems from Metamorphs [1], proposed as a new class of deformable models that integrate boundary information

Huang, Junzhou

75

Image Processing Software for 3D Light Microscopy  

Microsoft Academic Search

Advances in microscopy now enable researchers to easily acquire multi-channel three-dimensional (3D) images and 3D time series (4D). However, processing, analyzing, and displaying this data can often be difficult and time-consuming. We discuss some of the software tools and techniques that are available to accomplish these tasks.

Jeffrey L. Clendenon; Jason M. Byars; Deborah P. Hyink

2006-01-01

76

Retinal imaging with virtual reality stimulus for studying Salticidae retinas  

NASA Astrophysics Data System (ADS)

We present a 3-path optical system for studying the retinal movement of jumping spiders: a visible OLED virtual reality system presents stimulus, while NIR illumination and imaging systems observe retinal movement.

Schiesser, Eric; Canavesi, Cristina; Long, Skye; Jakob, Elizabeth; Rolland, Jannick P.

2014-12-01

77

Image based 3D city modeling : Comparative study  

NASA Astrophysics Data System (ADS)

A 3D city model is a digital representation of the Earth's surface and related urban objects such as buildings, trees, vegetation, and other man-made features. The demand for 3D city modeling is increasing rapidly for various engineering and non-engineering applications. Four main image-based approaches are generally used to generate virtual 3D city models: sketch-based modeling, procedural-grammar-based modeling, close-range-photogrammetry-based modeling, and computer-vision-based modeling, represented by the software packages SketchUp, CityEngine, Photomodeler, and Agisoft Photoscan, respectively. These packages take different approaches to image-based 3D city modeling. A literature review shows that, to date, no comprehensive comparative study of creating complete 3D city models from images is available. This paper gives a comparative assessment of the four image-based 3D modeling approaches, focusing on data acquisition methods, data processing techniques, and the output 3D model products. The study area is the campus of the civil engineering department, Indian Institute of Technology, Roorkee (India), which acts as a prototype for a city. The study also discusses the governing parameters and factors, work experiences, and the strengths and weaknesses of the four techniques, with comments on what can and cannot be done with each package.
The study concludes that every package has advantages and limitations, and the choice of software depends on the requirements of the 3D project. For a normal visualization project, SketchUp is a good option; for 3D documentation records, Photomodeler gives good results; for large-city reconstruction, CityEngine is a good product; and Agisoft Photoscan creates a much better 3D model with good texture quality and automatic processing. This comparative study should provide a good roadmap for the geomatics user community to create photo-realistic virtual 3D city models using image-based techniques.

Singh, S. P.; Jain, K.; Mandla, V. R.

2014-06-01

78

Modeling 3D human poses from uncalibrated monocular images  

Microsoft Academic Search

This paper introduces an efficient algorithm that reconstructs 3D human poses as well as camera parameters from a small number of 2D point correspondences obtained from uncalibrated monocular images. This problem is challenging because 2D image constraints (e.g. 2D point correspondences) are often not sufficient to determine 3D poses of an articulated object. The key idea of this paper is

Xiaolin K. Wei; Jinxiang Chai

2009-01-01

79

Analyzing 3D Images of the Brain NICHOLAS AYACHE  

E-print Network

Along these lines, and focusing on 3D images of the brain obtained with CT, MRI, SPECT, and PET, a considerable research effort has been devoted to automating the analysis and fusion of multidimensional images: elastic registration, fusion of multimodal images, analysis of temporal sequences (4D data), modeling

Université de Paris-Sud XI

80

Imaging hypoxia using 3D photoacoustic spectroscopy  

NASA Astrophysics Data System (ADS)

Purpose: The objective is to develop a multivariate in vivo hemodynamic model of tissue oxygenation (MiHMO2) based on 3D photoacoustic spectroscopy. Introduction: Low oxygen levels, or hypoxia, deprive cancer cells of oxygen and confer resistance to irradiation, some chemotherapeutic drugs, and oxygen-dependent therapies (phototherapy), leading to treatment failure and poor disease-free and overall survival. For example, clinical studies show that patients with breast carcinomas, cervical cancer, and head and neck carcinomas (HNC) are more likely to suffer local recurrence and metastasis if their tumors are hypoxic. A novel method, MiHMO2, is devised to noninvasively measure tumor hypoxia, identify its type, and monitor its heterogeneity by measuring tumor hemodynamics. Material and Methods: Simulations are performed to compare tumor pO2 levels and hypoxia based on physiology - perfusion, fractional plasma volume, fractional cellular volume - and on hemoglobin status - oxygen saturation and hemoglobin concentration - using in vivo measurements of breast, prostate, and ovarian tumors. Simulations of MiHMO2 are performed to assess the influence of scanner resolution and of different mathematical models of oxygen delivery. Results: The sensitivity of pO2 and hypoxic fraction to photoacoustic scanner resolution, and their dependence on model complexity, will be presented using hemodynamic parameters for different tumors. Conclusions: Photoacoustic CT spectroscopy provides a unique ability to monitor hemodynamic and cellular physiology in tissue, which can be used to longitudinally monitor tumor oxygenation and its response to anti-angiogenic therapies.

Stantz, Keith M.

2010-02-01

81

3D image texture analysis of simulated and real-world vascular trees.  

PubMed

A method is proposed for the quantitative description of blood-vessel trees, which can be used for tree classification and/or indirect monitoring of physical parameters. The method is based on texture analysis of 3D images of the trees. Several types of trees were defined, with distinct tree parameters (number of terminal branches, blood viscosity, input and output flow). A number of trees were computer-simulated for each type. A 3D image was computed for each tree and its texture features were calculated. The best discriminating features were found and applied to a 1-NN (nearest-neighbor) classifier. It was demonstrated that (i) tree images can be correctly classified for realistic signal-to-noise ratios, (ii) some texture features are monotonically related to tree parameters, and (iii) 2D texture analysis is not sufficient to represent the trees in the discussed sense. Moreover, the applicability of the texture model to the quantitative description of vascularity images was also supported by unsupervised exploratory analysis. Eventually, experimental confirmation was obtained using confocal microscopy images of rat brain vasculature. Several classes of brain tissue were clearly distinguished based on 3D texture numerical parameters, including control and different kinds of tumours treated with NG2 proteoglycan to promote angiogenesis-dependent growth of the abnormal tissue. The method, applied to magnetic resonance images of, e.g., real neovasculature, or to retinal images, can be used to support noninvasive medical diagnosis of vascular system diseases. PMID:21803438
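
The 1-NN classification step can be sketched in Python; the two-dimensional "texture features" and the tree-type labels below are hypothetical stand-ins for the paper's best discriminating features.

```python
import math

def one_nn_classify(train, query):
    """1-NN: return the label of the training feature vector closest
    (Euclidean distance) to the query vector."""
    best = min(train, key=lambda item: math.dist(item[0], query))
    return best[1]

# (texture-feature vector, tree-type label) pairs — toy data
training = [
    ((0.9, 1.1), "few-branches"),
    ((1.0, 0.9), "few-branches"),
    ((3.1, 2.9), "many-branches"),
    ((2.9, 3.2), "many-branches"),
]
print(one_nn_classify(training, (3.0, 3.0)))  # many-branches
```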

Kociński, Marek; Klepaczko, Artur; Materka, Andrzej; Chekenya, Martha; Lundervold, Arvid

2012-08-01

82

Dedicated 3D photoacoustic breast imaging  

PubMed Central

Purpose: To report the design and imaging methodology of a photoacoustic scanner dedicated to imaging hemoglobin distribution throughout a human breast. Methods: The authors developed a dedicated breast photoacoustic mammography (PAM) system using a spherical detector aperture based on our previous photoacoustic tomography scanner. The system uses 512 detectors with rectilinear scanning. The scan shape is a spiral pattern whose radius varies from 24 to 96 mm, thereby allowing a field of view that accommodates a wide range of breast sizes. The authors measured the contrast-to-noise ratio (CNR) using a target composed of 1-mm dots printed on clear plastic. Each dot's absorption coefficient was approximately the same as a 1-mm thickness of whole blood at 756 nm, the output wavelength of the Alexandrite laser used by this imaging system. The target was immersed in varying depths of an 8% solution of stock Liposyn II-20%, which mimics the attenuation of breast tissue (1.1 cm⁻¹). The spatial resolution was measured using a 6-µm-diameter carbon fiber embedded in agar. The breasts of four healthy female volunteers, spanning a range of breast size from a brassiere C cup to a DD cup, were imaged using a 96-mm spiral protocol. Results: The CNR target was clearly visualized to a depth of 53 mm. Spatial resolution, which was estimated from the full width at half-maximum of a profile across the PAM image of a carbon fiber, was 0.42 mm. In the four human volunteers, the vasculature was well visualized throughout the breast tissue, including to the chest wall. Conclusions: The CNR, lateral field-of-view and penetration depth of our dedicated PAM scanning system are sufficient to image breasts as large as 1335 mL, which should accommodate up to 90% of the women in the United States. PMID:24320471
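
A common form of the CNR measurement is the mean signal difference over the background standard deviation; the paper may define it slightly differently, and the pixel intensities below are hypothetical.

```python
from statistics import mean, stdev

def cnr(target, background):
    """Contrast-to-noise ratio: (mean target signal - mean background)
    divided by the background standard deviation."""
    return (mean(target) - mean(background)) / stdev(background)

# Hypothetical pixel intensities around one 1-mm printed dot
dot_pixels = [210.0, 205.0, 198.0, 202.0]
bg_pixels  = [100.0, 104.0, 96.0, 98.0, 102.0]
print(round(cnr(dot_pixels, bg_pixels), 2))  # 32.81
```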

Kruger, Robert A.; Kuzmiak, Cherie M.; Lam, Richard B.; Reinecke, Daniel R.; Del Rio, Stephen P.; Steed, Doreen

2013-01-01

83

3-D capacitance density imaging system  

DOEpatents

Three-dimensional capacitance density imaging of a gasified bed or the like in a containment vessel is achieved using a plurality of electrodes provided circumferentially about the bed in levels and along the bed in channels. The electrodes are individually and selectively excited electrically at each level to produce a plurality of current flux field patterns in the bed at that level. The current flux field patterns are suitably sensed and a density pattern of the bed at each level is determined. By combining the determined density patterns at each level, a three-dimensional density image of the bed is achieved. 7 figs.

Fasching, G.E.

1988-03-18

84

Imaging fault zones using 3D seismic image processing techniques  

NASA Astrophysics Data System (ADS)

Significant advances in the structural analysis of deep-water structures, salt tectonics, and extensional rift basins come from descriptions of fault system geometries imaged in 3D seismic data. However, even where seismic data are excellent, in most cases the trajectory of thrust faults is highly conjectural, and significant uncertainty remains as to the patterns of deformation that develop between the main fault segments, and even as to the fault architectures themselves. Moreover, structural interpretations that conventionally define faults by breaks and apparent offsets of seismic reflectors are commonly conditioned by a narrow range of theoretical models of fault behavior. For example, almost all interpretations of thrust geometries on seismic data rely on theoretical "end-member" behaviors in which concepts such as strain localization or multilayer mechanics are simply avoided. Yet analogue outcrop studies confirm that such descriptions are commonly unsatisfactory and incomplete. In order to fill these gaps and improve the 3D visualization of deformation in the subsurface, seismic attribute methods are developed here in conjunction with conventional mapping of reflector amplitudes (Marfurt & Chopra, 2007). These signal-processing techniques, recently developed and applied especially by the oil industry, use variations in the amplitude and phase of the seismic wavelet. The attributes improve signal interpretation and are calculated and applied to the entire 3D seismic dataset. In this contribution we show 3D seismic examples of fault structures from gravity-driven deep-water thrust structures and extensional basin systems to indicate how 3D seismic image processing methods can not only improve the geometrical interpretation of the faults but also begin to map both strain and damage through the amplitude/phase properties of the seismic signal.
This is done by quantifying and delineating short-range anomalies in the intensity of reflector amplitudes and collecting these into "disturbance geobodies". These seismic image processing methods represent a first efficient step toward the construction of a robust technique to investigate sub-seismic strain, mapping noisy deformed zones and displacement within subsurface geology (Dutzer et al., 2011; Iacopini et al., 2012). In all these cases, accurate fault interpretation is critical in applied geology to building a robust and reliable reservoir model, and is essential for further study of fault-seal behavior and reservoir compartmentalization. It is also fundamental for understanding how deformation localizes within sedimentary basins, including the processes associated with active seismogenic faults and mega-thrust systems in subduction zones. Dutzer, J.F., Basford, H., Purves, S., 2010, Investigating fault sealing potential through fault relative seismic volume analysis. Petroleum Geology Conference Series, 7:509-515; doi:10.1144/0070509. Marfurt, K.J., Chopra, S., 2007, Seismic attributes for prospect identification and reservoir characterization. SEG Geophysical Developments. Iacopini, D., Butler, R.W.H., Purves, S., 2012, Seismic imaging of thrust faults and structural damage: a visualization workflow for deepwater thrust belts. First Break, 30(5), 39-46.

Iacopini, David; Butler, Rob; Purves, Steve

2013-04-01

85

AUTOMATIC REGISTRATION OF 3D ULTRASOUND IMAGES  

E-print Network

AUTOMATIC REGISTRATION OF 3-D ULTRASOUND IMAGES. R.N. Rohling, A.H. Gee and L. Berman, CUED. One of the most promising applications of 3-D ultrasound lies in the visualisation and volume estimation of internal 3-D structures. Unfortunately, the quality of the ultrasound data can be severely degraded by artifacts

Drummond, Tom

86

3D Finite Element Meshing from Imaging Data  

PubMed Central

This paper describes an algorithm to extract adaptive and quality 3D meshes directly from volumetric imaging data. The extracted tetrahedral and hexahedral meshes are extensively used in the Finite Element Method (FEM). A top-down octree subdivision coupled with the dual contouring method is used to rapidly extract adaptive 3D finite element meshes with correct topology from volumetric imaging data. The edge contraction and smoothing methods are used to improve the mesh quality. The main contribution is extending the dual contouring method to crack-free interval volume 3D meshing with feature sensitive adaptation. Compared to other tetrahedral extraction methods from imaging data, our method generates adaptive and quality 3D meshes without introducing any hanging nodes. The algorithm has been successfully applied to constructing the geometric model of a biomolecule in finite element calculations. PMID:19777144

Zhang, Yongjie; Bajaj, Chandrajit; Sohn, Bong-Soo

2009-01-01

87

Fully Automatic 3D Reconstruction of Histological Images  

E-print Network

In this paper, we propose a computational framework for 3D volume reconstruction from 2D histological slices using registration algorithms in feature space. To improve the quality of the reconstructed 3D volume, first, intensity variations in images are corrected by an intensity standardization process which maps the image intensity scale to a standard scale where similar intensities correspond to similar tissues. Second, a subvolume approach is proposed for 3D reconstruction by dividing standardized slices into groups. Third, in order to improve the quality of the reconstruction process, an automatic best reference slice selection algorithm is developed based on an iterative assessment of image entropy and the mean square error of the registration process. Finally, we demonstrate that the choice of the reference slice has a significant impact on registration quality and subsequent 3D reconstruction.
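
The entropy component of the reference-slice assessment can be sketched as the Shannon entropy of a slice's intensity histogram; this toy sketch ignores the registration-MSE term that the paper's iterative selection also weighs, and the data are illustrative.

```python
import math
from collections import Counter

def slice_entropy(pixels):
    """Shannon entropy (bits) of a slice's intensity histogram.
    A content-rich slice scores higher and is a more informative
    reference-slice candidate than a flat one."""
    counts = Counter(pixels)
    n = len(pixels)
    return sum(-(c / n) * math.log2(c / n) for c in counts.values())

flat   = [5] * 16           # uniform slice: no information
varied = list(range(16))    # 16 distinct intensities
print(slice_entropy(flat), slice_entropy(varied))  # 0.0 4.0
```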

Bagci, Ulas

2009-01-01

88

Color, 3D simulated images with shapelets  

NASA Astrophysics Data System (ADS)

We present a method to simulate color, 3-dimensional images taken with a space-based observatory by building on the established shapelets pipeline. The simulated galaxies exhibit complex morphologies that are realistically correlated between bands, and include known redshifts. The simulations are created using galaxies from the four optical and near-infrared bands (B, V, i and z) of the Hubble Ultra Deep Field (UDF) as a basis set to model morphologies and redshift. We include observational effects such as sky noise and pixelization and can add astronomical signals of interest such as weak gravitational lensing. The realism of the simulations is demonstrated by comparing their morphologies to the original UDF galaxies and by comparing their distribution of ellipticities as a function of redshift and magnitude to wider HST COSMOS data. These simulations have already been useful for calibrating multicolor image analysis techniques and for better optimizing the design of proposed space telescopes.

Ferry, Matt; Rhodes, Jason; Massey, Richard; White, Martin; Coe, Dan; Mobasher, Bahram

2008-09-01

89

3-D Imaging Based, Radiobiological Dosimetry  

PubMed Central

Targeted radionuclide therapy holds promise as a new treatment against cancer. Advances in imaging are making it possible to evaluate the spatial distribution of radioactivity in tumors and normal organs over time. Matched anatomical imaging such as combined SPECT/CT and PET/CT has also made it possible to obtain tissue density information in conjunction with the radioactivity distribution. Coupled with sophisticated iterative reconstruction algorithms, these advances have made it possible to perform highly patient-specific dosimetry that also incorporates radiobiological modeling. Such sophisticated dosimetry techniques are still in the research investigation phase. Given the attendant logistical and financial costs, a demonstrated improvement in patient care will be a prerequisite for the adoption of such highly patient-specific internal dosimetry methods. PMID:18662554

Sgouros, George; Frey, Eric; Wahl, Richard; He, Bin; Prideaux, Andrew; Hobbs, Robert

2008-01-01

90

Acoustic 3D imaging of dental structures  

SciTech Connect

Our goals for the first year of this three-dimensional electrodynamic imaging project were to determine how to combine flexible, individually addressable arrays; preprocessing of array source signals; spectral extrapolation of received signals; acoustic tomography codes; and acoustic propagation modeling code. We investigated flexible, individually addressable acoustic array materials to find the best match in power, sensitivity, and cost, and settled on PVDF sheet arrays and 3-1 composite material.

Lewis, D.K. [Lawrence Livermore National Lab., CA (United States); Hume, W.R. [California Univ., Los Angeles, CA (United States); Douglass, G.D. [California Univ., San Francisco, CA (United States)

1997-02-01

91

Fourier domain mode locked (FDML) lasers at 1050 nm and 202,000 sweeps per second for OCT retinal imaging  

Microsoft Academic Search

Retinal imaging ranks amongst the most important clinical applications for optical coherence tomography (OCT) [1, 2]. The recent demonstration of increased sensitivity [3-6] in Fourier Domain detection [7, 8] has opened the way for dramatically higher imaging speeds, up to axial scan rates of several tens of kilohertz. However, these imaging speeds are still not sufficient for high density 3D

Robert A. Huber; Desmond C. Adler; Vivek J. Srinivasan; Iwona M. Gorczynska; James G. Fujimoto

2007-01-01

92

Automatic 3D lesion segmentation on breast ultrasound images  

NASA Astrophysics Data System (ADS)

Automatically acquired and reconstructed 3D breast ultrasound images allow radiologists to detect and evaluate breast lesions in 3D. However, assessing potential cancers in 3D ultrasound can be difficult and time consuming. In this study, we evaluate a 3D lesion segmentation method, which we had previously developed for breast CT, and investigate its robustness on lesions in 3D breast ultrasound images. Our dataset includes 98 3D breast ultrasound images obtained on an ABUS system from 55 patients containing 64 cancers. Cancers depicted on 54 US images had been clinically interpreted as negative on screening mammography, and 44 had been clinically visible on mammography. All were from women with breast density BI-RADS 3 or 4. Tumor centers and margins were indicated and outlined by radiologists. Initial RGI-eroded contours were automatically calculated and served as input to the active contour segmentation algorithm, yielding the final lesion contour. Tumor segmentation was evaluated by determining the overlap ratio (OR) between computer-determined and manually-drawn outlines. The resulting average overlap ratios on coronal, transverse, and sagittal views were 0.60 +/- 0.17, 0.57 +/- 0.18, and 0.58 +/- 0.17, respectively. All OR values were significantly higher than 0.4, which is deemed "acceptable". Within the groups of mammogram-negative and mammogram-positive cancers, the overlap ratios were 0.63 +/- 0.17 and 0.56 +/- 0.16, respectively, on the coronal views, with similar results on the other views. The segmentation performance was not found to be correlated with tumor size. The results indicate the robustness of the 3D lesion segmentation technique in multi-modality 3D breast imaging.
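
An overlap ratio between a computer-determined and a manually-drawn outline is often computed as intersection over union (a Jaccard-style index) on the enclosed voxels; the paper's exact definition may differ, and the masks below are toy data.

```python
def overlap_ratio(mask_a, mask_b):
    """Overlap ratio between two lesion regions represented as sets
    of voxel coordinates: |A ∩ B| / |A ∪ B|."""
    a, b = set(mask_a), set(mask_b)
    return len(a & b) / len(a | b)

computer = {(x, y) for x in range(4) for y in range(4)}      # 16 voxels
manual   = {(x, y) for x in range(1, 5) for y in range(4)}   # shifted by 1
print(overlap_ratio(computer, manual))  # 0.6
```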

Kuo, Hsien-Chi; Giger, Maryellen L.; Reiser, Ingrid; Drukker, Karen; Edwards, Alexandra; Sennett, Charlene A.

2013-02-01

93

3D thermography imaging standardization technique for inflammation diagnosis  

NASA Astrophysics Data System (ADS)

We develop a 3D thermography imaging standardization technique to allow quantitative data analysis. Medical digital infrared thermal imaging is a very sensitive and reliable means of graphically mapping and displaying skin surface temperature. It allows doctors to visualise in colour and quantify temperature changes in the skin surface. The spectrum of colours indicates both hot and cold responses, which may co-exist if the pain associated with an inflammatory focus excites an increase in sympathetic activity. However, because thermography provides only qualitative diagnostic information, it has not gained acceptance in the medical and veterinary communities as a necessary or effective tool in inflammation and tumor detection. Here, our technique is based on the combination of a visual 3D imaging technique and a thermal imaging technique, which maps the 2D thermography images onto a 3D anatomical model. We then rectify the 3D thermogram into a view-independent thermogram and conform it to a standard shape template. The combination of these imaging facilities allows the generation of combined 3D and thermal data from which thermal signatures can be quantified.

Ju, Xiangyang; Nebel, Jean-Christophe; Siebert, J. Paul

2005-01-01

94

Prostate Mechanical Imaging: 3-D Image Composition and Feature Calculations  

PubMed Central

We have developed a method and a device, entitled prostate mechanical imager (PMI), for the real-time imaging of the prostate using a transrectal probe equipped with a pressure sensor array and a position tracking sensor. PMI operation is based on measurement of the stress pattern on the rectal wall when the probe is pressed against the prostate. Temporal and spatial changes in the stress pattern provide information on the elastic structure of the gland and allow two-dimensional (2-D) and three-dimensional (3-D) reconstruction of prostate anatomy and assessment of prostate mechanical properties. The data acquired allow the calculation of prostate features such as size, shape, nodularity, consistency/hardness, and mobility. The PMI prototype has been validated in laboratory experiments on prostate phantoms and in a clinical study. The results obtained on model systems and in vivo images from patients show that PMI has the potential to become a diagnostic tool that could largely supplant the digital rectal examination (DRE) through its higher sensitivity, quantitative record storage, ease-of-use and inherent low cost. PMID:17024836

Egorov, Vladimir; Ayrapetyan, Suren; Sarvazyan, Armen P.

2008-01-01

95

Getting in touch: 3D printing in Forensic Imaging  

Microsoft Academic Search

With the increasing use of medical imaging in forensics, as well as the technological advances in rapid prototyping, we suggest combining these techniques to generate displays of forensic findings. We used computed tomography (CT), CT angiography, magnetic resonance imaging (MRI) and surface scanning with photogrammetry in conjunction with segmentation techniques to generate 3D polygon meshes. Based on these data sets,

Lars Chr. Ebert; Michael J. Thali; Steffen Ross

2011-01-01

96

2D/3D image (facial) comparison using camera matching  

Microsoft Academic Search

A problem in forensic facial comparison of images of perpetrators and suspects is that distances between fixed anatomical points in the face, which form a good starting point for objective, anthropometric comparison, vary strongly according to the position and orientation of the camera. In case of a cooperating suspect, a 3D image may be taken using e.g. a laser scanning

Mirelle I. M. Goos; Ivo B. Alberink; Arnout C. C. Ruifrok

2006-01-01

97

Facial image comparison using a 3D laser scanning system  

Microsoft Academic Search

To reliably perform comparisons of facial images, it is important to position the head corresponding to the facial images available. Techniques using three or more landmark points on the face have been proposed for matching the face and camera positions to the available photographs. However, these methods can be cumbersome, and require the cooperation of the subject. 3D photographs, together

Arnout C. Ruifrok; Mirelle Goos; Bart Hoogeboom; Derk Vrijdag; Jurrien Bijhold

2003-01-01

98

2D and 3D Elasticity Imaging Using Freehand Ultrasound  

E-print Network

2D and 3D Elasticity Imaging Using Freehand Ultrasound. Joel Edward Lindop, Pembroke College, March ... to mechanical properties (e.g., stiffness) to which conventional forms of ultrasound, X-ray and magnetic ... that occur between the acquisition of multiple ultrasound images. Likely applications include improved

Drummond, Tom

99

3D image analysis of abdominal aortic aneurysm  

NASA Astrophysics Data System (ADS)

In this paper we propose a technique for 3-D segmentation of abdominal aortic aneurysm (AAA) from computed tomography angiography (CTA) images. Output data (a 3-D model) from the proposed method can be used for measurement of aortic shape and dimensions. Knowledge of aortic shape and size is very important in planning of the minimally invasive procedure, i.e. for selection of an appropriate stent graft device for treatment of AAA. The technique is based on a 3-D deformable model and utilizes the level-set algorithm for implementation of the method. The method performs 3-D segmentation of CTA images and extracts a 3-D model of the aortic wall. Once the 3-D model of the aortic wall is available, it is easy to perform all measurements required for appropriate stent graft selection. The method proposed in this paper uses the level-set algorithm for deformable models instead of the classical snake algorithm. The main advantage of the level-set algorithm is that it enables easy segmentation of complex structures, surpassing most of the drawbacks of the classical approach. We have extended the deformable model to incorporate a priori knowledge about the shape of the AAA. This helps direct the evolution of the deformable model to correctly segment the aorta. The algorithm has been implemented in the IDL and C languages. Experiments have been performed using real patient CTA images and have shown good results.

Subasic, Marko; Loncaric, Sven; Sorantin, Erich

2001-07-01

100

3D image analysis of abdominal aortic aneurysm  

NASA Astrophysics Data System (ADS)

This paper presents a method for 3-D segmentation of abdominal aortic aneurysm from computed tomography angiography images. The proposed method is automatic and requires minimal user assistance. Segmentation is performed in two steps: first the inner and then the outer aortic border is segmented. The two steps differ because the image conditions on the two aortic borders differ. The outputs of these two segmentations give a complete 3-D model of the abdominal aorta. Such a 3-D model is used in measurements of the aneurysm area. The deformable model is implemented using the level-set algorithm due to its ability to describe, in a natural manner, the complex shapes that frequently occur in pathology. In segmentation of the outer aortic boundary we introduced knowledge-based preprocessing to enhance and reconstruct the low-contrast aortic boundary. The method has been implemented in the IDL and C languages. Experiments have been performed using real patient CTA images and have shown good results.

Subasic, Marko; Loncaric, Sven; Sorantin, Erich

2002-05-01

101

Computerized analysis of pelvic incidence from 3D images  

NASA Astrophysics Data System (ADS)

The sagittal alignment of the pelvis can be evaluated by the angle of pelvic incidence (PI), which is constant for an arbitrary subject position and orientation and can therefore be compared among subjects in standing, sitting or supine position. In this study, PI was measured from three-dimensional (3D) computed tomography (CT) images of normal subjects that were acquired in supine position. A novel computerized method, based on image processing techniques, was developed to automatically determine the anatomical references required to measure PI, i.e. the centers of the femoral heads in 3D, and the center and inclination of the sacral endplate in 3D. Multiplanar image reformation was applied to obtain perfect sagittal views with all anatomical structures completely in line with the hip axis, from which PI was calculated. The resulting PI (mean +/- standard deviation) was equal to 46.6 +/- 9.2 degrees for male subjects (N = 189), 47.6 +/- 10.7 degrees for female subjects (N = 181), and 47.1 +/- 10.0 degrees for all subjects (N = 370). The obtained measurements of PI from 3D images were not biased by acquisition projection or structure orientation, because all anatomical structures were completely in line with the hip axis. The performed measurements in 3D therefore represent PI according to the actual geometrical relationships among the anatomical structures of the sacrum, pelvis and hips, as observed from the perfect sagittal views.
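The angle computation described above can be sketched as follows: PI is taken as the angle between the sacral-endplate normal and the line joining the endplate center to the hip axis (midpoint of the femoral head centers). Function name and coordinates are illustrative, not from the study.

```python
import numpy as np

def pelvic_incidence(endplate_center, endplate_normal, hip_axis_center):
    # Angle (degrees) between the sacral endplate normal and the line from the
    # endplate center to the midpoint of the femoral head centers.
    v = hip_axis_center - endplate_center
    cosang = abs(np.dot(v, endplate_normal)) / (
        np.linalg.norm(v) * np.linalg.norm(endplate_normal))
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))
```

With the endplate center at the origin, its normal along z, and the hip axis at (1, 0, 1), the function returns 45 degrees, as expected from the geometry.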

Vrtovec, Tomaž; Janssen, Michiel M. A.; Pernuš, Franjo; Castelein, René M.; Viergever, Max A.

2012-02-01

102

Single 3D cell segmentation from optical CT microscope images  

NASA Astrophysics Data System (ADS)

The automated segmentation of the nucleus and cytoplasm regions in 3D optical CT microscope images has been achieved with two methods, a global threshold gradient based approach and a graph-cut approach. For the first method, the first two peaks of a gradient figure of merit curve are selected as the thresholds for cytoplasm and nucleus segmentation. The second method applies a graph-cut segmentation twice: the first identifies the nucleus region and the second identifies the cytoplasm region. Image segmentation of single cells is important for automated disease diagnostic systems. The segmentation methods were evaluated with 200 3D images consisting of 40 samples of 5 different cell types. The cell types consisted of columnar, macrophage, metaplastic and squamous human cells and cultured A549 cancer cells. The segmented cells were compared with both 2D and 3D reference images and the quality of segmentation was determined by the Dice Similarity Coefficient (DSC). In general, the graph-cut method had a superior performance to the gradient-based method. The graph-cut method achieved an average DSC of 86% and 72% for nucleus and cytoplasm segmentations respectively for the 2D reference images and 83% and 75% for the 3D reference images. The gradient method achieved an average DSC of 72% and 51% for nucleus and cytoplasm segmentation for the 2D reference images and 71% and 51% for the 3D reference images. The DSC of cytoplasm segmentation was significantly lower than for the nucleus since the cytoplasm was not differentiated as well by image intensity from the background.
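The Dice Similarity Coefficient used for the evaluation above, DSC = 2 |A intersect B| / (|A| + |B|), is straightforward to compute from two binary masks; a minimal sketch (names are illustrative):

```python
import numpy as np

def dice(seg, ref):
    # Dice Similarity Coefficient between two binary masks:
    # DSC = 2 * |A intersect B| / (|A| + |B|)
    seg, ref = seg.astype(bool), ref.astype(bool)
    return 2.0 * np.logical_and(seg, ref).sum() / (seg.sum() + ref.sum())

# Two 1D toy masks of 4 voxels each, overlapping in 2 voxels -> DSC = 0.5
a = np.zeros(8, dtype=bool); a[:4] = True
b = np.zeros(8, dtype=bool); b[2:6] = True
```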

Xie, Yiting; Reeves, Anthony P.

2014-03-01

103

Elastic models: a comparative study applied to retinal images  

E-print Network

Elastic models: a comparative study applied to retinal images. E. Karali, S. Lambropoulou, and D. Abstract: In this work various methods of parametric elastic models ... near the object boundaries. Elastic models are the most popular image segmentation techniques

Lambropoulou, Sofia

104

3-D image analysis of abdominal aortic aneurysm.  

PubMed

In this paper we propose a technique for 3-D segmentation of abdominal aortic aneurysm (AAA) from computed tomography (CT) angiography images. Output data from the proposed method can be used for measurement of aortic shape and dimensions. Knowledge of aortic shape and size is very important for selection of an appropriate stent graft device for treatment of AAA. The technique is based on a 3-D deformable model and utilizes the level-set algorithm for implementation of the method. The method performs 3-D segmentation of CT images and extracts a 3-D model of the aortic wall. Once the 3-D model of the aortic wall is available, it is easy to perform all measurements required for appropriate stent graft selection. The method proposed in this paper uses the level-set algorithm instead of the classical active contour algorithm developed by Kass et al. The main advantage of the level-set algorithm is that it enables easy segmentation, surpassing most of the drawbacks of the classical approach. In the level-set approach to shape modeling, a 3-D surface is represented by a real-valued 3-D function, or equivalently a 4-D surface. The 4-D surface is then evolved through an iterative process of solving the differential equation of surface motion. Surface motion is defined by the velocity at each point. The velocity is the sum of a constant term and a curvature-dependent term. The stopping criterion is calculated based on the image gradient. The algorithm has been implemented in the MATLAB and C languages. Experiments have been performed using real patient CT angiography images and have shown good results. PMID:11187511
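The evolution scheme summarized here (velocity as a sum of a constant and a curvature-dependent term) can be sketched in 2D. This is a generic illustration of the level-set update, not the authors' 3-D implementation, and it omits their image-gradient stopping criterion:

```python
import numpy as np

def evolve_level_set(phi, n_iter=20, dt=0.2, v_const=1.0, v_curv=0.2):
    # Level-set update phi_t = -(v_const + v_curv * kappa) * |grad phi|,
    # where kappa is the curvature of the embedded front.
    for _ in range(n_iter):
        gy, gx = np.gradient(phi)                      # gradients along y (axis 0), x (axis 1)
        norm = np.sqrt(gx ** 2 + gy ** 2) + 1e-12      # |grad phi|
        kappa = (np.gradient(gx / norm, axis=1)        # curvature = div(grad phi / |grad phi|)
                 + np.gradient(gy / norm, axis=0))
        phi = phi - dt * (v_const + v_curv * kappa) * norm
    return phi

# Demo: a circle of radius 10 (phi < 0 inside) expands under positive speed.
yy, xx = np.mgrid[0:64, 0:64]
phi0 = np.sqrt((xx - 32.0) ** 2 + (yy - 32.0) ** 2) - 10.0
phi = evolve_level_set(phi0)
```

After evolution the region enclosed by the zero level set has grown, which is the qualitative behavior of a positive balloon force.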

Subasic, M; Loncaric, S; Sorantin, E

2000-01-01

105

Molecular Imaging of Retinal Disease  

PubMed Central

Abstract Imaging of the eye plays an important role in ocular therapeutic discovery and evaluation in preclinical models and patients. Advances in ophthalmic imaging instrumentation have enabled visualization of the retina at an unprecedented resolution. These developments have contributed toward early detection of the disease, monitoring of disease progression, and assessment of the therapeutic response. These powerful technologies are being further harnessed for clinical applications by configuring instrumentation to detect disease biomarkers in the retina. These biomarkers can be detected either by measuring the intrinsic imaging contrast in tissue, or by the engineering of targeted injectable contrast agents for imaging of the retina at the cellular and molecular level. Such approaches have promise in providing a window on dynamic disease processes in the retina such as inflammation and apoptosis, enabling translation of biomarkers identified in preclinical and clinical studies into useful diagnostic targets. We discuss recently reported and emerging imaging strategies for visualizing diverse cell types and molecular mediators of the retina in vivo during health and disease, and the potential for clinical translation of these approaches. PMID:23421501

Capozzi, Megan E.; Gordon, Andrew Y.; Penn, John S.

2013-01-01

106

Optimized Bayes variational regularization prior for 3D PET images.  

PubMed

A new prior for variational Maximum a Posteriori regularization is proposed for use in a 3D One-Step-Late (OSL) reconstruction algorithm that also accounts for the Point Spread Function (PSF) of the PET system. The new regularization prior strongly smoothes background regions, while preserving transitions. A detectability index is proposed to optimize the prior. The new algorithm has been compared with different reconstruction algorithms such as 3D-OSEM+PSF, 3D-OSEM+PSF+post-filtering and 3D-OSL with a Gauss-Total Variation (GTV) prior. The proposed regularization allows noise to be controlled while maintaining good signal recovery; compared to the other algorithms it demonstrates a very good compromise between improved quantitation and good image quality. PMID:24958594
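For context, OSL reconstruction is commonly based on Green's one-step-late approximation to the MAP-EM update; a standard form is sketched below (the exact variant used here, including the PSF model and the proposed prior, is not given in this record):

```latex
\lambda_j^{(n+1)} =
\frac{\lambda_j^{(n)}}
     {\sum_i a_{ij} + \beta \left.\frac{\partial U(\lambda)}{\partial \lambda_j}\right|_{\lambda = \lambda^{(n)}}}
\sum_i a_{ij} \frac{y_i}{\sum_k a_{ik} \lambda_k^{(n)}}
```

where \(\lambda_j\) is the activity in voxel \(j\), \(a_{ij}\) the system matrix (into which the PSF can be folded), \(y_i\) the measured counts, \(U\) the prior energy and \(\beta\) its weight.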

Rapisarda, Eugenio; Presotto, Luca; De Bernardi, Elisabetta; Gilardi, Maria Carla; Bettinardi, Valentino

2014-09-01

107

Adaptive Optics Technology for High-Resolution Retinal Imaging  

PubMed Central

Adaptive optics (AO) is a technology used to improve the performance of optical systems by reducing the effects of optical aberrations. The direct visualization of the photoreceptor cells, capillaries and nerve fiber bundles represents the major benefit of adding AO to retinal imaging. Adaptive optics is opening a new frontier for clinical research in ophthalmology, providing new information on the early pathological changes of the retinal microstructures in various retinal diseases. We have reviewed AO technology for retinal imaging, providing information on the core components of an AO retinal camera. The most commonly used wavefront sensing and correcting elements are discussed. Furthermore, we discuss current applications of AO imaging to a population of healthy adults and to the most frequent causes of blindness, including diabetic retinopathy, age-related macular degeneration and glaucoma. We conclude our work with a discussion on future clinical prospects for AO retinal imaging. PMID:23271600

Lombardo, Marco; Serrao, Sebastiano; Devaney, Nicholas; Parravano, Mariacristina; Lombardo, Giuseppe

2013-01-01

108

Automatic 3d Mapping Using Multiple Uncalibrated Close Range Images  

NASA Astrophysics Data System (ADS)

Automatic three-dimensional modeling of the real world has been an important research topic in the geomatics and computer vision fields for many years. With the development of commercial digital cameras and modern image processing techniques, close range photogrammetry is widely utilized in many fields such as structure measurement, topographic surveying, architectural and archeological surveying, etc. Non-contact photogrammetry provides methods to determine the 3D locations of objects from two-dimensional (2D) images. The problem of estimating the locations of 3D points from multiple images, which often involves simultaneously estimating both 3D geometry (structure) and camera pose (motion), is commonly known as structure from motion (SfM). In this research a step-by-step approach to generate the 3D point cloud of a scene is considered. After taking images with a camera, we detect corresponding points in each pair of views. Here an efficient SIFT method is used for image matching across large baselines. After that, we retrieve the camera motion and the 3D positions of the matched feature points up to a projective transformation (projective reconstruction). Lacking additional information on the camera or the scene causes parallel lines to appear non-parallel. The results of the SfM computation are much more useful if a metric reconstruction is obtained; therefore, multiple-view Euclidean reconstruction is applied and discussed. To refine and achieve precise 3D points we use a more general and useful approach, namely bundle adjustment. At the end, two real cases (an excavation and a tower) have been reconstructed.
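One core step of the SfM pipeline described above, triangulating a 3D point from two corresponding image points and camera matrices, can be written as a linear (DLT) least-squares problem. The camera matrices and test point below are made-up values for illustration:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    # Each observation (u, v) with camera matrix P contributes two rows of A
    # such that A @ X = 0 for the homogeneous 3D point X; solve by SVD.
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    X = np.linalg.svd(A)[2][-1]
    return X[:3] / X[3]

K = np.array([[800.0, 0, 320], [0, 800, 240], [0, 0, 1]])   # illustrative intrinsics
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), [[-1.0], [0.0], [0.0]]])     # 1-unit horizontal baseline
X_true = np.array([0.5, 0.2, 5.0])
x1 = P1 @ np.append(X_true, 1.0); x1 = x1[:2] / x1[2]       # project into view 1
x2 = P2 @ np.append(X_true, 1.0); x2 = x2[:2] / x2[2]       # project into view 2
```

Triangulating the two synthetic projections recovers the original 3D point.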

Rafiei, M.; Saadatseresht, M.

2013-09-01

109

Active Multispectral 3D Vision Sensor Image Evaluation  

NASA Astrophysics Data System (ADS)

The first laboratory images from an active multispectral 3D imaging sensor under development for cross-country navigation of an autonomous vehicle are presented. Images of the reflected return from three spectral channels, 0.532 µm, 0.802 µm, and 1.064 µm, and a range channel were acquired simultaneously in performance tests during the first stage of development of the sensor. An analysis of the image data showed strong agreement between the actual sensor performance and calculated estimates. Position, shape, and spectral discrimination of objects are evident in the images and will function as input to the autonomous vehicle for locating and classifying obstacles and terrain.

Gleichman, K.; Zuk, D.; Harmon, L.; Bair, M.

1988-05-01

110

A microfabricated 3-D stem cell delivery scaffold for retinal regenerative therapy  

E-print Network

Diseases affecting the retina, such as Age-related Macular Degeneration (AMD) and Retinitis Pigmentosa (RP), result in the degeneration of the photoreceptor cells and can ultimately lead to blindness in patients. There is ...

Sodha, Sonal

2009-01-01

111

3D reconstruction based on CT image and its application  

NASA Astrophysics Data System (ADS)

Reconstructing a 3-D model of the liver and its internal piping (vessel) system and simulating the liver surgical operation can increase the accuracy and safety of the operation, with the aims of minimizing the surgical wound, shortening operation time, increasing the success rate, reducing medical expenses and promoting patient recovery. This paper expounds the technology and methods by which the authors construct the 3-D model of the liver and its internal piping system and simulate the liver surgical operation from CT images. The direct volume rendering method establishes the 3D model of the liver. In the OPENGL environment, a space point rendering method is adopted to display the liver's internal piping system and to simulate the liver surgical operation. Finally, the wavelet transform method is adopted to compress the medical image data.

Zhang, Jianxun; Zhang, Mingmin

2004-03-01

112

Reconstruction of 3D scenes from sequences of images  

NASA Astrophysics Data System (ADS)

Reconstruction of three-dimensional (3D) scenes is an active research topic in the field of computer vision and 3D display. It is a challenge to model 3D objects rapidly and effectively. A 3D model can be extracted from multiple images. The system only requires a sequence of images taken with a camera, without knowledge of the camera parameters, which provides a high degree of flexibility. We focus on quickly merging point clouds of the object from depth map sequences. The whole system combines algorithms from different areas of computer vision, such as camera calibration, stereo correspondence, point cloud splicing and surface reconstruction. The procedure of 3D reconstruction is decomposed into a number of successive steps. Firstly, image sequences are acquired by the camera moving freely around the object. Secondly, the scene depth is obtained by a non-local stereo matching algorithm. Pairwise matching is realized with the Scale Invariant Feature Transform (SIFT) algorithm. An initial matching is then made for the first two images of the sequence. For each subsequent image, processed together with the previous image, the points of interest corresponding to those in previous images are refined or corrected, and the vertical parallax between the images is eliminated. The next step is to calibrate the camera: the intrinsic and external parameters of the camera are calculated, so the relative position and orientation of the camera are obtained. A sequence of depth maps is acquired by using a non-local cost aggregation method for stereo matching. A point cloud sequence is then derived from the scene depths, and a point cloud model is built from the point cloud sequence using the external parameters of the camera. The point cloud model is then approximated by a triangular wire-frame mesh to reduce geometric complexity and to tailor the model to the requirements of computer graphics visualization systems. Finally, the texture is mapped onto the wire-frame model, which can also be used for 3D display. According to the experimental results, we can reconstruct a 3D point cloud model more quickly and efficiently than other methods.
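The step of turning a depth map into a point cloud with known camera intrinsics can be sketched as a pinhole back-projection; the intrinsic parameters (fx, fy, cx, cy) below are illustrative, not from the paper:

```python
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    # Pinhole back-projection: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy
    v, u = np.indices(depth.shape)          # pixel row (v) and column (u) indices
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.stack([x, y, depth], axis=-1).reshape(-1, 3)

# Demo: a flat depth map at 2.0 units, VGA-sized, with the principal point at center.
depth = np.full((480, 640), 2.0)
pts = depth_to_points(depth, 100.0, 100.0, 320.0, 240.0)
```

The pixel at the principal point back-projects onto the optical axis (X = Y = 0) at the stored depth.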

Niu, Bei; Sang, Xinzhu; Chen, Duo; Cai, Yuanfa

2013-08-01

113

3D Measurements in Images using CAD Models George Vosselman  

E-print Network

3D Measurements in Images using CAD Models. George Vosselman, Delft University of Technology, Faculty ... vosselman@geo.tudelft.nl. Keywords: Measurement, Matching, CAD-Models. Abstract: Semi-automatic measurement of objects with regular ... are summarised in section six. 2 Related work. 2.1 Manipulation of wire frames. The tools available in CAD packages

Vosselman, George

114

3-D transformations of images in scanline order  

Microsoft Academic Search

Currently, texture mapping onto projections of 3-D surfaces is time consuming and subject to considerable aliasing errors. Usually the procedure is to perform some inverse mapping from the area of the pixel onto the surface texture. It is difficult to do this correctly. There is an alternate approach where the texture surface is transformed as a 2-D image until it

Ed Catmull; Alvy Ray Smith

1980-01-01

115

3D Plant Modelling via Hyperspectral Imaging Australian National University  

E-print Network

of drought tolerance and flowering behavior. Therefore, robust and accurate plant measurement methods ... 3D Plant Modelling via Hyperspectral Imaging. Jie Liang, Australian National University, Australia; Jun Zhou, Griffith University, Australia, jun.zhou@griffith.edu.au; Xavier Sirault, CSIRO, Australia, xavier.sirault@csiro.au. Abstract: Plant

Zhou, Jun

116

3-D Building Reconstruction from Multi-view Aerial Images  

Microsoft Academic Search

In an automated procedure for stereoscopic mapping, reasoning is always needed when insufficient information due to occlusions, shadows, or indistinctness of image features is encountered. In order to alleviate the influence of this information insufficiency, it is common to include additional geometric or spectral data sources to assist the reconstruction of 3-D man-made buildings. Considering the improvement of geometric information,

Cheng-Yin Chen; Liang-Chien Chen

117

Object-Based 3D X-Ray Imaging  

Microsoft Academic Search

A form of 3D X-ray imaging is introduced, in which the subject material is represented as discrete objects. The surfaces of these objects are derived, accurately and in a novel way, from their outlines in about 10 views, distributed in solid angle, and are represented as arrays of miniature triangular facets. The technique is suitable for a number of important

Ralph. Benjamin

1995-01-01

118

Practical pseudo-3D registration for large tomographic images  

NASA Astrophysics Data System (ADS)

Image registration is a powerful tool in various tomographic applications. Our main focus is on microCT applications in which samples/animals can be scanned multiple times under different conditions or at different time points. For this purpose, a registration tool capable of handling fairly large volumes has been developed, using a novel pseudo-3D method to achieve fast and interactive registration with simultaneous 3D visualization. To reduce computation complexity in 3D registration, we decompose it into several 2D registrations, which are applied to the orthogonal views (transaxial, sagittal and coronal) sequentially and iteratively. After registration in each view, the next view is retrieved with the new transformation matrix for registration. This reduces the computation complexity significantly. For rigid transform, we only need to search for 3 parameters (2 shifts, 1 rotation) in each of the 3 orthogonal views instead of 6 (3 shifts, 3 rotations) for full 3D volume. In addition, the amount of voxels involved is also significantly reduced. For the proposed pseudo-3D method, image-based registration is employed, with Sum of Square Difference (SSD) as the similarity measure. The searching engine is Powell's conjugate direction method. In this paper, only rigid transform is used. However, it can be extended to affine transform by adding scaling and possibly shearing to the transform model. We have noticed that more information can be used in the 2D registration if Maximum Intensity Projections (MIP) or Parallel Projections (PP) is used instead of the orthogonal views. Also, other similarity measures, such as covariance or mutual information, can be easily incorporated. The initial evaluation on microCT data shows very promising results. Two application examples are shown: dental samples before and after treatment and structural changes in materials before and after compression. 
Evaluation of registration accuracy between the pseudo-3D method and a true 3D method has been performed.
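The SSD-based 2D registration at the heart of the pseudo-3D method can be illustrated with a brute-force integer-shift search. The paper searches 2 shifts plus 1 rotation per view with Powell's conjugate direction method; this sketch searches shifts only:

```python
import numpy as np

def best_shift_ssd(fixed, moving, max_shift=5):
    # Exhaustive search over integer 2D shifts minimizing the Sum of Square
    # Difference (SSD), the similarity measure named in the abstract.
    best, best_ssd = (0, 0), np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(moving, dy, axis=0), dx, axis=1)
            ssd = np.sum((fixed - shifted) ** 2)
            if ssd < best_ssd:
                best_ssd, best = ssd, (dy, dx)
    return best

# Synthetic phantom: a bright square, displaced by a known (dy, dx) = (3, -2).
fixed = np.zeros((20, 20)); fixed[5:9, 5:9] = 1.0
moving = np.roll(np.roll(fixed, 3, axis=0), -2, axis=1)
```

The search recovers the inverse of the applied displacement, (-3, 2), at which the SSD is exactly zero.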

Liu, Xuan; Laperre, Kjell; Sasov, Alexander

2014-09-01

119

Optimizing 3D image quality and performance for stereoscopic gaming  

NASA Astrophysics Data System (ADS)

The successful introduction of stereoscopic TV systems, such as Samsung's 3D Ready Plasma, requires high quality 3D content to be commercially available to the consumer. Console and PC games provide the most readily accessible source of high quality 3D content. This paper describes innovative developments in a generic, PC-based game driver architecture that addresses the two key issues affecting 3D gaming: quality and speed. At the heart of the quality issue are the same considerations that studios face producing stereoscopic renders from CG movies: how best to perform the mapping from a geometric CG environment into the stereoscopic display volume. The major difference being that for game drivers this mapping cannot be choreographed by hand but must be automatically calculated in real-time without significant impact on performance. Performance is a critical issue when dealing with gaming. Stereoscopic gaming has traditionally meant rendering the scene twice with the associated performance overhead. An alternative approach is to render the scene from one virtual camera position and use information from the z-buffer to generate a stereo pair using Depth-Image-Based Rendering (DIBR). We analyze this trade-off in more detail and provide some results relating to both 3D image quality and render performance.
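The DIBR approach mentioned above warps the single rendered view into the second eye's view by shifting each pixel by a disparity derived from the z-buffer. A minimal forward-warping sketch, with hole filling and sub-pixel resampling omitted and all parameters illustrative:

```python
import numpy as np

def dibr_view(image, depth, baseline=0.05, focal=500.0):
    # Depth-Image-Based Rendering: disparity = baseline * focal / depth,
    # so nearer pixels shift further; unfilled positions stay 0 (holes).
    h, w = depth.shape
    out = np.zeros_like(image)
    disparity = np.round(baseline * focal / depth).astype(int)
    for y in range(h):
        for x in range(w):
            nx = x - disparity[y, x]
            if 0 <= nx < w:
                out[y, nx] = image[y, x]
    return out

# Demo: a flat scene at uniform depth produces a uniform 1-pixel shift.
img = np.arange(100, dtype=float).reshape(10, 10)
right = dibr_view(img, np.full((10, 10), 25.0))
```

With constant depth the synthesized view is the source image shifted left by one pixel, with a one-column hole at the right edge.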

Flack, Julien; Sanderson, Hugh; Pegg, Steven; Kwok, Simon; Paterson, Daniel

2009-02-01

120

3D wavefront image formation for NIITEK GPR  

NASA Astrophysics Data System (ADS)

The U.S. Department of Defense Humanitarian Demining (HD) Research and Development Program focuses on developing, testing, demonstrating, and validating new technology for immediate use in humanitarian demining operations around the globe. Beginning in the late 1990's, the U.S. Army Countermine Division funded the development of the NIITEK ground penetrating radar (GPR) for detection of anti-tank (AT) landmines. This work is concerned with signal processing algorithms to suppress sources of artifacts in the NIITEK GPR, and formation of three-dimensional (3D) imagery from the resultant data. We first show that the NIITEK GPR data correspond to a 3D Synthetic Aperture Radar (SAR) database. An adaptive filtering method is utilized to suppress ground return and self-induced resonance (SIR) signals that are generated by the interaction of the radar-carrying platform and the transmitted radar signal. We examine signal processing methods to improve the fidelity of imagery for this 3D SAR system using pre-processing methods that suppress Doppler aliasing as well as other side lobe leakage artifacts that are introduced by the radar radiation pattern. The algorithm, known as digital spotlighting, imposes a filtering scheme on the azimuth-compressed SAR data, and manipulates the resultant spectral data to achieve a higher PRF to suppress the Doppler aliasing. We also present the 3D version of the Fourier-based wavefront reconstruction, a computationally-efficient and approximation-free SAR imaging method, for image formation with the NIITEK 3D SAR database.

Soumekh, Mehrdad; Ton, Tuan; Howard, Pete

2009-05-01

121

3D surface recognition and shape estimation from textured images  

Microsoft Academic Search

The authors present a novel and unifying decision-theoretic model-based approach for solving the problem of 3-D surface recognition and orientation estimation given a surface image patch. They concentrate on the subclass of quadric surfaces, mainly planes, cylinders, and spheres. The underlying assumption is that the different surfaces are textured. The authors adopt a model-based approach, wherein in the sensed image

Fernand S. Cohen; M. A. Patel

1988-01-01

122

Practical applications of 3D sonography in gynecologic imaging.  

PubMed

Volume imaging in the pelvis has been well demonstrated to be an extremely useful technique, largely based on its ability to reconstruct the coronal plane of the uterus that usually cannot be visualized using traditional 2-dimensional (2D) imaging. As a result, this technique is now a part of the standard pelvic ultrasound protocol in many institutions. A variety of valuable applications of 3D sonography in the pelvis are discussed in this article. PMID:25444101

Andreotti, Rochelle F; Fleischer, Arthur C

2014-11-01

123

Segmentation of Retinal Arteries in Adaptive Optics Images  

E-print Network

Segmentation of Retinal Arteries in Adaptive Optics Images. Nicolas Lermé, Florence Rossant. In this paper, we present a method for automatically segmenting the walls of retinal arteries in adaptive optics images. Index terms: approximate parallelism, retina imaging. I. INTRODUCTION. Arterial hypertension (AH) and diabetic

Boyer, Edmond

124

Perceptions of Retinal Imaging Technology for Identifying Livestock Exhibits  

Microsoft Academic Search

This paper outlines the results of an online survey about the perceptions of Indiana 4-H Youth Educators on the use of retinal imaging for the purpose of identifying 4-H livestock projects. Indiana has begun a three-year implementation period of retinal imaging, doing away with nose printing as the method of permanent livestock identification. The perceptions relate to the

Christine R. Blomeke; Brian M. Howell; Stephen J. Elliott

2006-01-01

125

Stereotactic mammography imaging combined with 3D US imaging for image guided breast biopsy  

SciTech Connect

Stereotactic X-ray mammography (SM) and ultrasound (US) guidance are both commonly used for breast biopsy. While SM provides three-dimensional (3D) targeting information and US provides real-time guidance, both have limitations. SM is a long and uncomfortable procedure, and the US-guided procedure is inherently two-dimensional (2D), requiring a skilled physician for both safety and accuracy. The authors developed a 3D US-guided biopsy system to be integrated with, and to supplement, SM imaging. Their goal is to be able to biopsy a larger percentage of suspicious masses using US, by clarifying ambiguous structures with SM imaging. Features from SM and US-guided biopsy were combined, including breast stabilization, a confined needle trajectory, and dual-modality imaging. The 3D US-guided biopsy system uses a 7.5 MHz breast probe and is mounted on an upright SM machine for preprocedural imaging. Intraprocedural targeting and guidance were achieved with real-time 2D and near real-time 3D US imaging. Postbiopsy 3D US imaging allowed confirmation that the needle was penetrating the target. The authors evaluated the 3D US-guided biopsy accuracy of their system using test phantoms. To use mammographic imaging information, they registered the SM and 3D US coordinate systems. The 3D positions of targets identified in the SM images were determined with a target localization error (TLE) of 0.49 mm. The z component (x-ray tube to image) of the TLE dominated, with a TLE_z of 0.47 mm. The SM system was then registered to 3D US, with a fiducial registration error (FRE) and target registration error (TRE) of 0.82 and 0.92 mm, respectively. Analysis of the FRE and TRE components showed that these errors were dominated by inaccuracies in the z component, with an FRE_z of 0.76 mm and a TRE_z of 0.85 mm. A stereotactic mammography and 3D US-guided breast biopsy system should include breast compression for stability and safety, and dual-modality imaging for target localization.
The system will provide preprocedural x-ray mammography information in the form of SM imaging along with real-time US imaging for needle guidance to a target. 3D US imaging will also be available for targeting, guidance, and biopsy verification immediately postbiopsy.

Surry, K. J. M.; Mills, G. R.; Bevan, K.; Downey, D. B.; Fenster, A. (Imaging Research Labs, Robarts Research Institute, London, Canada; Department of Medical Biophysics, University of Western Ontario, London, Canada; Department of Radiology, London Health Sciences Centre, London, Canada)

2007-11-15

126

Refraction Correction in 3D Transcranial Ultrasound Imaging  

PubMed Central

We present the first correction of refraction in three-dimensional (3D) ultrasound imaging using an iterative approach that traces propagation paths through a two-layer planar tissue model, applying Snell's law in 3D. This approach is applied to real-time 3D transcranial ultrasound imaging by precomputing delays offline for several skull thicknesses, allowing the user to switch between three sets of delays for phased array imaging at the push of a button. Simulations indicate that refraction correction may be expected to increase sensitivity, reduce beam steering errors, and partially restore lost spatial resolution, with the greatest improvements occurring at the largest steering angles. Distorted images of cylindrical lesions were created by imaging through an acrylic plate in a tissue-mimicking phantom. As a result of correcting for refraction, lesions were restored to 93.6% of their original diameter in the lateral direction and 98.1% of their original shape along the long axis of the cylinders. In imaging two healthy volunteers, the mean brightness increased by 8.3% and showed no spatial dependency. PMID:24275538
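The ray tracing described above hinges on applying Snell's law in 3D at each layer interface. A minimal vector-form sketch follows; the sound speeds and sign conventions are illustrative assumptions, not the authors' code.

```python
import numpy as np

def refract(d, n, c1, c2):
    """Vector form of Snell's law in 3D for acoustics.
    d:  unit incident direction; n: unit surface normal pointing back toward
    the incident side; c1, c2: sound speeds of the two media.
    For sound, sin(theta_t) / sin(theta_i) = c2 / c1.
    Returns the unit refracted direction, or None on total internal reflection."""
    r = c2 / c1
    cos_i = -np.dot(n, d)                    # cosine of incidence angle
    s2 = r * r * (1.0 - cos_i * cos_i)       # sin^2 of transmission angle
    if s2 > 1.0:
        return None                          # total internal reflection
    cos_t = np.sqrt(1.0 - s2)
    return r * d + (r * cos_i - cos_t) * n
```

A faster second medium (such as skull bone relative to soft tissue) bends rays away from the normal, so steep incidence angles can produce total internal reflection, which is why delay correction matters most at large steering angles.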

Lindsey, Brooks D.; Smith, Stephen W.

2014-01-01

127

Simulation of 3D objects into breast tomosynthesis images.  

PubMed

Digital breast tomosynthesis is a new three-dimensional (3D) breast-imaging modality that produces images of cross-sectional planes parallel to the detector plane from a limited number of X-ray projections over a limited angular range. Several technical and clinical parameters have not yet been completely optimised. Some of the open questions could be addressed experimentally; other parameter settings cannot be easily realised in practice and the associated optimisation process requires therefore a theoretical approach. Rather than simulating the complete 3D imaging chain, it is hypothesised that the simulation of small lesions into clinical (or test object) images can be of help in the optimisation process. In the present study, small 3D objects have been simulated into real projection images. Subsequently, these hybrid projection images are reconstructed using the routine clinical reconstruction tools. In this study, the validation of this simulation framework is reported through the comparison between simulated and real objects in reconstructed planes. The results confirm that there is no statistically significant difference between the simulated and the real objects. This suggests that other small mathematical or physiological objects could be simulated with the same approach. PMID:20207750

Shaheen, E; Zanca, F; Sisini, F; Zhang, G; Jacobs, J; Bosmans, H

2010-01-01

128

2D/3D image (facial) comparison using camera matching.  

PubMed

A problem in forensic facial comparison of images of perpetrators and suspects is that distances between fixed anatomical points in the face, which form a good starting point for objective, anthropometric comparison, vary strongly according to the position and orientation of the camera. In the case of a cooperating suspect, a 3D image may be taken using, for example, a laser scanning device. By projecting the 3D image onto a 2D image with the suspect's head in the same pose as that of the perpetrator, using the same focal length and pixel aspect ratio, numerical comparison of (ratios of) distances between fixed points becomes feasible. An experiment was performed in which, starting from two 3D scans and one 2D image of two colleagues, male and female, and using seven fixed anatomical locations in the face, comparisons were made for the matching and non-matching case. Using this method, the non-matching pair cannot be distinguished from the matching pair of faces. Facial expression and resolution of images were all more or less optimal, and the results of the study are not encouraging for the use of anthropometric arguments in the identification process. More research needs to be done, though, on larger sets of facial comparisons. PMID:16337353
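The projection step, rendering the 3D scan in the perpetrator camera's pose with the same focal length and pixel aspect ratio, can be sketched with a pinhole camera model. The parameter names and the pose representation (rotation R, translation t) are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def project(points, f, aspect=1.0, R=np.eye(3), t=np.zeros(3)):
    """Pinhole projection of 3D points (shape (N, 3)) to 2D coordinates.
    The head scan is first moved into the camera frame with pose (R, t),
    then projected with focal length f and pixel aspect ratio `aspect`."""
    cam = points @ R.T + t            # world -> camera coordinates
    x = f * cam[:, 0] / cam[:, 2]     # perspective divide
    y = f * aspect * cam[:, 1] / cam[:, 2]
    return np.stack([x, y], axis=1)
```

Because ratios of projected distances depend on pose and focal length, matching both between the 2D perpetrator image and the projected 3D scan is what makes the anthropometric comparison meaningful.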

Goos, Mirelle I M; Alberink, Ivo B; Ruifrok, Arnout C C

2006-11-10

129

Preliminary comparison of 3D synthetic aperture imaging with Explososcan  

NASA Astrophysics Data System (ADS)

Explososcan is the 'gold standard' for real-time 3D medical ultrasound imaging. In this paper, 3D synthetic aperture imaging is compared to Explososcan by simulation of 3D point spread functions. The simulations mimic a 32×32 element prototype transducer, a dense matrix phased array with a pitch of 300 μm, made by Vermon. For both imaging techniques, 289 emissions are used to image a volume spanning 60° in both the azimuth and elevation directions and 150 mm in depth. This results in a frame rate of 18 Hz for both techniques. The implemented synthetic aperture technique reduces the number of transmit channels from 1024 to 256, compared to Explososcan. In terms of FWHM (full width at half maximum) performance, Explososcan and synthetic aperture were found to perform similarly. At 90 mm depth, Explososcan's FWHM performance is 7% better than that of synthetic aperture. Synthetic aperture improved the cystic resolution, which expresses the ability to detect anechoic cysts in a uniform scattering medium, at all depths except at Explososcan's focus point. Synthetic aperture reduced the cyst radius, R_20dB, at 90 mm depth by 48%. Synthetic aperture imaging was shown to reduce the number of transmit channels by a factor of four and still, in general, improve imaging quality.
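The FWHM figure of merit used in this comparison can be computed directly from a sampled point spread function profile. A minimal sketch follows; linear interpolation of the half-power crossings is an implementation assumption, not necessarily what the authors did.

```python
import numpy as np

def fwhm(x, y):
    """Full width at half maximum of a sampled beam profile.
    x: sample positions (increasing); y: non-negative profile amplitudes.
    The half-maximum crossings are located by linear interpolation."""
    half = y.max() / 2.0
    above = np.where(y >= half)[0]
    i0, i1 = above[0], above[-1]
    # interpolate the left and right half-power crossings
    left = np.interp(half, [y[i0 - 1], y[i0]], [x[i0 - 1], x[i0]]) if i0 > 0 else x[i0]
    right = np.interp(half, [y[i1 + 1], y[i1]], [x[i1 + 1], x[i1]]) if i1 < len(y) - 1 else x[i1]
    return right - left
```

Applied to simulated lateral cuts of the point spread function at each depth, this yields the resolution numbers that the 7% comparison above refers to.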

Rasmussen, Morten Fischer; Hansen, Jens Munk; Férin, Guillaume; Dufait, Rémi; Jensen, Jørgen Arendt

2012-03-01

130

3-D object-oriented image analysis of geophysical data  

NASA Astrophysics Data System (ADS)

Geophysical data are the main source of information about the subsurface. Geophysical techniques are, however, highly non-unique in determining specific physical parameters and boundaries of subsurface objects. To obtain actual physical information, an inversion process is often applied, in which measurements at or above the Earth's surface are inverted into a 2- or 3-D subsurface spatial distribution of the physical property. Interpreting these models in terms of structural objects, related to physical processes, requires a priori knowledge and expert analysis, which is susceptible to subjective choices and is therefore often non-repeatable. In this research, we implemented a recently introduced object-based approach to interpret the 3-D inversion results of a single geophysical technique using the available a priori information and the physical and geometrical characteristics of the interpreted objects. The introduced methodology is semi-automatic and repeatable, and allows the extraction of subsurface structures using 3-D object-oriented image analysis (3-D OOA) in an objective knowledge-based classification scheme. The approach allows for a semi-objective setting of thresholds that can be tested and, if necessary, changed in a very fast and efficient way. These changes require only changing the thresholds used in a so-called ruleset, which is composed of algorithms that extract objects from a 3-D data cube. The approach is tested on a synthetic model, which is based on a priori knowledge of objects present in the study area (Tanzania). Object characteristics and thresholds were well defined in a 3-D histogram of velocity versus depth, and objects were fully retrieved. The real model results showed how 3-D OOA can deal with realistic 3-D subsurface conditions in which the boundaries become fuzzy, the object extensions become unclear and the model characteristics vary with depth due to the different physical conditions.
As expected, the 3-D histogram of the real data was substantially more complex. Still, the 3-D OOA-derived objects were extracted based on their velocity and their depth location. Spatially defined boundaries, based on physical variations, can improve the modelling with spatially dependent parameter information. With 3-D OOA, the non-uniqueness on the location of objects and their physical properties can be potentially significantly reduced.

Fadel, I.; Kerle, N.; van der Meijde, M.

2014-07-01

131

3D acoustic imaging applied to the Baikal Neutrino Telescope  

E-print Network

A hydro-acoustic imaging system was tested in a pilot study on distant localization of elements of the Baikal underwater neutrino telescope. For this innovative approach, based on broadband acoustic echo signals and strictly avoiding any active acoustic elements on the telescope, the imaging system was temporarily installed just below the ice surface, while the telescope stayed in its standard position at 1100 m depth. The system comprised an antenna with four acoustic projectors positioned at the corners of a 50 meter square; acoustic pulses were "linear sweep-spread signals" - multiple-modulated wide-band signals (10-22 kHz) of 51.2 s duration. Three large objects (two string buoys and the central electronics module) were localized by the 3D acoustic imaging, with an accuracy of ~0.2 m (along the beam) and ~1.0 m (transverse). We discuss signal forms and parameters necessary for improved 3D acoustic imaging of the telescope, and suggest a layout of a possible stationary bottom-based 3D imaging setup. The presented technique may be of interest for neutrino telescopes of km3-scale and beyond, as a flexible temporary or as a stationary tool to localize basic telescope elements, while these are completely passive.

K. G. Kebkal; R. Bannasch; O. G. Kebkal; A. I. Panfilov; R. Wischnewski

2008-11-07

132

1024 pixels single photon imaging array for 3D ranging  

NASA Astrophysics Data System (ADS)

Three-dimensional (3D) acquisition systems are driving applications in many research fields. Nowadays 3D acquisition systems are used in many applications, such as the cinema industry or automotive (for active safety systems). Depending on the application, systems present different features, for example color sensitivity, bi-dimensional image resolution, distance measurement accuracy and acquisition frame rate. The system we developed acquires 3D movies using indirect time of flight (iTOF), starting from the phase delay measurement of a sinusoidally modulated light source. The system acquires live movies with a frame rate of up to 50 frames/s over distances between 10 cm and 7.5 m.
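The phase-to-distance relation underlying indirect time of flight is d = c·φ/(4π·f_mod), with unambiguous range c/(2·f_mod). A 20 MHz modulation frequency, which happens to yield exactly the 7.5 m range quoted above, is an assumption used here for illustration; the abstract does not state the modulation frequency.

```python
import math

C = 299_792_458.0  # speed of light, m/s

def itof_distance(phase_rad, f_mod_hz):
    """Indirect time-of-flight: distance from the measured phase delay of a
    sinusoidally modulated source, d = c * phi / (4 * pi * f_mod)."""
    return C * phase_rad / (4.0 * math.pi * f_mod_hz)

def unambiguous_range(f_mod_hz):
    """Maximum distance before the phase wraps: c / (2 * f_mod)."""
    return C / (2.0 * f_mod_hz)
```

Phases beyond 2π wrap around, so targets past the unambiguous range alias back to short distances, which is why the modulation frequency bounds the working range of the sensor.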

Bellisai, S.; Guerrieri, F.; Tisa, S.; Zappa, F.; Tosi, A.; Giudice, A.

2011-01-01

133

Large distance 3D imaging of hidden objects  

NASA Astrophysics Data System (ADS)

Imaging systems in millimeter waves (MMW) are required for applications in medicine, communications, homeland security, and space technology. This is because there is no known ionization hazard for biological tissue, and atmospheric attenuation in this range of the spectrum is low compared to that of infrared and optical rays. The lack of an inexpensive room-temperature detector makes it difficult to provide a suitable real-time implementation for the above applications. A 3D MMW imaging system based on chirp radar was studied previously using a scanning imaging system with a single detector. The system presented here proposes to employ the chirp radar method with a Glow Discharge Detector (GDD) focal plane array (FPA of plasma-based detectors) using heterodyne detection. The intensity at each pixel in the GDD FPA yields the usual 2D image. The value of the intermediate frequency (IF) yields the range information at each pixel. This will enable 3D MMW imaging. In this work we experimentally demonstrate the feasibility of implementing an imaging system based on radar principles and an FPA of inexpensive detectors. This imaging system is shown to be capable of imaging objects from distances of at least 10 meters.

Rozban, Daniel; Aharon Akram, Avihai; Kopeika, N. S.; Abramovich, A.; Levanon, Assaf

2014-06-01

134

3-D Image Denoising By Local Smoothing And Nonparametric Regression, by Partha Sarathi Mukherjee and Peihua Qiu

E-print Network

Three-dimensional (3-D) images are becoming increasingly popular in image applications, such as magnetic resonance imaging (MRI), functional MRI (fMRI), and other imaging applications. Observed 3-D images often contain

Qiu, Peihua

135

Linear tracking for 3-D medical ultrasound imaging.  

PubMed

As clinical applications grow, there is rapid technical development of 3-D ultrasound imaging. Compared with 2-D ultrasound imaging, 3-D ultrasound imaging can provide improved qualitative and quantitative information for various clinical applications. In this paper, we propose a novel tracking method for a freehand 3-D ultrasound imaging system with improved portability, reduced degrees of freedom, and reduced cost. We designed a sliding track with a linear position sensor attached, which transmitted positional data via a wireless communication module based on Bluetooth, resulting in a wireless spatial tracking modality. A traditional 2-D ultrasound probe fixed to the position sensor on the sliding track was used to obtain real-time B-scans, and the positions of the B-scans were simultaneously acquired when moving the probe along the track in a freehand manner. In the experiments, the proposed method was applied to ultrasound phantoms and real human tissues. The results demonstrated that the new system outperformed a previously developed freehand system based on a traditional six-degree-of-freedom spatial sensor in phantom and in vivo studies, indicating its merit in clinical applications for human tissues and organs. PMID:23757592

Huang, Qing-Hua; Yang, Zhao; Hu, Wei; Jin, Lian-Wen; Wei, Gang; Li, Xuelong

2013-12-01

136

3D imaging: how to achieve highest accuracy  

NASA Astrophysics Data System (ADS)

The generation of 3D information from images is a key technology in many different areas, e.g. in 3D modeling and representation of architectural or heritage objects, in human body motion tracking and scanning, in 3D scene analysis of traffic scenes, in industrial applications and many more. The basic concepts rely on mathematical representations of central perspective viewing as they are widely known from photogrammetry or computer vision approaches. The objectives of these methods differ, more or less, from high-precision and well-structured measurements in (industrial) photogrammetry to fully automated, non-structured applications in computer vision. Accuracy and precision are critical issues for the 3D measurement of industrial, engineering or medical objects. At the state of the art, photogrammetric multi-view measurements achieve relative precisions in the order of 1:100000 to 1:200000, and relative accuracies with respect to retraceable lengths in the order of 1:50000 to 1:100000 of the largest object diameter. In order to obtain these figures a number of influencing parameters have to be optimized. These include, among others: physical representation of the object surface (targets, texture), illumination and light sources, imaging sensors, cameras and lenses, calibration strategies (camera model), orientation strategies (bundle adjustment), image processing of homologous features (target measurement, stereo and multi-image matching), and representation of object or workpiece coordinate systems and object scale. The paper discusses the above-mentioned parameters and offers strategies for obtaining the highest accuracy in object space. Practical examples of high-quality stereo camera measurements and multi-image applications are used to prove the relevance of high accuracy in different applications, ranging from medical navigation to static and dynamic industrial measurements.
In addition, standards for accuracy verifications are presented and demonstrated by practical examples and tests.

Luhmann, Thomas

2011-07-01

137

Automated Recognition of 3D Features in GPIR Images  

NASA Technical Reports Server (NTRS)

A method of automated recognition of three-dimensional (3D) features in images generated by ground-penetrating imaging radar (GPIR) is undergoing development. GPIR 3D images can be analyzed to detect and identify such subsurface features as pipes and other utility conduits. Until now, much of the analysis of GPIR images has been performed manually by expert operators who must visually identify and track each feature. The present method is intended to satisfy a need for more efficient and accurate analysis by means of algorithms that can automatically identify and track subsurface features, with minimal supervision by human operators. In this method, data from multiple sources (for example, data on different features extracted by different algorithms) are fused together for identifying subsurface objects. The algorithms of this method can be classified in several different ways. In one classification, the algorithms fall into three classes: (1) image-processing algorithms, (2) feature- extraction algorithms, and (3) a multiaxis data-fusion/pattern-recognition algorithm that includes a combination of machine-learning, pattern-recognition, and object-linking algorithms. The image-processing class includes preprocessing algorithms for reducing noise and enhancing target features for pattern recognition. The feature-extraction algorithms operate on preprocessed data to extract such specific features in images as two-dimensional (2D) slices of a pipe. Then the multiaxis data-fusion/ pattern-recognition algorithm identifies, classifies, and reconstructs 3D objects from the extracted features. In this process, multiple 2D features extracted by use of different algorithms and representing views along different directions are used to identify and reconstruct 3D objects. 
In object linking, which is an essential part of this process, features identified in successive 2D slices and located within a threshold radius of identical features in adjacent slices are linked in a directed-graph data structure. Relative to past approaches, this multiaxis approach offers the advantages of more reliable detections, better discrimination of objects, and provision of redundant information, which can be helpful in filling gaps in feature recognition by one of the component algorithms. The image-processing class also includes postprocessing algorithms that enhance identified features to prepare them for further scrutiny by human analysts (see figure). Enhancement of images as a postprocessing step is a significant departure from traditional practice, in which enhancement of images is a preprocessing step.

Park, Han; Stough, Timothy; Fijany, Amir

2007-01-01

138

Monopulse radar 3-D imaging and application in terminal guidance radar  

NASA Astrophysics Data System (ADS)

Monopulse radar 3-D imaging integrates ISAR, monopulse angle measurement and 3-D image processing to obtain a 3-D image that reflects the real size of a target; any two of the three measurement parameters, namely the azimuth difference beam, the elevation difference beam and the radial range, can be used to form a 3-D image of a 3-D object. The basic principles of monopulse radar 3-D imaging are briefly introduced; the effects of changes in target carriage (including yaw, pitch, roll and movement of the target itself) on 3-D imaging, and 3-D motion compensation based on the chirp rate and Doppler frequency f_d, are analyzed; and the application of monopulse radar 3-D imaging to terminal guidance radars is forecast. The computer simulation results show that monopulse radar 3-D imaging has apparent advantages in distinguishing a target from surrounding interference and in precise assault on a vital part of a target, and is of great importance for terminal guidance radars.

Xu, Hui; Qin, Guodong; Zhang, Lina

2007-11-01

139

Automated Identification of Fiducial Points on 3D Torso Images.  

PubMed

Breast reconstruction is an important part of the breast cancer treatment process for many women. Recently, 2D and 3D images have been used by plastic surgeons for evaluating surgical outcomes. Distances between different fiducial points are frequently used as quantitative measures for characterizing breast morphology. Fiducial points can be directly marked on subjects for direct anthropometry, or can be manually marked on images. This paper introduces novel algorithms to automate the identification of fiducial points in 3D images. Automating the process will make measurements of breast morphology more reliable, reducing the inter- and intra-observer bias. Algorithms to identify three fiducial points, the nipples, sternal notch, and umbilicus, are described. The algorithms used for localization of these fiducial points are formulated using a combination of surface curvature and 2D color information. Comparison of the 3D co-ordinates of automatically detected fiducial points and those identified manually, and geodesic distances between the fiducial points are used to validate algorithm performance. The algorithms reliably identified the location of all three of the fiducial points. We dedicate this article to our late colleague and friend, Dr. Elisabeth K. Beahm. Elisabeth was both a talented plastic surgeon and physician-scientist; we deeply miss her insight and her fellowship. PMID:25288903

Kawale, Manas M; Reece, Gregory P; Crosby, Melissa A; Beahm, Elisabeth K; Fingeret, Michelle C; Markey, Mia K; Merchant, Fatima A

2013-01-01

140

Radiometric Quality Evaluation of INSAT-3D Imager Data  

NASA Astrophysics Data System (ADS)

INSAT-3D is an advanced meteorological satellite of ISRO which acquires imagery in optical and infra-red (IR) channels for the study of weather dynamics in the Indian subcontinent region. In this paper, the methodology of radiometric quality evaluation for Level-1 products of the Imager, one of the payloads onboard INSAT-3D, is described. Firstly, the overall visual quality of a scene is computed in terms of dynamic range, edge sharpness or modulation transfer function (MTF), presence of striping, and other image artefacts. Uniform targets in desert and sea regions are identified, for which a detailed radiometric performance evaluation of the IR channels is carried out. The mean brightness temperature (BT) of each target is computed and validated against independently generated radiometric references. Further, diurnal/seasonal trends in target BT values and radiometric uncertainty or sensor noise are studied. Results of radiometric quality evaluation over a duration of eight months (January to August 2014) and a comparison of radiometric consistency pre/post yaw flip of the satellite are presented. Radiometric analysis indicates that INSAT-3D images have high contrast (MTF > 0.2) and low striping effects. A bias of <4 K is observed in the brightness temperature values of the TIR-1 channel measured during January-August 2014, indicating consistent radiometric calibration. Diurnal and seasonal analysis shows that the noise-equivalent differential temperature (NEdT) for the IR channels is consistent and well within specifications.

Prakash, S.; Jindal, D.; Badal, N.; Kartikeyan, B.; Gopala Krishna, B.

2014-11-01

141

Triangulation Based 3D Laser Imaging for Fracture Orientation Analysis  

NASA Astrophysics Data System (ADS)

Laser imaging has recently been identified as a potential tool for rock mass characterization. This contribution focuses on the application of triangulation based, short-range laser imaging to determine fracture orientation and surface texture. This technology measures the distance to the target by triangulating the projected and reflected laser beams, and also records the reflection intensity. In this study, we acquired 3D laser images of rock faces using the Laser Camera System (LCS), a portable instrument developed by Neptec Design Group (Ottawa, Canada). The LCS uses an infrared laser beam and is immune to the lighting conditions. The maximum image resolution is 1024 x 1024 volumetric image elements. Depth resolution is 0.5 mm at 5 m. An above ground field trial was conducted at a blocky road cut with well defined joint sets (Kingston, Ontario). An underground field trial was conducted at the Inco 175 Ore body (Sudbury, Ontario) where images were acquired in the dark and the joint set features were more subtle. At each site, from a distance of 3 m away from the rock face, a grid of six images (approximately 1.6 m by 1.6 m) was acquired at maximum resolution with 20% overlap between adjacent images. This corresponds to a density of 40 image elements per square centimeter. Polyworks, a high density 3D visualization software tool, was used to align and merge the images into a single digital triangular mesh. The conventional method of determining fracture orientations is by manual measurement using a compass. In order to be accepted as a substitute for this method, the LCS should be capable of performing at least to the capabilities of manual measurements. To compare fracture orientation estimates derived from the 3D laser images to manual measurements, 160 inclinometer readings were taken at the above ground site. Three prominent joint sets (strike/dip: 236/09, 321/89, 325/01) were identified by plotting the joint poles on a stereonet. 
Underground, two main joint sets (strike/dip: 060/00, 114/86) were identified from 49 manual inclinometer measurements. A stereonet of joint poles from the 3D laser data was generated using the commercial software Split-FX. Joint sets were identified successfully and their orientations correlated well with the hand measurements. However, Split-FX overlays a simple 2D grid of equal-sized triangles onto the 3D surface and requires significant user input. In a more automated approach, we have developed a MATLAB script which directly imports the Polyworks 3D triangular mesh. A typical mesh is composed of over 1 million triangles of variable sizes: smooth regions are represented by large triangles, whereas rough surfaces are captured by several smaller triangles. Using the triangle vertices, the script computes the strike and dip of each triangle. This approach opens possibilities for statistical analysis of a large population of fracture orientation estimates, including surface texture. The methodology will be used to evaluate both synthetic and field data.
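The per-triangle strike-and-dip computation described above can be sketched from a triangle's vertices via its normal vector (Python shown instead of the authors' MATLAB; the right-hand-rule strike convention is an assumption):

```python
import numpy as np

def strike_dip(p1, p2, p3):
    """Strike and dip (degrees, right-hand rule) of the plane through three
    triangle vertices, with z up. A sketch of the geometry, not the authors'
    exact script."""
    n = np.cross(np.subtract(p2, p1), np.subtract(p3, p1))
    if n[2] < 0:                                   # force an upward-pointing normal
        n = -n
    n = n / np.linalg.norm(n)
    dip = np.degrees(np.arccos(n[2]))              # tilt of normal from vertical
    strike = np.degrees(np.arctan2(n[0], n[1])) - 90.0  # dip direction minus 90
    return strike % 360.0, dip
```

Computing this over every triangle in the mesh turns the surface model into a large population of orientation estimates suitable for stereonet plotting and statistical analysis.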

Mah, J.; Claire, S.; Steve, M.

2009-05-01

142

Pavement cracking measurements using 3D laser-scan images  

NASA Astrophysics Data System (ADS)

Pavement condition surveying is vital for pavement maintenance programs that ensure ride quality and traffic safety. This paper first introduces an automated pavement inspection system which uses a three-dimensional (3D) camera and structured laser light to acquire dense transverse profiles of a pavement lane surface from a moving vehicle. After calibration, the 3D system can yield a depth resolution of 0.5 mm and a transverse resolution of 1.56 mm per pixel at 1.4 m camera height from the ground. The scanning rate of the camera can be set to a maximum of 5000 lines per second, allowing the density of scanned profiles to vary with the vehicle's speed. The paper then illustrates the algorithms that utilize 3D information to detect pavement distress, such as transverse, longitudinal and alligator cracking, and presents field tests of the system's repeatability when scanning a sample pavement in multiple runs at the same vehicle speed, at different vehicle speeds and under different weather conditions. The results show that this dedicated 3D system can capture accurate pavement images that detail surface distress, and obtain consistent crack measurements in repeated tests and under different driving and lighting conditions.

Ouyang, W.; Xu, B.

2013-10-01

143

Combined registration of 3D tibia and femur implant models in 3D magnetic resonance images  

NASA Astrophysics Data System (ADS)

The most frequent reasons for revision of total knee arthroplasty are loosening and abnormal axial alignment leading to unphysiological kinematics of the knee implant. To get an idea of the postoperative kinematics of the implant, it is essential to determine the position and orientation of the tibial and femoral prostheses. Therefore we developed a registration method for fitting 3D CAD models of knee joint prostheses into a 3D MR image. This rigid registration is the basis for a quantitative analysis of the kinematics of knee implants. Firstly, the surface data of the prosthesis models are converted into a voxel representation; a recursive algorithm determines all boundary voxels of the original triangular surface data. Secondly, an initial preconfiguration of the implants by the user is necessary for the following step: the user has to perform a rough preconfiguration of both prosthesis models so that the fine matching process gets a reasonable starting point. After that, an automated gradient-based fine matching process determines the best absolute position and orientation: this iterative process changes all 6 parameters (3 rotational and 3 translational) of a model by a minimal amount until a maximum value of the matching function is reached. To examine the spread of the final solutions of the registration, the interobserver variability was measured in a group of testers. This variability, calculated via the relative standard deviation, improved from about 50% (pure manual registration) to 0.5% (rough manual preconfiguration followed by the automatic fine matching process).
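The iterative fine-matching step can be sketched as coordinate-wise hill climbing over the 6 pose parameters with a user-supplied matching function. This is an illustrative stand-in for the idea of nudging each parameter by a minimal amount until the matching score peaks, not the authors' gradient implementation.

```python
import numpy as np

def refine_pose(match, p0, step=1e-3, iters=200):
    """Maximize a matching score over 6 pose parameters
    (3 rotations, 3 translations). `match` maps a length-6 parameter vector
    to a scalar score (an assumption about the interface). Each parameter is
    nudged by +/-step; changes are kept only if the score improves, and the
    step shrinks when no parameter improves."""
    p = np.asarray(p0, dtype=float)
    best = match(p)
    for _ in range(iters):
        improved = False
        for i in range(6):
            for delta in (step, -step):
                q = p.copy()
                q[i] += delta
                s = match(q)
                if s > best:
                    p, best, improved = q, s, True
        if not improved:
            step /= 2.0          # refine by a smaller "minimal amount"
            if step < 1e-8:
                break
    return p, best
```

In the paper's setting the score would measure agreement between the voxelized prosthesis model and the MR image at the candidate pose; here any smooth score function demonstrates the search behavior.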

Englmeier, Karl-Hans; Siebert, Markus; von Eisenhart-Rothe, Ruediger; Graichen, Heiko

2008-03-01

144

Sparse aperture 3D passive image sensing and recognition  

NASA Astrophysics Data System (ADS)

The way we perceive, capture, store, communicate and visualize the world has greatly changed in the past century. Novel three-dimensional (3D) imaging and display systems are being pursued in both academic and industrial settings. In many cases, these systems have revolutionized traditional approaches and/or enabled new technologies in other disciplines, including medical imaging and diagnostics, industrial metrology, entertainment, robotics, as well as defense and security. In this dissertation, we focus on novel aspects of sparse aperture multi-view imaging systems and their application in quantum-limited object recognition in two separate parts. In the first part, two concepts are proposed. First, a solution is presented that involves a generalized framework for 3D imaging using randomly distributed sparse apertures. Second, a method is suggested to extract the profile of objects in the scene through statistical properties of the reconstructed light field. In both cases, experimental results are presented that demonstrate the feasibility of the techniques. In the second part, the application of 3D imaging systems to sensing and recognition of objects is addressed. In particular, we focus on the scenario in which only tens of photons reach the sensor from the object of interest, as opposed to hundreds of billions of photons under normal imaging conditions. At this level, the quantum-limited behavior of light dominates and traditional object recognition practices may fail. We suggest a likelihood-based object recognition framework that incorporates the physics of sensing at quantum-limited conditions. Sensor dark noise has been modeled and taken into account. This framework is applied to 3D sensing of thermal objects using visible-spectrum detectors. Thermal objects as cold as 250 K are shown to provide enough signature photons to be sensed and recognized within background and dark noise with mature, visible-band, image-forming optics and detector arrays.
The results suggest that one might not need to venture into exotic and expensive detector arrays and associated optics for sensing room-temperature thermal objects in complete darkness.

Daneshpanah, Mehdi

145

Large Scale 3D Image Reconstruction in Optical Interferometry  

E-print Network

Astronomical optical interferometers (OI) sample the Fourier transform of the intensity distribution of a source at the observation wavelength. Because of rapid atmospheric perturbations, the phases of the complex Fourier samples (visibilities) cannot be directly exploited, and instead linear relationships between the phases are used (phase closures and differential phases). Consequently, specific image reconstruction methods have been devised in the last few decades. Modern polychromatic OI instruments are now paving the way to multiwavelength imaging. This paper presents the derivation of a spatio-spectral ("3D") image reconstruction algorithm called PAINTER (Polychromatic opticAl INTErferometric Reconstruction software). The algorithm is able to solve large-scale problems. It relies on an iterative process, which alternates estimation of polychromatic images and of complex visibilities. The complex visibilities are not only estimated from squared moduli and closure phases, but also from differential phase...

Schutz, Antony; Mary, David; Thiébaut, Éric; Soulez, Ferréol

2015-01-01

146

Phase Sensitive Cueing for 3D Objects in Overhead Images  

SciTech Connect

A 3D solid model-aided object cueing method that matches phase angles of directional derivative vectors at image pixels to phase angles of vectors normal to projected model edges is described. It is intended for finding specific types of objects at arbitrary position and orientation in overhead images, independent of spatial resolution, obliqueness, acquisition conditions, and type of imaging sensor. It is shown that the phase similarity measure can be efficiently evaluated over all combinations of model position and orientation using the FFT. The highest degree of similarity over all model orientations is captured in a match surface of similarity values vs. model position. Unambiguous peaks in this surface are sorted in descending order of similarity value, and the small image thumbnails that contain them are presented to human analysts for inspection in sorted order.

Paglieroni, D W; Eppler, W G; Poland, D N

2005-02-18

147

3D Lunar Terrain Reconstruction from Apollo Images  

NASA Technical Reports Server (NTRS)

Generating accurate three-dimensional planetary models is becoming increasingly important as NASA plans manned missions to return to the Moon in the next decade. This paper describes a 3D surface reconstruction system called the Ames Stereo Pipeline that is designed to produce such models automatically by processing orbital stereo imagery. We discuss two important core aspects of this system: (1) refinement of satellite station positions and pose estimates through least squares bundle adjustment; and (2) a stochastic plane fitting algorithm that generalizes the Lucas-Kanade method for optimal matching between stereo pair images. These techniques allow us to automatically produce seamless, highly accurate digital elevation models from multiple stereo image pairs while significantly reducing the influence of image noise. Our technique is demonstrated on a set of 71 high-resolution scanned images from the Apollo 15 mission.

Broxton, Michael J.; Nefian, Ara V.; Moratto, Zachary; Kim, Taemin; Lundy, Michael; Segal, Aleksandr V.

2009-01-01

148

Quantitative 3D Optical Imaging: Applications in Dosimetry and Biophysics  

NASA Astrophysics Data System (ADS)

Optical-CT has been shown to be a potentially useful imaging tool for the two very different spheres of biologists and radiation therapy physicists, but it has yet to live up to that potential. In radiation therapy, researchers have used optical-CT for the readout of 3D dosimeters, but it is not yet a clinically relevant tool, as the technology is too slow to be considered practical. Biologists have used the technique for structural imaging, but have struggled with emission tomography, as the reality of photon attenuation for both excitation and emission has made the images quantitatively irrelevant. Dosimetry. The DLOS (Duke Large field of view Optical-CT Scanner) was designed and constructed to make 3D dosimetry utilizing optical-CT a fast and practical tool while maintaining the readout accuracy of the previous, slower readout technologies. Upon construction/optimization/implementation of several components, including a diffuser, band-pass filter, registration mount and fluid filtration system, the dosimetry system provides high-quality data comparable to or exceeding that of commercial products. In addition, a stray light correction algorithm was tested and implemented. The DLOS, in combination with the 3D dosimeter it was designed for, PRESAGE™, then underwent rigorous commissioning and benchmarking tests validating its performance against gold-standard data, including a set of 6 irradiations. DLOS commissioning tests resulted in sub-mm isotropic spatial resolution (MTF > 0.5 for frequencies of 1.5 lp/mm) and a dynamic range of 60 dB. Flood field uniformity was 10% and stable after 45 minutes. Stray light proved to be small, due to telecentricity, but even the residual can be removed through deconvolution.
Benchmarking tests showed the mean 3D passing gamma rate (3%, 3 mm, 5% dose threshold) over the 6 benchmark data sets was 97.3% +/- 0.6% (range 96%-98%) for scans totaling 10 minutes, indicating excellent ability to perform 3D dosimetry while improving the speed of readout. Noise was low at 2% for 2 mm reconstructions. The DLOS/PRESAGE™ benchmark tests show consistently excellent performance, with very good agreement with simple known distributions. The telecentric design was critical to enabling fast (~15 min) imaging with minimal stray light artifacts. The system produces accurate isotropic 2 mm³ dose data over clinical volumes (e.g. phantoms of 16 cm diameter and 12 cm height), and represents a uniquely useful and versatile new tool for commissioning complex radiotherapy techniques. The system also has wide versatility, and has successfully been used in preliminary tests with protons and with kV irradiations. Biology. Attenuation corrections for optical-emission-CT were done by modeling physical parameters of the imaging setup within the framework of an ordered subset expectation maximization (OSEM) iterative reconstruction algorithm. This process has a well-documented history in single photon emission computed tomography (SPECT), but is inherently simpler there due to the lack of excitation photons to account for. The excitation source strength distribution and the excitation and emission attenuation were modeled. The accuracy of the correction was investigated by imaging phantoms containing known distributions of attenuation and fluorophores. The correction was validated on a manufactured phantom designed to give uniform emission in a central cuboidal region, and later applied to a cleared mouse brain with GFP (green fluorescent protein)-labeled vasculature and a cleared 4T1 xenograft flank tumor with constitutive RFP (red fluorescent protein). Reconstructions were compared to corresponding slices imaged with a fluorescent dissection microscope.
Significant optical-ECT attenuation artifacts were observed in the uncorrected phantom images, which appeared up to 80% less intense than the verification image in the central region. The corrected phantom images showed excellent agreement with the verification image, with only slight variations. The corrected tissue sample reconstructions showed general agreement with the verification images. Comp

Thomas, Andrew Stephen

149

Feature detection on 3D images of dental imprints  

NASA Astrophysics Data System (ADS)

A computer vision approach for the extraction of feature points on 3D images of dental imprints is presented. The positions of feature points are needed for the measurement of a set of parameters for automatic diagnosis of malocclusion problems in orthodontics. The system for the acquisition of the 3D profile of the imprint, the procedure for the detection of the interstices between teeth, and the approach for the identification of the type of tooth are described, as well as the algorithm for the reconstruction of the surface of each type of tooth. A new approach for the detection of feature points, called the watershed algorithm, is described in detail. The algorithm is a two-stage procedure which tracks the position of local minima at four different scales and produces a final map of the positions of the minima. Experimental results of the application of the watershed algorithm to actual 3D images of dental imprints are presented for molars, premolars and canines. The segmentation approach for the analysis of the shape of incisors is also described in detail.

Mokhtari, Marielle; Laurendeau, Denis

1994-09-01

150

Image to Point Cloud Method of 3D-MODELING  

NASA Astrophysics Data System (ADS)

This article describes a method of constructing 3D models of objects (buildings, monuments) based on digital images and a point cloud obtained by a terrestrial laser scanner. The first step is the automated determination of the exterior orientation parameters of a digital image. To do this, we have to find corresponding points between the image and the point cloud. Before searching for corresponding points, a quasi-image of the point cloud is generated. The SIFT algorithm is then applied to the quasi-image and the real image to find corresponding points, from which the exterior orientation parameters of the image are calculated. The second step is construction of the vector object model. Vectorization is performed by an operator in an interactive mode using a single image. Spatial coordinates of the model are calculated automatically from the cloud points. In addition, automatic edge detection with interactive editing is available. Edge detection is performed on the point cloud and on the image, with subsequent identification of the correct edges. Experimental studies of the method have demonstrated its efficiency in the case of building facade modeling.

Chibunichev, A. G.; Galakhov, V. P.

2012-07-01

151

3D remote sensing images data organization and web publication  

NASA Astrophysics Data System (ADS)

This paper presents a method for organizing 3D remote sensing images and publishing them quickly. We use two levels of grid-based spatial index to organize massive images. First, we divide a huge digital city image into many map sheets (big images). All map sheets form a matrix structure, and we use a row number and column number to encode every map sheet. Second, using resampling and bilinear interpolation, we build a pyramid for every map sheet to form a multi-scale hierarchical structure; while building the pyramid, we apply JPEG compression to produce JPEG image files, so the number of output image files equals the number of pyramid layers. Third, we divide every pyramid layer image into many small image tiles of 256 × 256 pixels. The small tiles of each pyramid layer image also form a matrix structure, and we again use a row number and column number to encode every tile. We create a file directory for each map sheet to store all of its tiles. We neatly combine the spatial index structure with the file name of each tile, which enables the server to return a tile to the browser side very quickly without any query operation. With the proposed method, we can provide users with a fast and efficient tool to publish their own spatial information without any programming work. The system performance is very good, and the response time is almost identical for images of different sizes.
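The tile-pyramid indexing described in this record can be sketched as follows; the function names and the directory layout are hypothetical, chosen only to illustrate how encoding the row/column indices into the file name lets a server resolve a tile without any query:

```python
import math

TILE = 256  # tile size in pixels, as in the paper


def pyramid_levels(width, height, tile=TILE):
    """Number of pyramid layers needed until the coarsest level fits in one tile.

    Each layer halves both dimensions (resampling with bilinear interpolation).
    """
    levels = 1
    while width > tile or height > tile:
        width = math.ceil(width / 2)
        height = math.ceil(height / 2)
        levels += 1
    return levels


def tile_path(sheet_row, sheet_col, level, tile_row, tile_col):
    """Encode a tile's full spatial index directly in its file path, so a
    request maps to a file on disk with no database lookup."""
    return f"sheet_{sheet_row}_{sheet_col}/L{level}/r{tile_row}_c{tile_col}.jpg"
```
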

Tong, Hengjian; Zhang, Yun; Shao, Zhenfeng

2008-12-01

152

Retinal vessel caliber and lifelong neuropsychological functioning: retinal imaging as an investigative tool for cognitive epidemiology.  

PubMed

Why do more intelligent people live healthier and longer lives? One possibility is that intelligence tests assess health of the brain, but psychological science has lacked technology to evaluate this hypothesis. Digital retinal imaging, a new, noninvasive method to visualize microcirculation in the eye, may reflect vascular conditions in the brain. We studied the association between retinal vessel caliber and neuropsychological functioning in the representative Dunedin birth cohort. Wider venular caliber was associated with poorer neuropsychological functioning at midlife, independently of potentially confounding factors. This association was not limited to any specific test domain and extended to informants' reports of cohort members' cognitive difficulties in everyday life. Moreover, wider venular caliber was associated with lower childhood IQ tested 25 years earlier. The findings indicate that retinal venular caliber may be an indicator of neuropsychological health years before the onset of dementing diseases and suggest that digital retinal imaging may be a useful investigative tool for psychological science. PMID:23678508

Shalev, Idan; Moffitt, Terrie E; Wong, Tien Y; Meier, Madeline H; Houts, Renate M; Ding, Jie; Cheung, Carol Y; Ikram, M Kamran; Caspi, Avshalom; Poulton, Richie

2013-07-01

153

The 3D model control of image processing  

NASA Technical Reports Server (NTRS)

Telerobotics studies remote control of distant robots by a human operator using supervisory or direct control. Even if the robot manipulator has vision or other senses, problems arise involving control, communications, and delay. The communication delays that may be expected with telerobots working in space stations while being controlled from an Earth lab have led to a number of experiments attempting to circumvent the problem. This delay in communication is a main motivating factor in moving from well-understood instantaneous hands-on manual control to less well understood supervisory control; the ultimate step would be the realization of a fully autonomous robot. The 3D model control plays a crucial role in resolving many conflicting image processing problems that are inherent in the bottom-up approach of most current machine vision processes. The 3D model control approach is also capable of providing the necessary visual feedback information for both the control algorithms and the human operator.

Nguyen, An H.; Stark, Lawrence

1989-01-01

154

Imaging PVC gas pipes using 3-D GPR  

SciTech Connect

Over the years, many enhancements have been made by the oil and gas industry to improve the quality of seismic images. The GPR project at GTRI borrows heavily from these technologies in order to produce 3-D GPR images of PVC gas pipes. As will be demonstrated, improvements in GPR data acquisition, 3-D processing and visualization schemes yield good images of PVC pipes in the subsurface. Data have been collected in cooperation with the local gas company and at a test facility in Texas. Surveys were conducted over both a metal pipe and PVC pipes of diameters ranging from 1/2 in. to 4 in. at depths from 1 ft to 3 ft in different soil conditions. The metal pipe produced very good reflections and was used to fine tune and optimize the processing run stream. It was found that the following steps significantly improve the overall image: (1) Statics for drift and topography compensation, (2) Deconvolution, (3) Filtering and automatic gain control, (4) Migration for focusing and resolution, and (5) Visualization optimization. The processing flow implemented is relatively straightforward, simple to execute and robust under varying conditions. Future work will include testing resolution limits, effects of soil conditions, and leak detection.

Bradford, J.; Ramaswamy, M.; Peddy, C. [GTRI/HARC, Woodlands, TX (United States)

1996-11-01

155

3D range scan enhancement using image-based methods  

NASA Astrophysics Data System (ADS)

This paper addresses the problem of 3D surface scan refinement, which is desirable because noise, outliers, and missing measurements are present in 3D surfaces obtained with a laser scanner. We present a novel algorithm for the fusion of absolute laser scanner depth profiles and photometrically estimated surface normal data, which yields a noise-reduced and highly detailed depth profile with large-scale shape robustness. In contrast to other approaches published in the literature, the presented algorithm (1) handles non-Lambertian surfaces, (2) simultaneously computes the surface reflectance (i.e. BRDF) parameters required for 3D reconstruction, (3) models pixelwise incident light and viewing directions, and (4) accounts for interreflections. The algorithm as such relies on the minimization of a three-component error term, which penalizes intensity deviations, integrability deviations, and deviations from the known large-scale surface shape. The solution of the error minimization is obtained iteratively based on a calculus of variations. BRDF parameters are estimated by initially reducing and then iteratively refining the optical resolution, which provides the required robust data basis. The 3D reconstruction of concave surface regions affected by interreflections is improved by compensating for global illumination in the image data. The algorithm is evaluated on eight objects with varying albedos and reflectance behaviors (diffuse, specular, metallic). The qualitative evaluation shows a removal of outliers and a strong reduction of noise, while the large-scale shape is preserved. Fine surface details, which were previously not contained in the surface scans, are incorporated through the use of image data. The algorithm is evaluated with respect to its absolute accuracy using two caliper objects of known shape, and on synthetically generated data.
The beneficial effect of interreflection compensation on the reconstruction accuracy is evaluated quantitatively in a Photometric Stereo framework.

Herbort, Steffen; Gerken, Britta; Schugk, Daniel; Wöhler, Christian

2013-10-01

156

Optimization of the open-loop liquid crystal adaptive optics retinal imaging system  

NASA Astrophysics Data System (ADS)

An open-loop adaptive optics (AO) system for retinal imaging was constructed using a liquid crystal spatial light modulator (LC-SLM) as the wavefront compensator. Due to the dispersion of the LC-SLM, there was only one illumination source for both aberration detection and retinal imaging in this system. To increase the field of view (FOV) for retinal imaging, a modified mechanical shutter was integrated into the illumination channel to control the size of the illumination spot on the fundus. The AO loop was operated in a pulsing mode, and the fundus was illuminated twice by two laser impulses in a single AO correction loop. As a result, the FOV for retinal imaging was increased to 1.7 deg without compromising the aberration detection accuracy. The correction precision of the open-loop AO system was evaluated in a closed-loop configuration; the residual error is approximately 0.0909λ (root-mean-square, RMS), and the Strehl ratio reaches 0.7217. Two subjects with different degrees of myopia (-3 D and -5 D) were tested. High-resolution images of capillaries and photoreceptors were obtained.

Kong, Ningning; Li, Chao; Xia, Mingliang; Li, Dayu; Qi, Yue; Xuan, Li

2012-02-01

157

An adaptive-optics scanning laser ophthalmoscope for imaging murine retinal microstructure  

Microsoft Academic Search

In vivo retinal imaging is an outstanding tool to observe biological processes unfold in real-time. The ability to image microstructure in vivo can greatly enhance our understanding of function in retinal microanatomy under normal conditions and in disease. Transgenic mice are frequently used for mouse models of retinal diseases. However, commercially available retinal imaging instruments lack the optical resolution and

Clemens Alt; David P. Biss; Nadja Tajouri; Tatjana C. Jakobs; Charles P. Lin

2010-01-01

158

Improving 3D Wavelet-Based Compression of Hyperspectral Images  

NASA Technical Reports Server (NTRS)

Two methods of increasing the effectiveness of three-dimensional (3D) wavelet-based compression of hyperspectral images have been developed. (As used here, images signifies both images and digital data representing images.) The methods are oriented toward reducing or eliminating detrimental effects of a phenomenon, referred to as spectral ringing, that is described below. In 3D wavelet-based compression, an image is represented by a multiresolution wavelet decomposition consisting of several subbands obtained by applying wavelet transforms in the two spatial dimensions corresponding to the two spatial coordinate axes of the image plane, and by applying wavelet transforms in the spectral dimension. Spectral ringing is named after the more familiar spatial ringing (spurious spatial oscillations) that can be seen parallel to and near edges in ordinary images reconstructed from compressed data. These ringing phenomena are attributable to effects of quantization. In hyperspectral data, the individual spectral bands play the role of edges, causing spurious oscillations to occur in the spectral dimension. In the absence of such corrective measures as the present two methods, spectral ringing can manifest itself as systematic biases in some reconstructed spectral bands and can reduce the effectiveness of compression of spatially-low-pass subbands. One of the two methods is denoted mean subtraction. The basic idea of this method is to subtract mean values from spatial planes of spatially low-pass subbands prior to encoding, because (a) such spatial planes often have mean values that are far from zero and (b) zero-mean data are better suited for compression by methods that are effective for subbands of two-dimensional (2D) images. In this method, after the 3D wavelet decomposition is performed, mean values are computed for and subtracted from each spatial plane of each spatially-low-pass subband. 
The resulting data are converted to sign-magnitude form and compressed in a manner similar to that of a baseline hyperspectral-image-compression method. The mean values are encoded in the compressed bit stream and added back to the data at the appropriate decompression step. The overhead incurred by encoding the mean values (only a few bits per spectral band) is negligible with respect to the huge size of a typical hyperspectral data set. The other method is denoted modified decomposition. This method is so named because it involves a modified version of a commonly used multiresolution wavelet decomposition, known in the art as the 3D Mallat decomposition, in which (a) the first of multiple stages of a 3D wavelet transform is applied to the entire dataset and (b) subsequent stages are applied only to the horizontally-, vertically-, and spectrally-low-pass subband from the preceding stage. In the modified decomposition, in stages after the first, not only is the spatially-low-pass, spectrally-low-pass subband further decomposed, but spatially-low-pass, spectrally-high-pass subbands are also further decomposed spatially. Either method can be used alone to improve the quality of a reconstructed image (see figure). Alternatively, the two methods can be combined by first performing the modified decomposition, then subtracting the mean values from spatial planes of spatially-low-pass subbands.
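The mean-subtraction idea described in this record can be illustrated with a short sketch; it assumes the subband is held as a NumPy array with the spectral axis first, which is an assumption for illustration, not the authors' data layout:

```python
import numpy as np


def mean_subtract(subband):
    """Remove the mean of each spatial plane of a spatially-low-pass subband.

    `subband` has shape (bands, rows, cols). Returns the zero-mean planes,
    which are better suited for 2D-style subband coding, together with the
    per-plane means that would be stored in the bit stream and added back
    at decompression.
    """
    means = subband.mean(axis=(1, 2), keepdims=True)  # one mean per spectral plane
    return subband - means, means.squeeze()
```
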

Klimesh, Matthew; Kiely, Aaron; Xie, Hua; Aranki, Nazeeh

2009-01-01

159

New method of retinal vessels diameter evaluation in images obtained during retinal tomography  

NASA Astrophysics Data System (ADS)

PURPOSE: To assess the accuracy and reproducibility of retinal vessel caliber measurement in Heidelberg retina tomograph (HRT II) images by a newly developed method. METHODS: 76 images of the optic nerve head were obtained from 76 eyes. Eight vessel diameters were measured in each case in the area of 0.5 to 1.0 disc diameter from the optic disc margin. The window for "interactive measurements" was used to determine the three-dimensional coordinates (x, y, z) of each vessel diameter. The diameter of each vessel was calculated according to the Pythagorean theorem (the value of the "z" coordinate remained unchanged). RESULTS: The diameter of retinal arterioles varied from 55.0 to 106.5 µm. The diameter of retinal venules ranged from 68.9 to 140.1 µm. The standard deviation ranged from 0.6 to 16 µm. The mean arteriole/venule ratio was 0.702 +/- 0.039. CONCLUSIONS: Measurement of retinal vessel diameters in images obtained during retinal tomography is exact and informative. The described method is a unique way of measuring retinal vessel caliber in absolute values.
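The diameter computation described here (Pythagorean theorem on the 3D coordinates of two vessel-edge points) amounts to a Euclidean distance; a minimal sketch, with the exact treatment of the z coordinate assumed rather than taken from the paper:

```python
import math


def vessel_diameter(p1, p2):
    """Vessel caliber from two 3D edge points (x, y, z), e.g. picked in an
    interactive-measurement window; Pythagorean theorem applied to the
    coordinate differences."""
    dx, dy, dz = (a - b for a, b in zip(p1, p2))
    return math.sqrt(dx * dx + dy * dy + dz * dz)
```
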

Astakhov, Yury S.; Akopov, Evgeny L.

2005-04-01

160

Estimation of regional lung expansion via 3D image registration  

NASA Astrophysics Data System (ADS)

A method is described to estimate regional lung expansion and related biomechanical parameters using multiple CT images of the lungs, acquired at different inflation levels. In this study, the lungs of two sheep were imaged with a multi-detector row CT at different lung inflations in the prone and supine positions. Using the lung surfaces and the airway branch points for guidance, a 3D inverse consistent image registration procedure was used to match different lung volumes at each orientation. The registration was validated using a set of implanted metal markers. After registration, the Jacobian of the deformation field was computed to express regional expansion or contraction. Regional lung expansion at different pressures and orientations is compared.
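Computing the Jacobian of a deformation field, as this record describes, can be sketched with finite differences; this is a generic illustration of the quantity (determinant > 1 means local expansion, < 1 contraction), not the registration pipeline used in the study:

```python
import numpy as np


def jacobian_determinant(disp):
    """Pointwise Jacobian determinant of a 3D deformation field.

    `disp` has shape (3, nx, ny, nz): displacement (in voxels) along each
    axis. The deformation gradient is identity plus the displacement
    gradient, estimated here by central finite differences.
    """
    grads = np.stack([np.stack(np.gradient(disp[i]), axis=0) for i in range(3)])
    eye = np.eye(3).reshape(3, 3, 1, 1, 1)
    F = eye + grads  # deformation gradient at every voxel
    # move the 3x3 matrix axes last so np.linalg.det broadcasts over voxels
    return np.linalg.det(np.moveaxis(F, (0, 1), (-2, -1)))
```
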

Pan, Yan; Kumar, Dinesh; Hoffman, Eric A.; Christensen, Gary E.; McLennan, Geoffrey; Song, Joo Hyun; Ross, Alan; Simon, Brett A.; Reinhardt, Joseph M.

2005-04-01

161

3D Wavelet Subbands Mixing for Image Denoising  

PubMed Central

A critical issue in image restoration is the problem of noise removal while keeping the integrity of relevant image information. The method proposed in this paper is a fully automatic 3D blockwise version of the nonlocal (NL) means filter with wavelet subbands mixing. The proposed wavelet subbands mixing is based on a multiresolution approach for improving the quality of the image denoising filter. Quantitative validation was carried out on synthetic datasets generated with the BrainWeb simulator. The results show that our NL-means filter with wavelet subbands mixing outperforms the classical implementation of the NL-means filter in terms of denoising quality and computation time. Comparison with well-established methods, such as nonlinear diffusion filtering and total variation minimization, shows that the proposed NL-means filter produces better denoising results. Finally, qualitative results on real data are presented. PMID:18431448
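The core of the NL-means filter that this record builds on is a patch-similarity weight; a minimal sketch of that weight alone (the blockwise 3D machinery and the wavelet subbands mixing are omitted):

```python
import numpy as np


def nlmeans_weight(patch_a, patch_b, h):
    """NL-means weight for a candidate patch: decays exponentially with the
    mean squared intensity distance between the two patches, with `h`
    controlling the decay (larger h -> more smoothing)."""
    d2 = np.mean((patch_a - patch_b) ** 2)
    return np.exp(-d2 / (h * h))
```
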

Coupé, Pierrick; Hellier, Pierre; Prima, Sylvain; Kervrann, Charles; Barillot, Christian

2008-01-01

162

3D wavelet subbands mixing for image denoising.  

PubMed

A critical issue in image restoration is the problem of noise removal while keeping the integrity of relevant image information. The method proposed in this paper is a fully automatic 3D blockwise version of the nonlocal (NL) means filter with wavelet subbands mixing. The proposed wavelet subbands mixing is based on a multiresolution approach for improving the quality of the image denoising filter. Quantitative validation was carried out on synthetic datasets generated with the BrainWeb simulator. The results show that our NL-means filter with wavelet subbands mixing outperforms the classical implementation of the NL-means filter in terms of denoising quality and computation time. Comparison with well-established methods, such as nonlinear diffusion filtering and total variation minimization, shows that the proposed NL-means filter produces better denoising results. Finally, qualitative results on real data are presented. PMID:18431448

Coupé, Pierrick; Hellier, Pierre; Prima, Sylvain; Kervrann, Charles; Barillot, Christian

2008-01-01

163

Automated 3D segmentation of intraretinal layers from optic nerve head optical coherence tomography images  

NASA Astrophysics Data System (ADS)

Optical coherence tomography (OCT), being a noninvasive imaging modality, has begun to find vast use in the diagnosis and management of ocular diseases such as glaucoma, in which the retinal nerve fiber layer (RNFL) is known to thin. Furthermore, the recent availability of considerably larger volumetric data with spectral-domain OCT has increased the need for new processing techniques. In this paper, we present an automated 3-D graph-theoretic approach for the segmentation of 7 surfaces (6 layers) of the retina from 3-D spectral-domain OCT images centered on the optic nerve head (ONH). The multiple surfaces are detected simultaneously through the computation of a minimum-cost closed set in a vertex-weighted graph constructed using edge/regional information, subject to a priori determined varying surface interaction and smoothness constraints. The method also addresses the challenges posed by the presence of large blood vessels and the optic disc. The algorithm was compared to the average manual tracings of two observers on a total of 15 volumetric scans, and the border positioning error was found to be 7.25 +/- 1.08 µm and 8.94 +/- 3.76 µm for the normal and glaucomatous eyes, respectively. The RNFL thickness was also computed for 26 normal and 70 glaucomatous scans, where the glaucomatous eyes showed significant thinning (p < 0.01; mean thickness 73.7 +/- 32.7 µm in normal eyes versus 60.4 +/- 25.2 µm in glaucomatous eyes).

Antony, Bhavna J.; Abràmoff, Michael D.; Lee, Kyungmoo; Sonkova, Pavlina; Gupta, Priya; Kwon, Young; Niemeijer, Meindert; Hu, Zhihong; Garvin, Mona K.

2010-03-01

164

Task-specific comparison of 3D image registration methods  

NASA Astrophysics Data System (ADS)

We present a new class of approaches for rigid-body registration and their evaluation in studying multiple sclerosis via multiprotocol MRI. Two pairs of rigid-body registration algorithms were implemented, using cross-correlation and mutual information, operating on the original gray-level images and on the intermediate images resulting from our new scale-based method. In the scale image, every voxel has the local scale value assigned to it, defined as the radius of the largest sphere centered at the voxel with homogeneous intensities. 3D data of the head were acquired from 10 MS patients using 6 MRI protocols. Images in some of the protocols had been acquired in registration. The co-registered pairs were used as ground truth. Accuracy and consistency of the 4 registration methods were measured within and between protocols for known amounts of misregistration. Our analysis indicates that there is no single best method: for medium and large misregistrations, methods using mutual information give the best results, while for small misregistrations and for the consistency tests, correlation methods using the original gray-level images perform best. We have previously demonstrated the use of local scale information in fuzzy connectedness segmentation and image filtering. Scale may also have considerable potential for image registration, as suggested by this work.
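
A minimal histogram-based mutual information measure, the kind of similarity criterion used by the mutual-information-based methods compared above, might look as follows. This is an illustrative sketch, not the authors' implementation; the function name and bin count are my choices.

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Histogram-based mutual information between two images (flattened);
    higher values indicate a stronger statistical dependence, i.e. better
    alignment when used as a registration criterion."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p = joint / joint.sum()                  # joint probability estimate
    px = p.sum(axis=1, keepdims=True)        # marginal of a
    py = p.sum(axis=0, keepdims=True)        # marginal of b
    nz = p > 0                               # avoid log(0)
    return float(np.sum(p[nz] * np.log(p[nz] / (px @ py)[nz])))
```

An image compared with itself yields far higher mutual information than the same image compared with a random permutation of its values, which is the property registration exploits.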

Nyul, Laszlo G.; Udupa, Jayaram K.; Saha, Punam K.

2001-07-01

165

3D ROC Analysis for Medical Imaging Diagnosis.  

PubMed

Receiver operating characteristic (ROC) analysis has been widely used as a performance evaluation tool to measure the effectiveness of medical modalities. It is derived from standard detection theory, with false alarm and detection power interpreted as false positive (FP) and true positive (TP), respectively, in terms of medical diagnosis. The ROC curve is plotted as TP versus FP via hard decisions. This paper presents a three-dimensional (3D) ROC analysis which extends the traditional two-dimensional (2D) ROC analysis by including a threshold parameter in a third dimension resulting from soft decisions (SD). As a result, a 3D ROC curve can be plotted based on three parameters: TP, FP, and SD. By virtue of such a 3D ROC curve, three two-dimensional (2D) ROC curves can be derived, one of which is the traditional 2D ROC curve of TP versus FP with SD reduced to a hard decision. In order to illustrate its utility in medical diagnosis, its application to magnetic resonance (MR) image classification is demonstrated. PMID:17282027
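
The construction described above, sweeping the soft-decision threshold and recording (TP, FP, SD) triplets as points of the 3D curve, can be sketched as follows. The function name and example values are mine.

```python
import numpy as np

def roc_3d(scores, labels, thresholds):
    """For each soft-decision threshold, compute the (TP rate, FP rate)
    pair; keeping the threshold itself as a third coordinate yields the
    3D ROC curve. Projecting out the threshold recovers the usual 2D ROC."""
    labels = np.asarray(labels, bool)
    scores = np.asarray(scores, float)
    pts = []
    for t in thresholds:
        pred = scores >= t                              # hard decision at t
        tp = np.mean(pred[labels]) if labels.any() else 0.0
        fp = np.mean(pred[~labels]) if (~labels).any() else 0.0
        pts.append((tp, fp, t))
    return pts
```

For a perfectly separable toy example, a threshold between the two score groups gives a TP rate of 1 at an FP rate of 0.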

Wang, Su; Chang, C-I; Yang, Sheng-Chih; Hsu, Giu-Cheng; Hsu, Hsian-He; Chung, Pau-Choo; Guo, Shu-Mei; Lee, San-Kan

2005-01-01

166

3D Multispectral Light Propagation Model For Subcutaneous Veins Imaging  

SciTech Connect

In this paper, we describe a new 3D light propagation model aimed at understanding the effects of various physiological properties on subcutaneous vein imaging. In particular, we build upon the well-known MCML (Monte Carlo Multi Layer) code and present a tissue model that improves upon the current state of the art by: incorporating physiological variation, such as melanin concentration, fat content, and layer thickness; including veins of varying depth and diameter; using curved surfaces from real arm shapes; and modeling the vessel wall interface. We describe our model, present results from the Monte Carlo modeling, and compare these results with those obtained with other Monte Carlo methods.
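
A full MCML-style simulation tracks photon scattering, layer boundaries, and refractive index mismatches. The sketch below illustrates only the simplest Monte Carlo ingredient, absorption of straight-line photons crossing stacked tissue layers via Beer-Lambert attenuation. It is a toy stand-in, not the authors' model, and all names and parameter values are mine.

```python
import numpy as np

def photon_survival(layer_mu_a, layer_thickness, n_photons, rng):
    """Fraction of straight-line photons that emerge from a stack of
    layers with absorption coefficients `layer_mu_a` (1/cm) and
    thicknesses `layer_thickness` (cm): each photon survives with
    probability exp(-total optical depth)."""
    total_od = sum(mu * t for mu, t in zip(layer_mu_a, layer_thickness))
    survive = rng.random(n_photons) < np.exp(-total_od)  # Monte Carlo draw
    return survive.mean()
```

With zero absorption every photon survives; with very strong absorption essentially none do, matching the analytic Beer-Lambert limit.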

Paquit, Vincent C. [ORNL]; Price, Jeffery R. [ORNL]; Meriaudeau, Fabrice [ORNL]; Tobin, Kenneth William, Jr. [ORNL]

2008-01-01

167

3D laser optoacoustic ultrasonic imaging system for preclinical research  

NASA Astrophysics Data System (ADS)

In this work, we introduce a novel three-dimensional imaging system for in vivo high-resolution anatomical and functional whole-body visualization of small animal models, developed for preclinical and other types of biomedical research. The system (LOUIS-3DM) combines multi-wavelength optoacoustic tomography and ultrawide-band laser ultrasound tomography to obtain coregistered maps of tissue optical absorption and acoustic properties, displayed within the skin outline of the studied animal. The most promising applications of LOUIS-3DM include 3D angiography, cancer research, and longitudinal studies of the biological distribution of optoacoustic contrast agents (carbon nanotubes, metal plasmonic nanoparticles, etc.).

Ermilov, Sergey A.; Conjusteau, André; Hernandez, Travis; Su, Richard; Nadvoretskiy, Vyacheslav; Tsyboulski, Dmitri; Anis, Fatima; Anastasio, Mark A.; Oraevsky, Alexander A.

2013-03-01

168

3D photoacoustic imaging of a moving target  

NASA Astrophysics Data System (ADS)

We have developed a fast 3D photoacoustic imaging system based on a sparse array of ultrasound detectors and iterative image reconstruction. To investigate the high frame-rate capabilities of our system in the context of rotational motion, flow, and spectroscopy, we performed high frame-rate imaging on a series of targets, including a rotating graphite rod, a bolus of methylene blue flowing through a tube, and hyperspectral imaging of a tube filled with methylene blue under a no-flow condition. Our frame rate for image acquisition was 10 Hz, which was limited by the laser repetition rate. We were able to track the rotation of the rod and accurately estimate its rotational velocity of 0.33 rotations per second. The flow of contrast in the tube, at a flow rate of 180 µL/min, was also well depicted, and quantitative analysis suggested a potential method for estimating flow velocity from such measurements. The spectrum obtained was not quantitatively accurate, but it depicted the spectral absorption signature of methylene blue, which may be sufficient for identification purposes. These preliminary results suggest that our high frame-rate photoacoustic imaging system could be used for identifying contrast agents and monitoring kinetics as an agent propagates through specific, simple structures such as blood vessels.

Ephrat, Pinhas; Roumeliotis, Michael; Prato, Frank S.; Carson, Jeffrey J. L.

2009-02-01

169

High-resolution 3-D refractive index imaging and Its biological applications  

E-print Network

This thesis presents a theory of 3-D imaging in partially coherent light under a non-paraxial condition. The transmission cross-coefficient (TCC) has been used to characterize partially coherent imaging in 2-D and 3-D ...

Sung, Yongjin

2011-01-01

170

3D Slicer as an Image Computing Platform for the Quantitative Imaging Network  

PubMed Central

Quantitative analysis has tremendous but mostly unrealized potential in healthcare to support objective and accurate interpretation of clinical imaging. In 2008, the National Cancer Institute began building the Quantitative Imaging Network (QIN) initiative with the goal of advancing quantitative imaging in the context of personalized therapy and evaluation of treatment response. Computerized analysis is an important component contributing to reproducibility and efficiency of the quantitative imaging techniques. The success of quantitative imaging is contingent on robust analysis methods and software tools to bring these methods from bench to bedside. 3D Slicer is a free open source software application for medical image computing. As a clinical research tool, 3D Slicer is similar to a radiology workstation that supports versatile visualizations but also provides advanced functionality such as automated segmentation and registration for a variety of application domains. Unlike a typical radiology workstation, 3D Slicer is free and is not tied to specific hardware. As a programming platform, 3D Slicer facilitates translation and evaluation of new quantitative methods by allowing the biomedical researcher to focus on the implementation of the algorithm, and providing abstractions for the common tasks of data communication, visualization and user interface development. Compared to other tools that provide aspects of this functionality, 3D Slicer is fully open source and can be readily extended and redistributed. In addition, 3D Slicer is designed to facilitate the development of new functionality in the form of 3D Slicer extensions. In this paper, we present an overview of 3D Slicer as a platform for prototyping, development and evaluation of image analysis tools for clinical research applications.
To illustrate the utility of the platform in the scope of QIN, we discuss several use cases of 3D Slicer by the existing QIN teams, and we elaborate on the future directions that can further facilitate development and validation of imaging biomarkers using 3D Slicer. PMID:22770690

Fedorov, Andriy; Beichel, Reinhard; Kalpathy-Cramer, Jayashree; Finet, Julien; Fillion-Robin, Jean-Christophe; Pujol, Sonia; Bauer, Christian; Jennings, Dominique; Fennessy, Fiona; Sonka, Milan; Buatti, John; Aylward, Stephen; Miller, James V.; Pieper, Steve; Kikinis, Ron

2012-01-01

171

Retinal image restoration by means of blind deconvolution  

NASA Astrophysics Data System (ADS)

Retinal imaging plays a key role in the diagnosis and management of ophthalmologic disorders, such as diabetic retinopathy, glaucoma, and age-related macular degeneration. Because of the acquisition process, retinal images often suffer from blurring and uneven illumination. This problem may seriously affect disease diagnosis and progression assessment. Here we present a method for color retinal image restoration by means of multichannel blind deconvolution. The method is applied to a pair of retinal images acquired within a lapse of time, ranging from several minutes to months. It consists of a series of preprocessing steps to adjust the images so they comply with the considered degradation model, followed by the estimation of the point-spread function and, ultimately, image deconvolution. The preprocessing is mainly composed of image registration, uneven illumination compensation, and segmentation of areas with structural changes. In addition, we have developed a procedure for the detection and visualization of structural changes. This enables the identification of subtle developments in the retina not caused by variation in illumination or blur. The method was tested on synthetic and real images. Encouraging experimental results show that the method is capable of significant restoration of degraded retinal images.
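
Once a point-spread function has been estimated, the final deconvolution step can be illustrated with a minimal 1-D Richardson-Lucy iteration. Richardson-Lucy is a standard deconvolution method chosen here purely for illustration; the paper's multichannel blind estimation of the PSF is not shown, and the function name is mine.

```python
import numpy as np

def richardson_lucy_1d(blurred, psf, iterations=30):
    """Minimal 1-D Richardson-Lucy deconvolution: iteratively refine an
    estimate so that, when re-blurred with the (known) PSF, it matches
    the observed signal. Preserves non-negativity by construction."""
    estimate = np.full_like(blurred, blurred.mean())  # flat initial guess
    psf_flipped = psf[::-1]
    for _ in range(iterations):
        conv = np.convolve(estimate, psf, mode="same")       # re-blur
        ratio = blurred / np.maximum(conv, 1e-12)            # mismatch
        estimate *= np.convolve(ratio, psf_flipped, mode="same")
    return estimate
```

Deconvolving a spike blurred by a box PSF concentrates the estimate back at the spike's location.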

Marrugo, Andrés G.; Šorel, Michal; Šroubek, Filip; Millán, María S.

2011-11-01

172

Retinal image restoration by means of blind deconvolution.  

PubMed

Retinal imaging plays a key role in the diagnosis and management of ophthalmologic disorders, such as diabetic retinopathy, glaucoma, and age-related macular degeneration. Because of the acquisition process, retinal images often suffer from blurring and uneven illumination. This problem may seriously affect disease diagnosis and progression assessment. Here we present a method for color retinal image restoration by means of multichannel blind deconvolution. The method is applied to a pair of retinal images acquired within a lapse of time, ranging from several minutes to months. It consists of a series of preprocessing steps to adjust the images so they comply with the considered degradation model, followed by the estimation of the point-spread function and, ultimately, image deconvolution. The preprocessing is mainly composed of image registration, uneven illumination compensation, and segmentation of areas with structural changes. In addition, we have developed a procedure for the detection and visualization of structural changes. This enables the identification of subtle developments in the retina not caused by variation in illumination or blur. The method was tested on synthetic and real images. Encouraging experimental results show that the method is capable of significant restoration of degraded retinal images. PMID:22112121

Marrugo, Andrés G.; Šorel, Michal; Šroubek, Filip; Millán, María S.

2011-11-01

173

Quantitative validation of 3D image registration techniques  

NASA Astrophysics Data System (ADS)

Multimodality images obtained from different medical imaging systems such as magnetic resonance (MR), computed tomography (CT), ultrasound (US), positron emission tomography (PET), single photon emission computed tomography (SPECT) provide largely complementary characteristic or diagnostic information. Therefore, it is an important research objective to 'fuse' or combine this complementary data into a composite form which would provide synergistic information about the objects under examination. An important first step in the use of complementary fused images is 3D image registration, where multi-modality images are brought into spatial alignment so that the point-to-point correspondence between image data sets is known. Current research in the field of multimodality image registration has resulted in the development and implementation of several different registration algorithms, each with its own set of requirements and parameters. Our research has focused on the development of a general paradigm for measuring, evaluating and comparing the performance of different registration algorithms. Rather than evaluating the results of one algorithm under a specific set of conditions, we suggest a general approach to validation using simulation experiments, where the exact spatial relationship between data sets is known, along with phantom data, to characterize the behavior of an algorithm via a set of quantitative image measurements. This behavior may then be related to the algorithm's performance with real patient data, where the exact spatial relationship between multimodality images is unknown. Current results indicate that our approach is general enough to apply to several different registration algorithms. Our methods are useful for understanding the different sources of registration error and for comparing the results between different algorithms.

Holton Tainter, Kerrie S.; Taneja, Udita; Robb, Richard A.

1995-05-01

174

3-D visualization and animation technologies in anatomical imaging  

PubMed Central

This paper explores a 3-D computer artist's approach to the creation of three-dimensional computer-generated imagery (CGI) derived from clinical scan data. Interpretation of scientific imagery, such as magnetic resonance imaging (MRI), is restricted to the eye of the trained medical practitioner in a clinical or scientific context. In the research work described here, MRI data are visualized and interpreted by a 3-D computer artist using the tools of the digital animator to navigate image complexity and widen interaction. In this process, the artefact moves across disciplines; it is no longer tethered to its diagnostic origins. It becomes an object that has visual attributes such as light, texture and composition, and a visual aesthetic of its own. The introduction of these visual attributes provides a platform for improved accessibility by a lay audience. The paper argues that this more artisan approach to clinical data visualization has a potential real-world application as a communicative tool for clinicians and patients during consultation. PMID:20002229

McGhee, John

2010-01-01

175

3D Tomographic imaging of colliding cylindrical blast waves  

NASA Astrophysics Data System (ADS)

The interaction of strong shocks and radiative blast waves is believed to give rise to the turbulent, knotted structures commonly observed in extended astrophysical objects. Modeling these systems is, however, extremely challenging due to the complex interplay between hydrodynamics, radiation, and atomic physics. As a result, we have been developing laboratory-scale blast wave collision experiments to provide high-quality data for code benchmarking and to improve our physical understanding. We report on experimental and numerical investigations of the collision dynamics of counter-propagating strong (> Mach 50) cylindrical thin-shelled blast waves driven by focusing intense laser pulses into an extended medium of atomic clusters. In our test system the blast wave collision creates strongly asymmetric electron density profiles, precluding the use of Abel inversion methods. In consequence we have employed a new tomographic imaging technique, allowing us to recover the full 3D, time-framed electron density distribution. Tomography and streaked Schlieren imaging enabled tracking of radial and longitudinal mass flow and the investigation of Mach stem formation as pairs of blast waves collided. We have compared our experimental system to numerical simulations by the 3D magnetoresistive hydrocode GORGON.

Smith, R. A.; Lazarus, J.; Hohenberger, M.; Robinson, J.; Marocchino, A.; Chittenden, J.; Dunne, M.; Moore, A.; Gumbrell, E.

2007-11-01

176

Compensation of log-compressed images for 3-D ultrasound.  

PubMed

In this study, a Bayesian approach was used for 3-D reconstruction in the presence of multiplicative noise and nonlinear compression of the ultrasound (US) data. Ultrasound images are often considered as being corrupted by multiplicative noise (speckle). Several statistical models have been developed to represent the US data. However, commercial US equipment performs a nonlinear image compression that reduces the dynamic range of the US signal for visualization purposes. This operation changes the distribution of the image pixels, preventing a straightforward application of the models. In this paper, the nonlinear compression is explicitly modeled and considered in the reconstruction process, where the speckle noise present in the radio frequency (RF) US data is modeled with a Rayleigh distribution. The results obtained by considering the compression of the US data are then compared with those obtained assuming no compression. It is shown that the estimation performed using the nonlinear log-compression model leads to better results than those obtained with the Rayleigh reconstruction method. The proposed algorithm is tested with synthetic and real data and the results are discussed. The results have shown an improvement in the reconstruction results when the compression operation is included in the image formation model, leading to sharper images with enhanced anatomical details. PMID:12659912
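
The image formation model described above, Rayleigh-distributed speckle followed by logarithmic compression of the envelope, can be sketched in the forward direction as follows. This is an illustrative toy of the forward model only (the paper's Bayesian reconstruction inverts it); the function names and parameter values are my own.

```python
import numpy as np

def rayleigh_envelope(true_intensity, rng):
    """Speckle model: the RF envelope follows a Rayleigh distribution
    whose scale is set by the underlying tissue reflectivity."""
    return rng.rayleigh(scale=np.sqrt(true_intensity))

def log_compress(envelope, a=20.0, b=0.0):
    """Nonlinear log compression of the kind scanners apply for display;
    it reduces dynamic range and changes the pixel distribution, which is
    why the reconstruction must model it explicitly."""
    return a * np.log(envelope + 1e-12) + b
```

The compression is monotone, so ordering of intensities is preserved even though the distribution changes shape.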

Sanches, João M.; Marques, Jorge S.

2003-02-01

177

Quantitative analysis of 3D coronary modeling in 3D rotational X-ray imaging  

Microsoft Academic Search

This paper describes an analysis of the accuracy of modeling approaches to coronary reconstruction based on two or more projections in a calibrated X-ray C-arm system. The results of the study give insight into the quantitative accuracy of the calculated 3D centerline points and the 3D cross-sectional areas of reconstructed objects as a function of the angular distance and

B. Movassaghi; V. Rasche; M. A. Viergever; W. Niessen

2002-01-01

178

High Resolution 3D Radar Imaging of Comet Interiors  

NASA Astrophysics Data System (ADS)

Knowing the interiors of comets and other primitive bodies is fundamental to our understanding of how planets formed. We have developed a Discovery-class mission formulation, Comet Radar Explorer (CORE), based on the use of previously flown planetary radar sounding techniques, with the goal of obtaining high resolution 3D images of the interior of a small primitive body. We focus on the Jupiter-Family Comets (JFCs) as these are among the most primitive bodies reachable by spacecraft. Scattered in from far beyond Neptune, they are ultimate targets of a cryogenic sample return mission according to the Decadal Survey. Other suitable targets include primitive NEOs, Main Belt Comets, and Jupiter Trojans. The approach is optimal for small icy bodies ~3-20 km diameter with spin periods faster than about 12 hours, since (a) navigation is relatively easy, (b) radar penetration is global for decameter wavelengths, and (c) repeated overlapping ground tracks are obtained. The science mission can be as short as ~1 month for a fast-rotating JFC. Bodies smaller than ~1 km can be globally imaged, but the navigation solutions are less accurate and the relative resolution is coarse. Larger comets are more interesting, but radar signal is unlikely to be reflected from depths greater than ~10 km. So, JFCs are excellent targets for a variety of reasons. We furthermore focus on the use of Solar Electric Propulsion (SEP) to rendezvous shortly after the comet's perihelion. This approach leaves us with ample power for science operations under dormant conditions beyond ~2-3 AU. This leads to a natural mission approach of distant observation, followed by closer inspection, terminated by a dedicated radar mapping orbit. Radar reflections are obtained from a polar orbit about the icy nucleus, which spins underneath. Echoes are obtained from a sounder operating at dual frequencies 5 and 15 MHz, with 1 and 10 MHz bandwidths respectively. 
The dense network of echoes is used to obtain global 3D images of interior structure to ~20 m, and to map dielectric properties (related to internal composition) to better than 200 m throughout. This is comparable in detail to modern 3D medical ultrasound, although we emphasize that the techniques are somewhat different. An interior mass distribution is obtained through spacecraft tracking, using data acquired during the close, quiet radar orbits. This is aligned with the radar-based images of the interior, and the shape model, to contribute to the multi-dimensional 3D global view. High-resolution visible imaging provides boundary conditions and geologic context to these interior views. An infrared spectroscopy and imaging campaign upon arrival reveals the time-evolving activity of the nucleus and the structure and composition of the inner coma, and the definition of surface units. CORE is designed to obtain a total view of a comet, from the coma to the active and evolving surface to the deep interior. Its primary science goal is to obtain clear images of internal structure and dielectric composition. These will reveal how the comet was formed, what it is made of, and how it 'works'. By making global yet detailed connections from interior to exterior, this knowledge will be an important complement to the Rosetta mission, and will lay the foundation for comet nucleus sample return by revealing the areas of shallow depth to 'bedrock', and relating accessible deposits to their originating provenances within the nucleus.

Asphaug, E. I.; Gim, Y.; Belton, M.; Brophy, J.; Weissman, P. R.; Heggy, E.

2012-12-01

179

A joint estimation detection of Glaucoma progression in 3D spectral domain optical coherence tomography optic nerve head images  

NASA Astrophysics Data System (ADS)

Glaucoma is an ocular disease characterized by distinctive changes in the optic nerve head (ONH) and visual field. Glaucoma can strike without symptoms and causes blindness if it remains untreated. Therefore, early disease detection is important so that treatment can be initiated and blindness prevented. In this context, important advances in technology for non-invasive imaging of the eye have been made, providing quantitative tools to measure structural changes in ONH topography, an essential element for glaucoma detection and monitoring. 3D spectral domain optical coherence tomography (SD-OCT), an optical imaging technique, has been commonly used to discriminate glaucomatous from healthy subjects. In this paper, we present a new framework for detection of glaucoma progression using 3D SD-OCT images. In contrast to previous works, which used the retinal nerve fiber layer (RNFL) thickness measurement provided by commercially available spectral-domain optical coherence tomographs, we consider the whole 3D volume for change detection. To integrate a priori knowledge, and in particular the spatial dependency between voxels in the change detection map, we propose the use of a Markov random field to handle this dependency. To accommodate the presence of false positive detections, the estimated change detection map is then used to classify a 3D SD-OCT image into the "non-progressing" and "progressing" glaucoma classes, based on a fuzzy logic classifier. We compared the diagnostic performance of the proposed framework to that of existing methods of progression detection.

Belghith, Akram; Bowd, Christopher; Weinreb, Robert N.; Zangwill, Linda M.

2014-03-01

180

3D Image Analysis of Geomaterials using Confocal Microscopy  

NASA Astrophysics Data System (ADS)

Confocal microscopy is one of the most significant advances in optical microscopy of the last century. It is widely used in the biological sciences, but its application to geomaterials lags due to a number of technical problems. Potentially the technique can perform non-invasive testing on a laser-illuminated sample that fluoresces, using a unique optical sectioning capability that rejects out-of-focus light reaching the confocal aperture. Fluorescence in geomaterials is commonly induced using epoxy doped with a fluorochrome that is impregnated into the sample to enable discrimination of various features such as void space or material boundaries. However, for many geomaterials, this method cannot be used because they do not naturally fluoresce and because epoxy cannot be impregnated into inaccessible parts of the sample due to lack of permeability. As a result, the confocal images of most geomaterials that have not been pre-processed with extensive sample preparation techniques are of poor quality and lack the image and edge contrast necessary to apply any commonly used segmentation technique for quantitative study of features such as vesicularity, internal structure, etc. In our present work, we are developing a methodology to conduct a quantitative 3D analysis of images of geomaterials collected using a confocal microscope with a minimal amount of prior sample preparation and no added fluorescence. Two sample geomaterials, a volcanic melt sample and a crystal chip containing fluid inclusions, are used to assess the feasibility of the method. A step-by-step process of image analysis includes application of image filtration to enhance the edges or material interfaces and is based on two segmentation techniques: geodesic active contours and region competition. Both techniques have been applied extensively to the analysis of medical MRI images to segment anatomical structures.
Preliminary analysis suggests that there is distortion in the shapes of the segmented vesicles, vapor bubbles, and void spaces due to the optical measurements, so corrective actions are being explored. This will establish a practical and reliable framework for an adaptive 3D image processing technique for the analysis of geomaterials using confocal microscopy.

Mulukutla, G.; Proussevitch, A.; Sahagian, D.

2009-05-01

181

Orthogonal moments for determining correspondence between vessel bifurcations for retinal image registration.  

PubMed

Retinal image registration is a necessary step in the diagnosis and monitoring of diabetic retinopathy (DR), one of the leading causes of blindness. Long-term diabetes affects the retinal blood vessels and capillaries, eventually causing blindness. This progressive damage to the retina, and the subsequent blindness, can be prevented by periodic retinal screening. The extent of damage caused by DR can be assessed by comparing retinal images captured during periodic screenings. During image acquisition at these screenings, translation, rotation, and scale (TRS) differences are introduced in the retinal images. Therefore, retinal image registration is an essential step in an automated system for screening, diagnosis, treatment, and evaluation of DR. This paper presents an algorithm for registration of retinal images using orthogonal moment invariants as features for determining the correspondence between dominant points (vessel bifurcations) in the reference and test retinal images. As orthogonal moments are invariant to TRS, moment-invariant features around a vessel bifurcation are unaltered by TRS and can be used to determine the correspondence between reference and test retinal images. The vessel bifurcation points are located in segmented, thinned (mono-pixel vessel width) retinal images and labeled in the corresponding grayscale retinal images. The correspondence between vessel bifurcations in the reference and test retinal images is established based on moment-invariant features. Further, the TRS in the test retinal image with respect to the reference retinal image is estimated using a similarity transformation. The test retinal image is aligned with the reference retinal image using the estimated registration parameters. The accuracy of registration is evaluated in terms of the mean error and standard deviation of the labeled vessel bifurcation points in the aligned images.
The experiments were carried out on the DRIVE, STARE, and VARIA databases and on a database provided by a local government hospital in Pune, India. The experimental results demonstrate the effectiveness of the proposed algorithm for registration of retinal images. PMID:25837489
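
The TRS-invariance property that such matching relies on can be illustrated with the first two Hu moment invariants. Note these are classical geometric moment invariants, not the orthogonal moments the paper uses, so this is only an analogy for how a moment-based descriptor stays constant when a patch is shifted; the function name is mine.

```python
import numpy as np

def hu_moments(patch):
    """First two Hu invariant moments of an image patch: computed from
    centroid-centred, normalised central moments, so they are unchanged
    by translation, rotation and (via the eta normalisation) scale."""
    h, w = patch.shape
    y, x = np.mgrid[:h, :w]
    m00 = patch.sum()
    cx, cy = (x * patch).sum() / m00, (y * patch).sum() / m00
    def mu(p, q):                      # central moment
        return (((x - cx) ** p) * ((y - cy) ** q) * patch).sum()
    def eta(p, q):                     # scale-normalised central moment
        return mu(p, q) / m00 ** (1 + (p + q) / 2)
    phi1 = eta(2, 0) + eta(0, 2)
    phi2 = (eta(2, 0) - eta(0, 2)) ** 2 + 4 * eta(1, 1) ** 2
    return phi1, phi2
```

Embedding the same small blob at two different positions in a larger image yields identical invariants, which is what makes such features usable for matching bifurcation neighbourhoods across images.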

Patankar, Sanika S; Kulkarni, Jayant V

2015-05-01

182

3D lung image retrieval using localized features  

NASA Astrophysics Data System (ADS)

The interpretation of high-resolution computed tomography (HRCT) images of the chest showing disorders of the lung tissue associated with interstitial lung diseases (ILDs) is time-consuming and requires experience. Whereas automatic detection and quantification of lung tissue patterns have shown promising results in several studies, their aid to clinicians is limited to the challenge of image interpretation, leaving the radiologists with the problem of the final histological diagnosis. Complementary to lung tissue categorization, providing visually similar cases using content-based image retrieval (CBIR) is in line with the clinical workflow of the radiologists. In a preliminary study, a Euclidean distance based on volume percentages of five lung tissue types was used as the inter-case distance for CBIR. That study showed the feasibility of retrieving similar histological diagnoses of ILD based on visual content, although no localization information was used for CBIR. However, it was not possible to retrieve and show similar images with pathology appearing at a particular lung position. In this work, a 3D localization system based on lung anatomy is used to localize the low-level features used for CBIR. Compared to our previous study, the introduction of localization features improves early precision for some histological diagnoses, especially when the region of appearance of lung tissue disorders is important.
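
The retrieval scheme of the preliminary study, ranking cases by Euclidean distance between per-case feature vectors, is simple to sketch. The feature contents here are placeholders (the paper uses volume percentages of lung tissue types, optionally localized), and the function name is mine.

```python
import numpy as np

def retrieve_similar(query_vec, case_vecs, k=3):
    """Rank stored cases by Euclidean distance between feature vectors
    (e.g. per-region volume percentages of lung tissue types) and return
    the indices of the k nearest cases, closest first."""
    d = np.linalg.norm(np.asarray(case_vecs, float) - np.asarray(query_vec, float), axis=1)
    return list(np.argsort(d)[:k])
```

Localized features simply enlarge the feature vector with per-region entries, so pathology position influences the distance without changing the retrieval machinery.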

Depeursinge, Adrien; Zrimec, Tatjana; Busayarat, Sata; Müller, Henning

2011-03-01

183

Fast 3-D Tomographic Microwave Imaging for Breast Cancer Detection  

PubMed Central

Microwave breast imaging (using electromagnetic waves of frequencies around 1 GHz) has mostly remained at the research level for the past decade, gaining little clinical acceptance. The major hurdles limiting patient use are both at the hardware level (challenges in collecting accurate and noncorrupted data) and software level (often plagued by unrealistic reconstruction times in the tens of hours). In this paper we report improvements that address both issues. First, the hardware is able to measure signals down to levels compatible with sub-centimeter image resolution while keeping an exam time under 2 min. Second, the software overcomes the enormous time burden and produces similarly accurate images in less than 20 min. The combination of the new hardware and software allows us to produce and report here the first clinical 3-D microwave tomographic images of the breast. Two clinical examples are selected out of 400+ exams conducted at the Dartmouth Hitchcock Medical Center (Lebanon, NH). The first example demonstrates the potential usefulness of our system for breast cancer screening while the second example focuses on therapy monitoring. PMID:22562726

Meaney, Paul M.; Kaufman, Peter A.; diFlorio-Alexander, Roberta M.; Paulsen, Keith D.

2013-01-01

184

Imaging the 3D geometry of pseudotachylyte-bearing faults  

NASA Astrophysics Data System (ADS)

Dynamic friction experiments in granitoid or gabbroic rocks that achieve earthquake slip velocities reveal significant weakening by melt-lubrication of the sliding surfaces. Extrapolation of these experimental results to seismic source depths (> 7 km) suggests that the slip weakening distance (Dw) over which this transition occurs is < 10 cm. The physics of this lubrication in the presence of a fluid (melt) is controlled by surface micro-topography. In order to characterize fault surface microroughness and its evolution during dynamic slip events on natural faults, we have undertaken an analysis of three-dimensional (3D) fault surface microtopography and its causes on a suite of pseudotachylyte-bearing fault strands from the Gole Larghe fault zone, Italy. The solidification of frictional melt soon after seismic slip ceases "freezes in" earthquake source geometries; however, it also precludes the development of the extensive fault surface exposures that have enabled direct studies of fault surface roughness. We have overcome this difficulty by imaging the intact 3D geometry of the fault using high-resolution X-ray computed tomography (CT). We collected a suite of 2-3.5 cm diameter cores (2-8 cm long) from individual faults within the Gole Larghe fault zone with a range of orientations (+/- 45 degrees from average strike) and slip magnitudes (0-1 m). Samples were scanned at the University of Texas High Resolution X-ray CT Facility, using an Xradia MicroCT scanner with a 70 kV X-ray source. Individual voxels (3D pixels) are ~36 μm across. Fault geometry is thus imaged over ~4 orders of magnitude, from the micron scale up to ~Dw. Pseudotachylyte-bearing fault zones are imaged as tabular bodies of intermediate X-ray attenuation crosscutting high attenuation biotite and low attenuation quartz and feldspar of the surrounding tonalite.
We extract the fault surfaces (contact between the pseudotachylyte bearing fault zone and the wall rock) using integrated manual mapping, automated edge detection, and statistical evaluation. This approach results in a digital elevation model for each side of the fault zone that we use to quantify melt thickness and volume as well as surface microroughness and explore the relationship between these properties and the geometry, slip magnitude, and wall rock mineralogy of the fault.

Resor, Phil; Shervais, Katherine

2013-04-01

185

[Content-based automatic retinal image recognition and retrieval system].  

PubMed

This paper presents a prototype system for classifying and retrieving retinal images automatically. Using content-based image retrieval (CBIR) technology, a method to represent retinal characteristics is proposed that mixes the fundus image color (gray) histogram with bright and dark region features and other local comprehensive information. The method uses kernel principal component analysis (KPCA) to further extract nonlinear features and reduce dimensionality. For similarity measurement, it puts forward a method using a support vector machine (SVM) with a KPCA-weighted distance. Testing the prototype system on 300 randomly selected samples, 32 images were retrieved incorrectly, giving a retrieval rate of 89.33%. This shows that the system's identification rate for retinal images is high. PMID:23858770
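As a rough sketch of the nonlinear feature extraction step, a minimal RBF-kernel PCA can be written directly with NumPy. The kernel choice, `gamma`, component count, and random data below are assumptions for illustration only, not the paper's configuration, and the SVM-weighted distance stage is omitted:

```python
import numpy as np

def kpca(X, n_components=2, gamma=0.1):
    """Minimal RBF-kernel PCA sketch: project X onto its top nonlinear components."""
    sq = np.sum(X**2, axis=1)
    K = np.exp(-gamma * (sq[:, None] + sq[None, :] - 2 * X @ X.T))  # RBF Gram matrix
    n = K.shape[0]
    one = np.ones((n, n)) / n
    Kc = K - one @ K - K @ one + one @ K @ one  # center in feature space
    vals, vecs = np.linalg.eigh(Kc)             # ascending eigenvalues
    idx = np.argsort(vals)[::-1][:n_components]
    # Projections of the training points onto the leading components.
    return Kc @ vecs[:, idx] / np.sqrt(np.maximum(vals[idx], 1e-12))

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 8))   # 40 hypothetical feature vectors of length 8
Z = kpca(X, n_components=2)
print(Z.shape)  # (40, 2)
```

Note that the reported retrieval rate is consistent with the stated counts: (300 − 32) / 300 ≈ 89.33%.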

Zhang, Jiumei; Du, Jianjun; Cheng, Xia; Cao, Hongliang

2013-04-01

186

3D Chemical and Elemental Imaging by STXM Spectrotomography  

NASA Astrophysics Data System (ADS)

Spectrotomography based on the scanning transmission x-ray microscope (STXM) at the 10ID-1 spectromicroscopy beamline of the Canadian Light Source was used to study two selected unicellular microorganisms. Spatial distributions of sulphur globules, calcium, protein, and polysaccharide in sulphur-metabolizing bacteria (Allochromatium vinosum) were determined at the S 2p, C 1s, and Ca 2p edges. 3D chemical mapping showed that the sulphur globules are located inside the bacteria, with a strong spatial correlation with calcium ions (most probably calcium carbonate from the medium, whose distribution and localization within the cell STXM makes visible, which is of great interest to biologists) and polysaccharide-rich polymers, suggesting an influence of the organic components on the formation of the sulphur and calcium deposits. A second study investigated copper accumulation in yeast cells (Saccharomyces cerevisiae) treated with copper sulphate. 3D elemental imaging at the Cu 2p edge showed that Cu(II) is reduced to Cu(I) on the yeast cell wall. A novel needle-like wet cell sample holder for STXM spectrotomography studies of fully hydrated samples is also discussed.

Wang, J.; Hitchcock, A. P.; Karunakaran, C.; Prange, A.; Franz, B.; Harkness, T.; Lu, Y.; Obst, M.; Hormes, J.

2011-09-01

187

3D ranging with a single-photon imaging array  

NASA Astrophysics Data System (ADS)

Several applications require systems for 3D ranging acquisition where both high frame rate and high sensitivity (for either very dark environments or opaque objects) are a must. We exploited a monolithic chip with 32 x 32 Single-Photon Avalanche Diode smart pixels for 3D ranging applications based on an Indirect Time-of-Flight (iTOF) technique. The scene is illuminated by a sinusoidally modulated LED, and the reflected light is acquired by the imager in different time slots in order to measure the phase delay of the outgoing vs. incoming signal, hence computing the distance between the sensor and objects in the scene. All 1024 array pixels are synchronously enabled by a global gate signal, which allows photon counting in well-defined time slots within each frame. The frame duration is set in accordance with the desired SNR. We report on measurements performed on chips fabricated in a standard high-voltage 0.35 µm CMOS technology, which feature 40% photon detection efficiency at 450 nm and 20% at 650 nm. The single-photon sensitivity allowed the use of just a few LEDs at 650 nm and 20 MHz for acquiring a scene with a maximum distance of 7.5 m, with better than 10 cm distance resolution and frame rates higher than 50 frames/s.
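The iTOF geometry can be checked with the standard phase-to-distance relation d = c·φ / (4π·f_mod): at the stated 20 MHz modulation, the unambiguous range works out to the reported 7.5 m. A minimal sketch (the function names are mine, not the paper's):

```python
import math

C = 299_792_458.0  # speed of light, m/s

def itof_distance(phase_rad, f_mod_hz):
    """Indirect time-of-flight: distance from the phase delay of a modulated source.
    Light travels out and back, hence the factor 2 hidden in the 4*pi."""
    return C * phase_rad / (4 * math.pi * f_mod_hz)

def unambiguous_range(f_mod_hz):
    """Maximum distance before the phase wraps (2*pi ambiguity)."""
    return C / (2 * f_mod_hz)

f = 20e6  # 20 MHz modulation, as in the abstract
print(round(unambiguous_range(f), 2))       # 7.49 m, matching the reported 7.5 m
print(round(itof_distance(math.pi, f), 2))  # a half-range target: 3.75 m
```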

Bellisai, Simone; Guerrieri, Fabrizio; Tisa, Simone; Zappa, Franco

2011-03-01

188

3D painting documentation: evaluation of conservation conditions with 3D imaging and ranging techniques  

NASA Astrophysics Data System (ADS)

The monitoring of paintings, both on canvas and on wooden supports, is a crucial issue for the preservation and conservation of this kind of artwork. Many environmental factors (e.g. humidity, temperature, illumination, etc.), as well as bad conservation practices (e.g. wrong restorations, inappropriate locations, etc.), can compromise the material conditions over time and deteriorate an artwork. The article presents an on-going project realized by a multidisciplinary team composed of the ENEA UTICT 3D GraphLab, the 3D Optical Metrology Unit of the Bruno Kessler Foundation and the Soprintendenza per i Beni Storico Artistici ed Etnoantropologici of Bologna (Italy). The goal of the project is the multi-temporal 3D documentation and monitoring of paintings - currently in a poor state of conservation - and the provision of metrics to quantify the deformations and damage.

Abate, D.; Menna, F.; Remondino, F.; Gattari, M. G.

2014-06-01

189

RECONSTRUCTION AND REPRESENTATION OF REMAINS IN THE MOJOS PLAIN BY 3D IMAGES  

Microsoft Academic Search

In this study, a 3D representation was produced with 3D-graphics software from local aerial photographs. Wireframes were made by importing the field survey data of the remains and projecting the images onto the polygons. Then, Boolean operations were performed to fit the image data onto the wireframes. Finally, 3D images were obtained after rendering. The 3D representation

Kazuki ISHII; Susumu OGAWA; Naoki OKADA; Kota IMAI

190

Intra-operative 3D pose estimation of fractured bone segments for image guided orthopedic surgery  

Microsoft Academic Search

The widespread adoption of minimally invasive surgical techniques has driven the need for 3D intra-operative image guidance. Hence, 3D pose estimation (position and orientation), performed through the registration of pre-operatively prepared 3D anatomical data to intra-operative 2D fluoroscopy images, is one of the main research areas of image-guided orthopedic surgery. The goal of this 2D-3D registration is to

P. Gamage; S. Q. Xie; P. Delmas; P. Xu; S. Mukherjee

2009-01-01

191

An Efficient 3D Imaging using Structured Light Systems  

NASA Astrophysics Data System (ADS)

Structured light 3D surface imaging has been crucial in the fields of image processing and computer vision, particularly for reconstruction, recognition, and other tasks. In this dissertation, we propose approaches to the development of an efficient 3D surface imaging system using structured light patterns, covering reconstruction, recognition, and a sampling criterion. To achieve an efficient reconstruction system, we address the problem along several dimensions. First, we extract the geometric 3D coordinates of an object that is illuminated by a set of concentric circular patterns and reflected onto a 2D image plane. The relationship between the original and the deformed shape of the light patterns, due to the surface shape, provides sufficient 3D coordinate information. Second, we consider system efficiency. The efficiency, which can be quantified by the size of the data, is improved by reducing the number of circular patterns projected onto the object of interest. Akin to the Shannon-Nyquist sampling theorem, we derive the minimum number of circular patterns that sufficiently represents the target object with no considerable information loss. Specific geometric information (e.g., the highest curvature) of an object is key to deriving the minimum sampling density. Third, the object, represented using the minimum number of patterns, has incomplete color information (i.e., color information is given a priori only along the curves). An interpolation is carried out to complete the photometric reconstruction. Because the minimum number of patterns may not exactly reconstruct the original object, the result is an approximation; however, it shows no considerable information loss, and the quality of the approximate reconstruction is evaluated by performing recognition or classification. For object recognition, we use facial curves, which are circular patterns deformed by the target object.
We compare the facial curves of different faces or different expressions and subsequently evaluate the performance of the reconstruction results. Since comparison between all pairs of curves can increase the computational complexity, we propose an approach to classification based on the shortest geodesic distances. Shape-based comparison is carried out because it is robust to scaling and to rotation due to varying viewpoints. Previously, linear and non-linear methods have been investigated for the dimensionality reduction that enables efficient recognition and classification algorithms. However, existing approaches generate many parameters, leading to optimization procedures that sometimes do not provide an explicit solution. The proposed approach to dimensionality reduction for recognition is based on the property of the Fourier transform that its magnitude response is symmetric and invariant to time shift; the results are much more explicit, without loss of the targets' intrinsic information. In practice, to reconstruct a larger object, we use a multiple-projector-viewpoint (MPV) system, for which the minimum number of cameras and projectors is critical to efficiency. As an alternative view of reconstruction, we apply concepts from system identification to the reconstruction problem: the first is general system identification, determined by the ratio of the output to the input, and the second is modulation-demodulation theory, used to estimate an input (transmitted) signal from an output (received or observed) signal.

Lee, Deokwoo

192

Needle placement for piriformis injection using 3-D imaging.  

PubMed

Piriformis syndrome is a pain syndrome originating in the buttock and is attributed to 6% - 8% of patients referred for the treatment of back and leg pain. Treatment of piriformis syndrome using fluoroscopy, computed tomography (CT), electromyography (EMG), and ultrasound (US) has become standard practice, and has evolved to include fluoroscopy and EMG with CT guidance. We present a case study of 5 successful piriformis injections using 3-D computer-assisted electromagnetic needle tracking coupled with ultrasound. A six-degree-of-freedom electromagnetic position tracker was attached to the ultrasound probe, allowing the system to detect the position and orientation of the probe in the magnetic field. The tracked ultrasound probe was used to find the posterior superior iliac spine. Subsequently, 3 points were captured to register the ultrasound image with the CT or magnetic resonance image scan. Once registration was obtained, the navigation system visualized the tracked needle relative to the CT scan in real time using 2 orthogonal multi-planar reconstructions centered at the tracked needle tip. By comparison, a recent study revealed that fluoroscopically guided injections had 30% accuracy, while ultrasound guidance tripled that accuracy. This novel technique exhibited an accurate needle guidance injection precision of 98% while advancing to the piriformis muscle and avoiding the sciatic nerve. The mean (± SD) procedure time was 19.08 (± 4.9) minutes. This technique allows for electromagnetic instrument tip tracking with real-time 3-D guidance to the selected target. As with any new technique, a learning curve is expected; however, this technique could offer an alternative that minimizes radiation exposure. PMID:23703429

Clendenen, Steven R; Candler, Shawn A; Osborne, Michael D; Palmer, Scott C; Duench, Stephanie; Glynn, Laura; Ghazi, Salim M

2013-01-01

193

3D lesion insertion in digital breast tomosynthesis images  

NASA Astrophysics Data System (ADS)

Digital breast tomosynthesis (DBT) is a new volumetric breast cancer screening modality. It is based on the principles of computed tomography (CT) and shows promise for improving sensitivity and specificity compared to digital mammography, which is the current standard protocol. A barrier to critically evaluating any new modality, including DBT, is the lack of patient data from which statistically significant conclusions can be drawn; such studies require large numbers of images from both diseased and healthy patients. Since the number of detected lesions is low in relation to the entire breast cancer screening population, there is a particular need to acquire or otherwise create diseased patient data. To meet this challenge, we propose a method to insert 3D lesions in the DBT images of healthy patients, such that the resulting images appear qualitatively faithful to the modality and could be used in future clinical trials or virtual clinical trials (VCTs). The method facilitates direct control of lesion placement and lesion-to-background contrast and is agnostic to the DBT reconstruction algorithm employed.

Vaz, Michael S.; Besnehard, Quentin; Marchessoux, Cédric

2011-03-01

194

GPU-accelerated denoising of 3D magnetic resonance images  

SciTech Connect

The raw computational power of GPU accelerators enables fast denoising of 3D MR images using bilateral filtering, anisotropic diffusion, and non-local means. In practice, applying these filtering operations requires setting multiple parameters. This study was designed to provide better guidance to practitioners for choosing the most appropriate parameters by answering two questions: what parameters yield the best denoising results in practice? And what tuning is necessary to achieve optimal performance on a modern GPU? To answer the first question, we use two different metrics, mean squared error (MSE) and mean structural similarity (MSSIM), to compare denoising quality against a reference image. Surprisingly, the best improvement in structural similarity with the bilateral filter is achieved with a small stencil size that lies within the range of real-time execution on an NVIDIA Tesla M2050 GPU. Moreover, inappropriate choices for parameters, especially scaling parameters, can yield very poor denoising performance. To answer the second question, we perform an autotuning study to empirically determine optimal memory tiling on the GPU. The variation in these results suggests that such tuning is an essential step in achieving real-time performance. These results have important implications for the real-time application of denoising to MR images in clinical settings that require fast turn-around times.
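As a toy illustration of the filter being tuned (not the authors' GPU implementation), a 1D bilateral filter and the MSE metric can be written as follows; the test signal, stencil radius, and scaling parameters below are arbitrary assumptions:

```python
import numpy as np

def bilateral_1d(signal, sigma_s=2.0, sigma_r=0.1, radius=3):
    """Toy bilateral filter: spatial Gaussian weights modulated by intensity
    similarity, so smoothing stops at sharp edges."""
    out = np.empty_like(signal)
    offsets = np.arange(-radius, radius + 1)
    spatial = np.exp(-offsets**2 / (2 * sigma_s**2))
    padded = np.pad(signal, radius, mode="edge")
    for i in range(len(signal)):
        window = padded[i : i + 2 * radius + 1]
        range_w = np.exp(-(window - signal[i]) ** 2 / (2 * sigma_r**2))
        w = spatial * range_w
        out[i] = np.sum(w * window) / np.sum(w)
    return out

def mse(a, b):
    """Mean squared error against a reference, one of the two metrics used."""
    return float(np.mean((a - b) ** 2))

step = np.concatenate([np.zeros(20), np.ones(20)])          # clean reference
noisy = step + np.random.default_rng(1).normal(0, 0.05, step.size)
den = bilateral_1d(noisy)
print(mse(den, step) < mse(noisy, step))  # filtering reduces MSE vs the reference
```

The interplay visible here between the spatial stencil (`radius`, `sigma_s`) and the range scaling (`sigma_r`) is exactly the kind of parameter sensitivity the study quantifies on 3D MR volumes.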

Howison, Mark; Wes Bethel, E.

2014-05-29

195

3D quantitative Fourier analysis of second harmonic generation microscopy images of collagen structure in cartilage  

NASA Astrophysics Data System (ADS)

One of the main advantages of nonlinear microscopy is that it provides 3D imaging capability. Second harmonic generation (SHG) is widely used to image the 3D structure of collagen fibers, and several works have highlighted modification of the collagen fiber fabric in important diseases. By using an ellipsoid-specific fitting technique on the Fourier-transformed image, we show, using both synthetic images and SHG images of cartilage, that the 3D direction of the collagen fibers can be robustly determined.

Romijn, Elisabeth I.; Lilledahl, Magnus B.

2013-02-01

196

Technical Note: RING ARRAY TRANSDUCERS FOR REAL-TIME 3-D IMAGING OF AN ATRIAL SEPTAL OCCLUDER

E-print Network

Technical Note: RING ARRAY TRANSDUCERS FOR REAL-TIME 3-D IMAGING OF AN ATRIAL SEPTAL OCCLUDER) real-time 3-D ultrasound scanner. Transducer performance yielded a −6 dB fractional bandwidth of 20 with a matching layer. Real-time 3-D rendered images of an en face view of a Gore Helex septal occluder in a water

Smith, Stephen

197

Reconstructing Plants in 3D from a Single Image using Analysis-by-Synthesis  

E-print Network

Reconstructing Plants in 3D from a Single Image using Analysis-by-Synthesis Jérôme Guénard from images. However, due to the high complexity of plant topology, dedicated methods for generating 3D plant models must be devised. We propose to generate a 3D model of a plant, using an analysis

Paris-Sud XI, Université de

198

Highly Undersampled 3D Golden Ratio Radial Imaging with Iterative Reconstruction , H. Eggers2  

E-print Network

.5 ms, TR = 7.1 ms, flip angle = 10°, matrix size 128³, applying 2D golden section sampling. Highly Undersampled 3D Golden Ratio Radial Imaging with Iterative Reconstruction M. Doneva, H. Eggers of CS for 3D dynamic imaging using highly undersampled 3D radial acquisition with golden ratio profile

Lübeck, Universität zu

199

Johannes M. Steger Fusion of 3D Laser Scans and Stereo Images for  

E-print Network

Johannes M. Steger Fusion of 3D Laser Scans and Stereo Images for Disparity Maps of Natural Scenes Bachelor's Thesis Fusion of 3D Laser Scans and Stereo Images for Disparity Maps of Natural Scenes Johannes for cluttered natural scenes. We directly measured the 3D structure of natural scenes using a laser range

Kallenrode, May-Britt

200

Simulated 3D Ultrasound LV Cardiac Images for Active Shape Model Training  

E-print Network

Simulated 3D Ultrasound LV Cardiac Images for Active Shape Model Training Constantine Butakoff of 3D ultrasound cardiac segmentation using Active Shape Models (ASM) is presented. The proposed resolution MRI scans and the appearance model obtained from simulated 3D ultrasound images. Usually

Frangi, Alejandro

201

ROIC for gated 3D imaging LADAR receiver  

NASA Astrophysics Data System (ADS)

Time-of-flight laser range finding, deep space communications, and scanning video imaging are three applications requiring very low noise optical receivers to detect fast and weak optical signals. The HgCdTe electron-initiated avalanche photodiode (e-APD) in linear multiplication mode is the detector of choice thanks to its high quantum efficiency, high gain at low bias, high bandwidth, and low noise factor. In this project, a readout integrated circuit (ROIC) for a hybrid e-APD focal plane array (FPA) with 100 µm pitch was designed for a gated 3D-LADAR optical receiver. The ROIC works at 77 K and comprises the unit cell circuit, column-level circuit, timing control, bias circuit, and output driver. The unit cell circuit is a key component, consisting of a preamplifier, correlated double sampling (CDS), bias circuit, and timing control module. In particular, the preamplifier uses a capacitive feedback transimpedance amplifier (CTIA) structure with two capacitors offering switchable capacitance for passive/active dual-mode imaging. The main component of the column-level circuit is a precision multiply-by-two circuit implemented with switched capacitors; switched-capacitor circuits are well suited to the signal processing requirements of a ROIC. The output driver is a simple unity-gain buffer; because the signal is already amplified in the column-level circuit, the buffer uses a rail-to-rail amplifier. In active imaging mode, the integration time is 80 ns; for integration currents from 200 nA to 4 µA, the circuit shows a nonlinearity of less than 1%. In passive imaging mode, the integration time is 150 ns; for integration currents from 1 nA to 20 nA, the nonlinearity is also less than 1%.

Chen, Guoqiang; Zhang, Junling; Wang, Pan; Zhou, Jie; Gao, Lei; Ding, Ruijun

2013-09-01

202

Recent Advances in Retinal Imaging With Adaptive Optics  

E-print Network

Recent Advances in Retinal Imaging With Adaptive Optics 36 Optics & Photonics News January 2005 suggested the use of adaptive optics to improve ground-based astronomy, where the rapidly changing the use of adaptive optics is not limited to astronomical imaging, and in the past few decades there has

Williams, David

203

Deconvolution of adaptive optics retinal images Julian C. Christou  

E-print Network

Deconvolution of adaptive optics retinal images Julian C. Christou Center for Adaptive Optics the contrast of the adaptive optics images. In this work we demonstrate that quantitative information is also by using adaptive optics1 (AO). The wave-front correction is not perfect, however. Although a diffraction

204

In vivo imaging of the retinal pigment epithelial cells  

Microsoft Academic Search

The retinal pigment epithelial (RPE) cells form an important layer of the retina because they are responsible for providing metabolic support to the photoreceptors. Techniques to image the RPE layer include autofluorescence imaging with a scanning laser ophthalmoscope (SLO). However, previous studies were unable to resolve single RPE cells in vivo. This thesis describes the technique of combining autofluorescence, SLO,

Jessica Ijams Wolfing Morgan

2008-01-01

205

Adaptive optics with pupil tracking for high resolution retinal imaging  

E-print Network

for high resolution retinal imaging because eye movements constitute an important part of the ocular ... of Ireland, Galway, Ireland; Imagine Eyes, 18 Rue Charles de Gaulle, 91400, Orsay, France ... ocular aberrations in real time and results in improved high resolution images that reveal

Dainty, Chris

206

3D imaging of enzymes working in situ.  

PubMed

Today, the development of slowly digestible food with a positive health impact and the production of biofuels are matters of intense research. The latter is achieved via enzymatic hydrolysis of starch or biomass such as lignocellulose. Label-free imaging using UV autofluorescence provides a powerful tool for following a single enzyme acting on a non-UV-fluorescent substrate. In this article, we report synchrotron deep-UV (DUV) fluorescence in 3-dimensional imaging to visualize in situ the diffusion of enzymes on a solid substrate. The degradation pathway of single starch granules by two amylases optimized for biofuel production and industrial starch hydrolysis was followed by tryptophan autofluorescence (excitation at 280 nm, emission filter at 350 nm). The new setup was specially designed and developed for a 3D representation of the enzyme-substrate interaction during hydrolysis. This tool is thus particularly effective for improving knowledge and understanding of the enzymatic hydrolysis of solid substrates such as starch and lignocellulosic biomass. It could open the way to new routes in the field of green chemistry and sustainable development, that is, in biotechnology, biorefining, or biofuels. PMID:24796213

Jamme, F; Bourquin, D; Tawil, G; Viks-Nielsen, A; Buléon, A; Réfrégiers, M

2014-06-01

207

Statistical methods for 2D-3D registration of optical and LIDAR images  

E-print Network

Fusion of 3D laser radar (LIDAR) imagery and aerial optical imagery is an efficient method for constructing 3D virtual reality models. One difficult aspect of creating such models is registering the optical image with the ...

Mastin, Dana Andrew

2009-01-01

208

Deformable M-Reps for 3D Medical Image Segmentation  

PubMed Central

M-reps (formerly called DSLs) are a multiscale medial means for modeling and rendering 3D solid geometry. They are particularly well suited to modeling anatomic objects and, in particular, to capturing prior geometric information effectively in deformable-model segmentation approaches. The representation is based on figural models, which define objects at coarse scale by a hierarchy of figures, each figure generally a slab representing a solid region and its boundary simultaneously. This paper focuses on the use of single-figure models to segment objects of relatively simple structure. A single figure is a sheet of medial atoms, which is interpolated from the model formed by a net, i.e., a mesh or chain, of medial atoms (hence the name m-reps), each atom modeling a solid region via not only a position and a width but also a local figural frame giving figural directions and an object angle between opposing, corresponding positions on the boundary implied by the m-rep. The special capability of an m-rep is to provide spatial and orientational correspondence between an object in two different states of deformation. This ability is central to effective measurement of both geometric typicality and geometry-to-image match, the two terms of the objective function optimized in segmentation by deformable models. The other capability of m-reps central to effective segmentation is their support for segmentation at multiple levels of scale, with successively finer precision. Objects modeled by single figures are segmented first by a similarity transform augmented by object elongation, then by adjustment of each medial atom, and finally by displacing a dense sampling of the m-rep implied boundary. While these models and approaches also exist in 2D, we focus on 3D objects. The segmentation of the kidney from CT and the hippocampus from MRI serve as the major examples in this paper. The accuracy of segmentation as compared to manual, slice-by-slice segmentation is reported.
PMID:23825898

Pizer, Stephen M.; Fletcher, P. Thomas; Joshi, Sarang; Thall, Andrew; Chen, James Z.; Fridman, Yonatan; Fritsch, Daniel S.; Gash, Graham; Glotzer, John M.; Jiroutek, Michael R.; Lu, Conglin; Muller, Keith E.; Tracton, Gregg; Yushkevich, Paul; Chaney, Edward L.

2013-01-01

209

Detection of retinal nerve fiber layer defects on retinal fundus images for early diagnosis of glaucoma  

NASA Astrophysics Data System (ADS)

Retinal nerve fiber layer defect (NFLD) is a major sign of glaucoma, which is the second leading cause of blindness in the world. Early detection of NFLDs is critical for improved prognosis of this progressive, blinding disease. We have investigated a computerized scheme for the detection of NFLDs on retinal fundus images. In this study, 162 images, including 81 images with 99 NFLDs, were used. After the major blood vessels were removed, the images were transformed on the basis of ellipses so that the curved paths of the retinal nerves become approximately straight, and Gabor filters were applied to enhance the NFLDs. Band-like regions darker than the surrounding pixels were detected as NFLD candidates. For each candidate, image features were computed, and the likelihood of a true NFLD was estimated by using linear discriminant analysis and an artificial neural network (ANN). The sensitivity for detecting NFLDs was 91% at 1.0 false positive per image when using the ANN. The proposed computerized system for the detection of NFLDs can be useful to physicians in the diagnosis of glaucoma in a mass screening setting.
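The Gabor enhancement step can be sketched as a small oriented filter bank applied after the straightening transform; the kernel size, wavelength, and sigma below are illustrative guesses, not the study's tuned values:

```python
import numpy as np

def gabor_kernel(ksize=15, sigma=3.0, theta=0.0, wavelength=8.0):
    """Real-valued Gabor kernel: an oriented sinusoid under a Gaussian envelope,
    which responds strongly to band-like structures such as darkened NFLD streaks."""
    half = ksize // 2
    y, x = np.mgrid[-half : half + 1, -half : half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)   # rotate coordinates by theta
    yr = -x * np.sin(theta) + y * np.cos(theta)
    g = np.exp(-(xr**2 + yr**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / wavelength)
    return g - g.mean()  # zero mean, so flat image regions give no response

# A bank of orientations; each kernel would be convolved with the straightened image
# and the maximum response taken per pixel.
bank = [gabor_kernel(theta=t) for t in np.linspace(0, np.pi, 8, endpoint=False)]
print(len(bank), bank[0].shape)  # 8 (15, 15)
```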

Muramatsu, Chisako; Hayashi, Yoshinori; Sawada, Akira; Hatanaka, Yuji; Hara, Takeshi; Yamamoto, Tetsuya; Fujita, Hiroshi

2010-01-01

210

Retinal image mosaicing using the radial distortion correction model  

NASA Astrophysics Data System (ADS)

Fundus camera imaging can be used to examine the retina to detect disorders. Similar to looking through a small keyhole into a large room, imaging the fundus with an ophthalmologic camera allows only a limited view at a time. Thus, the generation of a retinal montage from multiple images has the potential to increase diagnostic accuracy by providing a larger field of view. A method of mosaicing multiple retinal images using the radial distortion correction (RADIC) model is proposed in this paper. Our method determines the inter-image connectivity by detecting feature correspondences. The connectivity information is converted to a tree structure that describes the spatial relationships between the reference and target images for pairwise registration. The montage is generated by cascading the pairwise registration scheme, starting from the anchor image and moving downward through the connectivity tree hierarchy. The RADIC model corrects the radial distortion that is due to the spherical-to-planar projection during retinal imaging. Therefore, after radial distortion correction, individual images can be properly mapped onto a montage space by a linear geometric transformation, e.g. an affine transform. Compared to most existing montaging methods, our method is unique in that only a single registration per image is required because of the distortion correction property of the RADIC model. As a final step, distance-weighted intensity blending is employed to correct the inter-image differences in illumination encountered when forming the montage. Visual inspection of the experimental results for three mosaicing cases shows that our method can produce satisfactory montages.
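The final distance-weighted blending step can be sketched as follows, assuming images already registered to integer canvas offsets (the actual pipeline blends RADIC-corrected, affine-mapped images; the weight map and toy data here are illustrative assumptions):

```python
import numpy as np

def distance_weight(shape):
    """Weight map that decays toward the image borders (distance to nearest edge),
    so pixels near a tile's rim contribute less where tiles overlap."""
    h, w = shape
    yy, xx = np.mgrid[0:h, 0:w]
    d = np.minimum.reduce([yy + 1, h - yy, xx + 1, w - xx]).astype(float)
    return d / d.max()

def blend(images, offsets, canvas_shape):
    """Paste registered tiles onto a canvas, averaging overlaps with
    distance-to-border weights to soften seams and illumination differences."""
    acc = np.zeros(canvas_shape)
    wsum = np.zeros(canvas_shape)
    for img, (oy, ox) in zip(images, offsets):
        wmap = distance_weight(img.shape)
        h, w = img.shape
        acc[oy:oy + h, ox:ox + w] += img * wmap
        wsum[oy:oy + h, ox:ox + w] += wmap
    return acc / np.maximum(wsum, 1e-9)

a = np.full((4, 6), 1.0)           # two toy tiles with different "illumination"
b = np.full((4, 6), 3.0)
m = blend([a, b], [(0, 0), (0, 4)], (4, 10))
print(m[0, 0], m[0, 9])  # non-overlapping regions keep their original values
```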

Lee, Sangyeol; Abràmoff, Michael D.; Reinhardt, Joseph M.

2008-03-01

211

Nonlinear filtering approach to grayscale interpolation of 3D medical images  

NASA Astrophysics Data System (ADS)

Three-dimensional images are now common in radiology. A 3D image is formed by stacking a contiguous sequence of two-dimensional cross-sectional images, or slices. Typically, the spacing between known slices is greater than the spacing between known points on a slice. Many visualization and image-analysis tasks, however, require the 3D image to have equal sample spacing in all directions. To meet this requirement, one applies an interpolation technique to the known 3D image to generate a new uniformly sampled 3D image. We propose a nonlinear-filter-based approach to gray-scale interpolation of 3D images. The method, referred to as column-fitting interpolation, is reminiscent of the maximum-homogeneity filter used for image enhancement. The method is typically more effective than traditional gray-scale interpolation techniques.
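For contrast with the proposed column-fitting filter, the traditional gray-scale baseline it is compared against (linear interpolation between known slices along z) might look like this sketch; the upsampling factor and toy volume are assumptions for illustration:

```python
import numpy as np

def interp_slices(volume, factor):
    """Linearly interpolate a (z, y, x) volume along z so the slice spacing
    shrinks by `factor`, yielding (nz - 1) * factor + 1 output slices."""
    nz = volume.shape[0]
    z_new = np.linspace(0, nz - 1, (nz - 1) * factor + 1)
    lo = np.floor(z_new).astype(int)
    hi = np.minimum(lo + 1, nz - 1)
    t = (z_new - lo)[:, None, None]         # fractional position between slices
    return (1 - t) * volume[lo] + t * volume[hi]

vol = np.stack([np.zeros((2, 2)), np.full((2, 2), 4.0)])  # two known slices
out = interp_slices(vol, 4)
print(out.shape, out[2, 0, 0])  # (5, 2, 2) 2.0  — midpoint slice is the average
```

The paper's nonlinear column-fitting approach replaces this purely linear blend with a maximum-homogeneity-style fit along each column, which better preserves edges between tissues.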

Higgins, William E.; Ledell, Brian E.

1994-05-01
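The column-fitting filter itself is not specified in this record, but the traditional baseline it is compared against, linear gray-scale interpolation of new slices between known slices, can be sketched as follows (function name and test volume are illustrative):

```python
import numpy as np

def interpolate_slices(volume, factor):
    """Linearly interpolate new slices between the known slices of a 3D
    image so slice spacing approaches in-plane spacing (baseline method)."""
    nz = volume.shape[0]
    new_z = np.linspace(0, nz - 1, (nz - 1) * factor + 1)  # fractional slice indices
    lo = np.floor(new_z).astype(int)
    hi = np.minimum(lo + 1, nz - 1)
    t = (new_z - lo)[:, None, None]                        # interpolation weight
    return (1 - t) * volume[lo] + t * volume[hi]

vol = np.zeros((2, 2, 2))
vol[1] = 100.0                    # two known slices: all-0 and all-100
up = interpolate_slices(vol, 2)   # insert one slice midway
print(up.shape[0], up[1, 0, 0])   # 3 slices; middle slice is the average
```

Nonlinear filtering approaches such as the paper's column-fitting method aim to avoid the edge blurring this linear averaging produces across slice boundaries.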

212

3D fingerprint imaging system based on full-field fringe projection profilometry  

NASA Astrophysics Data System (ADS)

As a unique, unchangeable, and easily acquired biometric, the fingerprint has been widely studied in academia and applied in many fields over the years. Traditional fingerprint recognition methods are based on 2D fingerprint features. However, a fingerprint is a 3D biological characteristic; the mapping from 3D to 2D loses one dimension of information and causes nonlinear distortion of the captured fingerprint. Therefore, it is becoming increasingly important to obtain 3D fingerprint information for recognition. In this paper, a novel 3D fingerprint imaging system based on the fringe projection technique is presented to obtain 3D features and the corresponding color texture information. A series of color sinusoidal fringe patterns with optimum three-fringe numbers is projected onto a finger surface. Viewed from another viewpoint, the fringe patterns are deformed by the finger surface and captured by a CCD camera. 3D shape data of the finger can be obtained from the captured fringe pattern images. This paper studies the prototype of the 3D fingerprint imaging system, including the principle of 3D fingerprint acquisition, the hardware design of the 3D imaging system, 3D calibration of the system, and software development. Experiments are carried out by acquiring several 3D fingerprint data sets. The experimental results demonstrate the feasibility of the proposed 3D fingerprint imaging system.

Huang, Shujun; Zhang, Zonghua; Zhao, Yan; Dai, Jie; Chen, Chao; Xu, Yongjia; Zhang, E.; Xie, Lili

2014-01-01
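Recovering phase from a set of shifted sinusoidal fringe images is the core of fringe projection profilometry. A minimal N-step phase-stepping sketch is shown below (the optimum three-fringe-number unwrapping and the calibration the authors describe are omitted; the synthetic test data is an assumption):

```python
import numpy as np

def wrapped_phase(frames, deltas):
    """Recover the wrapped phase from N phase-shifted fringe images
    I_k = A + B*cos(phi - delta_k) by least-squares phase stepping:
    phi = atan2(sum I_k sin(delta_k), sum I_k cos(delta_k))."""
    num = sum(I * np.sin(d) for I, d in zip(frames, deltas))
    den = sum(I * np.cos(d) for I, d in zip(frames, deltas))
    return np.arctan2(num, den)

# Synthetic test: a known phase ramp imaged under 3 equally shifted patterns.
phi = np.linspace(-1.0, 1.0, 64)
deltas = [0.0, 2 * np.pi / 3, 4 * np.pi / 3]
frames = [128 + 100 * np.cos(phi - d) for d in deltas]
print(np.allclose(wrapped_phase(frames, deltas), phi, atol=1e-6))
```

The background term A and modulation B cancel in the sums when the shifts are equally spaced, which is why three or more patterns suffice per fringe frequency.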

213

Portable, low-priced retinal imager for eye disease screening  

NASA Astrophysics Data System (ADS)

The objective of this project was to develop and demonstrate a portable, low-priced, easy-to-use non-mydriatic retinal camera for eye disease screening in underserved urban and rural locations. Existing portable retinal imagers do not meet the requirements of a low-cost camera with sufficient technical capabilities (field of view, image quality, portability, battery power, and ease of use) to be distributed widely to low-volume clinics, such as the offices of single primary care physicians serving rural communities or other economically stressed healthcare facilities. Our approach for Smart i-Rx represents a significant departure from current generations of desktop and hand-held commercial retinal cameras, as well as those under development. Our techniques include: 1) exclusive use of off-the-shelf components; 2) integration of the retinal imaging device into a low-cost, high-utility camera mount and chin rest; 3) a unique optical and illumination design for a small form factor; 4) exploitation of the autofocus technology built into present digital SLR recreational cameras; and 5) integration of a polarization technique to avoid the corneal reflex. In a prospective study, 41 out of 44 diabetics were imaged successfully. No imaging was attempted on three of the subjects due to noticeably small pupils (less than 2 mm). The images were of sufficient quality to detect abnormalities related to diabetic retinopathy, such as microaneurysms and exudates. These images were compared with ones taken non-mydriatically with a Canon CR-1 Mark II camera. No cases identified as having DR by expert retinal graders were missed in the Smart i-Rx images.

Soliz, Peter; Nemeth, Sheila; VanNess, Richard; Barriga, E. S.; Zamora, Gilberto

2014-02-01

214

Analyzing 3D Objects in Cluttered Images  

E-print Network

second stage, using an explicit 3D model of shape and viewpoint. We use a morphable model to capture 3D within-class variation, and use a weak-perspective camera model to capture viewpoint. We learn all model parameters from 2D annotations. We demonstrate state-of-the-art accuracy for detection, viewpoint estimation

Hejrati, Mohsen; Ramanan, Deva

215

DXSoil, a library for 3D image analysis in soil science  

Microsoft Academic Search

A comprehensive series of routines has been developed to extract structural and topological information from 3D images of porous media. The main application aims at feeding a pore network approach to simulate unsaturated hydraulic properties from soil core images. Beyond the application example, the successive algorithms presented in the paper allow, from any 3D object image, the extraction of the

Jean-François Delerue; Edith Perrier

2002-01-01

216

A Range Image Refinement Technique for Multi-view 3D Model Reconstruction  

E-print Network

A Range Image Refinement Technique for Multi-view 3D Model Reconstruction. Soon-Yong Park and Murali Subbarao (e-mail: parksy@ece.sunysb.edu). Abstract: This paper presents a range image refinement technique for generating accurate 3D computer models of real objects. Range images obtained from a stereo-vision system typically

Subbarao, Murali "Rao"

217

3D LASER IMAGING BY BACKPROJECTION JEAN-BAPTISTE BELLET AND GÉRARD BERGINC  

E-print Network

3D LASER IMAGING BY BACKPROJECTION. JEAN-BAPTISTE BELLET AND GÉRARD BERGINC. Abstract. In this paper, we are interested in imaging a 3D scene from a collection of 2D laser images, using an advanced … algorithm for cone-beam scanning. Then we use a pinhole camera model to describe the laser experiment

Université Paris-Sud XI

218

3D-3D registration of partial capitate bones using spin-images  

NASA Astrophysics Data System (ADS)

It is often necessary to register partial objects in medical imaging. Due to a limited field of view (FOV), the entirety of an object cannot always be imaged. This study presents a novel application of an existing registration algorithm to this problem. The spin-image algorithm [1] creates pose-invariant representations of global shape with respect to individual mesh vertices. These 'spin-images' are then compared for two different poses of the same object to establish correspondences and subsequently determine the relative orientation of the poses. In this study, the spin-image algorithm is applied to 4DCT-derived capitate bone surfaces to assess the relative accuracy of registration with various amounts of geometry excluded. The limited longitudinal coverage of the 4DCT technique (38.4 mm, [2]) results in partial views of the capitate when imaging wrist motions. This study assesses the ability of the spin-image algorithm to register partial bone surfaces by artificially restricting the capitate geometry available for registration. Under IRB approval, standard static CT and 4DCT scans were obtained on a patient. The capitate was segmented from the static CT and from one phase of the 4DCT in which the whole bone was available. Spin-image registration was performed between the static CT and 4DCT surfaces. Distal portions of the 4DCT capitate (10-70%) were then progressively removed and registration was repeated. Registration accuracy was evaluated by angular errors and the percentage of sub-resolution fitting. It was determined that 60% of the distal capitate could be omitted without appreciable effect on registration accuracy using the spin-image algorithm (angular error < 1.5 degrees, sub-resolution fitting < 98.4%).

Breighner, Ryan; Holmes, David R.; Leng, Shuai; An, Kai-Nan; McCollough, Cynthia; Zhao, Kristin

2013-03-01
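The spin-image representation cited as [1] (Johnson's method) can be sketched as follows: each surface point is mapped into the (radial, axial) coordinates of a plane spun around an oriented point's normal and accumulated into a 2D histogram. Bin size, image extent, and the toy point set are illustrative choices, not the study's parameters:

```python
import numpy as np

def spin_image(points, p, n, bin_size=1.0, size=4):
    """Spin-image for oriented point (p, n): every surface point x maps to
    alpha = radial distance from the normal axis, beta = signed height
    along the normal; (alpha, beta) pairs are binned into a 2D histogram."""
    n = n / np.linalg.norm(n)
    d = points - p
    beta = d @ n                                                  # axial coord
    alpha = np.sqrt(np.maximum(np.sum(d * d, 1) - beta**2, 0.0))  # radial coord
    img = np.zeros((2 * size, size))
    i = np.floor(beta / bin_size).astype(int) + size  # shift: beta can be < 0
    j = np.floor(alpha / bin_size).astype(int)
    ok = (i >= 0) & (i < 2 * size) & (j >= 0) & (j < size)
    np.add.at(img, (i[ok], j[ok]), 1)                 # unbuffered accumulation
    return img

pts = np.array([[0., 0., 0.], [1., 0., 0.], [0., 0., 2.]])
si = spin_image(pts, pts[0], np.array([0., 0., 1.]))
print(int(si.sum()))  # all three points fall inside the histogram
```

Because (alpha, beta) depends only on distances relative to the oriented point, the histogram is invariant to rigid rotation about the normal, which is what allows correspondences to be found between arbitrary poses of the same bone.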

219

Hydraulic conductivity imaging from 3-D transient hydraulic tomography at several pumping/observation densities  

E-print Network

Hydraulic conductivity imaging from 3-D transient hydraulic tomography at several pumping/observation densities. Received … August 2013; accepted 7 September 2013; published 13 November 2013. [1] In 3-D hydraulic tomography (3-D HT), … (primarily hydraulic conductivity, K) is estimated by joint inversion of head change data from multiple

Barrash, Warren

220

Imaging cellular network dynamics in three dimensions using fast 3D laser scanning  

E-print Network

Imaging cellular network dynamics in three dimensions using fast 3D laser scanning. Werner Göbel … scanners to repeatedly scan the laser focus along a closed 3D trajectory. More than 90% of cell somata were … lacking. Here we introduce a three-dimensional (3D) line-scan technology for two-photon microscopy

Cai, Long

221

3D coordinate transform model of optical images fusing vector distance information  

NASA Astrophysics Data System (ADS)

To reduce the error of the affine transform when matching three-dimensional targets in optical images, the optical image matching model was extended to three dimensions using the distance information of salient features in optical images (vector distance information), and a three-dimensional (3D) coordinate transform is proposed. Theoretical analysis shows that when the optical imaging model is simplified to a pinhole imaging model, the 3D coordinate transform introduces no error, while also avoiding the nonlinear problem. The computational cost of the 3D coordinate transform was analyzed using least-squares estimation and RANSAC estimation as examples; it is only four times that of the affine transform, and twice when using RANSAC estimation. A simulation analysis of a matching and tracking algorithm based on SIFT feature points using the 3D coordinate transform was carried out with the visual simulation software Vega Prime and MATLAB, verifying the advantages of the 3D coordinate transform.

Ran, Huanhuan; Huo, Yihua; Huang, Zili

2015-02-01

222

Retinal motion estimation and image dewarping in adaptive optics scanning  

E-print Network

Retinal motion estimation and image dewarping in adaptive optics scanning laser ophthalmoscopy. Curtis R. Vogel, Department of Mathematical … © 2005 Optical Society of America. OCIS codes: (010.1080) Adaptive

Parker, Albert E.

223

Retinal vessel cannulation with an image-guided handheld robot  

Microsoft Academic Search

Cannulation of small retinal vessels is often prohibitively difficult for surgeons, since physiological tremor often exceeds the narrow diameter of the vessel (40-120 µm). Using an active handheld micromanipulator, we introduce an image-guided robotic system that reduces tremor and provides smooth, scaled motion during the procedure. The micromanipulator assists the surgeon during the approach, puncture, and injection stages of the

Brian C. Becker; Sandrine Voros; Louis A. Lobes; James T. Handa; Gregory D. Hager; Cameron N. Riviere

2010-01-01

224

Blood Flow Magnetic Resonance Imaging of Retinal Degeneration  

E-print Network

Blood Flow Magnetic Resonance Imaging of Retinal Degeneration. Yingxia Li, Haiying Cheng, Qiang … Duong. PURPOSE. This study aims to investigate quantitative basal blood flow as well as hypercapnia- and hyperoxia-induced blood flow changes in the retinas of the Royal College of Surgeons (RCS

Duong, Timothy Q.

225

3D INDUSTRIAL RECONSTRUCTION BY FITTING CSG MODELS TO A COMBINATION OF IMAGES AND POINT CLOUDS  

Microsoft Academic Search

We present a method for 3D reconstruction of industrial sites using a combination of images and point clouds with a motivation of achieving higher levels of automation, precision, and reliability. Recent advances in 3D scanning technologies have made possible rapid and cost-effective acquisition of dense point clouds for 3D reconstruction. As the point clouds provide explicit 3D information, they have

Tahir Rabbani; Frank van den Heuvel

226

Real-time computer-generated integral imaging and 3D image calibration for augmented reality surgical navigation.  

PubMed

Autostereoscopic 3D image overlay for augmented reality (AR) based surgical navigation has been studied and reported many times. For the purpose of surgical overlay, the 3D image is expected to have the same geometric shape as the original organ, and to be transformable to a specified location for image overlay. However, how to generate a 3D image with high geometric fidelity, and how to quantitatively evaluate a 3D image's geometric accuracy, have not been addressed. This paper proposes a graphics processing unit (GPU) based computer-generated integral imaging pipeline for real-time autostereoscopic 3D display, and an automatic closed-loop 3D image calibration paradigm for displaying undistorted 3D images. Based on the proposed methods, a novel AR device for 3D image surgical overlay is presented, which mainly consists of a 3D display, an AR window, a stereo camera for 3D measurement, and a workstation for information processing. The evaluation of the 3D image rendering performance with 2560×1600 elemental image resolution shows rendering speeds of 50-60 frames per second (fps) for surface models, and 5-8 fps for large medical volumes. The evaluation of the undistorted 3D image after the calibration yields sub-millimeter geometric accuracy. A phantom experiment simulating oral and maxillofacial surgery was also performed to evaluate the proposed AR overlay device in terms of the image registration accuracy, 3D image overlay accuracy, and the visual effects of the overlay. The experimental results show satisfactory image registration and image overlay accuracy, and confirm the system usability. PMID:25465067

Wang, Junchen; Suenaga, Hideyuki; Liao, Hongen; Hoshi, Kazuto; Yang, Liangjing; Kobayashi, Etsuko; Sakuma, Ichiro

2015-03-01

227

3-D Seismic Methods for Shallow Imaging Beneath Pavement  

E-print Network

The research presented in this dissertation focuses on survey design and acquisition of near-surface 3D seismic reflection and surface wave data on pavement. Increased efficiency for mapping simple subsurface interfaces through a combined use...

Miller, Brian

2013-05-31

228

GDx-MM: An imaging Mueller matrix retinal polarimeter  

NASA Astrophysics Data System (ADS)

Retinal diseases are a major cause of blindness worldwide. Although widely studied, disease mechanisms are not completely understood, and diagnostic tests may not detect disease early enough for timely intervention. The goal of this research is to contribute to research for more sensitive diagnostic tests that might use the interaction of polarized light with retinal tissue to detect subtle changes in the microstructure. This dissertation describes the GDx-MM, a scanning laser polarimeter which measures a complete 16-element Mueller matrix image of the retina. This full polarization signature may provide new comparative information on the structure of healthy and diseased retinal tissue by highlighting depolarizing structures as well as structures with varying magnitudes and orientations of retardance and diattenuation. The three major components of this dissertation are: (1) development of methods for polarimeter optimization and error analysis; (2) design, optimization, assembly, calibration, and validation of the GDx-MM polarimeter; and (3) analysis of data for several human subjects. Development involved modifications to a Laser Diagnostics GDx, a commercially available scanning laser ophthalmoscope with incomplete polarization capability. Modifications included installation of polarization components, development of a data acquisition system, and implementation of algorithms to convert raw data into polarization parameter images. Optimization involved visualization of polarimeter state trajectories on the Poincaré sphere and a condition number analysis of the instrument matrix. Retinal images are collected non-invasively at 20 µm resolution over a 15° visual field in four seconds. Validation of the polarimeter demonstrates a polarimetric measurement accuracy of approximately ±5%. Retinal polarization data was collected on normal human subjects at the University of Arizona and at Indiana University School of Optometry.
Calculated polarization parameter images reveal properties of the tissue microstructure. For example, retardance images indicate nerve fiber layer thickness and orientation, and depolarization images (uniform for these normal subjects), are predicted to indicate regions of disease-related tissue disruption. This research demonstrates a method for obtaining a full polarization signature of the retina in one measurement using a polarimetrically optimized instrument, and provides a step toward the use of complete retinal imaging polarimetry in the diagnosis and monitoring of retinal disease.

Twietmeyer, Karen Marie

2007-12-01

229

Accuracy of volume measurement using 3D ultrasound and development of CT-3D US image fusion algorithm for prostate cancer radiotherapy  

SciTech Connect

Purpose: To evaluate the accuracy of measuring volumes using three-dimensional ultrasound (3D US), and to verify the feasibility of replacing CT-MR fusion images with CT-3D US images in radiotherapy treatment planning. Methods: Phantoms, consisting of water, contrast agent, and agarose, were manufactured. The volume was measured using 3D US, CT, and MR devices. CT-3D US and MR-3D US image fusion software was developed using the Insight Toolkit library in order to acquire three-dimensional fusion images. The quality of the image fusion was evaluated using the metric value and fusion images. Results: Volume measurement using 3D US shows a 2.8 ± 1.5% error, versus a 4.4 ± 3.0% error for CT and a 3.1 ± 2.0% error for MR. The results imply that volume measurement using the 3D US devices has a similar accuracy level to that of CT and MR. Three-dimensional image fusion of CT-3D US and MR-3D US was successfully performed using phantom images. Moreover, MR-3D US image fusion was performed using human bladder images. Conclusions: 3D US could be used in the volume measurement of human bladders and prostates. CT-3D US image fusion could be used in monitoring the target position in each fraction of external beam radiation therapy. Moreover, the feasibility of replacing CT-MR image fusion with CT-3D US in radiotherapy treatment planning was verified.

Baek, Jihye; Huh, Jangyoung; Hyun An, So; Oh, Yoonjin [Department of Medical Sciences, Ewha Womans University, Seoul 158-710 (Korea, Republic of); Kim, Myungsoo; Kim, DongYoung; Chung, Kwangzoo; Cho, Sungho; Lee, Rena [Department of Radiation Oncology, School of Medicine, Ewha Womans University, Seoul 158-710 (Korea, Republic of)

2013-02-15

230

Imaging system for creating 3D block-face cryo-images of whole mice  

NASA Astrophysics Data System (ADS)

We developed a cryomicrotome/imaging system that provides high-resolution, high-sensitivity block-face images of whole mice or excised organs, and applied it to a variety of biological applications. With this cryo-imaging system, we sectioned cryo-preserved tissues at 2-40 µm thickness and acquired high-resolution brightfield and fluorescence images with microscopic in-plane resolution (as good as 1.2 µm). Brightfield images of normal and pathological anatomy show exquisite detail, especially in the abdominal cavity. Multi-planar reformatting and 3D renderings allow one to interrogate 3D structures. In this report, we present brightfield images of mouse anatomy, as well as 3D renderings of organs. For the BPK mouse model of polycystic kidney disease, we compared brightfield cryo-images and kidney volumes to MRI. The color images provided greater contrast and resolution of cysts compared to in vivo MRI. We note that color cryo-images are closer to what a researcher sees in dissection, making it easier to interpret the image data. The combination of field of view, depth of field, ultra-high resolution, and color/fluorescence contrast enables cryo-image volumes to provide details that cannot be found through in vivo imaging or other ex vivo optical imaging approaches. We believe that this novel imaging system will have applications that include identification of mouse phenotypes; characterization of diseases like blood vessel disease, kidney disease, and cancer; assessment of drug and gene therapy delivery and efficacy; and validation of other imaging modalities.

Roy, Debashish; Breen, Michael; Salvado, Olivier; Heinzel, Meredith; McKinley, Eliot; Wilson, David

2006-03-01

231

Evidence of outer retinal changes in glaucoma patients as revealed by ultrahigh-resolution in vivo retinal imaging  

PubMed Central

Aims It is well established that glaucoma results in a thinning of the inner retina. To investigate whether the outer retina is also involved, ultrahigh-resolution retinal imaging techniques were utilised. Methods Eyes from 10 glaucoma patients (25–78 years old) were imaged using three research-grade instruments: (1) ultrahigh-resolution Fourier-domain optical coherence tomography (UHR-FD-OCT), (2) adaptive optics (AO) UHR-FD-OCT and (3) an AO-flood illuminated fundus camera (AO-FC). UHR-FD-OCT and AO-UHR-FD-OCT B-scans were examined for any abnormalities in the retinal layers. On some patients, cone density measurements were made from the AO-FC en face images. Correlations between retinal structure and visual sensitivity were measured by Humphrey visual-field (VF) testing made at the corresponding retinal locations. Results All three in vivo imaging modalities revealed evidence of outer retinal changes along with the expected thinning of the inner retina in glaucomatous eyes with VF loss. AO-UHR-FD-OCT images identified the exact location of structural changes within the cone photoreceptor layer, with the AO-FC en face images showing dark areas in the cone mosaic at the same retinal locations with reduced visual sensitivity. Conclusion Losses in cone density along with expected inner retinal changes were demonstrated in well-characterised glaucoma patients with VF loss. PMID:20956277

Choi, Stacey S; Zawadzki, Robert J; Lim, Michele C; Brandt, James D; Keltner, John L; Doble, Nathan; Werner, John S

2010-01-01

232

IEEE TRANSACTIONS ON BIOMEDICAL ENGINEERING 1 3D-Image Reconstruction in Highly Collimated 3D  

E-print Network

… is employed to visualize interior organs within the human body and obtain information on their structure … imaging technology, such as 64-detector scanners, for applications such as angiography [8], [13]. However, … a region of interest within the human body or within an organ is required. This is the typical situation …

Labate, Demetrio

233

A novel method for detection of preferred retinal locus (PRL) through simple retinal image processing using MATLAB  

NASA Astrophysics Data System (ADS)

A simple new technique for the detection of the 'Preferred Retinal Locus' (PRL) in the human eye is proposed in this paper. Simple MATLAB algorithms estimating the RGB pixel intensity values of retinal images were used. The technique demonstrated the non-existence of 'S' cones in the fovea centralis and also suggests that rods are involved in blue color perception. Retinal images of central vision loss and of the normal retina were taken for image processing. Blue minimum, Red maximum, and Red+Green maximum were the three methods used in detecting the PRL. Comparative analyses of these methods against patients' age and visual acuity were also performed.

Kalikivayi, V.; Pal, Sudip; Ganesan, A. R.

2013-09-01
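The three pixel-intensity criteria named in this record can be sketched as below. The exact neighborhood handling and preprocessing in the original MATLAB code are not specified, so each criterion here is evaluated per pixel on a synthetic image; the function name and test data are illustrative assumptions:

```python
import numpy as np

def prl_candidates(rgb):
    """Candidate PRL locations under three criteria named in the record:
    'Blue minimum', 'Red maximum', and 'Red+Green maximum', each taken as
    the single extremal pixel (a simplifying assumption)."""
    r = rgb[..., 0].astype(float)
    g = rgb[..., 1].astype(float)
    b = rgb[..., 2].astype(float)
    return {
        "blue_min": np.unravel_index(np.argmin(b), b.shape),
        "red_max": np.unravel_index(np.argmax(r), r.shape),
        "red_green_max": np.unravel_index(np.argmax(r + g), r.shape),
    }

img = np.full((5, 5, 3), 120, dtype=np.uint8)
img[2, 3] = (250, 240, 10)   # synthetic spot: red/green high, blue low
print(prl_candidates(img)["blue_min"])
```

On the synthetic image all three criteria agree on the same pixel; on real fundus images they need not, which is why the paper compares the methods against age and visual acuity.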

234

Restoration of retinal images with space-variant blur  

NASA Astrophysics Data System (ADS)

Retinal images are essential clinical resources for the diagnosis of retinopathy and many other ocular diseases. Because of improper acquisition conditions or inherent optical aberrations in the eye, the images are often degraded with blur. In many common cases, the blur varies across the field of view. Most image deblurring algorithms assume a space-invariant blur, which fails in the presence of space-variant (SV) blur. In this work, we propose an innovative strategy for the restoration of retinal images in which we consider the blur to be both unknown and SV. We model the blur by a linear operation interpreted as a convolution with a point-spread function (PSF) that changes with the position in the image. To achieve an artifact-free restoration, we propose a framework for a robust estimation of the SV PSF based on an eye-domain knowledge strategy. The restoration method was tested on artificially and naturally degraded retinal images. The results show an important enhancement, significant enough to leverage the images' clinical use.

Marrugo, Andrés G.; Millán, María S.; Šorel, Michal; Šroubek, Filip

2014-01-01
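The space-variant blur model described above (convolution with a PSF that changes with position) is commonly approximated by interpolating between a few space-invariant blurs. The sketch below illustrates that forward model only, not the authors' restoration method; the left/right PSF pair and the linear weight ramp are illustrative assumptions:

```python
import numpy as np

def conv2(img, k):
    """Minimal 'same'-size 2D convolution (kernel assumed odd-sized)."""
    kh, kw = k.shape
    p = np.pad(img, ((kh // 2,) * 2, (kw // 2,) * 2))
    out = np.zeros_like(img, dtype=float)
    for i in range(kh):
        for j in range(kw):
            out += k[i, j] * p[i:i + img.shape[0], j:j + img.shape[1]]
    return out

def space_variant_blur(img, psf_left, psf_right):
    """Approximate a space-variant blur as a convex combination of two
    space-invariant blurs, with weights varying linearly across the field:
    y(x) = w(x)*(h_left * u)(x) + (1 - w(x))*(h_right * u)(x)."""
    w = np.linspace(1.0, 0.0, img.shape[1])[None, :]
    return w * conv2(img, psf_left) + (1 - w) * conv2(img, psf_right)

identity = np.zeros((3, 3)); identity[1, 1] = 1.0   # sharp on the left
box = np.full((3, 3), 1 / 9.0)                      # blurred on the right
img = np.zeros((5, 7)); img[2, 1] = img[2, 5] = 9.0
y = space_variant_blur(img, identity, box)
print(y[2, 1], y[1, 5])   # left point stays nearly sharp, right point spreads
```

Restoration methods for SV blur typically invert this kind of interpolated-PSF model, estimating the local PSFs from the image itself (here, from eye-domain knowledge).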

235

Restoration of retinal images with space-variant blur.  

PubMed

Retinal images are essential clinical resources for the diagnosis of retinopathy and many other ocular diseases. Because of improper acquisition conditions or inherent optical aberrations in the eye, the images are often degraded with blur. In many common cases, the blur varies across the field of view. Most image deblurring algorithms assume a space-invariant blur, which fails in the presence of space-variant (SV) blur. In this work, we propose an innovative strategy for the restoration of retinal images in which we consider the blur to be both unknown and SV. We model the blur by a linear operation interpreted as a convolution with a point-spread function (PSF) that changes with the position in the image. To achieve an artifact-free restoration, we propose a framework for a robust estimation of the SV PSF based on an eye-domain knowledge strategy. The restoration method was tested on artificially and naturally degraded retinal images. The results show an important enhancement, significant enough to leverage the images' clinical use. PMID:24474509

Marrugo, Andrés G; Millán, María S; Šorel, Michal; Šroubek, Filip

2014-01-01

236

Hyperspectral retinal imaging with a spectrally tunable light source  

NASA Astrophysics Data System (ADS)

Hyperspectral retinal imaging can measure oxygenation and identify areas of ischemia in human patients, but the devices used by current researchers are inflexible in spatial and spectral resolution. We have developed a flexible research prototype consisting of a DLP-based spectrally tunable light source coupled to a fundus camera to quickly explore the effects of spatial resolution, spectral resolution, and spectral range on hyperspectral imaging of the retina. The goal of this prototype is to (1) identify spectral and spatial regions of interest for early diagnosis of diseases such as glaucoma, age-related macular degeneration (AMD), and diabetic retinopathy (DR); and (2) define required specifications for commercial products. In this paper, we describe the challenges and advantages of using a spectrally tunable light source for hyperspectral retinal imaging, present clinical results of initial imaging sessions, and describe how this research can be leveraged into specifying a commercial product.

Francis, Robert P.; Zuzak, Karel J.; Ufret-Vincenty, Rafael

2011-03-01

237

Phase aided 3D imaging and modeling: dedicated systems and case studies  

NASA Astrophysics Data System (ADS)

Dedicated prototype systems for 3D imaging and modeling (3DIM) are presented. The 3D imaging systems are based on the principle of phase-aided active stereo and have been developed in our laboratory over the past few years. The reported 3D imaging prototypes range from a single 3D sensor to a kind of optical measurement network composed of multiple node 3D-sensors. To enable these 3D imaging systems, we briefly discuss the corresponding calibration techniques for both the single sensor and the multi-sensor optical measurement network, allowing good performance of the 3DIM prototype systems in terms of measurement accuracy and repeatability. Furthermore, two case studies, including the generation of a high-quality color model of movable cultural heritage and a photo booth based on body scanning, are presented to demonstrate our approach.

Yin, Yongkai; He, Dong; Liu, Zeyi; Liu, Xiaoli; Peng, Xiang

2014-05-01

238

Imaging Areas of Retinal Nonperfusion in Ischemic Branch Retinal Vein Occlusion With Swept-Source OCT Microangiography.  

PubMed

The authors present the case of a patient with a history of ischemic branch retinal vein occlusion and multimodal imaging of the retinal vasculature by fluorescein angiography (FA) and ultrahigh-speed swept-source optical coherence tomography (SS-OCT) microangiography (SS-OCT laser prototype; 1,050 nm, 100,000 A-scans/second). Multiple images across the macula were acquired (3 × 3 mm cubes in clusters of four repeated B-scans). En face images of the vasculature were generated by implementing an intensity differentiation algorithm. The retinal vasculature as well as areas of nonperfusion could be identified precisely at multiple retinal levels. Ultrahigh-speed SS-OCT microangiography provides noninvasive, three-dimensional, high-resolution images of the retinal vasculature including the capillaries. [Ophthalmic Surg Lasers Imaging Retina. 2015;46:249-252.]. PMID:25707052

Kuehlewein, Laura; An, Lin; Durbin, Mary K; Sadda, SriniVas R

2015-02-01
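The intensity differentiation step described in this record can be sketched generically: differences between repeated B-scans at the same location highlight moving scatterers (blood) while static tissue cancels, and projecting the resulting contrast over depth yields an en face angiogram. The exact algorithm in the SS-OCT prototype is not given, so this is a minimal motion-contrast sketch under stated assumptions:

```python
import numpy as np

def motion_contrast(bscans):
    """Mean absolute intensity difference between consecutive repeated
    B-scans; bscans has shape (repeats, depth, width). Static tissue
    cancels, decorrelating (flowing) voxels produce high contrast."""
    return np.mean(np.abs(np.diff(bscans, axis=0)), axis=0)

def en_face(contrast):
    """Project the contrast image over depth to get an en face profile."""
    return contrast.max(axis=0)

rng = np.random.default_rng(0)
static = np.full((4, 8, 8), 50.0)              # 4 repeats, identical tissue
flow = static.copy()
flow[:, 3, 3] += rng.normal(0, 20, size=4)     # one decorrelating "vessel" voxel
print(en_face(motion_contrast(flow))[3] > en_face(motion_contrast(static))[3])
```

In the clinical system, four repeated B-scans per cluster (as stated in the record) play the role of the `repeats` axis here; areas of nonperfusion appear as regions where the projected contrast stays near the static-tissue floor.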

239

Determining an initial image pair for fixing the scale of a 3d reconstruction from an image  

E-print Network

Determining an initial image pair for fixing the scale of a 3d reconstruction from an image … rsteffen@uni-bonn.de. Abstract. Algorithms for metric 3d reconstruction of scenes from calibrated image … of such a stable image pair is proposed. Based on this quality measure a fully automatic initialization phase

240

Reversible decorrelation method for progressive transmission of 3-D medical image.  

PubMed

In this paper, we present a new reversible decorrelation method for three-dimensional (3-D) medical images for progressive transmission. Progressive transmission of an image permits gradual improvement of image quality while the image is being displayed. When the amount of image data is very large, as with a 3-D medical image, progressive transmission plays an important role in viewing or browsing the image. The data structure presented in this paper takes account of interframe correlation as well as intraframe correlation of the 3-D image. This data structure has been termed the 3-D hierarchy embedded differential image (3-D HEDI), as it was derived from the earlier HEDI structure. Experiments were conducted to verify the performance of 3-D HEDI in terms of decorrelation efficiency as well as progressive transmission efficiency, compared with those of conventional hierarchy interpolation (HINT), two-dimensional (2-D) HEDI, and differential pulse code modulation (DPCM). Experimental results indicate that 3-D HEDI outperforms HINT, 2-D HEDI, and DPCM in both decorrelation efficiency and progressive transmission efficiency on 3-D medical images. PMID:9735902

Kim, Y S; Kim, W Y

1998-06-01

241

Retinal Image Quality during Accommodation in Adult Myopic Eyes  

PubMed Central

Purpose Reduced retinal image contrast produced by accommodative lag is implicated in myopia development. Here, we measure accommodative error and retinal image quality from wavefront aberrations in myopes and emmetropes as they perform visually demanding and naturalistic tasks. Methods Wavefront aberrations were measured in 10 emmetropic and 11 myopic adults at three distances (100, 40, and 20 cm) while performing four tasks (monocular acuity, binocular acuity, reading, and movie watching). For the acuity tasks, measurements of wavefront error were obtained near the end point of the acuity experiment. Refractive state was defined as the target vergence that optimizes image quality using a visual contrast metric (VSMTF) computed from wavefront errors. Results Accommodation was most accurate (and image quality best) during binocular acuity, whereas accommodation was least accurate (and image quality worst) while watching a movie. When viewing distance was reduced, accommodative lag increased and image quality (as quantified by VSMTF) declined for all tasks in both refractive groups. For any given viewing distance, computed image quality was consistently worse in myopes than in emmetropes, more so for the acuity tasks than for reading/movie watching. Although myopes showed greater lags and worse image quality in the acuity experiments, their acuity was not measurably worse than that of the emmetropes. Conclusions Retinal image quality when performing a visually demanding task (e.g., during clinical examination) is likely to be greater than for less demanding tasks (e.g., reading/movie watching). Although reductions in image quality lead to reductions in acuity, the image quality metric VSMTF is not necessarily an absolute indicator of visual performance, because myopes achieved slightly better acuity than emmetropes despite showing greater lags and worse image quality. 
Reduced visual contrast in myopes compared to emmetropes is consistent with theories of myopia progression that point to image contrast as an inhibitory signal for ocular growth. PMID:24152885
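
The refractive-state definition above (the target vergence that maximizes a visual contrast metric computed from wavefront error) can be sketched in simplified form. The snippet below is an illustrative, assumption-laden reduction: it derives a PSF from an optical-path-difference map by Fourier optics and scores image quality as the mean MTF, omitting the neural contrast-sensitivity weighting that distinguishes the actual VSMTF metric; the pupil grid and the 0.25 um defocus coefficient are invented for the demo.

```python
import numpy as np

def mtf_from_wavefront(opd, pupil, wavelength=0.55e-6):
    """MTF of an eye from an optical path difference map (meters).

    Simplified Fourier-optics sketch: PSF = |FFT(pupil * exp(i*2*pi*OPD/lambda))|^2,
    MTF = |FFT(PSF)| normalized to 1 at zero spatial frequency.
    """
    field = pupil * np.exp(2j * np.pi * opd / wavelength)
    psf = np.abs(np.fft.fftshift(np.fft.fft2(field))) ** 2
    otf = np.abs(np.fft.fft2(psf))
    return otf / otf.flat[0]  # DC term sits at index [0, 0]

def image_quality(opd, pupil):
    """Scalar score: mean MTF, a crude stand-in for VSMTF (which additionally
    weights the MTF by the neural contrast sensitivity function)."""
    return float(np.mean(mtf_from_wavefront(opd, pupil)))

# An aberration-free pupil scores best; adding defocus degrades the score.
n = 64
y, x = np.mgrid[-1:1:n * 1j, -1:1:n * 1j]
r2 = x ** 2 + y ** 2
pupil = (r2 <= 1.0).astype(float)
defocus = 0.25e-6 * (2 * r2 - 1) * pupil  # Zernike defocus, hypothetical 0.25 um coefficient
q_perfect = image_quality(np.zeros_like(pupil), pupil)
q_blurred = image_quality(defocus, pupil)
```

In the paper's usage, this score would be evaluated over a range of trial defocus values and the vergence giving the maximum would be taken as the refractive state.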

Sreenivasan, Vidhyapriya; Aslakson, Emily; Kornaus, Andrew; Thibos, Larry N.

2014-01-01

242

3D model search and pose estimation from single images using VIP features  

Microsoft Academic Search

This paper describes a method to efficiently search for 3D models in a city-scale database and to compute the camera poses from single query images. The proposed method matches SIFT features (from a single image) to viewpoint invariant patches (VIP) from a 3D model by warping the SIFT features approximately into the orthographic frame of the VIP features. This significantly
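
The matching step described here reduces to nearest-neighbor search over descriptor vectors once the SIFT features have been warped into the VIP frame. Below is a generic sketch of such matching with Lowe's ratio test; the toy 2-D descriptors are invented, and the actual VIP warping and city-scale 3D model indexing are not reproduced.

```python
import numpy as np

def match_descriptors(desc_a, desc_b, ratio=0.8):
    """Nearest-neighbor descriptor matching with Lowe's ratio test.

    desc_a, desc_b: (N, D) arrays of feature descriptors (e.g. 128-D SIFT).
    Returns (i, j) pairs where the best match in desc_b is sufficiently
    better than the second best, rejecting ambiguous correspondences.
    """
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)
        order = np.argsort(dists)
        best, second = order[0], order[1]
        if dists[best] < ratio * dists[second]:
            matches.append((i, int(best)))
    return matches

# Toy example: each descriptor in a has one unambiguous partner in b.
a = np.array([[1.0, 0.0], [0.0, 5.0]])
b = np.array([[0.0, 5.1], [1.0, 0.1], [9.0, 9.0]])
pairs = match_descriptors(a, b)
```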

Changchang Wu; Friedrich Fraundorfer; Jan-Michael Frahm; Marc Pollefeys

2008-01-01

243

Color Image Segmentation by Fuzzy Morphological Transformation of the 3D Color Histogram  

Microsoft Academic Search

Summary form only given. We present a color image segmentation method using fuzzy mathematical morphology operators on the 3D color histogram. Segmentation consists of detecting the different modes present in the 3D color histogram, each associated with a homogeneous region. In order to detect these modes, we show how a color image can be considered as a fuzzy set
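
Mode detection in a 3D color histogram can be prototyped with a plain local-maximum test. This is only a rough stand-in for the paper's fuzzy morphological transformation: the bin count, minimum-count threshold, and 26-neighbor comparison below are all invented for illustration.

```python
import numpy as np

def color_modes(pixels, bins=8, min_count=10):
    """Detect modes (local maxima) of the 3D color histogram of an RGB image.

    pixels: (N, 3) RGB values in [0, 256). A mode is a histogram bin whose
    count strictly exceeds all 26 neighbors (the paper refines this with
    fuzzy morphological operators, not reproduced here).
    """
    hist, _ = np.histogramdd(pixels, bins=(bins, bins, bins),
                             range=((0, 256),) * 3)
    padded = np.pad(hist, 1, mode="constant")  # zero border simplifies edges
    modes = []
    for i in range(bins):
        for j in range(bins):
            for k in range(bins):
                c = hist[i, j, k]
                if c < min_count:
                    continue
                nb = padded[i:i + 3, j:j + 3, k:k + 3].copy()
                nb[1, 1, 1] = -1  # exclude the bin itself from the comparison
                if c > nb.max():
                    modes.append((i, j, k))
    return modes

# Two well-separated color clusters should yield exactly two histogram modes.
rng = np.random.default_rng(0)
red = rng.normal([220, 30, 30], 5, size=(200, 3))
blue = rng.normal([30, 30, 220], 5, size=(200, 3))
pixels = np.clip(np.vstack([red, blue]), 0, 255)
modes = color_modes(pixels)
```

Each detected mode would then seed one homogeneous region in the segmentation.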

Aymeric Gillet; Ludovic Macaire; Claudine Botte-Lecocq; Jack-Gérard Postaire

2001-01-01

244

KIDNEY DETECTION AND REAL-TIME SEGMENTATION IN 3D CONTRAST-ENHANCED ULTRASOUND IMAGES  

E-print Network

An automatic method to segment the kidney in 3D contrast-enhanced ultrasound (CEUS) images. The kidney is automatically localized by a novel robust ellipsoid detector; then, segmentation is obtained

Cohen, Laurent

245

Multimodal rigid-body registration of 3D brain images using bilateral symmetry  

E-print Network

Popular methods for rigid-body registration of 3D images are probably those based on the maximisation of a similarity measure. We exploit the bilateral symmetry of the brain with respect to its interhemispheric fissure for intra-subject (rigid-body) monomodal registration

Paris-Sud XI, Université de

246

Automatic Detection and Segmentation of Kidneys in 3D CT Images Using Random Forests  

E-print Network

Kidney segmentation in 3D CT images allows extracting useful information for nephrologists. Kidneys are localized with random forests following a coarse-to-fine strategy. Their initial

Boyer, Edmond

247

The ASTM E57 file format for 3D imaging data exchange  

NASA Astrophysics Data System (ADS)

There is currently no general-purpose, open standard for storing data produced by three dimensional (3D) imaging systems, such as laser scanners. As a result, producers and consumers of such data rely on proprietary or ad-hoc formats to store and exchange data. There is a critical need in the 3D imaging industry for open standards that promote data interoperability among 3D imaging hardware and software systems. For the past three years, a group of volunteers has been working within the ASTM E57 Committee on 3D Imaging Systems to develop an open standard for 3D imaging system data exchange to meet this need. The E57 File Format for 3D Imaging Data Exchange (E57 format hereafter) is capable of storing point cloud data from laser scanners and other 3D imaging systems, as well as associated 2D imagery and core meta-data. This paper describes the motivation, requirements, design, and implementation of the E57 format, and highlights the technical concepts developed for the standard. We also compare the format with other proprietary or special purpose 3D imaging formats, such as the LAS format, and we discuss the open source library implementation designed to read, write, and validate E57 files.
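
The core design idea described above, self-describing XML metadata paired with compact binary point sections, can be illustrated with a toy container. To be clear, this is not the real E57 layout: the standard adds a fixed file header, paged CRC-protected binary sections, and a much richer element tree. The sketch below only mirrors the XML-plus-binary split, with all field names invented.

```python
import os
import struct
import tempfile
import numpy as np

def write_container(path, points):
    """Toy E57-like container: a length-prefixed XML header describing the
    point schema, followed by a raw binary point payload (NOT real E57)."""
    xml = ('<data3D><points recordCount="%d" fields="x y z" '
           'type="float64"/></data3D>' % len(points)).encode()
    payload = np.asarray(points, dtype="<f8").tobytes()
    with open(path, "wb") as f:
        f.write(struct.pack("<I", len(xml)))  # 4-byte XML length prefix
        f.write(xml)
        f.write(payload)

def read_container(path):
    """Read back the XML description and the (N, 3) point array."""
    with open(path, "rb") as f:
        (n_xml,) = struct.unpack("<I", f.read(4))
        xml = f.read(n_xml).decode()
        pts = np.frombuffer(f.read(), dtype="<f8").reshape(-1, 3)
    return xml, pts

tmp = os.path.join(tempfile.gettempdir(), "demo_e57like.bin")
pts = np.array([[0.0, 1.0, 2.0], [3.0, 4.0, 5.0]])
write_container(tmp, pts)
xml, back = read_container(tmp)
```

The benefit this split illustrates is the one the paper argues for: any consumer can parse the schema without decoding the bulk payload, and the payload stays compact.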

Huber, Daniel

2011-03-01

248

Multichannel ultrasound current source density imaging of a 3-D dipole field  

Microsoft Academic Search

Ultrasound Current Source Density Imaging (UCSDI) potentially improves 3-D mapping of bioelectric sources in the body at high spatial resolution, which is especially important for diagnosing and guiding treatment of cardiac and neurologic disorders, including arrhythmia and epilepsy. In this study, we report 4-D imaging of a time-varying electric dipole in saline. A 3-D dipole field was produced in

Zhaohui Wang; Ragnar Olafsson; Pier Ingram; Qian Li; Russell S. Witte

2010-01-01

249

Deformable model for 3D intramodal nonrigid breast image registration with fiducial skin markers  

E-print Network

We implemented a new approach to intramodal non-rigid 3D breast image registration.

250

Quantitative 3-D imaging topogrammetry for telemedicine applications  

NASA Technical Reports Server (NTRS)

The technology to reliably transmit high-resolution visual imagery over short to medium distances in real time has led to serious consideration of the use of telemedicine, telepresence, and telerobotics in the delivery of health care. These concepts may involve, and evolve toward: consultation from remote expert teaching centers; diagnosis; triage; real-time remote advice to the surgeon; and real-time remote surgical instrument manipulation (telerobotics with virtual reality). Further extrapolation leads to teledesign and telereplication of spare surgical parts through quantitative teleimaging of 3-D surfaces tied to CAD/CAM devices and an artificially intelligent archival database of 'normal' shapes. The ability to generate 'topograms' or 3-D surface numerical tables of coordinate values capable of creating computer-generated virtual holographic-like displays, machine part replication, and statistical diagnostic shape assessment is critical to the progression of telemedicine. Any virtual reality simulation will remain in the 'video-game' realm until realistic dimensional and spatial relational inputs from real measurements in vivo during surgeries are added to an ever-growing statistical data archive. The challenges of managing and interpreting this 3-D database, which would include radiographic and surface quantitative data, are considerable. As technology drives toward dynamic and continuous 3-D surface measurements, presenting millions of X, Y, Z data points per second of flexing, stretching, moving human organs, the knowledge base and interpretive capabilities of 'brilliant robots' to work as a surgeon's tireless assistants become imaginable. The brilliant robot would 'see' what the surgeon sees--and more, for the robot could quantify its 3-D sensing and would 'see' in a wider spectral range than humans, and could zoom its 'eyes' from the macro world to long-distance microscopy. 
Unerring robot hands could rapidly perform machine-aided suturing with precision micro-sewing machines, splice neural connections with laser welds, micro-bore through constricted vessels, and computer combine ultrasound, microradiography, and 3-D mini-borescopes to quickly assess and trace vascular problems in situ. The spatial relationships between organs, robotic arms, and end-effector diagnostic, manipulative, and surgical instruments would be constantly monitored by the robot 'brain' using inputs from its multiple 3-D quantitative 'eyes' remote sensing, as well as by contact and proximity force measuring devices. Methods to create accurate and quantitative 3-D topograms at continuous video data rates are described.

Altschuler, Bruce R.

1994-01-01

251

Radiology Lab 0: Introduction to 2D and 3D Imaging  

NSDL National Science Digital Library

This is a self-directed learning module to introduce students to basic concepts of imaging technology as well as to give students practice going between 2D and 3D imaging using everyday objects.

Shaffer, Kitt

252

Estimating Density Gradients and Drivers from 3D Ionospheric Imaging  

NASA Astrophysics Data System (ADS)

The transition regions at the edges of the ionospheric storm-enhanced density (SED) are important for a detailed understanding of the mid-latitude physical processes occurring during major magnetic storms. At the boundary, the density gradients are evidence of the drivers that link the larger processes of the SED, with its connection to the plasmasphere and prompt-penetration electric fields, to the smaller irregularities that result in scintillations. For this reason, we present our estimates of both the plasma variation with horizontal and vertical spatial scale of 10 - 100 km and the plasma motion within and along the edges of the SED. To estimate the density gradients, we use Ionospheric Data Assimilation Four-Dimensional (IDA4D), a mature data assimilation algorithm that has been developed over several years and applied to investigations of polar cap patches and space weather storms [Bust and Crowley, 2007; Bust et al., 2007]. We use the density specification produced by IDA4D with a new tool for deducing ionospheric drivers from 3D time-evolving electron density maps, called Estimating Model Parameters from Ionospheric Reverse Engineering (EMPIRE). The EMPIRE technique has been tested on simulated data from TIMEGCM-ASPEN and on IDA4D-based density estimates with ongoing validation from Arecibo ISR measurements [Datta-Barua et al., 2009a; 2009b]. We investigate the SED that formed during the geomagnetic super storm of November 20, 2003. We run IDA4D at low-resolution continent-wide, and then re-run it at high (~10 km horizontal and ~5-20 km vertical) resolution locally along the boundary of the SED, where density gradients are expected to be highest. We input the high-resolution estimates of electron density to EMPIRE to estimate the ExB drifts and field-aligned plasma velocities along the boundaries of the SED. We expect that these drivers contribute to the density structuring observed along the SED during the storm. Bust, G. S. and G. 
Crowley (2007), Tracking of polar cap patches using data assimilation, J. Geophys. Res., 112, A05307, doi:10.1029/2005JA011597. Bust, G. S., G. Crowley, T. W. Garner, T. L. Gaussiran II, R. W. Meggs, C. N. Mitchell, P. S. J. Spencer, P. Yin, and B. Zapfe (2007), Four Dimensional GPS Imaging of Space-Weather Storms, Space Weather, 5, S02003, doi:10.1029/2006SW000237. Datta-Barua, S., G. S. Bust, G. Crowley, and N. Curtis (2009a), Neutral wind estimation from 4-D ionospheric electron density images, J. Geophys. Res., 114, A06317, doi:10.1029/2008JA014004. Datta-Barua, S., G. Bust, and G. Crowley (2009b), Estimating Model Parameters from Ionospheric Reverse Engineering (EMPIRE), presented at CEDAR, Santa Fe, New Mexico, July 1.
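
The gradient-estimation step on a gridded density field reduces to finite differences; the sketch below shows only that step, on a toy grid with assumed kilometer spacing. The IDA4D assimilation and the EMPIRE driver inversion themselves are far beyond this.

```python
import numpy as np

def density_gradients(ne, spacing):
    """Finite-difference gradients of a gridded 3D electron density field.

    ne: (nx, ny, nz) electron density; spacing: (dx, dy, dz) grid steps
    (e.g. km). Steep horizontal gradients flag the SED boundary.
    """
    gx, gy, gz = np.gradient(ne, *spacing)
    return gx, gy, gz

# Toy field: density ramps linearly along x, so the x-gradient is constant
# and the y- and z-gradients vanish.
ne = np.tile(np.arange(5.0)[:, None, None] * 2.0, (1, 4, 3))
gx, gy, gz = density_gradients(ne, (10.0, 10.0, 5.0))
```

In practice one would threshold the horizontal gradient magnitude to localize the SED boundary before handing the high-resolution densities to the driver-estimation stage.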

Datta-Barua, S.; Bust, G. S.; Curtis, N.; Reynolds, A.; Crowley, G.

2009-12-01

253

Painting style transfer and 3D interaction by model-based image synthesis  

Microsoft Academic Search

This paper presents a model-based image synthesis framework whose goal is to immerse a 3D object into a 2D model painting. Using image synthesis and color transfer techniques, the style of the model painting is transferred to the 3D image. The color transfer is accomplished automatically using basic color categories and statistical analysis of the model painting and the projected image.

Tian-Ding Chen

2008-01-01

254

Statistical skull models from 3D X-ray images  

E-print Network

We present two statistical models of the skull and mandible built upon an elastic registration method for 3D meshes. The aim of this work is to relate the degrees of freedom of skull anatomy, as static relations are of main interest for anthropology and legal medicine. Statistical models can effectively provide reconstructions together with statistical precision. In our applications, patient-specific meshes of the skull and the mandible are high-density meshes extracted from 3D CT scans. All our patient-specific meshes are registered in a subject-shared reference system using our 3D-to-3D elastic matching algorithm. Registration is based upon the minimization of a distance between the high-density mesh and a shared low-density mesh, defined on the vertices, in a multi-resolution approach. A principal component analysis is performed on the normalised registered data to build a statistical linear model of the skull and mandible shape variation. The accuracy of the reconstruction is under the millimetre in the shape...
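
Once the meshes share a reference system and vertex correspondence, the statistical linear model is a principal component analysis of stacked vertex coordinates. A minimal sketch, with a toy four-vertex "skull" varying only by scale (the elastic registration that makes the vertices comparable is assumed to have already happened):

```python
import numpy as np

def build_shape_model(meshes):
    """PCA of registered mesh vertices.

    meshes: (n_subjects, n_vertices, 3) coordinates, already registered to a
    shared reference. Returns the mean shape, the principal modes (rows of
    Vt), and the per-mode variances.
    """
    n, v, _ = meshes.shape
    X = meshes.reshape(n, v * 3)
    mean = X.mean(axis=0)
    U, s, Vt = np.linalg.svd(X - mean, full_matrices=False)
    variances = s ** 2 / (n - 1)
    return mean.reshape(v, 3), Vt, variances

def synthesize(mean, modes, coeffs):
    """Reconstruct a shape as mean + sum_k c_k * mode_k."""
    v = mean.shape[0]
    return mean + (coeffs @ modes[: len(coeffs)]).reshape(v, 3)

# Toy data: shapes differing only by uniform scale -> one dominant mode.
base = np.array([[0.0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]])
meshes = np.stack([s * base for s in (0.9, 1.0, 1.1, 1.2)])
mean, modes, var = build_shape_model(meshes)
```

Reconstruction accuracy of the kind the abstract reports would be measured by projecting a held-out mesh onto the leading modes and comparing vertex positions.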

Berar, Maxime; Desvignes, Michel; Bailly, Gérard; Payan, Yohan

2006-01-01

255

3-D Image Reconstruction in Optical Tomography  

Microsoft Academic Search

The Problem: Optical tomography (OT), as a potential diagnostic tool for detecting growths in translucent soft tissue, has been proposed and studied by several groups in recent years. Its principle is to use multiple movable light sources and detectors attached to the tissue surface to collect information on light attenuation, and to reconstruct the internal 3-D absorption and scattering distributions.

Xiaochun Yang; Berthold K. P. Horn

256

Statistical skull models from 3D X-ray images  

Microsoft Academic Search

We present 2 statistical models of the skull and mandible built upon an elastic registration method of 3D meshes. The aim of this work is to relate degrees of freedom of skull anatomy, as static relations are of main interest for anthropology and legal medicine. Statistical models can effectively provide reconstructions together with statistical precision. In our applications, patient-specific meshes

Maxime Berar; Michel Desvignes; Gérard Bailly; Yohan Payan

2006-01-01

257

Adaptive optics with pupil tracking for high resolution retinal imaging  

PubMed Central

Adaptive optics, when integrated into retinal imaging systems, compensates for rapidly changing ocular aberrations in real time and results in improved high resolution images that reveal the photoreceptor mosaic. Imaging the retina at high resolution has numerous potential medical applications, and yet for the development of commercial products that can be used in the clinic, the complexity and high cost of the present research systems have to be addressed. We present a new method to control the deformable mirror in real time based on pupil tracking measurements which uses the default camera for the alignment of the eye in the retinal imaging system and requires no extra cost or hardware. We also present the first experiments done with a compact adaptive optics flood illumination fundus camera where it was possible to compensate for the higher order aberrations of a moving model eye and in vivo in real time based on pupil tracking measurements, without the real time contribution of a wavefront sensor. As an outcome of this research, we showed that pupil tracking can be effectively used as a low cost and practical adaptive optics tool for high resolution retinal imaging because eye movements constitute an important part of the ocular wavefront dynamics. PMID:22312577

Sahin, Betul; Lamory, Barbara; Levecq, Xavier; Harms, Fabrice; Dainty, Chris

2012-01-01

258

Robust Retrieval of 3D Structures from Image Stacks  

E-print Network

to analyze image stacks under the piecewise constant image structure model. To reduce the effect of bias, the algorithm first determines the most reliable seed regions, which are then used in a region-growing procedure. Keywords: image segmentation, robust statistics, magnetic resonance images, confocal microscope images.

259

Advances in Image Pre-Processing to Improve Automated 3d Reconstruction  

NASA Astrophysics Data System (ADS)

Tools and algorithms for automated image processing and 3D reconstruction have become more and more available, giving the possibility to process any dataset of unoriented and markerless images. Typically, dense 3D point clouds (or textured 3D polygonal models) are produced in reasonable processing time. In this paper, we evaluate how the radiometric pre-processing of image datasets (particularly in RAW format) can help improve the performance of state-of-the-art automated image processing tools. Besides a review of common pre-processing methods, an efficient pipeline based on color enhancement, image denoising, RGB-to-gray conversion, and image content enrichment is presented. The tests performed, partly reported for the sake of space, demonstrate how effective image pre-processing, which considers the entire dataset under analysis, can improve the automated orientation procedure and dense 3D point cloud reconstruction, even in poor-texture scenarios.
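
A pipeline of this kind is just a sequence of per-image transforms. The sketch below is a heavily reduced stand-in: the paper's color enhancement, RAW handling, and content enrichment are replaced by a 3x3 box-filter denoise and a luminance-preserving gray conversion, so only the pipeline shape is faithful.

```python
import numpy as np

def preprocess(rgb):
    """Minimal denoise + RGB-to-gray pipeline.

    rgb: (H, W, 3) float image in [0, 1]. Denoising is a 3x3 box filter
    (edges handled by edge padding); gray conversion uses Rec. 601
    luminance weights.
    """
    padded = np.pad(rgb, ((1, 1), (1, 1), (0, 0)), mode="edge")
    den = np.zeros_like(rgb)
    for dy in range(3):            # accumulate the 9 shifted copies
        for dx in range(3):
            den += padded[dy:dy + rgb.shape[0], dx:dx + rgb.shape[1]]
    den /= 9.0
    gray = den @ np.array([0.299, 0.587, 0.114])  # weights sum to 1
    return gray

rng = np.random.default_rng(1)
img = np.clip(0.5 + 0.1 * rng.standard_normal((32, 32, 3)), 0, 1)
gray = preprocess(img)
```

In the paper's setting, the resulting gray images would then be fed to the feature-extraction and orientation stages of the reconstruction tool.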

Ballabeni, A.; Apollonio, F. I.; Gaiani, M.; Remondino, F.

2015-02-01

260

Computer-assisted 3D design software for teaching neuro-ophthalmology of the oculomotor system and training new retinal surgery techniques  

NASA Astrophysics Data System (ADS)

Purpose: To create a more effective method of demonstrating complex subject matter in ophthalmology with the use of high-end 3-D computer-aided animation and interactive multimedia technologies. Specifically, to explore the possibilities of demonstrating the complex nature of the neuro-ophthalmological basics of the human oculomotor system in a clear and unambiguous way, and to demonstrate new forms of retinal surgery in a manner that makes the procedures easier to understand for other retinal surgeons. Methods and Materials: Using Reflektions 4.3, Monzoom Pro 4.5, Cinema 4D XL 5.03, Cinema 4D XL 8 Studio Bundle, Mediator 4.0, Mediator Pro 5.03, Fujitsu-Siemens Pentium III and IV, Gericom Webgine laptop, M.G.I. Video Wave 1.0 and 5, Micrografix Picture Publisher 6.0 and 8, Amorphium 1.0, and Blobs for Windows, we created 3-D animations showing the origin, insertion, course, main direction of pull, and auxiliary direction of pull of the six extra-ocular eye muscles. We created 3-D animations that (a) show the intra-cranial path of the relevant oculomotor cranial nerves and which muscles are supplied by them, (b) show which muscles are active in each of the ten lines of sight, (c) demonstrate the various malfunctions of oculomotor systems, as well as (d) show the surgical techniques and the challenges in radial optic neurotomies and subretinal surgeries. Most of the 3-D animations were integrated into interactive multimedia teaching programs. Their effectiveness was compared to conventional teaching methods in a comparative study performed at the University of Vienna. We also performed a survey to examine the response of students being taught with the interactive programs. We are currently in the process of placing most of the animations in an interactive website in order to make them freely available to everyone who is interested. 
Results: Although learning how to use complex 3-D computer animation and multimedia authoring software can be very time consuming and frustrating, we found that once the programs are mastered they can be used to create 3-D animations that drastically improve the quality of medical demonstrations. The comparative study showed a significant advantage of using these technologies over conventional teaching methods. The feedback from medical students, doctors, and retinal surgeons was overwhelmingly positive. A strong interest was expressed to have more subjects and techniques demonstrated in this fashion. Conclusion: 3-D computer technologies should be used in the demonstration of all complex medical subjects. More effort and resources need to be given to the development of these technologies that can improve the understanding of medicine for students, doctors, and patients alike.

Glittenberg, Carl; Binder, Susanne

2004-07-01

261

3D imaging of cone photoreceptors over extended time periods using optical coherence tomography with adaptive optics  

NASA Astrophysics Data System (ADS)

Optical coherence tomography with adaptive optics (AO-OCT) is a highly sensitive, noninvasive method for 3D imaging of the microscopic retina. The purpose of this study is to advance AO-OCT technology by enabling repeated imaging of cone photoreceptors over extended periods of time (days). This sort of longitudinal imaging permits monitoring of 3D cone dynamics in both normal and diseased eyes, in particular the physiological processes of disc renewal and phagocytosis, which are disrupted by retinal diseases such as age-related macular degeneration and retinitis pigmentosa. For this study, the existing AO-OCT system at Indiana underwent several major hardware and software improvements to optimize system performance for 4D cone imaging. First, ultrahigh speed imaging was realized using a Basler Sprint camera. Second, a light source with adjustable spectrum was realized by integration of an Integral laser (Femto Lasers, λc = 800 nm, Δλ = 160 nm) and spectral filters in the source arm. For cone imaging, we used a bandpass filter with λc = 809 nm and Δλ = 81 nm (2.6 μm nominal axial resolution in tissue, and 167 kHz A-line rate using 1,408 px), which reduced the impact of eye motion compared to previous AO-OCT implementations. Third, eye motion artifacts were further reduced by custom ImageJ plugins that registered (axially and laterally) the volume videos. In two subjects, cone photoreceptors were imaged and tracked over a ten day period and their reflectance and outer segment (OS) lengths measured. High-speed imaging and image registration/dewarping were found to reduce eye motion to a fraction of a cone width (1 μm root mean square). The pattern of reflections in the cones was found to change dramatically and occurred on a spatial scale well below the resolution of clinical instruments. 
Normalized reflectance of connecting cilia (CC) and OS posterior tip (PT) of an exemplary cone was 54+/-4, 47+/-4, 48+/-6, 50+/-5, 56+/-1% and 46+/-4, 53+/-4, 52+/-6, 50+/-5, 44+/-1% for days #1,3,6,8,10 respectively. OS length of the same cone was 28.9, 26.4, 26.4, 30.6, and 28.1 μm for days #1,3,6,8,10 respectively. It is plausible these changes are an optical correlate of the natural process of OS renewal and shedding.
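
The lateral registration step described above can be prototyped with FFT phase correlation, a standard way to recover integer translations between frames. This is a generic stand-in for the custom ImageJ plugins the study used (which also handled axial registration and dewarping); the test image and shifts are synthetic.

```python
import numpy as np

def phase_correlation_shift(ref, moving):
    """Integer (dy, dx) translation between two frames via phase correlation.

    Returns the shift to pass to np.roll to align `moving` back onto `ref`.
    """
    F = np.fft.fft2(ref) * np.conj(np.fft.fft2(moving))
    F /= np.abs(F) + 1e-12              # keep phase only (cross-power spectrum)
    corr = np.abs(np.fft.ifft2(F))
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Map peak indices in [0, N) to signed shifts in (-N/2, N/2]
    return tuple(int(p) if p <= s // 2 else int(p) - s
                 for p, s in zip(peak, corr.shape))

rng = np.random.default_rng(2)
ref = rng.random((64, 64))
moving = np.roll(ref, (3, -5), axis=(0, 1))  # simulate eye motion
shift = phase_correlation_shift(ref, moving)
```

Applying the recovered shift (`np.roll(moving, shift, axis=(0, 1))`) restores the reference frame, which is the per-frame stabilization that makes day-to-day cone tracking possible.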

Kocaoglu, Omer P.; Lee, Sangyeol; Jonnal, Ravi S.; Wang, Qiang; Herde, Ashley E.; Besecker, Jason; Gao, Weihua; Miller, Donald T.

2011-03-01

262

ICER-3D: A Progressive Wavelet-Based Compressor for Hyperspectral Images  

NASA Technical Reports Server (NTRS)

ICER-3D is a progressive, wavelet-based compressor for hyperspectral images. ICER-3D is derived from the ICER image compressor. ICER-3D can provide lossless and lossy compression, and incorporates an error-containment scheme to limit the effects of data loss during transmission. The three-dimensional wavelet decomposition structure used by ICER-3D exploits correlations in all three dimensions of hyperspectral data sets, while facilitating elimination of spectral ringing artifacts. Correlation is further exploited by a context modeler that effectively exploits spectral dependencies in the wavelet-transformed hyperspectral data. Performance results illustrating the benefits of these features are presented.
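
The three-dimensional wavelet decomposition at the heart of the compressor can be illustrated with one level of a separable 3D Haar transform. ICER-3D's actual filters and its non-uniform decomposition structure (chosen to suppress spectral ringing) differ; this sketch only shows how a separable 3D transform compacts the energy of a correlated cube.

```python
import numpy as np

def haar3d(cube):
    """One level of a separable, orthonormal 3D Haar transform.

    cube: float array with even side lengths. Applies the lowpass/highpass
    split along each of the three axes in turn; averages end up in the low
    corner of the output, details elsewhere. Energy is preserved.
    """
    out = cube.astype(float)
    for axis in range(3):
        a = np.moveaxis(out, axis, 0)
        lo = (a[0::2] + a[1::2]) / np.sqrt(2)   # pairwise averages
        hi = (a[0::2] - a[1::2]) / np.sqrt(2)   # pairwise differences
        out = np.moveaxis(np.concatenate([lo, hi]), 0, axis)
    return out

# A constant cube compacts all energy into the low-low-low subband.
cube = np.ones((4, 4, 4))
coeffs = haar3d(cube)
```

The compaction is what makes subsequent entropy coding effective: for smooth hyperspectral data, most detail coefficients are near zero and code cheaply.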

Kiely, A.; Klimesh, M.; Xie, H.; Aranki, N.

2005-01-01

263

Segmentation provides a means of visualizing and measuring specific retinal layers in 3D spectral domain (SD-  

E-print Network

Segmentation would provide clinicians with easy access to new visualizations and potential clinical variables. Differences between adjacent pixels were computed to weight the lattice. Small magnitude eigenvectors

Miller, Gary L.

264

3D pulmonary airway color image reconstruction via shape from shading and virtual bronchoscopy imaging techniques  

NASA Astrophysics Data System (ADS)

The dependence on macro-optical imaging of the human body in the assessment of possible disease is rapidly increasing concurrent with, and as a direct result of, advancements made in medical imaging technologies. Assessing the pulmonary airways through bronchoscopy is performed extensively in clinical practice; however, it remains highly subjective due to limited visualization techniques and the lack of quantitative analyses. The representation of 3D structures in 2D visualization modes, although providing an insight into the structural content of the scene, may in fact skew the perception of the structural form. We have developed two methods for visualizing the optically derived airway mucosal features whilst preserving the structural scene integrity. Shape from shading (SFS) techniques can be used to extract 3D structural information from 2D optical images. The SFS technique presented addresses many limitations previously encountered in conventional techniques, resulting in high-resolution 3D color images. The second method presented to combine both color and structural information relies on combined CT and bronchoscopy imaging modalities. External imaging techniques such as CT provide a means of determining the gross structural anatomy of the pulmonary airways; however, they lack the important optically derived mucosal color. Virtual bronchoscopy is used to provide a direct link between the CT-derived structural anatomy and the macro-optically derived mucosal color. Through utilization of a virtual and true bronchoscopy matching technique, we are able to directly extract combined structurally sound 3D color segments of the pulmonary airways. Various pulmonary airway diseases are assessed and the resulting combined color and texture results are presented, demonstrating the effectiveness of the presented techniques.

Suter, Melissa; Reinhardt, Joseph M.; Hoffman, Eric A.; McLennan, Geoffrey

2005-04-01

265

Multiview image coding scheme transformations: artifact characteristics and effects on perceived 3D quality  

NASA Astrophysics Data System (ADS)

Autostereoscopic multiview 3D displays have been available for a number of years, capable of producing a perception of depth in a 3D image without requiring user-worn glasses. Different approaches to compressing these 3D images exist. Two compression schemes, and how they affect the 3D image with respect to induced distortion, are investigated in this paper: JPEG 2000 and H.264/AVC. The investigation is conducted in three parts: objective measurement, qualitative subjective evaluation, and a quantitative user test. The objective measurement shows that the rate-distortion (RD) characteristics of the two compression schemes differ in character as well as in level of PSNR. The qualitative evaluation is performed at bitrates where the two schemes have the same RD fraction, and a number of distortion characteristics are found to be significantly different. However, the quantitative evaluation, performed using 14 non-expert viewers, indicates that the different distortion types do not significantly contribute to the overall perceived 3D quality. The bitrate used and the content of the original 3D image are the two factors that most significantly affect the perceived 3D image quality. In addition, the evaluation results suggest that viewers prefer less apparent depth and motion parallax when exposed to compressed 3D images on an autostereoscopic multiview display.
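
The objective RD measurement rests on PSNR between the original and the coded image. A minimal implementation, assuming 8-bit imagery (peak value 255):

```python
import numpy as np

def psnr(reference, distorted, peak=255.0):
    """Peak signal-to-noise ratio in dB between two images of equal shape."""
    mse = np.mean((reference.astype(float) - distorted.astype(float)) ** 2)
    if mse == 0:
        return float("inf")   # identical images
    return 10 * np.log10(peak ** 2 / mse)

# A uniform error of 16 gray levels gives MSE = 256, i.e. about 24 dB.
ref = np.zeros((8, 8))
dist = ref + 16.0
value = psnr(ref, dist)
```

An RD curve of the kind compared in the paper is then just PSNR plotted against the coder's output bitrate over a sweep of quality settings.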

Olsson, Roger; Sjöström, Mårten

2010-02-01

266

Automated retinal image analysis for diabetic retinopathy in telemedicine.  

PubMed

There will be an estimated 552 million persons with diabetes globally by the year 2030. Over half of these individuals will develop diabetic retinopathy, representing a nearly insurmountable burden for providing diabetes eye care. Telemedicine programmes have the capability to distribute quality eye care to virtually any location and address the lack of access to ophthalmic services. In most programmes, there is currently a heavy reliance on specially trained retinal image graders, a resource in short supply worldwide. These factors necessitate an image grading automation process to increase the speed of retinal image evaluation while maintaining accuracy and cost effectiveness. Several automatic retinal image analysis systems designed for use in telemedicine have recently become commercially available. Such systems have the potential to substantially improve the manner by which diabetes eye care is delivered by providing automated real-time evaluation to expedite diagnosis and referral if required. Furthermore, integration with electronic medical records may allow a more accurate prognostication for individual patients and may provide predictive modelling of medical risk factors based on broad population data. PMID:25697773

Sim, Dawn A; Keane, Pearse A; Tufail, Adnan; Egan, Catherine A; Aiello, Lloyd Paul; Silva, Paolo S

2015-03-01

267

Imaging of human differentiated 3D neural aggregates using light sheet fluorescence microscopy  

PubMed Central

The development of three-dimensional (3D) cell cultures represents a major step toward a better understanding of cell behavior and disease in a more natural-like environment, providing not only single but multiple cell-type interactions in a complex 3D matrix that highly resembles physiological conditions. Light sheet fluorescence microscopy (LSFM) is becoming an excellent tool for fast imaging of such 3D biological structures. We demonstrate the potential of this technique for the imaging of human differentiated 3D neural aggregates in fixed and live samples, namely calcium imaging and cell death processes, showing the power of this imaging modality compared with traditional microscopy. The combination of light sheet microscopy and 3D neural cultures will open the door to more challenging experiments involving drug testing at large scale, as well as a better understanding of relevant biological processes in a more realistic environment. PMID:25161607

Gualda, Emilio J.; Simão, Daniel; Pinto, Catarina; Alves, Paula M.; Brito, Catarina

2014-01-01

268

Remote laboratory for phase-aided 3D microscopic imaging and metrology  

NASA Astrophysics Data System (ADS)

In this paper, the establishment of a remote laboratory for phase-aided 3D microscopic imaging and metrology is presented. The proposed remote laboratory consists of three major components: the network-based infrastructure for remote control and data management, the identity verification scheme for user authentication and management, and the local experimental system for phase-aided 3D microscopic imaging and metrology. Virtual network computing (VNC) is used to remotely control the 3D microscopic imaging system. Data storage and management are handled through the open-source project eSciDoc. For the security of the remote laboratory, fingerprints are used for authentication with an optical joint transform correlation (JTC) system. The phase-aided fringe projection 3D microscope (FP-3DM), which can be remotely controlled, is employed to achieve the 3D imaging and metrology of micro-objects.

Wang, Meng; Yin, Yongkai; Liu, Zeyi; He, Wenqi; Li, Boqun; Peng, Xiang

2014-05-01

269

Comparison of super-resolution algorithms applied to retinal images  

NASA Astrophysics Data System (ADS)

A critical challenge in biomedical imaging is to optimally balance the trade-off among image resolution, signal-to-noise ratio, and acquisition time. Acquiring a high-resolution image is possible; however, it is either expensive or time-consuming or both. Resolution is also limited by the physical properties of the imaging device, such as the nature and size of the input source radiation and the optics of the device. Super-resolution (SR), which is an off-line approach for improving the resolution of an image, is free of these trade-offs. Several methodologies, such as interpolation, frequency domain, regularization, and learning-based approaches, have been developed over the past several years for SR of natural images. We review some of these methods, demonstrate the positive impact expected from SR of retinal images, and investigate the performance of various SR techniques. We use a fundus image as an example for simulations.
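
One of the classical reconstruction-based SR families reviewed in such surveys is iterative back-projection: refine a high-resolution estimate until its simulated low-resolution version matches the observation. The sketch below assumes the simplest possible forward model (average pooling) and a nearest-neighbor upsampler, both invented for illustration.

```python
import numpy as np

def downsample(img, f):
    """Average-pool by factor f (the assumed forward imaging model)."""
    h, w = img.shape[0] // f, img.shape[1] // f
    return img[:h * f, :w * f].reshape(h, f, w, f).mean(axis=(1, 3))

def upsample(img, f):
    """Nearest-neighbor upsample by factor f."""
    return np.repeat(np.repeat(img, f, axis=0), f, axis=1)

def iterative_backprojection(low, f=2, n_iter=25):
    """Iterative back-projection SR: push the residual between the observed
    low-resolution image and the re-degraded estimate back into the
    high-resolution estimate until they agree."""
    high = upsample(low, f)
    for _ in range(n_iter):
        err = low - downsample(high, f)
        high = high + upsample(err, f)
    return high

# The reconstruction is consistent: re-degrading it reproduces the input.
low = np.array([[0.0, 1.0], [2.0, 3.0]])
high = iterative_backprojection(low, f=2)
```

With multiple shifted low-resolution fundus frames (the realistic SR setting), the same loop would accumulate back-projected residuals from each frame after motion compensation.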

Thapa, Damber; Raahemifar, Kaamran; Bobier, William R.; Lakshminarayanan, Vasudevan

2014-05-01

270

3D-Holoscopic Imaging: A New Dimension to Enhance Imaging in Minimally Invasive Therapy in Urologic Oncology  

PubMed Central

Abstract Background and Purpose Existing imaging modalities of urologic pathology are limited by three-dimensional (3D) representation on a two-dimensional screen. We present 3D-holoscopic imaging as a novel method of representing Digital Imaging and Communications in Medicine (DICOM) data taken from CT and MRI to produce 3D-holographic representations of anatomy without special eyewear in natural light. 3D-holoscopic technology produces images that are true optical models. This technology is based on physical principles with duplication of light fields. The 3D content is captured in real time, and the content is viewed by multiple viewers independently of their position, without 3D eyewear. Methods We display 3D-holoscopic anatomy relevant to minimally invasive urologic surgery without the need for 3D eyewear. Results The results have demonstrated that medical 3D-holoscopic content can be displayed on a commercially available multiview autostereoscopic display. Conclusion The next step is validation studies comparing 3D-holoscopic imaging with conventional imaging. PMID:23216303

Aggoun, Amar; Swash, Mohammad; Grange, Philippe C.R.; Challacombe, Benjamin; Dasgupta, Prokar

2013-01-01

271

3D Chemical and Elemental Imaging by STXM Spectrotomography  

Microsoft Academic Search

Spectrotomography based on the scanning transmission x-ray microscope (STXM) at the 10ID-1 spectromicroscopy beamline of the Canadian Light Source was used to study two selected unicellular microorganisms. Spatial distributions of sulphur globules, calcium, protein, and polysaccharide in sulphur-metabolizing bacteria (Allochromatium vinosum) were determined at the S 2p, C 1s, and Ca 2p edges. 3D chemical mapping showed that the sulphur

J. Wang; A. P. Hitchcock; C. Karunakaran; A. Prange; B. Franz; T. Harkness; Y. Lu; M. Obst; J. Hormes

2011-01-01

272

Motion compensated frequency modulated continuous wave 3D coherent imaging ladar with scannerless architecture.  

PubMed

A principal difficulty of long dwell coherent imaging ladar is its extreme sensitivity to target or platform motion. This paper describes a motion compensated frequency modulated continuous wave 3D coherent imaging ladar method that overcomes this motion sensitivity, making it possible to work with nonstatic targets such as human faces, as well as imaging of targets through refractive turbulence. Key features of this method include scannerless imaging and high range resolution. The reduced motion sensitivity is shown with mathematical analysis and demonstration 3D images. Images of static and dynamic targets are provided, demonstrating up to 600 × 800 pixel imaging with millimeter range resolution. PMID:23262614
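For an FMCW ladar, range resolution follows directly from the chirp bandwidth, ΔR = c / (2B). A quick check of the bandwidth implied by millimeter-scale resolution; the 150 GHz figure is our illustrative input, not a value from the paper:

```python
C = 299_792_458.0  # speed of light, m/s

def range_resolution(bandwidth_hz):
    """FMCW range resolution: dR = c / (2B)."""
    return C / (2.0 * bandwidth_hz)

# ~150 GHz of optical chirp bandwidth corresponds to ~1 mm resolution
print(round(range_resolution(150e9) * 1e3, 3))  # 0.999 (mm)
```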

Krause, Brian W; Tiemann, Bruce G; Gatt, Philip

2012-12-20

273

In vivo imaging of the retinal pigment epithelial cells  

NASA Astrophysics Data System (ADS)

The retinal pigment epithelial (RPE) cells form an important layer of the retina because they are responsible for providing metabolic support to the photoreceptors. Techniques to image the RPE layer include autofluorescence imaging with a scanning laser ophthalmoscope (SLO). However, previous studies were unable to resolve single RPE cells in vivo. This thesis describes the technique of combining autofluorescence, SLO, adaptive optics (AO), and dual-wavelength simultaneous imaging and registration to visualize the individual cells in the RPE mosaic in human and primate retina for the first time in vivo. After imaging the RPE mosaic non-invasively, the cell layer's structure and regularity were characterized using quantitative metrics of cell density, spacing, and nearest neighbor distances. The RPE mosaic was compared to the cone mosaic, and RPE imaging methods were confirmed using histology. The ability to image the RPE mosaic led to the discovery of a novel retinal change following light exposure; 568 nm exposures caused an immediate reduction in autofluorescence followed by either full recovery or permanent damage in the RPE layer. A safety study was conducted to determine the range of exposure irradiances that caused permanent damage or transient autofluorescence reductions. Additionally, the threshold exposure causing autofluorescence reduction was determined and reciprocity of radiant exposure was confirmed. Light exposures delivered by the AOSLO were not significantly different from those delivered by a uniform source. As all exposures tested were near or below the permissible light levels of safety standards, this thesis provides evidence that the current light safety standards need to be revised. Finally, with the retinal damage and autofluorescence reduction thresholds identified, the methods of RPE imaging were modified to allow successful imaging of the individual cells in the RPE mosaic while still ensuring retinal safety.
This thesis has provided a highly sensitive method for studying the in vivo morphology of individual RPE cells in normal, diseased, and damaged retinas. The methods presented here also will allow longitudinal studies for tracking disease progression and assessing treatment efficacy in human patients and animal models of retinal diseases affecting the RPE.

Morgan, Jessica Ijams Wolfing

274

Computation of optimized arrays for 3-D electrical imaging surveys  

NASA Astrophysics Data System (ADS)

3-D electrical resistivity surveys and inversion models are required to accurately resolve structures in areas with very complex geology where 2-D models might suffer from artefacts. Many 3-D surveys use a grid where the number of electrodes along one direction (x) is much greater than in the perpendicular direction (y). Frequently, due to limitations in the number of independent electrodes in the multi-electrode system, the surveys use a roll-along system with a small number of parallel survey lines aligned along the x-direction. The `Compare R' array optimization method previously used for 2-D surveys is adapted for such 3-D surveys. Offset versions of the inline arrays used in 2-D surveys are included in the number of possible arrays (the comprehensive data set) to improve the sensitivity to structures in between the lines. The array geometric factor and its relative error are used to filter out potentially unstable arrays in the construction of the comprehensive data set. Comparisons of the conventional (consisting of dipole-dipole and Wenner-Schlumberger arrays) and optimized arrays are made using a synthetic model and experimental measurements in a tank. The tests show that structures located between the lines are better resolved with the optimized arrays. The optimized arrays also have significantly better depth resolution compared to the conventional arrays.
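The geometric-factor filter mentioned above can be made concrete: for surface electrodes A, B (current) and M, N (potential) on a homogeneous half-space, k = 2π / (1/AM − 1/BM − 1/AN + 1/BN), and configurations with very large |k| are rejected because they amplify voltage-measurement noise. A sketch; the cutoff value is an illustrative assumption:

```python
import numpy as np

def geometric_factor(a, b, m, n):
    """Geometric factor k of a surface four-electrode array,
    k = 2*pi / (1/AM - 1/BM - 1/AN + 1/BN), positions as 3D points."""
    d = lambda p, q: np.linalg.norm(np.asarray(p, float) - np.asarray(q, float))
    return 2 * np.pi / (1 / d(a, m) - 1 / d(b, m) - 1 / d(a, n) + 1 / d(b, n))

def stable(a, b, m, n, k_max=5000.0):
    """Filter rule in the spirit of the paper: reject arrays whose |k|
    exceeds a cutoff before they enter the comprehensive data set."""
    return abs(geometric_factor(a, b, m, n)) <= k_max

# Wenner array with spacing a = 1 m: k = 2*pi*a
k = geometric_factor((0, 0, 0), (3, 0, 0), (1, 0, 0), (2, 0, 0))
print(round(k, 4))  # 6.2832
```

The same function evaluates offset (cross-line) variants of inline arrays, which is how the 3-D comprehensive data set is built.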

Loke, M. H.; Wilkinson, P. B.; Uhlemann, S. S.; Chambers, J. E.; Oxby, L. S.

2014-12-01

275

Computer-aided diagnostic detection system of venous beading in retinal images  

E-print Network

…in retinal images provides an early sign of diabetic retinopathy and plays an important role in the diagnosis of retinal diseases such as diabetic retinopathy, hypertension, and various vascular disorders. Among the types of retinal abnormalities, venous beading is an effective indicator of diabetic retinopathy.

Chang, Chein-I

276

3D current source density imaging based on acoustoelectric effect: a simulation study using unipolar pulses  

PubMed Central

It is of importance to image the electrical activity and properties of biological tissues. Recently, a hybrid imaging modality combining ultrasound scanning and source imaging through the acousto-electric (AE) effect has generated considerable interest. Such a modality has the potential to provide high-spatial-resolution current density imaging by utilizing the pressure-induced AE resistivity change confined to the ultrasound focus. In this study, we investigate a novel 3-dimensional (3D) ultrasound current source density imaging (UCSDI) approach using unipolar ultrasound pulses. Utilizing specially designed unipolar ultrasound pulses and combining AE signals associated with the local resistivity changes at the focal point, we are able to reconstruct the 3D current density distribution from the boundary voltage measurements obtained while performing a 3D ultrasound scan. We have shown in computer simulation that, using the present method, it is feasible to image with high spatial resolution an arbitrary 3D current density distribution in an inhomogeneous conductive medium. PMID:21628774

Yang, Renhuan; Li, Xu; Liu, Jun; He, Bin

2011-01-01

277

High resolution MALDI imaging mass spectrometry of retinal tissue lipids.  

PubMed

Matrix assisted laser desorption ionization imaging mass spectrometry (MALDI IMS) has the ability to provide an enormous amount of information on the abundances and spatial distributions of molecules within biological tissues. The rapid progress in the development of this technology significantly improves our ability to analyze smaller and smaller areas and features within tissues. The mammalian eye has evolved over millions of years to become an essential asset for survival, providing important sensory input of an organism's surroundings. The highly complex sensory retina of the eye is comprised of numerous cell types organized into specific layers with varying dimensions, the thinnest of which is the 10 µm retinal pigment epithelium (RPE). This single cell layer and the photoreceptor layer contain the complex biochemical machinery required to convert photons of light into electrical signals that are transported to the brain by axons of retinal ganglion cells. Diseases of the retina, including age-related macular degeneration (AMD), retinitis pigmentosa, and diabetic retinopathy, occur when the functions of these cells are interrupted by molecular processes that are not fully understood. In this report, we demonstrate the use of high spatial resolution MALDI IMS and FT-ICR tandem mass spectrometry in the Abca4(-/-) knockout mouse model of Stargardt disease, a juvenile onset form of macular degeneration. The spatial distributions and identity of lipid and retinoid metabolites are shown to be unique to specific retinal cell layers. PMID:24819461

Anderson, David M G; Ablonczy, Zsolt; Koutalos, Yiannis; Spraggins, Jeffrey; Crouch, Rosalie K; Caprioli, Richard M; Schey, Kevin L

2014-08-01

278

High Resolution MALDI Imaging Mass Spectrometry of Retinal Tissue Lipids  

PubMed Central

Matrix assisted laser desorption ionization imaging mass spectrometry (MALDI IMS) has the ability to provide an enormous amount of information on the abundances and spatial distributions of molecules within biological tissues. The rapid progress in the development of this technology significantly improves our ability to analyze smaller and smaller areas and features within tissues. The mammalian eye has evolved over millions of years to become an essential asset for survival, providing important sensory input of an organism's surroundings. The highly complex sensory retina of the eye is comprised of numerous cell types organized into specific layers with varying dimensions, the thinnest of which is the 10 µm retinal pigment epithelium (RPE). This single cell layer and the photoreceptor layer contain the complex biochemical machinery required to convert photons of light into electrical signals that are transported to the brain by axons of retinal ganglion cells. Diseases of the retina, including age-related macular degeneration (AMD), retinitis pigmentosa, and diabetic retinopathy, occur when the functions of these cells are interrupted by molecular processes that are not fully understood. In this report, we demonstrate the use of high spatial resolution MALDI IMS and FT-ICR tandem mass spectrometry in the Abca4−/− knockout mouse model of Stargardt disease, a juvenile onset form of macular degeneration. The spatial distributions and identity of lipid and retinoid metabolites are shown to be unique to specific retinal cell layers. PMID:24819461

Anderson, David M. G.; Ablonczy, Zsolt; Koutalos, Yiannis; Spraggins, Jeffrey; Crouch, Rosalie K.; Caprioli, Richard M.; Schey, Kevin L.

2014-01-01

279

High Resolution MALDI Imaging Mass Spectrometry of Retinal Tissue Lipids  

NASA Astrophysics Data System (ADS)

Matrix assisted laser desorption ionization imaging mass spectrometry (MALDI IMS) has the ability to provide an enormous amount of information on the abundances and spatial distributions of molecules within biological tissues. The rapid progress in the development of this technology significantly improves our ability to analyze smaller and smaller areas and features within tissues. The mammalian eye has evolved over millions of years to become an essential asset for survival, providing important sensory input of an organism's surroundings. The highly complex sensory retina of the eye is comprised of numerous cell types organized into specific layers with varying dimensions, the thinnest of which is the 10 µm retinal pigment epithelium (RPE). This single cell layer and the photoreceptor layer contain the complex biochemical machinery required to convert photons of light into electrical signals that are transported to the brain by axons of retinal ganglion cells. Diseases of the retina, including age-related macular degeneration (AMD), retinitis pigmentosa, and diabetic retinopathy, occur when the functions of these cells are interrupted by molecular processes that are not fully understood. In this report, we demonstrate the use of high spatial resolution MALDI IMS and FT-ICR tandem mass spectrometry in the Abca4−/− knockout mouse model of Stargardt disease, a juvenile onset form of macular degeneration. The spatial distributions and identity of lipid and retinoid metabolites are shown to be unique to specific retinal cell layers.

Anderson, David M. G.; Ablonczy, Zsolt; Koutalos, Yiannis; Spraggins, Jeffrey; Crouch, Rosalie K.; Caprioli, Richard M.; Schey, Kevin L.

2014-08-01

280

Retinal image analysis for quantification of ocular disease  

NASA Astrophysics Data System (ADS)

In this paper we propose to develop a computer-assisted reading (CAR) tool for ocular disease. This involves identification and quantitative description of the patterns in the retinal vasculature. The features taken into account are fractal dimension and vessel branching. Subsequently, a measure combining these features is designed to help quantify the progression of the disease. The aim of the research is to develop algorithms for parameterization of eye fundus images, thus improving diagnostics.
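The fractal dimension of a segmented vascular pattern is commonly estimated by box counting: cover the binary vessel map with boxes of side s, count the occupied boxes N(s), and fit the slope of log N against log s. A minimal sketch; the box sizes and the synthetic test image are illustrative assumptions:

```python
import numpy as np

def box_counting_dimension(binary, sizes=(1, 2, 4, 8, 16)):
    """Estimate fractal dimension of a binary image by box counting:
    count occupied s-by-s boxes N(s), then fit log N = -D log s + c."""
    binary = np.asarray(binary, bool)
    counts = []
    for s in sizes:
        h = (binary.shape[0] // s) * s
        w = (binary.shape[1] // s) * s
        blocks = binary[:h, :w].reshape(h // s, s, w // s, s)
        counts.append(blocks.any(axis=(1, 3)).sum())
    slope, _ = np.polyfit(np.log(sizes), np.log(counts), 1)
    return -slope

# Sanity check: a filled square is 2-dimensional.
img = np.ones((64, 64), bool)
print(round(box_counting_dimension(img), 2))  # 2.0
```

Real vessel maps yield a dimension between 1 and 2, and changes in that value are one of the markers of disease progression the paper targets.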

Acharyya, Mausumi; Chakravarty, Sumit; Raman, Rajiv

2012-06-01

281

Textureless Macula Swelling Detection with Multiple Retinal Fundus Images  

SciTech Connect

Retinal fundus images acquired with non-mydriatic digital fundus cameras are a versatile tool for the diagnosis of various retinal diseases. Because of the ease of use of newer camera models and their relatively low cost, these cameras can be employed by operators with limited training for telemedicine or Point-of-Care applications. We propose a novel technique that uses uncalibrated multiple-view fundus images to analyse the swelling of the macula. This innovation enables the detection and quantitative measurement of swollen areas by remote ophthalmologists, a capability that is not available with a single image and is prone to error with stereo fundus cameras. We also present automatic algorithms to measure features from the reconstructed image which are useful in Point-of-Care automated diagnosis of early macular edema, e.g., before the appearance of exudation. The technique presented is divided into three parts: first, a preprocessing technique simultaneously enhances the dark microstructures of the macula and equalises the image; second, all available views are registered using non-morphological sparse features; finally, a dense pyramidal optical flow is calculated for all the images and statistically combined to build a naive height map of the macula. Results are presented on three sets of synthetic images and two sets of real-world images. These preliminary tests show the ability to infer a minimum swelling of 300 microns and to correlate the reconstruction with the swollen location.

Giancardo, Luca [ORNL; Meriaudeau, Fabrice [ORNL; Karnowski, Thomas Paul [ORNL; Tobin Jr, Kenneth William [ORNL; Grisan, Enrico [University of Padua, Padua, Italy; Favaro, Paolo [Heriot-Watt University, Edinburgh; Ruggeri, Alfredo [University of Padua, Padua, Italy; Chaum, Edward [University of Tennessee, Knoxville (UTK)

2010-01-01

282

Non-contrast Enhanced MR Venography Using 3D Fresh Blood Imaging (FBI): Initial Experience  

Microsoft Academic Search

Objective: This study examined the efficacy of 3D-fresh blood imaging (FBI) in patients with venous disease in the iliac region to lower extremity. Materials and Methods: Fourteen patients with venous disease were examined (8 deep venous thrombosis (DVT) and 6 varix) by 3D-FBI and 2D-TOF MRA. All FBI images and 2D-TOF images were evaluated in terms of visualization of the

Kenichi Yokoyama; Toshiaki Nitatori; Sayuki Inaoka; Taro Takahara; Junichi Hachiya

283

Synthesis of image sequences for Korean sign language using 3D shape model  

NASA Astrophysics Data System (ADS)

This paper proposes a method for offering information and realizing communication to deaf people, who communicate by means of sign language although most other people are unfamiliar with it. The proposed method converts text data into the corresponding image sequences for Korean sign language (KSL). A general 3D shape model of the upper body is used to generate the 3D motions of KSL; this general model must be constructed with the anatomical structure of the human body in mind. To obtain a personal 3D shape model, the general model is adjusted to personal base images. Image synthesis for KSL consists of deforming the personal 3D shape model and texture-mapping the personal images onto the deformed model. The 3D motions for KSL comprise facial expressions and 3D movements of the head, trunk, arms and hands, and are parameterized for easy deformation of the model. These motion parameters of the upper body are extracted from a skilled signer's motion for each KSL sign and stored in a database. Editing the parameters according to the input text generates the image sequences of 3D motions.

Hong, Mun-Ho; Choi, Chang-Seok; Kim, Chang-Seok; Jeon, Joon-Hyeon

1995-05-01

284

Design of a 3-D Infrared Imaging System Using Structured Light  

Microsoft Academic Search

Two-dimensional infrared thermography (IRT) is widely used in various domains and can be extended to more applications if the spatial information of the temperature distribution is provided to form three-dimensional (3-D) thermography. A 3-D infrared (IR) imaging system based on structured light is designed to acquire the 3-D surface temperature distribution. The projector, color camera, and IR camera must

Rongqian Yang; Yazhu Chen

2011-01-01

285

Optical coherence photoacoustic microscopy for in vivo multimodal retinal imaging.  

PubMed

We developed an optical coherence photoacoustic microscopy (OC-PAM) system, which can accomplish optical coherence tomography (OCT) and photoacoustic microscopy (PAM) simultaneously by using a single pulsed broadband light source. With a center wavelength of 800 nm and a bandwidth of 30 nm, the system is suitable for imaging the retina. Generated from the same group of photons, the OCT and PAM images are intrinsically registered in the lateral directions. To test the capabilities of the system on multimodal ophthalmic imaging, we imaged the retina of pigmented rats. The OCT images showed the retinal structures with quality similar to conventional OCT, while the PAM images revealed the distribution of absorbers in the retina. Since the absorption of hemoglobin is relatively weak at around 800 nm, the NIR PAM signals are generated mainly from melanin in the posterior segment of the eye, thus providing melanin-specific imaging of the retina. PMID:25831335

Liu, Xiaojing; Liu, Tan; Wen, Rong; Li, Yiwen; Puliafito, Carmen A; Zhang, Hao F; Jiao, Shuliang

2015-04-01

286

How Accurate Are the Fusion of Cone-Beam CT and 3-D Stereophotographic Images?  

PubMed Central

Background Cone-beam Computed Tomography (CBCT) and stereophotography are two of the latest imaging modalities available for three-dimensional (3-D) visualization of craniofacial structures. However, CBCT provides only limited information on surface texture. This can be overcome by combining the bone images derived from CBCT with 3-D photographs. The objectives of this study were 1) to evaluate the feasibility of integrating 3-D photos and CBCT images, 2) to assess the degree of error that may occur during the above processes, and 3) to identify facial regions that would be most appropriate for 3-D image registration. Methodology CBCT scans and stereophotographic images from 29 patients were used for this study. Two 3-D images corresponding to the skin and bone were extracted from the CBCT data. The 3-D photo was superimposed on the CBCT skin image using relatively immobile areas of the face as a reference. 3-D colour maps were used to assess the accuracy of superimposition, where distance differences between the CBCT and 3-D photo were recorded as the signed average and the Root Mean Square (RMS) error. Principal Findings: The signed average and RMS of the distance differences between the registered surfaces were −0.018 (0.129) mm and 0.739 (0.239) mm respectively. Most errors were found in areas surrounding the lips and the eyes, while minimal errors were noted in the forehead, root of the nose and zygoma. Conclusions CBCT and 3-D photographic data can be successfully fused with minimal errors. When compared to RMS, the signed average was found to under-represent the registration error. The virtual 3-D composite craniofacial models permit concurrent assessment of bone and soft tissues during diagnosis and treatment planning. PMID:23185372
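The under-representation noted in the conclusion is easy to see numerically: signed distance differences of opposite sign cancel in the mean but not in the RMS. A small illustration; the distance values are made up for the example:

```python
import numpy as np

def signed_average(d):
    """Mean of signed surface-to-surface distances (cancellation-prone)."""
    return float(np.mean(d))

def rms(d):
    """Root mean square of the distances (magnitude-sensitive)."""
    return float(np.sqrt(np.mean(np.square(d))))

# Symmetric mismatches cancel in the signed average but not in the RMS.
d = np.array([0.8, -0.8, 0.5, -0.5, 0.2, -0.2])  # mm, hypothetical values
print(signed_average(d), round(rms(d), 3))  # 0.0 0.557
```

This is why the study reports both statistics: a near-zero signed average alone would suggest a far better registration than the 0.739 mm RMS reveals.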

Jayaratne, Yasas S. N.; McGrath, Colman P. J.; Zwahlen, Roger A.

2012-01-01

287

Adaptive Optics Retinal Imaging Clinical Opportunities and Challenges  

PubMed Central

The array of therapeutic options available to clinicians for treating retinal disease is expanding. With these advances comes the need for better understanding of the etiology of these diseases on a cellular level as well as improved non-invasive tools for identifying the best candidates for given therapies and monitoring the efficacy of those therapies. While spectral domain optical coherence tomography (SD-OCT) offers a widely available tool for clinicians to assay the living retina, it suffers from poor lateral resolution due to the eye's monochromatic aberrations. Adaptive optics (AO) is a technique to compensate for the eye's aberrations and provide nearly diffraction-limited resolution. The result is the ability to visualize the living retina with cellular resolution. While AO is unquestionably a powerful research tool, many clinicians remain undecided on the clinical potential of AO imaging, putting many at a crossroads with respect to adoption of this technology. This review will briefly summarize the current state of AO retinal imaging, discuss current as well as future clinical applications of AO retinal imaging, and finally provide some discussion of research needs to facilitate more widespread clinical use. PMID:23621343

Carroll, Joseph; Kay, David B.; Scoles, Drew; Dubra, Alfredo; Lombardo, Marco

2014-01-01

288

A novel method for blood vessel detection from retinal images  

PubMed Central

Background The morphological changes of the retinal blood vessels in retinal images are important indicators for diseases like diabetes, hypertension and glaucoma. Thus the accurate segmentation of blood vessels is of diagnostic value. Methods In this paper, we present a novel method to segment retinal blood vessels that overcomes the variations in contrast of large and thin vessels. The method uses adaptive local thresholding to produce a binary image, then extracts large connected components as large vessels. The residual fragments in the binary image, including some thin vessel segments (or pixels), are classified by a Support Vector Machine (SVM). Tracking growth is applied to the thin vessel segments to form the whole vascular network. Results The proposed algorithm is tested on the DRIVE database; the average sensitivity is over 77% while the average accuracy reaches 93.2%. Conclusions We distinguish large vessels by adaptive local thresholding, exploiting their good contrast. Thin vessel segments with poor contrast are then identified by SVM and lengthened by tracking. The proposed method avoids heavy computation and manual intervention. PMID:20187975
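The first two stages (adaptive local thresholding, then keeping only large connected components) can be sketched as follows. The window size, darkness offset, and minimum component size are illustrative assumptions, and the SVM stage for thin fragments is omitted:

```python
import numpy as np
from scipy.ndimage import uniform_filter, label

def segment_large_vessels(img, window=15, offset=0.05, min_size=50):
    """Sketch of the paper's first two stages: a pixel is 'vessel' if it is
    darker than its local mean by `offset`; small residual fragments (the
    input to the SVM stage) are discarded here."""
    local_mean = uniform_filter(img.astype(float), size=window)
    binary = img < local_mean - offset          # adaptive local threshold
    labeled, n = label(binary)                  # connected components
    sizes = np.bincount(labeled.ravel())
    keep = np.zeros(n + 1, bool)
    keep[1:] = sizes[1:] >= min_size            # keep only large components
    return keep[labeled]

# Synthetic fundus patch: bright background with one dark vertical 'vessel'.
img = np.ones((60, 60))
img[:, 28:31] = 0.0
mask = segment_large_vessels(img)
print(mask[:, 29].all(), mask[:, 5].any())  # True False
```

Local (rather than global) thresholding is what tolerates the uneven illumination typical of fundus photographs.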

2010-01-01

289

A new approach towards image based virtual 3D city modeling by using close range photogrammetry  

NASA Astrophysics Data System (ADS)

A 3D city model is a digital representation of the Earth's surface and its related objects, such as buildings, trees, vegetation, and man-made features belonging to an urban area. The demand for 3D city modeling is increasing for various engineering and non-engineering applications. Generally, three main image-based approaches are used for generating virtual 3D city models: sketch-based modeling, procedural-grammar-based modeling, and close-range-photogrammetry-based modeling. A literature study shows that, to date, there is no complete solution for creating a full 3D city model from images, and these image-based methods have limitations. This paper gives a new approach towards image-based virtual 3D city modeling using close range photogrammetry. The approach is divided into three sections: data acquisition, 3D data processing, and data combination. In the data acquisition process, a multi-camera setup was developed and used for video recording of an area; image frames were created from the video data, and the minimum required set of suitable frames was selected for 3D processing. In the second section, based on close-range photogrammetric principles and computer vision techniques, a 3D model of the area was created. In the third section, this 3D model was exported for adding and merging with other pieces of the larger area; the model was scaled and aligned, and after texturing and rendering, a final photo-realistic textured 3D model was created. This 3D model was then converted into a walk-through model or movie form. Most of the processing steps are automatic, so the method is cost-effective and less laborious, and the accuracy of the model is good. For this research work, the study area is the campus of the Department of Civil Engineering, Indian Institute of Technology, Roorkee, which acts as a prototype for a city.
Aerial photography is restricted in many countries, and high-resolution satellite images are costly; the proposed method, by contrast, is based only on simple video recording of the area, making it suitable for 3D city modeling. A photo-realistic, scalable, geo-referenced virtual 3D city model is useful for many kinds of applications, such as navigation planning, tourism, disaster management, transportation, municipal, urban and environmental management, and the real-estate industry. This study therefore provides a good roadmap for the geomatics community to create photo-realistic virtual 3D city models using close range photogrammetry.

Singh, S. P.; Jain, K.; Mandla, V. R.

2014-05-01

290

A Workstation for Interactive Display and Quantitative Analysis of 3-D and 4-D Biomedical Images  

PubMed Central

The capability to extract objective and quantitatively accurate information from 3-D radiographic biomedical images has not kept pace with the capabilities to produce the images themselves. This is rather an ironic paradox, since on the one hand the new 3-D and 4-D imaging capabilities promise significant potential for providing greater specificity and sensitivity (i.e., precise objective discrimination and accurate quantitative measurement of body tissue characteristics and function) in clinical diagnostic and basic investigative imaging procedures than ever possible before, but on the other hand, the momentous advances in computer and associated electronic imaging technology which have made these 3-D imaging capabilities possible have not been concomitantly developed for full exploitation of these capabilities. Therefore, we have developed a powerful new microcomputer-based system which permits detailed investigations and evaluation of 3-D and 4-D (dynamic 3-D) biomedical images. The system comprises a special workstation to which all the information in a large 3-D image data base is accessible for rapid display, manipulation, and measurement. The system provides important capabilities for simultaneously representing and analyzing both structural and functional data and their relationships in various organs of the body. This paper provides a detailed description of this system, as well as some of the rationale, background, theoretical concepts, and practical considerations related to system implementation.

Robb, R.A.; Heffeman, P.B.; Camp, J.J.; Hanson, D.P.

1986-01-01

291

Recovering 3D Shape and Motion from Image Streams using Non-Linear Least Squares  

Microsoft Academic Search

The simultaneous recovery of 3D shape and motion from image sequences is one of the more difficult problems in computer vision. Classical approaches to the problem rely on using algebraic techniques to solve for these unknowns given two or more images. More recently, a batch analysis of image streams (the temporal tracks of distinguishable image features) under orthography has resulted in highly accurate reconstructions.
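A minimal nonlinear-least-squares refinement in the spirit of the abstract: orthographic projections of unknown 3D points under unknown per-view rotations and 2D translations, minimized with scipy.optimize.least_squares. The problem sizes, noise scales, and the near-truth initialization (which a batch/algebraic method would normally supply) are all illustrative assumptions:

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)

def rodrigues(w):
    """Rotation matrix from an axis-angle vector (Rodrigues' formula)."""
    th = np.linalg.norm(w)
    if th < 1e-12:
        return np.eye(3)
    k = w / th
    K = np.array([[0, -k[2], k[1]], [k[2], 0, -k[0]], [-k[1], k[0], 0]])
    return np.eye(3) + np.sin(th) * K + (1 - np.cos(th)) * (K @ K)

def residuals(p, obs, n_pts, n_views):
    """Stacked 2D reprojection errors over all views."""
    X = p[:n_pts * 3].reshape(n_pts, 3)
    r = []
    for v in range(n_views):
        q = p[n_pts * 3 + v * 5: n_pts * 3 + (v + 1) * 5]  # 3 rot + 2 trans
        R = rodrigues(q[:3])
        proj = X @ R[:2].T + q[3:5]   # orthographic projection
        r.append((proj - obs[v]).ravel())
    return np.concatenate(r)

# Ground truth: 8 points tracked over 3 views.
X_true = rng.normal(size=(8, 3))
motion = [np.concatenate([rng.normal(scale=0.3, size=3),
                          rng.normal(size=2)]) for _ in range(3)]
obs = [X_true @ rodrigues(m[:3])[:2].T + m[3:] for m in motion]

# Start from a perturbed initialization and refine.
p0 = np.concatenate([X_true.ravel()] + motion)
p0 = p0 + rng.normal(scale=0.05, size=8 * 3 + 3 * 5)
fit = least_squares(residuals, p0, args=(obs, 8, 3))
print(fit.cost < 1e-8)
```

With noise-free tracks an exact solution exists, so the residual cost is driven to essentially zero; the recovered shape is only determined up to the usual orthographic gauge ambiguity, which is why the check is on reprojection error rather than on the parameters themselves.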

Richard Szeliski; Sing Bing Kang

1993-01-01

292

Restoration of 3D medical images with total variation scheme on wavelet domains (TVW).  

E-print Network

Restoration of 3D medical images with total variation scheme on wavelet domains (TVW). Arnaud Ogier … in medical imaging leads to different noises. Non-informative noise can damage the image interpretation … highly noisy image data from non-informative noise without sophisticated modeling of the noise statistics

Paris-Sud XI, Université de

293

3-D imaging beneath water-bottom channels in the Gippsland Basin: A case study  

SciTech Connect

Water-bottom channels in the Gippsland Basin cause nonhyperbolic moveout and time delays resulting in disrupted and distorted structural images on time sections. In 3-D processing, prestack depth migration is the ultimate processing technique for resolving nonhyperbolic moveout and producing correct depth images. But, the high cost and slow turnaround time of 3-D prestack depth migration limits its application. An alternative 3-D processing approach, consisting of prestack 3-D replacement dynamics, stacking, poststack inverse 3-D replacement dynamics, and 3-D poststack depth migration, was applied to a 3-D survey located near the shelf-slope edge of the Gippsland Basin. Application of 3-D replacement dynamics reduced nonhyperbolic moveout caused by variable water depth and improved the quality of stacked traces. We found that the interval velocity distribution below the sea floor is largely controlled by compaction or depth below the sea floor. Poststack depth migration, with a compaction-based velocity field, removed the structural distortion beneath the channels, resulting in an accurate 3-D depth image.

Young, K.T.J.; Schneider, W.A. Jr. (Exxon Production Research Co., Houston, TX (United States)); Moore, J.F. (Esso Australia Limited, Melbourne (Australia)) (and others)

1996-01-01

295

ASTM E57 3D imaging systems committee: an update on the standards development effort  

NASA Astrophysics Data System (ADS)

In June 2006, a new ASTM committee (E57) was established to develop standards for 3D imaging systems. This committee is the result of a 4-year effort at the National Institute of Standards and Technology to develop performance evaluation and characterization methods for such systems. The initial focus for the committee will be on standards for 3D imaging systems typically used for applications including, but not limited to, construction and maintenance, surveying, mapping and terrain characterization, manufacturing (e.g., aerospace, shipbuilding), transportation, mining, mobility, historic preservation, and forensics. This paper reports the status of current efforts of the ASTM E57 3D Imaging Systems committee.

Lytle, Alan M.; Cheok, Gerry; Saidi, Kamel

2007-04-01

296

Algorithm of pulmonary emphysema extraction using low dose thoracic 3D CT images  

NASA Astrophysics Data System (ADS)

Recently, due to aging and smoking, the number of emphysema patients has been increasing. Alveoli destroyed by emphysema cannot be restored, so early detection of emphysema is desired. We describe a quantitative algorithm for extracting emphysematous lesions and quantitatively evaluating their distribution patterns using low dose thoracic 3-D CT images. The algorithm identified lung anatomies and extracted low attenuation areas (LAA) as emphysematous lesion candidates. Applying the algorithm to 100 thoracic 3-D CT images and to follow-up 3-D CT images, we demonstrate its potential effectiveness in assisting radiologists and physicians to quantitatively evaluate the distribution of emphysematous lesions and their evolution over time.
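The low attenuation area (LAA) step described above amounts to thresholding the CT volume in Hounsfield units within the lung mask. The sketch below runs on a synthetic volume; the -950 HU cutoff is a common choice in the emphysema literature, not a value stated in this abstract:

```python
import numpy as np

# Synthetic lung CT volume in Hounsfield units: normal lung around -850 HU,
# with a pocket of emphysematous tissue around -980 HU.
rng = np.random.default_rng(1)
lung = rng.normal(-850, 30, size=(40, 40, 40))
lung[10:20, 10:20, 10:20] = rng.normal(-980, 10, size=(10, 10, 10))

LAA_THRESHOLD_HU = -950          # common literature choice (assumption)
laa_mask = lung < LAA_THRESHOLD_HU
laa_percent = 100.0 * laa_mask.mean()
print(f"LAA%: {laa_percent:.1f}")
```

In practice the threshold is applied only inside a segmented lung mask, and the resulting LAA% is tracked across follow-up scans to quantify progression.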

Saita, S.; Kubo, M.; Kawata, Y.; Niki, N.; Nakano, Y.; Omatsu, H.; Tominaga, K.; Eguchi, K.; Moriyama, N.

2006-03-01

297

Algorithm of pulmonary emphysema extraction using thoracic 3D CT images  

NASA Astrophysics Data System (ADS)

Recently, due to aging and smoking, the number of emphysema patients has been increasing. Alveoli destroyed by emphysema cannot be restored, so early detection of emphysema is desired. We describe a quantitative algorithm for extracting emphysematous lesions and quantitatively evaluating their distribution patterns using low dose thoracic 3-D CT images. The algorithm identified lung anatomies and extracted low attenuation areas (LAA) as emphysematous lesion candidates. Applying the algorithm to thoracic 3-D CT images and to follow-up 3-D CT images, we demonstrate its potential effectiveness in assisting radiologists and physicians to quantitatively evaluate the distribution of emphysematous lesions and their evolution over time.

Saita, Shinsuke; Kubo, Mitsuru; Kawata, Yoshiki; Niki, Noboru; Nakano, Yasutaka; Ohmatsu, Hironobu; Tominaga, Keigo; Eguchi, Kenji; Moriyama, Noriyuki

2007-03-01

298

Processing sequence for non-destructive inspection based on 3D terahertz images  

NASA Astrophysics Data System (ADS)

In this paper we present an innovative data and image processing sequence to perform non-destructive inspection from 3D terahertz (THz) images. We develop all the steps, starting from a 3D tomographic reconstruction of a sample from its radiographs acquired with a monochromatic millimetre wave imaging system. An automated segmentation then provides the different volumes of interest (VOI) composing the sample. Then 3D visualization and dimensional measurements are performed on these VOI separately, in order to provide accurate non-destructive testing (NDT) of the studied sample. This sequence is implemented in a single software package and validated through the analysis of different objects.

Balacey, H.; Perraud, Jean-Baptiste; Bou Sleiman, J.; Guillet, Jean-Paul; Recur, B.; Mounaix, P.

2014-11-01

299

Retinal, anterior segment and full eye imaging using ultrahigh speed swept source OCT with vertical-cavity surface emitting lasers.  

PubMed

We demonstrate swept source OCT utilizing vertical-cavity surface emitting laser (VCSEL) technology for in vivo high speed retinal, anterior segment and full eye imaging. The MEMS tunable VCSEL enables long coherence length, adjustable spectral sweep range and adjustable high sweeping rate (50-580 kHz axial scan rate). These features enable integration of multiple ophthalmic applications into one instrument. The operating modes of the device include: ultrahigh speed, high resolution retinal imaging (up to 580 kHz); high speed, long depth range anterior segment imaging (100 kHz) and ultralong range full eye imaging (50 kHz). High speed imaging enables wide-field retinal scanning, while increased light penetration at 1060 nm enables visualization of choroidal vasculature. Comprehensive volumetric data sets of the anterior segment from the cornea to posterior crystalline lens surface are also shown. The adjustable VCSEL sweep range and rate make it possible to achieve an extremely long imaging depth range of ~50 mm, and to demonstrate the first in vivo 3D OCT imaging spanning the entire eye for non-contact measurement of intraocular distances including axial eye length. Swept source OCT with VCSEL technology may be attractive for next generation integrated ophthalmic OCT instruments. PMID:23162712

Grulkowski, Ireneusz; Liu, Jonathan J; Potsaid, Benjamin; Jayaraman, Vijaysekhar; Lu, Chen D; Jiang, James; Cable, Alex E; Duker, Jay S; Fujimoto, James G

2012-11-01

301

Spatial Mutual Information as Similarity Measure for 3-D Brain Image Registration  

PubMed Central

Information theoretic-based similarity measures, in particular mutual information, are widely used for intermodal/intersubject 3-D brain image registration. However, conventional mutual information does not consider spatial dependency between adjacent voxels in images, thus reducing its efficacy as a similarity measure in image registration. This paper first presents a review of the existing attempts to incorporate spatial dependency into the computation of mutual information (MI). Then, a recently introduced spatially dependent similarity measure, named spatial MI, is extended to 3-D brain image registration. This extension also eliminates its artifact for translational misregistration. Finally, the effectiveness of the proposed 3-D spatial MI as a similarity measure is compared with three existing MI measures by applying controlled levels of noise degradation to 3-D simulated brain images. PMID:24851197
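For reference, the conventional (non-spatial) mutual information that the paper extends can be computed from a joint intensity histogram. This sketch is only the baseline measure, not the proposed spatial MI:

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Conventional MI between two images from a 2D joint histogram."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of a
    py = pxy.sum(axis=0, keepdims=True)   # marginal of b
    nz = pxy > 0                          # avoid log(0)
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(2)
img = rng.random((64, 64))
noise = rng.random((64, 64))

mi_self = mutual_information(img, img)     # image with itself: high MI
mi_noise = mutual_information(img, noise)  # unrelated noise: near zero
print(f"MI(self)={mi_self:.2f}  MI(noise)={mi_noise:.2f}")
```

Because each pixel is treated independently here, shuffling both images identically leaves MI unchanged; incorporating spatial dependency between adjacent voxels is exactly the gap the paper addresses.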

Razlighi, Qolamreza R.; Kehtarnavaz, Nasser

2014-01-01

302

3D-snapshot flash NMR imaging of the human heart.  

PubMed

SNAPSHOT-FLASH is a recently developed, ultrafast imaging technique, based on conventional FLASH imaging. The application of this new variant to 3D imaging allows the acquisition of a 128 x 128 x 32 data set in 12.5 seconds without triggering, or for cardiac imaging with gating within 32 heartbeats. Compared to standard 3D-FLASH this is 128 times faster, because triggering is only required when the 3D phase-encoding gradient is incremented. The method depicts for the first time fast three-dimensional views of the human heart without motional artifacts. The images are spin-density weighted. Using suitable prepulses any desired T1- or T2-contrast may be achieved. The generation of 3D movies is possible without an increase of the total scan time. PMID:2392025

Henrich, D; Haase, A; Matthaei, D

1990-01-01

303

Retinal oxygen saturation evaluation by multi-spectral fundus imaging  

NASA Astrophysics Data System (ADS)

Purpose: To develop a multi-spectral method to measure oxygen saturation of the retina in the human eye. Methods: Five Cynomolgus monkeys with normal eyes were anesthetized with intramuscular ketamine/xylazine and intravenous pentobarbital. Multi-spectral fundus imaging was performed in five monkeys with a commercial fundus camera equipped with a liquid crystal tuned filter in the illumination light path and a 16-bit digital camera. Recording parameters were controlled with software written specifically for the application. Seven images at successively longer oxygen-sensing wavelengths were recorded within 4 seconds. Individual images for each wavelength were captured in less than 100 msec of flash illumination. Slightly misaligned images of separate wavelengths due to slight eye motion were registered and corrected by translational and rotational image registration prior to analysis. Numerical values of relative oxygen saturation of retinal arteries and veins and the underlying tissue in between the artery/vein pairs were evaluated by an algorithm previously described, but which is now corrected for blood volume from averaged pixels (n > 1000). Color saturation maps were constructed by applying the algorithm at each image pixel using a Matlab script. Results: Both the numerical values of relative oxygen saturation and the saturation maps correspond to the physiological condition, that is, in a normal retina, the artery is more saturated than the tissue and the tissue is more saturated than the vein. With the multi-spectral fundus camera and proper registration of the multi-wavelength images, we were able to determine oxygen saturation in the primate retinal structures on a tolerable time scale which is applicable to human subjects. Conclusions: Seven wavelength multi-spectral imagery can be used to measure oxygen saturation in retinal artery, vein, and tissue (microcirculation). This technique is safe and can be used to monitor oxygen uptake in humans. 
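The saturation algorithm itself is only cited, not specified, in the abstract above. A common ratiometric scheme uses optical densities at an oxygen-sensitive and an isosbestic wavelength; in the sketch below the wavelengths, intensity values and calibration constants a, b are all placeholder assumptions, not values from the paper:

```python
import numpy as np

def optical_density(vessel_intensity, background_intensity):
    # OD = log10(I_background / I_vessel), measured per wavelength
    return np.log10(background_intensity / vessel_intensity)

# Hypothetical mean intensities inside a vessel and in adjacent background
# at an isosbestic wavelength (~570 nm) and an oxygen-sensitive one (~600 nm).
od_isosbestic = optical_density(80.0, 200.0)
od_sensitive = optical_density(150.0, 200.0)

# The OD ratio is approximately linear in oxygen saturation; a and b below
# are placeholder calibration constants, not values from this study.
a, b = 1.1, 1.0
odr = od_sensitive / od_isosbestic
so2 = a - b * odr
print(f"OD ratio: {odr:.2f}  estimated SO2: {so2:.2f}")
```

The multi-wavelength design in the paper generalizes this two-wavelength idea and additionally corrects for blood volume, which the simple ratio does not.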

Khoobehi, Bahram; Ning, Jinfeng; Puissegur, Elise; Bordeaux, Kimberly; Balasubramanian, Madhusudhanan; Beach, James

2007-03-01

304

Automatic arteriovenous crossing phenomenon detection on retinal fundus images  

NASA Astrophysics Data System (ADS)

Arteriolosclerosis is one cause of acquired blindness. Retinal fundus image examination is useful for early detection of arteriolosclerosis. In order to diagnose the presence of arteriolosclerosis, physicians look for silver-wire arteries, copper-wire arteries and the arteriovenous crossing phenomenon on retinal fundus images. The focus of this study was to develop an automated detection method for the arteriovenous crossing phenomenon on retinal images. The blood vessel regions were detected by using a double ring filter, and the crossing sections of artery and vein were detected by using a ring filter. The center of that ring was an interest point, and that point was determined to be a crossing section when there were over four blood vessel segments on the ring. The two blood vessels passing through the ring were then classified into artery and vein by using the pixel values of the red and blue component images. Finally, the V2-to-V1 ratio was measured for recognition of abnormalities, where V1 was the venous diameter far from the blood vessel crossing section and V2 was the venous diameter near the crossing section. A crossing section with a V2-to-V1 ratio over 0.8 was experimentally determined to be abnormal. Twenty-four images, including 27 abnormalities and 54 normal crossing sections, were used for preliminary evaluation of the proposed method. The proposed method detected 73% of crossing sections with 2.8 false detections per image, and 59% of abnormalities were detected by measurement of the V2-to-V1 ratio with 1.7 false detections per image.
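The V2-to-V1 decision rule described above reduces to a one-line check; the function below simply encodes the 0.8 threshold stated in the abstract (diameters here are in arbitrary units):

```python
def is_abnormal_crossing(v1, v2, threshold=0.8):
    """Encode the rule from the abstract: V1 is the venous diameter far
    from the crossing, V2 the diameter near it; a crossing whose V2-to-V1
    ratio exceeds the threshold is flagged as a candidate abnormality."""
    return (v2 / v1) > threshold

print(is_abnormal_crossing(100.0, 90.0))  # ratio 0.9 -> flagged
print(is_abnormal_crossing(100.0, 60.0))  # ratio 0.6 -> not flagged
```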

Hatanaka, Yuji; Muramatsu, Chisako; Hara, Takeshi; Fujita, Hiroshi

2011-03-01

305

3D image-based scatter estimation and correction for multi-detector CT imaging  

NASA Astrophysics Data System (ADS)

The aim of this work is to implement and evaluate a 3D image-based approach for the estimation of scattered radiation in multi-detector CT. Based on a reconstructed CT image volume, the scattered radiation contribution is calculated in 3D fan-beam geometry in the framework of an extended point-scatter kernel (PSK) model of scattered radiation. The PSK model is based on the calculation of elemental scatter contributions propagating the rays from the focal spot to the detector across the object for defined interaction points on a 3D fan beam grid. Each interaction point in 3D leads to an individual elemental 2D scatter distribution on the detector. The sum of all elemental contributions represents the total scatter intensity distribution on the detector. Our proposed extended PSK depends on the scattering angle (defined by the interaction point and the considered detector channel) and the line integral between the interaction point on a 3D fan beam ray and the intersection of the same ray with the detector. The PSK comprises single and multiple scattering as well as the angular selectivity characteristics of the anti-scatter grid on the detector. Our point-scatter kernels were obtained from a low-noise Monte-Carlo simulation of water-equivalent spheres with different radii for a particular CT scanner geometry. The model allows obtaining noise-free scatter intensity distribution estimates with a lower computational load compared to Monte-Carlo methods. In this work, we give a description of the algorithm and the proposed PSK. Furthermore, we compare resulting scatter intensity distributions (obtained for numerical phantoms) to Monte-Carlo results.

Petersilka, M.; Allmendinger, T.; Stierstorfer, K.

2014-03-01

306

Mutual information as a measure of image quality for 3D dynamic lung imaging with EIT.  

PubMed

We report on a pilot study of dynamic lung electrical impedance tomography (EIT) at the University of Manchester. Low-noise EIT data at 100 frames per second were obtained from healthy male subjects during controlled breathing, followed by magnetic resonance imaging (MRI) subsequently used for spatial validation of the EIT reconstruction. The torso surface in the MR image and electrode positions obtained using MRI fiducial markers informed the construction of a 3D finite element model extruded along the caudal-distal axis of the subject. Small changes in the boundary that occur during respiration were accounted for by incorporating the sensitivity with respect to boundary shape into a robust temporal difference reconstruction algorithm. EIT and MRI images were co-registered using the open source medical imaging software, 3D Slicer. A quantitative comparison of quality of different EIT reconstructions was achieved through calculation of the mutual information with a lung-segmented MR image. EIT reconstructions using a linear shape correction algorithm reduced boundary image artefacts, yielding better contrast of the lungs, and had 10% greater mutual information compared with a standard linear EIT reconstruction. PMID:24710978

Crabb, M G; Davidson, J L; Little, R; Wright, P; Morgan, A R; Miller, C A; Naish, J H; Parker, G J M; Kikinis, R; McCann, H; Lionheart, W R B

2014-05-01

307

Infrared imaging of the polymer 3D-printing process  

NASA Astrophysics Data System (ADS)

Both mid-wave and long-wave IR cameras are used to measure various temperature profiles in thermoplastic parts as they are printed. Two significantly different 3D-printers are used in this study. The first is a small scale commercially available Solidoodle 3 printer, which prints parts with layer thicknesses on the order of 125 µm. The second printer used is a "Big Area Additive Manufacturing" (BAAM) 3D-printer developed at Oak Ridge National Laboratory. The BAAM prints parts with a layer thickness of 4.06 mm. Of particular interest is the temperature of the previously deposited layer as the new hot layer is about to be extruded onto it. The two layers are expected to have a stronger bond if the temperature of the substrate layer is above the glass transition temperature. This paper describes the measurement technique and results for a study of temperature decay and substrate layer temperature for ABS thermoplastic with and without the addition of chopped carbon fibers.
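The substrate-layer argument above can be illustrated with a lumped-capacitance cooling model: the deposited layer decays exponentially toward ambient, and the next layer should arrive before it crosses the glass transition temperature. All temperatures and the time constant below are assumed ABS-like values, not measurements from the study:

```python
import numpy as np

# Lumped-capacitance cooling: T(t) = T_amb + (T_extrude - T_amb) * exp(-t/tau).
# Illustrative assumptions only, not data from the paper.
T_extrude, T_amb, Tg = 230.0, 25.0, 105.0   # degC, ABS-like values
tau = 20.0                                   # s, assumed cooling time constant

def layer_temperature(t):
    return T_amb + (T_extrude - T_amb) * np.exp(-t / tau)

# Latest arrival time for the next layer with the substrate still above Tg:
t_max = -tau * np.log((Tg - T_amb) / (T_extrude - T_amb))
print(f"substrate stays above Tg for {t_max:.1f} s")
```

Fitting tau from the measured IR temperature-decay curves would turn this toy model into a usable inter-layer timing estimate for a given printer.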

Dinwiddie, Ralph B.; Kunc, Vlastimil; Lindal, John M.; Post, Brian; Smith, Rachel J.; Love, Lonnie; Duty, Chad E.

2014-05-01

308

Compact Ambient Light Cancellation Design and Optimization for 3D Time-of-Flight Image Sensors  

E-print Network

A highly compact ambient-light-cancellation (ALC) circuit for 3D time-of-flight image sensors ... performance is also presented. The QVGA sensor has been demonstrated at up to 40k lux of ambient light

Fossum, Eric R.

309

Parallel implementation of image segmentation for tracking 3D salt boundaries  

Microsoft Academic Search

We distribute the modified normalized cuts image segmentation with random boundaries algorithm on a parallel network to track 3D salt boundaries. We identify two key steps of this algorithm for parallelization. Firstly, we parallelize the calculation of the weight matrix. Secondly, we parallelize the matrix-vector product of the eigenvector calculation. This method is demonstrated to be effective on a 3D

Jesse Lomask; Robert G. Clapp

2006-01-01

310

Heterogeneous Deformation Model for 3D Shape and Motion Recovery from Multi-Viewpoint Images  

Microsoft Academic Search

This paper presents a framework for dynamic 3D shape and motion reconstruction from multi-viewpoint images using a deformable mesh model. By deforming a mesh at a frame to that at the next frame, we can obtain both 3D shape and motion of the object simultaneously. The deformation process of our mesh model is heterogeneous. Each vertex changes its deformation

Shohei Nobuhara; Takashi Matsuyama

2004-01-01

311

Finite Element Methods for Active Contour Models and Balloons for 2D and 3D Images  

Microsoft Academic Search

The use of energy-minimizing curves, known as "snakes" to extract features of interest in images has been introduced by Kass, Witkin and Terzopoulos [23]. A balloon model was introduced in [12] as a way to generalize and solve some of the problems encountered with the original method. We present a 3D generalization of the balloon model as a 3D deformable

Laurent D. Cohen; Isaac Cohen

1991-01-01

312

EXTRACTION OF 3D STRAIGHT LINES USING LIDAR DATA AND AERIAL IMAGES  

Microsoft Academic Search

Light Detection and Ranging (LIDAR) is a technology used for collecting topographic data. Nowadays there are diverse application areas of LIDAR data that include DTM generation and building extraction. 3D representations of buildings extracted from LIDAR data are often quite inaccurate regarding the location of the building edges. High resolution aerial images have the potential to accurately extract 3D straight

A. Miraliakbari; M. Hahn; H. Arefi; J. Engels

313

High-Definition Texture Reconstruction for 3D Image-based Modeling  

E-print Network

as a means to create realistic 3D digital models of real-world objects. Applications range from entertainment (games, movies) to engineering and architecture (design) and e-commerce (advertisement). Hoang Minh Nguyen, The University

Sun, Jing

314

Blood vessel detection in retinal images and its application in diabetic retinopathy screening  

E-print Network

[Figure captions: a microaneurysm-like object that is in fact dust on the camera lens; left, an image with an unknown object; right, an image with age-related macular degeneration ... in retinal images. (Source: [5]. Courtesy of Dr. J.V. Forrester)] Several retinal complications have been related to vascular anomalies and structure change, including diabetic retinopathy, glaucoma, retinal artery occlusion and macular...

Zhang, Ming

2009-05-15

315

3-D Target Location from Stereoscopic SAR Images  

SciTech Connect

SAR range-Doppler images are inherently 2-dimensional. Targets with a height offset lay over onto offset range and azimuth locations. Just which image locations are laid upon depends on the imaging geometry, including depression angle, squint angle, and target bearing. This is the well known layover phenomenon. Images formed with different aperture geometries will exhibit different layover characteristics. These differences can be exploited to ascertain target height information, in a stereoscopic manner. Depending on the imaging geometries, height accuracy can be on the order of horizontal position accuracies, thereby rivaling the best IFSAR capabilities in fine resolution SAR images. All that is required for this to work are two distinct passes with suitably different geometries from any plain old SAR.

Doerry, Armin W.

1999-10-01

316

GPU implementation of a deformable 3D image registration algorithm.  

PubMed

We present a parallel implementation of a new deformable image registration algorithm using the Compute Unified Device Architecture (CUDA). The algorithm co-registers preoperative and intraoperative 3-dimensional magnetic resonance (MR) images of a deforming organ. It employs a linear elastic dynamic finite-element model of the deformation and distance measures such as mutual information and sum of squared differences to align volumetric image data sets. Computationally intensive elements of the method such as interpolation, displacement and force calculation are significantly accelerated using a Graphics Processing Unit (GPU). The result of experiments carried out with a realistic breast phantom tissue shows a 37-fold speedup for the GPU-based implementation compared with an optimized CPU-based implementation in high resolution MR image registration. The GPU implementation is capable of registering 512 × 512 × 136 image sets in just over 2 seconds, making it suitable for clinical applications requiring fast and accurate processing of medical images. PMID:22255436
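As a minimal illustration of the sum-of-squared-differences measure mentioned above: the paper's registration is elastic and GPU-accelerated, whereas this sketch only searches rigid integer translations on the CPU, with a synthetic known displacement:

```python
import numpy as np

# Brute-force translational registration using the SSD similarity measure.
rng = np.random.default_rng(4)
fixed = rng.random((32, 32))
moving = np.roll(fixed, shift=(3, -2), axis=(0, 1))  # known displacement

# Search a small window of shifts for the one minimizing SSD.
best = min(
    ((dy, dx) for dy in range(-5, 6) for dx in range(-5, 6)),
    key=lambda s: ((np.roll(moving, s, axis=(0, 1)) - fixed) ** 2).sum(),
)
print(best)  # recovers (-3, 2), undoing the applied shift
```

Deformable methods replace the single global shift with a dense displacement field regularized by an elastic model, which is where the finite-element machinery and GPU acceleration come in.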

Mousazadeh, Hamed; Marami, Bahram; Sirouspour, Shahin; Patriciu, Alexandru

2011-01-01

317

Fast non local means denoising for 3D MR images.  

PubMed

One critical issue in the context of image restoration is the problem of noise removal while keeping the integrity of relevant image information. Denoising is a crucial step to increase image conspicuity and to improve the performance of all the processing steps needed for quantitative imaging analysis. The method proposed in this paper is based on an optimized version of the Non-Local (NL) Means algorithm. This approach uses the natural redundancy of information in the image to remove the noise. Tests were carried out on synthetic datasets and on real 3T MR images. The results show that the NL-means approach outperforms other classical denoising methods, such as Anisotropic Diffusion Filter and Total Variation. PMID:17354753
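The redundancy idea behind NL-means (each pixel is averaged over all pixels whose surrounding patches look similar) can be sketched in a brute-force 2D form; this is an illustration on a tiny image, not the authors' optimized 3D implementation:

```python
import numpy as np

def nl_means(img, patch=1, h=0.3):
    """Minimal brute-force Non-Local Means: weight every pixel pair by the
    similarity of their (2*patch+1)^2 neighborhoods, then average. O(N^2)."""
    p = np.pad(img, patch, mode="reflect")
    H, W = img.shape
    # One flattened patch per pixel.
    patches = np.array([
        p[i:i + 2 * patch + 1, j:j + 2 * patch + 1].ravel()
        for i in range(H) for j in range(W)
    ])
    d2 = ((patches[:, None, :] - patches[None, :, :]) ** 2).mean(axis=2)
    w = np.exp(-d2 / h**2)                  # similarity weights
    out = (w @ img.ravel()) / w.sum(axis=1)
    return out.reshape(H, W)

rng = np.random.default_rng(3)
clean = np.zeros((12, 12)); clean[:, 6:] = 1.0            # step edge
noisy = clean + 0.15 * rng.standard_normal(clean.shape)
den = nl_means(noisy)
print(np.abs(noisy - clean).mean(), np.abs(den - clean).mean())
```

Because cross-edge patches get near-zero weight, the edge survives while flat regions are averaged, which is the behavior that lets NL-means outperform diffusion-style filters on structured images.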

Coupé, Pierrick; Yger, Pierre; Barillot, Christian

2006-01-01

318

2D and 3D MALDI-imaging: conceptual strategies for visualization and data mining.  

PubMed

3D imaging has a significant impact on many challenges in life sciences, because biology is a 3-dimensional phenomenon. Current 3D imaging technologies (various types of MRI, PET, SPECT) are labeled, i.e. they trace the localization of a specific compound in the body. In contrast, 3D MALDI mass spectrometry-imaging (MALDI-MSI) is a label-free method imaging the spatial distribution of molecular compounds. It complements labeled 3D imaging methods, immunohistochemistry, and genetics-based methods. However, 3D MALDI-MSI cannot tap its full potential due to the lack of statistical methods for analysis and interpretation of large and complex 3D datasets. To overcome this, we established a complete and robust 3D MALDI-MSI pipeline combined with efficient computational data analysis methods for 3D edge-preserving image denoising, 3D spatial segmentation as well as finding colocalized m/z values, which will be reviewed here in detail. Furthermore, we explain why the integration and correlation of the MALDI imaging data with other imaging modalities enhances the interpretation of the molecular data and provides visualization of molecular patterns that may otherwise not be apparent. Therefore, a 3D data acquisition workflow is described generating a set of three image modalities representing the same anatomies. First, an in-vitro MRI measurement is performed which results in a three-dimensional image modality representing the 3D structure of the measured object. After sectioning the 3D object into N consecutive slices, all N slices are scanned using an optical digital scanner, enabling the MS measurements. Scanning the individual sections results in low-resolution images, which define the base coordinate system for the whole pipeline. The scanned images combine the information from the spatial (MRI) and the mass spectrometric (MALDI-MSI) dimension and are used for the spatial three-dimensional reconstruction of the object performed by image registration techniques. Different strategies for automatic serial image registration applied to MS datasets are outlined in detail. The third image modality is histology driven, i.e. a digital scan of the histologically stained slices in high resolution. After fusion of the reconstructed scan images and MRI, the slice-related coordinates of the mass spectra can be propagated into 3D space. After image registration of scan images and histologically stained images, the anatomical information from histology is fused with the mass spectra from MALDI-MSI. As a result of the described pipeline we have a set of three co-registered image modalities representing the same anatomies, i.e. the reconstructed slice scans, the spectral images as well as corresponding clustering results, and the acquired MRI. Great emphasis is put on the fact that the co-registered MRI providing anatomical details improves the interpretation of 3D MALDI images. The ability to relate mass spectrometry-derived molecular information with in vivo and in vitro imaging has potentially important implications. This article is part of a Special Issue entitled: Computational Proteomics in the Post-Identification Era. Guest Editors: Martin Eisenacher and Christian Stephan. PMID:23467008

Thiele, Herbert; Heldmann, Stefan; Trede, Dennis; Strehlow, Jan; Wirtz, Stefan; Dreher, Wolfgang; Berger, Judith; Oetjen, Janina; Kobarg, Jan Hendrik; Fischer, Bernd; Maass, Peter

2014-01-01

319

3D printing based on imaging data: review of medical applications  

Microsoft Academic Search

Purpose: Generation of graspable three-dimensional objects applied for surgical planning, prosthetics and related applications using 3D printing or rapid prototyping is summarized and evaluated. Materials and methods: Graspable 3D objects overcome the limitations of 3D visualizations which can only be displayed on flat screens. 3D objects can be produced based on CT or MRI volumetric medical images. Using dedicated post-processing algorithms, a

F. Rengier; A. Mehndiratta; H. von Tengg-Kobligk; C. M. Zechmann; R. Unterhinninghofen; H.-U. Kauczor; F. L. Giesel

2010-01-01

320

Abstract Title: Image Informatics Tools for the Analysis of Retinal Images  

E-print Network

Dept. of Electrical and Computer Engineering, University of California Santa Barbara. Purpose: To develop software tools that can be used for the enhancement

California at Santa Barbara, University of

321

Rapid Elastic Image Registration for 3D Ultrasound  

Microsoft Academic Search

A Subvolume-based algorithm for elastic Ultrasound REgistration (SURE) was developed and evaluated. Designed primarily to improve spatial resolution in three-dimensional compound imaging, the algorithm registers individual image volumes nonlinearly before combination into compound volumes. SURE works in one or two stages, optionally using MIAMI Fuse software first to determine a global affine registration before iteratively dividing the volume into subvolumes

Jochen F. Krucker; Gerald L. Lecarpentier; J. Brian Fowlkes; Paul L. Carson

2002-01-01

322

Recovering 3D tumor locations from 2D bioluminescence images  

E-print Network

to facilitate its use in analyzing the repeated imaging of the same animal transplanted with gene-marked cells. We present our method using both phantom studies and real studies on small animals. ... for monitoring molecular events in intact living animals. Important applications of this imaging technique

Huang, Xiaolei

323

A high-level 3D visualization API for Java and ImageJ  

PubMed Central

Background Current imaging methods such as Magnetic Resonance Imaging (MRI), Confocal microscopy, Electron Microscopy (EM) or Selective Plane Illumination Microscopy (SPIM) yield three-dimensional (3D) data sets in need of appropriate computational methods for their analysis. The reconstruction, segmentation and registration are best approached from the 3D representation of the data set. Results Here we present a platform-independent framework based on Java and Java 3D for accelerated rendering of biological images. Our framework is seamlessly integrated into ImageJ, a free image processing package with a vast collection of community-developed biological image analysis tools. Our framework enriches the ImageJ software libraries with methods that greatly reduce the complexity of developing image analysis tools in an interactive 3D visualization environment. In particular, we provide high-level access to volume rendering, volume editing, surface extraction, and image annotation. The ability to rely on a library that removes the low-level details enables concentrating software development efforts on the algorithm implementation parts. Conclusions Our framework enables biomedical image software development to be built with 3D visualization capabilities with very little effort. We offer the source code and convenient binary packages along with extensive documentation at http://3dviewer.neurofly.de. PMID:20492697

2010-01-01

324

Light sheet adaptive optics microscope for 3D live imaging  

NASA Astrophysics Data System (ADS)

We report on the incorporation of adaptive optics (AO) into the imaging arm of a selective plane illumination microscope (SPIM). SPIM has recently emerged as an important tool for life science research due to its ability to deliver high-speed, optically sectioned, time-lapse microscope images from deep within selected in vivo samples. SPIM provides a very interesting system for the incorporation of AO, as the illumination and imaging paths are decoupled and AO may be useful in both paths. In this paper, we report the use of AO applied to the imaging path of a SPIM, demonstrating significant improvement in image quality of a live GFP-labeled transgenic zebrafish embryo heart using a modal, wavefront-sensorless approach and a heart synchronization method. These experimental results are linked to a computational model showing that significant aberrations are produced by the tube holding the sample, in addition to the aberration from the biological sample itself.

Bourgenot, C.; Taylor, J. M.; Saunter, C. D.; Girkin, J. M.; Love, G. D.

2013-02-01

325

Automatic 3D ultrasound calibration for image guided therapy using intramodality image registration.  

PubMed

Many real-time ultrasound (US) guided therapies can benefit from management of motion-induced anatomical changes with respect to a previously acquired computerized anatomy model. Spatial calibration is a prerequisite to transforming US image information to the reference frame of the anatomy model. We present a new method for calibrating 3D US volumes using intramodality image registration, derived from the 'hand-eye' calibration technique. The method is fully automated by implementing data rejection based on sensor displacements, automatic registration over overlapping image regions, and a self-consistency error metric evaluated continuously during calibration. We also present a novel method for validating US calibrations based on measurement of physical phantom displacements within US images. Both calibration and validation can be performed on arbitrary phantoms. Results indicate that normalized mutual information and localized cross correlation produce the most accurate 3D US registrations for calibration. Volumetric image alignment is more accurate and reproducible than point selection for validating the calibrations, yielding <1.5 mm root mean square error, a significant improvement relative to previously reported hand-eye US calibration results. Comparison of two different phantoms for calibration and for validation revealed significant differences for validation (p = 0.003) but not for calibration (p = 0.795). PMID:24099806

Schlosser, Jeffrey; Kirmizibayrak, Can; Shamdasani, Vijay; Metz, Steve; Hristov, Dimitre

2013-11-01

326

3D printing of intracranial artery stenosis based on the source images of magnetic resonance angiograph  

PubMed Central

Background and purpose Three dimensional (3D) printing techniques for brain diseases have not been widely studied. We attempted to 'print' the segments of intracranial arteries based on magnetic resonance imaging. Methods Three dimensional magnetic resonance angiography (MRA) was performed on two patients with middle cerebral artery (MCA) stenosis. Using scale-adaptive vascular modeling, 3D vascular models were constructed from the MRA source images. The magnified (ten times) regions of interest (ROI) of the stenotic segments were selected and fabricated by a 3D printer with a resolution of 30 μm. A survey of 8 clinicians was performed to evaluate the accuracy of the 3D printing results as compared with the MRA findings (4 grades; grade 1: consistent with MRA and provides additional visual information; grade 2: consistent with MRA; grade 3: not consistent with MRA; grade 4: not consistent with MRA and provides probably misleading information). If a 3D-printed vessel segment was ideally matched to the MRA findings (grade 1 or 2), the 3D printing was defined as successful. Results Seven responders marked the 3D printing results "grade 1", while one marked "grade 4". Therefore, 87.5% of the clinicians considered the 3D printing successful. Conclusions Our pilot study confirms the feasibility of using the 3D printing technique in the research field of intracranial artery diseases. Further investigations are warranted to optimize this technique and translate it into clinical practice. PMID:25333049

Liu, Jia; Li, Ming-Li; Sun, Zhao-Yong; Chen, Jie

2014-01-01

327

Microwave image reconstruction from 3-D fields coupled to 2-D parameter estimation.  

PubMed

An efficient Gauss-Newton iterative imaging technique utilizing a three-dimensional (3-D) field solution coupled to a two-dimensional (2-D) parameter estimation scheme (3-D/2-D) is presented for microwave tomographic imaging in medical applications. While electromagnetic wave propagation is described fully by a 3-D vector field, a 3-D scalar model has been applied to improve the efficiency of the iterative reconstruction process with apparently limited reduction in accuracy. In addition, the image recovery has been restricted to 2-D but is generalizable to three dimensions. Image artifacts related primarily to 3-D effects are reduced when compared with results from an entirely two-dimensional inversion (2-D/2-D). Important advances in terms of improving algorithmic efficiency include use of a block solver for computing the field solutions and application of the dual mesh scheme and adjoint approach for Jacobian construction. Methods which enhance the image quality such as the log-magnitude/unwrapped phase minimization were also applied. Results obtained from synthetic measurement data show that the new 3-D/2-D algorithm consistently outperforms its 2-D/2-D counterpart in terms of reducing the effective imaging slice thickness in both permittivity and conductivity images over a range of inclusion sizes and background medium contrasts. PMID:15084072
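The Gauss-Newton update that drives such iterative reconstructions can be illustrated on a toy one-parameter least-squares problem (a sketch only, not the paper's 3-D/2-D algorithm; the forward model and data below are invented): each step solves the linearized normal equations J^T J Δx = -J^T r.

```python
def gauss_newton(residual, jacobian, x0, iters=20):
    """Gauss-Newton for a 1-parameter least-squares fit: at each step
    solve (J^T J) dx = -J^T r and update x, the scalar analogue of the
    normal-equation solve used in tomographic reconstruction."""
    x = x0
    for _ in range(iters):
        r = residual(x)          # residual vector at the current x
        j = jacobian(x)          # derivative of each residual w.r.t. x
        jtj = sum(v * v for v in j)
        jtr = sum(a * b for a, b in zip(j, r))
        x -= jtr / jtj
    return x

# Toy forward model y = x^2 * t, with data generated at x = 3.
ts = [1.0, 2.0, 3.0]
ys = [9.0, 18.0, 27.0]
res = lambda x: [x * x * t - y for t, y in zip(ts, ys)]
jac = lambda x: [2 * x * t for t in ts]
print(round(gauss_newton(res, jac, x0=1.0), 6))  # 3.0
```

For this quadratic model the iteration reduces to Newton's method for the square root, so it converges in a handful of steps; the real reconstruction replaces the scalar divide with a block solve and an adjoint-based Jacobian, as the abstract notes.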

Fang, Qianqian; Meaney, Paul M; Geimer, Shireen D; Streltsov, Anatoly V; Paulsen, Keith D

2004-04-01

328

Hands-on guide for 3D image creation for geological purposes  

NASA Astrophysics Data System (ADS)

Geological structures in outcrops or hand specimens are inherently three dimensional (3D), and therefore better understandable if viewed in 3D. While 3D models can easily be created, manipulated, and looked at from all sides on the computer screen (e.g., using photogrammetry or laser scanning data), 3D visualizations for publications or conference posters are much more challenging as they have to live in a 2D-world (i.e., on a sheet of paper). Perspective 2D visualizations of 3D models do not fully transmit the "feeling and depth of the third dimension" to the audience; but this feeling is desirable for a better examination and understanding in 3D of the structure under consideration. One of the very few possibilities to generate real 3D images, which work on a 2D display, is by using so-called stereoscopic images. Stereoscopic images are two images of the same object recorded from two slightly offset viewpoints. Special glasses and techniques have to be used to make sure that one image is seen only by one eye, and the other image is seen by the other eye, which together lead to the "3D effect". Geoscientists are often familiar with such 3D images. For example, geomorphologists traditionally view stereographic orthophotos by employing a mirror-stereoscope. Nowadays, petroleum-geoscientists examine high-resolution 3D seismic data sets in special 3D visualization rooms. One of the methods for generating and viewing a stereoscopic image, which does not require a high-tech viewing device, is to create a so-called anaglyph. The principle is to overlay two images saturated in red and cyan, respectively. The two images are then viewed through red-cyan-stereoscopic glasses. This method is simple and cost-effective, but has some drawbacks in preserving colors accurately. A similar method is used in 3D movies, where polarized light or shuttering techniques are used to separate the left from the right image, which allows preserving the original colors.
The advantage of red-cyan anaglyphs is their simplicity and the possibility to print them on normal paper or project them using a conventional projector. Producing 3D stereoscopic images is much easier than commonly thought. Our hands-on poster provides an easy-to-use guide for producing 3D stereoscopic images. A few simple rules of thumb are presented that define how photographs of any scene or object have to be shot to produce good-looking 3D images. We use the free software Stereophotomaker (http://stereo.jpn.org/eng/stphmkr) to produce anaglyphs and provide red-cyan 3D glasses for viewing them. Our hands-on poster is easy to adapt and helps any geologist to present his/her field or hand specimen photographs in a much more fashionable 3D way for future publications or conference posters.
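The red-cyan overlay described above reduces, per pixel, to taking the red channel from the left-eye image and the green and blue (cyan) channels from the right-eye image. A minimal sketch, not from the abstract (images represented as nested lists of RGB tuples; the left-red/right-cyan channel assignment assumes standard red-cyan glasses):

```python
def make_anaglyph(left, right):
    """Combine a stereo pair into a red-cyan anaglyph: the left image
    supplies the red channel (seen through the red filter), the right
    image supplies green and blue (seen through the cyan filter)."""
    return [
        [(lp[0], rp[1], rp[2]) for lp, rp in zip(lrow, rrow)]
        for lrow, rrow in zip(left, right)
    ]

# Tiny 1x2 stereo pair: pure red on the left, pure cyan on the right.
left = [[(255, 0, 0), (255, 0, 0)]]
right = [[(0, 255, 255), (0, 255, 255)]]
print(make_anaglyph(left, right)[0][0])  # (255, 255, 255)
```

The colour-fidelity drawback mentioned in the abstract is visible here: any information the left image carried in green/blue (and the right image in red) is simply discarded.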

Frehner, Marcel; Tisato, Nicola

2013-04-01

329

Accelerated computation of hologram patterns by use of interline redundancy of 3-D object images  

NASA Astrophysics Data System (ADS)

We present a new approach for accelerated computation of hologram patterns of a three-dimensional (3-D) image by taking into account its interline redundancy. Interline redundant data of a 3-D image are extracted with the differential pulse code modulation (DPCM) algorithm, and then the CGH patterns for these compressed line images are generated with the novel lookup table (N-LUT) technique. To confirm the feasibility of the proposed method, experiments with four kinds of 3-D test objects are carried out, and the results are compared with those of conventional methods in terms of the number of object points and the computation time. Experimental results show that the number of calculated object points and the computation time for one object point are reduced by 73.3% and 83.9%, on average, for the four test 3-D images with the proposed method employing a top-down scanning method, compared to the conventional method.
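The DPCM step exploited above can be sketched in one dimension per line (a simplified illustration, not the paper's N-LUT pipeline): each scan line is stored as its difference from the previous line, so unchanged object points cost nothing to re-encode.

```python
def dpcm_encode_lines(image):
    """DPCM along the vertical axis: keep the first line verbatim and
    store every later line as its difference from the line above."""
    encoded = [list(image[0])]
    for prev, cur in zip(image, image[1:]):
        encoded.append([c - p for p, c in zip(prev, cur)])
    return encoded

def dpcm_decode_lines(encoded):
    """Invert the encoding by cumulative summation down the lines."""
    decoded = [list(encoded[0])]
    for diff in encoded[1:]:
        decoded.append([p + d for p, d in zip(decoded[-1], diff)])
    return decoded

img = [[10, 20], [12, 20], [12, 25]]
enc = dpcm_encode_lines(img)
print(enc)                             # [[10, 20], [2, 0], [0, 5]]
print(dpcm_decode_lines(enc) == img)   # True
```

The zeros in the encoded lines are exactly the interline redundancy: in the hologram computation, only the nonzero differences contribute new object points.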

Kim, Seung-Cheol; Choe, Woo-Young; Kim, Eun-Soo

2011-09-01

330

Dual-Mode Intracranial Catheter Integrating 3D Ultrasound Imaging and Hyperthermia for Neuro-oncology  

E-print Network

Dual-Mode Intracranial Catheter Integrating 3D Ultrasound Imaging and Hyperthermia for Neuro-oncology … and both probes were used in an in vivo canine brain model to image anatomical structures and color

Smith, Stephen

331

Real-time drill monitoring and control using building information models augmented with 3D imaging data  

E-print Network

Real-time drill monitoring and control using building information models augmented with 3D imaging data is developed. The 3D imaging technologies were used to map the locations of rebar within a section of a railway

Kamat, Vineet R.

332

A novel automatic image processing algorithm for detection of hard exudates based on retinal image analysis.  

PubMed

We present an automatic image processing algorithm to detect hard exudates. Automatic detection of hard exudates from retinal images is an important problem, since hard exudates are associated with diabetic retinopathy and have been found to be one of the most prevalent early signs of retinopathy. The algorithm is based on Fisher's linear discriminant analysis and makes use of colour information to perform the classification of retinal exudates. We prospectively assessed the algorithm performance using a database containing 58 retinal images with variable colour, brightness, and quality. Our proposed algorithm obtained a sensitivity of 88% with a mean number of 4.83 ± 4.64 false positives per image using the lesion-based performance evaluation criterion, and achieved an image-based classification accuracy of 100% (sensitivity of 100% and specificity of 100%). PMID:17556004
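Fisher's linear discriminant, the classifier named above, projects each pixel's feature vector onto the direction w = Sw⁻¹(m₁ − m₀) that best separates the two classes. A two-feature sketch (the sample data are invented stand-ins; the paper's actual colour features and threshold are not reproduced here):

```python
def fisher_lda_direction(class0, class1):
    """Fisher's linear discriminant for two classes of 2-D feature
    vectors: w = Sw^{-1} (m1 - m0), where Sw is the pooled
    within-class scatter matrix (2x2, inverted in closed form)."""
    def mean(pts):
        n = float(len(pts))
        return (sum(p[0] for p in pts) / n, sum(p[1] for p in pts) / n)

    def scatter(pts, m):
        sxx = sxy = syy = 0.0
        for x, y in pts:
            dx, dy = x - m[0], y - m[1]
            sxx += dx * dx
            sxy += dx * dy
            syy += dy * dy
        return sxx, sxy, syy

    m0, m1 = mean(class0), mean(class1)
    a0, a1 = scatter(class0, m0), scatter(class1, m1)
    sxx, sxy, syy = a0[0] + a1[0], a0[1] + a1[1], a0[2] + a1[2]
    det = sxx * syy - sxy * sxy
    dx, dy = m1[0] - m0[0], m1[1] - m0[1]
    # w = Sw^{-1} d, with the 2x2 inverse written out explicitly
    return ((syy * dx - sxy * dy) / det, (sxx * dy - sxy * dx) / det)

# Invented toy data standing in for per-pixel colour features.
background = [(1, 1), (2, 1), (1, 2), (2, 2)]
exudate = [(5, 5), (6, 5), (5, 6), (6, 6)]
w = fisher_lda_direction(background, exudate)
print(w)  # (2.0, 2.0)
```

Classification then reduces to projecting each pixel's feature vector onto w and thresholding the scalar result.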

Sánchez, Clara I; Hornero, Roberto; López, María I; Aboy, Mateo; Poza, Jesús; Abásolo, Daniel

2008-04-01

333

Use of enhancement algorithm to suppress reflections in 3-D reconstructed capsule endoscopy images  

PubMed Central

In capsule endoscopy (CE), there is research to develop hardware that enables 'real' three-dimensional (3-D) video. However, it should not be forgotten that 'true' 3-D requires dual video images. Inclusion of two cameras within the shell of a capsule endoscope, though, might be unwieldy at present. Therefore, in an attempt to approximate a 3-D reconstruction of the digestive tract surface, software that recovers information (using gradual variation of shading) from monocular two-dimensional CE images has been proposed. Light reflections on the surface of the digestive tract are still a significant problem. Therefore, a phantom model and simulator have been constructed in an attempt to check the validity of a highlight suppression algorithm. Our results confirm that the 3-D representation software performs better with simultaneous application of a highlight reduction algorithm. Furthermore, the 3-D representation follows a good approximation of the real distance to the lumen surface. PMID:24044049

Koulaouzidis, Anastasios; Karargyris, Alexandros

2013-01-01

334

3D Modeling of Outdoor Environments by Integrating Omnidirectional Range and Color Images  

Microsoft Academic Search

This paper describes a 3D modeling method for wide area outdoor environments which is based on integrating omnidirectional range and color images. In the proposed method, outdoor scenes can be efficiently digitized by an omnidirectional laser rangefinder which can obtain a 3D shape with high accuracy and by an omnidirectional multi-camera system (OMS) which can capture a high-resolution color image. Multiple range images are registered

Toshihiro Asai; Masayuki Kanbara; Naokazu Yokoya

2005-01-01

335

Characterization of Dynamic 3-D PET Imaging for Functional Brain Mapping  

Microsoft Academic Search

Methods for optimizing the acquisition, reconstruction and analysis of positron emission tomography (PET) images for functional brain mapping have been investigated. The scatter fraction and noise-equivalent count rate characteristics were measured for the ECAT 951/31R PET scanner operating in septa-extended two-dimensional (2-D) and septa-retracted three-dimensional (3-D) modes. The 3-D mode is shown to provide higher signal-to-noise images than the 2-D

David Barnes; Gary Egan; G. O'keefe; D. Abbott

1997-01-01

336

Remarks on 3D human body posture reconstruction from multiple camera images  

Microsoft Academic Search

This paper proposes a human body posture estimation method based on back projection of human silhouette images extracted from multi-camera images. To achieve real-time 3D human body posture estimation, a server-client system is introduced into the multi-camera system, and improvements to the background subtraction and back projection are investigated. To evaluate the feasibility of the proposed method, 3D estimation experiments of

Yusuke Nagasawa; Takako Ohta; Yukiko Mutsuji; Kazuhiko Takahashi; Masafumi Hashimoto

2007-01-01

337

Reconstructing 3D Human Body Pose from Stereo Image Sequences Using Hierarchical Human Body Model Learning  

Microsoft Academic Search

This paper presents a novel method for reconstructing a 3D human body pose using depth information based on top-down learning. The human body pose is represented by a linear combination of prototypes of 2D depth images and their corresponding 3D body models in terms of the position of a predetermined set of joints. In a 2D depth image, the optimal

Hee-deok Yang; Seong-whan Lee

2006-01-01

338

Registration of 3D Photographs with Spiral CT Images for Soft Tissue Simulation in Maxillofacial Surgery  

Microsoft Academic Search

Prediction of the facial outcome after maxillofacial surgery is not only of major interest for surgeons but also for patients. A mirror-like image of the expected surgical outcome gives important information for the patient and provides the surgeon with a good communication tool. This paper presents a method for registration of 3D photographs with 3D CT images, to provide the

Pieter De Groeve; Filip Schutyser; Johan Van Cleynenbreugel; Paul Suetens

2001-01-01

339

An image encryption algorithm based on 3D cellular automata and chaotic maps  

NASA Astrophysics Data System (ADS)

A novel encryption algorithm to cipher digital images is presented in this work. The digital image is rendering into a three-dimensional (3D) lattice and the protocol consists of two phases: the confusion phase where 24 chaotic Cat maps are applied and the diffusion phase where a 3D cellular automata is evolved. The encryption method is shown to be secure against the most important cryptanalytic attacks.
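The confusion phase relies on cat maps being bijections of the pixel grid: they scramble positions without losing a single pixel and are exactly invertible. A minimal 2-D sketch (the paper applies 24 chaotic cat maps on a 3-D lattice; this single iteration with matrix [[1, 1], [1, 2]] is illustrative only):

```python
def cat_map(image):
    """One iteration of an Arnold cat map on an N x N image: the pixel
    at column x, row y moves to ((x + y) mod N, (x + 2y) mod N), i.e.
    the matrix [[1, 1], [1, 2]] acting on (x, y). A pure permutation
    of positions: every pixel value survives, just relocated."""
    n = len(image)
    out = [[None] * n for _ in range(n)]
    for y in range(n):
        for x in range(n):
            out[(x + 2 * y) % n][(x + y) % n] = image[y][x]
    return out

img = [[1, 2], [3, 4]]
scrambled = cat_map(img)
print(scrambled)  # [[1, 3], [4, 2]]
# Same multiset of pixel values, confirming the map is a permutation:
print(sorted(v for row in scrambled for v in row))  # [1, 2, 3, 4]
```

Because the map only permutes positions, a separate diffusion phase (the 3D cellular automaton in the paper) is still needed to change the pixel values themselves.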

Del Rey, A. Martín; Sánchez, G. Rodríguez

2015-05-01

340

Image-based 3D scene analysis for navigation of autonomous airborne systems  

Microsoft Academic Search

In this paper we describe a method for automatic determination of sensor pose (position and orientation) related to a 3D landmark or scene model. The method is based on geometrical matching of 2D image structures with projected elements of the associated 3D model. For structural image analysis and scene interpretation, a blackboard-based production system is used resulting in a symbolic

Klaus Jaeger; Karlheinz Bers

2001-01-01

341

Application of Medical Imaging Software to 3D Visualization of Astronomical Data  

Microsoft Academic Search

The AstroMed project at Harvard University's Initiative in Innovative Computing (IIC) is working on improved visualization and data-sharing solutions that are applicable to the fields of both astronomy and medicine. The current focus is on the application of medical image visualization and analysis techniques to three-dimensional (3D) astronomical data. The 3D Slicer and OsiriX medical imaging tools have been used

M. Borkin; A. Goodman; M. Halle; D. Alan

2007-01-01

342

A dual-modal retinal imaging system with adaptive optics  

PubMed Central

An adaptive optics scanning laser ophthalmoscope (AO-SLO) is adapted to provide optical coherence tomography (OCT) imaging. The AO-SLO function is unchanged. The system uses the same light source, scanning optics, and adaptive optics in both imaging modes. The result is a dual-modal system that can acquire retinal images in both en face and cross-section planes at the single cell level. A new spectral shaping method is developed to reduce the large sidelobes in the coherence profile of the OCT imaging when a non-ideal source is used with a minimal introduction of noise. The technique uses a combination of two existing digital techniques. The thickness and position of the traditionally named inner segment/outer segment junction are measured from individual photoreceptors. In-vivo images of healthy and diseased human retinas are demonstrated. PMID:24514529

Meadway, Alexander; Girkin, Christopher A.; Zhang, Yuhua

2013-01-01

343

3D object-oriented image analysis in 3D geophysical modelling: Analysing the central part of the East African Rift System  

NASA Astrophysics Data System (ADS)

Non-uniqueness of satellite gravity interpretation has traditionally been reduced by using a priori information from seismic tomography models. This reduction in the non-uniqueness has been based on velocity-density conversion formulas or user interpretation of the 3D subsurface structures (objects) based on the seismic tomography models and then forward modelling these objects. However, this form of object-based approach has been done without a standardized methodology on how to extract the subsurface structures from the 3D models. In this research, a 3D object-oriented image analysis (3D OOA) approach was implemented to extract the 3D subsurface structures from geophysical data. The approach was applied on a 3D shear wave seismic tomography model of the central part of the East African Rift System. Subsequently, the extracted 3D objects from the tomography model were reconstructed in the 3D interactive modelling environment IGMAS+, and their density contrast values were calculated using an object-based inversion technique to calculate the forward signal of the objects and compare it with the measured satellite gravity. Thus, a new object-based approach was implemented to interpret and extract the 3D subsurface objects from 3D geophysical data. We also introduce a new approach to constrain the interpretation of the satellite gravity measurements that can be applied using any 3D geophysical model.

Fadel, I.; van der Meijde, M.; Kerle, N.; Lauritsen, N.

2015-03-01

344

3D Image Interpolation Based on Directional Coherence Yongmei Wang Zhunping Zhang  

E-print Network

3D Image Interpolation Based on Directional Coherence. Yongmei Wang, Zhunping Zhang, Baining Guo. Dept. … participated in this work when he was working as an intern at Microsoft Research Asia. Abstract … when compared with traditional image interpolation methods. The basis of DCI is a form of directional

Duncan, James S.

345

Pearling: 3D interactive extraction of tubular structures from volumetric images  

E-print Network

Pearling: 3D interactive extraction of tubular structures from volumetric images. J. Rossignac, B. … and Reasoning Department, Princeton, NJ 08540. Abstract. This paper presents Pearling, a novel three… image. Given a user-supplied initialization, Pearling extracts runs of pearls (balls) from the image

Rossignac, Jarek

346

Use of potential functions in 3D rendering of fractal images from complex functions  

Microsoft Academic Search

Computer graphics is important in developing fractal images visualizing the Mandelbrot and Julia sets from a complex function. Computer rendering is a central tool for obtaining nice fractal images. We render 3D objects with the height of each complex point of a fractal image considering the diverging speed of its orbit. A potential function helps approximate this speed. We propose

Young Bong Kim; Hyoung Seok Kim; Hong Oh Kim; Sung Yong Shin

1996-01-01

347

3-D Flaw Imaging by Inverse Scattering Analysis Using Ultrasonic Array Transducer  

NASA Astrophysics Data System (ADS)

Ultrasonic matrix array transducers have the advantage of receiving flaw echoes simultaneously at various points on a flat surface of the test material. Here we propose 3-D imaging techniques to reconstruct flaw shapes with the array transducer. These techniques are based on linearized inverse scattering methods in the frequency domain. The principal operation of these methods is the integration of the wave data in the K-space. In this study, the 3-D fast Fourier transform is introduced into the inversion algorithm to evaluate the integral in the K-space. Performance of the 3-D imaging technique is demonstrated by using the numerically calculated waveforms by the fast multipole BEM.

Nakahata, K.; Saitoh, T.; Hirose, S.

2007-03-01

348

3-D scalable medical image compression with optimized volume of interest coding.  

PubMed

We present a novel 3-D scalable compression method for medical images with optimized volume of interest (VOI) coding. The method is presented within the framework of interactive telemedicine applications, where different remote clients may access the compressed 3-D medical imaging data stored on a central server and request the transmission of different VOIs from an initial lossy to a final lossless representation. The method employs the 3-D integer wavelet transform and a modified EBCOT with 3-D contexts to create a scalable bit-stream. Optimized VOI coding is attained by an optimization technique that reorders the output bit-stream after encoding, so that those bits belonging to a VOI are decoded at the highest quality possible at any bit-rate, while allowing for the decoding of background information with peripherally increasing quality around the VOI. The bit-stream reordering procedure is based on a weighting model that incorporates the position of the VOI and the mean energy of the wavelet coefficients. The background information with peripherally increasing quality around the VOI allows for placement of the VOI into the context of the 3-D image. Performance evaluations based on real 3-D medical imaging data showed that the proposed method achieves a higher reconstruction quality, in terms of the peak signal-to-noise ratio, than that achieved by 3D-JPEG2000 with VOI coding, when using the MAXSHIFT and general scaling-based methods. PMID:20562038
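The lossy-to-lossless property rests on integer wavelet transforms, which are exactly invertible in integer arithmetic. A one-dimensional sketch using the simplest member of that family, the S (integer Haar) transform (an illustration only; the paper uses a 3-D integer wavelet with EBCOT coding):

```python
def s_transform(signal):
    """Reversible integer S-transform: rounded pair averages go to the
    low band, pair differences to the high band (even length assumed).
    No information is lost despite the integer division."""
    low = [(a + b) // 2 for a, b in zip(signal[::2], signal[1::2])]
    high = [a - b for a, b in zip(signal[::2], signal[1::2])]
    return low, high

def inverse_s_transform(low, high):
    """Exact inverse: recover each pair from its average and difference
    using floor arithmetic consistent with the forward transform."""
    out = []
    for s, d in zip(low, high):
        a = s + (d + 1) // 2
        out.extend([a, a - d])
    return out

x = [10, 12, 7, 3, 0, 255]
low, high = s_transform(x)
print(low, high)                            # [11, 5, 127] [-2, 4, -255]
print(inverse_s_transform(low, high) == x)  # True
```

Because the inverse is exact, truncating the coded bit-stream gives a lossy image while the full stream reproduces the original losslessly, which is what lets a VOI be refined from lossy to lossless on demand.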

Sanchez, Victor; Abugharbieh, Rafeef; Nasiopoulos, Panos

2010-10-01

349

Dicom Color Medical Image Compression using 3D-SPIHT for Pacs Application.  

PubMed

The proposed algorithm presents an application of the 3D-SPIHT algorithm to color volumetric DICOM medical images using 3D wavelet decomposition and a 3D spatial dependence tree. The wavelet decomposition is accomplished with biorthogonal 9/7 filters. 3D-SPIHT is the modern-day benchmark for three-dimensional image compression. The three-dimensional coding is based on the observation that the sequences of images are contiguous in the temporal axis and there is no motion between slices. Therefore, the 3D discrete wavelet transform can fully exploit the inter-slice correlations. The set partitioning techniques involve a progressive coding of the wavelet coefficients. The 3D-SPIHT is implemented and the rate-distortion (peak signal-to-noise ratio (PSNR) vs. bit rate) performances are presented for volumetric medical datasets by using biorthogonal 9/7 filters. The results are compared with the previous results of the JPEG 2000 standard. Results show that the 3D-SPIHT method exploits the color space relationships as well as maintaining the full embeddedness required by color image sequence compression, and gives better performance in terms of the PSNR and compression ratio than JPEG 2000. The results suggest an effective practical implementation for PACS applications. PMID:23675076

Kesavamurthy, T; Rani, Subha

2008-06-01

350

OVERALL PROCEDURES PROTOCOL AND PATIENT ENROLLMENT PROTOCOL: TESTING FEASIBILITY OF 3D ULTRASOUND DATA ACQUISITION AND RELIABILITY OF DATA RETRIEVAL FROM STORED 3D IMAGES  

EPA Science Inventory

The purpose of this study is to examine the feasibility of collecting, transmitting, and analyzing 3-D ultrasound data in the context of a multi-center study of pregnant women. The study will also examine the reliability of measurements obtained from 3-D imag...

351

3D segmentation and image annotation for quantitative diagnosis in lung CT images with pulmonary lesions  

NASA Astrophysics Data System (ADS)

Pulmonary nodules and ground glass opacities are highly significant findings in high-resolution computed tomography (HRCT) of patients with pulmonary lesion. The appearances of pulmonary nodules and ground glass opacities show a relationship with different lung diseases. According to the corresponding characteristics of the lesion, pertinent segmentation methods and quantitative analysis are helpful for controlling and treating diseases at an earlier and potentially more curable stage. Currently, most of the studies have focused on two-dimensional quantitative analysis of these kinds of diseases. Compared to two-dimensional images, three-dimensional quantitative analysis can take full advantage of the isotropic image data acquired by using thin-slice HRCT and has better quantitative precision for clinical diagnosis. This presentation designs a computer-aided diagnosis component to segment 3D disease areas of nodules and ground glass opacities in lung CT images, and uses AIML (Annotation and Image Markup Language) to annotate the segmented 3D pulmonary lesions with quantitative measurement information, which may provide more features and information to the radiologists in clinical diagnosis.

Li, Suo; Zhu, Yanjie; Sun, Jianyong; Zhang, Jianguo

2013-03-01

352

A Kalman filter approach for denoising and deblurring 3-D microscopy images.  

PubMed

This paper proposes a new method for removing noise and blurring from 3-D microscopy images. The main contribution is the definition of a space-variant generating model of a 3-D signal, which is capable of stochastically describing a wide class of 3-D images. Unlike other approaches, the space-variant structure allows the model to consider information on edge locations, if available. A suitable description of the image acquisition process, including blurring and noise, is then associated with the model. A state-space realization is finally derived, which is amenable to the application of a standard Kalman filter as an image restoration algorithm. The resulting method is able to remove, at each spatial step, both blur and noise via a linear minimum-variance recursive one-shot procedure, which does not require simultaneous processing of the whole image. Numerical results on synthetic and real microscopy images confirm the merit of the approach. PMID:24122555
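The recursive one-shot character of Kalman filtering can be seen in a scalar sketch: each sample is absorbed in a single predict/update step, never revisiting earlier data (this models the signal as a random walk, far simpler than the paper's space-variant 3-D state-space model; q and r are assumed noise variances):

```python
def kalman_denoise(measurements, q=0.01, r=1.0):
    """Scalar Kalman filter: the signal is modelled as a random walk
    with process-noise variance q, observed with noise variance r."""
    x, p = measurements[0], 1.0      # initial state and its variance
    estimates = [x]
    for z in measurements[1:]:
        p += q                       # predict: the walk adds variance
        k = p / (p + r)              # Kalman gain
        x += k * (z - x)             # update with the innovation
        p *= 1.0 - k                 # posterior variance shrinks
        estimates.append(x)
    return estimates

noisy = [5.4, 4.8, 5.2, 4.9, 5.1, 5.3, 4.7]
print(kalman_denoise(noisy))  # estimates pulled toward the underlying level
```

The memory cost is constant (only x and p are carried forward), which is the property the abstract exploits to avoid processing the whole 3-D image simultaneously.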

Conte, Francesco; Germani, Alfredo; Iannello, Giulio

2013-12-01

353

Landmark-based 3D fusion of SPECT and CT images  

NASA Astrophysics Data System (ADS)

In this paper we present interactive visualization procedures for registration of SPECT and CT images based on landmarks. Because of the poor anatomic detail available in many SPECT images, registration of SPECT images with other modalities often requires the use of external markers. These markers may correspond to anatomic structures identifiable in the other modality image. In this work, we present a method to nonrigidly register SPECT and CT images based on automatic marker localization and interactive anatomic localization using 3D surface renderings of skin. The images are registered in 3D by fitting low order polynomials which are constrained to be near rigid. The method developed here exploits 3D information to attain greater accuracy and reduces the amount of time needed for expert interaction.

Brown, Lisa G.; Maguire, Gerald Q., Jr.; Noz, Marilyn E.

1993-08-01

354

Space Radar Image Isla Isabela in 3-D  

NASA Technical Reports Server (NTRS)

This is a three-dimensional view of Isabela, one of the Galapagos Islands located off the western coast of Ecuador, South America. This view was constructed by overlaying a Spaceborne Imaging Radar-C/X-band Synthetic Aperture Radar (SIR-C/X-SAR) image on a digital elevation map produced by TOPSAR, a prototype airborne interferometric radar which produces simultaneous image and elevation data. The vertical scale in this image is exaggerated by a factor of 1.87. The SIR-C/X-SAR image was taken on the 40th orbit of space shuttle Endeavour. The image is centered at about 0.5 degree south latitude and 91 degrees west longitude and covers an area of 75 by 60 kilometers (47 by 37 miles). The radar incidence angle at the center of the image is about 20 degrees. The western Galapagos Islands, which lie about 1,200 kilometers (750 miles) west of Ecuador in the eastern Pacific, have six active volcanoes similar to the volcanoes found in Hawaii and reflect the volcanic processes that occur where the ocean floor is created. Since the time of Charles Darwin's visit to the area in 1835, there have been more than 60 recorded eruptions on these volcanoes. This SIR-C/X-SAR image of Alcedo and Sierra Negra volcanoes shows the rougher lava flows as bright features, while ash deposits and smooth pahoehoe lava flows appear dark. Vertical exaggeration of relief is a common tool scientists use to detect relationships between structure (for example, faults and fractures) and topography. Spaceborne Imaging Radar-C and X-Synthetic Aperture Radar (SIR-C/X-SAR) is part of NASA's Mission to Planet Earth. The radars illuminate Earth with microwaves allowing detailed observations at any time, regardless of weather or sunlight conditions. SIR-C/X-SAR uses three microwave wavelengths: L-band (24 cm), C-band (6 cm) and X-band (3 cm). The multi-frequency data will be used by the international scientific community to better understand the global environment and how it is changing. 
The SIR-C/X-SAR data, complemented by aircraft and ground studies, will give scientists clearer insights into those environmental changes which are caused by nature and those changes which are induced by human activity. SIR-C was developed by NASA's Jet Propulsion Laboratory. X-SAR was developed by the Dornier and Alenia Spazio companies for the German space agency, Deutsche Agentur fuer Raumfahrtangelegenheiten (DARA), and the Italian space agency, Agenzia Spaziale Italiana (ASI).

1999-01-01

355

3D imaging of nuclear reactions using GEM TPC  

NASA Astrophysics Data System (ADS)

We present a prototype of a time projection chamber with planar electronic readout. The particular aspect of the readout is the arrangement and connection of pads in three linear arrays. A track of an ionizing particle may be reconstructed by applying specially developed algorithms to the signals generated simultaneously in the three linear arrays of strips rotated by 60°. This provides the measurement of the coordinates of the track segment corresponding to a defined time slice in the plane perpendicular to the drift vector. The relative coordinate in the orthogonal direction is provided by measuring the time sequence of signals at the known drift velocity. The 3D reconstruction of charged tracks from nuclear reactions at low energies is expected to reach a precision comparable to that of a pixel readout, but with significantly reduced electronics costs. In this work the results of the first experiments using this TPC are presented. The reconstructed tracks of α particles from the decay of 222Rn, obtained with a simple algorithm, are shown. The encouraging results confirm the capability of such a TPC to measure low-energy charged products of nuclear reactions and nuclear decays.
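The geometric core of the readout described above can be sketched compactly: each of the three strip arrays, rotated by 60°, measures the projection of a hit onto its strip-normal direction, so three measurements form an overdetermined linear system for the two in-plane coordinates. This is an illustrative sketch only; the 0°/60°/120° orientations and the least-squares combination are assumptions, not the detector's actual calibration.

```python
import numpy as np

def strip_to_xy(u, v, w):
    """Recover the (x, y) hit position from three strip coordinates.

    Each readout plane measures the projection of the hit onto its
    strip-normal direction; with planes rotated by 60 degrees the
    three projections give an overdetermined 3x2 linear system,
    solved here in the least-squares sense.
    """
    angles = np.deg2rad([0.0, 60.0, 120.0])           # assumed orientations
    N = np.stack([np.cos(angles), np.sin(angles)], axis=1)  # (3, 2) normals
    xy, *_ = np.linalg.lstsq(N, np.array([u, v, w]), rcond=None)
    return xy
```

The redundancy (three measurements for two unknowns) is what lets the strip readout disambiguate multiple simultaneous hits in a time slice, at far lower channel count than a full pixel plane.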

Bihałowicz, Jan S.; Ćwiok, Mikołaj; Dominik, Wojciech; Kasprowicz, Grzegorz; Poźniak, Krzysztof

2014-11-01

356

3D image reconstruction of fiber systems using electron tomography.  

PubMed

Over the past several years, electron microscopists and materials researchers have shown increased interest in electron tomography (reconstruction of three-dimensional information from a tilt series of bright-field images obtained in a transmission electron microscope (TEM)). In this research, electron tomography has been used to reconstruct a three-dimensional image of fiber structures from secondary electron images in a scanning electron microscope (SEM). The technique is used to examine the structure of a fiber system before and after deformation. A test sample of steel wool was tilted around a single axis from -10° to 60° in one-degree steps, with images taken at every degree; three-dimensional images were reconstructed for the specimen of fine steel fibers. This method is capable of reconstructing the three-dimensional morphology of this type of lineal structure and of obtaining features such as tortuosity, contact points, and linear density that are important in defining the mechanical properties of these materials. PMID:25464156

Fakron, Osama M; Field, David P

2015-02-01

357

Radar Imaging of Spheres in 3D using MUSIC  

SciTech Connect

We have shown that multiple spheres can be imaged by linear and planar EM arrays using only one component of polarization. The imaging approach involves calculating the SVD of the scattering response matrix, selecting a subset of singular values that represents noise, and evaluating the MUSIC functional. The noise threshold applied to the spectrum of singular values for optimal performance is typically around 1%. The resulting signal subspace includes more than one singular value per sphere. The presence of reflections from the ground improves height localization, even for a linear array parallel to the ground. However, the interference between direct and reflected energy modulates the field, creating periodic nulls that can obscure targets in typical images. These nulls are largely eliminated by normalizing the MUSIC functional with the broadside beam pattern of the array. The resulting images show excellent localization for 1 and 2 spheres. The performance for the 3-sphere configurations is complicated by shadowing effects and the greater range of the 3rd sphere in case 2. Two of the three spheres are easily located by MUSIC but the third is difficult to distinguish from other local maxima of the complex imaging functional. Improvement is seen when the linear array is replaced with a planar array, which increases the effective aperture height. Further analysis of the singular values and their relationship to modes of scattering from the spheres, as well as better ways to exploit polarization, should improve performance. Work along these lines is currently being pursued by the authors.
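The MUSIC pipeline summarized above (SVD of the response matrix, an ~1% noise threshold on the singular values, then evaluation of the MUSIC functional) can be sketched for a generic multistatic array. The array geometry, steering model, and scatterer response below are illustrative assumptions, not the report's configuration; the beam-pattern normalization step is omitted.

```python
import numpy as np

def music_functional(K, steering, noise_thresh=0.01):
    """Build the MUSIC imaging functional from a response matrix K.

    Singular vectors whose singular values fall below
    noise_thresh * s_max span the noise subspace; the functional is
    large wherever the (normalized) steering vector is nearly
    orthogonal to that subspace, i.e. at scatterer locations.
    """
    U, s, _ = np.linalg.svd(K)
    noise = U[:, s < noise_thresh * s[0]]   # noise-subspace basis

    def functional(r):
        g = steering(r)
        g = g / np.linalg.norm(g)
        return 1.0 / (np.linalg.norm(noise.conj().T @ g) + 1e-12)

    return functional
```

Evaluating the functional on a grid of candidate locations produces the MUSIC image; peaks mark the scatterers once the noise threshold is chosen well.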

Chambers, D H; Berryman, J G

2003-01-21

358

3-D capacitance density imaging of fluidized bed  

DOEpatents

A three-dimensional capacitance density imaging of a gasified bed or the like in a containment vessel is achieved using a plurality of electrodes provided circumferentially about the bed in levels and along the bed in channels. The electrodes are individually and selectively excited electrically at each level to produce a plurality of current flux field patterns generated in the bed at each level. The current flux field patterns are suitably sensed and a density pattern of the bed at each level determined. By combining the determined density patterns at each level, a three-dimensional density image of the bed is achieved.

Fasching, George E. (653 Vista Pl., Morgantown, WV 26505)

1990-01-01

359

Noninvasive Imaging of Retinal Morphology and Microvasculature in Obese Mice Using Optical Coherence Tomography and Optical Microangiography  

PubMed Central

Purpose. To evaluate early diabetes-induced changes in retinal thickness and microvasculature in a type 2 diabetic mouse model by using optical coherence tomography (OCT)/optical microangiography (OMAG). Methods. Twenty-two-week-old obese (OB) BTBR mice (n = 10) and wild-type (WT) control mice (n = 10) were imaged. Three-dimensional (3D) data volumes were captured with spectral domain OCT using an ultrahigh-sensitive OMAG scanning protocol for 3D volumetric angiography of the retina and a dense A-scan protocol for measurement of the total retinal blood flow (RBF) rate. The thicknesses of the nerve fiber layer (NFL) and of the NFL to the inner plexiform layer (IPL) were measured and compared between OB and WT mice. The linear capillary densities within intermediate and deep capillary layers were determined by the number of capillaries crossing a 500-μm line. The RBF rate was evaluated using an en face Doppler approach. These quantitative measurements were compared between OB and WT mice. Results. The retinal thickness of the NFL to IPL was significantly reduced in OB mice (P < 0.01) compared to that in WT mice, whereas the NFL thickness between the two was unchanged. 3D depth-resolved OMAG angiography revealed the first in vivo 3D model of mouse retinal microcirculation. Although no obvious differences in capillary vessel densities of the intermediate and deep capillary layers were detected between normal and OB mice, the total RBF rate was significantly lower (P < 0.05) in OB mice than in WT mice. Conclusions. We conclude that OB BTBR mice have significantly reduced NFL-IPL thicknesses and total RBF rates compared with those of WT mice, as imaged by OCT/OMAG. OMAG provides an unprecedented capability for high-resolution depth-resolved imaging of mouse retinal vessels and blood flow that may play a pivotal role in providing a noninvasive method for detecting early microvascular changes in patients with diabetic retinopathy. PMID:24458155

Zhi, Zhongwei; Chao, Jennifer R.; Wietecha, Tomasz; Hudkins, Kelly L.; Alpers, Charles E.; Wang, Ruikang K.

2014-01-01

360

Image-based indoor localization system based on 3D SfM model  

NASA Astrophysics Data System (ADS)

Indoor localization is an important research topic for both the robotics and signal processing communities. In recent years, image-based localization has also been employed in indoor environments because the necessary equipment is readily available. After an image is captured and matched against an image database, the best-matching image is returned along with navigation information. By additionally allowing camera pose estimation, an image-based localization system built on a Structure-from-Motion reconstruction model can achieve higher accuracy than methods that search through a 2D image database. However, this emerging technique has so far been applied only in outdoor environments. In this paper, we introduce the 3D SfM model based image-based localization system to the indoor localization task. We capture images of the indoor environment and reconstruct the 3D model. For localization, we simply match images captured with a mobile device against the 3D reconstructed model to localize the image. In this process, we use visual words and approximate nearest neighbor methods to accelerate the search for the query features' correspondences. Within each visual word, we conduct a linear search to detect correspondences. Our experiments show that image-based localization based on a 3D SfM model gives good results in terms of both accuracy and speed.
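The correspondence search described above (quantize descriptors to visual words, then linear-search only within the matching word's bucket) can be sketched as below. The helper names, the bucket structure, and the distance threshold are hypothetical illustrations, not the paper's implementation; real systems use learned vocabularies and approximate nearest-neighbor libraries.

```python
import numpy as np

def assign_word(desc, words):
    """Index of the nearest visual word (cluster centre) for a descriptor."""
    return int(np.argmin(np.linalg.norm(words - desc, axis=1)))

def build_index(db_descs, db_points, words):
    """Group the SfM model's descriptors, each tied to a 3-D point,
    by visual word so a query only searches one word's bucket."""
    index = {w: [] for w in range(len(words))}
    for d, p in zip(db_descs, db_points):
        index[assign_word(d, words)].append((d, p))
    return index

def match(query_descs, index, words, max_dist=0.5):
    """Linear search inside each query descriptor's visual-word bucket;
    returns the 3-D points of accepted 2D-3D correspondences."""
    out = []
    for q in query_descs:
        bucket = index[assign_word(q, words)]
        if not bucket:
            continue
        dists = [np.linalg.norm(q - d) for d, _ in bucket]
        j = int(np.argmin(dists))
        if dists[j] < max_dist:
            out.append(bucket[j][1])
    return out
```

The returned 2D-3D correspondences would then feed a standard pose solver (e.g. PnP with RANSAC) to localize the query camera.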

Lu, Guoyu; Kambhamettu, Chandra

2013-12-01

361

Nonlinear Probabilistic Estimation of 3D Geometry from Images  

E-print Network

constraints involving non-Euclidean domains, such as those found in 3-D vision geometry problems. ... are geometrically poorly leveraged by the image features, involve nonlinear relationships, and have non-Euclidean state domains. To model such domains, a manifold-tangent framework is developed which allows non-Euclidean

362

Efficient 3-D adaptive filtering for medical image enhancement  

Microsoft Academic Search

Tensor based orientation adaptive filtering, an explicit methodology for anisotropic filtering, constitutes a flexible framework for medical image enhancement. The technique features post-filtering steerability and allows user interaction and direct control over the high-frequency contents of the signal. A new class of filters for local structure analysis together with filter networks significantly lowers the complexity to meet

Björn Svensson; Mats T. Andersson; Örjan Smedby; Hans Knutsson

2006-01-01

363

DIGITAL HOLOGRAPHIC MICROSCOPY, A NEW 3D IMAGING TECHNIQUE  

Microsoft Academic Search

Digital Holographic Microscopy (DHM) is a new, rapidly developing imaging technique offering both sub-wavelength resolution and real-time observation capabilities. The method is based on the acquisition of a hologram formed by an object beam passing through a microscope objective and interfering with a reference beam. The object field is recovered when the hologram is re-illuminated by a

C. Depeursinge; F. Charrière; T. Colomb; J. Kühn; Y. Emery; E. Cuche

364

Space Radar Image of Kilauea, Hawaii in 3-D  

NASA Technical Reports Server (NTRS)

This is a three-dimensional perspective view of a false-color image of the eastern part of the Big Island of Hawaii. It was produced using all three radar frequencies -- X-band, C-band and L-band -- from the Spaceborne Imaging Radar-C/X-Band Synthetic Aperture Radar (SIR-C/X-SAR) flying on the space shuttle Endeavour, overlaid on a U.S. Geological Survey digital elevation map. Visible in the center of the image in blue are the summit crater (Kilauea Caldera) which contains the smaller Halemaumau Crater, and the line of collapse craters below them that form the Chain of Craters Road. The image was acquired on April 12, 1994 during orbit 52 of the space shuttle. The area shown is approximately 34 by 57 kilometers (21 by 35 miles) with the top of the image pointing toward northwest. The image is centered at about 155.25 degrees west longitude and 19.5 degrees north latitude. The false colors are created by displaying three radar channels of different frequency. Red areas correspond to high backscatter at L-HV polarization, while green areas exhibit high backscatter at C-HV polarization. Finally, blue shows high return at X-VV polarization. Using this color scheme, the rain forest appears bright on the image, while the green areas correspond to lower vegetation. The lava flows have different colors depending on their types and are easily recognizable due to their shapes. The flows at the top of the image originated from the Mauna Loa volcano. Kilauea volcano has been almost continuously active for more than the last 11 years. Field teams that were on the ground specifically to support these radar observations report that there was vigorous surface activity about 400 meters (one-quarter mile) inland from the coast. A moving lava flow about 200 meters (650 feet) in length was observed at the time of the shuttle overflight, raising the possibility that subsequent images taken during this mission will show changes in the landscape.
Currently, most of the lava that is erupted travels the 8 kilometers (5 miles) from the Pu'u O'o crater (the active vent) just outside this image to the coast through a series of lava tubes, but in the past there have been many large lava flows that have traveled this distance, destroying houses and parts of the Hawaii Volcanoes National Park. This SIR-C/X-SAR image shows two types of lava flows that are common to Hawaiian volcanoes. Pahoehoe lava flows are relatively smooth, and appear very dark blue because much of the radar energy is reflected away from the radar. In contrast other lava flows are relatively rough and bounce much of the radar energy back to the radar, making that part of the image bright blue. This radar image is valuable because it allows scientists to study an evolving lava flow field from the Pu'u O'o vent. Much of the area on the northeast side (right) of the volcano is covered with tropical rain forest, and because trees reflect a lot of the radar energy, the forest appears bright in this radar scene. The linear feature running from Kilauea Crater to the right of the image is Highway 11 leading to the city of Hilo, which is located just beyond the right edge of this image. Spaceborne Imaging Radar-C and X-Synthetic Aperture Radar (SIR-C/X-SAR) is part of NASA's Mission to Planet Earth. The radars illuminate Earth with microwaves allowing detailed observations at any time, regardless of weather or sunlight conditions. SIR-C/X-SAR uses three microwave wavelengths: L-band (24 cm), C-band (6 cm) and X-band (3 cm). The multi-frequency data will be used by the international scientific community to better understand the global environment and how it is changing. The SIR-C/X-SAR data, complemented by aircraft and ground studies, will give scientists clearer insights into those environmental changes which are caused by nature and those changes which are induced by human activity. SIR-C was developed by NASA's Jet Propulsion Laboratory.
X-SAR was developed by the Dornier and Alenia Spazio companies for the German space agency, Deutsche Agentur fuer Raumfahrtangelegenheiten (DARA)

1999-01-01

365

Determining 3-D motion and structure from image sequences  

NASA Technical Reports Server (NTRS)

A method of determining three-dimensional motion and structure from two image frames is presented. The method requires eight point correspondences between the two frames, from which motion and structure parameters are determined by solving a set of eight linear equations and a singular value decomposition of a 3x3 matrix. It is shown that the solution thus obtained is unique.
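The linear step of the method above (eight point correspondences giving eight linear equations, plus an SVD of a 3x3 matrix) is the classical eight-point estimation of the essential matrix, and can be sketched as follows. This is a minimal numpy illustration, not the paper's exact procedure; the subsequent recovery of motion and structure parameters from E is omitted.

```python
import numpy as np

def essential_from_points(pts1, pts2):
    """Estimate the essential matrix E from >= 8 point correspondences
    given in normalized image coordinates.

    Each correspondence (x1, y1) <-> (x2, y2) contributes one linear
    equation x2h^T E x1h = 0; the stacked system is solved for the
    entries of E via SVD, and the rank-2 constraint is then enforced
    on the 3x3 result.
    """
    A = np.array([[x2 * x1, x2 * y1, x2, y2 * x1, y2 * y1, y2, x1, y1, 1.0]
                  for (x1, y1), (x2, y2) in zip(pts1, pts2)])
    _, _, Vt = np.linalg.svd(A)
    E = Vt[-1].reshape(3, 3)          # null vector -> candidate E
    U, s, Vt = np.linalg.svd(E)       # enforce two equal singular values, one zero
    return U @ np.diag([1.0, 1.0, 0.0]) @ Vt
```

With exact correspondences the epipolar residual x2h^T E x1h vanishes for every pair; with noisy data the same least-squares solve gives the best linear estimate.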

Huang, T. S.

1982-01-01

366

A survey of 3D medical imaging technologies  

Microsoft Academic Search

Three-dimensional medical imaging methodologies are surveyed with respect to hardware versus software, stand-alone versus on-the-scanner, speed, interaction, rendering methodology, fidelity, ease of use, cost, and quantitative capability. The question of volume versus surface rendering is considered in more detail. Research results are cited to illustrate the capabilities discussed

G. T. Herman

1990-01-01

367

The pulsed all fiber laser application in the high-resolution 3D imaging LIDAR system  

NASA Astrophysics Data System (ADS)

An all-fiber laser with a master-oscillator power-amplifier (MOPA) configuration at 1064 nm/1550 nm for a high-resolution three-dimensional (3D) imaging light detection and ranging (LIDAR) system is reported. The pulse width and the repetition frequency could be arbitrarily tuned over 1 ns~10 ns and 10 kHz~1 MHz, and a peak power exceeding 100 kW could be obtained with the laser. Using this all-fiber laser in the high-resolution 3D imaging LIDAR system, an image resolution of 1024 x 1024 and a distance precision of +/-1.5 cm were obtained at an imaging distance of 1 km.

Gao, Cunxiao; Zhu, Shaolan; Niu, Linquan; Feng, Li; He, Haodong; Cao, Zongying

2014-05-01

368

3D soft tissue imaging with a mobile C-arm.  

PubMed

We introduce a clinical prototype for 3D soft tissue imaging to support surgical or interventional procedures based on a mobile C-arm. An overview of required methods and materials is followed by first clinical images of animals and human patients including dosimetry. The mobility and flexibility of 3D C-arms gives free access to the patient and therefore avoids relocation of the patient between imaging and surgical intervention. Image fusion with diagnostic data (MRI, CT, PET) is demonstrated and promising applications for brachytherapy, RFTT and others are discussed. PMID:17188841

Ritter, Dieter; Orman, Jasmina; Schmidgunst, Christian; Graumann, Rainer

2007-03-01

369

Prospective gating for 3D imaging of the beating zebrafish heart in embryonic development studies  

NASA Astrophysics Data System (ADS)

We demonstrate the use of prospective gating from continuously acquired brightfield images of zebrafish embryos to trigger the acquisition of fluorescence images with the heart at a precisely selected position in its cycle. The laser exposure of the sample is reduced by an order of magnitude compared to alternative techniques which acquire many separate fluorescence images for each section before selecting the most appropriate ones to build up a consistent 3D image stack. We present results obtained using our SPIM system including 3D reconstructions of the living, beating heart, acquired using optical gating without the need for any pharmacological or electrophysiological intervention, and discuss possible wider applications of our technique.

Taylor, J. M.; Saunter, C. D.; Love, G. D.; Girkin, J. M.

2012-03-01

370

Reconstructing photorealistic 3D models from image sequence using domain decomposition method  

NASA Astrophysics Data System (ADS)

In the fields of industrial design, artistic design and heritage conservation, physical objects are usually digitalized by reverse engineering through some 3D scanning method. Structured light and photogrammetry are the two main methods to acquire 3D information, and both are expensive. Even when these expensive instruments are used, photorealistic 3D models are seldom available. In this paper, a new method to reconstruct photorealistic 3D models using a single camera is proposed. A square plate glued with coded marks is used to hold the objects, and a sequence of about 20 images is taken. From the coded marks, the images are calibrated, and a snake algorithm is used to segment the object from the background. A rough 3D model is obtained using a shape-from-silhouettes algorithm. The silhouettes are decomposed into a combination of convex curves, which are used to partition the rough 3D model into convex mesh patches. For each patch, the multi-view photo-consistency constraints and smoothness regularization are expressed as a finite element formulation, which can be resolved locally, with information exchanged along the patch boundaries. The rough model is deformed into a fine 3D model through such a domain decomposition finite element method. Textures are assigned to each mesh element, and a photorealistic 3D model is finally obtained. A toy pig is used to verify the algorithm, and the result is encouraging.

Xiong, Hanwei; Pan, Ming; Zhang, Xiangwei

2009-11-01

371

Validation of retinal image registration algorithms by a projective imaging distortion model.  

PubMed

Fundus camera imaging of the retina is widely used to document ophthalmologic disorders including diabetic retinopathy, glaucoma, and age-related macular degeneration. The retinal images typically have a limited field of view, due mainly to the curved shape of the human retina, so multiple images must be joined together using image registration techniques to form a montage with a larger field of view. A variety of methods for retinal image registration have been proposed, but evaluating such methods objectively is difficult due to the lack of a reference standard for the true alignment of the individual images that make up the montage. A method of generating a simulated retinal image set by modeling geometric distortions due to the eye geometry and the image acquisition process is described in this paper. We also present a validation tool for any retinal image registration method that traces back the distortion path and assesses the geometric misalignment in the coordinate system of the reference standard. A quantitative comparison of different registration methods is given in the experiments, so registration performance is evaluated in an objective manner. PMID:18003507

Lee, Sangyeol; Abramoff, Michael D; Reinhardt, Joseph M

2007-01-01

372

Space Radar Image of Karakax Valley, China 3-D  

NASA Technical Reports Server (NTRS)

This three-dimensional perspective of the remote Karakax Valley in the northern Tibetan Plateau of western China was created by combining two spaceborne radar images using a technique known as interferometry. Visualizations like this are helpful to scientists because they reveal where the slopes of the valley are cut by erosion, as well as the accumulations of gravel deposits at the base of the mountains. These gravel deposits, called alluvial fans, are a common landform in desert regions that scientists are mapping in order to learn more about Earth's past climate changes. Higher up the valley side is a clear break in the slope, running straight, just below the ridge line. This is the trace of the Altyn Tagh fault, which is much longer than California's San Andreas fault. Geophysicists are studying this fault for clues it may be able to give them about large faults. Elevations range from 4000 m (13,100 ft) in the valley to over 6000 m (19,700 ft) at the peaks of the glaciated Kun Lun mountains running from the front right towards the back. Scale varies in this perspective view, but the area is about 20 km (12 miles) wide in the middle of the image, and there is no vertical exaggeration. The two radar images were acquired on separate days during the second flight of the Spaceborne Imaging Radar-C/X-band Synthetic Aperture Radar (SIR-C/X-SAR) aboard the space shuttle Endeavour in October 1994. The interferometry technique provides elevation measurements of all points in the scene. The resulting digital topographic map was used to create this view, looking northwest from high over the valley. Variations in the colors can be related to gravel, sand and rock outcrops. This image is centered at 36.1 degrees north latitude, 79.2 degrees east longitude. 
Radar image data are draped over the topography to provide the color with the following assignments: Red is L-band vertically transmitted, vertically received; green is the average of L-band vertically transmitted, vertically received and C-band vertically transmitted, vertically received; and blue is C-band vertically transmitted, vertically received. SIR-C/X-SAR, a joint mission of the German, Italian and United States space agencies, is part of NASA's Mission to Planet Earth.

1994-01-01

373

Space Radar Image of Mammoth, California in 3-D  

NASA Technical Reports Server (NTRS)

This is a three-dimensional perspective of Mammoth Mountain, California. This view was constructed by overlaying a Spaceborne Imaging Radar-C (SIR-C) radar image on a U.S. Geological Survey digital elevation map. Vertical exaggeration is 1.87 times. The image is centered at 37.6 degrees north, 119.0 degrees west. It was acquired from the Spaceborne Imaging Radar C/X-Band Synthetic Aperture Radar (SIR-C/X-SAR) onboard space shuttle Endeavour on its 67th orbit on April 13, 1994. In this color representation, red is C-band HV-polarization, green is C-band VV-polarization and blue is the ratio of C-band VV to C-band HV. Blue areas are smooth, and yellow areas are rock out-crops with varying amounts of snow and vegetation. Crowley Lake is in the foreground, and Highway 395 crosses in the middle of the image. Mammoth Mountain is shown in the upper right. Spaceborne Imaging Radar-C and X-Synthetic Aperture Radar (SIR-C/X-SAR) is part of NASA's Mission to Planet Earth. The radars illuminate Earth with microwaves allowing detailed observations at any time, regardless of weather or sunlight conditions. SIR-C/X-SAR uses three microwave wavelengths: L-band (24 cm), C-band (6 cm) and X-band (3 cm). The multi-frequency data will be used by the international scientific community to better understand the global environment and how it is changing. The SIR-C/X-SAR data, complemented by aircraft and ground studies, will give scientists clearer insights into those environmental changes which are caused by nature and those changes which are induced by human activity. SIR-C was developed by NASA's Jet Propulsion Laboratory. X-SAR was developed by the Dornier and Alenia Spazio companies for the German space agency, Deutsche Agentur fuer Raumfahrtangelegenheiten (DARA), and the Italian space agency, Agenzia Spaziale Italiana (ASI).

1999-01-01

374

Space Radar Image of Missoula, Montana in 3-D  

NASA Technical Reports Server (NTRS)

This is a three-dimensional perspective view of Missoula, Montana, created by combining two spaceborne radar images using a technique known as interferometry. Visualizations like this are useful because they show scientists the shapes of the topographic features such as mountains and valleys. This technique helps to clarify the relationships of the different types of materials on the surface detected by the radar. The view is looking north-northeast. The blue circular area at the lower left corner is a bend of the Bitterroot River just before it joins the Clark Fork, which runs through the city. Crossing the Bitterroot River is the bridge of U.S. Highway 93. Highest mountains in this image are at elevations of 2,200 meters (7,200 feet). The city is about 975 meters (3,200 feet) above sea level. The bright yellow areas are urban and suburban zones, dark brown and blue-green areas are grasslands, bright green areas are farms, light brown and purple areas are scrub and forest, and bright white and blue areas are steep rocky slopes. The two radar images were taken on successive days by the Spaceborne Imaging Radar-C/X-band Synthetic Aperture Radar (SIR-C/X-SAR) onboard the space shuttle Endeavour in October 1994. The digital elevation map was produced using radar interferometry, a process in which radar data are acquired on different passes of the space shuttle. The two data passes are compared to obtain elevation information. Radar image data are draped over the topography to provide the color with the following assignments: red is L-band vertically transmitted, vertically received; green is C-band vertically transmitted, vertically received; and blue are differences seen in the L-band data between the two days. This image is centered near 46.9 degrees north latitude and 114.1 degrees west longitude. No vertical exaggeration factor has been applied to the data. 
SIR-C/X-SAR, a joint mission of the German, Italian and United States space agencies, is part of NASA's Mission to Planet Earth program.

1994-01-01

375

Segmentation of UAV-based images incorporating 3D point cloud information  

NASA Astrophysics Data System (ADS)

Numerous applications related to urban scene analysis demand automatic recognition of buildings and their distinct sub-elements. If LiDAR data are available, 3D information alone could be leveraged for the segmentation. However, this poses several risks; for instance, in-plane objects cannot be distinguished from their surroundings. On the other hand, if only image-based segmentation is performed, geometric features (e.g., normal orientation, planarity) are not readily available. This renders the task of detecting distinct sub-elements of a building with similar radiometric characteristics infeasible. In this paper the individual sub-elements of buildings are recognized through sub-segmentation of the building using geometric and radiometric characteristics jointly. 3D points generated from Unmanned Aerial Vehicle (UAV) images are used to infer the geometric characteristics of the roofs and facades of the building. However, image-based 3D points are noisy, error prone and often contain gaps, so segmentation in 3D space is not appropriate. Therefore, we propose to perform segmentation in image space using geometric features from the 3D point cloud along with the radiometric features. The initial detection of buildings in the 3D point cloud is followed by segmentation in image space using a region-growing approach that utilizes various radiometric and 3D point cloud features. The developed method was tested using two data sets of UAV images with a ground resolution of around 1-2 cm, and it accurately segmented most of the building elements when compared to plane-based segmentation using the 3D point cloud alone.
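The region-growing idea used above, grow a region from a seed pixel over a joint feature image (radiometric channels stacked with per-pixel geometric attributes such as normal orientation), can be sketched generically. The feature layout, 4-connectivity, and running-mean acceptance test are illustrative assumptions, not the paper's exact criteria.

```python
import numpy as np
from collections import deque

def region_grow(features, seed, thresh):
    """Grow a region from `seed` over an HxWxC feature image.

    A 4-neighbour is absorbed when its feature vector lies within
    `thresh` (Euclidean distance) of the running region mean, which
    is updated as the region grows.
    """
    H, W, _ = features.shape
    region = np.zeros((H, W), dtype=bool)
    region[seed] = True
    mean = features[seed].astype(float)
    n = 1
    q = deque([seed])
    while q:
        y, x = q.popleft()
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < H and 0 <= nx < W and not region[ny, nx]:
                if np.linalg.norm(features[ny, nx] - mean) < thresh:
                    region[ny, nx] = True
                    mean = (mean * n + features[ny, nx]) / (n + 1)
                    n += 1
                    q.append((ny, nx))
    return region
```

Stacking 3D-derived channels (planarity, normal angle) next to color channels is what lets the growth stop at a roof/facade boundary even when the radiometry on both sides is similar.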

Vetrivel, A.; Gerke, M.; Kerle, N.; Vosselman, G.

2015-03-01

376

Augmented reality navigation with automatic marker-free image registration using 3-D image overlay for dental surgery.  

PubMed

Computer-assisted oral and maxillofacial surgery (OMS) has been rapidly evolving since the last decade. State-of-the-art surgical navigation in OMS still suffers from bulky tracking sensors, troublesome image registration procedures, patient movement, loss of depth perception in visual guidance, and low navigation accuracy. We present an augmented reality navigation system with automatic marker-free image registration using 3-D image overlay and stereo tracking for dental surgery. A customized stereo camera is designed to track both the patient and instrument. Image registration is performed by patient tracking and real-time 3-D contour matching, without requiring any fiducial and reference markers. Real-time autostereoscopic 3-D imaging is implemented with the help of a consumer-level graphics processing unit. The resulting 3-D image of the patient's anatomy is overlaid on the surgical site by a half-silvered mirror using image registration and IP-camera registration to guide the surgeon by exposing hidden critical structures. The 3-D image of the surgical instrument is also overlaid over the real one for an augmented display. The 3-D images present both stereo and motion parallax from which depth perception can be obtained. Experiments were performed to evaluate various aspects of the system; the overall image overlay error of the proposed system was 0.71 mm. PMID:24658253

Wang, Junchen; Suenaga, Hideyuki; Hoshi, Kazuto; Yang, Liangjing; Kobayashi, Etsuko; Sakuma, Ichiro; Liao, Hongen

2014-04-01

377

Label free cell tracking in 3D tissue engineering constructs with high resolution imaging  

NASA Astrophysics Data System (ADS)

Within the field of tissue engineering there is an emphasis on studying 3-D live tissue structures. Consequently, to investigate and identify cellular activities and phenotypes in a 3-D environment for all in vitro experiments, including shape, migration/proliferation and axon projection, it is necessary to adopt an optical imaging system that enables monitoring 3-D cellular activities and morphology through the thickness of the construct for an extended culture period without cell labeling. This paper describes a new 3-D tracking algorithm developed for Cell-IQ, an automated cell imaging platform, which has been equipped with an environmental chamber optimized to enable capturing time-lapse sequences of live cell images over a long-term period without cell labeling. As an integral part of the algorithm, a novel auto-focusing procedure was developed for phase contrast microscopy equipped with 20x and 40x objectives, to provide a more accurate estimation of cell growth/trajectories by allowing 3-D voxels to be computed at high spatiotemporal resolution and cell density. A pilot study was carried out in a phantom system consisting of horizontally aligned nanofiber layers (with precise spacing between them), to mimic features well exemplified in cellular activities of neuronal growth in a 3-D environment. This was followed by detailed investigations concerning axonal projections and dendritic circuitry formation in a 3-D tissue engineering construct. Preliminary work on primary animal neuronal cells in response to chemoattractant and topographic cue within the scaffolds has produced encouraging results.

Smith, W. A.; Lam, K.-P.; Dempsey, K. P.; Mazzocchi-Jones, D.; Richardson, J. B.; Yang, Y.

2014-02-01

378

Automated quantification of 3D regional myocardial wall thickening from gated Magnetic Resonance images  

PubMed Central

Purpose To develop 3D quantitative measures of regional myocardial wall motion and thickening using cardiac MRI and to validate them by comparison to standard visual scoring assessment. Materials and Methods 53 consecutive subjects with short-axis slices and mid-ventricular 2-chamber/4-chamber views were analyzed. After correction for breath-hold related misregistration, 3D myocardial boundaries were fitted to images and edited by an imaging cardiologist. Myocardial thickness was quantified at end-diastole and end-systole by computing the 3D distances using Laplace's equation. 3D thickening was represented using the standard 17-segment polar map. 3D thickening was compared with 3D wall motion and with expert visual scores (6-point visual scoring of wall motion and wall thickening; 0=normal; 5=greatest abnormality) assigned by imaging cardiologists. Results Correlation between ejection fraction and thickening measurements (r=0.84; p<0.001) was comparable to correlation between ejection fraction and motion measurements (r=0.86; p<0.001). Good negative correlations between summed visual scores and global wall thickening and motion measurements were also obtained (rthick = -0.79; rmotion = -0.74). Additionally, overall good correlation between individual segmental visual scores and thickening/wall motion (rthick = -0.69; rmotion = -0.65) was observed (p<0.0001). Conclusion 3D quantitative regional thickening and wall motion measures obtained from MRI correlate strongly with expert clinical scoring. PMID:20099344
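The Laplace-equation approach to wall thickness can be sketched in 2D: solve for a potential fixed at 0 on the inner boundary and 1 on the outer boundary, then measure thickness along field lines of the gradient. The Jacobi relaxation below is a simplified 2D analogue of the paper's 3D computation, with assumed function and argument names:

```python
import numpy as np

def laplace_potential(wall, inner, outer, n_iter=2000):
    """Jacobi iteration for Laplace's equation on a 2D myocardial cross
    section: potential fixed at 0 on the endocardial (inner) boundary and
    1 on the epicardial (outer) boundary; thickness is then measured along
    field lines of the gradient of the converged potential."""
    phi = np.where(outer, 1.0, 0.0)
    for _ in range(n_iter):
        avg = 0.25 * (np.roll(phi, 1, 0) + np.roll(phi, -1, 0)
                      + np.roll(phi, 1, 1) + np.roll(phi, -1, 1))
        phi = np.where(wall, avg, phi)      # relax interior points only
        phi[inner] = 0.0                    # re-impose boundary conditions
        phi[outer] = 1.0
    return phi
```

The advantage over straight-line calipers is that Laplace field lines never cross, so each endocardial point gets a unique, well-defined path to the epicardium.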

Prasad, Mithun; Ramesh, Amit; Kavanagh, Paul; Tamarappoo, Balaji K.; Nakazato, Ryo; Gerlach, James; Cheng, Victor; Thomson, Louise E. J.; Berman, Daniel S.; Germano, Guido; Slomka, Piotr J.

2010-01-01

379

3D and 4D magnetic susceptibility tomography based on complex MR images  

DOEpatents

Magnetic susceptibility is the physical property for T2*-weighted magnetic resonance imaging (T2*MRI). The invention relates to methods for reconstructing an internal distribution (3D map) of magnetic susceptibility values, .chi. (x,y,z), of an object, from 3D T2*MRI phase images, by using Computed Inverse Magnetic Resonance Imaging (CIMRI) tomography. The CIMRI technique solves the inverse problem of the 3D convolution by executing a 3D Total Variation (TV) regularized iterative convolution scheme, using a split Bregman iteration algorithm. The reconstruction of .chi. (x,y,z) can be designed for low-pass, band-pass, and high-pass features by using a convolution kernel that is modified from the standard dipole kernel. Multiple reconstructions can be implemented in parallel, and averaging the reconstructions can suppress noise. 4D dynamic magnetic susceptibility tomography can be implemented by reconstructing a 3D susceptibility volume from a 3D phase volume by performing 3D CIMRI magnetic susceptibility tomography at each snapshot time.
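The forward model that CIMRI inverts can be written as a k-space multiplication with the standard dipole kernel. The sketch below shows only this forward convolution; the TV-regularized split-Bregman inversion itself is omitted, and the function names and unit conventions are illustrative assumptions:

```python
import numpy as np

def dipole_kernel(shape, voxel_size=(1.0, 1.0, 1.0)):
    """Standard unit dipole kernel in k-space: D(k) = 1/3 - kz^2 / |k|^2."""
    axes = [np.fft.fftfreq(n, d=d) for n, d in zip(shape, voxel_size)]
    kx, ky, kz = np.meshgrid(*axes, indexing="ij")
    k2 = kx**2 + ky**2 + kz**2
    with np.errstate(divide="ignore", invalid="ignore"):
        D = 1.0 / 3.0 - kz**2 / k2
    D[k2 == 0] = 0.0  # the k = 0 component is undefined; set it to zero
    return D

def forward_field(chi, voxel_size=(1.0, 1.0, 1.0)):
    """Forward model: field perturbation as the 3D convolution of chi with
    the dipole kernel, evaluated as a pointwise product in k-space."""
    D = dipole_kernel(chi.shape, voxel_size)
    return np.real(np.fft.ifftn(D * np.fft.fftn(chi)))
```

Reconstructing chi(x,y,z) then amounts to solving forward_field(chi) = phase-derived field under a TV penalty, which is where the split Bregman iterations enter; the zero of the kernel on the magic-angle cone is what makes the inversion ill-posed.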

Chen, Zikuan; Calhoun, Vince D

2014-11-11

380

Critical Comparison of 3-d Imaging Approaches for NGST  

E-print Network

Currently three imaging spectrometer architectures, tunable filter, dispersive, and Fourier transform, are viable for imaging the universe in three dimensions. There are domains of greatest utility for each of these architectures. The optimum choice among the various alternative architectures is dependent on the nature of the desired observations, the maturity of the relevant technology, and the character of the backgrounds. The domain appropriate for each of the alternatives is delineated; both for instruments having ideal performance as well as for instrumentation based on currently available technology. The environment and science objectives for the Next Generation Space Telescope will be used as a specific representative case to provide a basis for comparison of the various alternatives.

Charles L. Bennett

1999-08-22

381

Free-Breathing 3D Whole Heart Black Blood Imaging with Motion Sensitized Driven Equilibrium  

PubMed Central

Purpose To assess the efficacy and robustness of motion sensitized driven equilibrium (MSDE) for blood suppression in volumetric 3D whole heart cardiac MR. Materials and Methods To investigate the efficacy of MSDE for blood suppression and the associated myocardial SNR loss across imaging sequences, 7 healthy adult subjects were imaged using 3D ECG-triggered MSDE-prep T1-weighted turbo spin echo (TSE) and spoiled gradient echo (GRE) sequences, after optimization of MSDE parameters in a pilot study of 5 subjects. Imaging artifacts and myocardial and blood SNR were assessed. Subsequently, the feasibility of isotropic-spatial-resolution MSDE-prep black-blood imaging was assessed in 6 subjects. Finally, 15 patients with known or suspected cardiovascular disease were recruited and imaged using a conventional multi-slice 2D DIR TSE imaging sequence and 3D MSDE-prep spoiled GRE. Results The MSDE-prep yields significant blood suppression (75-92%), enabling a volumetric 3D black-blood assessment of the whole heart with significantly improved visualization of the chamber walls. The MSDE-prep also allowed successful acquisition of black-blood images with isotropic spatial resolution. In the patient study, 3D black-blood MSDE-prep and DIR resulted in similar blood suppression in LV and RV walls, but the MSDE-prep had superior myocardial signal and wall sharpness. Conclusion MSDE-prep allows volumetric black-blood imaging of the heart. PMID:22517477

Srinivasan, Subashini; Hu, Peng; Kissinger, Kraig V.; Goddu, Beth; Goepfert, Lois; Schmidt, Ehud J.; Kozerke, Sebastian; Nezafat, Reza

2012-01-01

382

An open-source deconvolution software package for 3-D quantitative fluorescence microscopy imaging  

PubMed Central

Summary Deconvolution techniques have been widely used for restoring the 3-D quantitative information of an unknown specimen observed using a wide-field fluorescence microscope. Deconv, an open-source deconvolution software package, was developed for 3-D quantitative fluorescence microscopy imaging and was released under the GNU Public License. Deconv provides numerical routines for simulation of a 3-D point spread function and deconvolution routines implementing three constrained iterative deconvolution algorithms: one based on a Poisson noise model and two others based on a Gaussian noise model. These algorithms are presented and evaluated using synthetic images and experimentally obtained microscope images, and the use of the library is explained. Deconv allows users to assess the utility of these deconvolution algorithms and to determine which are suited for a particular imaging application. The design of Deconv makes it easy for deconvolution capabilities to be incorporated into existing imaging applications. PMID:19941558

SUN, Y.; DAVIS, P.; KOSMACEK, E. A.; IANZINI, F.; MACKEY, M. A.

2010-01-01

383

A novel 3D resolution measure for optical microscopes with applications to single molecule imaging  

NASA Astrophysics Data System (ADS)

The advent of single molecule microscopy has generated significant interest in imaging single biomolecular interactions within a cellular environment in three dimensions. It is widely believed that the classical 2D (3D) resolution limit of optical microscopes precludes the study of single molecular interactions at distances of less than 200 nm (1 micron). However, it is well known that the classical resolution limit is based on heuristic notions. In fact, recent single molecule experiments have shown that the 2D resolution limit, i.e. Rayleigh's criterion, can be surpassed in an optical microscope setup. This illustrates that Rayleigh's criterion is inadequate for modern imaging approaches, thereby necessitating a re-assessment of the resolution limits of optical microscopes. Recently, we proposed a new modern resolution measure that overcomes the limitations of Rayleigh's criterion. Known as the fundamental resolution measure FREM, the new result predicts that distances well below the classical 2D resolution limit can be resolved in an optical microscope. By imaging closely spaced single molecules, it was experimentally verified that the new resolution measure can be attained in an optical microscope setup. In the present work, we extend this result to the 3D case and propose a 3D fundamental resolution measure 3D FREM that overcomes the limitations of the classical 3D resolution limit. We obtain an analytical expression for the 3D FREM. We show how the photon count of the single molecules affects the 3D FREM. We also investigate the effect of deteriorating experimental factors such as pixelation of the detector and extraneous noise sources on the new resolution measure. In contrast to the classical 3D resolution criteria, our new result predicts that distances well below the classical limit can be resolved. We expect that our results would provide novel tools for the design and analysis of 3D single molecule imaging experiments.

Ram, Sripad; Abraham, Anish V.; Ward, E. Sally; Ober, Raimund J.

2007-02-01

384

Real-Depth imaging: a new (no glasses) 3D imaging technology with video\\/data projection applications  

Microsoft Academic Search

Floating Images, Inc. has developed the software and hardware for a new, patent-pending, 'floating 3D, off-the-screen-experience' display technology. This technology has the potential to become the next standard for home and arcade video games, computers, corporate presentations, Internet/Intranet viewing, and television. Current '3D graphics' technologies are actually flat on screen. Floating Images technology actually produces images at different depths

Eugene Dolgoff

1997-01-01

385

Anesthesiology training using 3D imaging and virtual reality  

NASA Astrophysics Data System (ADS)

Current training for regional nerve block procedures by anesthesiology residents requires expert supervision and the use of cadavers; both of which are relatively expensive commodities in today's cost-conscious medical environment. We are developing methods to augment and eventually replace these training procedures with real-time and realistic computer visualizations and manipulations of the anatomical structures involved in anesthesiology procedures, such as nerve plexus injections (e.g., celiac blocks). The initial work is focused on visualizations: both static images and rotational renderings. From the initial results, a coherent paradigm for virtual patient and scene representation will be developed.

Blezek, Daniel J.; Robb, Richard A.; Camp, Jon J.; Nauss, Lee A.

1996-04-01

386

Image denoising with multiple layer block matching and 3D filtering  

NASA Astrophysics Data System (ADS)

The Block Matching and 3-D Filtering (BM3D) algorithm is currently considered one of the most successful denoising algorithms. Despite its excellent results, BM3D still has room for improvement. Image details and sharp edges, such as text in document images, are challenging, as they usually do not produce sparse representations under the linear transformations. Various artifacts such as ringing and blurring can be introduced as a result. This paper proposes a Multiple Layer BM3D (MLBM3D) denoising algorithm. The basic idea is to decompose image patches that contain high-contrast details into multiple layers, with each layer then collaboratively filtered separately. The algorithm contains a Basic Estimation step and a Final Estimation step. The first (Basic Estimation) step is identical to the one in BM3D. In the second (Final Estimation) step, image groups are determined to be single-layer or multi-layer. A single-layer group is filtered in the same manner as in BM3D. For a multi-layer group, each image patch within the group is decomposed with the three-layer model. All the top layers in the group are stacked and collaboratively filtered. So are the bottom layers. The filtered top and bottom layers are re-assembled to form the estimate of the blocks. The final estimate of the image is obtained by aggregating the estimates of all blocks, both single-layer and multi-layer. The proposed algorithm shows excellent results, particularly for images containing high-contrast edges.
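The grouping that precedes collaborative filtering, in both BM3D and the proposed MLBM3D, can be sketched as a windowed nearest-patch search; the names and parameters below are illustrative assumptions, and the transform-domain filtering itself is omitted:

```python
import numpy as np

def group_similar_blocks(img, ref_yx, block=8, search=16, k=8):
    """BM3D-style grouping: collect the k blocks most similar (smallest L2
    distance) to a reference block within a local search window."""
    H, W = img.shape
    ry, rx = ref_yx
    ref = img[ry:ry + block, rx:rx + block]
    candidates = []
    for y in range(max(0, ry - search), min(H - block, ry + search) + 1):
        for x in range(max(0, rx - search), min(W - block, rx + search) + 1):
            patch = img[y:y + block, x:x + block]
            candidates.append((np.sum((patch - ref) ** 2), y, x))
    candidates.sort(key=lambda c: c[0])  # most similar first
    return np.stack([img[y:y + block, x:x + block]
                     for _, y, x in candidates[:k]])
```

In MLBM3D the decision between single-layer and multi-layer treatment would be made per group after this stage, before the stacked patches are transformed and filtered.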

Fan, Zhigang

2014-03-01

387

Sliding-aperture multiview 3D camera-projector system and its application for 3D image transmission and IR to visible conversion  

NASA Astrophysics Data System (ADS)

A new architecture for a 3-D multiview camera and projector is presented. The camera optical system consists of a single wide-aperture objective, a secondary (small) objective, a field lens, and a scanner. The projector additionally includes a rear-projection pupil-forming screen. The system is intended for sequential acquisition and projection of 2-D perspective images while the small working aperture slides across the opening of the large objective lens or spherical mirror. Both horizontal-parallax and full-parallax imaging are possible. The system can transmit 3-D images in real time through fiber bundles, free space, and video image transmission lines. It can also be used for real-time conversion of infrared 3-D images to the visible. With this system, clear multiview stereoscopic images of a real scene can be displayed with a 30-degree viewing-zone angle.

Shestak, Serguei A.; Son, Jung-Young; Jeon, Hyung-Wook; Komar, Victor G.

1997-05-01

388

Information Theoretic Integrated Segmentation and Registration of Dual 2D Portal Images and 3D CT Images  

E-print Network

Information Theoretic Integrated Segmentation and Registration of Dual 2D Portal Images and 3D CT Images. A Dissertation Presented to the Faculty of the Graduate School. Abstract: ... where the segmentation and registration of dual anterior-posterior and left lateral portal images ...

Duncan, James S.

389

A Novel 3D Building Damage Detection Method Using Multiple Overlapping UAV Images  

NASA Astrophysics Data System (ADS)

In this paper, a novel approach is presented that applies multiple overlapping UAV images to building damage detection. Traditional building damage detection methods focus on detecting 2D changes (i.e., those only in image appearance), whereas the 2D information delivered by the images is often insufficient and inaccurate for building damage detection. Detecting building damage from 3D features of the scene is therefore desired. The key idea of 3D building damage detection is 3D change detection using 3D point clouds obtained from aerial images through structure-from-motion (SfM) techniques. The approach to building damage detection discussed in this paper uses not only the height changes of 3D scene features but also the images' shape and texture features. This method therefore fully combines 2D and 3D information of the real world to detect building damage. The results, tested through a field study, demonstrate that this method is feasible and effective in building damage detection. It is also shown that the proposed method is easily applicable and well suited for rapid damage assessment after natural disasters.
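At its core, the 3D part of such detection compares surface heights before and after the event. A minimal sketch, assuming gridded digital surface models rasterized from the SfM point clouds (the function name and threshold are made up for illustration), is:

```python
import numpy as np

def height_change_mask(dsm_before, dsm_after, threshold=2.0):
    """3D change detection sketch: flag cells of an SfM-derived digital
    surface model whose height dropped by more than a threshold (in the
    DSM's units), marking candidate collapsed structures."""
    return (dsm_before - dsm_after) > threshold
```

In the paper's full method, a mask like this would be fused with 2D shape and texture evidence before a damage decision is made.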

Sui, H.; Tu, J.; Song, Z.; Chen, G.; Li, Q.

2014-09-01

390

Fully automatic 2D to 3D conversion with aid of high-level image features  

NASA Astrophysics Data System (ADS)

With recent advances in 3D display technology, there is an increasing need for conversion of existing 2D content into rendered 3D views. We propose a fully automatic 2D to 3D conversion algorithm that assigns relative depth values to the various objects in a given 2D image/scene and generates two different views (a stereo pair) using a Depth Image Based Rendering (DIBR) algorithm for 3D displays. The algorithm described in this paper creates a scene model for each image based on certain low-level features like texture, gradient and pixel location, and estimates a pseudo depth map. Since the capture environment is unknown, using low-level features alone creates inaccuracies in the depth map. Using such a flawed depth map for 3D rendering will result in various artifacts, causing an unpleasant viewing experience. The proposed algorithm also uses certain high-level image features to overcome these imperfections and generates an enhanced depth map for an improved viewing experience. Finally, we show several 3D results generated with our algorithm in the results section.
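The DIBR rendering stage can be illustrated in its simplest form: shift each pixel horizontally by a disparity proportional to its depth and fill the resulting disocclusion holes. This bare-bones grayscale sketch assumes the depth map is already given and ignores the paper's artifact handling; all names are illustrative:

```python
import numpy as np

def render_stereo_pair(image, depth, max_disp=8):
    """Naive DIBR sketch: shift each grayscale pixel horizontally by a
    disparity proportional to its normalized depth to synthesize two views;
    disocclusion holes are filled from the nearest valid pixel in the row."""
    H, W = depth.shape
    disp = np.round(max_disp * (depth - depth.min())
                    / max(np.ptp(depth), 1e-6)).astype(int)
    views = []
    for sign in (+1, -1):                  # left view, then right view
        view = np.full_like(image, -1.0)   # -1 marks holes
        for y in range(H):
            for x in range(W):
                nx = x + sign * disp[y, x]
                if 0 <= nx < W:
                    view[y, nx] = image[y, x]
            last = 0.0                     # simple row-wise hole filling
            for x in range(W):
                if view[y, x] < 0:
                    view[y, x] = last
                else:
                    last = view[y, x]
        views.append(view)
    return views[0], views[1]
```

Real DIBR pipelines replace the naive row-propagation fill with depth-aware inpainting, which is where most of the perceptual quality is won or lost.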

Appia, Vikram; Batur, Umit

2014-03-01

391

3D Imaging for hand gesture recognition: Exploring the software-hardware interaction of current technologies  

NASA Astrophysics Data System (ADS)

Interaction with 3D information is one of the fundamental and most familiar tasks in virtually all areas of engineering and science. Several recent technological advances pave the way for developing hand gesture recognition capabilities available to all, which will lead to more intuitive and efficient 3D user interfaces (3DUI). These developments can unlock new levels of expression and productivity in all activities concerned with the creation and manipulation of virtual 3D shapes and, specifically, in engineering design. Building fully automated systems for tracking and interpreting hand gestures requires robust and efficient 3D imaging techniques as well as potent shape classifiers. We survey and explore current and emerging 3D imaging technologies, and focus, in particular, on those that can be used to build interfaces between the users' hands and the machine. The purpose of this paper is to categorize and highlight the relevant differences between these existing 3D imaging approaches in terms of the nature of the information provided, output data format, as well as the specific conditions under which these approaches yield reliable data. Furthermore we explore the impact of each of these approaches on the computational cost and reliability of the required image processing algorithms. Finally we highlight the main challenges and opportunities in developing natural user interfaces based on hand gestures, and conclude with some promising directions for future research.

Periverzov, Frol; Ilieş, Horea T.

2012-09-01

392

A new gold-standard dataset for 2D/3D image registration evaluation  

NASA Astrophysics Data System (ADS)

In this paper, we propose a new gold standard data set for the validation of 2D/3D image registration algorithms for image guided radiotherapy. A gold standard data set was calculated using a pig head with attached fiducial markers. We used several imaging modalities common in diagnostic imaging or radiotherapy, including 64-slice computed tomography (CT), magnetic resonance imaging (MRI) using T1, T2 and proton density (PD) sequences, and cone beam CT (CBCT) imaging data. Radiographic data were acquired using kilovoltage (kV) and megavoltage (MV) imaging techniques. The image information reflects both anatomy and reliable fiducial marker information, and improves over existing data sets in the level of anatomical detail and image data quality. The markers of three dimensional (3D) and two dimensional (2D) images were segmented using Analyze 9.0 (AnalyzeDirect, Inc.) and in-house software. The projection distance errors (PDE) and the expected target registration errors (TRE) over all the image data sets were found to be less than 1.7 mm and 1.3 mm, respectively. The gold standard data set, obtained with state-of-the-art imaging technology, has the potential to improve the validation of 2D/3D registration algorithms for image guided therapy.
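Target registration error in such gold standards boils down to comparing where an evaluated transform and the fiducial-derived gold-standard transform send target points. A rigid-transform sketch (names assumed, and ignoring the projective geometry of the full 2D/3D case) is:

```python
import numpy as np

def target_registration_error(targets, R_est, t_est, R_gold, t_gold):
    """TRE sketch: per-target distance between points mapped by an evaluated
    rigid registration (R_est, t_est) and by the gold-standard transform."""
    mapped_est = targets @ R_est.T + t_est
    mapped_gold = targets @ R_gold.T + t_gold
    return np.linalg.norm(mapped_est - mapped_gold, axis=1)
```

Crucially, TRE is evaluated at targets not used to drive the registration, which is why the paper reports it separately from the projection distance errors.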

Pawiro, Supriyanto; Markelj, Primoz; Gendrin, Christelle; Figl, Michael; Stock, Markus; Bloch, Christoph; Weber, Christoph; Unger, Ewald; N鯾auer, Iris; Kainberger, Franz; Bergmeister, Helga; Georg, Dietmar; Bergmann, Helmar; Birkfellner, Wolfgang

2010-02-01

393

Registration of 3-D images using weighted geometrical features  

SciTech Connect

In this paper, the authors present a weighted geometrical features (WGF) registration algorithm. Its efficacy is demonstrated by combining points and a surface. The technique is an extension of Besl and McKay's iterative closest point (ICP) algorithm. The authors use the WGF algorithm to register X-ray computed tomography (CT) and T2-weighted magnetic resonance (MR) volume head images acquired from eleven patients that underwent craniotomies in a neurosurgical clinical trial. Each patient had five external markers attached to transcutaneous posts screwed into the outer table of the skull. The authors define registration error as the distance between positions of corresponding markers that are not used for registration. The CT and MR images are registered using fiducial points (marker positions) only, a surface only, and various weighted combinations of points and a surface. The CT surface is derived from contours corresponding to the inner surface of the skull. The MR surface is derived from contours corresponding to the cerebrospinal fluid (CSF)-dura interface. Registration using points and a surface is found to be significantly more accurate than registration using only points or a surface.
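The point-based component of such a registration has a closed-form solution that ICP-style algorithms apply repeatedly. The sketch below shows only this paired-points step (the Arun/Kabsch SVD method); the surface-matching and weighting machinery of the WGF algorithm is omitted, and names are assumptions:

```python
import numpy as np

def rigid_register(src, dst):
    """Closed-form least-squares rigid alignment of paired 3D points
    (Arun/Kabsch SVD method), the core step inside ICP-style algorithms:
    returns R, t such that dst ~= src @ R.T + t."""
    sc, dc = src.mean(axis=0), dst.mean(axis=0)
    H = (src - sc).T @ (dst - dc)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dc - R @ sc
    return R, t
```

ICP alternates this closed-form solve with re-finding closest-point correspondences; a weighted variant simply inserts per-feature weights into the centroids and the matrix H.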

Maurer, C.R. Jr.; Aboutanos, G.B.; Dawant, B.M.; Maciunas, R.J.; Fitzpatrick, J.M. [Vanderbilt Univ., Nashville, TN (United States)]

1996-12-01

394

Modeling of the rhodopsin bleaching with variational analysis of retinal images  

E-print Network

Modeling of the rhodopsin bleaching with variational analysis of retinal images. ... density and modeling the bleaching kinetics. This work supports the characterization and detection ... the bleaching parameters allow certain elements in the retinal image to be separated and classified.

Ehler, Martin

395

3D imaging of microbial biofilms: integration of synchrotron imaging and an interactive visualization interface.  

PubMed

Understanding the structure of microbial biofilms and other complex microbial communities is now possible through x-ray microtomography imaging. Feature detection and image processing for this type of data focuses on efficiently identifying and segmenting biofilm biomass in the datasets. These datasets are very large, and segmentation often requires manual intervention due to low contrast between objects and high noise levels. New software is required for the effective interpretation and analysis of such data. This work describes the development of software for analyzing and visualizing high-resolution x-ray microtomography datasets. Major functionalities include read/write with multiple popular file formats, down-sampling large datasets to generate quick-views on low-power computers, image processing, and generating high quality output images and videos. These capabilities have been wrapped into a new interactive software toolkit, BiofilmViewer. A major focus of our work is to facilitate data transfer and to utilize the capabilities of existing powerful visualization and analytical tools including MATLAB, ImageJ, Paraview, Chimera, Vaa3D, Cell Profiler, Icy, BioImageXD, and Drishti. PMID:25570697

Thomas, Mathew; Marshall, Matthew J; Miller, Erin A; Kuprat, Andrew P; Kleese-van Dam, Kerstin; Carson, James P

2014-01-01

396

Towards a 3D-representation of microcalcification clusters using images of digital mammographic units  

Microsoft Academic Search

Mammography is a widespread imaging technique for the early detection of breast cancer. Microcalcification clusters, visible in X-ray images, are important indicators for the diagnosis. In the past, many image processing methods were developed to detect and to classify lesions as being malignant or benign using only cluster data extracted from the 2D-images. However, a microcalcification cluster is a 3D-entity

Ch. Daul; R. Graebling; D. Wolf

2005-01-01

397

3D reconstruction of the optic nerve head using stereo fundus images for computer-aided diagnosis of glaucoma  

NASA Astrophysics Data System (ADS)

The shape of the optic nerve head (ONH) is reconstructed automatically from stereo fundus color images by a robust stereo matching algorithm, which is needed for a quantitative estimate of the amount of nerve fiber loss in patients with glaucoma. Compared to natural scene stereo, fundus images are noisy because of the limits on illumination conditions and imperfections of the optics of the eye, posing challenges to conventional stereo matching approaches. In this paper, multi-scale pixel feature vectors that are robust to noise are formulated using a combination of both pixel intensity and gradient features in scale space. Feature vectors associated with potential correspondences are compared with a disparity-based matching score. The deep structures of the optic disc are reconstructed with a stack of disparity estimates in scale space. Optical coherence tomography (OCT) data was collected at the same time, and depth information from 3D segmentation was registered with the stereo fundus images to provide the ground truth for performance evaluation. In experiments, the proposed algorithm produces estimates of the shape of the ONH that are close to the OCT-based shape, and it shows great potential to support computer-aided diagnosis of glaucoma and other related retinal diseases.
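The disparity search underlying any such matcher can be illustrated with the simplest baseline: fixed-window sum-of-squared-differences block matching. The paper's multi-scale feature vectors are specifically designed to be more robust than this; the sketch below, with assumed names, shows only the underlying search:

```python
import numpy as np

def ssd_disparity(left, right, max_disp=16, block=5):
    """Baseline block-matching stereo: for each pixel, pick the horizontal
    disparity minimizing the sum of squared differences over a window."""
    H, W = left.shape
    r = block // 2
    disp = np.zeros((H, W), dtype=int)
    for y in range(r, H - r):
        for x in range(r, W - r):
            ref = left[y - r:y + r + 1, x - r:x + r + 1]
            best_cost, best_d = np.inf, 0
            for d in range(0, min(max_disp, x - r) + 1):
                cand = right[y - r:y + r + 1, x - d - r:x - d + r + 1]
                cost = np.sum((ref - cand) ** 2)
                if cost < best_cost:
                    best_cost, best_d = cost, d
            disp[y, x] = best_d
    return disp
```

On noisy fundus pairs, raw intensity SSD of this kind is exactly what breaks down, which motivates replacing the window of intensities with scale-space feature vectors before scoring candidates.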

Tang, Li; Kwon, Young H.; Alward, Wallace L. M.; Greenlee, Emily C.; Lee, Kyungmoo; Garvin, Mona K.; Abràmoff, Michael D.

2010-03-01

398

Terahertz Lasers Reveal Information for 3D Images  

NASA Technical Reports Server (NTRS)

After taking off her shoes and jacket, she places them in a bin. She then takes her laptop out of its case and places it in a separate bin. As the items move through the x-ray machine, the woman waits for a sign from security personnel to pass through the metal detector. Today, she was lucky; she did not encounter any delays. The man behind her, however, was asked to step inside a large circular tube, raise his hands above his head, and have his whole body scanned. If you have ever witnessed a full-body scan at the airport, you may have witnessed terahertz imaging. Terahertz wavelengths are located between microwave and infrared on the electromagnetic spectrum. When exposed to these wavelengths, certain materials such as clothing, thin metal, sheet rock, and insulation become transparent. At airports, terahertz radiation can illuminate guns, knives, or explosives hidden underneath a passenger's clothing. At NASA's Kennedy Space Center, terahertz wavelengths have assisted in the inspection of materials like insulating foam on the external tanks of the now-retired space shuttle. "The foam we used on the external tank was a little denser than Styrofoam, but not much," says Robert Youngquist, a physicist at Kennedy. The problem, he explains, was that "we lost a space shuttle by having a chunk of foam fall off from the external fuel tank and hit the orbiter." To uncover any potential defects in the foam covering, such as voids or air pockets, that could keep the material from staying in place, NASA employed terahertz imaging to see through the foam. For many years, the technique ensured the integrity of the material on the external tanks.

2013-01-01

399

Definition and construction of a 3D compact representation of image sequences  

NASA Astrophysics Data System (ADS)

This paper tackles the 3D model-based representation of video sequences of real scenes, focusing on sequences acquired by a camera moving in a static scene. 3D models can be used advantageously for compression purposes. Moreover, they are well suited to interactive viewpoint synthesis. The objective here is to build a compact 3D-based representation of a given video sequence. It first requires establishing correspondences between images, from which viewpoint parameters and depth maps are estimated. The representation is then built by selecting from the image sequence the data that are necessary and sufficient to reconstruct the sequence at a given quality level. This paper presents this second part of the system. Our representation is a structured view-dependent 3D model composed of an ordered set of rectangular patches describing 2.5D regions (flag + texture + depth for each pixel) with attached viewpoint parameter sets.

Nicolas, Yannick; Robert, Philippe

2000-05-01

400

3D topography of biologic tissue by multiview imaging and structured light illumination  

NASA Astrophysics Data System (ADS)

Obtaining three-dimensional (3D) information of biologic tissue is important in many medical applications. This paper presents two methods for reconstructing 3D topography of biologic tissue: multiview imaging and structured light illumination. For each method, the working principle is introduced, followed by experimental validation on a diabetic foot model. To compare the performance characteristics of these two imaging methods, a coordinate measuring machine (CMM) is used as a standard control. The wound surface topography of the diabetic foot model is measured by multiview imaging and structured light illumination methods respectively and compared with the CMM measurements. The comparison results show that the structured light illumination method is a promising technique for 3D topographic imaging of biologic tissue.

Liu, Peng; Zhang, Shiwu; Xu, Ronald

2014-02-01