These are representative sample records from Science.gov related to your search topic.
For comprehensive and current results, perform a real-time search at Science.gov.
1

Exact surface registration of retinal surfaces from 3-d optical coherence tomography images.  

PubMed

Nonrigid registration of optical coherence tomography (OCT) images is an important problem in studying eye diseases, evaluating the effect of pharmaceuticals in treating vision loss, and performing group-wise cross-sectional analysis. High dimensional nonrigid registration algorithms required for cross-sectional and longitudinal analysis are still being developed for accurate registration of OCT image volumes, with the speckle noise in images presenting a challenge for registration. Development of algorithms for segmentation of OCT images to generate surface models of retinal layers has advanced considerably and several algorithms are now available that can segment retinal OCT images into constituent retinal surfaces. Important morphometric measurements could be extracted if an accurate surface registration algorithm for registering retinal surfaces onto corresponding template surfaces were available. In this paper, we present a novel method to perform multiple and simultaneous retinal surface registration, targeted to registering surfaces extracted from ocular volumetric OCT images. This enables a point-to-point correspondence (homology) between template and subject surfaces, allowing for a direct, vertex-wise comparison of morphometric measurements across subject groups. We demonstrate that this approach can be used to localize and analyze regional changes in choroidal and nerve fiber layer thickness among healthy and glaucomatous subjects, allowing for cross-sectional, population-wise analysis. We also demonstrate the method's ability to track longitudinal changes in optic nerve head morphometry, allowing for within-individual tracking of morphometric changes. This method can also, in the future, be used as a precursor to 3-D OCT image registration to better initialize nonrigid image registration algorithms closer to the desired solution. PMID:25312906
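
A minimal sketch of what the vertex-wise (homologous) comparison enables once surfaces are registered to a common template: per-vertex thickness and a group-wise difference map. Array shapes, names, and the synthetic data are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch, assuming homologous template/subject surfaces are available
# as (V, 3) vertex arrays in corresponding order. Not the authors' implementation.
import numpy as np

def vertexwise_thickness(top_surface, bottom_surface):
    """Per-vertex thickness: distance between paired points on two surfaces."""
    return np.linalg.norm(top_surface - bottom_surface, axis=1)

def groupwise_difference(thickness_group_a, thickness_group_b):
    """Vertex-wise mean difference between two subject groups; possible only
    because every subject is mapped onto the same template mesh."""
    return thickness_group_b.mean(axis=0) - thickness_group_a.mean(axis=0)

# Synthetic example: 10 subjects per group, 5000 template vertices (thickness in um).
rng = np.random.default_rng(0)
healthy = rng.normal(200.0, 15.0, size=(10, 5000))
glaucoma = rng.normal(180.0, 15.0, size=(10, 5000))
delta = groupwise_difference(healthy, glaucoma)   # per-vertex change map
print(delta.shape, round(float(delta.mean()), 2))
```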

Lee, Sieun; Lebed, Evgeniy; Sarunic, Marinko V; Beg, Mirza Faisal

2015-02-01

2

3D OCT imaging in clinical settings: toward quantitative measurements of retinal structures  

NASA Astrophysics Data System (ADS)

The acquisition speed of current FD-OCT (Fourier Domain - Optical Coherence Tomography) instruments allows rapid screening of three-dimensional (3D) volumes of human retinas in clinical settings. To take advantage of this ability requires software used by physicians to be capable of displaying and accessing volumetric data as well as supporting post processing in order to access important quantitative information such as thickness maps and segmented volumes. We describe our clinical FD-OCT system used to acquire 3D data from the human retina over the macula and optic nerve head. B-scans are registered to remove motion artifacts and post-processed with customized 3D visualization and analysis software. Our analysis software includes standard 3D visualization techniques along with a machine learning support vector machine (SVM) algorithm that allows a user to semi-automatically segment different retinal structures and layers. Our program makes possible measurements of the retinal layer thickness as well as volumes of structures of interest, despite the presence of noise and structural deformations associated with retinal pathology. Our software has been tested successfully in clinical settings for its efficacy in assessing 3D retinal structures in healthy as well as diseased cases. Our tool facilitates diagnosis and treatment monitoring of retinal diseases.
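
A hedged sketch of the semi-automatic, SVM-based layer segmentation idea described above, using scikit-learn: user-labeled voxels train a classifier that is then applied to the whole volume. The feature set (intensity, local mean, depth) is an illustrative assumption, not the published features.

```python
# Hedged sketch: semi-automatic voxel classification with an SVM.
# Feature choices (intensity, local mean, depth) are assumptions.
import numpy as np
from scipy.ndimage import uniform_filter
from sklearn.svm import SVC

def voxel_features(volume):
    """Per-voxel features for a (Z, Y, X) OCT volume."""
    local_mean = uniform_filter(volume.astype(float), size=3)
    depth = np.broadcast_to(np.arange(volume.shape[0])[:, None, None],
                            volume.shape)
    return np.stack([volume.ravel(), local_mean.ravel(), depth.ravel()], axis=1)

def train_and_segment(volume, labeled_idx, labels):
    """labeled_idx: flat indices of user-marked voxels; labels: their classes."""
    feats = voxel_features(volume)
    clf = SVC(kernel="rbf", C=1.0, gamma="scale")
    clf.fit(feats[labeled_idx], labels)
    return clf.predict(feats).reshape(volume.shape)   # label volume
```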

Zawadzki, Robert J.; Fuller, Alfred R.; Zhao, Mingtao; Wiley, David F.; Choi, Stacey S.; Bower, Bradley A.; Hamann, Bernd; Izatt, Joseph A.; Werner, John S.

2006-02-01

3

Adaptive-optics optical coherence tomography for high-resolution and high-speed 3D retinal in vivo imaging  

PubMed Central

We have combined Fourier-domain optical coherence tomography (FD-OCT) with a closed-loop adaptive optics (AO) system using a Hartmann-Shack wavefront sensor and a bimorph deformable mirror. The adaptive optics system measures and corrects the wavefront aberration of the human eye for improved lateral resolution (~4 µm) of retinal images, while maintaining the high axial resolution (~6 µm) of stand-alone OCT. The AO-OCT instrument enables the three-dimensional (3D) visualization of different retinal structures in vivo with high 3D resolution (4 × 4 × 6 µm). Using this system, we have demonstrated the ability to image microscopic blood vessels and the cone photoreceptor mosaic. PMID:19096728

Zawadzki, Robert J.; Jones, Steven M.; Olivier, Scot S.; Zhao, Mingtao; Bower, Bradley A.; Izatt, Joseph A.; Choi, Stacey; Laut, Sophie; Werner, John S.

2008-01-01

4

3D OCT imaging in clinical settings: toward quantitative measurements of retinal structures  

Microsoft Academic Search

The acquisition speed of current FD-OCT (Fourier Domain - Optical Coherence Tomography) instruments allows rapid screening of three-dimensional (3D) volumes of human retinas in clinical settings. To take advantage of this ability requires software used by physicians to be capable of displaying and accessing volumetric data as well as supporting post processing in order to access important quantitative information such

Robert J. Zawadzki; Alfred R. Fuller; Mingtao Zhao; David F. Wiley; Stacey S. Choi; Bradley A. Bower; Bernd Hamann; Joseph A. Izatt; John S. Werner

2006-01-01

5

3-D segmentation of retinal blood vessels in spectral-domain OCT volumes of the optic nerve head  

NASA Astrophysics Data System (ADS)

Segmentation of retinal blood vessels can provide important information for detecting and tracking retinal vascular diseases including diabetic retinopathy, arterial hypertension, arteriosclerosis and retinopathy of prematurity (ROP). Many studies on 2-D segmentation of retinal blood vessels from a variety of medical images have been performed. However, 3-D segmentation of retinal blood vessels from spectral-domain optical coherence tomography (OCT) volumes, which is capable of providing geometrically accurate vessel models, to the best of our knowledge, has not been previously studied. The purpose of this study is to develop and evaluate a method that can automatically detect 3-D retinal blood vessels from spectral-domain OCT scans centered on the optic nerve head (ONH). The proposed method utilized a fast multiscale 3-D graph search to segment retinal surfaces as well as a triangular mesh-based 3-D graph search to detect retinal blood vessels. An experiment on 30 ONH-centered OCT scans (15 right eye scans and 15 left eye scans) from 15 subjects was performed, and the mean unsigned error in 3-D of the computer segmentations compared with the independent standard obtained from a retinal specialist was 3.4 +/- 2.5 voxels (0.10 +/- 0.07 mm).
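
As a worked example of the reported error metric, the sketch below computes the mean unsigned 3-D distance between corresponding computer and reference points in voxels and converts it to millimetres; the voxel size (~0.03 mm, inferred from 3.4 voxels ≈ 0.10 mm) is an assumption.

```python
# Worked example of the error metric, assuming matched point coordinates in
# voxel units; the voxel size is inferred from the quoted numbers (assumption).
import numpy as np

def mean_unsigned_error(pred_points, ref_points, voxel_size_mm=0.029):
    """pred_points, ref_points: (N, 3) corresponding positions in voxels."""
    err_vox = np.linalg.norm(pred_points - ref_points, axis=1)
    return err_vox.mean(), err_vox.mean() * voxel_size_mm   # (voxels, mm)
```

With the quoted 3.4-voxel error, this conversion gives roughly 0.10 mm, consistent with the figure reported above.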

Lee, Kyungmoo; Abràmoff, Michael D.; Niemeijer, Meindert; Garvin, Mona K.; Sonka, Milan

2010-03-01

6

Retinal imaging in uveitis  

PubMed Central

Ancillary investigations are the backbone of uveitis workup for posterior segment inflammations. They help in establishing the differential diagnosis, confirming a diagnosis by ruling out certain pathologies, and are a useful aid in monitoring response to therapy during follow-up. These investigations include fundus photography (including ultra-wide-field angiography), fundus autofluorescence imaging, fluorescein angiography, optical coherence tomography and multimodal imaging. This review aims to be an overview describing the role of these retinal investigations for posterior uveitis. PMID:24843301

Gupta, Vishali; Al-Dhibi, Hassan A.; Arevalo, J. Fernando

2014-01-01

7

Minimizing camera-eye optical aberrations during the 3D reconstruction of retinal structures  

NASA Astrophysics Data System (ADS)

3D reconstruction of blood vessels is a powerful visualization tool for physicians, since it allows them to refer to qualitative representation of their subject of study. In this paper we propose a 3D reconstruction method of retinal vessels from fundus images. The reconstruction method propose herein uses images of the same retinal structure in epipolar geometry. Images are preprocessed by RISA system for segmenting blood vessels and obtaining feature points for correspondences. The correspondence points process is solved using correlation. The LMedS analysis and Graph Transformation Matching algorithm are used for outliers suppression. Camera projection matrices are computed with the normalized eight point algorithm. Finally, we retrieve 3D position of the retinal tree points by linear triangulation. In order to increase the power of visualization, 3D tree skeletons are represented by surfaces via generalized cylinders whose radius correspond to morphological measurements obtained by RISA. In this paper the complete calibration process including the fundus camera and the optical properties of the eye, the so called camera-eye system is proposed. On one hand, the internal parameters of the fundus camera are obtained by classical algorithms using a reference pattern. On the other hand, we minimize the undesirable efects of the aberrations induced by the eyeball optical system assuming that contact enlarging lens corrects astigmatism, spherical and coma aberrations are reduced changing the aperture size and eye refractive errors are suppressed adjusting camera focus during image acquisition. Evaluation of two self-calibration proposals and results of 3D blood vessel surface reconstruction are presented.
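
A hedged sketch of the epipolar pipeline outlined above (correspondences, fundamental matrix via the eight-point algorithm, projection matrices, linear triangulation), written with OpenCV for brevity. The RISA segmentation, LMedS outlier rejection, Graph Transformation Matching, and the camera-eye calibration are not reproduced; function and variable names are assumptions.

```python
# Hedged sketch with OpenCV: matched vessel points in two fundus views ->
# fundamental matrix (8-point) -> relative pose -> linear triangulation.
# Assumes pts1, pts2 are (N, 2) float arrays and K is the 3x3 intrinsic matrix.
import numpy as np
import cv2

def reconstruct_points(pts1, pts2, K):
    F, _ = cv2.findFundamentalMat(pts1, pts2, cv2.FM_8POINT)
    E = K.T @ F @ K                                    # essential matrix
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K)     # relative camera pose
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])  # reference camera
    P2 = K @ np.hstack([R, t])                         # second camera
    X_h = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)  # homogeneous 4xN
    return (X_h[:3] / X_h[3]).T                        # (N, 3) points in 3-D
```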

Aldana-Iuit, Javier; Martinez-Perez, M. Elena; Espinosa-Romero, Arturo; Diaz-Uribe, Rufino

2010-05-01

8

Retinal image blood vessel segmentation  

Microsoft Academic Search

The appearance and structure of blood vessels in retinal images play an important role in diagnosis of eye diseases. This paper proposes a method for segmentation of blood vessels in color retinal images. We present a method that uses 2-D Gabor wavelet to enhance the vascular pattern. We locate and segment the blood vessels using adaptive thresholding. The technique is
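
A hedged sketch of the two-step idea named in the abstract: enhance the vascular pattern with a bank of 2-D Gabor kernels, then binarize with adaptive thresholding. Kernel sizes and threshold parameters are illustrative assumptions, not the authors' settings.

```python
# Hedged sketch: Gabor enhancement of vessels followed by adaptive thresholding.
import numpy as np
import cv2

def segment_vessels(green_channel):
    """green_channel: 8-bit fundus channel with the best vessel contrast."""
    enhanced = np.zeros(green_channel.shape, dtype=np.float32)
    for theta in np.arange(0, np.pi, np.pi / 8):          # 8 orientations
        kernel = cv2.getGaborKernel((15, 15), 3.0, theta, 8.0, 0.5)
        response = cv2.filter2D(green_channel, cv2.CV_32F, kernel)
        enhanced = np.maximum(enhanced, response)         # keep strongest response
    enhanced = cv2.normalize(enhanced, None, 0, 255,
                             cv2.NORM_MINMAX).astype(np.uint8)
    return cv2.adaptiveThreshold(enhanced, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                                 cv2.THRESH_BINARY, 25, -5)
```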

M. Usman Akram; Anam Tariq; Shoab A. Khan

2009-01-01

9

Consistent stylization of stereoscopic 3D images  

Microsoft Academic Search

The application of stylization filters to photographs is common, Instagram being a popular recent example. These image manipulation applications work great for 2D images. However, stereoscopic 3D cameras are increasingly available to consumers (Nintendo 3DS, Fuji W3 3D, HTC Evo 3D). How will users apply these same stylizations to stereoscopic images?

Lesley Northam; Paul Asente; Craig S. Kaplan

2012-01-01

10

A New Classification Mechanism for Retinal Images  

Microsoft Academic Search

In this paper, we propose a classification mechanism for retinal images so that the retinal images can be successfully distinguished from nonretinal images, egg yolk images for example. The proposed classification mechanism consists of two procedures: training and classification. The image features of retinal images and nonretinal images are extracted at the beginning of the training procedure to make sure

Chin-Chen Chang; Yen-Chang Chen; Chia-Chen Lin

2009-01-01

11

Analysis of posterior retinal layers in spectral optical coherence tomography images of the normal retina and retinal pathologies  

E-print Network

We present a computationally efficient, semiautomated method for analysis of posterior retinal layers in three-dimensional (3-D) images obtained by spectral optical coherence tomography (SOCT). The method consists of two ...

Szkulmowski, Maciej

12

Retinal Image Quality During Accommodation  

PubMed Central

Purpose We asked if retinal image quality is maximum during accommodation, or sub-optimal due to accommodative error, when subjects perform an acuity task. Methods Subjects viewed a monochromatic (552 nm), high-contrast letter target placed at various viewing distances. Wavefront aberrations of the accommodating eye were measured near the endpoint of an acuity staircase paradigm. Refractive state, defined as the optimum target vergence for maximising retinal image quality, was computed by through-focus wavefront analysis to find the power of the virtual correcting lens that maximizes visual Strehl ratio. Results Despite changes in ocular aberrations and pupil size during binocular viewing, retinal image quality and visual acuity typically remain high for all target vergences. When accommodative errors lead to sub-optimal retinal image quality, acuity and measured image quality both decline. However, the effects of accommodation errors on visual acuity are mitigated by pupillary constriction associated with accommodation and binocular convergence, and by binocular summation of dissimilar retinal image blur. Under monocular viewing conditions some subjects displayed significant accommodative lag that reduced visual performance, an effect that was exacerbated by pharmacological dilation of the pupil. Conclusions Spurious measurement of accommodative error can be avoided when the image quality metric used to determine refractive state is compatible with the focusing criteria used by the visual system to control accommodation. Real focusing errors of the accommodating eye do not necessarily produce a reliably measurable loss of image quality or clinically significant loss of visual performance, probably because of increased depth-of-focus due to pupil constriction. When retinal image quality is close to the maximum achievable (given the eye's higher-order aberrations), acuity is also near maximum. A combination of accommodative lag, reduced image quality, and reduced visual function may be a useful sign for diagnosing functionally-significant accommodative errors indicating the need for therapeutic intervention. PMID:23786386

López-Gil, N.; Martin, J.; Liu, T.; Bradley, A.; Díaz-Muñoz, D.; Thibos, L.

2013-01-01

13

ATR for 3D medical imaging  

NASA Astrophysics Data System (ADS)

This paper presents a novel concept of Automatic Target Recognition (ATR) for 3D medical imaging. Such 3D imaging can be obtained from X-ray Computerized Tomography (CT), Magnetic Resonance Imaging (MRI), Positron Emission Tomography (PET), Ultrasonography (USG), functional MRI, and others. In the case of CT, such 3D imaging can be derived from 3D-mapping of X-ray linear attenuation coefficients, related to 3D Fourier transform of Radon transform, starting from frame segmentation (or contour definition) into an object and background. Then, 3D template matching is provided, based on inertial tensor invariants, adopted from rigid body mechanics, by comparing the mammographic data base with a real object of interest, such as a malignant breast tumor. The method is more general than CAD breast mammography.
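
A hedged sketch of template matching by inertial-tensor invariants, the rigid-body idea mentioned above: eigenvalues of an object's second-moment (inertia) tensor are invariant to rotation and translation, so they can serve as a compact shape signature for comparing a template against a candidate region. Names and the normalization are assumptions.

```python
# Hedged sketch: rotation/translation-invariant shape signature from the
# second-moment (inertia) tensor of a segmented 3-D object.
import numpy as np

def inertia_invariants(mask):
    """mask: boolean 3-D array marking the segmented object."""
    coords = np.argwhere(mask).astype(float)
    coords -= coords.mean(axis=0)                  # move origin to the centroid
    tensor = coords.T @ coords / len(coords)       # 3x3 second-moment tensor
    eigvals = np.sort(np.linalg.eigvalsh(tensor))  # invariant to rotation
    return eigvals / eigvals.sum()                 # normalize out overall scale

def match_score(template_mask, candidate_mask):
    """Smaller value = closer shape match between template and candidate."""
    return np.linalg.norm(inertia_invariants(template_mask)
                          - inertia_invariants(candidate_mask))
```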

Jannson, Tomasz; Kostrzewski, Andrew; Paki Amouzou, P.

2007-09-01

14

3D Imaging Of Wet Granular Matter  

E-print Network

3D Imaging of Wet Granular Matter, with application to penetrometer insertion. Leonard Goff; advisor: Dr. Wolfgang Losert. Investigates how a stable configuration of granular matter fails. [Poster extract; full abstract not recoverable.]

Anlage, Steven

15

3D ultrafast ultrasound imaging in vivo  

NASA Astrophysics Data System (ADS)

Very high frame rate ultrasound imaging has recently allowed for the extension of the applications of echography to new fields of study such as the functional imaging of the brain, cardiac electrophysiology, and the quantitative imaging of the intrinsic mechanical properties of tumors, to name a few, non-invasively and in real time. In this study, we present the first implementation of Ultrafast Ultrasound Imaging in 3D based on the use of either diverging or plane waves emanating from a sparse virtual array located behind the probe. It achieves high contrast and resolution while maintaining imaging rates of thousands of volumes per second. A customized portable ultrasound system was developed to sample 1024 independent channels and to drive a 32 × 32 matrix-array probe. Its ability to track in 3D transient phenomena occurring in the millisecond range within a single ultrafast acquisition was demonstrated for 3D Shear-Wave Imaging, 3D Ultrafast Doppler Imaging, and, finally, 3D Ultrafast combined Tissue and Flow Doppler Imaging. The propagation of shear waves was tracked in a phantom and used to characterize its stiffness. 3D Ultrafast Doppler was used to obtain 3D maps of Pulsed Doppler, Color Doppler, and Power Doppler quantities in a single acquisition and revealed, at thousands of volumes per second, the complex 3D flow patterns occurring in the ventricles of the human heart during an entire cardiac cycle, as well as the 3D in vivo interaction of blood flow and wall motion during the pulse wave in the carotid at the bifurcation. This study demonstrates the potential of 3D Ultrafast Ultrasound Imaging for the 3D mapping of stiffness, tissue motion, and flow in humans in vivo and promises new clinical applications of ultrasound with reduced intra- and inter-observer variability.

Provost, Jean; Papadacci, Clement; Esteban Arango, Juan; Imbault, Marion; Fink, Mathias; Gennisson, Jean-Luc; Tanter, Mickael; Pernot, Mathieu

2014-10-01

16

3D ultrafast ultrasound imaging in vivo.  

PubMed

Very high frame rate ultrasound imaging has recently allowed for the extension of the applications of echography to new fields of study such as the functional imaging of the brain, cardiac electrophysiology, and the quantitative imaging of the intrinsic mechanical properties of tumors, to name a few, non-invasively and in real time. In this study, we present the first implementation of Ultrafast Ultrasound Imaging in 3D based on the use of either diverging or plane waves emanating from a sparse virtual array located behind the probe. It achieves high contrast and resolution while maintaining imaging rates of thousands of volumes per second. A customized portable ultrasound system was developed to sample 1024 independent channels and to drive a 32 × 32 matrix-array probe. Its ability to track in 3D transient phenomena occurring in the millisecond range within a single ultrafast acquisition was demonstrated for 3D Shear-Wave Imaging, 3D Ultrafast Doppler Imaging, and, finally, 3D Ultrafast combined Tissue and Flow Doppler Imaging. The propagation of shear waves was tracked in a phantom and used to characterize its stiffness. 3D Ultrafast Doppler was used to obtain 3D maps of Pulsed Doppler, Color Doppler, and Power Doppler quantities in a single acquisition and revealed, at thousands of volumes per second, the complex 3D flow patterns occurring in the ventricles of the human heart during an entire cardiac cycle, as well as the 3D in vivo interaction of blood flow and wall motion during the pulse wave in the carotid at the bifurcation. This study demonstrates the potential of 3D Ultrafast Ultrasound Imaging for the 3D mapping of stiffness, tissue motion, and flow in humans in vivo and promises new clinical applications of ultrasound with reduced intra- and inter-observer variability. PMID:25207828

Provost, Jean; Papadacci, Clement; Arango, Juan Esteban; Imbault, Marion; Fink, Mathias; Gennisson, Jean-Luc; Tanter, Mickael; Pernot, Mathieu

2014-10-01

17

Automated three-dimensional choroidal vessel segmentation of 3D 1060 nm OCT retinal data  

PubMed Central

A fully automated, robust vessel segmentation algorithm has been developed for choroidal OCT, employing multiscale 3D edge filtering and projection of "probability cones" to determine the vessel "core", even in the tomograms with low signal-to-noise ratio (SNR). Based on the ideal vessel response after registration and multiscale filtering, with computed depth related SNR, the vessel core estimate is dilated to quantify the full vessel diameter. As a consequence, various statistics can be computed using the 3D choroidal vessel information, such as ratios of inner (smaller) to outer (larger) choroidal vessels or the absolute/relative volume of choroid vessels. Choroidal vessel quantification can be displayed in various forms, focused and averaged within a special region of interest, or analyzed as the function of image depth. In this way, the proposed algorithm enables unique visualization of choroidal watershed zones, as well as the vessel size reduction when investigating the choroid from the sclera towards the retinal pigment epithelium (RPE). To the best of our knowledge, this is the first time that an automatic choroidal vessel segmentation algorithm is successfully applied to 1060 nm 3D OCT of healthy and diseased eyes. PMID:23304653
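
A hedged sketch of the volume statistics mentioned above, computed from binary masks: absolute and relative choroidal vessel volume and a small-to-large vessel ratio. Using a Euclidean distance transform as the local-diameter estimate is an assumption for illustration, not the published probability-cone method.

```python
# Hedged sketch of vessel statistics from binary masks; the distance-transform
# diameter estimate is an assumption, not the published method.
import numpy as np
from scipy import ndimage

def choroid_vessel_stats(vessel_mask, choroid_mask, voxel_mm3, diam_cutoff_vox=6):
    abs_volume_mm3 = vessel_mask.sum() * voxel_mm3
    rel_volume = vessel_mask.sum() / max(choroid_mask.sum(), 1)
    radius = ndimage.distance_transform_edt(vessel_mask)   # distance to boundary
    large = vessel_mask & (2 * radius >= diam_cutoff_vox)   # larger/outer vessels
    small = vessel_mask & ~large                            # smaller/inner vessels
    ratio_small_to_large = small.sum() / max(large.sum(), 1)
    return abs_volume_mm3, rel_volume, ratio_small_to_large
```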

Kajić, Vedran; Esmaeelpour, Marieh; Glittenberg, Carl; Kraus, Martin F.; Honegger, Joachim; Othara, Richu; Binder, Susanne; Fujimoto, James G.; Drexler, Wolfgang

2012-01-01

18

Fundamentals of 3D Laplacian Image pyramids  

E-print Network

Lecture notes (INF555, Fundamentals of 3D), Lecture 9: Laplacian image pyramids. Topics include Expectation-Maximization, interpreting Fourier spectra, residual reconstruction, and image pyramids as precursors of wavelets. [Slide extract; full text not recoverable.]

Nielsen, Frank

19

Intra-retinal layer segmentation of 3D optical coherence tomography using coarse grained diffusion map  

PubMed Central

Optical coherence tomography (OCT) is a powerful and noninvasive method for retinal imaging. In this paper, we introduce a fast segmentation method based on a new variant of spectral graph theory named diffusion maps. The research is performed on spectral domain (SD) OCT images depicting macular and optic nerve head appearance. The presented approach does not require edge-based image information in localizing most of boundaries and relies on regional image texture. Consequently, the proposed method demonstrates robustness in situations of low image contrast or poor layer-to-layer image gradients. Diffusion mapping applied to 2D and 3D OCT datasets is composed of two steps, one for partitioning the data into important and less important sections, and another one for localization of internal layers. In the first step, the pixels/voxels are grouped in rectangular/cubic sets to form a graph node. The weights of the graph are calculated based on geometric distances between pixels/voxels and differences of their mean intensity. The first diffusion map clusters the data into three parts, the second of which is the area of interest. The other two sections are eliminated from the remaining calculations. In the second step, the remaining area is subjected to another diffusion map assessment and the internal layers are localized based on their textural similarities. The proposed method was tested on 23 datasets from two patient groups (glaucoma and normals). The mean unsigned border positioning error (mean ± SD) was 8.52 ± 3.13 and 7.56 ± 2.95 µm for the 2D and 3D methods, respectively. PMID:23837966
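
A hedged sketch of a coarse-grained diffusion map as described: nodes formed from grouped pixels/voxels, affinities combining geometric distance and mean-intensity difference, and the leading eigenvectors of the row-normalized kernel as diffusion coordinates. Parameter values and names are assumptions, and the two-stage clustering of the published pipeline is not reproduced.

```python
# Hedged sketch of a coarse-grained diffusion map over grouped pixels/voxels.
import numpy as np

def diffusion_map(node_positions, node_intensities, sigma_x=4.0, sigma_i=10.0,
                  n_components=3):
    """node_positions: (N, d) block centers; node_intensities: (N,) block means."""
    dx = np.linalg.norm(node_positions[:, None] - node_positions[None], axis=-1)
    di = np.abs(node_intensities[:, None] - node_intensities[None])
    W = np.exp(-(dx / sigma_x) ** 2 - (di / sigma_i) ** 2)   # affinity matrix
    P = W / W.sum(axis=1, keepdims=True)                     # row-stochastic kernel
    eigvals, eigvecs = np.linalg.eig(P)
    order = np.argsort(-eigvals.real)[1:n_components + 1]    # skip trivial eigenvector
    return eigvecs.real[:, order] * eigvals.real[order]      # diffusion coordinates
```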

Kafieh, Raheleh; Rabbani, Hossein; Abramoff, Michael D.; Sonka, Milan

2013-01-01

20

ICER-3D Hyperspectral Image Compression Software  

NASA Technical Reports Server (NTRS)

Software has been developed to implement the ICER-3D algorithm. ICER-3D effects progressive, three-dimensional (3D), wavelet-based compression of hyperspectral images. If a compressed data stream is truncated, the progressive nature of the algorithm enables reconstruction of hyperspectral data at fidelity commensurate with the given data volume. The ICER-3D software is capable of providing either lossless or lossy compression, and incorporates an error-containment scheme to limit the effects of data loss during transmission. The compression algorithm, which was derived from the ICER image compression algorithm, includes wavelet-transform, context-modeling, and entropy coding subalgorithms. The 3D wavelet decomposition structure used by ICER-3D exploits correlations in all three dimensions of sets of hyperspectral image data, while facilitating elimination of spectral ringing artifacts, using a technique summarized in "Improving 3D Wavelet-Based Compression of Spectral Images" (NPO-41381), NASA Tech Briefs, Vol. 33, No. 3 (March 2009), page 7a. Correlation is further exploited by a context-modeling subalgorithm, which exploits spectral dependencies in the wavelet-transformed hyperspectral data, using an algorithm that is summarized in "Context Modeler for Wavelet Compression of Hyperspectral Images" (NPO-43239), which follows this article. An important feature of ICER-3D is a scheme for limiting the adverse effects of loss of data during transmission. In this scheme, as in the similar scheme used by ICER, the spatial-frequency domain is partitioned into rectangular error-containment regions. In ICER-3D, the partitions extend through all the wavelength bands. The data in each partition are compressed independently of those in the other partitions, so that loss or corruption of data from any partition does not affect the other partitions. Furthermore, because compression is progressive within each partition, when data are lost, any data from that partition received prior to the loss can be used to reconstruct that partition at lower fidelity. By virtue of the compression improvement it achieves relative to previous means of onboard data compression, this software enables (1) increased return of hyperspectral scientific data in the presence of limits on the rates of transmission of data from spacecraft to Earth via radio communication links and/or (2) reduction in spacecraft radio-communication power and/or cost through reduction in the amounts of data required to be downlinked and stored onboard prior to downlink. The software is also suitable for compressing hyperspectral images for ground storage or archival purposes.
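
A hedged sketch of two of the ideas named above: a 3-D wavelet decomposition of a hyperspectral cube and independent processing of spatial partitions so that losing one partition cannot corrupt the others. PyWavelets stands in for ICER-3D's transform; the context modeler and entropy coder are not reproduced, and all names are assumptions.

```python
# Hedged sketch with PyWavelets: a 3-D wavelet decomposition per spatial
# partition, so each partition can be coded and lost independently.
import numpy as np
import pywt

def transform_partitions(cube, n_splits=2, wavelet="haar", level=2):
    """cube: (bands, rows, cols) hyperspectral data; partitions span all bands."""
    coded = []
    for part in np.array_split(cube, n_splits, axis=1):   # split along rows
        coeffs = pywt.wavedecn(part, wavelet=wavelet, level=level)
        coded.append(coeffs)            # quantization/entropy coding would follow
    return coded

def reconstruct(coded, wavelet="haar"):
    parts = [pywt.waverecn(c, wavelet=wavelet) for c in coded]
    return np.concatenate(parts, axis=1)
```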

Xie, Hua; Kiely, Aaron; Klimesh, Matthew; Aranki, Nazeeh

2010-01-01

21

Acquisition and applications of 3D images  

NASA Astrophysics Data System (ADS)

The moiré fringes method and its analysis, up to medical and entertainment applications, are discussed in this paper. We describe the procedure of capturing 3D images with an Inspeck Camera, a real-time 3D shape acquisition system based on structured light techniques. The method is a high-resolution one. After processing the images using a computer, the data can be used for creating fashionable laser-engraved objects with a Q-switched Nd:YAG laser. In the medical field we mention plastic surgery and the replacement of X-ray imaging, especially in pediatric use.

Sterian, Paul; Mocanu, Elena

2007-08-01

22

Integration of retinal image sequences  

NASA Astrophysics Data System (ADS)

In this paper a method for noise reduction in ocular fundus image sequences is described. The eye is the only part of the human body where the capillary network can be observed along with the arterial and venous circulation using a non invasive technique. The study of the retinal vessels is very important both for the study of the local pathology (retinal disease) and for the large amount of information it offers on systematic haemodynamics, such as hypertension, arteriosclerosis, and diabetes. In this paper a method for image integration of ocular fundus image sequences is described. The procedure can be divided in two step: registration and fusion. First we describe an automatic alignment algorithm for registration of ocular fundus images. In order to enhance vessel structures, we used a spatially oriented bank of filters designed to match the properties of the objects of interest. To evaluate interframe misalignment we adopted a fast cross-correlation algorithm. The performances of the alignment method have been estimated by simulating shifts between image pairs and by using a cross-validation approach. Then we propose a temporal integration technique of image sequences so as to compute enhanced pictures of the overall capillary network. Image registration is combined with image enhancement by fusing subsequent frames of a same region. To evaluate the attainable results, the signal-to-noise ratio was estimated before and after integration. Experimental results on synthetic images of vessel-like structures with different kind of Gaussian additive noise as well as on real fundus images are reported.
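
A hedged sketch of the register-then-fuse procedure described above: interframe shifts estimated by cross-correlation (here scikit-image's phase correlation), frames aligned, and the aligned stack averaged so that uncorrelated noise drops roughly with the square root of the number of frames. The oriented, vessel-matched filter bank of the original method is not reproduced.

```python
# Hedged sketch: align frames by phase correlation, then average to reduce noise.
import numpy as np
from scipy.ndimage import shift as nd_shift
from skimage.registration import phase_cross_correlation

def integrate_sequence(frames):
    """frames: sequence of 2-D fundus images of the same retinal region."""
    reference = frames[0].astype(float)
    aligned = [reference]
    for frame in frames[1:]:
        offset, _, _ = phase_cross_correlation(reference, frame.astype(float))
        aligned.append(nd_shift(frame.astype(float), offset))  # undo the shift
    return np.mean(aligned, axis=0)      # fused, noise-reduced image
```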

Ballerini, Lucia

1998-10-01

23

3D-Image Reconstruction in Highly Collimated 3D

E-print Network

This paper introduces a novel method for image reconstruction in 3D tomography, called Searchlight Computed Tomography. Computed Tomography (CT) is a widely used medical imaging method. Keywords: inverse problems. [Extract; full abstract not recoverable.]

Labate, Demetrio

24

Texture anisotropy in 3-D images  

Microsoft Academic Search

Two approaches to the characterization of three-dimensional (3-D) textures are presented: one based on gradient vectors and one on generalized co-occurrence matrices. They are investigated with the help of simulated data for their behavior in the presence of noise and for various values of the parameters they depend on. They are also applied to several medical volume images characterized by

Vassili A. Kovalev; Maria Petrou; Yaroslav S. Bondar

1999-01-01

25

Retinal detachment repair - series (image)  

MedlinePLUS

Retinal detachments are associated with a tear or hole in the retina through which the internal fluids of ... often caused by trauma, and the risk of retinal detachment after minor trauma, such as a blow to the ...

26

Imaging retinal mosaics in the living eye  

PubMed Central

Adaptive optics imaging of cone photoreceptors has provided unique insight into the structure and function of the human visual system and has become an important tool for both basic scientists and clinicians. Recent advances in adaptive optics retinal imaging instrumentation and methodology have allowed us to expand beyond cone imaging. Multi-wavelength and fluorescence imaging methods with adaptive optics have allowed multiple retinal cell types to be imaged simultaneously. These new methods have recently revealed rod photoreceptors, retinal pigment epithelium (RPE) cells, and the smallest retinal blood vessels. Fluorescence imaging coupled with adaptive optics has been used to examine ganglion cells in living primates. Two-photon imaging combined with adaptive optics can evaluate photoreceptor function non-invasively in the living primate retina. PMID:21390064

Rossi, E A; Chung, M; Dubra, A; Hunter, J J; Merigan, W H; Williams, D R

2011-01-01

27

3D goes digital: from stereoscopy to modern 3D imaging techniques  

NASA Astrophysics Data System (ADS)

In the 19th century, English physicist Charles Wheatstone discovered stereopsis, the basis for 3D perception. His construction of the first stereoscope established the foundation for stereoscopic 3D imaging. Since then, many optical instruments were influenced by these basic ideas. In recent decades, the advent of digital technologies revolutionized 3D imaging. Powerful readily available sensors and displays combined with efficient pre- or post-processing enable new methods for 3D imaging and applications. This paper draws an arc from basic concepts of 3D imaging to modern digital implementations, highlighting instructive examples from its 175 years of history.

Kerwien, N.

2014-11-01

28

Pattern based 3D image Steganography  

NASA Astrophysics Data System (ADS)

This paper proposes a new high capacity Steganographic scheme using 3D geometric models. The novel algorithm re-triangulates a part of a triangle mesh and embeds the secret information into newly added position of triangle meshes. Up to nine bits of secret data can be embedded into vertices of a triangle without causing any changes in the visual quality and the geometric properties of the cover image. Experimental results show that the proposed algorithm is secure, with high capacity and low distortion rate. Our algorithm also resists against uniform affine transformations such as cropping, rotation and scaling. Also, the performance of the method is compared with other existing 3D Steganography algorithms.

Thiyagarajan, P.; Natarajan, V.; Aghila, G.; Prasanna Venkatesan, V.; Anitha, R.

2013-03-01

29

Teat Morphology Characterization With 3D Imaging.  

PubMed

The objective of this study was to visualize, in a novel way, the morphological characteristics of bovine teats to gain a better understanding of the detailed teat morphology. We applied silicone casting and 3D digital imaging in order to obtain a more detailed image of the teat structures than that seen in previous studies. Teat samples from 65 dairy cows over 12 months of age were obtained from cows slaughtered at an abattoir. The teats were classified according to the teat condition scoring used in Finland and the lengths of the teat canals were measured. Silicone molds were made from the external teat surface surrounding the teat orifice and from the internal surface of the teat consisting of the papillary duct, Fürstenberg's rosette, and distal part of the teat cistern. The external and internal surface molds of 35 cows were scanned with a 3D laser scanner. The molds and the digital 3D models were used to evaluate internal and external teat surface morphology. A number of measurements were taken from the silicone molds. The 3D models reproduced the morphology of the teats accurately with high repeatability. Breed did not correlate with the teat classification score. The rosette was found to have significant variation in its size and number of mucosal folds. The internal surface morphology of the rosette did not correlate with the external surface morphology of the teat, implying that it is relatively independent of milking parameters that may impact the teat canal and the external surface of the teat. PMID:25382725

Vesterinen, Heidi M; Corfe, Ian J; Sinkkonen, Ville; Iivanainen, Antti; Jernvall, Jukka; Laakkonen, Juha

2014-11-01

30

First Results for 3D Image Segmentation with Topological Map  

E-print Network

First Results for 3D Image Segmentation with Topological Map. Alexandre Dupas and Guillaume Damiand. Segmentation of 3D images is a great challenge in many fields. Keywords: topological model, 3D image segmentation, intervoxel boundaries, combinatorial maps. [Extract; full abstract not recoverable.]

Paris-Sud XI, Université de

31

Ames Lab 101: Real-Time 3D Imaging  

ScienceCinema

Ames Laboratory scientist Song Zhang explains his real-time 3-D imaging technology. The technique can be used to create high-resolution, real-time, precise, 3-D images for use in healthcare, security, and entertainment applications.

Zhang, Song

2012-08-29

32

Computational 3D and reflectivity imaging with high photon efficiency  

E-print Network

Imaging the 3D structure and reflectivity of a scene can be done using photon-counting detectors. Traditional imagers of this type typically require hundreds of detected photons per pixel for accurate 3D and reflectivity ...

Shin, Dongeek

2014-01-01

33

Ames Lab 101: Real-Time 3D Imaging  

SciTech Connect

Ames Laboratory scientist Song Zhang explains his real-time 3-D imaging technology. The technique can be used to create high-resolution, real-time, precise, 3-D images for use in healthcare, security, and entertainment applications.

Zhang, Song

2010-01-01

34

Localization of 3D Anatomical Point Landmarks in 3D Tomographic Images Using Deformable Models  

Microsoft Academic Search

Existing differential approaches to the localization of 3D anatomical point landmarks in 3D tomographic images are relatively sensitive to noise as well as to small intensity variations, both of which result in false detections as well as affect the localization accuracy. In this paper, we introduce a new approach to 3D landmark localization based on deformable models, which takes into

Sönke Frantz; Karl Rohr; H. Siegfried Stiehl

2000-01-01

35

Live-cell 3D super-resolution imaging in thick biological samples  

E-print Network

We demonstrate three-dimensional (3D) super-resolution live-cell imaging through thick biological specimens and report 3D super-resolution imaging of cellular spheroids (Francesca Cella Zanacchi et al.). [Extract; full abstract not recoverable.]

Cai, Long

36

Digital tracking and control of retinal images  

NASA Astrophysics Data System (ADS)

Laser induced retinal lesions are used to treat a variety of eye diseases such as diabetic retinopathy and retinal detachment. An instrumentation system has been developed to track a specific lesion coordinate on the retinal surface and provide corrective signals to maintain laser position on the coordinate. High resolution retinal images are acquired via a CCD camera coupled to a fundus camera and video frame grabber. Optical filtering and histogram modification are used to enhance the retinal vessel network against the lighter retinal background. Six distinct retinal landmarks are tracked on the high contrast image obtained from the frame grabber using two-dimensional blood vessel templates. The frame grabber is hosted on a 486 PC. The PC performs correction signal calculations using an exhaustive search on selected image portions. An X and Y laser correction signal is derived from the landmark tracking information and provided to a pair of galvanometer-steered mirrors via a data acquisition and control subsystem. This subsystem also responds to patient inputs and to the system monitoring lesion growth. This paper begins with an overview of the robotic laser system design, followed by implementation and testing of a development system for proof of concept. The paper concludes with specifications for a real time system.
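
A hedged sketch of landmark tracking by template matching over a local search window, producing proportional X/Y correction signals for the steering mirrors. OpenCV's matchTemplate stands in for the exhaustive search on selected image portions; the gain, window size, and names are illustrative assumptions.

```python
# Hedged sketch: track one retinal landmark by template matching and emit
# proportional X/Y correction signals. Gain and window size are assumptions.
import cv2

def track_landmark(frame, template, prev_xy, search=40, gain=0.8):
    """frame, template: same-dtype grayscale images; prev_xy: last (x, y)."""
    x0 = max(int(prev_xy[0]) - search, 0)
    y0 = max(int(prev_xy[1]) - search, 0)
    roi = frame[y0:y0 + 2 * search, x0:x0 + 2 * search]
    scores = cv2.matchTemplate(roi, template, cv2.TM_CCOEFF_NORMED)
    _, _, _, best = cv2.minMaxLoc(scores)              # best match location
    new_xy = (x0 + best[0] + template.shape[1] // 2,
              y0 + best[1] + template.shape[0] // 2)
    correction = (gain * (new_xy[0] - prev_xy[0]),     # X galvo signal
                  gain * (new_xy[1] - prev_xy[1]))     # Y galvo signal
    return new_xy, correction
```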

Barrett, Steven F.; Jerath, Maya R.; Rylander, Henry G., III; Welch, Ashley J.

1993-06-01

37

Retinal imaging using adaptive optics technology  

PubMed Central

Adaptive optics (AO) is a technology used to improve the performance of optical systems by reducing the effect of wave front distortions. Retinal imaging using AO aims to compensate for higher order aberrations originating from the cornea and the lens by using a deformable mirror. The main application of AO retinal imaging has been to assess photoreceptor cell density, spacing, and mosaic regularity in normal and diseased eyes. Apart from photoreceptors, the retinal pigment epithelium, retinal nerve fiber layer, retinal vessel wall and lamina cribrosa can also be visualized with AO technology. Recent interest in AO technology in eye research has resulted in a growing number of reports and publications utilizing this technology in both animals and humans. With the first commercially available instruments, AO technology is being transformed from a research tool into a diagnostic instrument. The current challenges include imaging eyes with less than perfect optical media, formation of normative databases for acquired images such as cone mosaics, and the cost of the technology. The opportunities for AO include more detailed diagnosis, with descriptions of new findings in retinal diseases and glaucoma, as well as the expansion of AO into clinical trials, which has already started. PMID:24843304

Kozak, Igor

2014-01-01

38

3-D Volume Imaging for Dentistry: A New Dimension  

Microsoft Academic Search

The use of computed tomography for dental imaging procedures has increased recently. Use of CT for even seemingly routine diagnosis and treatment procedures suggests that the desire for 3-D imaging is more than a current trend but rather a shift toward a future of dimensional volume imaging. Recognizing this shift, several imaging manufacturers recently have developed 3-D imaging devices

Robert A. Danforth; Ivan Dus; James Mah

2003-01-01

39

Automatic Detection, Segmentation and Classification of Retinal Horizontal Neurons in Large-scale 3D Confocal Imagery  

SciTech Connect

Automatic analysis of neuronal structure from wide-field-of-view 3D image stacks of retinal neurons is essential for statistically characterizing neuronal abnormalities that may be causally related to neural malfunctions or may be early indicators for a variety of neuropathies. In this paper, we study classification of neuron fields in large-scale 3D confocal image stacks, a challenging neurobiological problem because of the low spatial resolution imagery and presence of intertwined dendrites from different neurons. We present a fully automated, four-step processing approach for neuron classification with respect to the morphological structure of their dendrites. In our approach, we first localize each individual soma in the image by using morphological operators and active contours. By using each soma position as a seed point, we automatically determine an appropriate threshold to segment dendrites of each neuron. We then use skeletonization and network analysis to generate the morphological structures of segmented dendrites, and shape-based features are extracted from network representations of each neuron to characterize the neuron. Based on qualitative results and quantitative comparisons, we show that we are able to automatically compute relevant features that clearly distinguish between normal and abnormal cases for postnatal day 6 (P6) horizontal neurons.

Karakaya, Mahmut [ORNL]; Kerekes, Ryan A [ORNL]; Gleason, Shaun Scott [ORNL]; Martins, Rodrigo [St. Jude Children's Research Hospital]; Dyer, Michael [St. Jude Children's Research Hospital]

2011-01-01

40

Make3D: learning 3D scene structure from a single still image.  

PubMed

We consider the problem of estimating detailed 3D structure from a single still image of an unstructured environment. Our goal is to create 3D models that are both quantitatively accurate as well as visually pleasing. For each small homogeneous patch in the image, we use a Markov Random Field (MRF) to infer a set of "plane parameters" that capture both the 3D location and 3D orientation of the patch. The MRF, trained via supervised learning, models both image depth cues as well as the relationships between different parts of the image. Other than assuming that the environment is made up of a number of small planes, our model makes no explicit assumptions about the structure of the scene; this enables the algorithm to capture much more detailed 3D structure than does prior art and also give a much richer experience in the 3D flythroughs created using image-based rendering, even for scenes with significant nonvertical structure. Using this approach, we have created qualitatively correct 3D models for 64.9 percent of 588 images downloaded from the Internet. We have also extended our model to produce large-scale 3D models from a few images. PMID:19299858

Saxena, Ashutosh; Sun, Min; Ng, Andrew Y

2009-05-01

41

Automated multilayer segmentation and characterization in 3D spectral-domain optical coherence tomography images  

NASA Astrophysics Data System (ADS)

Spectral-domain optical coherence tomography (SD-OCT) is a 3-D imaging technique, allowing direct visualization of retinal morphology and architecture. The various layers of the retina may be affected differentially by various diseases. In this study, an automated graph-based multilayer approach was developed to sequentially segment eleven retinal surfaces including the inner retinal bands to the outer retinal bands in normal SD-OCT volume scans at three different stages. For stage 1, the four most detectable and/or distinct surfaces were identified in the four-times-downsampled images and were used as a priori positional information to limit the graph search for other surfaces at stage 2. Eleven surfaces were then detected in the two-times-downsampled images at stage 2, and refined in the original image space at stage 3 using the graph search integrating the estimated morphological shape models. Twenty macular SD-OCT (Heidelberg Spectralis) volume scans from 20 normal subjects (one eye per subject) were used in this study. The overall mean and absolute mean differences in border positions between the automated and manual segmentation for all 11 segmented surfaces were -0.20 ± 0.53 voxels (-0.76 ± 2.06 µm) and 0.82 ± 0.64 voxels (3.19 ± 2.46 µm). Intensity and thickness properties in the resultant retinal layers were investigated. This investigation in normal subjects may provide a comparative reference for subsequent investigations in eyes with disease.

Hu, Zhihong; Wu, Xiaodong; Hariri, Amirhossein; Sadda, SriniVas R.

2013-03-01

42

3D ULTRASONIC STRAIN IMAGING USING FREEHAND SCANNING AND A MECHANICALLY-SWEPT PROBE  

E-print Network

This paper compares two approaches to 3D ultrasonic strain imaging: freehand scanning and a mechanically-swept probe. R. J. Housden, Department of Engineering, Trumpington Street, Cambridge CB2 1PZ. [Extract; full abstract not recoverable.]

Drummond, Tom

43

Toward a compact underwater structured light 3-D imaging system  

E-print Network

A compact underwater 3-D imaging system based on the principles of structured light was created for classroom demonstration and laboratory research purposes. The 3-D scanner design was based on research by the Hackengineer ...

Dawson, Geoffrey E

2013-01-01

44

Retinal image analysis: Concepts, applications and potential  

Microsoft Academic Search

As digital imaging and computing power increasingly develop, so too does the potential to use these technologies in ophthalmology. Image processing, analysis and computer vision techniques are increasing in prominence in all fields of medical science, and are especially pertinent to modern ophthalmology, as it is heavily dependent on visually oriented signs. The retinal microvasculature is unique in that it

Niall Patton; Tariq M. Aslam; Thomas MacGillivray; Ian J. Deary; Baljean Dhillon; Robert H. Eikelboom; Kanagasingam Yogesan; Ian J. Constable

2006-01-01

45

Gabor wavelet based vessel segmentation in retinal images  

Microsoft Academic Search

Retinal image vessel segmentation and their branching pattern are used for automated screening and diagnosis of diabetic retinopathy. Vascular pattern is normally not visible in retinal images. We present a method that uses 2-D Gabor wavelet and sharpening filter to enhance and sharpen the vascular pattern respectively. Our technique extracts the vessels from sharpened retinal image using edge detection algorithm

M. Usman Akram; Anam Tariq; Sarwat Nasir; Shoab A. Khan

2009-01-01

46

A comparison study to evaluate retinal image enhancement techniques  

Microsoft Academic Search

Retinal vessels can show different states of several diseases, making the detection of vessels in retinal images very crucial. Retinal images can be used for other applications such as ocular fundus operations and human recognition. Due to the acquisition process, these images often have low grey level contrast and dynamic range that can seriously affect diagnosis procedure results. In this

Mohammad Saleh Miri; Ali Mahloojifar

2009-01-01

47

A Method for Anisotropy Analysis of 3D Images  

Microsoft Academic Search

In this paper we present an extension of anisotropy analysis methods for 3D image volumes. Two approaches based on orientation-sensitive filtering and a 3D version of spatial gray-level difference histograms are compared. The performance of the method is demonstrated on synthetic image volumes and original 3D CT and MRI medical images. The orientation structure of left and right hemispheres of

Vassili A. Kovalev; Yaroslav S. Bondar

1997-01-01

48

3D image quality of 200-inch glasses-free 3D display system  

NASA Astrophysics Data System (ADS)

We have proposed a glasses-free three-dimensional (3D) display for displaying 3D images on a large screen using multi-projectors and an optical screen consisting of a special diffuser film with a large condenser lens. To achieve high presence communication with natural large-screen 3D images, we numerically analyze the factors responsible for degrading image quality to increase the image size. A major factor that determines the 3D image quality is the arrangement of component units, such as the projector array and condenser lens, as well as the diffuser film characteristics. We design and fabricate a prototype 200-inch glasses-free 3D display system on the basis of the numerical results. We select a suitable diffuser film, and we combine it with an optimally designed condenser lens. We use 57 high-definition projector units to obtain viewing angles of 13.5°. The prototype system can display glasses-free 3D images of a life-size car using natural parallax images.

Kawakita, M.; Iwasawa, S.; Sakai, M.; Haino, Y.; Sato, M.; Inoue, N.

2012-03-01

49

Constructing Complex 3D Biological Environments from Medical Imaging Using  

E-print Network

Realistic 3D virtual organs are created from histology images of human tissue; information extracted from the tissue is related back to the histology images, linking the 2D cross sections for 3D reconstruction. Keywords: 3D reconstruction, biological tissue, histology. [Extract; full abstract not recoverable.]

Romano, Daniela

50

Surgical Tools Localization in 3D Ultrasound Images  

E-print Network

Doctoral thesis on the localization of surgical tools, such as needles or electrodes, in 3D ultrasound images. The thesis reviews the basics of medical ultrasound (US) imaging and state-of-the-art localization methods; precise and reliable localization is important. [Extract; full abstract not recoverable.]

Paris-Sud XI, Université de

51

Automatic 2D-to-3D image conversion using 3D examples from the internet  

NASA Astrophysics Data System (ADS)

The availability of 3D hardware has so far outpaced the production of 3D content. Although to date many methods have been proposed to convert 2D images to 3D stereopairs, the most successful ones involve human operators and, therefore, are time-consuming and costly, while the fully-automatic ones have not yet achieved the same level of quality. This subpar performance is due to the fact that automatic methods usually rely on assumptions about the captured 3D scene that are often violated in practice. In this paper, we explore a radically different approach inspired by our work on saliency detection in images. Instead of relying on a deterministic scene model for the input 2D image, we propose to "learn" the model from a large dictionary of stereopairs, such as YouTube 3D. Our new approach is built upon a key observation and an assumption. The key observation is that among millions of stereopairs available on-line, there likely exist many stereopairs whose 3D content matches that of the 2D input (query). We assume that two stereopairs whose left images are photometrically similar are likely to have similar disparity fields. Our approach first finds a number of on-line stereopairs whose left image is a close photometric match to the 2D query and then extracts depth information from these stereopairs. Since disparities for the selected stereopairs differ due to differences in underlying image content, level of noise, distortions, etc., we combine them by using the median. We apply the resulting median disparity field to the 2D query to obtain the corresponding right image, while handling occlusions and newly-exposed areas in the usual way. We have applied our method in two scenarios. First, we used YouTube 3D videos in search of the most similar frames. Then, we repeated the experiments on a small, but carefully-selected, dictionary of stereopairs closely matching the query. This, to a degree, emulates the results one would expect from the use of an extremely large 3D repository. While far from perfect, the presented results demonstrate that on-line repositories of 3D content can be used for effective 2D-to-3D image conversion. With the continuously increasing amount of 3D data on-line and with the rapidly growing computing power in the cloud, the proposed framework seems a promising alternative to operator-assisted 2D-to-3D conversion.
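
A hedged sketch of the core "learn from examples" step described above: select the dictionary stereopairs whose left images are photometrically closest to the 2-D query, fuse their disparity fields with the median, and shift the query pixels to synthesize a right view. The similarity measure, integer-disparity warping, and occlusion handling are simplified assumptions, not the authors' implementation.

```python
# Hedged sketch: median-fuse disparity fields of the k photometrically closest
# dictionary left images, then shift the query pixels to synthesize a right view.
import numpy as np

def estimate_disparity(query, left_images, disparities, k=10):
    """left_images: (M, H, W) dictionary left views; disparities: (M, H, W)."""
    distances = np.mean((left_images - query[None]) ** 2, axis=(1, 2))
    nearest = np.argsort(distances)[:k]                # k closest left images
    return np.median(disparities[nearest], axis=0)     # robust fused disparity

def synthesize_right_view(query, disparity):
    """Shift each pixel horizontally by its (integer) disparity; occlusions
    and newly exposed areas are ignored in this simplified version."""
    h, w = query.shape
    right = np.zeros_like(query)
    cols = np.arange(w)
    for y in range(h):
        target = np.clip(cols - disparity[y].astype(int), 0, w - 1)
        right[y, target] = query[y, cols]
    return right
```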

Konrad, J.; Brown, G.; Wang, M.; Ishwar, P.; Wu, C.; Mukherjee, D.

2012-03-01

52

A 3D image processing method for manufacturing process automation  

Microsoft Academic Search

Three-dimensional (3D) image processing provides a useful tool for machine vision applications. Typically a 3D vision system is divided into data acquisition, low-level processing, object representation and matching. In this paper, a 3D object pose estimation method is developed for an automated manufacturing assembly process. The experimental results show that the 3D pose estimation method produces accurate geometrical information for

Dongming Zhao; Songtao Li

2005-01-01

53

Comparison of 3D Deformable Models For in vivo Measurements of Mouse Embryo from 3D Ultrasound Images  

E-print Network

Two approaches to the 3D segmentation of the mouse embryo from 3D ultrasound (US) images, acquired using an experimental ultrasonic system, are evaluated using 3D deformable models for in vivo measurement of embryo shape. [Extract; full abstract not recoverable.]

Paris-Sud XI, Université de

54

Segmentation of Retinal Arteries in Adaptive Optics Images  

E-print Network

A method for automatically segmenting the walls of retinal arteries in adaptive optics images, by Nicolas Lermé, Florence Rossant, et al. The work addresses diseases affecting retinal blood vessels of small diameter (150 µm), such as arterial hypertension. [Extract; full abstract not recoverable.]

Paris-Sud XI, Université de

55

GAMMA-RAY IMAGING AND POLARIZATION MEASUREMENT USING 3-D POSITION-SENSITIVE CdZnTe DETECTORS  

E-print Network

Doctoral thesis on gamma-ray imaging and polarization measurement using 3-D position-sensitive CdZnTe detectors, laying a solid foundation for gamma-ray Compton imaging using a single 3-D CdZnTe detector. [Extract with table-of-contents residue removed; full text not recoverable.]

He, Zhong

56

Model-Based Interpretation of 3D Medical Images  

Microsoft Academic Search

The automatic segmentation and labelling of anatomical structures in 3D medical images is a challenging task of practical importance. We describe a model-based approach which allows robust and accurate interpretation using explicit anatomical knowledge. Our method is based on the extension to 3D of Point Distribution Models (PDMs) and associated image search algorithms. A combination of global, Genetic Algorithm

A. Hill; A. Thornham; C. J. Taylor

1993-01-01

57

An ultrafast phase modulator for 3D imaging  

Microsoft Academic Search

In this paper we explore potential applications of a new transparent Electro-Optic Ceramic in 3D imaging as a fast phase shifter and demonstrate its performance in a newly developed Low Coherence Polarization Interference Microscopy (LCPIM). The new phase modulator is fast, convenient and inexpensive. It makes the 3D imaging system that employs it mechanically efficient and compact. The LCPIM proposed

Janice Y. Cheng; Qiushui Chen

2006-01-01

58

Geometric corner extraction in retinal fundus images.  

PubMed

This paper presents a novel approach of finding corner features between retinal fundus images. Such images are relatively textureless and comprise uneven shades, which render state-of-the-art approaches, e.g., SIFT, ineffective. Many of the detected features have low repeatability (< 10%), especially when the viewing angle difference in the corresponding images is large. Our approach is based on the finding of blood vessels using a robust line fitting algorithm, and locating corner features based on the bends and intersections between the blood vessels. These corner features have proven to be superior to the state-of-the-art feature extraction methods (i.e. SIFT, SURF, Harris, Good Features To Track (GFTT) and FAST) with regard to repeatability and stability in our experiment. Overall, on average, the approach has close to 10% more repeatable detected features than the second best in two corresponding retinal images in the experiment. PMID:25569921

Lee, Jimmy Addison; Beng Hai Lee; Guozhen Xu; Ee Ping Ong; Wong, Damon Wing Kee; Jiang Liu; Tock Han Lim

2014-08-01

59

Retinal atlas statistics from color fundus images  

NASA Astrophysics Data System (ADS)

An atlas provides a reference anatomic structure and an associated coordinate system. An atlas may be used in a variety of applications, including segmentation and registration, and can be used to characterize anatomy across a population. We present a method for generating an atlas of the human retina from 500 color fundus image pairs. Using color fundus image pairs, we register image pairs to obtain a larger anatomic field of view. Key retinal anatomic features are selected for atlas landmarks: disk center, fovea, and main vessel arch. An atlas coordinate system is defined based on the statistics of the landmarks. Images from the population are warped into the atlas space to produce a statistical retinal atlas which can be used for automatic diagnosis, concise indexing, semantic blending, etc.
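
A hedged sketch of mapping a fundus image into an atlas coordinate system defined by landmark statistics: a similarity transform is estimated from the subject's disc center and fovea to their atlas positions, and the image is warped into atlas space. The atlas landmark coordinates and the use of only two landmarks (the vessel arch is omitted) are assumptions for illustration.

```python
# Hedged sketch: warp a fundus image into atlas space using a similarity
# transform estimated from two landmarks (disc center, fovea). The atlas
# coordinates below are assumptions, not the published atlas statistics.
import numpy as np
from skimage import transform

def warp_to_atlas(image, disc_xy, fovea_xy,
                  atlas_disc_xy=(350.0, 256.0), atlas_fovea_xy=(256.0, 256.0)):
    src = np.array([disc_xy, fovea_xy], dtype=float)
    dst = np.array([atlas_disc_xy, atlas_fovea_xy], dtype=float)
    tform = transform.SimilarityTransform()
    tform.estimate(src, dst)                  # rotation + isotropic scale + shift
    # warp() expects the inverse map: atlas pixel -> subject pixel.
    return transform.warp(image, inverse_map=tform.inverse,
                          output_shape=image.shape)
```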

Lee, Sangyeol; Abramoff, Michael D.; Reinhardt, Joseph M.

2010-03-01

60

Octahedral transforms for 3-D image processing.  

PubMed

The octahedral group is one of the finite subgroups of the rotation group in 3-D Euclidean space and a symmetry group of the cubic grid. Compression and filtering of 3-D volumes are given as application examples of its representation theory. We give an overview over the finite subgroups of the 3-D rotation group and their classification. We summarize properties of the octahedral group and basic results from its representation theory. Wide-sense stationary processes are processes with group theoretical symmetries whose principal components are closely related to the representation theory of their symmetry group. Linear filter systems are defined as projection operators and symmetry-based filter systems are generalizations of the Fourier transforms. The algorithms are implemented in Maple/Matlab functions and worksheets. In the experimental part, we use two publicly available MRI volumes. It is shown that the assumption of wide-sense stationarity is realistic and the true principal components of the correlation matrix are very well approximated by the group theoretically predicted structure. We illustrate the nature of the different types of filter systems, their invariance and transformation properties. Finally, we show how thresholding in the transform domain can be used in 3-D signal processing. PMID:19674954

Lenz, Reiner; Latorre Carmona, Pedro

2009-12-01

61

Highway 3D model from image and lidar data  

NASA Astrophysics Data System (ADS)

We present a new method of highway 3-D model construction based on feature extraction in highway images and LIDAR data. We describe the processing of road coordinate data that connects the image frames to the coordinates of the elevation data. Image processing methods are used to extract sky, road, and ground regions as well as significant objects (such as signs and building fronts) in the roadside for the 3D model. LIDAR data are interpolated and processed to extract the road lanes as well as other features such as trees, ditches, and elevated objects to form the 3D model. 3D geometry reasoning is used to match the image features to the 3D model. Results from successive frames are integrated to improve the final model.

Chen, Jinfeng; Chu, Henry; Sun, Xiaoduan

2014-05-01

62

Image performance evaluation of a 3D surgical imaging platform  

NASA Astrophysics Data System (ADS)

The O-arm (Medtronic Inc.) is a multi-dimensional surgical imaging platform. The purpose of this study was to perform a quantitative evaluation of the imaging performance of the O-arm in an effort to understand its potential for future nonorthopedic applications. Performance of the reconstructed 3D images was evaluated, using a custom-built phantom, in terms of resolution, linearity, uniformity and geometrical accuracy. Both the standard (SD, 13 s) and high definition (HD, 26 s) modes were evaluated, with the imaging parameters set to image the head (120 kVp, 100 mAs and 150 mAs, respectively). For quantitative noise characterization, the images were converted to Hounsfield units (HU) off-line. Measurement of the modulation transfer function revealed a limiting resolution (at 10% level) of 1.0 mm^-1 in the axial dimension. Image noise varied between 15 and 19 HU for the HD and SD modes, respectively. Image intensities varied linearly over the measured range, up to 1300 HU. Geometric accuracy was maintained in all three dimensions over the field of view. The present study has evaluated the performance characteristics of the O-arm, and demonstrates feasibility for use in interventional applications and quantitative imaging tasks outside those currently targeted by the manufacturer. Further improvements to the reconstruction algorithms may further enhance performance for lower-contrast applications.

Petrov, Ivailo E.; Nikolov, Hristo N.; Holdsworth, David W.; Drangova, Maria

2011-03-01

63

3D laser imaging for concealed object identification  

NASA Astrophysics Data System (ADS)

This paper deals with new optical non-conventional 3D laser imaging. Optical non-conventional imaging exploits the advantages of laser imaging to form a three-dimensional image of the scene. 3D laser imaging can be used for three-dimensional medical imaging, topography, surveillance and robotic vision because of its ability to detect and recognize objects. In this paper, we present 3D laser imaging for concealed object identification. The objective of this new 3D laser imaging is to provide the user with a complete 3D reconstruction of the concealed object from available 2D data that are limited in number and have low representativeness. The 2D laser data used in this paper come from simulations that are based on the calculation of the laser interactions with the different interfaces of the scene of interest, and from experimental results. We show that the global 3D reconstruction procedures are capable of separating objects from foliage and reconstructing a three-dimensional image of the considered object. We present examples of reconstruction and completion of three-dimensional images, and we analyse the different parameters of the identification process, such as resolution, the camouflage scenario, noise impact and lacunarity degree.

Berechet, Ion; Berginc, Gérard; Berechet, Stefan

2014-09-01

64

Rapid Particle Size Measurement Using 3D Surface Imaging  

Microsoft Academic Search

The present study introduces a new three-dimensional (3D) surface image analysis technique in which white light illumination from different incident angles is used to create 3D surfaces with a photometric approach. The three-dimensional features of the surface images created are then used in the characterization of particle size distributions of granules. This surface image analysis method is compared to sieve

Ira Soppela; Sari Airaksinen; Juha Hatara; Heikki Räikkönen; Osmo Antikainen; Jouko Yliruusi; Niklas Sandler

2011-01-01

65

Automated segmentation of outer retinal layers in macular OCT images of patients with retinitis pigmentosa  

PubMed Central

To provide a tool for quantifying the effects of retinitis pigmentosa (RP) seen on spectral domain optical coherence tomography images, an automated layer segmentation algorithm was developed. This algorithm, based on dual-gradient information and a shortest path search strategy, delineates the inner limiting membrane and three outer retinal boundaries in optical coherence tomography images from RP patients. In addition, an automated inner segment (IS)/outer segment (OS) contour detection method based on the segmentation results is proposed to quantify the locus of points at which the OS thickness goes to zero in a 3D volume scan. The segmentation algorithm and the IS/OS contour were validated with manual segmentation data. The segmentation and IS/OS contour results on repeated measures showed good within-day repeatability, while the results on data acquired on average 22.5 months afterward demonstrated a possible means to follow disease progression. In particular, the automatically generated IS/OS contour provided a possible objective structural marker for RP progression. PMID:21991543
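
The shortest-path idea described in this abstract can be sketched in a few lines of Python. The following is a minimal illustration (not the authors' implementation) that traces one retinal boundary as a minimum-cost left-to-right path through a gradient-derived cost image; the cost definition and the one-row-per-column smoothness constraint are simplifying assumptions.

    import heapq
    import numpy as np

    def boundary_shortest_path(bscan):
        grad = np.gradient(bscan.astype(float), axis=0)        # vertical (dark-to-bright) gradient
        cost = (grad.max() - grad) / (np.ptp(grad) + 1e-9)     # low cost on strong edges
        rows, cols = cost.shape
        best = np.full((rows, cols), np.inf)
        prev = np.zeros((rows, cols), dtype=int)
        best[:, 0] = cost[:, 0]
        heap = [(cost[r, 0], r, 0) for r in range(rows)]
        heapq.heapify(heap)
        while heap:
            d, r, c = heapq.heappop(heap)
            if d > best[r, c] or c == cols - 1:
                continue
            for dr in (-1, 0, 1):                              # smoothness: move at most one row per column
                nr = r + dr
                if 0 <= nr < rows:
                    nd = d + cost[nr, c + 1]
                    if nd < best[nr, c + 1]:
                        best[nr, c + 1] = nd
                        prev[nr, c + 1] = r
                        heapq.heappush(heap, (nd, nr, c + 1))
        path = [int(np.argmin(best[:, -1]))]                   # backtrack from the cheapest end node
        for c in range(cols - 1, 0, -1):
            path.append(prev[path[-1], c])
        return np.array(path[::-1])                            # boundary row for every column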

Yang, Qi; Reisman, Charles A.; Chan, Kinpui; Ramachandran, Rithambara; Raza, Ali; Hood, Donald C.

2011-01-01

66

Recovering 3D tumor locations from 2D bioluminescence images  

E-print Network

Excerpt: the approach combines bioluminescence imaging (BLI) with detailed anatomical structure extracted from high-resolution microCT on a single platform; microCT can be used in the same session to provide the anatomical context. 3D reconstruction is needed because 2D BLI images alone do not give the tumor locations in depth.

Huang, Xiaolei

67

Hyperspectral image compression with modified 3D SPECK  

Microsoft Academic Search

Hyperspectral images consist of a set of contiguous image bands collected by a hyperspectral sensor. The large amount of data in hyperspectral images emphasizes the importance of efficient compression for storage and transmission. This paper proposes a simplified version of the three-dimensional Set Partitioning Embedded bloCK (3D SPECK) algorithm for lossy compression of hyperspectral images. A three-dimensional discrete

Ruzelita Ngadiran; Said Boussakta; Ahmed Bouridane; Bayan Syarif

2010-01-01

68

MR image denoising method for brain surface 3D modeling  

NASA Astrophysics Data System (ADS)

Three-dimensional (3D) modeling of medical images is a critical part of surgical simulation. In this paper, we focus on denoising magnetic resonance (MR) images for brain model reconstruction and develop a practical solution. We attempt to remove the noise existing in the MR imaging signal while preserving the image characteristics. A wavelet-based adaptive curve shrinkage function is presented in the spherical coordinate system. Comparative experiments show that the denoising method better preserves image details and enhances the contour coefficients. Using these denoised images, a 3D visualization of the brain is produced through a surface triangle mesh model, which demonstrates the effectiveness of the proposed method.

Zhao, De-xin; Liu, Peng-jie; Zhang, De-gan

2014-11-01

69

Image plane interaction techniques in 3D immersive environments  

Microsoft Academic Search

This paper presents a set of interaction techniques for use in head- tracked immersive virtual environments. With these techniques, the user interacts with the 2D projections that 3D objects in the scene make on his image plane. The desktop analog is the use of a mouse to interact with objects in a 3D scene based on their projections on the

Jeffrey S. Pierce; Andrew S. Forsberg; Matthew J. Conway; Seung Hong; Robert C. Zeleznik; Mark R. Mine

1997-01-01

70

Adaptive Metamorphs Model for 3D Medical Image Segmentation  

E-print Network

Excerpt: a solid model deforms toward the object boundary. Our 3D segmentation method stems from Metamorphs; Metamorphs [1] is proposed as a new class of deformable models that integrate boundary information

Huang, Junzhou

71

Three-dimensional retinal imaging with ultrahigh resolution Fourier\\/spectral domain optical coherence tomography  

Microsoft Academic Search

Ultrahigh resolution OCT using broadband light sources achieves improved axial image resolutions of ~2-3 um compared to standard 10 um resolution OCT used in current commercial instruments. High-speed OCT using Fourier\\/spectral domain detection enables dramatic increases in imaging speeds. 3D OCT retinal imaging is performed in human subjects using high-speed, ultrahigh resolution OCT, and the concept of an OCT fundus

Vivek J. Srinivasan; Maciej Wojtkowski; Tony Ko; Mariana Carvalho; James Fujimoto; Jay Duker; Joel Schumann; Andrzej Kowalczyk

2005-01-01

72

Image based 3D city modeling : Comparative study  

NASA Astrophysics Data System (ADS)

A 3D city model is a digital representation of the Earth's surface and its related objects, such as buildings, trees, vegetation and other man-made features belonging to an urban area. The demand for 3D city modelling is increasing rapidly for various engineering and non-engineering applications. Four main image-based approaches are generally used to generate virtual 3D city models: sketch-based modelling, procedural grammar-based modelling, close-range photogrammetry-based modelling, and modelling based on computer vision techniques. SketchUp, CityEngine, Photomodeler and Agisoft Photoscan are the main software packages representing these approaches, respectively, and they offer different methods suitable for image-based 3D city modelling. A literature study shows that, to date, no complete comparative study of creating a full 3D city model from images is available. This paper therefore gives a comparative assessment of these four image-based 3D modelling approaches, based mainly on data acquisition methods, data processing techniques and the output 3D model products. The study area is the campus of the civil engineering department, Indian Institute of Technology, Roorkee (India), which acts as a prototype for a city. The study also discusses the governing parameters and factors, practical experience, and the strengths and weaknesses of the four image-based techniques, including comments on what can and cannot be done with each software package. It concludes that every package has advantages and limitations, and that the choice of software depends on the user's requirements for the 3D project: for a normal visualization project, SketchUp is a good option; for 3D documentation records, Photomodeler gives good results; for large city reconstruction, CityEngine is a good product; and Agisoft Photoscan creates a much better 3D model with good texture quality and automatic processing. This image-based comparative study is therefore useful to the 3D city user community and provides a roadmap for the geomatics community to create photo-realistic virtual 3D city models using image-based techniques.

Singh, S. P.; Jain, K.; Mandla, V. R.

2014-06-01

73

Computational holographic 3D imaging for object recognition and classification  

NASA Astrophysics Data System (ADS)

Object recognition and identification is an essential part of homeland security. There has been extensive research on object recognition using two-dimensional (2D) or three-dimensional (3D) imaging. In this paper, we address 3D object classification with computational holographic imaging. A 3D object can be reconstructed at different planes using a single hologram. We apply Principal Component Analysis (PCA) and Fisher Linear Discriminant (FLD) analysis based on Gabor-wavelet feature vectors to classify 3D objects measured by digital interferometry. The presented technique substantially reduces the dimensionality of the 3D classification problem. Experimental and simulation results are presented for regional filtering concentrated at specific positions, and for overall grid filtering.
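
A minimal Python sketch of the classification stage described above, assuming the Gabor-wavelet feature vectors have already been computed (the arrays below are dummy data, not the paper's measurements); it chains PCA with Fisher Linear Discriminant analysis using scikit-learn.

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.pipeline import make_pipeline

    X = np.random.rand(60, 2048)          # 60 holographic reconstructions x 2048 Gabor responses (dummy)
    y = np.repeat([0, 1, 2], 20)          # three object classes (dummy labels)

    clf = make_pipeline(PCA(n_components=20), LinearDiscriminantAnalysis())
    clf.fit(X[::2], y[::2])               # train on every other sample
    print("held-out accuracy:", clf.score(X[1::2], y[1::2]))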

Yeom, Sekwon; Javidi, Bahram

2004-09-01

74

Acoustic Imaging in 3D Frank Natterer  

E-print Network

Excerpt: addresses a central problem of seismic imaging, namely velocity estimation, in the context of a suggested ultrasound mammography system; the method belongs to the family of adjoint methods.

Münster, Westfälische Wilhelms-Universität

75

Imaging hypoxia using 3D photoacoustic spectroscopy  

NASA Astrophysics Data System (ADS)

Purpose: The objective is to develop a multivariate in vivo hemodynamic model of tissue oxygenation (MiHMO2) based on 3D photoacoustic spectroscopy. Introduction: Low oxygen levels, or hypoxia, deprive cancer cells of oxygen and confer resistance to irradiation, some chemotherapeutic drugs, and oxygen-dependent therapies (phototherapy), leading to treatment failure and poor disease-free and overall survival. For example, clinical studies show that patients with breast carcinomas, cervical cancer, and head and neck carcinomas (HNC) are more likely to suffer local recurrence and metastasis if their tumors are hypoxic. A novel method to noninvasively measure tumor hypoxia, identify its type, and monitor its heterogeneity is devised by measuring tumor hemodynamics, MiHMO2. Material and Methods: Simulations are performed to compare tumor pO2 levels and hypoxia based on physiology - perfusion, fractional plasma volume, fractional cellular volume - and hemoglobin status - oxygen saturation and hemoglobin concentration - using in vivo measurements of breast, prostate, and ovarian tumors. Simulations of MiHMO2 are performed to assess the influence of scanner resolution and different mathematical models of oxygen delivery. Results: The sensitivity of pO2 and hypoxic fraction to photoacoustic scanner resolution and the dependencies on model complexity will be presented using hemodynamic parameters for different tumors. Conclusions: Photoacoustic CT spectroscopy provides a unique ability to monitor hemodynamic and cellular physiology in tissue, which can be used to longitudinally monitor tumor oxygenation and its response to anti-angiogenic therapies.

Stantz, Keith M.

2010-02-01

76

Optical 3D watermark based digital image watermarking for telemedicine  

NASA Astrophysics Data System (ADS)

The region of interest (ROI) of a medical image is an area containing important diagnostic information and must be stored without any distortion. The proposed algorithm applies a watermarking technique to the non-ROI of the medical image while preserving the ROI. The paper presents a 3D-watermark-based medical image watermarking scheme. A 3D watermark object is first decomposed into a 2D elemental image array (EIA) by a lenslet array, and the 2D elemental image array data are then embedded into the host image. The watermark extraction process is the inverse of embedding. From the extracted EIA, the 3D watermark can be reconstructed through the computational integral imaging reconstruction (CIIR) technique. Because the EIA is composed of a number of elemental images, each possessing its own perspective of the 3D watermark object, the 3D virtual watermark can be successfully reconstructed even when the embedded watermark data are badly damaged. Furthermore, using CAT with various rule number parameters, it is possible to obtain many channels for embedding, so our method overcomes the weakness of traditional watermarking methods that have only one transform plane. The effectiveness of the proposed watermarking scheme is demonstrated with the aid of experimental results.

Li, Xiao Wei; Kim, Seok Tae

2013-12-01

77

Dedicated 3D photoacoustic breast imaging  

PubMed Central

Purpose: To report the design and imaging methodology of a photoacoustic scanner dedicated to imaging hemoglobin distribution throughout a human breast. Methods: The authors developed a dedicated breast photoacoustic mammography (PAM) system using a spherical detector aperture based on our previous photoacoustic tomography scanner. The system uses 512 detectors with rectilinear scanning. The scan shape is a spiral pattern whose radius varies from 24 to 96 mm, thereby allowing a field of view that accommodates a wide range of breast sizes. The authors measured the contrast-to-noise ratio (CNR) using a target comprised of 1-mm dots printed on clear plastic. Each dot absorption coefficient was approximately the same as a 1-mm thickness of whole blood at 756 nm, the output wavelength of the Alexandrite laser used by this imaging system. The target was immersed in varying depths of an 8% solution of stock Liposyn II-20%, which mimics the attenuation of breast tissue (1.1 cm^-1). The spatial resolution was measured using a 6 µm-diameter carbon fiber embedded in agar. The breasts of four healthy female volunteers, spanning a range of breast size from a brassiere C cup to a DD cup, were imaged using a 96-mm spiral protocol. Results: The CNR target was clearly visualized to a depth of 53 mm. Spatial resolution, which was estimated from the full width at half-maximum of a profile across the PAM image of a carbon fiber, was 0.42 mm. In the four human volunteers, the vasculature was well visualized throughout the breast tissue, including to the chest wall. Conclusions: The CNR, lateral field-of-view and penetration depth of our dedicated PAM scanning system are sufficient to image breasts as large as 1335 mL, which should accommodate up to 90% of the women in the United States. PMID:24320471
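
The resolution figure quoted above comes from the full width at half-maximum (FWHM) of a profile across the carbon-fibre image. A minimal Python sketch of that measurement, run on a synthetic Gaussian profile rather than real scanner data, is shown below.

    import numpy as np

    def fwhm(profile, pixel_size_mm):
        p = np.asarray(profile, float) - np.min(profile)
        half = p.max() / 2.0
        above = np.where(p >= half)[0]
        # linear interpolation at the two half-maximum crossings
        left = np.interp(half, [p[above[0] - 1], p[above[0]]], [above[0] - 1, above[0]])
        right = np.interp(half, [p[above[-1] + 1], p[above[-1]]], [above[-1] + 1, above[-1]])
        return (right - left) * pixel_size_mm

    x = np.linspace(-3, 3, 121)                                # synthetic profile, 0.05 mm pixels
    print(fwhm(np.exp(-x**2 / (2 * 0.18**2)), pixel_size_mm=0.05))   # about 0.42 mm for sigma = 0.18 mm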

Kruger, Robert A.; Kuzmiak, Cherie M.; Lam, Richard B.; Reinecke, Daniel R.; Del Rio, Stephen P.; Steed, Doreen

2013-01-01

78

Imaging fault zones using 3D seismic image processing techniques  

NASA Astrophysics Data System (ADS)

Significant advances in the structural analysis of deep-water structures, salt tectonics and extensional rift basins come from descriptions of fault system geometries imaged in 3D seismic data. However, even where seismic data are excellent, in most cases the trajectory of thrust faults is highly conjectural and significant uncertainty still exists as to the patterns of deformation that develop between the main fault segments, and even as to the fault architectures themselves. Moreover, structural interpretations that conventionally define faults by breaks and apparent offsets of seismic reflectors are commonly conditioned by a narrow range of theoretical models of fault behavior. For example, almost all interpretations of thrust geometries on seismic data rely on theoretical "end-member" behaviors in which concepts such as strain localization or multilayer mechanics are simply avoided. Yet analogue outcrop studies confirm that such descriptions are commonly unsatisfactory and incomplete. In order to fill these gaps and improve the 3D visualization of deformation in the subsurface, seismic attribute methods are developed here in conjunction with conventional mapping of reflector amplitudes (Marfurt & Chopra, 2007). These signal processing techniques, recently developed and applied especially by the oil industry, use variations in the amplitude and phase of the seismic wavelet. These seismic attributes improve the signal interpretation and are calculated and applied to the entire 3D seismic dataset. In this contribution we show 3D seismic examples of fault structures from gravity-driven deep-water thrust structures and extensional basin systems to indicate how 3D seismic image processing methods can not only improve the geometrical interpretation of the faults but also begin to map both strain and damage through the amplitude/phase properties of the seismic signal. This is done by quantifying and delineating the short-range anomalies in the intensity of reflector amplitudes and collecting these into "disturbance geobodies". These seismic image processing methods represent a first efficient step toward the construction of a robust technique to investigate sub-seismic strain, mapping noisy deformed zones and displacement within subsurface geology (Dutzer et al., 2011; Iacopini et al., 2012). In all these cases, accurate fault interpretation is critical in applied geology for building a robust and reliable reservoir model, and is essential for further study of fault seal behavior and reservoir compartmentalization. It is also fundamental for understanding how deformation localizes within sedimentary basins, including the processes associated with active seismogenic faults and mega-thrust systems in subduction zones. Dutzer, J.F., Basford, H., Purves, S., 2009. Investigating fault sealing potential through fault relative seismic volume analysis. Petroleum Geology Conference Series 2010, 7:509-515; doi:10.1144/0070509. Marfurt, K.J., Chopra, S., 2007. Seismic attributes for prospect identification and reservoir characterization. SEG Geophysical Developments. Iacopini, D., Butler, R.W.H., Purves, S., 2012. Seismic imaging of thrust faults and structural damage: a visualization workflow for deepwater thrust belts. First Break, vol. 30, no. 5, pp. 39-46.

Iacopini, David; Butler, Rob; Purves, Steve

2013-04-01

79

3-D capacitance density imaging system  

DOEpatents

Three-dimensional capacitance density imaging of a gasified bed or the like in a containment vessel is achieved using a plurality of electrodes provided circumferentially about the bed in levels and along the bed in channels. The electrodes are individually and selectively excited electrically at each level to produce a plurality of current flux field patterns generated in the bed at each level. The current flux field patterns are suitably sensed and a density pattern of the bed at each level determined. By combining the determined density patterns at each level, a three-dimensional density image of the bed is achieved. 7 figs.

Fasching, G.E.

1988-03-18

80

Fully Automatic 3D Reconstruction of Histological Images  

E-print Network

In this paper, we propose a computational framework for 3D volume reconstruction from 2D histological slices using registration algorithms in feature space. To improve the quality of reconstructed 3D volume, first, intensity variations in images are corrected by an intensity standardization process which maps image intensity scale to a standard scale where similar intensities correspond to similar tissues. Second, a subvolume approach is proposed for 3D reconstruction by dividing standardized slices into groups. Third, in order to improve the quality of the reconstruction process, an automatic best reference slice selection algorithm is developed based on an iterative assessment of image entropy and mean square error of the registration process. Finally, we demonstrate that the choice of the reference slice has a significant impact on registration quality and subsequent 3D reconstruction.
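
One of the ingredients above, scoring candidate reference slices by image entropy, can be sketched as follows in Python; this is an illustrative simplification (entropy only, without the registration mean-square-error term of the iterative assessment), not the authors' implementation.

    import numpy as np

    def slice_entropy(img, bins=256):
        hist, _ = np.histogram(img, bins=bins)
        p = hist[hist > 0].astype(float)
        p /= p.sum()
        return -np.sum(p * np.log2(p))

    def pick_reference(stack):
        """stack: (n_slices, H, W) intensity-standardized histological slices."""
        scores = [slice_entropy(s) for s in stack]
        return int(np.argmax(scores))          # slice with the highest information content

    stack = np.random.rand(40, 128, 128)       # dummy data standing in for standardized slices
    print("reference slice index:", pick_reference(stack))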

Bagci, Ulas

2009-01-01

81

Surface Reconstruction 3D Medical Imaging  

E-print Network

Excerpt: defects are graphically identified by a user in surface-rendered images and fitted with a radial basis function. The fitted surface is used to produce CNC milling instructions to machine a mould in the shape of the surface, avoiding the manual aspects of fashioning an implant.

Drummond, Tom

82

AUTOMATIC REGISTRATION OF 3D ULTRASOUND IMAGES  

E-print Network

Excerpt: one of the most promising applications of 3-D ultrasound lies in the visualisation and volume estimation of internal 3-D structures. Unfortunately, the quality of the ultrasound data can be severely degraded by artifacts.

Drummond, Tom

83

SPATIAL COMPOUNDING OF 3D ULTRASOUND IMAGES  

E-print Network

Excerpt: conventional 2-D ultrasound cannot directly reconstruct 3-D anatomy from multiple 2-D slices, and research is underway to overcome this limitation using 3-D acquisition. Subsequent processing can build up a 3-D description of the imaged anatomy; reported applications include vascular structure [10], the gall bladder [8], breast [17], kidney [11], and heart [21].

Drummond, Tom

84

On Anisotropic Diffusion in 3D image processing and image sequence analysis  

Microsoft Academic Search

A morphological multiscale method for 3D image and 3D image sequence processing is discussed which identifies edges on level sets and the motion of features in time. Based on these indicator evaluations, the image data are processed by applying nonlinear diffusion and the theory of geometric evolution problems. The aim is to smooth the level sets of a 3D image while simultaneously

Karol Mikula; Martin Rumpf; Fiorella Sgallari

85

Molecular Imaging of Retinal Disease  

PubMed Central

Abstract Imaging of the eye plays an important role in ocular therapeutic discovery and evaluation in preclinical models and patients. Advances in ophthalmic imaging instrumentation have enabled visualization of the retina at an unprecedented resolution. These developments have contributed toward early detection of the disease, monitoring of disease progression, and assessment of the therapeutic response. These powerful technologies are being further harnessed for clinical applications by configuring instrumentation to detect disease biomarkers in the retina. These biomarkers can be detected either by measuring the intrinsic imaging contrast in tissue, or by the engineering of targeted injectable contrast agents for imaging of the retina at the cellular and molecular level. Such approaches have promise in providing a window on dynamic disease processes in the retina such as inflammation and apoptosis, enabling translation of biomarkers identified in preclinical and clinical studies into useful diagnostic targets. We discuss recently reported and emerging imaging strategies for visualizing diverse cell types and molecular mediators of the retina in vivo during health and disease, and the potential for clinical translation of these approaches. PMID:23421501

Capozzi, Megan E.; Gordon, Andrew Y.; Penn, John S.

2013-01-01

86

Acoustic 3D imaging of dental structures  

SciTech Connect

Our goals for the first year of this three-dimensional electrodynamic imaging project were to determine how to combine flexible, individually addressable arrays; preprocessing of array source signals; spectral extrapolation of received signals; acoustic tomography codes; and acoustic propagation modeling code. We investigated flexible, individually addressable acoustic array materials to find the best match in power, sensitivity and cost, and settled on PVDF sheet arrays and 3-1 composite material.

Lewis, D.K. [Lawrence Livermore National Lab., CA (United States); Hume, W.R. [California Univ., Los Angeles, CA (United States); Douglass, G.D. [California Univ., San Francisco, CA (United States)

1997-02-01

87

3D Assisted Face Recognition: A Survey of 3D Imaging, Modelling and Recognition Approachest  

Microsoft Academic Search

3D face recognition has lately been attracting ever increasing attention. In this paper we review the full spectrum of 3D face processing technology, from sensing to recognition. The review covers 3D face modelling, 3D to 3D and 3D to 2D registration, 3D based recognition and 3D assisted 2D based recognition. The fusion of 2D and 3D modalities is also addressed.

J. Kittler; A. Hilton; M. Hamouz; J. Illingworth

2005-01-01

88

Automatic 3D lesion segmentation on breast ultrasound images  

NASA Astrophysics Data System (ADS)

Automatically acquired and reconstructed 3D breast ultrasound images allow radiologists to detect and evaluate breast lesions in 3D. However, assessing potential cancers in 3D ultrasound can be difficult and time consuming. In this study, we evaluate a 3D lesion segmentation method, which we had previously developed for breast CT, and investigate its robustness on lesions on 3D breast ultrasound images. Our dataset includes 98 3D breast ultrasound images obtained on an ABUS system from 55 patients containing 64 cancers. Cancers depicted on 54 US images had been clinically interpreted as negative on screening mammography and 44 had been clinically visible on mammography. All were from women with breast density BI-RADS 3 or 4. Tumor centers and margins were indicated and outlined by radiologists. Initial RGI-eroded contours were automatically calculated and served as input to the active contour segmentation algorithm yielding the final lesion contour. Tumor segmentation was evaluated by determining the overlap ratio (OR) between computer-determined and manually-drawn outlines. Resulting average overlap ratios on coronal, transverse, and sagittal views were 0.60 +/- 0.17, 0.57 +/- 0.18, and 0.58 +/- 0.17, respectively. All OR values were significantly higher than 0.4, which is deemed "acceptable". Within the groups of mammogram-negative and mammogram-positive cancers, the overlap ratios were 0.63 +/- 0.17 and 0.56 +/- 0.16, respectively, on the coronal views, with similar results on the other views. The segmentation performance was not found to be correlated with tumor size. Results indicate robustness of the 3D lesion segmentation technique in multi-modality 3D breast imaging.
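
A minimal Python sketch of the overlap-ratio evaluation described above, computed on binary masks; the exact OR definition used in the study is not spelled out in the abstract, so an intersection-over-union form is assumed here.

    import numpy as np

    def overlap_ratio(auto_mask, manual_mask):
        auto_mask = auto_mask.astype(bool)
        manual_mask = manual_mask.astype(bool)
        inter = np.logical_and(auto_mask, manual_mask).sum()
        union = np.logical_or(auto_mask, manual_mask).sum()
        return inter / union if union else 0.0

    a = np.zeros((64, 64), bool); a[10:40, 10:40] = True    # dummy computer-determined outline
    m = np.zeros((64, 64), bool); m[15:45, 12:42] = True    # dummy manually drawn outline
    print("OR =", round(overlap_ratio(a, m), 2))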

Kuo, Hsien-Chi; Giger, Maryellen L.; Reiser, Ingrid; Drukker, Karen; Edwards, Alexandra; Sennett, Charlene A.

2013-02-01

89

Accelerated 3D catheter visualization from triplanar MR projection images.  

PubMed

One major obstacle for MR-guided catheterizations is long acquisition times associated with visualizing interventional devices. Therefore, most techniques presented hitherto rely on single-plane imaging to visualize the catheter. Recently, accelerated three-dimensional (3D) imaging based on compressed sensing has been proposed to reduce acquisition times. However, frame rates with this technique remain low, and the 3D reconstruction problem yields a considerable computational load. In X-ray angiography, it is well understood that the shape of interventional devices can be derived in 3D space from a limited number of projection images. In this work, this fact is exploited to develop a method for 3D visualization of active catheters from multiplanar two-dimensional (2D) projection MR images. This is favorable to 3D MRI as the overall number of acquired profiles, and consequently the acquisition time, is reduced. To further reduce measurement times, compressed sensing is employed. Furthermore, a novel single-channel catheter design is presented that combines a solenoidal tip coil in series with a single-loop antenna, enabling simultaneous tip tracking and shape visualization. The tracked tip and catheter properties provide constraints for compressed sensing reconstruction and subsequent 2D/3D curve fitting. The feasibility of the method is demonstrated in phantoms and in an in vivo pig experiment. PMID:20572136

Schirra, Carsten Oliver; Weiss, Steffen; Krueger, Sascha; Caulfield, Denis; Pedersen, Steen F; Razavi, Reza; Kozerke, Sebastian; Schaeffter, Tobias

2010-07-01

90

Exposing digital image forgeries by 3D reconstruction technology  

NASA Astrophysics Data System (ADS)

Digital images are easy to tamper with and edit owing to the availability of powerful image processing and editing software. In particular, when a forgery is produced by photographing a staged scene, no manipulation is applied after the picture is taken, so the usual methods, such as digital watermarks and statistical correlation techniques, can hardly detect the traces of tampering. According to these image forgery characteristics, a method based on 3D reconstruction technology, which detects forgeries by examining the dimensional relationships of the objects appearing in the image, is presented in this paper. The detection method includes three steps. In the first step, all the image parameters are calibrated and each crucial object in the image is chosen and matched. In the second step, the 3D coordinates of each object are calculated by bundle adjustment. In the final step, the dimensional relationship of each object is analyzed. Experiments were designed to test this detection method; the 3D reconstruction and the forged-image 3D reconstruction were computed independently. Test results show that the fabricated character of digital forgeries can be identified intuitively by this method.

Wang, Yongqiang; Xu, Xiaojing; Li, Zhihui; Liu, Haizhen; Li, Zhigang; Huang, Wei

2009-11-01

91

3D thermography imaging standardization technique for inflammation diagnosis  

NASA Astrophysics Data System (ADS)

We develop a 3D thermography imaging standardization technique to allow quantitative data analysis. Medical digital infrared thermal imaging is a very sensitive and reliable means of graphically mapping and displaying skin surface temperature. It allows doctors to visualise in colour and quantify temperature changes on the skin surface. The spectrum of colours indicates both hot and cold responses, which may co-exist if the pain associated with an inflammatory focus excites an increase in sympathetic activity. However, because thermography provides only qualitative diagnostic information, it has not gained acceptance in the medical and veterinary communities as a necessary or effective tool in inflammation and tumor detection. Our technique is based on the combination of a visual 3D imaging technique and a thermal imaging technique, which maps the 2D thermography images onto a 3D anatomical model. We then rectify the 3D thermogram into a view-independent thermogram and conform it to a standard shape template. The combination of these imaging facilities allows the generation of combined 3D and thermal data from which thermal signatures can be quantified.

Ju, Xiangyang; Nebel, Jean-Christophe; Siebert, J. Paul

2005-01-01

92

Retinal segmentation using multicolor laser imaging.  

PubMed

Spectral-domain optical coherence tomography (SD-OCT) changed 3 worlds: clinical care, clinical research, and the regulatory environment of phases 2, 3, and 4 pharmaceutical and surgical trials. OCT is now undergoing another transformation with multicolor technology, which acquires images using data from 3 simultaneous lasers: red, green, and blue, taking advantage of the different wavelengths of each of these colors to most precisely image 3 different zones of the retina. Rather than seeing only the surface of the retina and optic disc and any large lesions in the deeper retina, this technology provides a topographic map of the outer (red), mid (green), and inner (blue) retina somewhat similar to what is observed with fundus autofluorescence of deep retina, retinal pigment epithelium, and choroid. Multicolor imaging will supplement and help to define what is observed with traditional fundus photography and SD-OCT. In addition, it may demonstrate abnormalities when routine photography is normal and when SD-OCT findings are equivocal. This review will illustrate the basic principles of multicolor imaging and will show clinical examples of how this technique can further define retinal and optic nerve pathology. PMID:25133967

Sergott, Robert C

2014-09-01

93

A 3D surface imaging system for assessing human obesity  

NASA Astrophysics Data System (ADS)

The increasing prevalence of obesity suggests a need to develop a convenient, reliable and economical tool for assessment of this condition. Three-dimensional (3D) body surface imaging has emerged as an exciting technology for estimation of body composition. This paper presents a new 3D body imaging system, which was designed for enhanced portability, affordability, and functionality. In this system, stereo vision technology was used to satisfy the requirements for a simple hardware setup and fast image acquisitions. The portability of the system was created via a two-stand configuration, and the accuracy of body volume measurements was improved by customizing stereo matching and surface reconstruction algorithms that target specific problems in 3D body imaging. Body measurement functions dedicated to body composition assessment also were developed. The overall performance of the system was evaluated in human subjects by comparison to other conventional anthropometric methods, as well as air displacement plethysmography, for body fat assessment.

Xu, B.; Yu, W.; Yao, M.; Yao, X.; Li, Q.; Pepper, M. R.; Freeland-Graves, J. H.

2009-08-01

94

Prototype of Video Endoscopic Capsule With 3-D Imaging Capabilities  

Microsoft Academic Search

Wireless video capsules can now carry out gastroenterological examinations. The images make it possible to analyze some diseases during postexamination, but the gastroenterologist could make a direct diagnosis if the video capsule integrated vision algorithms. The first step toward in situ diagnosis is the implementation of 3-D imaging techniques in the video capsule. By transmitting only the diagnosis instead of

Anthony Kolar; Olivier Romain; Jad Ayoub; Sylvain Viateur; Bertrand Granado

2010-01-01

95

Automated detection of tunneling nanotubes in 3D images  

Microsoft Academic Search

Background: This paper presents an automated method for the identification of thin membrane tubes in 3D fluorescence images. These tubes, referred to as tunneling nanotubes (TNTs), are newly discovered intercellular structures that connect living cells through a membrane continuity. TNTs are 50-200 nm in diameter, crossing from one cell to another at their nearest distance. In microscopic images,

Erlend Hodneland; Arvid Lundervold; Steffen Gurke; Xue-Cheng Tai; Amin Rustom; Hans-Hermann Gerdes

2006-01-01

96

2D and 3D Elasticity Imaging Using Freehand Ultrasound  

E-print Network

Excerpt (thesis summary): medical imaging is vital to modern clinical practice, enabling clinicians to examine internal anatomy non-invasively; elasticity imaging relates to mechanical properties (e.g., stiffness) to which conventional forms of ultrasound, X-ray and magnetic resonance imaging are largely insensitive.

Drummond, Tom

97

Segmentation of Retinal Arteries in Adaptive Optics Images  

E-print Network

Excerpt (Nicolas Lermé, Florence Rossant): we present a method for automatically segmenting the walls of retinal arteries in adaptive optics retina imaging, using an approximate-parallelism constraint; the work is motivated by arterial hypertension (AH) and diabetic retinopathy.

Boyer, Edmond

98

2D/3D Image Registration using Regression Learning  

PubMed Central

In computer vision and image analysis, image registration between 2D projections and a 3D image that achieves high accuracy and near real-time computation is challenging. In this paper, we propose a novel method that can rapidly detect an object's 3D rigid motion or deformation from a 2D projection image or a small set thereof. The method is called CLARET (Correction via Limited-Angle Residues in External Beam Therapy) and consists of two stages: registration preceded by shape space and regression learning. In the registration stage, linear operators are used to iteratively estimate the motion/deformation parameters based on the current intensity residue between the target projection(s) and the digitally reconstructed radiograph(s) (DRRs) of the estimated 3D image. The method determines the linear operators via a two-step learning process. First, it builds a low-order parametric model of the image region's motion/deformation shape space from its prior 3D images. Second, using learning-time samples produced from the 3D images, it formulates the relationships between the model parameters and the co-varying 2D projection intensity residues by multi-scale linear regressions. The calculated multi-scale regression matrices yield the coarse-to-fine linear operators used in estimating the model parameters from the 2D projection intensity residues in the registration. The method's application to Image-guided Radiation Therapy (IGRT) requires only a few seconds and yields good results in localizing a tumor under rigid motion in the head and neck and under respiratory deformation in the lung, using one treatment-time imaging 2D projection or a small set thereof. PMID:24058278
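
The core regression step, learning a linear operator that maps projection-intensity residues to motion parameters and applying it iteratively, can be sketched as follows in Python. This is a toy illustration with a hypothetical one-parameter projection model, not the CLARET implementation, and it omits the shape-space modelling and multi-scale aspects.

    import numpy as np

    def learn_operator(residues, params):
        """residues: (n_samples, n_pixels); params: (n_samples, n_params)."""
        R, _, _, _ = np.linalg.lstsq(residues, params, rcond=None)
        return R                                          # (n_pixels, n_params)

    def register(target_proj, project, R, p0, n_iter=10):
        p = np.asarray(p0, float)
        for _ in range(n_iter):
            residue = (target_proj - project(p)).ravel()
            p = p + residue @ R                           # linear update of the motion parameters
        return p

    # Toy "projection" model: horizontal shift of a ramp image (hypothetical, for illustration only)
    def project(p):
        return np.roll(base, int(round(p[0])), axis=1)

    base = np.tile(np.linspace(0.0, 1.0, 64), (64, 1))
    train_p = np.array([[s] for s in range(-5, 6)], float)
    train_r = np.array([(project(t) - project([0.0])).ravel() for t in train_p])
    R = learn_operator(train_r, train_p)
    print(register(project([3.0]), project, R, p0=[0.0]))   # recovers a shift of about 3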

Chou, Chen-Rui; Frederick, Brandon; Mageras, Gig; Chang, Sha; Pizer, Stephen

2013-01-01

99

3-D Terahertz Synthetic-Aperture Imaging and Spectroscopy  

NASA Astrophysics Data System (ADS)

Terahertz (THz) wavelengths have attracted recent interest in multiple disciplines within engineering and science. Situated between the infrared and the microwave region of the electromagnetic spectrum, THz energy can propagate through non-polar materials such as clothing or packaging layers. Moreover, many chemical compounds, including explosives and many drugs, reveal strong absorption signatures in the THz range. For these reasons, THz wavelengths have great potential for non-destructive evaluation and explosive detection. Three-dimensional (3-D) reflection imaging with considerable depth resolution is also possible using pulsed THz systems. While THz imaging (especially 3-D) systems typically operate in transmission mode, reflection offers the most practical configuration for standoff detection, especially for objects with high water content (like human tissue) which are opaque at THz frequencies. In this research, reflection-based THz synthetic-aperture (SA) imaging is investigated as a potential imaging solution. THz SA imaging results presented in this dissertation are unique in that a 2-D planar synthetic array was used to generate a 3-D image without relying on a narrow time-window for depth isolation [Shen 2005]. Novel THz chemical detection techniques are developed and combined with broadband THz SA capabilities to provide concurrent 3-D spectral imaging. All algorithms are tested with various objects and pressed pellets using a pulsed THz time-domain system in the Northwest Electromagnetics and Acoustics Research Laboratory (NEAR-Lab).

Henry, Samuel C.

100

Imaging of Buried 3D Magnetic Rolled-up Nanomembranes  

PubMed Central

Increasing performance and enabling novel functionalities of microelectronic devices, such as three-dimensional (3D) on-chip architectures in optics, electronics, and magnetics, calls for new approaches in both fabrication and characterization. Up to now, 3D magnetic architectures had mainly been studied by integral means without providing insight into local magnetic microstructures that determine the device performance. We prove a concept that allows for imaging magnetic domain patterns in buried 3D objects, for example, magnetic tubular architectures with multiple windings. The approach is based on utilizing the shadow contrast in transmission X-ray magnetic circular dichroism (XMCD) photoemission electron microscopy and correlating the observed 2D projection of the 3D magnetic domains with simulated XMCD patterns. That way, we are not only able to assess magnetic states but also monitor the field-driven evolution of the magnetic domain patterns in individual windings of buried magnetic rolled-up nanomembranes. PMID:24849571

2014-01-01

101

Large deformation 3D image registration in image-guided radiation therapy  

E-print Network

Excerpt (Mark Foskey, Brad Davis): concerns the processing of serial 3D CT images used in image-guided radiation therapy and a major assumption in deformable registration; in-room imaging methods such as cone-beam CT and CT-on-rails enable image-guided radiation therapy.

Utah, University of

102

Segmentation, registration,and selective watermarking of retinal images  

E-print Network

In this dissertation, I investigated some fundamental issues related to medical image segmentation, registration, and watermarking. I used color retinal fundus images to perform my study because of the rich representation of different objects (blood...

Wu, Di

2006-08-16

103

Vessel Cross-Sectional Diameter Measurement on Color Retinal Image  

Microsoft Academic Search

Vessel cross-sectional diameter is an important feature for analyzing retinal vascular changes. In automated retinal image analysis, the measurement of vascular width is a complex process, as most of the vessels are only a few pixels wide or suffer from a lack of contrast. In this paper, we propose a new method to measure the retinal blood vessel diameter which can be used

Alauddin Bhuiyan; Baikunth Nath; Joselíto J. Chua; Ramamohanarao Kotagiri

2008-01-01

104

High dynamic depth range for 3D image capturing system  

NASA Astrophysics Data System (ADS)

Detecting the 3D depth information of objects in a deep scene is difficult due to the limited depth of field (DoF) of cameras. In this paper, we propose a 3D depth map capturing system with high dynamic depth range (HDDR). Unlike conventional extended depth of field (EDoF) methods, the HDDR method does not deteriorate the image quality. By imitating an actively tunable m x n lens array focusing on a sequence of imaging planes, each object in the scene is clearly captured by at least three elemental lenses. We estimate the elemental depth maps individually using depth from disparity, and then fuse them into one all-in-focus depth map. Compared with conventional 3D cameras, the working range of the HDDR system with a 3x3 camera array can be extended from 90 cm to 165 cm.

Huang, Yi-Pai; Hsieh, Po-Yuan; Su, Yong-Ren; Shieh, Han-Ping D.

2014-06-01

105

3-D Display Of Magnetic Resonance Imaging Of The Spine  

NASA Astrophysics Data System (ADS)

The original data are produced through standard magnetic resonance imaging (MRI) procedures with a surface coil applied to the lower back of a normal human subject. The 3-D spine image data consist of twenty-six contiguous slices with 256 x 256 pixels per slice. Two methods for visualization of the 3-D spine are explored. One method utilizes a varifocal mirror system which creates a true 3-D virtual picture of the object. Another method uses a standard high-resolution monitor to simultaneously show the three orthogonal sections which intersect at any user-selected point within the object volume. We discuss the application of these systems in the assessment of low back pain.
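
The second display method, showing the three orthogonal sections through a user-selected point, amounts to simple array slicing. A minimal Python sketch on a dummy 26 x 256 x 256 volume (the plane names are generic, since the anatomical orientation depends on the acquisition):

    import numpy as np

    def orthogonal_sections(volume, point):
        """volume: (slices, rows, cols); point: (slice_idx, row_idx, col_idx)."""
        k, i, j = point
        in_plane  = volume[k, :, :]      # the acquired slice through the point
        reformat1 = volume[:, i, :]      # first reformatted orthogonal plane
        reformat2 = volume[:, :, j]      # second reformatted orthogonal plane
        return in_plane, reformat1, reformat2

    spine = np.random.rand(26, 256, 256)                   # stands in for the 26-slice MR data set
    a, b, c = orthogonal_sections(spine, (13, 128, 128))
    print(a.shape, b.shape, c.shape)                       # (256, 256) (26, 256) (26, 256)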

Nelson, Alan C.; Kim, Yongmin; Haralick, Robert M.; Anderson, Paul A.; Johnson, Roger H.; DeSoto, Larry A.

1988-06-01

106

Reconstruction of 3D scenes from sequences of images  

NASA Astrophysics Data System (ADS)

Reconstruction of three-dimensional (3D) scenes is an active research topic in the fields of computer vision and 3D display, and modelling 3D objects rapidly and effectively remains a challenge. A 3D model can be extracted from multiple images. The system only requires a sequence of images taken with a freely moving camera, without knowledge of the camera parameters, which provides a high degree of flexibility. We focus on quickly merging point clouds of the object from depth map sequences. The whole system combines algorithms from different areas of computer vision, such as camera calibration, stereo correspondence, point cloud splicing and surface reconstruction. The procedure of 3D reconstruction is decomposed into a number of successive steps. First, image sequences are acquired by the camera moving freely around the object. Second, scene depth is obtained by a non-local stereo matching algorithm; pairwise matching is realized with the Scale Invariant Feature Transform (SIFT) algorithm. An initial matching is made for the first two images of the sequence, and for each subsequent image the points of interest corresponding to those in previous images are refined or corrected, eliminating the vertical parallax between the images. The next step is camera calibration, in which the intrinsic and external parameters of the camera are calculated, giving the relative position and orientation of the camera. A sequence of depth maps is acquired using a non-local cost aggregation method for stereo matching, and a point cloud model is built from the scene depths and the external camera parameters. The point cloud model is then approximated by a triangular wire-frame mesh to reduce geometric complexity and to tailor the model to the requirements of computer graphics visualization systems. Finally, the texture is mapped onto the wire-frame model, which can also be used for 3D display. According to the experimental results, we can reconstruct a 3D point cloud model more quickly and efficiently than other methods.
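
A minimal Python sketch (assuming OpenCV 4.4 or later is installed) of the pairwise SIFT matching step mentioned above; calibration, dense stereo, point cloud fusion and meshing are omitted, and the file names are hypothetical.

    import cv2

    def match_pair(img1_path, img2_path, ratio=0.75):
        img1 = cv2.imread(img1_path, cv2.IMREAD_GRAYSCALE)
        img2 = cv2.imread(img2_path, cv2.IMREAD_GRAYSCALE)
        sift = cv2.SIFT_create()
        kp1, des1 = sift.detectAndCompute(img1, None)
        kp2, des2 = sift.detectAndCompute(img2, None)
        matcher = cv2.BFMatcher(cv2.NORM_L2)
        # Lowe's ratio test keeps only distinctive correspondences
        good = [m for m, n in matcher.knnMatch(des1, des2, k=2) if m.distance < ratio * n.distance]
        pts1 = [kp1[m.queryIdx].pt for m in good]
        pts2 = [kp2[m.trainIdx].pt for m in good]
        return pts1, pts2

    # pts1, pts2 = match_pair("frame_000.png", "frame_001.png")   # hypothetical file names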

Niu, Bei; Sang, Xinzhu; Chen, Duo; Cai, Yuanfa

2013-08-01

107

3-D transformations of images in scanline order  

Microsoft Academic Search

Currently, texture mapping onto projections of 3-D surfaces is time consuming and subject to considerable aliasing errors. Usually the procedure is to perform some inverse mapping from the area of the pixel onto the surface texture. It is difficult to do this correctly. There is an alternate approach where the texture surface is transformed as a 2-D image until it

Ed Catmull; Alvy Ray Smith

1980-01-01

108

Silhouette-based 3D Model Reconstruction from Multiple Images  

Microsoft Academic Search

The goal of this study is to investigate the reconstruction of 3D graphical models of real objects in a controlled imaging environment and present the work done in our group based on silhouette-based reconstruction. Although many parts of the whole system have been well-known in the literature and in practice, the main contribution of the

Adem Yasar Mulayim; Ulas Yılmaz; Volkan Atalay

109

3D Face Reconstruction from 2D Images  

Microsoft Academic Search

This paper surveys the topic of 3D face reconstruction using 2D images from a computer science perspective. Various approaches have been proposed as solutions for this problem but most have their limitations and drawbacks. Shape from shading, shape from silhouettes, shape from motion and analysis by synthesis using morphable models are currently regarded as the main methods of attaining the

W. N. Widanagamaachchi; A. T. Dharmaratne

2008-01-01

110

Visualization and Segmentation Techniques in 3D Ultrasound Images  

Microsoft Academic Search

Although ultrasonography is an important cost-effective imaging modality, technical improvements are needed before its full potential is realized for accurate and quantitative monitoring of disease progression or regression. 2D viewing of 3D anatomy using conventional ultrasonography limits our ability to quantify and visualize pathology and is partly responsible for the reported variability in diagnosis and monitoring of disease progression. Efforts

Aaron Fenster; Mingyue Ding; Ning Hu; Hanif M. Ladak; Guokuan Li; Neale Cardinal; Dónal Downey

111

Holography of 3D surface reconstructed CT images.  

PubMed

A multiplex hologram (cylindrical holographic stereogram) was successfully made from three-dimensional (3D) surface reconstruction CT images of a child with plagiocephaly. This method appears to be suitable as one of the projectional aids of 3D surface reconstruction CT images that are proving useful in plastic and reconstructive surgery. The principle of the method is described. Also discussed is the possibility of developing a computer-aided hologram synthesizing system that could be used for images obtained with U-arm X-ray equipment (by either cinefilm, or videotape, or digital subtraction angiography) or by CT as well as MR. For practical use, it is necessary for the hologram to be synthesized in a short time. One of the key problems in developing such a machine is the need for an incoherent-to-coherent image converter. PMID:3335666

Fujioka, M; Ohyama, N; Honda, T; Tsujiuchi, J; Suzuki, M; Hashimoto, S; Ikeda, S

1988-01-01

112

Practical pseudo-3D registration for large tomographic images  

NASA Astrophysics Data System (ADS)

Image registration is a powerful tool in various tomographic applications. Our main focus is on microCT applications in which samples/animals can be scanned multiple times under different conditions or at different time points. For this purpose, a registration tool capable of handling fairly large volumes has been developed, using a novel pseudo-3D method to achieve fast and interactive registration with simultaneous 3D visualization. To reduce computation complexity in 3D registration, we decompose it into several 2D registrations, which are applied to the orthogonal views (transaxial, sagittal and coronal) sequentially and iteratively. After registration in each view, the next view is retrieved with the new transformation matrix for registration. This reduces the computation complexity significantly. For rigid transform, we only need to search for 3 parameters (2 shifts, 1 rotation) in each of the 3 orthogonal views instead of 6 (3 shifts, 3 rotations) for full 3D volume. In addition, the amount of voxels involved is also significantly reduced. For the proposed pseudo-3D method, image-based registration is employed, with Sum of Square Difference (SSD) as the similarity measure. The searching engine is Powell's conjugate direction method. In this paper, only rigid transform is used. However, it can be extended to affine transform by adding scaling and possibly shearing to the transform model. We have noticed that more information can be used in the 2D registration if Maximum Intensity Projections (MIP) or Parallel Projections (PP) is used instead of the orthogonal views. Also, other similarity measures, such as covariance or mutual information, can be easily incorporated. The initial evaluation on microCT data shows very promising results. Two application examples are shown: dental samples before and after treatment and structural changes in materials before and after compression. Evaluation on registration accuracy between pseudo-3D method and true 3D method has been performed.
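
One 2D registration pass of the kind described above can be sketched as follows in Python: Powell's method minimizes the sum of squared differences between a fixed and a transformed moving view (here maximum intensity projections of a synthetic volume). This is an illustrative sketch, not the authors' implementation, and it shows only a single view rather than the full iteration over the three orthogonal views.

    import numpy as np
    from scipy.ndimage import rotate, shift
    from scipy.optimize import minimize

    def ssd(params, fixed, moving):
        dy, dx, angle = params
        warped = rotate(shift(moving, (dy, dx)), angle, reshape=False, order=1)
        return np.sum((fixed - warped) ** 2)

    def register_view(fixed_view, moving_view):
        res = minimize(ssd, x0=[0.0, 0.0, 0.0], args=(fixed_view, moving_view), method="Powell")
        return res.x                                       # (shift_y, shift_x, rotation in degrees)

    # Synthetic test: a smooth blob volume shifted by 3 voxels along x
    z, y, x = np.mgrid[0:64, 0:64, 0:64]
    vol_fixed = np.exp(-((x - 32) ** 2 + (y - 28) ** 2 + (z - 30) ** 2) / 200.0)
    vol_moving = np.roll(vol_fixed, 3, axis=2)
    # One orthogonal view (transaxial MIP); the full scheme repeats this for the sagittal
    # and coronal views and iterates, updating the 3D transform matrix after each pass.
    print(register_view(vol_fixed.max(axis=0), vol_moving.max(axis=0)))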

Liu, Xuan; Laperre, Kjell; Sasov, Alexander

2014-09-01

113

3D Winding Number: Theory and Application to Medical Imaging  

PubMed Central

We develop a new formulation, mathematically elegant, to detect critical points of 3D scalar images. It is based on a topological number, which is the generalization to three dimensions of the 2D winding number. We illustrate our method by considering three different biomedical applications, namely, detection and counting of ovarian follicles and neuronal cells and estimation of cardiac motion from tagged MR images. Qualitative and quantitative evaluation emphasizes the reliability of the results. PMID:21317978
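
For reference, a standard Gauss-integral form of a 3D winding number (the normalized solid angle subtended by a closed oriented surface S at a point p) is given below in LaTeX; the notation is ours and may differ from the paper's own formulation.

    % S: closed oriented surface, p: point not on S, n: outward unit normal.
    % In 2D the analogous integral reduces to the usual winding number.
    w(S,\mathbf{p}) \;=\; \frac{1}{4\pi}\oint_{S}
        \frac{(\mathbf{x}-\mathbf{p})\cdot\mathbf{n}(\mathbf{x})}
             {\lVert \mathbf{x}-\mathbf{p}\rVert^{3}}\,\mathrm{d}A(\mathbf{x})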

Becciu, Alessandro; Fuster, Andrea; Pottek, Mark; van den Heuvel, Bart; ter Haar Romeny, Bart; van Assen, Hans

2011-01-01

114

Impact of Helical Reconstruction Algorithm on 3D Image Artifact  

Microsoft Academic Search

Helical computed tomography (HCT) has become the preferred protocol in routine clinical applications. The advantages of HCT include its capability of scanning a complete anatomical volume in a single breath-hold, and the capability of generating images at arbitrary locations. However, studies have indicated various HCT-related image artifacts in 3D applications. In this paper, we perform a detailed analysis on the

Jiang Hsieh

1998-01-01

115

Generation of photorealistic 3D image using optical digitizer.  

PubMed

A technique to generate a photorealistic three-dimensional (3D) image and color-textured model using a dedicated optical digitizer is presented. The proposed technique is started with the range and texture image acquisition from different viewpoints, followed by the registration and integration of multiple range images to get a complete and nonredundant point cloud that represents a real-life object. The accuracy of the range image and the precision of correspondence between the range image and texture image are guaranteed by sensor system calibration. Based on the point cloud, a geometric model is established by considering the connectivity of adjacent range image points. In order to enhance the photorealistic effect, we suggest a texture blending technique that utilizes a composite-weight strategy to blend the texture images within the overlapped region. This technique allows more efficient removal of the artifacts existing in the registered texture image, leading to a 3D image with photorealistic quality and color-texture modeling. Experimental results are also presented to testify to the validity of the proposed method. PMID:22441476

Liu, X M; Peng, X; Yin, Y K; Li, A M; Liu, X L; Wu, W

2012-03-20

116

Simulation of 3D objects into breast tomosynthesis images.  

PubMed

Digital breast tomosynthesis is a new three-dimensional (3D) breast-imaging modality that produces images of cross-sectional planes parallel to the detector plane from a limited number of X-ray projections over a limited angular range. Several technical and clinical parameters have not yet been completely optimised. Some of the open questions could be addressed experimentally; other parameter settings cannot be easily realised in practice and the associated optimisation process requires therefore a theoretical approach. Rather than simulating the complete 3D imaging chain, it is hypothesised that the simulation of small lesions into clinical (or test object) images can be of help in the optimisation process. In the present study, small 3D objects have been simulated into real projection images. Subsequently, these hybrid projection images are reconstructed using the routine clinical reconstruction tools. In this study, the validation of this simulation framework is reported through the comparison between simulated and real objects in reconstructed planes. The results confirm that there is no statistically significant difference between the simulated and the real objects. This suggests that other small mathematical or physiological objects could be simulated with the same approach. PMID:20207750

Shaheen, E; Zanca, F; Sisini, F; Zhang, G; Jacobs, J; Bosmans, H

2010-01-01

117

Deblurring of tomosynthesis images using 3D anisotropic diffusion filtering  

NASA Astrophysics Data System (ADS)

Breast tomosynthesis is an emerging state-of-the-art three-dimensional (3D) imaging technology that demonstrates significant early promise in screening and diagnosing breast cancer. However, this kind of image has significant out-of-plane artifacts due to its limited tomography nature, which degrades image quality and can hinder interpretation. In this paper, we develop a robust deblurring method to remove or suppress blurry artifacts by applying a three-dimensional (3D) nonlinear anisotropic diffusion filtering method. The differential equation of 3D anisotropic diffusion filtering is discretized using explicit and implicit numerical methods, respectively, combined with first-kind (fixed grey value) and second-kind (adiabatic) boundary conditions under a ten-nearest-neighbor grid configuration of the finite difference scheme. The discretized diffusion equation is applied to the breast volume reconstructed from the entire set of tomosynthesis images of the breast. The proposed diffusion filtering method is evaluated qualitatively and quantitatively on clinical tomosynthesis images. Results indicate that the proposed diffusion filtering method is very powerful in suppressing the blurry artifacts, and also that the implicit numerical algorithm with the fixed-value boundary condition has better performance in enhancing the contrast of the tomosynthesis images, demonstrating the effectiveness of the proposed filtering method in deblurring the out-of-plane artifacts.
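
As a concrete illustration of the kind of filtering described above, the sketch below implements a plain explicit 3D Perona-Malik style anisotropic diffusion with a 6-neighbour stencil and adiabatic borders; the conductance function, kappa, time step and stencil are illustrative assumptions and do not reproduce the paper's ten-nearest-neighbour or implicit schemes.

```python
# Explicit 3D nonlinear anisotropic diffusion (edge-preserving smoothing) sketch.
import numpy as np

def anisotropic_diffusion_3d(vol, n_iter=10, kappa=30.0, dt=1.0 / 7.0):
    u = vol.astype(np.float64).copy()
    for _ in range(n_iter):
        total = np.zeros_like(u)
        for axis in range(3):
            # forward and backward differences along this axis (adiabatic borders)
            grad_f = np.diff(u, axis=axis, append=np.take(u, [-1], axis=axis))
            grad_b = np.diff(u, axis=axis, prepend=np.take(u, [0], axis=axis))
            c_f = np.exp(-(grad_f / kappa) ** 2)     # edge-stopping conductance
            c_b = np.exp(-(grad_b / kappa) ** 2)
            total += c_f * grad_f - c_b * grad_b
        u += dt * total                              # dt <= 1/6 keeps the scheme stable
    return u
```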

Sun, Xuejun; Land, Walker; Samala, Ravi

2007-03-01

118

Refraction Correction in 3D Transcranial Ultrasound Imaging  

PubMed Central

We present the first correction of refraction in three-dimensional (3D) ultrasound imaging using an iterative approach that traces propagation paths through a two-layer planar tissue model, applying Snell's law in 3D. This approach is applied to real-time 3D transcranial ultrasound imaging by precomputing delays offline for several skull thicknesses, allowing the user to switch between three sets of delays for phased array imaging at the push of a button. Simulations indicate that refraction correction may be expected to increase sensitivity, reduce beam steering errors, and partially restore lost spatial resolution, with the greatest improvements occurring at the largest steering angles. Distorted images of cylindrical lesions were created by imaging through an acrylic plate in a tissue-mimicking phantom. As a result of correcting for refraction, lesions were restored to 93.6% of their original diameter in the lateral direction and 98.1% of their original shape along the long axis of the cylinders. In imaging two healthy volunteers, the mean brightness increased by 8.3% and showed no spatial dependency. PMID:24275538
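
The geometric building block of the two-layer model above is 3D refraction of a ray at a planar interface. The sketch below applies the standard vector form of Snell's law, written for sound speeds (where sin(theta_t)/sin(theta_i) = c2/c1); the variable names and the planar-interface assumption are illustrative, not taken from the paper.

```python
# Refract a unit ray direction at a planar interface using vector Snell's law in 3D.
import numpy as np

def refract(d, n, c1, c2):
    """d: unit incident direction; n: unit interface normal pointing toward the
    incident medium; c1, c2: sound speeds in the incident and transmitted media."""
    d, n = np.asarray(d, dtype=float), np.asarray(n, dtype=float)
    eta = c2 / c1                       # plays the role of n1/n2 in the optical formula
    cos_i = -np.dot(n, d)
    k = 1.0 - eta ** 2 * (1.0 - cos_i ** 2)
    if k < 0.0:
        return None                     # total internal reflection, no transmitted ray
    t = eta * d + (eta * cos_i - np.sqrt(k)) * n
    return t / np.linalg.norm(t)
```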

Lindsey, Brooks D.; Smith, Stephen W.

2014-01-01

119

Spatial and symbolic queries for 3D image data  

NASA Astrophysics Data System (ADS)

We present a query system for an object-oriented biomedical imaging database containing 3-D anatomical structures and their corresponding 2-D images. The graphical interface facilitates the formation of spatial queries, nonspatial or symbolic queries, and combined spatial/symbolic queries. A query editor is used for the creation and manipulation of 3-D query objects as volumes, surfaces, lines, and points. Symbolic predicates are formulated through a combination of text fields and multiple choice selections. Query results, which may include images, image contents, composite objects, graphics, and alphanumeric data, are displayed in multiple views. Objects returned by the query may be selected directly within the views for further inspection or modification, or for use as query objects in subsequent queries. Our image database query system provides visual feedback and manipulation of spatial query objects, multiple views of volume data, and the ability to combine spatial and symbolic queries. The system allows for incremental enhancement of existing objects and the addition of new objects and spatial relationships. The query system is designed for databases containing symbolic and spatial data. This paper discusses its application to data acquired in biomedical 3-D image reconstruction, but it is applicable to other areas such as CAD/CAM, geographical information systems, and computer vision.

Benson, Daniel C.; Zick, Gregory L.

1992-04-01

120

3-D object-oriented image analysis of geophysical data  

NASA Astrophysics Data System (ADS)

Geophysical data are the main source of information about the subsurface. Geophysical techniques are, however, highly non-unique in determining specific physical parameters and boundaries of subsurface objects. To obtain actual physical information, an inversion process is often applied, in which measurements at or above the Earth surface are inverted into a 2- or 3-D subsurface spatial distribution of the physical property. Interpreting these models into structural objects, related to physical processes, requires a priori knowledge and expert analysis which is susceptible to subjective choices and is therefore often non-repeatable. In this research, we implemented a recently introduced object-based approach to interpret the 3-D inversion results of a single geophysical technique using the available a priori information and the physical and geometrical characteristics of the interpreted objects. The introduced methodology is semi-automatic and repeatable, and allows the extraction of subsurface structures using 3-D object-oriented image analysis (3-D OOA) in an objective knowledge-based classification scheme. The approach allows for a semi-objective setting of thresholds that can be tested and, if necessary, changed in a very fast and efficient way. These changes require only changing the thresholds used in a so-called ruleset, which is composed of algorithms that extract objects from a 3-D data cube. The approach is tested on a synthetic model, which is based on a priori knowledge on objects present in the study area (Tanzania). Object characteristics and thresholds were well defined in a 3-D histogram of velocity versus depth, and objects were fully retrieved. The real model results showed how 3-D OOA can deal with realistic 3-D subsurface conditions in which the boundaries become fuzzy, the object extensions become unclear and the model characteristics vary with depth due to the different physical conditions. As expected, the 3-D histogram of the real data was substantially more complex. Still, the 3-D OOA-derived objects were extracted based on their velocity and their depth location. Spatially defined boundaries, based on physical variations, can improve the modelling with spatially dependent parameter information. With 3-D OOA, the non-uniqueness on the location of objects and their physical properties can be potentially significantly reduced.

Fadel, I.; Kerle, N.; van der Meijde, M.

2014-07-01

121

3D ultrasound image segmentation using wavelet support vector machines  

PubMed Central

Purpose: Transrectal ultrasound (TRUS) imaging is clinically used in prostate biopsy and therapy. Segmentation of the prostate on TRUS images has many applications. In this study, a three-dimensional (3D) segmentation method for TRUS images of the prostate is presented for 3D ultrasound-guided biopsy. Methods: This segmentation method utilizes a statistical shape, texture information, and intensity profiles. A set of wavelet support vector machines (W-SVMs) is applied to the images at various subregions of the prostate. The W-SVMs are trained to adaptively capture the features of the ultrasound images in order to differentiate the prostate and nonprostate tissue. This method consists of a set of wavelet transforms for extraction of prostate texture features and a kernel-based support vector machine to classify the textures. The voxels around the surface of the prostate are labeled in sagittal, coronal, and transverse planes. The weight functions are defined for each labeled voxel on each plane and on the model at each region. In the 3D segmentation procedure, the intensity profiles around the boundary between the tentatively labeled prostate and nonprostate tissue are compared to the prostate model. Consequently, the surfaces are modified based on the model intensity profiles. The segmented prostate is updated and compared to the shape model. These two steps are repeated until they converge. Manual segmentation of the prostate serves as the gold standard and a variety of methods are used to evaluate the performance of the segmentation method. Results: The results from 40 TRUS image volumes of 20 patients show that the Dice overlap ratio is 90.3% ± 2.3% and that the sensitivity is 87.7% ± 4.9%. Conclusions: The proposed method provides a useful tool in our 3D ultrasound image-guided prostate biopsy and can also be applied to other applications in the prostate. PMID:22755682
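
The Dice overlap ratio and sensitivity quoted in the results are standard overlap metrics; computing them from binary masks is straightforward, as in the generic sketch below (not the authors' code).

```python
# Dice overlap and sensitivity between a segmentation mask and a gold-standard mask.
import numpy as np

def dice_and_sensitivity(seg, gold):
    seg, gold = seg.astype(bool), gold.astype(bool)
    tp = np.logical_and(seg, gold).sum()             # true-positive voxels
    dice = 2.0 * tp / (seg.sum() + gold.sum())
    sensitivity = tp / gold.sum()
    return dice, sensitivity
```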

Akbari, Hamed; Fei, Baowei

2012-01-01

122

Do focus measures apply to retinal images?  

NASA Astrophysics Data System (ADS)

The diverse needs for digital auto-focusing systems have driven the development of a variety of focus measures. The purpose of the current study was to investigate whether any of these focus measures are biologically plausible; specifically whether they are applicable to retinal images from which defocus information is extracted in the operation of accommodation and emmetropization, two ocular auto-focusing mechanisms. Ten representative focus measures were chosen for analysis, 6 in the spatial domain and 4 transform-based. Their performance was examined for combinations of non-defocus aberrations and positive and negative defocus. For each combination, a wavefront was reconstructed, the corresponding point spread function (PSF) computed using the Fast Fourier Transform (FFT), and then the blurred image obtained as the convolution of the PSF and a perfect image. For each blurred image, a focus measure curve was derived for each focus measure. Aberration data were either collected from 22 real eyes or randomly generated based on Gaussian parameters describing data from a published large scale human study (n>100). For the latter data set, analyses made use of distributed computing on a small inhomogeneous computer cluster. In the presence of small amounts of non-defocus aberrations, all focus measures showed monotonic changes with positive or negative defocus, and their curves generally remained unimodal, although there were large differences in their variability, sensitivity to defocus and effective ranges. However, the performance of a number of these focus measures became unacceptable when non-defocus aberrations exceeded a certain level.
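
For the image-formation step described above (wavefront to PSF via an FFT, then convolution with a perfect image), a minimal sketch might look like the following; the pupil sampling, wavelength and zero-padding factor are illustrative assumptions.

```python
# Build a generalized pupil from a wavefront map, obtain the incoherent PSF with an
# FFT, and blur a perfect image by convolution with that PSF.
import numpy as np
from scipy.signal import fftconvolve

def psf_from_wavefront(wavefront_um, pupil_mask, wavelength_um=0.55, pad=4):
    """Incoherent PSF from a wavefront error map sampled on the same grid as the pupil."""
    phase = 2.0 * np.pi * wavefront_um / wavelength_um
    pupil = pupil_mask * np.exp(1j * phase)
    n = pupil.shape[0] * pad                          # zero-pad for finer PSF sampling
    field = np.fft.fftshift(np.fft.fft2(pupil, s=(n, n)))
    psf = np.abs(field) ** 2
    return psf / psf.sum()

def blurred_image(perfect_img, psf):
    return fftconvolve(perfect_img, psf, mode="same")
```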

Tian, Yibin; Shieh, Kevin; Wildsoet, Christine F.

2007-02-01

123

Large distance 3D imaging of hidden objects  

NASA Astrophysics Data System (ADS)

Imaging systems in millimeter waves are required for applications in medicine, communications, homeland security, and space technology. This is because there is no known ionization hazard for biological tissue, and atmospheric attenuation in this range of the spectrum is low compared to that of infrared and optical rays. The lack of an inexpensive room temperature detector makes it difficult to provide a suitable real-time implementation for the above applications. A 3D MMW imaging system based on chirp radar was studied previously using a scanning imaging system of a single detector. The system presented here proposes to employ a chirp radar method with Glow Discharge Detector (GDD) Focal Plane Array (FPA of plasma based detectors) using heterodyne detection. The intensity at each pixel in the GDD FPA yields the usual 2D image. The value of the I-F frequency yields the range information at each pixel. This will enable 3D MMW imaging. In this work we experimentally demonstrate the feasibility of implementing an imaging system based on radar principles and an FPA of inexpensive detectors. This imaging system is shown to be capable of imaging objects from distances of at least 10 meters.
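
The range recovery implied by "the value of the I-F frequency yields the range information" is the standard chirp (FMCW) relation shown below; the symbols are the usual textbook ones and are not taken from the paper.

```latex
% FMCW range from the intermediate (beat) frequency: with chirp bandwidth B, sweep
% duration T, measured beat frequency f_IF and propagation speed c,
R \;=\; \frac{c \, f_{\mathrm{IF}} \, T}{2B} ,
% so reading f_IF at every pixel of the GDD focal plane array turns the 2D intensity
% image into a 3D image.
```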

Rozban, Daniel; Aharon Akram, Avihai; Kopeika, N. S.; Abramovich, A.; Levanon, Assaf

2014-06-01

124

Texture blending on 3D models using casual images  

NASA Astrophysics Data System (ADS)

In this paper, a method for constructing a photorealistic textured model using a 3D structured-light digitizer is presented. Our method acquires range images and texture images around the object; the range images are registered and integrated to construct a geometric model of the object. The system is calibrated and the poses of the texture camera are determined so that the relationship between the texture and the geometric model is established. After that, a global optimization is applied to assign compatible textures to adjacent surfaces, followed by a leveling procedure to remove artifacts due to varying lighting, the approximate geometric model, and so on. Lastly, we demonstrate the effect of our method by constructing a model of a real-world object.

Liu, Xingming; Liu, Xiaoli; Li, Ameng; Liu, Junyao; Wang, Huijing

2013-12-01

125

Advanced 3D imaging lidar concepts for long range sensing  

NASA Astrophysics Data System (ADS)

Recent developments in 3D imaging lidar are presented. Long range 3D imaging using photon counting is now a possibility, offering a low-cost approach to integrated remote sensing with step changing advantages in size, weight and power compared to conventional analogue active imaging technology. We report results using a Geiger-mode array for time-of-flight, single photon counting lidar for depth profiling and determination of the shape and size of tree canopies and distributed surface reflections at a range of 9km, with 4 μJ pulses with a frame rate of 100kHz using a low-cost fibre laser operating at a wavelength of λ=1.5 μm. The range resolution is less than 4cm providing very high depth resolution for target identification. This specification opens up several additional functionalities for advanced lidar, for example: absolute rangefinding and depth profiling for long range identification, optical communications, turbulence sensing and time-of-flight spectroscopy. Future concepts for 3D time-of-flight polarimetric and multispectral imaging lidar, with optical communications in a single integrated system are also proposed.

Gordon, K. J.; Hiskett, P. A.; Lamb, R. A.

2014-06-01

126

3D radio reflection imaging of asteroid interiors  

NASA Astrophysics Data System (ADS)

Imaging the interior structure of comets and asteroids in 3D holds the key for understanding early Solar System and planetary processes, aids mitigation of collisional hazards, and enables future space investigation. 3D wavefield extrapolation of time-domain finite differences, which is referred to as reverse-time migration (RTM), is a tool to provide high-quality images of the complex 3D-internal structure of the target. Instead of a type of acquisition that separately deploys one orbiting and one landing satellite, I discuss dual orbiter systems, where transmitter and receiver satellites orbit around the asteroid target at different speeds. The dual orbiter acquisition can provide multi-offset data that improve the image quality by illuminating the target from different directions and by attenuating coherent noise caused by wavefield multi-pathing. Shot-record imaging requires dense and evenly distributed receiver coordinates to fully image the interior structure at every source-location. I illustrate a 3D imaging method on a complex asteroid model based on the asteroid 433 Eros using realistic data generated from different acquisition designs for the dual orbiter system. In realistic 3D acquisition, the distribution and number of receivers are limited by the acquisition time, revolving speed and direction of both the transmitter and receiver satellites, and the rotation of the asteroid. The migrated image quality depends on different acquisition parameters (i.e., source frequency bandwidth, acquisition time, the spinning rate of the asteroid) and the intrinsic asteroid medium parameters (i.e., the asteroid attenuation factor and an accurate velocity model). A critical element in reconstructing the interior of an asteroid is to have different acquisition designs, where the transmitter and receivers revolve quasi-continuously in different inclinational and latitudinal directions and offer evenly distributed receiver coordinates in the shot-record domain. Among different acquisition designs, the simplest orbit (where the transmitter satellite is fixed in the longitudinal plane and the receiver plane gradually shifts in the latitudinal direction around the asteroid target) offers the best data coverage and requires the least energy to shift the satellite. To obtain reasonable coverage for successfully imaging the asteroid interior, the selected acquisition takes up to eight months. However, this mission is attainable because the propulsion requirements are small due to the slow (< 10 cm/s) orbital velocities around a kilometer-sized asteroid.

Ittharat, Detchai

127

Integration of real-time 3D image acquisition and multiview 3D display  

NASA Astrophysics Data System (ADS)

Seamless integration of 3D acquisition and 3D display systems offers enhanced experience in 3D visualization of the real world objects or scenes. The vivid representation of captured 3D objects displayed on a glasses-free 3D display screen could bring the realistic viewing experience to viewers as if they are viewing real-world scene. Although the technologies in 3D acquisition and 3D display have advanced rapidly in recent years, effort is lacking in studying the seamless integration of these two different aspects of 3D technologies. In this paper, we describe our recent progress on integrating a light-field 3D acquisition system and an autostereoscopic multiview 3D display for real-time light field capture and display. This paper focuses on both the architecture design and the implementation of the hardware and the software of this integrated 3D system. A prototype of the integrated 3D system is built to demonstrate the real-time 3D acquisition and 3D display capability of our proposed system.

Zhang, Zhaoxing; Geng, Zheng; Li, Tuotuo; Li, Wei; Wang, Jingyi; Liu, Yongchun

2014-03-01

128

Method for extracting the aorta from 3D CT images  

NASA Astrophysics Data System (ADS)

Bronchoscopic biopsy of the central-chest lymph nodes is vital in the staging of lung cancer. Three-dimensional multi-detector CT (MDCT) images provide vivid anatomical detail for planning bronchoscopy. Unfortunately, many lymph nodes are situated close to the aorta, and an inadvertent needle biopsy could puncture the aorta, causing serious harm. As an eventual aid for more complete planning of lymph-node biopsy, it is important to define the aorta. This paper proposes a method for extracting the aorta from a 3D MDCT chest image. The method has two main phases: (1) Off-line Model Construction, which provides a set of training cases for fitting new images, and (2) On-Line Aorta Construction, which is used for new incoming 3D MDCT images. Off-Line Model Construction is done once using several representative human MDCT images and consists of the following steps: construct a likelihood image, select control points of the medial axis of the aortic arch, and recompute the control points to obtain a constant-interval medial-axis model. On-Line Aorta Construction consists of the following operations: construct a likelihood image, perform global fitting of the precomputed models to the current case's likelihood image to find the best fitting model, perform local fitting to adjust the medial axis to local data variations, and employ a region recovery method to arrive at the complete constructed 3D aorta. The region recovery method consists of two steps: model-based and region-growing steps. This region growing method can recover regions outside the model coverage and non-circular tube structures. In our experiments, we used three models and achieved satisfactory results on twelve of thirteen test cases.

Taeprasartsit, Pinyo; Higgins, William E.

2007-03-01

129

New method of retinal vessels diameter evaluation in images obtained during retinal tomography  

NASA Astrophysics Data System (ADS)

PURPOSE: To assess accuracy and reproducibility of retinal vessel caliber measurement in Heidelberg retina tomographer (HRT II) images by a newly developed method. METHODS: 76 images of the optic nerve head were obtained from 76 eyes. Eight vessels' diameters were measured in each case in the area of 0.5 to 1.0 disc diameter from the optic disc margin. The window for "interactive measurements" was used to determine three-dimensional coordinates (x,y,z) of each vessel diameter. The diameter of each vessel was calculated according to the Pythagorean Theorem (the value of the "z" coordinate remained unchanged). RESULTS: The diameter of retinal arterioles varied from 55.0 to 106.5 μm. The diameter of retinal venules ranged from 68.9 to 140.1 μm. The standard deviation value changed from 0.6 to 16 μm. The arteriole/venule ratio mean value was 0.702+/-0.039. CONCLUSIONS: Measurement of retinal vessel diameter in images obtained during retinal tomography is exact and informative. The described method is a unique way of measuring retinal vessel caliber in absolute values.
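
The measurement itself reduces to a Euclidean distance between the two edge points picked on the vessel; a minimal sketch is below, assuming (per the note in the abstract) that the z coordinate is equal at both points so only x and y contribute. The function name and scaling parameter are illustrative.

```python
# Vessel diameter as the in-plane distance between two manually picked edge points.
import math

def vessel_diameter_um(p1, p2, um_per_unit=1.0):
    """p1, p2: (x, y, z) coordinates of opposite vessel edges from the HRT II window."""
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    return math.hypot(dx, dy) * um_per_unit
```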

Astakhov, Yury S.; Akopov, Evgeny L.

2005-04-01

130

Optimization of the open-loop liquid crystal adaptive optics retinal imaging system  

NASA Astrophysics Data System (ADS)

An open-loop adaptive optics (AO) system for retinal imaging was constructed using a liquid crystal spatial light modulator (LC-SLM) as the wavefront compensator. Due to the dispersion of the LC-SLM, there was only one illumination source for both aberration detection and retinal imaging in this system. To increase the field of view (FOV) for retinal imaging, a modified mechanical shutter was integrated into the illumination channel to control the size of the illumination spot on the fundus. The AO loop was operated in a pulsing mode, and the fundus was illuminated twice by two laser impulses in a single AO correction loop. As a result, the FOV for retinal imaging was increased to 1.7-deg without compromising the aberration detection accuracy. The correction precision of the open-loop AO system was evaluated in a closed-loop configuration; the residual error is approximately 0.0909λ (root-mean-square, RMS), and the Strehl ratio reaches 0.7217. Two subjects with differing degrees of myopia (-3D and -5D) were tested. High-resolution images of capillaries and photoreceptors were obtained.
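
For context, the quoted residual wavefront error and Strehl ratio are mutually consistent under the extended Maréchal approximation, a standard rule of thumb; this cross-check is ours, not the paper's.

```latex
% Extended Marechal approximation relating RMS wavefront error (in waves) to the
% Strehl ratio; with the quoted residual of 0.0909 lambda RMS it reproduces the
% quoted Strehl ratio of about 0.72.
S \;\approx\; \exp\!\left[ -\left( 2\pi\,\sigma_{\mathrm{RMS}} \right)^{2} \right]
  \;=\; \exp\!\left[ -\left( 2\pi \times 0.0909 \right)^{2} \right] \;\approx\; 0.72
```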

Kong, Ningning; Li, Chao; Xia, Mingliang; Li, Dayu; Qi, Yue; Xuan, Li

2012-02-01

131

Comparison of 3D Set Partitioning Methods in Hyperspectral Image Compression Featuring an Improved 3D-SPIHT  

Microsoft Academic Search

Summary form only given. Hyperspectral images were generated through the collection of hundreds of narrow and contiguously spaced spectral bands of data producing a highly correlated long sequence of images. An investigation and comparison was made on the performance of several three-dimensional embedded wavelet algorithms for compression of hyperspectral images. These algorithms include 3D-SPIHT, AT-3DSPIHT, 3D-SPECK (three-dimensional set partitioned embedded

Xiaoli Tang; Sungdae Cho; William A. Pearlman

2003-01-01

132

Validation of 3D ultrasound: CT registration of prostate images  

NASA Astrophysics Data System (ADS)

Worldwide, 20% of men are expected to develop prostate cancer at some point in their lives. In addition to surgery, the traditional treatment for cancer, radiation treatment is becoming more popular. The most interesting radiation treatment for prostate cancer is the brachytherapy procedure. For the safe delivery of that therapy, imaging is critically important. In cases where a CT device is available, a combination of the information provided by CT and 3D ultrasound (U/S) images offers advantages in recognizing the borders of the lesion and delineating the region of treatment. For these applications the CT and U/S scans should be registered and fused into a multi-modal dataset. The purpose of the present development is a registration tool (registration, fusion and validation) for available CT volumes with 3D U/S images of the same anatomical region, i.e. the prostate. The combination of these two imaging modalities interlinks the advantages of high-resolution CT imaging and low-cost real-time U/S imaging and offers a multi-modality imaging environment for further target and anatomy delineation. This tool has been integrated into the visualization software "InViVo", which has been developed over several years at Fraunhofer IGD in Darmstadt.

Firle, Evelyn A.; Wesarg, Stefan; Karangelis, Grigoris; Dold, Christian

2003-05-01

133

An image reconstruction algorithm for 3-d electrical impedance mammography.  

PubMed

The Sussex MK4 electrical impedance mammography system is especially designed for 3-D breast screening. It aims to diagnose breast cancer at an early stage when it is most treatable. Planar electrodes are employed in this system. The challenge with planar electrodes is the inaccuracy and poor sensitivity in the vertical direction for 3-D imaging. An enhanced image reconstruction algorithm using a duo-mesh method is proposed to improve the vertical accuracy and sensitivity. The novel part of the enhanced image reconstruction algorithm is the correction term. To evaluate the new algorithm, an image processing based error analysis method is presented, which not only can precisely assess the error of the reconstructed image but also locate the center and outline the shape of the objects of interest. Although the enhanced image reconstruction algorithm and the image processing based error analysis method are designed for the Sussex MK4 system, they are applicable to all electrical impedance tomography systems, regardless of the hardware design. To validate the enhanced algorithm, performance results from simulations, phantoms and patients are presented. PMID:25014954

Zhang, Xiaolin; Wang, Wei; Sze, Gerald; Barber, David; Chatwin, Chris

2014-12-01

134

Adaptive optics with a micromachined membrane deformable mirror for high resolution retinal imaging  

NASA Astrophysics Data System (ADS)

The resolution of conventional retinal imaging technologies is limited by the optics of the human eye. In this dissertation, the aberrations of the eye and their compensation techniques are investigated for the purpose of high-resolution retinal imaging. Both computer modeling and adaptive optics experiments with the novel micromachined membrane deformable mirror (MMDM) device are performed. First, a new aspherical computer eye model is developed to study the aberrations of the eye and their effects on retinal imaging. The aberrations and point-spread functions of the eye are calculated and found to be pupil size dependent and space-variant. The aberration compensation is modeled using customized lens design techniques showing that high-resolution retinal images can be obtained with a dilated pupil through aberration compensation. Due to the space-variant nature and the individual variations of the eye aberrations, adaptive optics techniques are necessary for dynamic aberration compensation. Thus, an experimental adaptive optics retinal imaging system, based on a novel, low- cost, and compact MMDM, is constructed to investigate adaptive optics techniques for eye aberration compensation, where the aberrations are measured using a Hartmann-Shack wavefront sensor. Due to the difficulties in controlling the new MMDM device, a novel control algorithm is developed to generate the desired wavefront for aberration compensation of the eye. The MMDM is characterized and a closed-loop system algorithm is developed for eye aberration compensation in real-time. The system is tested with an artificial eye, showing that it can effectively compensate for low-order and to a certain extent for high-order aberrations of the eye. A diffraction-limited resolution is achieved when the aberrations are within the working range of the MMDM. Aberration compensation and retinal imaging experiments are also performed with real eyes, showing an improved imaging resolution. In addition, a preliminary investigation into a complementary adaptive optics approach of using image deconvolution techniques is also conducted to improve retinal image resolution when the aberrations of the eye can not be completely compensated for by the MMDM. Future research can be conducted based on this dissertation to obtain high-resolution 3-D retinal imaging.

Zhu, Lijun

135

Automated Recognition of 3D Features in GPIR Images  

NASA Technical Reports Server (NTRS)

A method of automated recognition of three-dimensional (3D) features in images generated by ground-penetrating imaging radar (GPIR) is undergoing development. GPIR 3D images can be analyzed to detect and identify such subsurface features as pipes and other utility conduits. Until now, much of the analysis of GPIR images has been performed manually by expert operators who must visually identify and track each feature. The present method is intended to satisfy a need for more efficient and accurate analysis by means of algorithms that can automatically identify and track subsurface features, with minimal supervision by human operators. In this method, data from multiple sources (for example, data on different features extracted by different algorithms) are fused together for identifying subsurface objects. The algorithms of this method can be classified in several different ways. In one classification, the algorithms fall into three classes: (1) image-processing algorithms, (2) feature-extraction algorithms, and (3) a multiaxis data-fusion/pattern-recognition algorithm that includes a combination of machine-learning, pattern-recognition, and object-linking algorithms. The image-processing class includes preprocessing algorithms for reducing noise and enhancing target features for pattern recognition. The feature-extraction algorithms operate on preprocessed data to extract such specific features in images as two-dimensional (2D) slices of a pipe. Then the multiaxis data-fusion/pattern-recognition algorithm identifies, classifies, and reconstructs 3D objects from the extracted features. In this process, multiple 2D features extracted by use of different algorithms and representing views along different directions are used to identify and reconstruct 3D objects. In object linking, which is an essential part of this process, features identified in successive 2D slices and located within a threshold radius of identical features in adjacent slices are linked in a directed-graph data structure. Relative to past approaches, this multiaxis approach offers the advantages of more reliable detections, better discrimination of objects, and provision of redundant information, which can be helpful in filling gaps in feature recognition by one of the component algorithms. The image-processing class also includes postprocessing algorithms that enhance identified features to prepare them for further scrutiny by human analysts (see figure). Enhancement of images as a postprocessing step is a significant departure from traditional practice, in which enhancement of images is a preprocessing step.
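
The object-linking step described above (linking features in successive 2D slices that lie within a threshold radius of each other into a directed graph) can be sketched as follows; the data structures and function name are illustrative assumptions.

```python
# Link 2D feature centroids in adjacent depth slices into a directed graph.
import math
from collections import defaultdict

def link_features(slices, radius):
    """slices: list (one entry per depth slice) of lists of (x, y) feature centroids.
    Returns directed edges (k, i) -> (k+1, j) that form candidate 3D objects."""
    graph = defaultdict(list)
    for k in range(len(slices) - 1):
        for i, (x0, y0) in enumerate(slices[k]):
            for j, (x1, y1) in enumerate(slices[k + 1]):
                if math.hypot(x1 - x0, y1 - y0) <= radius:
                    graph[(k, i)].append((k + 1, j))
    return graph
```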

Park, Han; Stough, Timothy; Fijany, Amir

2007-01-01

136

Getting in touch--3D printing in forensic imaging.  

PubMed

With the increasing use of medical imaging in forensics, as well as the technological advances in rapid prototyping, we suggest combining these techniques to generate displays of forensic findings. We used computed tomography (CT), CT angiography, magnetic resonance imaging (MRI) and surface scanning with photogrammetry in conjunction with segmentation techniques to generate 3D polygon meshes. Based on these data sets, a 3D printer created colored models of the anatomical structures. Using this technique, we could create models of bone fractures, vessels, cardiac infarctions, ruptured organs as well as bitemark wounds. The final models are anatomically accurate, fully colored representations of bones, vessels and soft tissue, and they demonstrate radiologically visible pathologies. The models are more easily understood by laypersons than volume rendering or 2D reconstructions. Therefore, they are suitable for presentations in courtrooms and for educational purposes. PMID:21602004

Ebert, Lars Chr; Thali, Michael J; Ross, Steffen

2011-09-10

137

In vivo fluorescence imaging of primate retinal ganglion cells and retinal pigment epithelial cells  

NASA Astrophysics Data System (ADS)

The ability to resolve single cells noninvasively in the living retina has important applications for the study of normal retina, diseased retina, and the efficacy of therapies for retinal disease. We describe a new instrument for high-resolution, in vivo imaging of the mammalian retina that combines the benefits of confocal detection, adaptive optics, multispectral, and fluorescence imaging. The instrument is capable of imaging single ganglion cells and their axons through retrograde transport in ganglion cells of fluorescent dyes injected into the monkey lateral geniculate nucleus (LGN). In addition, we demonstrate a method involving simultaneous imaging in two spectral bands that allows the integration of very weak signals across many frames despite inter-frame movement of the eye. With this method, we are also able to resolve the smallest retinal capillaries in fluorescein angiography and the mosaic of retinal pigment epithelium (RPE) cells with lipofuscin autofluorescence.

Gray, Daniel C.; Merigan, William; Wolfing, Jessica I.; Gee, Bernard P.; Porter, Jason; Dubra, Alfredo; Twietmeyer, Ted H.; Ahamd, Kamran; Tumbar, Remy; Reinholz, Fred; Williams, David R.

2006-08-01

138

A 3D imaging radar for small unmanned airplanes - ARTINO  

Microsoft Academic Search

In this paper a 3D imaging radar concept, suitable for an unmanned aerial vehicle (UAV), and its status is presented. The concept combines a real aperture, realized by a linear array of nadir pointing antennas, and a synthetic aperture, which is spanned by the moving airplane. The radar front-end uses frequency modulated continuous wave (FMCW) technique with direct down-conversion in

M. Weiß; J. H. G. Ender

2005-01-01

139

Radiometric Quality Evaluation of INSAT-3D Imager Data  

NASA Astrophysics Data System (ADS)

INSAT-3D is an advanced meteorological satellite of ISRO which acquires imagery in optical and infra-red (IR) channels for study of weather dynamics in Indian sub-continent region. In this paper, methodology of radiometric quality evaluation for Level-1 products of Imager, one of the payloads onboard INSAT-3D, is described. Firstly, overall visual quality of scene in terms of dynamic range, edge sharpness or modulation transfer function (MTF), presence of striping and other image artefacts is computed. Uniform targets in Desert and Sea region are identified for which detailed radiometric performance evaluation for IR channels is carried out. Mean brightness temperature (BT) of targets is computed and validated with independently generated radiometric references. Further, diurnal/seasonal trends in target BT values and radiometric uncertainty or sensor noise are studied. Results of radiometric quality evaluation over duration of eight months (January to August 2014) and comparison of radiometric consistency pre/post yaw flip of satellite are presented. Radiometric Analysis indicates that INSAT-3D images have high contrast (MTF > 0.2) and low striping effects. A bias of <4K is observed in the brightness temperature values of TIR-1 channel measured during January-August 2014 indicating consistent radiometric calibration. Diurnal and seasonal analysis shows that Noise equivalent differential temperature (NEdT) for IR channels is consistent and well within specifications.

Prakash, S.; Jindal, D.; Badal, N.; Kartikeyan, B.; Gopala Krishna, B.

2014-11-01

140

Automated Identification of Fiducial Points on 3D Torso Images  

PubMed Central

Breast reconstruction is an important part of the breast cancer treatment process for many women. Recently, 2D and 3D images have been used by plastic surgeons for evaluating surgical outcomes. Distances between different fiducial points are frequently used as quantitative measures for characterizing breast morphology. Fiducial points can be directly marked on subjects for direct anthropometry, or can be manually marked on images. This paper introduces novel algorithms to automate the identification of fiducial points in 3D images. Automating the process will make measurements of breast morphology more reliable, reducing the inter- and intra-observer bias. Algorithms to identify three fiducial points, the nipples, sternal notch, and umbilicus, are described. The algorithms used for localization of these fiducial points are formulated using a combination of surface curvature and 2D color information. Comparison of the 3D co-ordinates of automatically detected fiducial points and those identified manually, and geodesic distances between the fiducial points are used to validate algorithm performance. The algorithms reliably identified the location of all three of the fiducial points. We dedicate this article to our late colleague and friend, Dr. Elisabeth K. Beahm. Elisabeth was both a talented plastic surgeon and physician-scientist; we deeply miss her insight and her fellowship. PMID:25288903

Kawale, Manas M; Reece, Gregory P; Crosby, Melissa A; Beahm, Elisabeth K; Fingeret, Michelle C; Markey, Mia K; Merchant, Fatima A

2013-01-01

141

Polarization and retinal image quality estimates in the human eye  

Microsoft Academic Search

We have previously studied how polarization affects the double-pass estimates of the retinal image quality by using an imaging polarimeter (Opt. Lett. 24, 64 (1999)). A series of 16 images for independent combinations of polarization states in the polarimeter were recorded to obtain the spatially resolved Mueller matrices of the eye. From these matrices, double-pass images of a point source

Juan M. Bueno; Pablo Artal

2001-01-01

142

3D imaging of soil pore network: two different approaches  

NASA Astrophysics Data System (ADS)

Pore geometry imaging and its quantitative description is a key factor for advances in the knowledge of physical, chemical and biological soil processes. For many years photos from flattened surfaces of undisturbed soil samples impregnated with fluorescent resin and from soil thin sections under microscope have been the only way available for exploring pore architecture at different scales. Earlier 3D representations of the internal structure of the soil based on non-destructive methods have been obtained using medical tomographic systems (NMR and X-ray CT). However, images provided using such equipment show strong limitations in terms of spatial resolution. In the last decade very good results have then been obtained using imaging from very expensive systems based on synchrotron radiation. More recently, X-ray Micro-Tomography has become the most widely applied, being the technique with the best compromise between costs, resolution and size of the images. Conversely, the conceptually simpler but destructive method of "serial sectioning" has been progressively neglected because of technical problems in sample preparation and the time needed to obtain an adequate number of serial sections for correct 3D reconstruction of soil pore geometry. In this work a comparison between the two methods above has been carried out in order to define advantages and shortcomings and to point out their different potential. A cylindrical undisturbed soil sample 6.5cm in diameter and 6.5cm in height, from an Ap horizon of an alluvial soil showing vertic characteristics, has been reconstructed using both a desktop X-ray micro-tomograph Skyscan 1172 and the new automatic serial sectioning system SSAT (Sequential Section Automatic Tomography) set up at CNR ISAFOM in Ercolano (Italy) with the aim of overcoming most of the typical limitations of such a technique. The best image resolution was 7.5 μm per voxel using X-ray micro-CT, while 20 μm was the best value using the serial sectioning system, albeit on less noisy images. The SSAT system showed more flexibility in terms of sample size, although both techniques allowed investigation of REVs (Representative Elementary Volumes) for most of the macroscopic properties describing soil processes. Moreover, the undoubted advantages of non-destructiveness and easy sample preparation for the Skyscan 1172 are balanced by lower overall costs for the SSAT and its potential for producing 3D representations of soil features beyond the simple solid/porous phases. Both approaches allow exactly the same image analysis procedures to be used on the reconstructed 3D images, although they require some specific pre-processing treatments.

Matrecano, M.; Di Matteo, B.; Mele, G.; Terribile, F.

2009-04-01

143

3D sound and 3D image interactions: a review of audio-visual depth perception  

NASA Astrophysics Data System (ADS)

There has been much research concerning visual depth perception in 3D stereoscopic displays and, to a lesser extent, auditory depth perception in 3D spatial sound systems. With 3D sound systems now available in a number of different forms, there is increasing interest in the integration of 3D sound systems with 3D displays. It therefore seems timely to review key concepts and results concerning depth perception in such display systems. We first present overviews of both visual and auditory depth perception, before focussing on cross-modal effects in audio-visual depth perception, which may be of direct interest to display and content designers.

Berry, Jonathan S.; Roberts, David A. T.; Holliman, Nicolas S.

2014-02-01

144

Automatic Class-Specific 3D Reconstruction from a Single Image  

E-print Network

Our goal is to automatically reconstruct 3D objects from a single image, by using prior 3D shape models of classes. The shape models, defined as a collection of oriented primitive shapes centered at fixed 3D positions, can ...

Lozano-Perez, Tomas

2009-02-18

145

Retinal image restoration by means of blind deconvolution.  

PubMed

Retinal imaging plays a key role in the diagnosis and management of ophthalmologic disorders, such as diabetic retinopathy, glaucoma, and age-related macular degeneration. Because of the acquisition process, retinal images often suffer from blurring and uneven illumination. This problem may seriously affect disease diagnosis and progression assessment. Here we present a method for color retinal image restoration by means of multichannel blind deconvolution. The method is applied to a pair of retinal images acquired within a lapse of time, ranging from several minutes to months. It consists of a series of preprocessing steps to adjust the images so they comply with the considered degradation model, followed by the estimation of the point-spread function and, ultimately, image deconvolution. The preprocessing is mainly composed of image registration, uneven illumination compensation, and segmentation of areas with structural changes. In addition, we have developed a procedure for the detection and visualization of structural changes. This enables the identification of subtle developments in the retina not caused by variation in illumination or blur. The method was tested on synthetic and real images. Encouraging experimental results show that the method is capable of significant restoration of degraded retinal images. PMID:22112121

Marrugo, Andrés G; Sorel, Michal; Sroubek, Filip; Millán, María S

2011-11-01

146

3D set partitioned embedded zero block coding algorithm for hyperspectral image compression  

NASA Astrophysics Data System (ADS)

In this paper, a three-dimensional Set Partitioned Embedded Zero Block Coding (3D SPEZBC) algorithm for hyperspectral image compression is proposed, which is motivated by the EZBC and SPECK algorithms. Experimental results show that the 3D SPEZBC algorithm obviously outperforms 3D SPECK, 3D SPIHT and AT-3D SPIHT, and is slightly better than JPEG2000-MC in the compression performances. Moreover, the 3D SPEZBC algorithm can save considerable memory requirement in comparison with 3D EZBC.

Hou, Ying; Liu, Guizhong

2007-11-01

147

3D Lunar Terrain Reconstruction from Apollo Images  

NASA Technical Reports Server (NTRS)

Generating accurate three dimensional planetary models is becoming increasingly important as NASA plans manned missions to return to the Moon in the next decade. This paper describes a 3D surface reconstruction system called the Ames Stereo Pipeline that is designed to produce such models automatically by processing orbital stereo imagery. We discuss two important core aspects of this system: (1) refinement of satellite station positions and pose estimates through least squares bundle adjustment; and (2) a stochastic plane fitting algorithm that generalizes the Lucas-Kanade method for optimal matching between stereo pair images. These techniques allow us to automatically produce seamless, highly accurate digital elevation models from multiple stereo image pairs while significantly reducing the influence of image noise. Our technique is demonstrated on a set of 71 high resolution scanned images from the Apollo 15 mission.

Broxton, Michael J.; Nefian, Ara V.; Moratto, Zachary; Kim, Taemin; Lundy, Michael; Segal, Alkeksandr V.

2009-01-01

148

Quantitative 3D Optical Imaging: Applications in Dosimetry and Biophysics  

NASA Astrophysics Data System (ADS)

Optical-CT has been shown to be a potentially useful imaging tool for the two very different spheres of biologists and radiation therapy physicists, but it has yet to live up to that potential. In radiation therapy, researchers have used optical-CT for the readout of 3D dosimeters, but it is yet to be a clinically relevant tool as the technology is too slow to be considered practical. Biologists have used the technique for structural imaging, but have struggled with emission tomography as the reality of photon attenuation for both excitation and emission has made the images quantitatively irrelevant. Dosimetry. The DLOS (Duke Large field of view Optical-CT Scanner) was designed and constructed to make 3D dosimetry utilizing optical-CT a fast and practical tool while maintaining the accuracy of readout of the previous, slower readout technologies. Upon construction/optimization/implementation of several components including a diffuser, band pass filter, registration mount & fluid filtration system, the dosimetry system provides high quality data comparable to or exceeding that of commercial products. In addition, a stray light correction algorithm was tested and implemented. The DLOS in combination with the 3D dosimeter it was designed for, PRESAGE(TM), then underwent rigorous commissioning and benchmarking tests validating its performance against gold standard data including a set of 6 irradiations. DLOS commissioning tests resulted in sub-mm isotropic spatial resolution (MTF >0.5 for frequencies of 1.5 lp/mm) and a dynamic range of 60dB. Flood field uniformity was 10% and stable after 45 minutes. Stray light proved to be small, due to telecentricity, but even the residual can be removed through deconvolution. Benchmarking tests showed the mean 3D passing gamma rate (3%, 3mm, 5% dose threshold) over the 6 benchmark data sets was 97.3% +/- 0.6% (range 96%-98%) for scans totaling 10 minutes, indicating excellent ability to perform 3D dosimetry while improving the speed of readout. Noise was low at 2% for 2mm reconstructions. The DLOS/PRESAGE(TM) benchmark tests show consistently excellent performance, with very good agreement to simple known distributions. The telecentric design was critical to enabling fast (~15mins) imaging with minimal stray light artifacts. The system produces accurate isotropic 2mm3 dose data over clinical volumes (e.g. 16cm diameter phantoms, 12 cm height), and represents a uniquely useful and versatile new tool for commissioning complex radiotherapy techniques. The system also has wide versatility, and has successfully been used in preliminary tests with protons and with kV irradiations. Biology. Attenuation corrections for optical-emission-CT were done by modeling physical parameters in the imaging setup within the framework of an ordered subset expectation maximization (OSEM) iterative reconstruction algorithm. This process has a well documented history in single photon emission computed tomography (SPECT), but is inherently simpler due to the lack of excitation photons to account for. Excitation source strength distribution, excitation and emission attenuation were modeled. The accuracy of the correction was investigated by imaging phantoms containing known distributions of attenuation and fluorophores.
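
The "3D passing gamma rate (3%, 3mm)" quoted above refers to the standard gamma-index comparison between an evaluated and a reference dose distribution; its usual definition is given below for context and is not quoted from the dissertation.

```latex
% Gamma index between an evaluated dose D_e and a reference dose D_r, with distance
% criterion Delta d (3 mm here) and dose criterion Delta D (3% here); a reference
% point r_r passes if gamma(r_r) <= 1.
\Gamma(\mathbf{r}_e, \mathbf{r}_r) =
  \sqrt{ \frac{\lVert \mathbf{r}_e - \mathbf{r}_r \rVert^{2}}{\Delta d^{2}}
       + \frac{\bigl( D_e(\mathbf{r}_e) - D_r(\mathbf{r}_r) \bigr)^{2}}{\Delta D^{2}} },
\qquad
\gamma(\mathbf{r}_r) = \min_{\mathbf{r}_e} \Gamma(\mathbf{r}_e, \mathbf{r}_r)
```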
The correction was validated on a manufactured phantom designed to give uniform emission in a central cuboidal region and later applied to a cleared mouse brain with GFP (green fluorescent protein) labeled vasculature and a cleared 4T1 xenograft flank tumor with constitutive RFP (red fluorescent protein). Reconstructions were compared to corresponding slices imaged with a fluorescent dissection microscope. Significant optical-ECT attenuation artifacts were observed in the uncorrected phantom images and appeared up to 80% less intense than the verification image in the central region. The corrected phantom images showed excellent agreement with the verification image with only slight variations. The corrected tissue sample reconstructions showed general agreement with the verification images. Comp

Thomas, Andrew Stephen

149

Feature detection on 3D images of dental imprints  

NASA Astrophysics Data System (ADS)

A computer vision approach for the extraction of feature points on 3D images of dental imprints is presented. The position of feature points are needed for the measurement of a set of parameters for automatic diagnosis of malocclusion problems in orthodontics. The system for the acquisition of the 3D profile of the imprint, the procedure for the detection of the interstices between teeth, and the approach for the identification of the type of tooth are described, as well as the algorithm for the reconstruction of the surface of each type of tooth. A new approach for the detection of feature points, called the watershed algorithm, is described in detail. The algorithm is a two-stage procedure which tracks the position of local minima at four different scales and produces a final map of the position of the minima. Experimental results of the application of the watershed algorithm on actual 3D images of dental imprints are presented for molars, premolars and canines. The segmentation approach for the analysis of the shape of incisors is also described in detail.
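
The multi-scale minimum tracking at the heart of the watershed-style detector described above can be approximated as in the sketch below: local minima of the height map are found after smoothing at several scales and only those that persist across scales are kept. The scales, persistence test and function names are illustrative assumptions, not the authors' exact procedure.

```python
# Track local minima of a 2.5D dental-imprint height map across several smoothing scales.
import numpy as np
from scipy import ndimage

def local_minima(height, sigma):
    smooth = ndimage.gaussian_filter(height, sigma)
    return smooth == ndimage.minimum_filter(smooth, size=3)

def persistent_minima(height, sigmas=(1, 2, 4, 8)):
    masks = [local_minima(height, s) for s in sigmas]
    agree = masks[0].copy()
    for m in masks[1:]:
        # dilate coarser-scale minima slightly so small drifts between scales still match
        agree &= ndimage.binary_dilation(m, iterations=2)
    return np.argwhere(agree)            # (row, col) positions of stable feature minima
```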

Mokhtari, Marielle; Laurendeau, Denis

1994-09-01

150

Image to Point Cloud Method of 3D-MODELING  

NASA Astrophysics Data System (ADS)

This article describes a method of constructing 3D models of objects (buildings, monuments) based on digital images and a point cloud obtained by a terrestrial laser scanner. The first step is the automated determination of the exterior orientation parameters of the digital image. To perform this operation, we have to find corresponding points between the image and the point cloud. Before searching for corresponding points, a quasi-image of the point cloud is generated. After that, the SIFT algorithm is applied to the quasi-image and the real image. The SIFT algorithm allows corresponding points to be found. The exterior orientation parameters of the image are calculated from the corresponding points. The second step is the construction of the vector object model. Vectorization is performed by a PC operator in an interactive mode using a single image. The spatial coordinates of the model are calculated automatically from the point cloud. In addition, automatic edge detection with interactive editing is available. Edge detection is performed on the point cloud and on the image, with subsequent identification of correct edges. Experimental studies of the method have demonstrated its efficiency in the case of building facade modeling.
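
The correspondence step (SIFT on the quasi-image and the real image, then descriptor matching) could be sketched with OpenCV as below; image loading and the subsequent resection that actually yields the exterior orientation parameters are omitted, and the ratio-test threshold is an illustrative assumption.

```python
# Match SIFT keypoints between a quasi-image rendered from the point cloud and a photo.
import cv2

def sift_correspondences(quasi_img_gray, photo_gray, ratio=0.75):
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(quasi_img_gray, None)
    kp2, des2 = sift.detectAndCompute(photo_gray, None)
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    pairs = []
    for m, n in matcher.knnMatch(des1, des2, k=2):   # Lowe's ratio test
        if m.distance < ratio * n.distance:
            pairs.append((kp1[m.queryIdx].pt, kp2[m.trainIdx].pt))
    return pairs                                     # (quasi-image pt, photo pt) pairs
```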

Chibunichev, A. G.; Galakhov, V. P.

2012-07-01

151

Depth-controlled 3D TV image coding  

NASA Astrophysics Data System (ADS)

Conventional 3D-TV codecs processing one down-compatible (either left or right) channel may optionally include the extraction of the disparity field associated with the stereo-pairs to support the coding of the complementary channel. A two-fold improvement over such approaches is proposed in this paper by exploiting 3D features retained in the stereo-pairs to reduce the redundancies in both channels, and according to their visual sensitiveness. Through an a-priori disparity field analysis, our coding scheme separates a region of interest from the foreground/background in the volume space reproduced in order to code them selectively based on their visual relevance. Such a region of interest is here identified as the one which is focused by the shooting device. By suitably scaling the DCT coefficients in such a way that precision is reduced for the image blocks lying on less relevant areas, our approach aims at reducing the signal energy in the background/foreground patterns, while retaining finer details on the more relevant image portions. From an implementation point of view, it is worth noticing that the system proposed keeps its surplus processing power on the encoder side only. Simulation results show such improvements as a better image quality for a given transmission bit rate, or a graceful quality degradation of the reconstructed images with decreasing data-rates.
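
A minimal sketch of this selective-precision idea (scale down the DCT coefficients of blocks outside the region of interest before quantization and undo the scaling after dequantization) is given below; the 8x8 block size, scale factor and quantization step are illustrative assumptions, not the paper's settings.

```python
# ROI-weighted DCT coefficient scaling around a uniform quantizer.
import numpy as np
from scipy.fft import dctn, idctn

def encode_block(block, in_roi, q_step=8.0, bg_scale=0.25):
    coeffs = dctn(block.astype(np.float64), norm="ortho")
    if not in_roi:
        coeffs *= bg_scale                # shrink background/foreground energy
    return np.round(coeffs / q_step)      # uniform quantization

def decode_block(q_coeffs, in_roi, q_step=8.0, bg_scale=0.25):
    coeffs = q_coeffs * q_step
    if not in_roi:
        coeffs /= bg_scale                # undo the scaling applied at the encoder
    return idctn(coeffs, norm="ortho")
```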

Chiari, Armando; Ciciani, Bruno; Romero, Milton; Rossi, Ricardo

1998-04-01

152

3D range scan enhancement using image-based methods  

NASA Astrophysics Data System (ADS)

This paper addresses the problem of 3D surface scan refinement, which is desirable due to noise, outliers, and missing measurements being present in the 3D surfaces obtained with a laser scanner. We present a novel algorithm for the fusion of absolute laser scanner depth profiles and photometrically estimated surface normal data, which yields a noise-reduced and highly detailed depth profile with large scale shape robustness. In contrast to other approaches published in the literature, the presented algorithm (1) handles non-Lambertian surfaces, (2) simultaneously computes surface reflectance (i.e. BRDF) parameters required for 3D reconstruction, (3) models pixelwise incident light and viewing directions, and (4) accounts for interreflections. The algorithm as such relies on the minimization of a three-component error term, which penalizes intensity deviations, integrability deviations, and deviations from the known large-scale surface shape. The solution of the error minimization is obtained iteratively based on a calculus of variations. BRDF parameters are estimated by initially reducing and then iteratively refining the optical resolution, which provides the required robust data basis. The 3D reconstruction of concave surface regions affected by interreflections is improved by compensating global illumination in the image data. The algorithm is evaluated based on eight objects with varying albedos and reflectance behaviors (diffuse, specular, metallic). The qualitative evaluation shows a removal of outliers and a strong reduction of noise, while the large scale shape is preserved. Fine surface details, which were previously not contained in the surface scans, are incorporated by using image data. The algorithm is evaluated with respect to its absolute accuracy using two caliper objects of known shape, and based on synthetically generated data. The beneficial effect of interreflection compensation on the reconstruction accuracy is evaluated quantitatively in a Photometric Stereo framework.

Herbort, Steffen; Gerken, Britta; Schugk, Daniel; Wöhler, Christian

2013-10-01

153

Improving 3D Wavelet-Based Compression of Hyperspectral Images  

NASA Technical Reports Server (NTRS)

Two methods of increasing the effectiveness of three-dimensional (3D) wavelet-based compression of hyperspectral images have been developed. (As used here, "images" signifies both images and digital data representing images.) The methods are oriented toward reducing or eliminating detrimental effects of a phenomenon, referred to as spectral ringing, that is described below. In 3D wavelet-based compression, an image is represented by a multiresolution wavelet decomposition consisting of several subbands obtained by applying wavelet transforms in the two spatial dimensions corresponding to the two spatial coordinate axes of the image plane, and by applying wavelet transforms in the spectral dimension. Spectral ringing is named after the more familiar spatial ringing (spurious spatial oscillations) that can be seen parallel to and near edges in ordinary images reconstructed from compressed data. These ringing phenomena are attributable to effects of quantization. In hyperspectral data, the individual spectral bands play the role of edges, causing spurious oscillations to occur in the spectral dimension. In the absence of such corrective measures as the present two methods, spectral ringing can manifest itself as systematic biases in some reconstructed spectral bands and can reduce the effectiveness of compression of spatially-low-pass subbands. One of the two methods is denoted mean subtraction. The basic idea of this method is to subtract mean values from spatial planes of spatially low-pass subbands prior to encoding, because (a) such spatial planes often have mean values that are far from zero and (b) zero-mean data are better suited for compression by methods that are effective for subbands of two-dimensional (2D) images. In this method, after the 3D wavelet decomposition is performed, mean values are computed for and subtracted from each spatial plane of each spatially-low-pass subband. The resulting data are converted to sign-magnitude form and compressed in a manner similar to that of a baseline hyperspectral-image-compression method. The mean values are encoded in the compressed bit stream and added back to the data at the appropriate decompression step. The overhead incurred by encoding the mean values (only a few bits per spectral band) is negligible with respect to the huge size of a typical hyperspectral data set. The other method is denoted modified decomposition. This method is so named because it involves a modified version of a commonly used multiresolution wavelet decomposition, known in the art as the 3D Mallat decomposition, in which (a) the first of multiple stages of a 3D wavelet transform is applied to the entire dataset and (b) subsequent stages are applied only to the horizontally-, vertically-, and spectrally-low-pass subband from the preceding stage. In the modified decomposition, in stages after the first, not only is the spatially-low-pass, spectrally-low-pass subband further decomposed, but also spatially-low-pass, spectrally-high-pass subbands are further decomposed spatially. Either method can be used alone to improve the quality of a reconstructed image (see figure). Alternatively, the two methods can be combined by first performing modified decomposition, then subtracting the mean values from spatial planes of spatially-low-pass subbands.
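The mean-subtraction step is simple enough to sketch directly; the snippet below assumes the spatially-low-pass subband is available as a (bands, rows, cols) array and leaves the entropy coding of the means aside.

```python
import numpy as np

def subtract_plane_means(lowpass_subband):
    """Mean subtraction for a spatially-low-pass subband of a 3D wavelet decomposition.

    `lowpass_subband` has shape (bands, rows, cols); each spectral plane gets its
    spatial mean removed before encoding, and the means are kept as side information.
    """
    means = lowpass_subband.mean(axis=(1, 2))          # one mean per spatial plane
    zero_mean = lowpass_subband - means[:, None, None]
    return zero_mean, means

def add_plane_means(zero_mean, means):
    """Inverse step, applied at the appropriate point of decompression."""
    return zero_mean + means[:, None, None]
```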

Klimesh, Matthew; Kiely, Aaron; Xie, Hua; Aranki, Nazeeh

2009-01-01

154

Detection of retinal nerve fiber layer defects in retinal fundus images using Gabor filtering  

NASA Astrophysics Data System (ADS)

Retinal nerve fiber layer defect (NFLD) is one of the most important findings for the diagnosis of glaucoma reported by ophthalmologists. However, such changes could be overlooked, especially in mass screenings, because ophthalmologists have limited time to search for a number of different changes for the diagnosis of various diseases such as diabetes, hypertension and glaucoma. Therefore, the use of a computer-aided detection (CAD) system can improve the results of diagnosis. In this work, a technique for the detection of NFLDs in retinal fundus images is proposed. In the preprocessing step, blood vessels are "erased" from the original retinal fundus image by using morphological filtering. The preprocessed image is then transformed into a rectangular array. NFLD regions are observed as vertical dark bands in the transformed image. Gabor filtering is then applied to enhance the vertical dark bands. False positives (FPs) are reduced by a rule-based method which uses the information of the location and the width of each candidate region. The detected regions are back-transformed into the original configuration. In this preliminary study, 71% of NFLD regions are detected with an average of 3.2 FPs per image. In conclusion, we have developed a technique for the detection of NFLDs in retinal fundus images. Promising results have been obtained in this initial study.
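A minimal sketch of the band-enhancement step, assuming OpenCV and a handful of near-vertical orientations; the kernel parameters are illustrative and not those used in the paper.

```python
import cv2
import numpy as np

def enhance_vertical_bands(transformed, ksize=31, sigma=4.0, lambd=12.0, gamma=0.5):
    """Enhance vertical dark bands (NFLD candidates) in the rectangularly
    transformed fundus image using a small bank of near-vertical Gabor kernels."""
    img = transformed.astype(np.float32)
    responses = []
    for theta in np.deg2rad([80, 90, 100]):            # near-vertical orientations
        kernel = cv2.getGaborKernel((ksize, ksize), sigma, theta, lambd, gamma)
        kernel -= kernel.mean()                        # zero-mean so flat areas give ~0
        responses.append(cv2.filter2D(img, cv2.CV_32F, kernel))
    # Dark bands produce strongly negative responses; flip the sign and take the max.
    return np.max(-np.stack(responses), axis=0)
```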

Hayashi, Yoshinori; Nakagawa, Toshiaki; Hatanaka, Yuji; Aoyama, Akira; Kakogawa, Masakatsu; Hara, Takeshi; Fujita, Hiroshi; Yamamoto, Tetsuya

2007-03-01

155

Live imaging and analysis of postnatal mouse retinal development  

PubMed Central

Background The explanted, developing rodent retina provides an efficient and accessible preparation for use in gene transfer and pharmacological experimentation. Many of the features of normal development are retained in the explanted retina, including retinal progenitor cell proliferation, heterochronic cell production, interkinetic nuclear migration, and connectivity. To date, live imaging in the developing retina has been reported in non-mammalian and mammalian whole-mount samples. An integrated approach to rodent retinal culture/transfection, live imaging, cell tracking, and analysis in structurally intact explants greatly improves our ability to assess the kinetics of cell production. Results In this report, we describe the assembly and maintenance of an in vitro, CO2-independent, live mouse retinal preparation that is accessible by both upright and inverted, 2-photon or confocal microscopes. The optics of this preparation permit high-quality and multi-channel imaging of retinal cells expressing fluorescent reporters for up to 48h. Tracking of interkinetic nuclear migration within individual cells, and changes in retinal progenitor cell morphology are described. Follow-up, hierarchical cluster screening revealed that several different dependent variable measures can be used to identify and group movement kinetics in experimental and control samples. Conclusions Collectively, these methods provide a robust approach to assay multiple features of rodent retinal development using live imaging. PMID:23758927

2013-01-01

156

Multimodal registration of retinal images using self organizing maps  

Microsoft Academic Search

In this paper, an automatic method for registering multimodal retinal images is presented. The method consists of three steps: the vessel centerline detection and extraction of bifurcation points only in the reference image, the automatic correspondence of bifurcation points in the two images using a novel implementation of the self organizing maps, and the extraction of the parameters of the affine transform using the previously obtained correspondences.

George K. Matsopoulos; Pantelis A. Asvestas; Nicolaos A. Mouravliansky; Konstantinos K. Delibasis

2004-01-01

157

An automated vessel segmentation of retinal images using multiscale vesselness  

Microsoft Academic Search

The ocular fundus image can provide information on pathological changes caused by local ocular diseases and early signs of certain systemic diseases, such as diabetes and hypertension. Automated analysis and interpretation of fundus images has become a necessary and important diagnostic procedure in ophthalmology. The extraction of blood vessels from retinal images is an important and challenging task in medical

Mariem Ben Abdallah; Jihene Malek; Karl Krissian; Rached Tourki

2011-01-01

158

AUTOMATED MODELING OF 3D BUILDING ROOFS USING IMAGE AND LIDAR DATA  

E-print Network

In this work, an automated approach for 3D building roof modelling from aerial images and LiDAR data is presented, aimed at the generation of accurate and complete 3D building models with a high degree of automation.

Schindler, Konrad

159

Adaptive optics scanning laser ophthalmoscope for stabilized retinal imaging  

PubMed Central

A retinal imaging instrument that integrates adaptive optics (AO), scanning laser ophthalmoscopy (SLO), and retinal tracking components was built and tested. The system uses a Hartmann-Shack wave-front sensor (HS-WS) and MEMS-based deformable mirror (DM) for AO-correction of high-resolution, confocal SLO images. The system includes a wide-field line-scanning laser ophthalmoscope for easy orientation of the high-magnification SLO raster. The AO system corrected ocular aberrations to <0.1 μm RMS wave-front error. An active retinal tracking system with a custom processing board sensed and corrected eye motion with a bandwidth exceeding 1 kHz. We demonstrate tracking accuracy down to 6 μm RMS for some subjects (typical performance: 10-15 μm RMS). The system has the potential to become an important tool to clinicians and researchers for vision studies and the early detection and treatment of retinal diseases. PMID:19516480

Hammer, Daniel X.; Ferguson, R. Daniel; Bigelow, Chad E.; Iftimia, Nicusor V.; Ustun, Teoman E.; Burns, Stephen A.

2010-01-01

160

[Content-based automatic retinal image recognition and retrieval system].  

PubMed

This paper aims to develop a prototype system for classifying and retrieving retinal images automatically. Using content-based image retrieval (CBIR) technology, a method is proposed to represent retinal characteristics by combining the fundus image color (gray) histogram with bright and dark region features and other local comprehensive information. The method uses kernel principal component analysis (KPCA) to further extract nonlinear features and reduce dimensionality. For similarity measurement, it puts forward a method using a support vector machine (SVM) on a KPCA-weighted distance. Testing 300 randomly selected samples with this prototype system, 32 images were retrieved incorrectly, giving a retrieval rate of 89.33%. This showed that the identification rate of the system for retinal images was high. PMID:23858770
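The KPCA-plus-SVM pipeline can be sketched with scikit-learn; the feature matrix below is random placeholder data standing in for the histogram and region descriptors, and the component count and kernel parameters are assumptions.

```python
import numpy as np
from sklearn.decomposition import KernelPCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Hypothetical feature matrix: one row per retinal image, columns mixing
# gray-level histogram bins with bright/dark region descriptors.
X = np.random.rand(300, 64)
y = np.random.randint(0, 4, size=300)   # hypothetical image categories

model = make_pipeline(
    StandardScaler(),
    KernelPCA(n_components=20, kernel="rbf", gamma=0.05),  # nonlinear feature extraction
    SVC(kernel="rbf", C=10.0),                              # class prediction for retrieval
)
model.fit(X, y)
print("training accuracy:", model.score(X, y))
```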

Zhang, Jiumei; Du, Jianjun; Cheng, Xia; Cao, Hongliang

2013-04-01

161

Ultra-realistic 3-D imaging based on colour holography  

NASA Astrophysics Data System (ADS)

A review of recent progress in colour holography is provided with new applications. Colour holography recording techniques in silver-halide emulsions are discussed. Both analogue colour holograms (mainly Denisyuk colour holograms) and digitally-printed colour holograms are described, along with their recent improvements. Panchromatic photopolymer materials, such as the DuPont and Bayer photopolymers, are covered as an alternative to silver-halide materials. The light sources used to illuminate the recorded holograms are very important to obtain ultra-realistic 3-D images. In particular, new light sources based on RGB LEDs are described. They show improved image quality over today's commonly used halogen lights. Recent work in colour holography by holographers and companies in different countries around the world is included. Recording and displaying ultra-realistic 3-D images with perfect colour rendering depend heavily on the correct recording technique using the optimal recording laser wavelengths, the availability of improved panchromatic recording materials, and the new display light sources.

Bjelkhagen, H. I.

2013-02-01

162

3D-LZ helicopter ladar imaging system  

NASA Astrophysics Data System (ADS)

A joint-service team led by the Air Force Research Laboratory's Munitions and Sensors Directorates completed a successful flight test demonstration of the 3D-LZ Helicopter LADAR Imaging System. This was a milestone demonstration in the development of technology solutions for a problem known as "helicopter brownout", the loss of situational awareness caused by swirling sand during approach and landing. The 3D-LZ LADAR was developed by H.N. Burns Engineering and integrated with the US Army Aeroflightdynamics Directorate's Brown-Out Symbology System aircraft state symbology aboard a US Army EH-60 Black Hawk helicopter. The combination of these systems provided an integrated degraded visual environment landing solution with landing zone situational awareness as well as aircraft guidance and obstacle avoidance information. Pilots from the U.S. Army, Air Force, Navy, and Marine Corps achieved a 77% landing rate in full brownout conditions at a test range at Yuma Proving Ground, Arizona. This paper will focus on the LADAR technology used in 3D-LZ and the results of this milestone demonstration.

Savage, James; Harrington, Walter; McKinley, R. Andrew; Burns, H. N.; Braddom, Steven; Szoboszlay, Zoltan

2010-04-01

163

3D Multispectral Light Propagation Model For Subcutaneous Veins Imaging  

SciTech Connect

In this paper, we describe a new 3D light propagation model aimed at understanding the effects of various physiological properties on subcutaneous vein imaging. In particular, we build upon the well known MCML (Monte Carlo Multi Layer) code and present a tissue model that improves upon the current state-of-the-art by: incorporating physiological variation, such as melanin concentration, fat content, and layer thickness; including veins of varying depth and diameter; using curved surfaces from real arm shapes; and modeling the vessel wall interface. We describe our model, present results from the Monte Carlo modeling, and compare these results with those obtained with other Monte Carlo methods.
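For orientation, a heavily simplified Monte Carlo photon walk is sketched below: a homogeneous medium, isotropic scattering and absorption by weight reduction. MCML and the model described above additionally handle layered tissue, refractive boundaries and anisotropic scattering, so this is only a conceptual sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def propagate_photon(mu_a, mu_s, max_steps=1000):
    """Minimal Monte Carlo random walk in a homogeneous medium: exponential
    free paths, isotropic scattering, absorption handled by weight reduction."""
    mu_t = mu_a + mu_s
    pos = np.zeros(3)
    weight = 1.0
    while weight > 1e-4 and max_steps > 0:
        max_steps -= 1
        step = -np.log(rng.random()) / mu_t           # sample free path length
        cos_t = 2.0 * rng.random() - 1.0              # isotropic new direction
        phi = 2.0 * np.pi * rng.random()
        sin_t = np.sqrt(1.0 - cos_t * cos_t)
        direction = np.array([sin_t * np.cos(phi), sin_t * np.sin(phi), cos_t])
        pos += step * direction
        weight *= mu_s / mu_t                         # fraction surviving absorption
    return pos, weight

# Example: average |z| displacement of many photons for given optical properties.
depths = [abs(propagate_photon(mu_a=0.1, mu_s=10.0)[0][2]) for _ in range(1000)]
print("mean |z| displacement:", np.mean(depths))
```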

Paquit, Vincent C [ORNL; Price, Jeffery R [ORNL; Meriaudeau, Fabrice [ORNL; Tobin Jr, Kenneth William [ORNL

2008-01-01

164

Study of the performance of different subpixel image correlation methods in 3D digital image correlation.  

PubMed

The three-dimensional digital image correlation (3D-DIC) method is rapidly developing and is being widely applied to engineering and manufacturing. Despite its extensive use, the error caused by different image matching algorithms is seldom discussed. An algorithm for 3D speckle image generation is proposed, and the performances of different subpixel correlation algorithms are studied. The advantage is that there is no interpolation bias of texture in the simulation before and after deformation, and the error from the interpolation of speckle can be omitted in this algorithm. An error criterion for 3D reconstruction is proposed. 3D speckle images were simulated, and the performance of four subpixel algorithms is addressed. Based on the research results of different subpixel algorithms, a first-order Newton-Raphson iteration method and gradient-based method are recommended for 3D-DIC measurement. PMID:20648187
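As a point of reference for the gradient-based family of algorithms discussed above, the sketch below estimates a translation-only subpixel shift between two image subsets from first-order gradients; full DIC implementations also solve for the deformation gradient, so this is an illustration rather than the paper's algorithm.

```python
import numpy as np

def gradient_subpixel_shift(ref, cur):
    """First-order, gradient-based estimate of the (sub)pixel shift between two
    equally sized subsets (translation only)."""
    gy, gx = np.gradient(ref.astype(float))          # gradients along rows (y) and cols (x)
    diff = cur.astype(float) - ref.astype(float)
    # Least-squares solution of  gx*dx + gy*dy ~= diff  over the subset.
    A = np.array([[np.sum(gx * gx), np.sum(gx * gy)],
                  [np.sum(gx * gy), np.sum(gy * gy)]])
    b = np.array([np.sum(gx * diff), np.sum(gy * diff)])
    dx, dy = np.linalg.solve(A, b)
    return dx, dy   # to first order, cur(x, y) ~= ref(x + dx, y + dy)
```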

Hu, Zhenxing; Xie, Huimin; Lu, Jian; Hua, Tao; Zhu, Jianguo

2010-07-20

165

Performance assessment of 3D surface imaging technique for medical imaging applications  

NASA Astrophysics Data System (ADS)

Recent developments in optical 3D surface imaging technologies provide better ways to digitalize the 3D surface and its motion in real-time. The non-invasive 3D surface imaging approach has great potential for many medical imaging applications, such as motion monitoring in radiotherapy and pre/post evaluation of plastic surgery and dermatology, to name a few. Various commercial 3D surface imaging systems have appeared on the market with different dimensions, speeds and accuracies. For clinical applications, accuracy, reproducibility and robustness across widely heterogeneous skin colors, tones, textures, shape properties, and ambient lighting are crucial. Until now, no systematic approach for evaluating the performance of different 3D surface imaging systems has existed. In this paper, we present a systematic performance assessment approach for 3D surface imaging systems intended for medical applications. We use this assessment approach to examine a new real-time surface imaging system we developed, dubbed the "Neo3D Camera", for image-guided radiotherapy (IGRT). The assessments include accuracy, field of view, coverage, repeatability, speed and sensitivity to environment, texture and color.

Li, Tuotuo; Geng, Jason; Li, Shidong

2013-03-01

166

Application of 3D surface imaging in breast cancer radiotherapy  

NASA Astrophysics Data System (ADS)

Purpose: Accurate dose delivery in deep-inspiration breath-hold (DIBH) radiotherapy for patients with breast cancer relies on precise treatment setup and monitoring of the depth of the breath hold. This study entailed performance evaluation of a 3D surface imaging system for image guidance in DIBH radiotherapy by comparison with cone-beam computed tomography (CBCT). Materials and Methods: Fifteen patients, treated with DIBH radiotherapy after breast-conserving surgery, were included. The performance of surface imaging was compared to the use of CBCT for setup verification. Retrospectively, breast surface registrations were performed for CBCT to planning CT as well as for a 3D surface, captured concurrently with CBCT, to planning CT. The resulting setup errors were compared with linear regression analysis. For the differences between setup errors, group mean, systematic and random errors were calculated. Furthermore, a residual error after registration (RRE) was assessed for both systems by investigating the root-mean-square distance between the planning CT surface and the registered CBCT/captured surface. Results: Good correlation between setup errors was found: R²=0.82, 0.86, 0.82 in the left-right, cranio-caudal and anterior-posterior directions, respectively. Systematic and random errors were ≤0.16 cm and ≤0.13 cm in all directions, respectively. RRE values for surface imaging and CBCT were on average 0.18 versus 0.19 cm with a standard deviation of 0.10 and 0.09 cm, respectively. Wilcoxon signed-rank testing showed that CBCT registrations resulted in higher RRE values than surface imaging registrations (p=0.003). Conclusion: This performance evaluation study shows very promising results.

Alderliesten, Tanja; Sonke, Jan-Jakob; Betgen, Anja; Honnef, Joeri; van Vliet-Vroegindeweij, Corine; Remeijer, Peter

2012-02-01

167

A Simple Quality Assessment Index for Stereoscopic Images Based on 3D Gradient Magnitude  

PubMed Central

We present a simple quality assessment index for stereoscopic images based on 3D gradient magnitude. To be more specific, we construct a 3D volume from the stereoscopic images across different disparity spaces and calculate pointwise 3D gradient magnitude similarity (3D-GMS) along the horizontal, vertical, and viewpoint directions. Then, the quality score is obtained by averaging the 3D-GMS scores of all points in the 3D volume. Experimental results on four publicly available 3D image quality assessment databases demonstrate that, in comparison with the most related existing methods, the devised algorithm achieves high consistency alignment with subjective assessment. PMID:25133265
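The pointwise similarity itself is a one-liner; the sketch below assumes simple central-difference gradients and an illustrative stabilizing constant c.

```python
import numpy as np

def gms_3d(vol_ref, vol_dis, c=170.0):
    """Pointwise 3D gradient-magnitude similarity between a reference and a
    distorted volume built across disparity space; the final score is the mean."""
    def grad_mag(v):
        g0, g1, g2 = np.gradient(v.astype(float))    # gradients along the three axes
        return np.sqrt(g0**2 + g1**2 + g2**2)
    m_r, m_d = grad_mag(vol_ref), grad_mag(vol_dis)
    gms = (2 * m_r * m_d + c) / (m_r**2 + m_d**2 + c)
    return gms.mean()
```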

Wang, Shanshan; Shao, Feng; Li, Fucui; Yu, Mei; Jiang, Gangyi

2014-01-01

168

High-resolution 3-D refractive index imaging and Its biological applications  

E-print Network

This thesis presents a theory of 3-D imaging in partially coherent light under a non-paraxial condition. The transmission cross-coefficient (TCC) has been used to characterize partially coherent imaging in 2-D and 3-D ...

Sung, Yongjin

2011-01-01

169

3D Slicer as an image computing platform for the Quantitative Imaging Network.  

PubMed

Quantitative analysis has tremendous but mostly unrealized potential in healthcare to support objective and accurate interpretation of the clinical imaging. In 2008, the National Cancer Institute began building the Quantitative Imaging Network (QIN) initiative with the goal of advancing quantitative imaging in the context of personalized therapy and evaluation of treatment response. Computerized analysis is an important component contributing to reproducibility and efficiency of the quantitative imaging techniques. The success of quantitative imaging is contingent on robust analysis methods and software tools to bring these methods from bench to bedside. 3D Slicer is a free open-source software application for medical image computing. As a clinical research tool, 3D Slicer is similar to a radiology workstation that supports versatile visualizations but also provides advanced functionality such as automated segmentation and registration for a variety of application domains. Unlike a typical radiology workstation, 3D Slicer is free and is not tied to specific hardware. As a programming platform, 3D Slicer facilitates translation and evaluation of the new quantitative methods by allowing the biomedical researcher to focus on the implementation of the algorithm and providing abstractions for the common tasks of data communication, visualization and user interface development. Compared to other tools that provide aspects of this functionality, 3D Slicer is fully open source and can be readily extended and redistributed. In addition, 3D Slicer is designed to facilitate the development of new functionality in the form of 3D Slicer extensions. In this paper, we present an overview of 3D Slicer as a platform for prototyping, development and evaluation of image analysis tools for clinical research applications. To illustrate the utility of the platform in the scope of QIN, we discuss several use cases of 3D Slicer by the existing QIN teams, and we elaborate on the future directions that can further facilitate development and validation of imaging biomarkers using 3D Slicer. PMID:22770690

Fedorov, Andriy; Beichel, Reinhard; Kalpathy-Cramer, Jayashree; Finet, Julien; Fillion-Robin, Jean-Christophe; Pujol, Sonia; Bauer, Christian; Jennings, Dominique; Fennessy, Fiona; Sonka, Milan; Buatti, John; Aylward, Stephen; Miller, James V; Pieper, Steve; Kikinis, Ron

2012-11-01

170

3D Slicer as an Image Computing Platform for the Quantitative Imaging Network  

PubMed Central

Quantitative analysis has tremendous but mostly unrealized potential in healthcare to support objective and accurate interpretation of the clinical imaging. In 2008, the National Cancer Institute began building the Quantitative Imaging Network (QIN) initiative with the goal of advancing quantitative imaging in the context of personalized therapy and evaluation of treatment response. Computerized analysis is an important component contributing to reproducibility and efficiency of the quantitative imaging techniques. The success of quantitative imaging is contingent on robust analysis methods and software tools to bring these methods from bench to bedside. 3D Slicer is a free open source software application for medical image computing. As a clinical research tool, 3D Slicer is similar to a radiology workstation that supports versatile visualizations but also provides advanced functionality such as automated segmentation and registration for a variety of application domains. Unlike a typical radiology workstation, 3D Slicer is free and is not tied to specific hardware. As a programming platform, 3D Slicer facilitates translation and evaluation of the new quantitative methods by allowing the biomedical researcher to focus on the implementation of the algorithm, and providing abstractions for the common tasks of data communication, visualization and user interface development. Compared to other tools that provide aspects of this functionality, 3D Slicer is fully open source and can be readily extended and redistributed. In addition, 3D Slicer is designed to facilitate the development of new functionality in the form of 3D Slicer extensions. In this paper, we present an overview of 3D Slicer as a platform for prototyping, development and evaluation of image analysis tools for clinical research applications. To illustrate the utility of the platform in the scope of QIN, we discuss several use cases of 3D Slicer by the existing QIN teams, and we elaborate on the future directions that can further facilitate development and validation of imaging biomarkers using 3D Slicer. PMID:22770690

Fedorov, Andriy; Beichel, Reinhard; Kalpathy-Cramer, Jayashree; Finet, Julien; Fillion-Robin, Jean-Christophe; Pujol, Sonia; Bauer, Christian; Jennings, Dominique; Fennessy, Fiona; Sonka, Milan; Buatti, John; Aylward, Stephen; Miller, James V.; Pieper, Steve; Kikinis, Ron

2012-01-01

171

Multimodal registration of retinal images using self organizing maps.  

PubMed

In this paper, an automatic method for registering multimodal retinal images is presented. The method consists of three steps: the vessel centerline detection and extraction of bifurcation points only in the reference image, the automatic correspondence of bifurcation points in the two images using a novel implementation of the self organizing maps and the extraction of the parameters of the affine transform using the previously obtained correspondences. The proposed registration algorithm was tested on 24 multimodal retinal pairs and the obtained results show an advantageous performance in terms of accuracy with respect to the manual registration. PMID:15575412

Matsopoulos, George K; Asvestas, Pantelis A; Mouravliansky, Nikolaos A; Delibasis, Konstantinos K

2004-12-01

172

3D set partitioned embedded zero block coding algorithm for hyperspectral image compression  

Microsoft Academic Search

In this paper, a three-dimensional Set Partitioned Embedded Zero Block Coding (3D SPEZBC) algorithm for hyperspectral image compression is proposed, which is motivated by the EZBC and SPECK algorithms. Experimental results show that the 3D SPEZBC algorithm obviously outperforms 3D SPECK, 3D SPIHT and AT-3D SPIHT, and is slightly better than JPEG2000-MC in the compression performances. Moreover, the 3D SPEZBC

Ying Hou; Guizhong Liu

2007-01-01

173

A compression algorithm of hyperspectral remote sensing image based on 3-D Wavelet transform and fractal  

Microsoft Academic Search

In this paper, 3-D wavelet-fractal coding was used to compress hyperspectral remote sensing images. The classical eight kinds of affine transformations in 2-D fractal image compression were generalized to nineteen for 3-D fractal image compression. The hyperspectral image data cube was first transformed by the 3-D wavelet, and then the 3-D fractal compression coding was applied to the lowest-frequency

Pan Wei; Zou Yi; Ao Lu

2008-01-01

174

Computing 3D head orientation from a monocular image sequence  

NASA Astrophysics Data System (ADS)

An approach for estimating 3D head orientation in a monocular image sequence is proposed. The approach employs recently developed image-based parameterized tracking for the face and face features to locate the area in which a sub-pixel parameterized shape estimation of the eye's boundary is performed. This involves tracking five points (four at the eye corners and one at the tip of the nose). We describe an approach that relies on the coarse structure of the face to compute orientation relative to the camera plane. Our approach employs projective invariance of the cross-ratios of the eye corners and anthropometric statistics to estimate the head yaw, roll and pitch. Analytical and experimental results are reported.
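The projective invariant at the core of the yaw estimate is the cross-ratio of four collinear points, sketched below with an arbitrary numeric example.

```python
import numpy as np

def cross_ratio(a, b, c, d):
    """Cross-ratio of four collinear image points (2D coordinates).

    This quantity is preserved under perspective projection, which is what makes
    it usable with the four eye-corner points."""
    a, b, c, d = (np.asarray(p, dtype=float) for p in (a, b, c, d))
    ac, bd = np.linalg.norm(c - a), np.linalg.norm(d - b)
    bc, ad = np.linalg.norm(c - b), np.linalg.norm(d - a)
    return (ac * bd) / (bc * ad)

# Example with four points on a line; the value survives projection of that line.
print(cross_ratio((0, 0), (1, 0), (3, 0), (6, 0)))   # 1.25
```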

Horprasert, Thanarat; Yacoob, Yaser; Davis, Larry S.

1997-02-01

175

Simple 3-D Image Synthesis Techniques From Serial Planes  

NASA Astrophysics Data System (ADS)

Several techniques now well established in computer graphics and graphic animation are combined in this work to develop a realistic presentation of anatomic structures. A rudimentary type of ray tracing is implemented for routine clinical CT exams. In particular, we describe our method in the context of a standardized cross-reference set of multiplanar reformatted CT pictures. In addition, an image coherence technique is briefly outlined that speeds rendering of a series of 3-D views. This paper describes a method by which object surface information from an existing view is used to help predict where ray tracing can begin to search for ray-object intersections in the subsequent view. This method is shown to reduce the computational expense of finding ray-object intersections by beginning this search in the proximity of object surfaces. Finally, we have cast shadows in the scene of objects rendered by our method. Several example images illustrate our results.

Rhodes, Michael L.; Kuo, Yu-Ming

1988-06-01

176

3D Image Viz-Analysis Tools and V3D Development Hackathon, July 26 -August 8, 2010  

E-print Network


Peng, Hanchuan

177

A joint estimation detection of Glaucoma progression in 3D spectral domain optical coherence tomography optic nerve head images  

PubMed Central

Glaucoma is an ocular disease characterized by distinctive changes in the optic nerve head (ONH) and visual field. Glaucoma can strike without symptoms and causes blindness if it remains without treatment. Therefore, early disease detection is important so that treatment can be initiated and blindness prevented. In this context, important advances in technology for non-invasive imaging of the eye have been made, providing quantitative tools to measure structural changes in ONH topography, an essential element for glaucoma detection and monitoring. 3D spectral domain optical coherence tomography (SD-OCT), an optical imaging technique, has been commonly used to discriminate glaucomatous from healthy subjects. In this paper, we present a new framework for detection of glaucoma progression using 3D SD-OCT images. In contrast to previous works that use the retinal nerve fiber layer (RNFL) thickness measurement provided by commercially available spectral-domain optical coherence tomographs, we consider the whole 3D volume for change detection. To integrate a priori knowledge, and in particular the spatial voxel dependency in the change detection map, we propose the use of a Markov Random Field to handle such dependency. To accommodate the presence of false positive detections, the estimated change detection map is then used to classify a 3D SD-OCT image into the "non-progressing" and "progressing" glaucoma classes, based on a fuzzy logic classifier. We compared the diagnostic performance of the proposed framework to existing methods of progression detection. PMID:25606299
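A minimal sketch of MRF-regularized change detection in the spirit of the framework above, using iterated conditional modes (ICM) with an Ising smoothness prior; the evidence term, threshold and neighbourhood handling are simplifying assumptions rather than the authors' model.

```python
import numpy as np

def icm_change_map(diff, beta=1.5, tau=None, n_iter=5):
    """ICM sketch of MRF-regularized change detection on a voxelwise difference
    volume `diff` (e.g. follow-up minus baseline SD-OCT). A voxel is labeled
    'change' when its evidence plus an Ising smoothness term from its
    6-neighbourhood favours it. Thresholds are illustrative."""
    tau = np.abs(diff).mean() + np.abs(diff).std() if tau is None else tau
    labels = (np.abs(diff) > tau).astype(np.int8)      # initial (noisy) change map
    evidence = np.abs(diff) - tau                       # >0 supports the 'change' label
    for _ in range(n_iter):
        # Count 'change' neighbours along the three axes (periodic borders via
        # np.roll, which is good enough for a sketch).
        nb = np.zeros_like(evidence)
        for axis in range(3):
            for shift in (1, -1):
                nb += np.roll(labels, shift, axis=axis)
        # Ising prior: each agreeing neighbour contributes +beta to that label.
        score_change = evidence + beta * nb
        score_keep = -evidence + beta * (6 - nb)
        labels = (score_change > score_keep).astype(np.int8)
    return labels
```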

Belghith, Akram; Bowd, Christopher; Weinreb, Robert N.; Zangwill, Linda M.

2014-01-01

178

Complex adaptation-based LDR image rendering for 3D image reconstruction  

NASA Astrophysics Data System (ADS)

A low-dynamic tone-compression technique is developed for realistic image rendering that can make three-dimensional (3D) images similar to realistic scenes by overcoming brightness dimming in the 3D display mode. The 3D surround provides varying conditions for image quality, illuminant adaptation, contrast, gamma, color, sharpness, and so on. In general, gain/offset adjustment, gamma compensation, and histogram equalization have performed well in contrast compression; however, as a result of signal saturation and clipping effects, image details are removed and information is lost on bright and dark areas. Thus, an enhanced image mapping technique is proposed based on space-varying image compression. The performance of contrast compression is enhanced with complex adaptation in a 3D viewing surround combining global and local adaptation. Evaluating local image rendering in view of tone and color expression, noise reduction, and edge compensation confirms that the proposed 3D image-mapping model can compensate for the loss of image quality in the 3D mode.

Lee, Sung-Hak; Kwon, Hyuk-Ju; Sohng, Kyu-Ik

2014-07-01

179

3D RECONSTRUCTION OF PLANT ROOTS FROM MRI IMAGES Hannes Schulz1  

E-print Network

A method for deriving a structural model of a plant root system from 3D Magnetic Resonance Imaging (MRI) is presented. Root imaging in natural soils is hampered by a wide range of constrictions, motivating a full, non-destructive 3D imaging approach.

Behnke, Sven

180

Using silhouette coherence for 3D image-based object modeling under circular motion  

E-print Network


Esteban, Carlos Hernández

181

Speckle Suppression for 3-D Ultrasound Images Using Nonlinear Multiscale Wavelet Diffusion  

E-print Network

We introduce a new speckle suppression approach for 3-D ultrasound images, based on nonlinear multiscale wavelet diffusion, for improved visualization of 3-D anatomy and pathology.

Duncan, James S.

182

High Resolution 3D Radar Imaging of Comet Interiors  

NASA Astrophysics Data System (ADS)

Knowing the interiors of comets and other primitive bodies is fundamental to our understanding of how planets formed. We have developed a Discovery-class mission formulation, Comet Radar Explorer (CORE), based on the use of previously flown planetary radar sounding techniques, with the goal of obtaining high resolution 3D images of the interior of a small primitive body. We focus on the Jupiter-Family Comets (JFCs) as these are among the most primitive bodies reachable by spacecraft. Scattered in from far beyond Neptune, they are ultimate targets of a cryogenic sample return mission according to the Decadal Survey. Other suitable targets include primitive NEOs, Main Belt Comets, and Jupiter Trojans. The approach is optimal for small icy bodies ~3-20 km diameter with spin periods faster than about 12 hours, since (a) navigation is relatively easy, (b) radar penetration is global for decameter wavelengths, and (c) repeated overlapping ground tracks are obtained. The science mission can be as short as ~1 month for a fast-rotating JFC. Bodies smaller than ~1 km can be globally imaged, but the navigation solutions are less accurate and the relative resolution is coarse. Larger comets are more interesting, but radar signal is unlikely to be reflected from depths greater than ~10 km. So, JFCs are excellent targets for a variety of reasons. We furthermore focus on the use of Solar Electric Propulsion (SEP) to rendezvous shortly after the comet's perihelion. This approach leaves us with ample power for science operations under dormant conditions beyond ~2-3 AU. This leads to a natural mission approach of distant observation, followed by closer inspection, terminated by a dedicated radar mapping orbit. Radar reflections are obtained from a polar orbit about the icy nucleus, which spins underneath. Echoes are obtained from a sounder operating at dual frequencies 5 and 15 MHz, with 1 and 10 MHz bandwidths respectively. The dense network of echoes is used to obtain global 3D images of interior structure to ~20 m, and to map dielectric properties (related to internal composition) to better than 200 m throughout. This is comparable in detail to modern 3D medical ultrasound, although we emphasize that the techniques are somewhat different. An interior mass distribution is obtained through spacecraft tracking, using data acquired during the close, quiet radar orbits. This is aligned with the radar-based images of the interior, and the shape model, to contribute to the multi-dimensional 3D global view. High-resolution visible imaging provides boundary conditions and geologic context to these interior views. An infrared spectroscopy and imaging campaign upon arrival reveals the time-evolving activity of the nucleus and the structure and composition of the inner coma, and the definition of surface units. CORE is designed to obtain a total view of a comet, from the coma to the active and evolving surface to the deep interior. Its primary science goal is to obtain clear images of internal structure and dielectric composition. These will reveal how the comet was formed, what it is made of, and how it 'works'. By making global yet detailed connections from interior to exterior, this knowledge will be an important complement to the Rosetta mission, and will lay the foundation for comet nucleus sample return by revealing the areas of shallow depth to 'bedrock', and relating accessible deposits to their originating provenances within the nucleus.

Asphaug, E. I.; Gim, Y.; Belton, M.; Brophy, J.; Weissman, P. R.; Heggy, E.

2012-12-01

183

Cone photoreceptor definition on adaptive optics retinal imaging  

PubMed Central

Aims To quantitatively analyse cone photoreceptor matrices on images captured on an adaptive optics (AO) camera and assess their correlation to well-established parameters in the retinal histology literature. Methods High resolution retinal images were acquired from 10 healthy subjects, aged 20-35 years old, using an AO camera (rtx1, Imagine Eyes, France). Left eye images were captured at 5° of retinal eccentricity, temporal to the fovea for consistency. In three subjects, images were also acquired at 0°, 2°, 3°, 5° and 7° retinal eccentricities. Cone photoreceptor density was calculated following manual and automated counting. Inter-photoreceptor distance was also calculated. Voronoi domain and power spectrum analyses were performed for all images. Results At 5° eccentricity, the cone density (cones/mm², mean±SD) was 15.3±1.4×10³ (automated) and 13.9±1.0×10³ (manual) and the mean inter-photoreceptor distance was 8.6±0.4 μm. Cone density decreased and inter-photoreceptor distance increased with increasing retinal eccentricity from 2° to 7°. A regular hexagonal cone photoreceptor mosaic pattern was seen at 2°, 3° and 5° of retinal eccentricity. Conclusions Imaging data acquired from the AO camera match cone density and intercone distance and show the known features of cone photoreceptor distribution in the pericentral retina as reported by histology, namely, decreasing density values from 2° to 7° of eccentricity and the hexagonal packing arrangement. This confirms that AO flood imaging provides reliable estimates of pericentral cone photoreceptor distribution in normal subjects. PMID:24729030
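The density and spacing metrics can be computed directly from cone centre coordinates; the sketch below assumes coordinates in micrometres and a known ROI area, and uses a nearest-neighbour query plus a Voronoi tessellation as a stand-in for the full Voronoi-domain analysis.

```python
import numpy as np
from scipy.spatial import Voronoi, cKDTree

def cone_metrics(coords_um, roi_area_um2):
    """Cone density (cones/mm^2), mean inter-photoreceptor distance (um), and
    mean Voronoi-cell side count from cone centre coordinates in micrometres."""
    coords = np.asarray(coords_um, dtype=float)
    density = len(coords) / (roi_area_um2 * 1e-6)       # convert um^2 to mm^2
    tree = cKDTree(coords)
    d, _ = tree.query(coords, k=2)                      # k=1 is the point itself
    mean_spacing = d[:, 1].mean()
    vor = Voronoi(coords)                               # for Voronoi-domain analysis
    n_sides = [len(r) for r in vor.regions if r and -1 not in r]
    return density, mean_spacing, np.mean(n_sides)
```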

Muthiah, Manickam Nick; Gias, Carlos; Chen, Fred Kuanfu; Zhong, Joe; McClelland, Zoe; Sallo, Ferenc B; Peto, Tunde; Coffey, Peter J; da Cruz, Lyndon

2014-01-01

184

Improvements of 3-D image quality in integral display by reducing distortion errors  

NASA Astrophysics Data System (ADS)

An integral three-dimensional (3-D) system based on the principle of integral photography can display natural 3-D images. We studied ways of improving the resolution and viewing angle of 3-D images by using extremely high-resolution (EHR) video in an integral 3-D video system. One of the problems with the EHR projection-type integral 3-D system is that positional errors appear between the elemental image and the elemental lens when there is geometric distortion in the projected image. We analyzed the relationships between the geometric distortion in the elemental images caused by the projection lens and the spatial distortion of the reconstructed 3-D image. As a result, we clarified that 3-D images reconstructed far from the lens array were greatly affected by the distortion of the elemental images, and that the 3-D images were significantly distorted in the depth direction at the corners of the displayed images. Moreover, we developed a video signal processor that electrically compensated the distortion in the elemental images for an EHR projection-type integral 3-D system. Therefore, the distortion in the displayed 3-D image was removed, and the viewing angle of the 3-D image was expanded to nearly double that obtained with the previous prototype system.

Kawakita, Masahiro; Sasaki, Hisayuki; Arai, Jun; Okano, Fumio; Suehiro, Koya; Haino, Yasuyuki; Yoshimura, Makoto; Sato, Masahito

2008-02-01

185

Fast 3-D Tomographic Microwave Imaging for Breast Cancer Detection  

PubMed Central

Microwave breast imaging (using electromagnetic waves of frequencies around 1 GHz) has mostly remained at the research level for the past decade, gaining little clinical acceptance. The major hurdles limiting patient use are both at the hardware level (challenges in collecting accurate and noncorrupted data) and software level (often plagued by unrealistic reconstruction times in the tens of hours). In this paper we report improvements that address both issues. First, the hardware is able to measure signals down to levels compatible with sub-centimeter image resolution while keeping an exam time under 2 min. Second, the software overcomes the enormous time burden and produces similarly accurate images in less than 20 min. The combination of the new hardware and software allows us to produce and report here the first clinical 3-D microwave tomographic images of the breast. Two clinical examples are selected out of 400+ exams conducted at the Dartmouth Hitchcock Medical Center (Lebanon, NH). The first example demonstrates the potential usefulness of our system for breast cancer screening while the second example focuses on therapy monitoring. PMID:22562726

Meaney, Paul M.; Kaufman, Peter A.; diFlorio-Alexander, Roberta M.; Paulsen, Keith D.

2013-01-01

186

Imaging the 3D geometry of pseudotachylyte-bearing faults  

NASA Astrophysics Data System (ADS)

Dynamic friction experiments in granitoid or gabbroic rocks that achieve earthquake slip velocities reveal significant weakening by melt-lubrication of the sliding surfaces. Extrapolation of these experimental results to seismic source depths (> 7 km) suggests that the slip weakening distance (Dw) over which this transition occurs is < 10 cm. The physics of this lubrication in the presence of a fluid (melt) is controlled by surface micro-topography. In order to characterize fault surface microroughness and its evolution during dynamic slip events on natural faults, we have undertaken an analysis of three-dimensional (3D) fault surface microtopography and its causes on a suite of pseudotachylyte-bearing fault strands from the Gole Larghe fault zone, Italy. The solidification of frictional melt soon after seismic slip ceases "freezes in" earthquake source geometries; however, it also precludes the development of extensive fault surface exposures that have enabled direct studies of fault surface roughness. We have overcome this difficulty by imaging the intact 3D geometry of the fault using high-resolution X-ray computed tomography (CT). We collected a suite of 2-3.5 cm diameter cores (2-8 cm long) from individual faults within the Gole Larghe fault zone with a range of orientations (+/- 45 degrees from average strike) and slip magnitudes (0-1 m). Samples were scanned at the University of Texas High Resolution X-ray CT Facility, using an Xradia MicroCT scanner with a 70 kV X-ray source. Individual voxels (3D pixels) are ~36 μm across. Fault geometry is thus imaged over ~4 orders of magnitude from the micron scale up to ~Dw. Pseudotachylyte-bearing fault zones are imaged as tabular bodies of intermediate X-ray attenuation crosscutting high attenuation biotite and low attenuation quartz and feldspar of the surrounding tonalite. We extract the fault surfaces (contact between the pseudotachylyte bearing fault zone and the wall rock) using integrated manual mapping, automated edge detection, and statistical evaluation. This approach results in a digital elevation model for each side of the fault zone that we use to quantify melt thickness and volume as well as surface microroughness and explore the relationship between these properties and the geometry, slip magnitude, and wall rock mineralogy of the fault.

Resor, Phil; Shervais, Katherine

2013-04-01

187

3D Chemical and Elemental Imaging by STXM Spectrotomography  

SciTech Connect

Spectrotomography based on the scanning transmission x-ray microscope (STXM) at the 10ID-1 spectromicroscopy beamline of the Canadian Light Source was used to study two selected unicellular microorganisms. Spatial distributions of sulphur globules, calcium, protein, and polysaccharide in sulphur-metabolizing bacteria (Allochromatium vinosum) were determined at the S 2p, C 1s, and Ca 2p edges. 3D chemical mapping showed that the sulphur globules are located inside the bacteria with a strong spatial correlation with calcium ions (it is most probably calcium carbonate from the medium; however, with STXM the distribution and localization in the cell can be made visible, which is very interesting for a biologist) and polysaccharide-rich polymers, suggesting an influence of the organic components on the formation of the sulphur and calcium deposits. A second study investigated copper accumulating in yeast cells (Saccharomyces cerevisiae) treated with copper sulphate. 3D elemental imaging at the Cu 2p edge showed that Cu(II) is reduced to Cu(I) on the yeast cell wall. A novel needle-like wet cell sample holder for STXM spectrotomography studies of fully hydrated samples is discussed.

Wang, J.; Karunakaran, C.; Lu, Y.; Hormes, J. [Canadian Light Source Inc., University of Saskatchewan, Saskatoon, SK S7N 0X4 (Canada); Hitchcock, A. P. [BIMR, McMaster University, Hamilton, ON L8S 4M1 (Canada); Prange, A. [Microbiology and Food Hygiene, Niederrhein University of Applied Sciences, Moenchengladbach (Germany); Institute for Microbiology and Virology, University of Witten/Herdecke, Witten (Germany); Center for Advanced Microstructures and Devices (CAMD), Louisiana State University, Baton Rouge, LA (United States); Franz, B. [Microbiology and Food Hygiene, Niederrhein University of Applied Sciences, Moenchengladbach (Germany); Harkness, T. [College of Medicine, University of Saskatchewan, Saskatoon, SK S7N 5E5 (Canada); Obst, M. [Center for Applied Geoscience, Tuebingen University, Tuebingen (Germany)

2011-09-09

188

An improved watershed algorithm for counting objects in noisy, anisotropic 3-D biological images  

Microsoft Academic Search

Effective 3-D image processing algorithms are presented for automatic counting and analysis of cells in anisotropic 3-D biological images that are collected by laser-scanning confocal microscopes. In these instruments, the x-y resolution is much better than the resolution along the z axis, hence the voxels (pixels in 3-D) are anisotropic. In this work, the images are pre-processed by a 3-D

H. Ancin; T. E. Dufresne; G. M. Ridder; J. N. Turner; B. Roysam

1995-01-01

189

Vision expert system 3D-IMPRESS for automated construction of three dimensional image processing procedures  

Microsoft Academic Search

In this paper a three-dimensional (3D) image processing expert system called 3D-IMPRESS is presented. This system can automatically construct a 3D image processing procedure by using pairs of an original input image and a desired output figure, called a sample figure, given by a user. This paper describes the outline of 3D-IMPRESS and presents a method of procedure consolidation for

Xiang-Rong Zhou; Akinobu Shimizu; Jun-ichi Hasegawa; Jun-ichiro Toriwaki; Takeshi Hara; Hiroshi Fujita

2001-01-01

190

3D painting documentation: evaluation of conservation conditions with 3D imaging and ranging techniques  

NASA Astrophysics Data System (ADS)

The monitoring of paintings, both on canvas and wooden support, is a crucial issue for the preservation and conservation of this kind of artwork. Many environmental factors (e.g. humidity, temperature, illumination, etc.), as well as bad conservation practices (e.g. wrong restorations, inappropriate locations, etc.), can compromise the material conditions over time and deteriorate an artwork. The article presents an on-going project realized by a multidisciplinary team composed of the ENEA UTICT 3D GraphLab, the 3D Optical Metrology Unit of the Bruno Kessler Foundation and the Soprintendenza per i Beni Storico Artistici ed Etnoantropologici of Bologna (Italy). The goal of the project is the multi-temporal 3D documentation and monitoring of paintings currently in a poor state of conservation, and the provision of metrics to quantify the deformations and damage.

Abate, D.; Menna, F.; Remondino, F.; Gattari, M. G.

2014-06-01

191

An Efficient 3D Imaging using Structured Light Systems  

NASA Astrophysics Data System (ADS)

Structured light 3D surface imaging has been crucial in the fields of image processing and computer vision, particularly in reconstruction, recognition and others. In this dissertation, we propose approaches to the development of an efficient 3D surface imaging system using structured light patterns, covering reconstruction, recognition and a sampling criterion. To achieve an efficient reconstruction system, we address the problem in its many dimensions. In the first, we extract geometric 3D coordinates of an object which is illuminated by a set of concentric circular patterns and reflected to a 2D image plane. The relationship between the original and the deformed shape of the light patterns due to a surface shape provides sufficient 3D coordinate information. In the second, we consider system efficiency. The efficiency, which can be quantified by the size of data, is improved by reducing the number of circular patterns to be projected onto an object of interest. Akin to the Shannon-Nyquist Sampling Theorem, we derive the minimum number of circular patterns which sufficiently represents the target object with no considerable information loss. Specific geometric information (e.g. the highest curvature) of an object is key to deriving the minimum sampling density. In the third, the object, represented using the minimum number of patterns, has incomplete color information (i.e. color information is given a priori along with the curves). An interpolation is carried out to complete the photometric reconstruction. The reconstruction is only approximate because the minimum number of patterns may not exactly reconstruct the original object. But the result does not show considerable information loss, and the performance of the approximate reconstruction is evaluated by performing recognition or classification. For object recognition, we use facial curves, which are deformed circular curves (patterns) on a target object. We simply carry out comparison between the facial curves of different faces or different expressions, and subsequently evaluate the performance of the reconstruction results. Since comparison between all pairs of curves can increase the computational complexity, we propose an approach for classification which is based on the shortest geodesic distances. Shape-based comparison is carried out because it shows robustness to scaling effects and rotation due to varying viewpoints. Previously, linear methods and non-linear methods have been investigated for dimensionality reduction to achieve efficient recognition/classification algorithms. However, existing approaches generate many parameters, leading to optimization procedures that sometimes do not provide an explicit solution. The proposed approach to dimensionality reduction for recognition is based on the property of the Fourier Transform whose magnitude response is symmetric and invariant to time-shift, and the results are much more explicit without loss of intrinsic information of targets. In practice, to achieve the reconstruction of a larger sized object, we use a multiple-projector-viewpoint (MPV) system. The minimum number of cameras and projectors is a critical part of achieving an efficient MPV system. For an alternative view of reconstruction, we apply the concepts of system identification to the reconstruction problem.
The first one is a general system identification determined by the ratio of the output to input, and the second one is a modulation-demodulation theory used to estimate an input (transmitted) signal from an output (received or observed) signal.

Lee, Deokwoo

192

Needle placement for piriformis injection using 3-D imaging.  

PubMed

Piriformis syndrome is a pain syndrome originating in the buttock and is attributed to 6% - 8% of patients referred for the treatment of back and leg pain. The treatment for piriformis syndrome using fluoroscopy, computed tomography (CT), electromyography (EMG), and ultrasound (US) has become standard practice. The treatment of piriformis syndrome has evolved to include fluoroscopy and EMG with CT guidance. We present a case study of 5 successful piriformis injections using 3-D computer-assisted electromagnetic needle tracking coupled with ultrasound. A 6-degree-of-freedom electromagnetic position tracker was attached to the ultrasound probe, which allowed the system to detect the position and orientation of the probe in the magnetic field. The tracked ultrasound probe was used to find the posterior superior iliac spine. Subsequently, 3 points were captured to register the ultrasound image with the CT or magnetic resonance image scan. Moreover, after the registration was obtained, the navigation system visualized the tracked needle relative to the CT scan in real-time using 2 orthogonal multi-planar reconstructions centered at the tracked needle tip. By comparison, a recent study revealed that fluoroscopically guided injections had 30% accuracy, whereas ultrasound-guided injections tripled that accuracy. This novel technique exhibited an accurate needle guidance injection precision of 98% while advancing to the piriformis muscle and avoiding the sciatic nerve. The mean (±SD) procedure time was 19.08 (±4.9) minutes. This technique allows for electromagnetic instrument tip tracking with real-time 3-D guidance to the selected target. As with any new technique, a learning curve is expected; however, this technique could offer an alternative, minimizing radiation exposure. PMID:23703429

Clendenen, Steven R; Candler, Shawn A; Osborne, Michael D; Palmer, Scott C; Duench, Stephanie; Glynn, Laura; Ghazi, Salim M

2013-01-01

193

3D Segmentation of Prostate Ultrasound images Using Wavelet Transform.  

PubMed

The current definitive diagnosis of prostate cancer is transrectal ultrasound (TRUS) guided biopsy. However, the current procedure is limited by using 2D biopsy tools to target 3D biopsy locations. This paper presents a new method for automatic segmentation of the prostate in three-dimensional transrectal ultrasound images, by extracting texture features and by statistically matching the geometrical shape of the prostate. A set of wavelet-based support vector machines (W-SVMs) is located and trained at different regions of the prostate surface. The W-SVMs capture texture priors of ultrasound images for classifying prostate and non-prostate tissues in different zones around the prostate boundary. In the segmentation procedure, these W-SVMs are trained in the sagittal, coronal, and transverse planes. The pre-trained W-SVMs are employed to tentatively label each voxel around the surface of the model as a prostate or non-prostate voxel by texture matching. After post-processing, the labeled voxels from the three planes are overlaid on a prostate probability model, which is created from 10 segmented prostate data sets. Consequently, each voxel has four labels: one from each of the sagittal, coronal, and transverse planes, plus one probability label. By defining a weight function for the labels in each region, each voxel is finally labeled as a prostate or non-prostate voxel. Experimental results on real patient data show the good performance of the proposed model in segmenting the prostate from ultrasound images. PMID:22468205
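The weight function that fuses the per-plane labels with the probability model is not given in the abstract; the following sketch illustrates the fusion step with hypothetical equal weights and a 0.5 decision threshold, just to make the idea of combining four per-voxel labels concrete.

    import numpy as np

    def fuse_voxel_labels(sag, cor, tra, prob, weights=(0.25, 0.25, 0.25, 0.25), thr=0.5):
        """Fuse three tentative per-plane labels (0/1 arrays of identical shape)
        with a prostate probability map into a final binary segmentation.
        The equal weights and the threshold are illustrative, not from the paper."""
        w = np.asarray(weights, dtype=float)
        score = w[0] * sag + w[1] * cor + w[2] * tra + w[3] * prob
        return score > thr

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        shape = (8, 8, 8)
        sag, cor, tra = (rng.random(shape) > 0.5 for _ in range(3))
        prob = rng.random(shape)                 # toy probability model
        mask = fuse_voxel_labels(sag, cor, tra, prob)
        print(mask.shape, mask.dtype)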

Akbari, Hamed; Yang, Xiaofeng; Halig, Luma V; Fei, Baowei

2011-01-01

194

GPU-accelerated denoising of 3D magnetic resonance images  

SciTech Connect

The raw computational power of GPU accelerators enables fast denoising of 3D MR images using bilateral filtering, anisotropic diffusion, and non-local means. In practice, applying these filtering operations requires setting multiple parameters. This study was designed to provide better guidance to practitioners for choosing the most appropriate parameters by answering two questions: what parameters yield the best denoising results in practice, and what tuning is necessary to achieve optimal performance on a modern GPU? To answer the first question, we use two different metrics, mean squared error (MSE) and mean structural similarity (MSSIM), to compare denoising quality against a reference image. Surprisingly, the best improvement in structural similarity with the bilateral filter is achieved with a small stencil size that lies within the range of real-time execution on an NVIDIA Tesla M2050 GPU. Moreover, inappropriate choices for parameters, especially scaling parameters, can yield very poor denoising performance. To answer the second question, we perform an autotuning study to empirically determine optimal memory tiling on the GPU. The variation in these results suggests that such tuning is an essential step in achieving real-time performance. These results have important implications for the real-time application of denoising to MR images in clinical settings that require fast turn-around times.
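For readers who want to reproduce the quality comparison, here is a small numpy sketch of the two metrics named above; the SSIM variant is a simplified single-window version (the paper's MSSIM averages SSIM over local windows), so treat it as an approximation rather than the authors' exact measure.

    import numpy as np

    def mse(ref, img):
        """Mean squared error between a reference and a denoised image."""
        return float(np.mean((ref.astype(np.float64) - img.astype(np.float64)) ** 2))

    def ssim_global(ref, img, data_range=255.0):
        """Simplified single-window SSIM (MSSIM would average local windows)."""
        x = ref.astype(np.float64)
        y = img.astype(np.float64)
        c1 = (0.01 * data_range) ** 2
        c2 = (0.03 * data_range) ** 2
        mx, my = x.mean(), y.mean()
        vx, vy = x.var(), y.var()
        cov = ((x - mx) * (y - my)).mean()
        return ((2 * mx * my + c1) * (2 * cov + c2)) / ((mx**2 + my**2 + c1) * (vx + vy + c2))

    if __name__ == "__main__":
        rng = np.random.default_rng(1)
        clean = rng.uniform(0, 255, size=(64, 64, 64))          # stand-in 3D MR volume
        noisy = clean + rng.normal(0, 20, size=clean.shape)     # toy noisy version
        print("MSE :", mse(clean, noisy))
        print("SSIM:", ssim_global(clean, noisy))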

Howison, Mark; Wes Bethel, E.

2014-05-29

195

Portable, low-priced retinal imager for eye disease screening  

NASA Astrophysics Data System (ADS)

The objective of this project was to develop and demonstrate a portable, low-priced, easy to use non-mydriatic retinal camera for eye disease screening in underserved urban and rural locations. Existing portable retinal imagers do not meet the requirements of a low-cost camera with sufficient technical capabilities (field of view, image quality, portability, battery power, and ease-of-use) to be distributed widely to low volume clinics, such as the offices of single primary care physicians serving rural communities or other economically stressed healthcare facilities. Our approach for Smart i-Rx is based primarily on a significant departure from current generations of desktop and hand-held commercial retinal cameras as well as those under development. Our techniques include: 1) exclusive use of off-the-shelf components; 2) integration of the retinal imaging device into a low-cost, high-utility camera mount and chin rest; 3) a unique optical and illumination design for a small form factor; 4) exploitation of the autofocus technology built into current consumer digital SLR cameras; and 5) integration of a polarization technique to avoid the corneal reflex. In a prospective study, 41 out of 44 diabetic subjects were imaged successfully. No imaging was attempted on three of the subjects due to noticeably small pupils (less than 2 mm). The images were of sufficient quality to detect abnormalities related to diabetic retinopathy, such as microaneurysms and exudates. These images were compared with ones taken non-mydriatically with a Canon CR-1 Mark II camera. No cases identified as having DR by expert retinal graders were missed in the Smart i-Rx images.

Soliz, Peter; Nemeth, Sheila; VanNess, Richard; Barriga, E. S.; Zamora, Gilberto

2014-02-01

196

Reconstructing Plants in 3D from a Single Image using Analysis-by-Synthesis  

E-print Network

Due to the high complexity of plant topology, dedicated methods for generating 3D plant models must be devised. We propose to generate a 3D model of a plant, using an analysis

Paris-Sud XI, Université de

197

Highly Undersampled 3D Golden Ratio Radial Imaging with Iterative Reconstruction  

E-print Network

.5 ms, TR = 7.1 ms, flip angle = 10°, matrix size 128³, applying 2D golden section sampling. ... of CS for 3D dynamic imaging using highly undersampled 3D radial acquisition with golden ratio profile

Lübeck, Universität zu

198

Omnidirectional Integral Photography images compression using the 3D-DCT  

E-print Network

... transform (3D-DCT) [6] encoder for use in omnidirectional IP image compression. The encoder utilizes the 2D traversal scheme based on the Hilbert space filling curve.

Athens, University of

199

Integration of 3-D Stereographic Imaging Techniques with a Large-Chamber Scanning Electron Microscope  

E-print Network

In addition, analytical maps can be overlaid on these 3-D models to further characterize the complex ... a stereographic pair of images with a six-degree offset. (Figure 2: generated 3-D topography models.)

Abidi, Mongi A.

200

High speed detection of retinal blood vessels in fundus image using phase congruency  

Microsoft Academic Search

Detection of blood vessels in retinal fundus image is the preliminary step to diagnose several retinal diseases. There exist several methods to automatically detect blood vessels from retinal image with the aid of different computational methods. However, all these methods require lengthy processing time. The method proposed here acquires binary vessels from a RGB retinal fundus image in almost real

M. Ashraful Amin; Hong Yan

2011-01-01

201

Preoperative 3D to intraoperative low-resolution 3D and 2D sequences of MR images.  

PubMed

We have developed an automatic model-based deformable registration method applicable to MR soft-tissue imaging. The registration algorithm uses a dynamic finite element (FE) continuum mechanics model of the tissue deformation to register preoperative 3D images with intraoperative 1) low-resolution 3D or 2) 2D MR images. The registration is achieved through a filtering process that combines information from the deformation model with observation errors based on the correlation ratio, mutual information, or sum of squared differences between images. Experimental results with a breast phantom show that the proposed method converges in a few iterations in the presence of very large deformations, similar to those typically observed in breast biopsy applications. PMID:22003650
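The abstract names three image-similarity measures used as observation errors; as a rough illustration (not the paper's code), the sketch below implements two of them, the sum of squared differences and a histogram-based mutual information, for already-resampled image arrays.

    import numpy as np

    def ssd(a, b):
        """Sum of squared differences between two registered image arrays."""
        return float(np.sum((a.astype(np.float64) - b.astype(np.float64)) ** 2))

    def mutual_information(a, b, bins=32):
        """Histogram-based mutual information between two image arrays."""
        joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
        pxy = joint / joint.sum()
        px = pxy.sum(axis=1, keepdims=True)            # marginal of a
        py = pxy.sum(axis=0, keepdims=True)            # marginal of b
        nz = pxy > 0
        return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        fixed = rng.random((64, 64, 16))               # stand-in image volumes
        moving = fixed + rng.normal(0, 0.05, fixed.shape)
        print("SSD:", ssd(fixed, moving))
        print("MI :", mutual_information(fixed, moving))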

Marami, Bahram; Sirouspour, Shahin; Capson, David W

2011-01-01

202

ROIC for gated 3D imaging LADAR receiver  

NASA Astrophysics Data System (ADS)

Time-of-flight laser range finding, deep-space communications, and scanning video imaging are three applications requiring very low noise optical receivers to detect fast and weak optical signals. HgCdTe electron-initiated avalanche photodiodes (e-APDs) operating in linear multiplication mode are the detector of choice thanks to their high quantum efficiency, high gain at low bias, high bandwidth, and low noise factor. In this project, a readout integrated circuit (ROIC) for a hybrid e-APD focal plane array (FPA) with 100 μm pitch was designed for a gated 3D-LADAR optical receiver. The ROIC works at 77 K and includes the unit cell circuit, column-level circuit, timing control, bias circuit, and output driver. The unit cell circuit is a key component, consisting of a preamplifier, correlated double sampling (CDS), bias circuit, and timing control module. Specifically, the preamplifier uses a capacitive feedback transimpedance amplifier (CTIA) structure with two capacitors that offer switchable capacitance for passive/active dual-mode imaging. The main block of the column-level circuit is a precision multiply-by-two circuit implemented with switched capacitors, a technique well suited to ROIC signal processing. The output driver is a simple unity-gain buffer; because the signal has already been amplified in the column-level circuit, the buffer uses a rail-to-rail amplifier. In active imaging mode, the integration time is 80 ns; for integration currents from 200 nA to 4 μA, the circuit shows nonlinearity of less than 1%. In passive imaging mode, the integration time is 150 ns; for integration currents from 1 nA to 20 nA, the nonlinearity is likewise less than 1%.
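The 1% nonlinearity figures are presumably computed from a measured transfer curve; the snippet below shows one common best-fit-line definition of integral nonlinearity as a percentage of full scale, offered only as a plausible reading of the metric, since the abstract does not state the exact convention used.

    import numpy as np

    def integral_nonlinearity(currents, outputs):
        """Best-fit-line nonlinearity (% of full scale) of a readout transfer curve."""
        currents = np.asarray(currents, dtype=float)
        outputs = np.asarray(outputs, dtype=float)
        slope, intercept = np.polyfit(currents, outputs, 1)
        residual = outputs - (slope * currents + intercept)
        return 100.0 * np.max(np.abs(residual)) / (outputs.max() - outputs.min())

    if __name__ == "__main__":
        i_in = np.linspace(200e-9, 4e-6, 20)                 # 200 nA to 4 uA sweep
        v_out = 2.5e5 * i_in + 1e-3 * np.sin(i_in * 1e7)     # toy transfer curve
        print(f"nonlinearity: {integral_nonlinearity(i_in, v_out):.3f} %")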

Chen, Guoqiang; Zhang, Junling; Wang, Pan; Zhou, Jie; Gao, Lei; Ding, Ruijun

2013-09-01

203

A Statistical Image-Based Shape Model for Visual Hull Reconstruction and 3D Structure Inference  

E-print Network

We present a statistical image-based shape + structure model for Bayesian visual hull reconstruction and 3D structure inference. The 3D shape of a class of objects is represented by sets of contours from silhouette ...

Grauman, Kristen

2003-05-22

204

Autostereoscopic 3D visualization and image processing system for neurosurgery.  

PubMed

A demonstrator system for planning neurosurgical procedures was developed based on commercial hardware and software. The system combines an easy-to-use environment for surgical planning with high-end visualization and the opportunity to analyze data sets for research purposes. The demonstrator system is based on the software AMIRA. Specific algorithms for segmentation, elastic registration, and visualization have been implemented and adapted to the clinical workflow. Modules from AMIRA and the image processing library Insight Segmentation and Registration Toolkit (ITK) can be combined to solve various image processing tasks. Customized modules tailored to specific clinical problems can easily be implemented using the AMIRA application programming interface and a self-developed framework for ITK filters. Visualization is done via autostereoscopic displays, which provide a 3D impression without viewing aids. A Spaceball device allows a comfortable, intuitive way of navigation in the data sets. Via an interface to a neurosurgical navigation system, the demonstrator system can be used intraoperatively. The precision, applicability, and benefit of the demonstrator system for planning of neurosurgical interventions and for neurosurgical research were successfully evaluated by neurosurgeons using phantom and patient data sets. PMID:23740656

Meyer, Tobias; Ku, Julia; Uhlemann, Falk; Wagner, Stefan; Kirsch, Matthias; Sobottka, Stephan B; Steinmeier, Ralf; Schackert, Gabriele; Morgenstern, Ute

2013-06-01

205

3D imaging of enzymes working in situ.  

PubMed

Today, the development of slowly digestible food with positive health impact and the production of biofuels are matters of intense research. The latter is achieved via enzymatic hydrolysis of starch or of biomass such as lignocellulose. Label-free imaging, using UV autofluorescence, provides a great tool to follow a single enzyme acting on a non-UV-fluorescent substrate. In this article, we report synchrotron DUV fluorescence in 3-dimensional imaging to visualize in situ the diffusion of enzymes on a solid substrate. The degradation pathway of single starch granules by two amylases optimized for biofuel production and industrial starch hydrolysis was followed by tryptophan autofluorescence (excitation at 280 nm, emission filter at 350 nm). The new setup has been specially designed and developed for a 3D representation of the enzyme-substrate interaction during hydrolysis. Thus, this tool is particularly effective for improving knowledge and understanding of the enzymatic hydrolysis of solid substrates such as starch and lignocellulosic biomass. It could open the way to new routes in the field of green chemistry and sustainable development, that is, in biotechnology, biorefining, or biofuels. PMID:24796213

Jamme, F; Bourquin, D; Tawil, G; Viks-Nielsen, A; Buléon, A; Réfrégiers, M

2014-06-01

206

3-D Reconstruction From 2-D Radiographic Images and Its Application to Clinical Veterinary Medicine  

NASA Astrophysics Data System (ADS)

3D imaging techniques are important and indispensable in diagnosis. The mainstream approach reconstructs a 3D image from a set of slice images, as in X-ray CT and MRI. However, these systems require a large space and are costly. A low-cost, compact 3D imaging system is therefore needed in clinical veterinary medicine, for example for diagnosis in an X-ray car or in a pasture. We propose a novel 3D imaging technique that uses 2-D X-ray radiographic images. The system can be realized at a lower cost than X-ray CT and makes it possible to obtain 3D images with an X-ray car or portable X-ray equipment. In this paper, a 3D visualization technique from 2-D radiographic images is proposed and several reconstructions are shown. These reconstructions are evaluated by veterinarians.

Hamamoto, Kazuhiko; Sato, Motoyoshi

207

Blood Flow Magnetic Resonance Imaging of Retinal Degeneration  

E-print Network

PURPOSE: This study aims to investigate quantitative basal blood flow as well as hypercapnia- and hyperoxia-induced blood flow changes in the retinas of the Royal College of Surgeons (RCS

Duong, Timothy Q.

208

Retinal images: Blood vessel segmentation by threshold probing  

Microsoft Academic Search

An automated system for screening and diagnosis of diabetic retinopathy should segment blood vessels from color retinal images to assist ophthalmologists. We present a method for blood vessel enhancement and segmentation. This paper proposes a wavelet-based method for vessel enhancement, with piecewise threshold probing and adaptive thresholding for vessel localization and segmentation, respectively. The method is tested on publicly

M. Usman Akram; Aasia Khanum

2010-01-01

209

On the Small Vessel Detection in High Resolution Retinal Images  

Microsoft Academic Search

In this paper, we propose a new scheme for the detection of small blood vessels in retinal images. A novel filter called the Gabor variance filter and a modified histogram equalization technique are developed to enhance the contrast between vessels and background. Vessel segmentation is then performed on the enhanced map using thresholding and branch pruning based on the vessel structures. The

Ming Zhang; Di Wu; Jyh-Charn Liu

2005-01-01

210

3-D Image Registration Using Fast Fourier Transform, with Potential Applications to Geoinformatics and Bioinformatics  

E-print Network

(e.g., in protein docking) and geoinformatics (e.g., in earth modeling). Keywords: image registration, 3-D images, bioinformatics. Registration of 3-D images is an important problem in areas such as bioinformatics (e

Texas at Austin, University of

211

Retinal Image Analysis Using Curvelet Transform and Multistructure Elements Morphology by Reconstruction  

Microsoft Academic Search

Retinal images can be used in several applications, such as ocular fundus operations as well as human recognition. Also, they play important roles in detection of some diseases in early stages, such as diabetes, which can be performed by comparison of the states of retinal blood vessels. Intrinsic characteristics of retinal images make the blood vessel detection process difficult.

Mohammad Saleh Miri; Ali Mahloojifar

2011-01-01

212

Retinal image enhancement based on the human visual system  

NASA Astrophysics Data System (ADS)

Improving the quality of gray level images continues to be a challenging task, and the challenge increases for color images due to the interaction of multiple parameters within a scene. Each color plane or wavelength constitutes an image by itself, and its quality depends on many parameters such as absorption, reflectance, or scattering of the object with the lighting source. Non-uniformity of the lighting, optics, electronics of the camera, and even the environment of the object are sources of degradation in the image. Therefore, segmentation and interpretation of the image may become very difficult if its quality is not enhanced. The main goal of the present work is to demonstrate an image processing algorithm inspired by concepts of the Human Visual System (HVS). HVS concepts have been widely used in gray level image enhancement, and here we show how they can be successfully extended to color images. The resulting Multi-Scale Spatial Decomposition (MSSD) is employed to enhance the quality of color images. Of particular interest for medical imaging is the enhancement of retinal images, whose quality is extremely sensitive to imaging artifacts. We show that our MSSD algorithm improves the readability and gradeability of retinal images and quantify such improvements using both subjective and objective metrics of image quality.
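The abstract does not detail the Multi-Scale Spatial Decomposition itself; the sketch below is a generic multi-scale (Gaussian band-pass) enhancement of a single color plane, given only to illustrate the kind of decomposition the text describes, with made-up scales and gains.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def multiscale_enhance(channel, sigmas=(2.0, 8.0, 32.0), gains=(1.5, 1.2, 1.0)):
        """Multi-scale enhancement of one color plane.

        The image is split into band-pass detail layers at several spatial scales
        plus a heavily smoothed base layer; the detail layers are re-weighted by
        `gains` and recombined. The scales and gains here are arbitrary choices.
        """
        img = channel.astype(np.float64)
        out = gaussian_filter(img, sigmas[-1])            # base (illumination) layer
        prev = img
        for sigma, gain in zip(sigmas, gains):
            low = gaussian_filter(img, sigma)
            out += gain * (prev - low)                    # band-pass detail at this scale
            prev = low
        return np.clip(out, 0.0, 255.0)

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        toy = rng.uniform(0, 255, size=(128, 128))        # stand-in retinal color plane
        enhanced = multiscale_enhance(toy)
        print(enhanced.shape, enhanced.min(), enhanced.max())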

Belkacem-Boussaid, Kamel; Raman, Balaji; Zamora, Gilberto; Srinivasan, Yeshwanth; Bursell, Sven-Erick

2006-03-01

213

3-D Seismic Methods for Shallow Imaging Beneath Pavement  

E-print Network

(Table of contents excerpt covering near-surface 3D reflection survey design: receiver and source interval, receiver and source line interval, migration apron, fold, symmetrical sampling, and 3D survey designs applied to the Autojuggie.) ... section can be thought of as a cross-section, or single slice, of the 3D data volume. From this we can see that a 2D line provides a fraction of the information that is available from a 3D data cube. Migration is also a consideration. Migration...

Miller, Brian

2013-05-31

214

Retinal Fundus Image Registration via Vascular Structure Graph Matching.  

PubMed

Motivated by the observation that a retinal fundus image may contain some unique geometric structures within its vascular trees which can be utilized for feature matching, in this paper we propose a graph-based registration framework called GM-ICP to align pairwise retinal images. First, the retinal vessels are automatically detected and represented as vascular structure graphs. Graph matching is then performed to find global correspondences between vascular bifurcations. Finally, a revised ICP algorithm incorporating a quadratic transformation model is used at the fine level to register vessel shape models. In order to eliminate incorrect matches from the global correspondence set obtained via graph matching, we propose a structure-based sample consensus (STRUCT-SAC) algorithm. The advantages of our approach are threefold: (1) a global optimum solution can be achieved with graph matching; (2) our method is invariant to linear geometric transformations; and (3) heavy local feature descriptors are not required. The effectiveness of our method is demonstrated by experiments with 48 pairs of retinal images collected from clinical patients. PMID:20871853

Deng, Kexin; Tian, Jie; Zheng, Jian; Zhang, Xing; Dai, Xiaoqian; Xu, Min

2010-01-01

215

Retinal Fundus Image Registration via Vascular Structure Graph Matching  

PubMed Central

Motivated by the observation that a retinal fundus image may contain some unique geometric structures within its vascular trees which can be utilized for feature matching, in this paper we propose a graph-based registration framework called GM-ICP to align pairwise retinal images. First, the retinal vessels are automatically detected and represented as vascular structure graphs. Graph matching is then performed to find global correspondences between vascular bifurcations. Finally, a revised ICP algorithm incorporating a quadratic transformation model is used at the fine level to register vessel shape models. In order to eliminate incorrect matches from the global correspondence set obtained via graph matching, we propose a structure-based sample consensus (STRUCT-SAC) algorithm. The advantages of our approach are threefold: (1) a global optimum solution can be achieved with graph matching; (2) our method is invariant to linear geometric transformations; and (3) heavy local feature descriptors are not required. The effectiveness of our method is demonstrated by experiments with 48 pairs of retinal images collected from clinical patients. PMID:20871853

Deng, Kexin; Tian, Jie; Zheng, Jian; Zhang, Xing; Dai, Xiaoqian; Xu, Min

2010-01-01

216

3D imaging of nanomaterials by discrete tomography.  

PubMed

The field of discrete tomography focuses on the reconstruction of samples that consist of only a few different materials. Ideally, a three-dimensional (3D) reconstruction of such a sample should contain only one grey level for each of the compositions in the sample. By exploiting this property in the reconstruction algorithm, either the quality of the reconstruction can be improved significantly, or the number of required projection images can be reduced. The discrete reconstruction typically contains fewer artifacts and does not have to be segmented, as it already contains one grey level for each composition. Recently, a new algorithm, called discrete algebraic reconstruction technique (DART), has been proposed that can be used effectively on experimental electron tomography datasets. In this paper, we propose discrete tomography as a general reconstruction method for electron tomography in materials science. We describe the basic principles of DART and show that it can be applied successfully to three different types of samples, consisting of embedded ErSi(2) nanocrystals, a carbon nanotube grown from a catalyst particle and a single gold nanoparticle, respectively. PMID:19269094
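DART alternates continuous reconstruction steps with a discretization step that snaps voxels to the few known grey levels; the toy sketch below shows only that discretization idea (not the full DART loop with boundary-pixel refinement), with made-up grey levels.

    import numpy as np

    def discretize(recon, grey_levels):
        """Map each voxel of a continuous reconstruction to the nearest of a
        small set of known grey levels (one per material composition)."""
        levels = np.asarray(grey_levels, dtype=float)
        idx = np.argmin(np.abs(recon[..., None] - levels), axis=-1)
        return levels[idx]

    if __name__ == "__main__":
        rng = np.random.default_rng(2)
        # toy volume with two materials (0 = vacuum, 1 = particle) plus noise
        truth = (rng.random((32, 32, 32)) > 0.7).astype(float)
        noisy_recon = truth + rng.normal(0, 0.2, truth.shape)
        segmented = discretize(noisy_recon, [0.0, 1.0])
        print("voxel error rate:", np.mean(segmented != truth))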

Batenburg, K J; Bals, S; Sijbers, J; Kübel, C; Midgley, P A; Hernandez, J C; Kaiser, U; Encina, E R; Coronado, E A; Van Tendeloo, G

2009-05-01

217

Inferring 3D scene structure from a single polarization image  

Microsoft Academic Search

This paper presents a method for deducing both the 3D orientation of a flat, rough surface and the 3D position of the light source by analyzing the specular reflection produced by the light source on the surface. This is achieved by polarization analysis of the reflected light from a single point of view. This new approach is applicable to all

Stefan Rahmann

1999-01-01

218

Facial image comparison using 3D techniques  

E-print Network

... acquisition techniques have been improved. Therefore, some face recognition methods have been extended for 3-dimensional purposes. Using 3D models one can deal with one main problem in 2D face recognition: the pose. ... together with 3D modeling software, offer the possibility of flexible and reproducible positioning

Veltkamp, Remco

219

Computer 3D site model generation based on aerial images  

Microsoft Academic Search

The technology for 3D model design of real-world scenes and its photorealistic rendering are current topics of investigation. Such technology is attractive for a wide variety of applications: military mission planning, crew training, civil engineering, architecture, and virtual reality entertainment, to mention just a few. 3D photorealistic models of urban areas are often discussed now as upgrade

Sergei Y. Zheltov; Yuri B. Blokhinov; Alexander A. Stepanov; Sergei V. Skryabin; Alexander V. Sibiryakov

1997-01-01

220

3D mapping from high resolution satellite images  

NASA Astrophysics Data System (ADS)

In recent years 3D information has become more easily available. Users' needs are constantly increasing, adapting to this reality, and 3D maps are in greater demand. 3D models of the terrain in CAD or other environments have already become common practice; however, one is bound to the computer screen. This is why contemporary digital methods have been developed in order to produce portable and, hence, handier 3D maps of various forms. This paper deals with the implementation of the procedures necessary to produce holographic 3D maps and three-dimensionally printed maps. The main objective is the production of three-dimensional maps from high resolution aerial and/or satellite imagery with the use of holography as well as 3D printing methods. The island of Antiparos was chosen as the study area, as suitable data were readily available: two stereo pairs of GeoEye-1 imagery and a high resolution DTM of the island. First, the theoretical bases of holography and 3D printing are described, and the two methods are analyzed and their implementation explained. In practice, an x-axis parallax holographic map of Antiparos Island and a full parallax (x-axis and y-axis) holographic map are created and printed using the holographic method. Moreover, a three-dimensionally printed map of the study area has been created using the 3D printing (3DP) method. The results are evaluated for their usefulness and efficiency.

Goulas, D.; Georgopoulos, A.; Sarakenos, A.; Paraschou, Ch.

2013-08-01

221

Satellite-borne high-resolution 3D active imaging lidar  

Microsoft Academic Search

Owing to its notable advantages in range, resolution, and accuracy, satellite-borne high-resolution 3D imaging lidar has found widespread application in aerospace reconnaissance, deep-space detection, earth observation, disaster evaluation, and so on. Based on the principle of 3D laser imaging, typical satellite-borne high-resolution 3D active imaging lidar systems are reviewed and the development trend is analyzed. Some conclusions can be

Fangpei Zhang; Haizhong Xue; Zhongjie Liu; Yubing Zhang; Yuhua Xing; Guangyan Dong; Shengguo Wang; Xiafei Wu; Yingxiang Song

2011-01-01

222

A Level Set Method for Anisotropic Geometric Diffusion in 3D Image Processing  

Microsoft Academic Search

A new morphological multiscale method in 3D image processing is presented which combines the image processing methodology based on nonlinear diffusion equations and the theory of geometric evolution problems. Its aim is to smooth level sets of a 3D image while simultaneously preserving geometric features such as edges and corners on the level sets. This is obtained by an

Martin Rumpf

2000-01-01

223

A Range Image Refinement Technique for Multi-view 3D Model Reconstruction  

Microsoft Academic Search

This paper presents a range image refinement technique for generating accurate 3D computer models of real objects. Range images obtained from a stereo-vision system typically experience geometric distortions on reconstructed 3D surfaces due to inherent stereo matching problems such as occlusions or mismatchings. This paper introduces a range image refinement technique to correct such erroneous ranges by

Soon-yong Park; Murali Subbarao

2003-01-01

224

3D LIGHTING-BASED IMAGE FORGERY DETECTION USING SHAPE-FROM-SHADING  

E-print Network

... objective to faithfully detect image forgeries. Compared to authentication based on digital watermarking, ... which is new in the field of image forgery detection. Our motivation was to use 3D surface normals

Boyer, Edmond

225

DXSoil, a library for 3D image analysis in soil science  

Microsoft Academic Search

A comprehensive series of routines has been developed to extract structural and topological information from 3D images of porous media. The main application aims at feeding a pore network approach to simulate unsaturated hydraulic properties from soil core images. Beyond the application example, the successive algorithms presented in the paper allow, from any 3D object image, the extraction of the

Jean-François Delerue; Edith Perrier

2002-01-01

226

Segmentation of 3D CT Volume Images Using a Single 2D Atlas  

E-print Network

... on the segmentation of complex CT/MR images using the atlas-based approach. Most existing methods use 3D atlases ... for the segmentation of brain images. This paper presents a method that can segment multiple slices of an abdominal CT

Leow, Wee Kheng

227

Segmentation of Skull in 3D Human MR Images Using Mathematical Morphology  

E-print Network

Prior to the segmentation of the skull, we segment the scalp and the brain from the MR image. The scalp mask ... in 3D T1-weighted human MR images. Keywords: skull segmentation, scalp segmentation, morphology, MR

Leahy, Richard M.

228

Automatic Calibration of a Robotized 3D Ultrasound Imaging System by Visual Servoing  

E-print Network

... held by a medical robot. Index terms: visual servoing, 3D ultrasound imaging, medical robotics. Three-dimensional ultrasound imaging is used in numerous medical applications such as vascular

Paris-Sud XI, Université de

229

Improving retinal imaging by corneal refractive index matching.  

PubMed

Imaging the retina at high resolution requires a dilated pupil, which in turn exposes more corneal irregularities. We diminish the optical errors of the cornea by refractive index matching. Lens-fitted goggles were used for corneal immersion, to reduce its aberrations, while keeping the ocular power. An additional aspheric plate reduced the residual ocular spherical aberration. A comparison of the index-matching-based retinal images with those acquired directly shows resolution improvement for subjects with normal extent of ocular aberrations. A simulation of the point spread function, obtained from an averaged ocular and corneal wavefront error, also reveals substantial improvement when using corneal index matching. The demonstrated improvement using index matching may enable further improvement of current retinal imaging techniques or relaxing requirements for active ocular aberration correction. PMID:23455285

Meitav, N; Ribak, E N; Goncharov, A V

2013-03-01

230

Hydraulic conductivity imaging from 3-D transient hydraulic tomography at several pumping/observation densities  

E-print Network

3-D hydraulic tomography (3-D HT) ... (primarily hydraulic conductivity, K) is estimated by joint inversion of head change data from multiple

Barrash, Warren

231

Class-specific grasping of 3D objects from a single 2D image  

E-print Network

Our goal is to grasp 3D objects given a single image, by using prior 3D shape models of object classes. The shape models, defined as a collection of oriented primitive shapes centered at fixed 3D positions, can be learned ...

Chiu, Han-Pang

232

3D Shape from Silhouette Points in Registered 2D Images Using Conjugate Gradient Method  

E-print Network

... a number of silhouette points obtained from two or more viewpoints and a parametric model of the shape. Our ... the silhouette points ... to their closest silhouette points on the 3D shape. The solution is found using

Hoff, William A.

233

13. On the Computation of Image Motion and Heading in a 3-D Cluttered Scene  

E-print Network

In particular, densely cluttered 3-D scenes such as a forest or grassland. When an observer moves through a densely cluttered 3-D scene, the occlusion relationships between the

Langer, Michael

234

3D-3D registration of partial capitate bones using spin-images  

NASA Astrophysics Data System (ADS)

It is often necessary to register partial objects in medical imaging. Due to limited field of view (FOV), the entirety of an object cannot always be imaged. This study presents a novel application of an existing registration algorithm to this problem. The spin-image algorithm [1] creates pose-invariant representations of global shape with respect to individual mesh vertices. These 'spin-images' are then compared for two different poses of the same object to establish correspondences and subsequently determine the relative orientation of the poses. In this study, the spin-image algorithm is applied to 4DCT-derived capitate bone surfaces to assess the relative accuracy of registration with various amounts of geometry excluded. The limited longitudinal coverage of the 4DCT technique (38.4 mm [2]) results in partial views of the capitate when imaging wrist motions. This study assesses the ability of the spin-image algorithm to register partial bone surfaces by artificially restricting the capitate geometry available for registration. Under IRB approval, standard static CT and 4DCT scans were obtained on a patient. The capitate was segmented from the static CT and from one phase of 4DCT in which the whole bone was available. Spin-image registration was performed between the static and 4DCT surfaces. Distal portions of the 4DCT capitate (10-70%) were then progressively removed and registration was repeated. Registration accuracy was evaluated by angular errors and the percentage of sub-resolution fitting. It was determined that 60% of the distal capitate could be omitted without appreciable effect on registration accuracy using the spin-image algorithm (angular error < 1.5 degree, sub-resolution fitting < 98.4%).
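For readers unfamiliar with spin-images, here is a compact numpy sketch of the classic construction (an oriented point's 2D histogram of radial and axial offsets over the mesh vertices); the bin size and image width are arbitrary placeholders, and this is an illustration of the general technique rather than the study's implementation.

    import numpy as np

    def spin_image(p, n, vertices, bin_size=1.0, image_width=16):
        """Spin-image of an oriented point (p, n) over a set of mesh vertices.

        Each vertex is mapped to cylindrical coordinates (alpha, beta) about the
        surface normal n and accumulated into a 2D histogram, giving a
        pose-invariant shape descriptor.
        """
        n = n / np.linalg.norm(n)
        d = vertices - p
        beta = d @ n                                         # signed distance along normal
        alpha = np.sqrt(np.maximum(np.sum(d * d, axis=1) - beta**2, 0.0))
        i = np.floor((image_width / 2 - beta) / bin_size).astype(int)
        j = np.floor(alpha / bin_size).astype(int)
        keep = (i >= 0) & (i < image_width) & (j >= 0) & (j < image_width)
        img = np.zeros((image_width, image_width))
        np.add.at(img, (i[keep], j[keep]), 1.0)
        return img

    if __name__ == "__main__":
        rng = np.random.default_rng(4)
        verts = rng.normal(size=(500, 3))                    # stand-in mesh vertices
        img = spin_image(verts[0], np.array([0.0, 0.0, 1.0]), verts)
        print(img.shape, img.sum())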

Breighner, Ryan; Holmes, David R.; Leng, Shuai; An, Kai-Nan; McCollough, Cynthia; Zhao, Kristin

2013-03-01

235

Segmental reproducibility of retinal blood flow velocity measurements using retinal function imager  

PubMed Central

Background To evaluate the reproducibility of blood flow velocity measurements of individual retinal blood vessel segments using the retinal function imager (RFI). Methods Eighteen eyes of 15 healthy subjects were enrolled prospectively at three centers. All subjects underwent RFI imaging in two separate sessions 15 min apart by a single experienced photographer at each center. An average of five to seven serial RFI images were obtained. All images were transferred electronically to one center, and were analyzed by a single observer. Multiple blood vessel segments (each shorter than 100 μm) were co-localized on first and second session images taken at different times of the same fundus using built-in software. Velocities of corresponding segments were determined, and then the inter-session reproducibility of flow velocity was assessed by the concordance correlation coefficient (CCC), coefficient of reproducibility (CR), and coefficient of variation (CV). Results Inter-session CCC for flow velocity was 0.97 (95% confidence interval (CI), 0.966 to 0.9797). The CR was 1.49 mm/sec (95% CI, 1.39 to 1.59 mm/sec), and the CV was 10.9%. The average arterial blood flow velocity was 3.16 mm/sec, and the average venous blood flow velocity was 3.15 mm/sec. The CR for arterial and venous blood flow velocity was 1.61 mm/sec and 1.27 mm/sec, respectively. Conclusion RFI provides reproducible measurements of retinal blood flow velocity for individual blood vessel segments, with 10.9% variability. PMID:23700326
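The abstract reports CCC, CR, and CV values without formulas; the snippet below computes these statistics from paired session measurements using standard textbook definitions (Lin's CCC, a Bland-Altman style CR, and a within-subject CV), which may differ in detail from the authors' exact conventions.

    import numpy as np

    def concordance_cc(x, y):
        """Lin's concordance correlation coefficient between paired measurements."""
        x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
        mx, my = x.mean(), y.mean()
        vx, vy = x.var(), y.var()
        cov = ((x - mx) * (y - my)).mean()
        return 2 * cov / (vx + vy + (mx - my) ** 2)

    def coefficient_of_repeatability(x, y):
        """Bland-Altman style CR: 1.96 * SD of the paired differences."""
        return 1.96 * np.std(np.asarray(x, dtype=float) - np.asarray(y, dtype=float), ddof=1)

    def coefficient_of_variation(x, y):
        """Within-subject CV (%): SD of differences / sqrt(2), relative to the grand mean."""
        x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
        sw = np.std(x - y, ddof=1) / np.sqrt(2.0)
        return 100.0 * sw / np.mean(np.r_[x, y])

    if __name__ == "__main__":
        rng = np.random.default_rng(3)
        session1 = rng.normal(3.15, 0.8, size=50)            # toy velocities, mm/sec
        session2 = session1 + rng.normal(0, 0.3, size=50)    # repeat measurement
        print(concordance_cc(session1, session2))
        print(coefficient_of_repeatability(session1, session2))
        print(coefficient_of_variation(session1, session2))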

Chhablani, Jay; Bartsch, Dirk-Uwe; Kozak, Igor; Cheng, Lingyun; Alshareef, Rayan A; Rezeq, Sami S; Sampat, Kapil M; Garg, Sunir J; Burgansky-Eliash, Zvia; Freeman, William R

2013-01-01

236

Adaptive optics with pupil tracking for high resolution retinal imaging.  

PubMed

Adaptive optics, when integrated into retinal imaging systems, compensates for rapidly changing ocular aberrations in real time and results in improved high resolution images that reveal the photoreceptor mosaic. Imaging the retina at high resolution has numerous potential medical applications, and yet for the development of commercial products that can be used in the clinic, the complexity and high cost of the present research systems have to be addressed. We present a new method to control the deformable mirror in real time based on pupil tracking measurements which uses the default camera for the alignment of the eye in the retinal imaging system and requires no extra cost or hardware. We also present the first experiments done with a compact adaptive optics flood illumination fundus camera where it was possible to compensate for the higher order aberrations of a moving model eye and in vivo in real time based on pupil tracking measurements, without the real time contribution of a wavefront sensor. As an outcome of this research, we showed that pupil tracking can be effectively used as a low cost and practical adaptive optics tool for high resolution retinal imaging because eye movements constitute an important part of the ocular wavefront dynamics. PMID:22312577

Sahin, Betul; Lamory, Barbara; Levecq, Xavier; Harms, Fabrice; Dainty, Chris

2012-02-01

237

Real-time computer-generated integral imaging and 3D image calibration for augmented reality surgical navigation.  

PubMed

Autostereoscopic 3D image overlay for augmented reality (AR) based surgical navigation has been studied and reported many times. For the purpose of surgical overlay, the 3D image is expected to have the same geometric shape as the original organ and to be transformable to a specified location for image overlay. However, how to generate a 3D image with high geometric fidelity, and how to evaluate the 3D image's geometric accuracy quantitatively, have not been addressed. This paper proposes a graphics processing unit (GPU) based computer-generated integral imaging pipeline for real-time autostereoscopic 3D display, and an automatic closed-loop 3D image calibration paradigm for displaying undistorted 3D images. Based on the proposed methods, a novel AR device for 3D image surgical overlay is presented, which mainly consists of a 3D display, an AR window, a stereo camera for 3D measurement, and a workstation for information processing. The evaluation of the 3D image rendering performance with 2560 x 1600 elemental image resolution shows rendering speeds of 50-60 frames per second (fps) for surface models, and 5-8 fps for large medical volumes. The evaluation of the undistorted 3D image after calibration yields sub-millimeter geometric accuracy. A phantom experiment simulating oral and maxillofacial surgery was also performed to evaluate the proposed AR overlay device in terms of image registration accuracy, 3D image overlay accuracy, and the visual effects of the overlay. The experimental results show satisfactory image registration and image overlay accuracy, and confirm the system usability. PMID:25465067

Wang, Junchen; Suenaga, Hideyuki; Liao, Hongen; Hoshi, Kazuto; Yang, Liangjing; Kobayashi, Etsuko; Sakuma, Ichiro

2014-11-18

238

3D lidar imaging for detecting and understanding plant responses and canopy structure  

Microsoft Academic Search

Understanding and diagnosing plant responses to stress will benefit greatly from three-dimensional (3D) measurement and analysis of plant properties because plant responses are strongly related to their 3D structures. Light detection and ranging (lidar) has recently emerged as a powerful tool for direct 3D measurement of plant structure. Here the use of 3D lidar imaging to estimate plant properties

Kenji Omasa; Fumiki Hosoi; Atsumi Konishi

2006-01-01

239

3D computer graphic of Fuji's mountain by using triple stereo images  

Microsoft Academic Search

This research applies three-dimensional (3D) reconstruction techniques to a disaster visualization system for the mountain. The aim is to develop a 3D model of the mountain from triple images and to work toward a real-time reconstruction process for the 3D mountain. The process consists of seven stages to develop the 3D model of the mountain surface. In

Weerakaset Suanpaga; Watcharin Witayakul; Somsak Chotichanathawewong

2012-01-01

240

Accuracy of volume measurement using 3D ultrasound and development of CT-3D US image fusion algorithm for prostate cancer radiotherapy  

SciTech Connect

Purpose: To evaluate the accuracy of measuring volumes using three-dimensional ultrasound (3D US), and to verify the feasibility of replacing CT-MR fusion images with CT-3D US in radiotherapy treatment planning. Methods: Phantoms, consisting of water, contrast agent, and agarose, were manufactured. The volume was measured using 3D US, CT, and MR devices. CT-3D US and MR-3D US image fusion software was developed using the Insight Toolkit library in order to acquire three-dimensional fusion images. The quality of the image fusion was evaluated using metric values and fusion images. Results: Volume measurement using 3D US shows a 2.8 ± 1.5% error, compared with a 4.4 ± 3.0% error for CT and a 3.1 ± 2.0% error for MR. The results imply that volume measurement using 3D US devices has an accuracy level similar to that of CT and MR. Three-dimensional image fusion of CT-3D US and MR-3D US was successfully performed using phantom images. Moreover, MR-3D US image fusion was performed using human bladder images. Conclusions: 3D US could be used in the volume measurement of human bladders and prostates. CT-3D US image fusion could be used in monitoring the target position in each fraction of external beam radiation therapy. Moreover, the feasibility of replacing CT-MR image fusion with CT-3D US in radiotherapy treatment planning was verified.

Baek, Jihye; Huh, Jangyoung; Hyun An, So; Oh, Yoonjin [Department of Medical Sciences, Ewha Womans University, Seoul 158-710 (Korea, Republic of); Kim, Myungsoo; Kim, DongYoung; Chung, Kwangzoo; Cho, Sungho; Lee, Rena [Department of Radiation Oncology, School of Medicine, Ewha Womans University, Seoul 158-710 (Korea, Republic of)

2013-02-15

241

Imaging System for Creating 3D Block-Face Cryo-Images Of Whole Mice  

PubMed Central

We developed a cryomicrotome/imaging system that provides high resolution, high sensitivity block-face images of whole mice or excised organs, and applied it to a variety of biological applications. With this cryo-imaging system, we sectioned cryo-preserved tissues at 2-40 μm thickness and acquired high resolution brightfield and fluorescence images with microscopic in-plane resolution (as good as 1.2 μm). Brightfield images of normal and pathological anatomy show exquisite detail, especially in the abdominal cavity. Multi-planar reformatting and 3D renderings allow one to interrogate 3D structures. In this report, we present brightfield images of mouse anatomy, as well as 3D renderings of organs. For the BPK mouse model of polycystic kidney disease, we compared brightfield cryo-images and kidney volumes to MRI. The color images provided greater contrast and resolution of cysts as compared to in vivo MRI. We note that color cryo-images are closer to what a researcher sees in dissection, making it easier to interpret the image data. The combination of field of view, depth of field, ultra-high resolution, and color/fluorescence contrast enables cryo-image volumes to provide details that cannot be found through in vivo imaging or other ex vivo optical imaging approaches. We believe that this novel imaging system will have applications that include identification of mouse phenotypes; characterization of diseases like blood vessel disease, kidney disease, and cancer; assessment of drug and gene therapy delivery and efficacy; and validation of other imaging modalities. PMID:19802364

Roy, Debashish; Breen, Michael; Salvado, Olivier; Heinzel, Meredith; McKinley, Eliot; Wilson, David

2009-01-01

242

3D Prostate Segmentation of Ultrasound Images Combining Longitudinal Image Registration and Machine Learning  

PubMed Central

We developed a three-dimensional (3D) segmentation method for transrectal ultrasound (TRUS) images, which is based on longitudinal image registration and machine learning. Using longitudinal images of each individual patient, we register previously acquired images to the new images of the same subject. Three orthogonal Gabor filter banks were used to extract texture features from each registered image. Patient-specific Gabor features from the registered images are used to train kernel support vector machines (KSVMs) and then to segment the newly acquired prostate image. The segmentation method was tested on TRUS data from five patients. The average surface distance between our segmentation and manual segmentation is 1.18 ± 0.31 mm, indicating that our automatic segmentation method based on longitudinal image registration is feasible for segmenting the prostate in TRUS images. PMID:24027622
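As a rough, 2D per-slice stand-in for the orthogonal Gabor filter banks mentioned above (the paper's filters are 3D and their parameters are not given here), the sketch below builds a small bank of oriented Gabor kernels and stacks their responses as per-pixel texture features.

    import numpy as np
    from scipy.ndimage import convolve

    def gabor_kernel(sigma=3.0, theta=0.0, lam=8.0, gamma=0.5, size=21):
        """Real-valued 2D Gabor kernel at orientation theta (radians)."""
        half = size // 2
        y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(np.float64)
        xr = x * np.cos(theta) + y * np.sin(theta)
        yr = -x * np.sin(theta) + y * np.cos(theta)
        g = np.exp(-(xr**2 + (gamma * yr)**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / lam)
        return g - g.mean()                                 # zero-mean kernel

    def gabor_features(image, n_orientations=4):
        """Stack of Gabor filter responses used as per-pixel texture features."""
        img = image.astype(np.float64)
        responses = [convolve(img, gabor_kernel(theta=k * np.pi / n_orientations))
                     for k in range(n_orientations)]
        return np.stack(responses, axis=-1)                 # (H, W, n_orientations)

    if __name__ == "__main__":
        rng = np.random.default_rng(6)
        slice_img = rng.random((64, 64))                    # stand-in ultrasound slice
        feats = gabor_features(slice_img)
        print(feats.shape)                                  # (64, 64, 4)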

Yang, Xiaofeng; Fei, Baowei

2012-01-01

243

High Resolution MALDI Imaging Mass Spectrometry of Retinal Tissue Lipids  

PubMed Central

Matrix assisted laser desorption ionization imaging mass spectrometry (MALDI IMS) has the ability to provide an enormous amount of information on the abundances and spatial distributions of molecules within biological tissues. The rapid progress in the development of this technology significantly improves our ability to analyze smaller and smaller areas and features within tissues. The mammalian eye has evolved over millions of years to become an essential asset for survival, providing important sensory input of an organism's surroundings. The highly complex sensory retina of the eye is comprised of numerous cell types organized into specific layers with varying dimensions, the thinnest of which is the 10 μm retinal pigment epithelium (RPE). This single cell layer and the photoreceptor layer contain the complex biochemical machinery required to convert photons of light into electrical signals that are transported to the brain by axons of retinal ganglion cells. Diseases of the retina including age related macular degeneration (AMD), retinitis pigmentosa, and diabetic retinopathy occur when the functions of these cells are interrupted by molecular processes that are not fully understood. In this report, we demonstrate the use of high spatial resolution MALDI IMS and FT-ICR tandem mass spectrometry in the Abca4-/- knockout mouse model of Stargardt disease, a juvenile onset form of macular degeneration. The spatial distributions and identity of lipid and retinoid metabolites are shown to be unique to specific retinal cell layers. PMID:24819461

Anderson, David M. G.; Ablonczy, Zsolt; Koutalos, Yiannis; Spraggins, Jeffrey; Crouch, Rosalie K.; Caprioli, Richard M.; Schey, Kevin L.

2014-01-01

244

Phase aided 3D imaging and modeling: dedicated systems and case studies  

NASA Astrophysics Data System (ADS)

Dedicated prototype systems for 3D imaging and modeling (3DIM) are presented. The 3D imaging systems are based on the principle of phase-aided active stereo, which has been developed in our laboratory over the past few years. The reported 3D imaging prototypes range from a single 3D sensor to an optical measurement network composed of multiple 3D sensor nodes. To enable these 3D imaging systems, we briefly discuss the corresponding calibration techniques for both the single sensor and the multi-sensor optical measurement network, allowing good performance of the 3DIM prototype systems in terms of measurement accuracy and repeatability. Furthermore, two case studies, including the generation of high quality color models of movable cultural heritage and a photo booth based on body scanning, are presented to demonstrate our approach.

Yin, Yongkai; He, Dong; Liu, Zeyi; Liu, Xiaoli; Peng, Xiang

2014-05-01

245

Application of the Kohonen Network for Automatic Point Correspondence in Retinal Images  

Microsoft Academic Search

In this paper, an algorithm for automatic point correspondence is proposed towards retinal image registration. Given a pair of corresponding retinal images and a set of bifurcations or other salient points in one of the images, the algorithm detects effectively the set of corresponding points in the second image, by exploiting the properties of Kohonen's Self Organizing Maps and embedding

V. E. Markaki; P. A. Asvestas; G. K. Matsopoulos; N. K. Uzunoglu

2007-01-01

246

Seeing More Is Knowing More: V3D Enables Real-Time 3D Visualization and Quantitative Analysis of Large-Scale Biological Image Data Sets  

NASA Astrophysics Data System (ADS)

Everyone understands that seeing more is knowing more. However, for large-scale 3D microscopic image analysis, it has not been an easy task to efficiently visualize, manipulate and understand high-dimensional data in 3D, 4D or 5D spaces. We developed a new 3D+ image visualization and analysis platform, V3D, to meet this need. The V3D system provides 3D visualization of gigabyte-sized microscopy image stacks in real time on current laptops and desktops. V3D streamlines the online analysis, measurement and proofreading of complicated image patterns by combining ergonomic functions for selecting a location in an image directly in 3D space and for displaying biological measurements, such as from fluorescent probes, using the overlaid surface objects. V3D runs on all major computer platforms and can be enhanced by software plug-ins to address specific biological problems. To demonstrate this extensibility, we built a V3D-based application, V3D-Neuron, to reconstruct complex 3D neuronal structures from high-resolution brain images. V3D-Neuron can precisely digitize the morphology of a single neuron in a fruitfly brain in minutes, with about a 17-fold improvement in reliability and tenfold savings in time compared with other neuron reconstruction tools. Using V3D-Neuron, we demonstrate the feasibility of building a high-resolution 3D digital atlas of neurite tracts in the fruitfly brain. V3D can be easily extended using a simple-to-use and comprehensive plugin interface.

Peng, Hanchuan; Long, Fuhui

247

Computer-assisted 3D design software for teaching neuro-ophthalmology of the oculomotor system and training new retinal surgery techniques  

NASA Astrophysics Data System (ADS)

Purpose: To create a more effective method of demonstrating complex subject matter in ophthalmology with the use of high end, 3-D, computer aided animation and interactive multimedia technologies. Specifically, to explore the possibilities of demonstrating the complex nature of the neuro-ophthalmological basics of the human oculomotor system in a clear and non-confusing way, and to demonstrate new forms of retinal surgery in a manner that makes the procedures easier to understand for other retinal surgeons. Methods and Materials: Using Reflektions 4.3, Monzoom Pro 4.5, Cinema 4D XL 5.03, Cinema 4D XL 8 Studio Bundle, Mediator 4.0, Mediator Pro 5.03, Fujitsu-Siemens Pentium III and IV, Gericom Webgine laptop, M.G.I. Video Wave 1.0 and 5, Micrografix Picture Publisher 6.0 and 8, Amorphium 1.0, and Blobs for Windows, we created 3-D animations showing the origin, insertion, course, main direction of pull, and auxiliary direction of pull of the six extra-ocular eye muscles. We created 3-D animations that (a) show the intra-cranial path of the relevant oculomotor cranial nerves and which muscles are supplied by them, (b) show which muscles are active in each of the ten lines of sight, (c) demonstrate the various malfunctions of oculomotor systems, as well as (d) show the surgical techniques and the challenges in radial optic neurotomies and subretinal surgeries. Most of the 3-D animations were integrated in interactive multimedia teaching programs. Their effectiveness was compared to conventional teaching methods in a comparative study performed at the University of Vienna. We also performed a survey to examine the response of students being taught with the interactive programs. We are currently in the process of placing most of the animations in an interactive web site in order to make them freely available to everyone who is interested. Results: Although learning how to use complex 3-D computer animation and multimedia authoring software can be very time consuming and frustrating, we found that once the programs are mastered they can be used to create 3-D animations that drastically improve the quality of medical demonstrations. The comparative study showed a significant advantage of using these technologies over conventional teaching methods. The feedback from medical students, doctors, and retinal surgeons was overwhelmingly positive. A strong interest was expressed to have more subjects and techniques demonstrated in this fashion. Conclusion: 3-D computer technologies should be used in the demonstration of all complex medical subjects. More effort and resources need to be given to the development of these technologies that can improve the understanding of medicine for students, doctors, and patients alike.

Glittenberg, Carl; Binder, Susanne

2004-07-01

248

Hyperspectral image lossy-to-lossless compression using the 3D Embedded Zeroblock Coding algorithm  

Microsoft Academic Search

In this paper, we propose a hyperspectral image lossy-to-lossless compression coder based on the Three-Dimensional Embedded ZeroBlock Coding (3D EZBC) algorithm. This coder adopts the three-dimensional integer wavelet packet transform with unitary scaling to decorrelate and the 3D EZBC algorithm without motion compensation to process bitplane zeroblock coding. For hyperspectral image compression using the 3D EZBC algorithm, the lossy-to-lossless compression

Ying Hou; Guizhong Liu

2008-01-01

249

Textureless Macula Swelling Detection with Multiple Retinal Fundus Images  

SciTech Connect

Retinal fundus images acquired with non-mydriatic digital fundus cameras are a versatile tool for the diagnosis of various retinal diseases. Because of the ease of use of newer camera models and their relatively low cost, these cameras can be employed by operators with limited training for telemedicine or Point-of-Care applications. We propose a novel technique that uses uncalibrated multiple-view fundus images to analyse the swelling of the macula. This innovation enables the detection and quantitative measurement of swollen areas by remote ophthalmologists. This capability is not available with a single image and is prone to error with stereo fundus cameras. We also present automatic algorithms to measure features from the reconstructed image which are useful in Point-of-Care automated diagnosis of early macular edema, e.g., before the appearance of exudation. The technique presented is divided into three parts: first, a preprocessing technique simultaneously enhances the dark microstructures of the macula and equalises the image; second, all available views are registered using non-morphological sparse features; finally, a dense pyramidal optical flow is calculated for all the images and statistically combined to build a naive height map of the macula. Results are presented on three sets of synthetic images and two sets of real world images. These preliminary tests show the ability to infer a minimum swelling of 300 microns and to correlate the reconstruction with the swollen location.
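The "statistically combined" step is not specified further in the abstract; the fragment below shows one plausible robust combination of per-view optical-flow magnitude maps (median plus inter-quartile spread), offered only as an illustration of the idea, with the flow maps themselves assumed to be computed elsewhere.

    import numpy as np

    def combine_flow_magnitudes(flow_stack):
        """Robust per-pixel combination of flow magnitudes from several views:
        the median suppresses outliers from registration errors, and the
        inter-quartile range gives a crude per-pixel confidence."""
        stack = np.asarray(flow_stack, dtype=np.float64)     # (n_views, H, W)
        height_proxy = np.median(stack, axis=0)
        confidence = np.percentile(stack, 75, axis=0) - np.percentile(stack, 25, axis=0)
        return height_proxy, confidence

    if __name__ == "__main__":
        rng = np.random.default_rng(5)
        flows = rng.random((4, 64, 64))                      # 4 registered views (toy data)
        height, conf = combine_flow_magnitudes(flows)
        print(height.shape, conf.mean())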

Giancardo, Luca [ORNL]; Meriaudeau, Fabrice [ORNL]; Karnowski, Thomas Paul [ORNL]; Tobin Jr, Kenneth William [ORNL]; Grisan, Enrico [University of Padua, Padua, Italy]; Favaro, Paolo [Heriot-Watt University, Edinburgh]; Ruggeri, Alfredo [University of Padua, Padua, Italy]; Chaum, Edward [University of Tennessee, Knoxville (UTK)]

2010-01-01
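
A minimal sketch of the last stage of the pipeline in the record above: dense pyramidal optical flow between a reference view and each remaining view, statistically combined into a naive height map. It assumes the views are already registered, uses OpenCV's Farnebäck flow and CLAHE as stand-ins for the paper's specific flow and enhancement steps, and the file names and parameters are illustrative.

import cv2
import numpy as np

# Hypothetical, already-registered fundus views of the macula.
reference = cv2.imread("macula_view_0.png", cv2.IMREAD_GRAYSCALE)
views = [cv2.imread(f"macula_view_{i}.png", cv2.IMREAD_GRAYSCALE) for i in range(1, 4)]

# Stand-in for the enhancement/equalisation preprocessing step.
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
reference = clahe.apply(reference)
views = [clahe.apply(v) for v in views]

# Dense pyramidal optical flow from the reference to each view.
flows = [cv2.calcOpticalFlowFarneback(reference, v, None, pyr_scale=0.5, levels=4,
                                      winsize=21, iterations=3, poly_n=5,
                                      poly_sigma=1.1, flags=0) for v in views]

# Statistically combine the per-view flow magnitudes (median as a simple robust choice)
# into a naive height map of the macula.
height_map = np.median(np.stack([np.linalg.norm(f, axis=2) for f in flows]), axis=0)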

250

3D image display of fetal ultrasonic images by thin shell  

NASA Astrophysics Data System (ADS)

Because it is convenient and non-invasive, ultrasound has become an essential tool for the diagnosis of fetal abnormality during pregnancy in obstetrics. However, the 'noisy and blurry' nature of ultrasound data makes rendering the data a challenge in comparison with MRI and CT images. In addition to the speckle noise, unwanted objects usually occlude the target to be observed. In this paper, we propose a new system that can effectively suppress the speckle noise, extract the target object, and clearly render the 3D fetal image in almost real time from 3D ultrasound image data. The system is based on a deformable model that detects contours of the object according to the local image features of ultrasound. In addition, to accelerate rendering, a thin shell is defined that separates the observed organ from unrelated structures based on the detected contours. In this way, we can support quick 3D display of ultrasound, and the efficient visualization of 3D fetal ultrasound thus becomes possible.

Wang, Shyh-Roei; Sun, Yung-Nien; Chang, Fong-Ming; Jiang, Ching-Fen

1999-05-01

251

Adaptive Optics Retinal Imaging – Clinical Opportunities and Challenges

PubMed Central

The array of therapeutic options available to clinicians for treating retinal disease is expanding. With these advances comes the need for better understanding of the etiology of these diseases on a cellular level as well as improved non-invasive tools for identifying the best candidates for given therapies and monitoring the efficacy of those therapies. While spectral domain optical coherence tomography (SD-OCT) offers a widely available tool for clinicians to assay the living retina, it suffers from poor lateral resolution due to the eye's monochromatic aberrations. Adaptive optics (AO) is a technique to compensate for the eye's aberrations and provide nearly diffraction-limited resolution. The result is the ability to visualize the living retina with cellular resolution. While AO is unquestionably a powerful research tool, many clinicians remain undecided on the clinical potential of AO imaging, putting many at a crossroads with respect to adoption of this technology. This review will briefly summarize the current state of AO retinal imaging, discuss current as well as future clinical applications of AO retinal imaging, and finally provide some discussion of research needs to facilitate more widespread clinical use. PMID:23621343

Carroll, Joseph; Kay, David B.; Scoles, Drew; Dubra, Alfredo; Lombardo, Marco

2014-01-01

252

A novel method for blood vessel detection from retinal images  

PubMed Central

Background The morphological changes of the retinal blood vessels in retinal images are important indicators for diseases like diabetes, hypertension and glaucoma. Thus the accurate segmentation of blood vessels is of diagnostic value. Methods In this paper, we present a novel method to segment retinal blood vessels that overcomes the variations in contrast of large and thin vessels. This method uses adaptive local thresholding to produce a binary image, then extracts large connected components as large vessels. The residual fragments in the binary image, including some thin vessel segments (or pixels), are classified by a Support Vector Machine (SVM). Tracking growth is applied to the thin vessel segments to form the whole vascular network. Results The proposed algorithm is tested on the DRIVE database, and the average sensitivity is over 77% while the average accuracy reaches 93.2%. Conclusions In this paper, we distinguish large vessels by adaptive local thresholding for their good contrast. Thin vessel segments with poor contrast are then identified by SVM and lengthened by tracking. This proposed method avoids heavy computation and manual intervention. PMID:20187975

2010-01-01
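
The record above describes a three-stage pipeline: adaptive local thresholding, acceptance of large connected components as large vessels, and SVM classification of the residual thin fragments followed by tracking. The sketch below mirrors the first two stages and sets up the classifier with scikit-image and scikit-learn; the green-channel choice, block size, area cutoff, fragment features and training data are assumptions, and the tracking-growth step is omitted.

import numpy as np
from skimage import io, filters, measure
from sklearn.svm import SVC

fundus = io.imread("retina.png")              # hypothetical colour fundus image
green = fundus[:, :, 1].astype(float)         # green channel usually gives the best vessel contrast

# Adaptive local thresholding; vessels are darker than their local background.
binary = green < filters.threshold_local(green, block_size=51, offset=2)

# Keep large connected components directly as large vessels.
labels = measure.label(binary)
sizes = np.bincount(labels.ravel())
large_vessels = np.isin(labels, np.flatnonzero(sizes > 500)) & binary

# Describe the residual fragments with simple features for SVM classification.
residual = binary & ~large_vessels
props = measure.regionprops(measure.label(residual), intensity_image=green)
features = np.array([[p.area, p.eccentricity, p.mean_intensity] for p in props])

clf = SVC(kernel="rbf")
# clf.fit(training_features, training_labels)   # labelled fragments are required here
# thin_vessel_flags = clf.predict(features)     # tracking growth would then extend these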

253

3D color surface digitization of human head from sequence of structured light images  

NASA Astrophysics Data System (ADS)

Acquiring a 3D color model of the human head is desired in many applications. In this paper, we introduce a scheme to obtain 3D color information of the human head from an image sequence in a 3D laser color scanner. Structured light technology is employed to measure depth. We study the relationship among the object's images in different positions. Synthesizing this information, we can obtain the shape of the hair area from the contour image. True color information for sample points can be acquired from the specified image in the image sequence. The experimental results are satisfactory.

Jin, Gang; Li, Dehua; Huang, Jianzhong; Li, Zeyu

1998-09-01

254

A review of 3D/2D registration methods for image-guided interventions.  

PubMed

Registration of pre- and intra-interventional data is one of the key technologies for image-guided radiation therapy, radiosurgery, minimally invasive surgery, endoscopy, and interventional radiology. In this paper, we survey those 3D/2D data registration methods that utilize 3D computed tomography or magnetic resonance images as the pre-interventional data and 2D X-ray projection images as the intra-interventional data. The 3D/2D registration methods are reviewed with respect to image modality, image dimensionality, registration basis, geometric transformation, user interaction, optimization procedure, subject, and object of registration. PMID:20452269

Markelj, P; Tomaževič, D; Likar, B; Pernuš, F

2012-04-01

255

Path-length-multiplexed scattering-angle-diverse optical coherence tomography for retinal imaging  

PubMed Central

A low-resolution path-length-multiplexed scattering-angle-diverse optical coherence tomography (PM-SAD-OCT) system is constructed to investigate the scattering properties of the retinal nerve fiber layer (RNFL). Low-resolution PM-SAD-OCT retinal images acquired from a healthy human subject show the variation of RNFL scattering properties at retinal locations around the optic nerve head. The results are consistent with known retinal ganglion cell neural anatomy and principles of light scattering. Application of PM-SAD-OCT may provide potentially valuable diagnostic information for clinical retinal imaging. PMID:24177097

Wang, Bingqing; Yin, Biwei; Dwelle, Jordan; Rylander, H. Grady; Markey, Mia K.; Milner, Thomas E.

2014-01-01

256

Extensible visualization and analysis for multidimensional images using Vaa3D.  

PubMed

Open-Source 3D Visualization-Assisted Analysis (Vaa3D) is a software platform for the visualization and analysis of large-scale multidimensional images. In this protocol we describe how to use several popular features of Vaa3D, including (i) multidimensional image visualization, (ii) 3D image object generation and quantitative measurement, (iii) 3D image comparison, fusion and management, (iv) visualization of heterogeneous images and respective surface objects and (v) extension of Vaa3D functions using its plug-in interface. We also briefly demonstrate how to integrate these functions for complicated applications of microscopic image visualization and quantitative analysis using three exemplar pipelines, including an automated pipeline for image filtering, segmentation and surface generation; an automated pipeline for 3D image stitching; and an automated pipeline for neuron morphology reconstruction, quantification and comparison. Once a user is familiar with Vaa3D, visualization usually runs in real time and analysis takes less than a few minutes for a simple data set. PMID:24385149

Peng, Hanchuan; Bria, Alessandro; Zhou, Zhi; Iannello, Giulio; Long, Fuhui

2014-01-01

257

Integrated Endoscope for Real-Time 3D Ultrasound Imaging and Hyperthermia: Feasibility Study  

E-print Network

Feasibility study of an integrated endoscope for real-time 3D ultrasound imaging and hyperthermia, aimed at treatments for prostate, cervical and esophageal cancer. The ability to combine ultrasound hyperthermia and 3D imaging may also facilitate drug delivery therapy. Key words: 3D; endoscope; hyperthermia; imaging; ultrasound.

Smith, Stephen

258

Experimental imaging and 3D rendering of absorbed dose by means of piled-up dosimetric sheets

Microsoft Academic Search

A method for experimentally obtaining full 3D images of in-phantom absorbed dose in conformal radiotherapy was developed. The dose images of a sequence of phantom layers are obtained by means of gel dosimeters. Dedicated software was developed and optimized to perform all the necessary manipulations to obtain an interactive rendering of the 3D dose distribution, starting from a set

Mauro Carrara; Grazia Gambarini; Stefano Tomatis

2005-01-01

259

Voxel Similarity Measures for 3D Serial MR Brain Image Registration  

Microsoft Academic Search

We have evaluated eight different similarity measures used for rigid body registration of serial magnetic resonance (MR) brain scans. To assess their accuracy we used 33 clinical three-dimensional (3-D) serial MR images, with deformable extradural tissue excluded by manual segmentation, and simulated 3-D MR images with added intensity distortion. For each measure we determined the consistency of registration

Mark Holden; Derek L. G. Hill; Erika R. E. Denton; Jo M. Jarosz; Tim C. S. Cox; Torsten Rohlfing; Joanne Goodey; David J. Hawkes

2000-01-01

260

Robust Adaptive Segmentation of 3D Medical Images with Level Sets  

Microsoft Academic Search

This paper is concerned with the use of the Level Set formalism to segment anatomical structures in 3D medical images (ultrasound or magnetic resonance images). A closed 3D surface propagates towards the desired boundaries through the iterative evolution of a 4D implicit function. The major contribution of this work is the design of a robust evolution model based on adaptive parameters depending on

C. Baillard; C. Barillot; P. Bouthemy

261

3D Harmonic Mapping and Tetrahedral Meshing of Brain Imaging Data  

E-print Network

The first algorithm finds a harmonic map from a 3-manifold to a 3D solid sphere, and the second is a novel sphere-carving approach for tetrahedral meshing of brain volumes from magnetic resonance images (MRI). A heat flow method is used to solve the volumetric harmonic mapping.

Thompson, Paul

262

Automatic Detection and Segmentation of Kidneys in 3D CT Images Using Random Forests  

E-print Network

Kidney segmentation in 3D CT images allows extracting useful information for nephrologists. Kidneys are localized with random forests following a coarse-to-fine strategy.

Boyer, Edmond

263

CORTICAL SULCI MODEL AND MATCHING FROM 3D BRAIN MAGNETIC RESONANCE IMAGES

E-print Network

The method builds on cortical maps based on the topography of sulci and gyri, with the objective of sulci identification; relations between sulci are also used for identification. Data are 3D MRI images (120x256

Boyer, Edmond

264

Coherence Holography and Spatial Frequency Comb for 3-D Coherence Imaging  

E-print Network

Describes a holography technique, called coherence holography, and a related technique based on a spatial frequency comb for dispersion-free 3-D coherence imaging. OCIS codes: (090.0090) Holography; (030.1640) Coherence; (100.3010) Image reconstruction techniques; (110

Rosen, Joseph

265

KIDNEY DETECTION AND REAL-TIME SEGMENTATION IN 3D CONTRAST-ENHANCED ULTRASOUND IMAGES  

E-print Network

An automatic method to segment the kidney in 3D contrast-enhanced ultrasound (CEUS) images. Index terms: kidney, detection, segmentation, 3D ultrasound, contrast, CEUS. Three-dimensional real-time visualization of vascularization can be achieved with CEUS imaging.

Cohen, Laurent

266

Spatio-Temporal Data Fusion for 3D+T Image Reconstruction in Cerebral Angiography  

E-print Network

This paper provides a framework for generating high resolution time sequences of 3D images that show the dynamics of cerebral blood flow. These sequences have the potential to allow image feedback during medical procedures ...

Copeland, Andrew D.

267

Radiology Lab 0: Introduction to 2D and 3D Imaging  

NSDL National Science Digital Library

This is a self-directed learning module to introduce students to basic concepts of imaging technology as well as to give students practice going between 2D and 3D imaging using everyday objects.

Shaffer, Kitt

2008-10-02

268

Quantitative 3-D imaging topogrammetry for telemedicine applications  

NASA Technical Reports Server (NTRS)

The technology to reliably transmit high-resolution visual imagery over short to medium distances in real time has led to the serious considerations of the use of telemedicine, telepresence, and telerobotics in the delivery of health care. These concepts may involve, and evolve toward: consultation from remote expert teaching centers; diagnosis; triage; real-time remote advice to the surgeon; and real-time remote surgical instrument manipulation (telerobotics with virtual reality). Further extrapolation leads to teledesign and telereplication of spare surgical parts through quantitative teleimaging of 3-D surfaces tied to CAD/CAM devices and an artificially intelligent archival data base of 'normal' shapes. The ability to generate 'topogrames' or 3-D surface numerical tables of coordinate values capable of creating computer-generated virtual holographic-like displays, machine part replication, and statistical diagnostic shape assessment is critical to the progression of telemedicine. Any virtual reality simulation will remain in 'video-game' realm until realistic dimensional and spatial relational inputs from real measurements in vivo during surgeries are added to an ever-growing statistical data archive. The challenges of managing and interpreting this 3-D data base, which would include radiographic and surface quantitative data, are considerable. As technology drives toward dynamic and continuous 3-D surface measurements, presenting millions of X, Y, Z data points per second of flexing, stretching, moving human organs, the knowledge base and interpretive capabilities of 'brilliant robots' to work as a surgeon's tireless assistants becomes imaginable. The brilliant robot would 'see' what the surgeon sees--and more, for the robot could quantify its 3-D sensing and would 'see' in a wider spectral range than humans, and could zoom its 'eyes' from the macro world to long-distance microscopy. Unerring robot hands could rapidly perform machine-aided suturing with precision micro-sewing machines, splice neural connections with laser welds, micro-bore through constricted vessels, and computer combine ultrasound, microradiography, and 3-D mini-borescopes to quickly assess and trace vascular problems in situ. The spatial relationships between organs, robotic arms, and end-effector diagnostic, manipulative, and surgical instruments would be constantly monitored by the robot 'brain' using inputs from its multiple 3-D quantitative 'eyes' remote sensing, as well as by contact and proximity force measuring devices. Methods to create accurate and quantitative 3-D topograms at continuous video data rates are described.

Altschuler, Bruce R.

1994-01-01

269

Retinal oxygen saturation evaluation by multi-spectral fundus imaging  

NASA Astrophysics Data System (ADS)

Purpose: To develop a multi-spectral method to measure oxygen saturation of the retina in the human eye. Methods: Five Cynomolgus monkeys with normal eyes were anesthetized with intramuscular ketamine/xylazine and intravenous pentobarbital. Multi-spectral fundus imaging was performed in five monkeys with a commercial fundus camera equipped with a liquid crystal tuned filter in the illumination light path and a 16-bit digital camera. Recording parameters were controlled with software written specifically for the application. Seven images at successively longer oxygen-sensing wavelengths were recorded within 4 seconds. Individual images for each wavelength were captured in less than 100 msec of flash illumination. Slightly misaligned images of separate wavelengths due to slight eye motion were registered and corrected by translational and rotational image registration prior to analysis. Numerical values of relative oxygen saturation of retinal arteries and veins and the underlying tissue in between the artery/vein pairs were evaluated by an algorithm previously described, but which is now corrected for blood volume from averaged pixels (n > 1000). Color saturation maps were constructed by applying the algorithm at each image pixel using a Matlab script. Results: Both the numerical values of relative oxygen saturation and the saturation maps correspond to the physiological condition, that is, in a normal retina, the artery is more saturated than the tissue and the tissue is more saturated than the vein. With the multi-spectral fundus camera and proper registration of the multi-wavelength images, we were able to determine oxygen saturation in the primate retinal structures on a tolerable time scale which is applicable to human subjects. Conclusions: Seven wavelength multi-spectral imagery can be used to measure oxygen saturation in retinal artery, vein, and tissue (microcirculation). This technique is safe and can be used to monitor oxygen uptake in humans. This work is original and is not under consideration for publication elsewhere.

Khoobehi, Bahram; Ning, Jinfeng; Puissegur, Elise; Bordeaux, Kimberly; Balasubramanian, Madhusudhanan; Beach, James

2007-03-01
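
The record above evaluates relative oxygen saturation from averaged vessel and background pixels at several wavelengths. The sketch below shows the simpler two-wavelength optical-density-ratio idea that underlies such algorithms; it is not the seven-wavelength, blood-volume-corrected algorithm referenced in the record, and the wavelengths, intensities and calibration constants are illustrative assumptions only.

import numpy as np

def optical_density(vessel, background):
    # Beer-Lambert style optical density from averaged pixel intensities (n > 1000 pixels).
    return -np.log10(vessel / background)

# Hypothetical averaged intensities on a vessel and the adjacent tissue at an
# oxygen-sensitive wavelength (~600 nm) and a near-isosbestic wavelength (~570 nm).
od_sensitive = optical_density(vessel=820.0, background=1450.0)
od_isosbestic = optical_density(vessel=640.0, background=1380.0)

# In two-wavelength oximetry the optical-density ratio maps approximately linearly
# to oxygen saturation; a and b are calibration constants (assumed values here).
a, b = 1.28, -1.22
relative_saturation = a + b * (od_sensitive / od_isosbestic)
print(f"relative saturation estimate: {relative_saturation:.2f}")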

270

Estimating Density Gradients and Drivers from 3D Ionospheric Imaging  

NASA Astrophysics Data System (ADS)

The transition regions at the edges of the ionospheric storm-enhanced density (SED) are important for a detailed understanding of the mid-latitude physical processes occurring during major magnetic storms. At the boundary, the density gradients are evidence of the drivers that link the larger processes of the SED, with its connection to the plasmasphere and prompt-penetration electric fields, to the smaller irregularities that result in scintillations. For this reason, we present our estimates of both the plasma variation with horizontal and vertical spatial scale of 10 - 100 km and the plasma motion within and along the edges of the SED. To estimate the density gradients, we use Ionospheric Data Assimilation Four-Dimensional (IDA4D), a mature data assimilation algorithm that has been developed over several years and applied to investigations of polar cap patches and space weather storms [Bust and Crowley, 2007; Bust et al., 2007]. We use the density specification produced by IDA4D with a new tool for deducing ionospheric drivers from 3D time-evolving electron density maps, called Estimating Model Parameters from Ionospheric Reverse Engineering (EMPIRE). The EMPIRE technique has been tested on simulated data from TIMEGCM-ASPEN and on IDA4D-based density estimates with ongoing validation from Arecibo ISR measurements [Datta-Barua et al., 2009a; 2009b]. We investigate the SED that formed during the geomagnetic super storm of November 20, 2003. We run IDA4D at low-resolution continent-wide, and then re-run it at high (~10 km horizontal and ~5-20 km vertical) resolution locally along the boundary of the SED, where density gradients are expected to be highest. We input the high-resolution estimates of electron density to EMPIRE to estimate the ExB drifts and field-aligned plasma velocities along the boundaries of the SED. We expect that these drivers contribute to the density structuring observed along the SED during the storm. Bust, G. S. and G. Crowley (2007), Tracking of polar cap patches using data assimilation, J. Geophys. Res., 112, A05307, doi:10.1029/2005JA011597. Bust, G. S., G. Crowley, T. W. Garner, T. L. Gaussiran II, R. W. Meggs, C. N. Mitchell, P. S. J. Spencer, P. Yin, and B. Zapfe (2007) ,Four Dimensional GPS Imaging of Space-Weather Storms, Space Weather, 5, S02003, doi:10.1029/2006SW000237. Datta-Barua, S., G. S. Bust, G. Crowley, and N. Curtis (2009a), Neutral wind estimation from 4-D ionospheric electron density images, J. Geophys. Res., 114, A06317, doi:10.1029/2008JA014004. Datta-Barua, S., G. Bust, and G. Crowley (2009b), "Estimating Model Parameters from Ionospheric Reverse Engineering (EMPIRE)," presented at CEDAR, Santa Fe, New Mexico, July 1.

Datta-Barua, S.; Bust, G. S.; Curtis, N.; Reynolds, A.; Crowley, G.

2009-12-01

271

3D intraventricular flow mapping from colour Doppler images and wall motion.  

PubMed

We propose a new method to recover 3D time-resolved velocity vectors within the left ventricle (LV) using a combination of multiple registered 3D colour Doppler images and LV wall motion. Incorporation of wall motion, calculated from 3D B-Mode images, and the use of a multi-scale reconstruction framework allow recovery of 3D velocity over the entire ventricle, even in regions where there is little or no Doppler data. Our method is tested on the LV of a paediatric patient and is compared to 2D and 3D flow Magnetic Resonance Imaging (MRI). Use of wall motion information increased stroke volume accuracy by 14%, and enabled full 3D velocity mapping within the ventricle. Velocity distribution showed good agreement with respect to MRI, and vortex formation during diastole was successfully reconstructed. PMID:24579175

Gómez, Alberto; de Vecchi, Adelaide; Pushparajah, Kuberan; Simpson, John; Giese, Daniel; Schaeffter, Tobias; Penney, Graeme

2013-01-01

272

Retinal, anterior segment and full eye imaging using ultrahigh speed swept source OCT with vertical-cavity surface emitting lasers  

PubMed Central

We demonstrate swept source OCT utilizing vertical-cavity surface emitting laser (VCSEL) technology for in vivo high speed retinal, anterior segment and full eye imaging. The MEMS tunable VCSEL enables long coherence length, adjustable spectral sweep range and adjustable high sweeping rate (50–580 kHz axial scan rate). These features enable integration of multiple ophthalmic applications into one instrument. The operating modes of the device include: ultrahigh speed, high resolution retinal imaging (up to 580 kHz); high speed, long depth range anterior segment imaging (100 kHz) and ultralong range full eye imaging (50 kHz). High speed imaging enables wide-field retinal scanning, while increased light penetration at 1060 nm enables visualization of choroidal vasculature. Comprehensive volumetric data sets of the anterior segment from the cornea to posterior crystalline lens surface are also shown. The adjustable VCSEL sweep range and rate make it possible to achieve an extremely long imaging depth range of ~50 mm, and to demonstrate the first in vivo 3D OCT imaging spanning the entire eye for non-contact measurement of intraocular distances including axial eye length. Swept source OCT with VCSEL technology may be attractive for next generation integrated ophthalmic OCT instruments. PMID:23162712

Grulkowski, Ireneusz; Liu, Jonathan J.; Potsaid, Benjamin; Jayaraman, Vijaysekhar; Lu, Chen D.; Jiang, James; Cable, Alex E.; Duker, Jay S.; Fujimoto, James G.

2012-01-01

273

3D imaging and wavefront sensing with a plenoptic objective  

Microsoft Academic Search

Plenoptic cameras have been developed over recent years as a passive method for 3D scanning. Several superresolution algorithms have been proposed to mitigate the resolution loss associated with lightfield acquisition through a microlens array. A number of multiview stereo algorithms have also been applied to extract depth information from plenoptic frames. Real-time systems have

J. M. Rodríguez-Ramos; J. P. Lüke; R. López; J. G. Marichal-Hernández; I. Montilla; J. Trujillo-Sevilla; B. Femenía; M. Puga; M. López; J. J. Fernández-Valdivia; F. Rosa; C. Dominguez-Conde; J. C. Sanluis; L. F. Rodríguez-Ramos

2011-01-01

274

Statistical skull models from 3D X-ray images  

Microsoft Academic Search

We present 2 statistical models of the skull and mandible built upon an elastic registration method of 3D meshes. The aim of this work is to relate degrees of freedom of skull anatomy, as static relations are of main interest for anthropology and legal medicine. Statistical models can effectively provide reconstructions together with statistical precision. In our applications, patient-specific meshes

Maxime Berar; Michel Desvignes; Gérard Bailly; Yohan Payan

2006-01-01

275

Statistical skull models from 3D X-ray images  

E-print Network

We present 2 statistical models of the skull and mandible built upon an elastic registration method of 3D meshes. The aim of this work is to relate degrees of freedom of skull anatomy, as static relations are of main interest for anthropology and legal medicine. Statistical models can effectively provide reconstructions together with statistical precision. In our applications, patient-specific meshes of the skull and the mandible are high-density meshes, extracted from 3D CT scans. All our patient-specific meshes are registered in a subject-shared reference system using our 3D-to-3D elastic matching algorithm. Registration is based upon the minimization of a distance between the high-density mesh and a shared low-density mesh, defined on the vertices, in a multi-resolution approach. A Principal Component Analysis is performed on the normalised registered data to build a statistical linear model of the skull and mandible shape variation. The accuracy of the reconstruction is under a millimetre in the shape...

Berar, Maxime; Desvignes, Michel; Bailly, Gérard; Payan, Yohan

2006-01-01

276

Semi-Automatic 2D-to-3D Image Conversion Techniques for Touchscreen  

E-print Network

Semi-automatic 2D-to-3D image conversion takes a 2D color image as input and outputs an estimated depth map together with left and right views. The depth map encodes how far each object is from the screen. The conversion is driven by the structure of the input 2D image and involves two processes, the first being depth estimation.

Po, Lai-Man

277

Experimental research on 3D reconstruction through range gated laser imaging  

NASA Astrophysics Data System (ADS)

A range-gated laser imaging system has been designed and developed for high-precision three-dimensional imaging. The system uses an Nd:YAG electro-optically Q-switched 532 nm laser as the transmitter and a double microchannel plate as the gated sensor, and all components are controlled by a trigger control unit with sub-nanosecond accuracy. An experimental scheme is also designed to achieve high-precision imaging; a sequence of 2D "slice" images is acquired in the experiment, and these images provide the basic data for 3D reconstruction. Based on the centroid algorithm, we have developed a 3D reconstruction algorithm and use it to reconstruct a 3D image of the target from the experimental data. We compare the 3D image with the system performance model, and the results are consistent.

Li, Sining; Lu, Wei; Zhang, Dayong; Li, Chao; Tu, Zhipeng

2014-09-01
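
A minimal sketch of the centroid algorithm mentioned in the record above: each pixel's range is taken as the intensity-weighted centroid of its returns across the sequence of gated 2D slice images. The gate spacing, array sizes and the synthetic test target are assumptions for illustration.

import numpy as np

def centroid_range(slices, gate_start, gate_step):
    """slices: (n_gates, height, width) stack of gated intensity images;
    gate_start/gate_step: range (metres) of the first gate and spacing between gates."""
    gates = np.arange(slices.shape[0], dtype=float)
    weights = slices.astype(float)
    total = weights.sum(axis=0)
    # Intensity-weighted centroid of the gate index at every pixel.
    centroid = (weights * gates[:, None, None]).sum(axis=0) / np.maximum(total, 1e-9)
    range_map = gate_start + centroid * gate_step
    range_map[total < 1e-6] = np.nan      # pixels with no detectable return
    return range_map

# Synthetic example: 40 gate slices with a flat target appearing around gate 12.
slices = np.random.poisson(2.0, (40, 64, 64)).astype(float)
slices[12] += 50.0
depth_map = centroid_range(slices, gate_start=100.0, gate_step=0.15)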

278

Human conjunctival microvasculature assessed with a retinal function imager (RFI)  

PubMed Central

The conjunctival and cerebral vasculatures share similar embryological origins, with similar structural and physiological characteristics. Tracking the conjunctival microvasculature may provide useful information for predicting the onset, progression and prognosis of both systemic and central nervous system (CNS) vascular diseases. The bulbar conjunctival vasculature was imaged using a retinal function imager (RFI, Optical Imaging Ltd, Rehovot, Israel). Hemoglobin in red blood cells was used as an intrinsic motion-contrast agent in the generation of detailed noninvasive capillary-perfusion maps (nCPMs) and the calculation of the blood flow velocity. Five healthy subjects were imaged under normal conditions and again under the stress condition of wearing a contact lens. The retina was also imaged in one eye of one subject for comparison. The nCPMs showed the conjunctival microvasculature in exquisite detail, which appeared as clear as the retinal nCPMs. The blood flow velocities in the temporal conjunctival microvasculature were 0.86 ± 0.08 (mean ± SD, mm/s) for the bare eye and 0.99 ± 0.11 mm/s with contact lens wear. It is feasible to use RFI for imaging the conjunctival vasculature. PMID:23084966

Jiang, Hong; Ye, Yufeng; DeBuc, Delia Cabrera; Lam, Byron L; Rundek, Tatjana; Tao, Aizhu; Shao, Yilei; Wang, Jianhua

2012-01-01

279

ICER-3D: A Progressive Wavelet-Based Compressor for Hyperspectral Images  

NASA Technical Reports Server (NTRS)

ICER-3D is a progressive, wavelet-based compressor for hyperspectral images. ICER-3D is derived from the ICER image compressor. ICER-3D can provide lossless and lossy compression, and incorporates an error-containment scheme to limit the effects of data loss during transmission. The three-dimensional wavelet decomposition structure used by ICER-3D exploits correlations in all three dimensions of hyperspectral data sets, while facilitating elimination of spectral ringing artifacts. Correlation is further exploited by a context modeler that effectively exploits spectral dependencies in the wavelet-transformed hyperspectral data. Performance results illustrating the benefits of these features are presented.

Kiely, A.; Klimesh, M.; Xie, H.; Aranki, N.

2005-01-01

280

3D pulmonary airway color image reconstruction via shape from shading and virtual bronchoscopy imaging techniques  

NASA Astrophysics Data System (ADS)

The dependence on macro-optical imaging of the human body in the assessment of possible disease is rapidly increasing concurrent with, and as a direct result of, advancements made in medical imaging technologies. Assessing the pulmonary airways through bronchoscopy is performed extensively in clinical practice; however, it remains highly subjective due to limited visualization techniques and the lack of quantitative analyses. The representation of 3D structures in 2D visualization modes, although providing an insight into the structural content of the scene, may in fact skew the perception of the structural form. We have developed two methods for visualizing the optically derived airway mucosal features whilst preserving the structural scene integrity. Shape from shading (SFS) techniques can be used to extract 3D structural information from 2D optical images. The SFS technique presented addresses many limitations previously encountered in conventional techniques, resulting in high-resolution 3D color images. The second method presented to combine both color and structural information relies on combined CT and bronchoscopy imaging modalities. External imaging techniques such as CT provide a means of determining the gross structural anatomy of the pulmonary airways, however they lack the important optically derived mucosal color. Virtual bronchoscopy is used to provide a direct link between the CT derived structural anatomy and the macro-optically derived mucosal color. Through utilization of a virtual and true bronchoscopy matching technique we are able to directly extract combined structurally sound 3D color segments of the pulmonary airways. Various pulmonary airway diseases are assessed and the resulting combined color and texture results are presented demonstrating the effectiveness of the presented techniques.

Suter, Melissa; Reinhardt, Joseph M.; Hoffman, Eric A.; McLennan, Geoffrey

2005-04-01

281

Enhanced imaging colonoscopy facilitates dense motion-based 3D reconstruction.  

PubMed

We propose a novel approach for estimating a dense 3D model of neoplasia in colonoscopy using enhanced imaging endoscopy modalities. Estimating a dense 3D model of neoplasia is important to make 3D measurements and to classify the superficial lesions in standard frameworks such as the Paris classification. However, it is challenging to obtain decent dense 3D models using computer vision techniques such as Structure-from-Motion due to the lack of texture in conventional (white light) colonoscopy. Therefore, we propose to use enhanced imaging endoscopy modalities such as Narrow Band Imaging and chromoendoscopy to facilitate the 3D reconstruction process. Thanks to the use of these enhanced endoscopy techniques, visualization is improved, resulting in more reliable feature tracks and 3D reconstruction results. We first build a sparse 3D model of neoplasia using Structure-from-Motion from enhanced endoscopy imagery. Then, the sparse reconstruction is densified using a Multi-View Stereo approach, and finally the dense 3D point cloud is transformed into a mesh by means of Poisson surface reconstruction. The obtained dense 3D models facilitate classification of neoplasia in the Paris classification, in which the 3D size and the shape of the neoplasia play a major role in the diagnosis. PMID:24111442

Alcantarilla, Pablo F; Bartoli, Adrien; Chadebecq, Francois; Tilmant, Christophe; Lepilliez, Vincent

2013-01-01
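
The record above builds a sparse Structure-from-Motion model first, then densifies it. The sketch below shows only the first, two-view step of such a pipeline with OpenCV: feature matching on enhanced-imaging frames, essential-matrix estimation and triangulation of a sparse point cloud. The camera intrinsics, file names and ORB/RANSAC parameters are assumptions; the Multi-View Stereo densification and Poisson surface reconstruction stages are not shown.

import cv2
import numpy as np

# Hypothetical calibrated intrinsics of the endoscope camera.
K = np.array([[700.0, 0.0, 320.0],
              [0.0, 700.0, 240.0],
              [0.0, 0.0, 1.0]])

img1 = cv2.imread("nbi_frame_000.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("nbi_frame_001.png", cv2.IMREAD_GRAYSCALE)

# Feature tracks are more reliable on NBI/chromoendoscopy frames than on white light.
orb = cv2.ORB_create(4000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)
matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)

pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

# Relative pose from the essential matrix, then triangulation of the inliers.
E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, prob=0.999, threshold=1.0)
_, R, t, pose_mask = cv2.recoverPose(E, pts1, pts2, K, mask=mask)

P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([R, t])
inliers = pose_mask.ravel() > 0
pts4d = cv2.triangulatePoints(P1, P2, pts1[inliers].T, pts2[inliers].T)
sparse_cloud = (pts4d[:3] / pts4d[3]).T   # sparse 3D points to be densified by MVS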

282

An enhanced segmentation of blood vessels in retinal images using contourlet  

Microsoft Academic Search

Retinal images acquired using a fundus camera often have low grey-level contrast and low dynamic range. This may seriously affect the automatic segmentation stage and subsequent results; hence, it is necessary to carry out preprocessing to improve image contrast before segmentation. Here we present a new multi-scale method for retinal image contrast enhancement using the Contourlet transform.

S. H. Rezatofighi; A. Roodaki; H. Ahmadi Noubari

2008-01-01

283

Automatic Detection of Vascular Bifurcations and Crossovers from Color Retinal Fundus Images  

Microsoft Academic Search

Identifying the vascular bifurcations and crossovers in the retinal image is helpful for predicting many cardiovascular diseases and can be used as biometric features and for image registration. In this paper, we propose an efficient method to detect vascular bifurcations and crossovers based on the vessel geometrical features. We segment the blood vessels from the color retinal RGB image, and

Alauddin Bhuiyan; Baikunth Nath; Joselito Chua; Kotagiri Ramamohanarao

2007-01-01

284

Imaging of human differentiated 3D neural aggregates using light sheet fluorescence microscopy  

PubMed Central

The development of three-dimensional (3D) cell cultures represents a big step towards a better understanding of cell behavior and disease in a more natural-like environment, providing not only single but multiple cell type interactions in a complex 3D matrix, highly resembling physiological conditions. Light sheet fluorescence microscopy (LSFM) is becoming an excellent tool for fast imaging of such 3D biological structures. We demonstrate the potential of this technique for the imaging of human differentiated 3D neural aggregates in fixed and live samples, namely calcium imaging and cell death processes, showing the power of this imaging modality compared with traditional microscopy. The combination of light sheet microscopy and 3D neural cultures will open the door to more challenging experiments involving drug testing at large scale as well as a better understanding of relevant biological processes in a more realistic environment. PMID:25161607

Gualda, Emilio J.; Sim鉶, Daniel; Pinto, Catarina; Alves, Paula M.; Brito, Catarina

2014-01-01

285

Estimating 3D Hand Pose from a Cluttered Image  

Microsoft Academic Search

A method is proposed that can generate a ranked list of plausible three-dimensional hand configurations that best match an input image. Hand pose estimation is formulated as an image database indexing problem, where the closest matches for an input hand image are retrieved from a large database of synthetic hand images. In contrast to previous approaches, the system can

Vassilis Athitsos; Stan Sclaroff

2003-01-01

286

Motion compensated frequency modulated continuous wave 3D coherent imaging ladar with scannerless architecture.  

PubMed

A principal difficulty of long dwell coherent imaging ladar is its extreme sensitivity to target or platform motion. This paper describes a motion compensated frequency modulated continuous wave 3D coherent imaging ladar method that overcomes this motion sensitivity, making it possible to work with nonstatic targets such as human faces, as well as imaging of targets through refractive turbulence. Key features of this method include scannerless imaging and high range resolution. The reduced motion sensitivity is shown with mathematical analysis and demonstration 3D images. Images of static and dynamic targets are provided demonstrating up to 600×800 pixel imaging with millimeter range resolution. PMID:23262614

Krause, Brian W; Tiemann, Bruce G; Gatt, Philip

2012-12-20

287

Silhouette-based 3-D model reconstruction from multiple images  

Microsoft Academic Search

The goal of this study is to investigate the reconstruction of 3D graphical models of real objects in a controlled imaging environment and present the work done in our group based on silhouette-based reconstruction. Although many parts of the whole system have been well known in the literature and in practice, the main contribution of the paper is that it describes a complete, end-to-end system explained in

Adem Yasar Mülayim; Ulas Yilmaz; Volkan Atalay

2003-01-01

288

Deformable M-Reps for 3D Medical Image Segmentation  

Microsoft Academic Search

M-reps (formerly called DSLs) are a multiscale medial means for modeling and rendering 3D solid geometry. They are particularly well suited to modeling anatomic objects and in particular to capturing prior geometric information effectively in deformable-model segmentation approaches. The representation is based on figural models, which define objects at coarse scale by a hierarchy of figures, each figure generally a

Stephen M. Pizer; P. Thomas Fletcher; Sarang C. Joshi; Andrew Thall; James Z. Chen; Yonatan Fridman; Daniel S. Fritsch; A. Graham Gash; John M. Glotzer; Michael R. Jiroutek; Conglin Lu; Keith E. Muller; Gregg Tracton; Paul A. Yushkevich; Edward L. Chaney

2003-01-01

289

A dual-modal retinal imaging system with adaptive optics  

PubMed Central

An adaptive optics scanning laser ophthalmoscope (AO-SLO) is adapted to provide optical coherence tomography (OCT) imaging. The AO-SLO function is unchanged. The system uses the same light source, scanning optics, and adaptive optics in both imaging modes. The result is a dual-modal system that can acquire retinal images in both en face and cross-section planes at the single cell level. A new spectral shaping method is developed to reduce the large sidelobes in the coherence profile of the OCT imaging when a non-ideal source is used with a minimal introduction of noise. The technique uses a combination of two existing digital techniques. The thickness and position of the traditionally named inner segment/outer segment junction are measured from individual photoreceptors. In-vivo images of healthy and diseased human retinas are demonstrated. PMID:24514529

Meadway, Alexander; Girkin, Christopher A.; Zhang, Yuhua

2013-01-01

290

A Featureless Approach to 3D Polyhedral Building Modeling from Aerial Images  

PubMed Central

This paper presents a model-based approach for reconstructing 3D polyhedral building models from aerial images. The proposed approach exploits some geometric and photometric properties resulting from the perspective projection of planar structures. Data are provided by calibrated aerial images. The novelty of the approach lies in its featurelessness and in its use of direct optimization based on image raw brightness. The proposed framework avoids feature extraction and matching. The 3D polyhedral model is directly estimated by optimizing an objective function that combines an image-based dissimilarity measure and a gradient score over several aerial images. The optimization process is carried out by the Differential Evolution algorithm. The proposed approach is intended to provide more accurate 3D reconstruction than feature-based approaches. Fast 3D model rectification and updating can take advantage of the proposed method. Several results and evaluations of performance from real and synthetic images show the feasibility and robustness of the proposed approach. PMID:22346575

Hammoudi, Karim; Dornaika, Fadi

2011-01-01

291

Non-contrast Enhanced MR Venography Using 3D Fresh Blood Imaging (FBI): Initial Experience  

Microsoft Academic Search

Objective: This study examined the efficacy of 3D-fresh blood imaging (FBI) in patients with venous disease in the iliac region to lower extremity. Materials and Methods: Fourteen patients with venous disease were examined (8 deep venous thrombosis (DVT) and 6 varix) by 3D-FBI and 2D-TOF MRA. All FBI images and 2D-TOF images were evaluated in terms of visualization of the

Kenichi Yokoyama; Toshiaki Nitatori; Sayuki Inaoka; Taro Takahara; Junichi Hachiya

292

Automatic 3D segmentation of ultrasound images using atlas registration and statistical texture prior  

Microsoft Academic Search

We are developing a molecular image-directed, 3D ultrasound-guided, targeted biopsy system for improved detection of prostate cancer. In this paper, we propose an automatic 3D segmentation method for transrectal ultrasound (TRUS) images, which is based on multi-atlas registration and statistical texture prior. The atlas database includes registered TRUS images from previous patients and their segmented prostate surfaces. Three orthogonal Gabor

Xiaofeng Yang; David Schuster; Viraj Master; Peter Nieh; Aaron Fenster; Baowei Fei

2011-01-01

293

Multiview Geometry for Texture Mapping 2D Images Onto 3D Range Data  

Microsoft Academic Search

The photorealistic modeling of large-scale scenes, such as urban structures, requires a fusion of range sensing technology and traditional digital photography. This paper presents a system that integrates multiview geometry and automated 3D registration techniques for texture mapping 2D images onto 3D range data. The 3D range scans and the 2D photographs are respectively used to generate a pair of

Lingyun Liu; Gene Yu; George Wolberg; Siavash Zokai

2006-01-01

294

Hyperspectral Image Lossless Compression Using the 3D Set Partitioned Embedded Zero Block Coding Algorithm

Microsoft Academic Search

In this paper, we propose a hyperspectral image lossless compression coder based on three-dimensional set partitioned embedded zero block coding (3D SPEZBC) algorithm. This coder adopts the 3D integer wavelet packet transform to decorrelate and the set-based partitioning zero block coding to process bitplane coding. It not only provides the same excellent coding performances as the 3D EZBC algorithm, but

Ying Hou; Guizhong Liu

2008-01-01

295

Four-Dimensional Imaging: Computer Visualization of 3D Movements in Living Specimens  

Microsoft Academic Search

The study of many biological processes requires the analysis of three-dimensional (3D) structures that change over time. Optical sectioning techniques can provide 3D data from living specimens; however, when 3D data are collected over a period of time, the quantity of image information produced leads to difficulties in interpretation. A computer-based system is described that permits the analysis and archiving

C. Thomas; P. Devries; J. Hardin; J. White

1996-01-01

296

Composite finite elements for 3D image based computing  

Microsoft Academic Search

We present an algorithmical concept for modeling and simulation with partial differential equations (PDEs) in image based computing where the computational geometry is defined through previously segmented image data. Such problems occur in applications from biology and medicine where the underlying image data has been acquired through, e.g. computed tomography (CT), magnetic resonance imaging (MRI) or electron microscopy (EM). Based

Florian Liehr; Tobias Preusser; Martin Rumpf; Stefan Sauter; Lars Ole Schwen

2009-01-01

297

Determining 3D Flow Fields via Multi-camera Light Field Imaging  

PubMed Central

In the field of fluid mechanics, the resolution of computational schemes has outpaced experimental methods and widened the gap between predicted and observed phenomena in fluid flows. Thus, a need exists for an accessible method capable of resolving three-dimensional (3D) data sets for a range of problems. We present a novel technique for performing quantitative 3D imaging of many types of flow fields. The 3D technique enables investigation of complicated velocity fields and bubbly flows. Measurements of these types present a variety of challenges to the instrument. For instance, optically dense bubbly multiphase flows cannot be readily imaged by traditional, non-invasive flow measurement techniques due to the bubbles occluding optical access to the interior regions of the volume of interest. By using Light Field Imaging we are able to reparameterize images captured by an array of cameras to reconstruct a 3D volumetric map for every time instance, despite partial occlusions in the volume. The technique makes use of an algorithm known as synthetic aperture (SA) refocusing, whereby a 3D focal stack is generated by combining images from several cameras post-capture 1. Light Field Imaging allows for the capture of angular as well as spatial information about the light rays, and hence enables 3D scene reconstruction. Quantitative information can then be extracted from the 3D reconstructions using a variety of processing algorithms. In particular, we have developed measurement methods based on Light Field Imaging for performing 3D particle image velocimetry (PIV), extracting bubbles in a 3D field and tracking the boundary of a flickering flame. We present the fundamentals of the Light Field Imaging methodology in the context of our setup for performing 3DPIV of the airflow passing over a set of synthetic vocal folds, and show representative results from application of the technique to a bubble-entraining plunging jet. PMID:23486112

Truscott, Tadd T.; Belden, Jesse; Nielson, Joseph R.; Daily, David J.; Thomson, Scott L.

2013-01-01
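
A minimal sketch of the synthetic aperture refocusing step described in the record above: each camera's image is shifted by its disparity for a chosen depth plane and the shifted views are averaged, so structure on that plane reinforces while occluders blur out. The camera offsets, depth-plane scaling and synthetic data are assumptions, and real use would involve calibrated reprojection rather than pure translations.

import numpy as np
from scipy.ndimage import shift

def synthetic_aperture_refocus(images, offsets, depth_planes):
    """images: list of 2D views; offsets: per-camera (dx, dy) disparity for a unit depth plane."""
    focal_stack = []
    for d in depth_planes:
        shifted = [shift(img, (d * dy, d * dx), order=1, mode="nearest")
                   for img, (dx, dy) in zip(images, offsets)]
        focal_stack.append(np.mean(shifted, axis=0))   # averaging = synthetic aperture
    return np.stack(focal_stack)

# Synthetic example: a 3x3 camera array and 13 candidate depth planes.
rng = np.random.default_rng(0)
images = [rng.random((128, 128)) for _ in range(9)]
offsets = [(x, y) for y in (-1, 0, 1) for x in (-1, 0, 1)]
stack = synthetic_aperture_refocus(images, offsets, np.linspace(-3.0, 3.0, 13))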

298

A new approach towards image based virtual 3D city modeling by using close range photogrammetry  

NASA Astrophysics Data System (ADS)

A 3D city model is a digital representation of the Earth's surface and its related objects, such as buildings, trees, vegetation, and other man-made features belonging to an urban area. The demand for 3D city modeling is increasing steadily for various engineering and non-engineering applications. Generally, three main image-based approaches are used for virtual 3D city model generation: sketch-based modeling, procedural grammar-based modeling, and close range photogrammetry-based modeling. The literature shows that, to date, there is no complete solution for creating a complete 3D city model from images, and these image-based methods also have limitations. This paper presents a new approach towards image-based virtual 3D city modeling using close range photogrammetry. The approach is divided into three parts: the data acquisition process, 3D data processing, and the data combination process. In the data acquisition process, a multi-camera setup was developed and used for video recording of an area, image frames were created from the video data, and the minimum required and most suitable video image frames were selected for 3D processing. In the second part, a 3D model of the area was created based on close range photogrammetric principles and computer vision techniques. In the third part, this 3D model was exported for adding and merging with other pieces of the larger area; scaling and alignment of the 3D model were performed, and after texturing and rendering, a final photo-realistic textured 3D model was created and converted into a walk-through model or movie form. Most of the processing steps are automatic, so the method is cost-effective and less laborious, and the accuracy of the model is good. For this research work, the study area is the campus of the Department of Civil Engineering, Indian Institute of Technology, Roorkee, which acts as a prototype for a city. Aerial photography is restricted in many countries and high-resolution satellite images are costly; the proposed method is based only on simple video recording of the area and is therefore suitable for 3D city modeling. A photo-realistic, scalable, geo-referenced virtual 3D city model is useful for various kinds of applications, such as navigation planning, tourism, disaster management, transportation, municipal, urban and environmental management, and the real-estate industry. Thus this study provides a good roadmap for the geomatics community to create photo-realistic virtual 3D city models using close range photogrammetry.

Singh, S. P.; Jain, K.; Mandla, V. R.

2014-05-01

299

A Workstation for Interactive Display and Quantitative Analysis of 3-D and 4-D Biomedical Images  

PubMed Central

The capability to extract objective and quantitatively accurate information from 3-D radiographic biomedical images has not kept pace with the capabilities to produce the images themselves. This is rather an ironic paradox, since on the one hand the new 3-D and 4-D imaging capabilities promise significant potential for providing greater specificity and sensitivity (i.e., precise objective discrimination and accurate quantitative measurement of body tissue characteristics and function) in clinical diagnostic and basic investigative imaging procedures than ever possible before, but on the other hand, the momentous advances in computer and associated electronic imaging technology which have made these 3-D imaging capabilities possible have not been concomitantly developed for full exploitation of these capabilities. Therefore, we have developed a powerful new microcomputer-based system which permits detailed investigations and evaluation of 3-D and 4-D (dynamic 3-D) biomedical images. The system comprises a special workstation to which all the information in a large 3-D image data base is accessible for rapid display, manipulation, and measurement. The system provides important capabilities for simultaneously representing and analyzing both structural and functional data and their relationships in various organs of the body. This paper provides a detailed description of this system, as well as some of the rationale, background, theoretical concepts, and practical considerations related to system implementation.

Robb, R.A.; Heffeman, P.B.; Camp, J.J.; Hanson, D.P.

1986-01-01

300

Recovering 3D Shape and Motion from Image Streams using Non-Linear Least Squares  

Microsoft Academic Search

The simultaneous recovery of 3D shape and motion from image sequences is one of the more difficult problems in computer vision. Classical approaches to the problem rely on using algebraic techniques to solve for these unknowns given two or more images. More recently, a batch analysis of image streams (the temporal tracks of distinguishable image features) under orthography has resulted in highly accurate reconstructions.

Richard Szeliski; Sing Bing Kang

1993-01-01

301

Intensity-based segmentation and visualization of cells in 3D microscopic images using the GPU  

NASA Astrophysics Data System (ADS)

3D microscopy images contain vast amounts of data, rendering 3D microscopy image processing time-consuming and laborious on a central processing unit (CPU). To cope with this, many users crop a small region of interest (ROI) from the input image. Although this reduces cost and time, there are drawbacks at the image processing level: the selected ROI strongly depends on the user and original image information is lost. To mitigate these problems, we developed a 3D microscopy image processing tool on a graphics processing unit (GPU). Our tool provides various efficient automatic thresholding methods to achieve intensity-based segmentation of 3D microscopy images, and users can select the algorithm to be applied. Further, the tool provides visualization of segmented volume data and allows the scale, translation, etc. to be set using a keyboard and mouse. However, the rapidly visualized 3D objects still need to be analyzed to obtain information useful to biologists, and this analysis requires quantitative data. Therefore, we label the segmented 3D objects within all 3D microscopy images and obtain quantitative information on each labeled object, which can be used as classification features. A user can select an object to be analyzed; our tool displays the selected object in a new window so that more details of the object can be observed. Finally, we validate the effectiveness of our tool by comparing CPU and GPU processing times under matched specifications and configurations.

Kang, Mi-Sun; Lee, Jeong-Eom; Jeon, Woong-ki; Choi, Heung-Kook; Kim, Myoung-Hee

2013-02-01
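
The record above describes intensity-based thresholding of 3D microscopy volumes followed by labelling of the segmented objects and extraction of per-object quantitative data. The sketch below reproduces that sequence on the CPU with scikit-image (the record's tool runs on the GPU and offers several selectable threshold methods); the synthetic volume and the Otsu choice are assumptions.

import numpy as np
from skimage import filters, measure

def segment_and_quantify(volume):
    """volume: 3D microscopy stack (z, y, x). Returns per-object quantitative records."""
    threshold = filters.threshold_otsu(volume)      # one of several automatic methods
    labels = measure.label(volume > threshold, connectivity=1)
    props = measure.regionprops(labels, intensity_image=volume)
    return [{"label": p.label, "voxels": p.area, "centroid": p.centroid,
             "mean_intensity": p.mean_intensity} for p in props]

# Synthetic stack with one bright object; real data would be a loaded image volume.
rng = np.random.default_rng(1)
volume = rng.normal(100.0, 10.0, (32, 64, 64))
volume[10:20, 20:40, 20:40] += 80.0
for obj in segment_and_quantify(volume):
    print(obj["label"], obj["voxels"], obj["mean_intensity"])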

302

3D imaging from theory to practice: the Mona Lisa story  

NASA Astrophysics Data System (ADS)

The warped poplar panel and the technique developed by Leonardo to paint the Mona Lisa present a unique research and engineering challenge for the design of a complete optical 3D imaging system. This paper discusses the solution developed to precisely measure in 3D the world's most famous painting despite its highly contrasted paint surface and reflective varnish. The discussion focuses on the opto-mechanical design and the complete portable 3D imaging system used for this unique occasion. The challenges associated with obtaining 3D color images at a resolution of 0.05 mm and a depth precision of 0.01 mm are illustrated by exploring the virtual 3D model of the Mona Lisa.

Blais, Francois; Cournoyer, Luc; Beraldin, J.-Angelo; Picard, Michel

2008-08-01

303

Nonlinear multi-scale complex wavelet diffusion based speckle reduction approach for 3D ultrasound images  

NASA Astrophysics Data System (ADS)

3D ultrasound imaging has advantages as a non-invasive and a faster examination procedure capable of displaying volume information in real time. However, its resolution is affected by speckle noise. Speckle reduction and feature preservation are seemingly opposing goals. In this paper, a nonlinear multi-scale complex wavelet diffusion based algorithm for 3D ultrasound imaging is introduced. Speckle is suppressed and sharp edges are preserved by applying iterative multi-scale diffusion on the complex wavelet coefficients. The proposed method is validated using synthetic, real phantom, and clinical 3D images, and it is found to outperform other methods in both qualitative and quantitative measures.

Uddin, Muhammad S.; Tahtali, Murat; Lambert, Andrew J.; Pickering, Mark R.; Marchese, Margaret; Stuart, Iain

2014-09-01

304

Processing sequence for non-destructive inspection based on 3D terahertz images  

NASA Astrophysics Data System (ADS)

In this paper we present an innovative data and image processing sequence for non-destructive inspection from 3D terahertz (THz) images. We develop all the steps, starting from a 3D tomographic reconstruction of a sample from its radiographs acquired with a monochromatic millimetre-wave imaging system. An automated segmentation then provides the different volumes of interest (VOI) composing the sample. Finally, 3D visualization and dimensional measurements are performed on these VOI separately in order to provide accurate non-destructive testing (NDT) of the studied sample. The sequence is implemented in a single software package and validated through the analysis of different objects.

Balacey, H.; Perraud, Jean-Baptiste; Bou Sleiman, J.; Guillet, Jean-Paul; Recur, B.; Mounaix, P.

2014-11-01

305

Validating retinal fundus image analysis algorithms: issues and a proposal.  

PubMed

This paper concerns the validation of automatic retinal image analysis (ARIA) algorithms. For reasons of space and consistency, we concentrate on the validation of algorithms processing color fundus camera images, currently the largest section of the ARIA literature. We sketch the context (imaging instruments and target tasks) of ARIA validation, summarizing the main image analysis and validation techniques. We then present a list of recommendations focusing on the creation of large repositories of test data created by international consortia, easily accessible via moderated Web sites, including multicenter annotations by multiple experts, specific to clinical tasks, and capable of running submitted software automatically on the data stored, with clear and widely agreed-on performance criteria, to provide a fair comparison. PMID:23794433

Trucco, Emanuele; Ruggeri, Alfredo; Karnowski, Thomas; Giancardo, Luca; Chaum, Edward; Hubschman, Jean Pierre; Al-Diri, Bashir; Cheung, Carol Y; Wong, Damon; Abràmoff, Michael; Lim, Gilbert; Kumar, Dinesh; Burlina, Philippe; Bressler, Neil M; Jelinek, Herbert F; Meriaudeau, Fabrice; Quellec, Gwénolé; Macgillivray, Tom; Dhillon, Bal

2013-05-01

306

3D reconstructions with pixel-based images are made possible by digitally clearing plant and animal tissue  

Technology Transfer Automated Retrieval System (TEKTRAN)

Reconstruction of 3D images from a series of 2D images has been restricted by the limited capacity to decrease the opacity of surrounding tissue. Commercial software that allows color-keying and manipulation of 2D images in true 3D space allowed us to produce 3D reconstructions from pixel based imag...

307

Mutual information as a measure of image quality for 3D dynamic lung imaging with EIT.  

PubMed

We report on a pilot study of dynamic lung electrical impedance tomography (EIT) at the University of Manchester. Low-noise EIT data at 100 frames per second were obtained from healthy male subjects during controlled breathing, followed by magnetic resonance imaging (MRI) subsequently used for spatial validation of the EIT reconstruction. The torso surface in the MR image and electrode positions obtained using MRI fiducial markers informed the construction of a 3D finite element model extruded along the caudal-distal axis of the subject. Small changes in the boundary that occur during respiration were accounted for by incorporating the sensitivity with respect to boundary shape into a robust temporal difference reconstruction algorithm. EIT and MRI images were co-registered using the open source medical imaging software, 3D Slicer. A quantitative comparison of quality of different EIT reconstructions was achieved through calculation of the mutual information with a lung-segmented MR image. EIT reconstructions using a linear shape correction algorithm reduced boundary image artefacts, yielding better contrast of the lungs, and had 10% greater mutual information compared with a standard linear EIT reconstruction. PMID:24710978
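The mutual-information comparison used above as a quality measure can be computed from a joint histogram; the sketch below assumes the EIT reconstruction and the lung-segmented MR image are already co-registered numpy arrays of the same shape, and the bin count is an arbitrary choice.

```python
# Minimal sketch: mutual information between two co-registered images.
import numpy as np

def mutual_information(img_a, img_b, bins=32):
    joint_hist, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    p_ab = joint_hist / joint_hist.sum()          # joint probability
    p_a = p_ab.sum(axis=1, keepdims=True)         # marginals
    p_b = p_ab.sum(axis=0, keepdims=True)
    nz = p_ab > 0
    return float(np.sum(p_ab[nz] * np.log(p_ab[nz] / (p_a @ p_b)[nz])))
```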

Crabb, M G; Davidson, J L; Little, R; Wright, P; Morgan, A R; Miller, C A; Naish, J H; Parker, G J M; Kikinis, R; McCann, H; Lionheart, W R B

2014-05-01

308

Validation of Retinal Image Registration Algorithms by a Projective Imaging Distortion Model  

PubMed Central

Fundus camera imaging of the retina is widely used to document ophthalmologic disorders including diabetic retinopathy, glaucoma, and age-related macular degeneration. The retinal images typically have a limited field of view, due mainly to the curvedness of the human retina, so multiple images are joined together using image registration techniques to form a montage with a larger field of view. A variety of methods for retinal image registration have been proposed, but evaluating such methods objectively is difficult due to the lack of a reference standard for the true alignment of the individual images that make up the montage. A method of generating a simulated retinal image set by modeling geometric distortions due to the eye geometry and the image acquisition process is described in this paper. We also present a validation tool for any retinal image registration method by tracing back the distortion path and assessing the geometric misalignment in the coordinate system of the reference standard. A quantitative comparison of different registration methods is given in the experiments, so registration performance is evaluated in an objective manner. PMID:18003507

Lee, Sangyeol; Abràmoff, Michael D.; Reinhardt, Joseph M.

2008-01-01

309

Extracting Cylinders in Full 3D Data Using a Random Sampling Method and the Gaussian Image  

Microsoft Academic Search

This paper presents a new method for extracting cylinders from an unorganized set of 3D points. The originality of this approach is to separate the extraction problem into two distinct steps. The first step consists in extracting a constrained plane in the Gaussian image. This yields a subset of 3D points along with a direction. In the second step,

Thomas Chaperon; François Goulette

2001-01-01

310

Fully 3D Monte Carlo image reconstruction in SPECT using functional regions  

E-print Network

The fully 3D Monte Carlo (F3DMC) reconstruction technique consists in calculating the system matrix used for tomographic reconstruction using Monte Carlo simulations. The inverse problem of tomographic reconstruction

Paris-Sud XI, Université de

311

Computerized analysis of 3-D pulmonary nodule images in surrounding and internal structure feature spaces  

Microsoft Academic Search

We are developing computerized feature extraction and classification methods to analyze malignant and benign pulmonary nodules in three-dimensional (3-D) thoracic CT images. Surrounding structure features were designed to characterize the relationships between nodules and their surrounding structures such as vessels, bronchi, and pleura. Internal structure features were derived from CT density and 3-D curvatures to characterize the inhomogeneity of CT

Yoshiki Kawata; Noboru Niki; Hironobu Ohmatsu; Masahiko Kusumoto; Ryutaro Kakinuma; Kensaku Mori; Hiroyuki Nishiyama; Kenji Eguchi; Masahiro Kaneko; Noriyuki Moriyama

2001-01-01

312

Finite Element Methods for Active Contour Models and Balloons for 2D and 3D Images  

Microsoft Academic Search

The use of energy-minimizing curves, known as "snakes," to extract features of interest in images was introduced by Kass, Witkin and Terzopoulos [23]. A balloon model was introduced in [12] as a way to generalize and solve some of the problems encountered with the original method. We present a 3D generalization of the balloon model as a 3D deformable

Laurent D. Cohen; Isaac Cohen

1991-01-01

313

Fast Joint Estimation of Silhouettes and Dense 3D Geometry from Multiple Images  

E-print Network

We propose a probabilistic formulation of joint silhouette and dense 3D geometry estimation from multiple images. Rather than estimating silhouettes separately in order to construct a 3D surface consistent with the estimated silhouettes, we compute the most

Teschner, Matthias

314

Multiview 3D reconstruction of the archaeological site at Weymouth from image series  

Microsoft Academic Search

Multiview (n-view or multiple view) 3D reconstruction is the computationally complex process by which a full 3D model is derived from a series of overlapping images. It is based on research in the field of computer vision, which in turn relies on older methods from photogrammetry. This report presents a multiview reconstruction tool chain composed from various freely available, open

Benjamin Ducke; David Score; Joseph Reeves

2011-01-01

315

3-D Target Location from Stereoscopic SAR Images  

SciTech Connect

SAR range-Doppler images are inherently 2-dimensional. Targets with a height offset lay over onto offset range and azimuth locations. Just which image locations are laid upon depends on the imaging geometry, including depression angle, squint angle, and target bearing. This is the well known layover phenomenon. Images formed with different aperture geometries will exhibit different layover characteristics. These differences can be exploited to ascertain target height information, in a stereoscopic manner. Depending on the imaging geometries, height accuracy can be on the order of horizontal position accuracies, thereby rivaling the best IFSAR capabilities in fine resolution SAR images. All that is required for this to work are two distinct passes with suitably different geometries from any plain old SAR.

DOERRY,ARMIN W.

1999-10-01

316

Thermal Plasma Imager (TPI): An Imaging Thermal Ion Mass and 3-D Velocity Analyzer  

NASA Astrophysics Data System (ADS)

The Thermal Plasma Imager (TPI) is an imaging thermal ion mass and 3-dimensional (3-D) velocity analyzer. It is designed to measure the instantaneous mass composition and detailed, mass-resolved, 3-dimensional, velocity distributions of thermal-energy (0.5-50 eV/q) ions on a 3-axis stabilized spacecraft. It consists of a pair of semi-toroidal deflection and fast-switching time-of-flight (TOF) electrodes, a hemispherical electrostatic analyzer (HEA), and a micro-channel plate (MCP) detector. It uses the TOF electrodes to clock the flight times of individual incident ions, and the HEA to focus ions of a given energy-per-charge and incident angle (elevation and azimuth) onto a single point on the MCP. The TOF/HEA combination produces an instantaneous and mass-resolved "image" of a 2-D cone of the 3-D velocity distribution for each ion species, and combines a sequence of concentric 2-D conical samples into a 3-D distribution covering 360° in azimuth and 120° in elevation. It is currently under development for the Enhanced Polar Outflow Probe (e-POP) and Planet-C Venus missions. It is an improved, "3-dimensional" version of the SS520-2 Thermal Suprathermal Analyzer (TSA), which samples ions in its entrance aperture plane and uses the spacecraft spin to achieve 3-D ion sampling. In this paper, we present its detailed design characteristics and prototype instrument performance, and compare these with the ion velocity measurement performances from its 2-D TSA predecessor on SS520-2.

Yau, A. W.; Amerl, P. V.; King, E. P.; Miyake, W.; Abe, T.

2003-04-01

317

Causes and corrections for 3D image artifact in HCT  

Microsoft Academic Search

In recent years, helical computed tomography (HCT) has gained significant popularity in clinical applications. The advantages of HCT include the capability of scanning a complete anatomical volume in a single breath-hold and the capability of generating images at arbitrary locations. Because of the inherent inconsistency in the helical data set, however, recent studies have unveiled various HCT-related image artifacts. In

Jiang Hsieh

1999-01-01

318

Triangulation Based 3D Laser Imaging for Fracture Orientation Analysis  

Microsoft Academic Search

Laser imaging has recently been identified as a potential tool for rock mass characterization. This contribution focuses on the application of triangulation based, short-range laser imaging to determine fracture orientation and surface texture. This technology measures the distance to the target by triangulating the projected and reflected laser beams, and also records the reflection intensity. In this study, we acquired

J. Mah; S. Claire; M. Steve

2009-01-01

319

3D Stereoscopic Image Pairs by Depth-Map Generation  

Microsoft Academic Search

This work presents a new unsupervised technique aimed at generating stereoscopic views by estimating depth information from a single input image. From the single input image, vanishing lines/points are extracted using a few heuristics to generate an approximated depth map. The depth map is then used to generate stereo pairs. The overall method is well suited for real time application and

Sebastiano Battiato; Alessandro Capra; Salvatore Curti; M. La Cascia

2004-01-01

320

Bayesian 3D Modeling from Images Using Multiple Depth Maps  

Microsoft Academic Search

This paper addresses the problem of reconstructing the geometry and albedo of a Lambertian scene, given some fully calibrated images acquired with wide baselines. In order to completely model the input data, we propose to represent the scene as a set of colored depth maps, one per input image. We formulate the problem as a Bayesian MAP problem which leads

Pau Gargallo; Peter F. Sturm

2005-01-01

321

Retinal layer segmentation of macular OCT images using boundary classification  

PubMed Central

Optical coherence tomography (OCT) has proven to be an essential imaging modality for ophthalmology and is proving to be very important in neurology. OCT enables high resolution imaging of the retina, both at the optic nerve head and the macula. Macular retinal layer thicknesses provide useful diagnostic information and have been shown to correlate well with measures of disease severity in several diseases. Since manual segmentation of these layers is time consuming and prone to bias, automatic segmentation methods are critical for full utilization of this technology. In this work, we build a random forest classifier to segment eight retinal layers in macular cube images acquired by OCT. The random forest classifier learns the boundary pixels between layers, producing an accurate probability map for each boundary, which is then processed to finalize the boundaries. Using this algorithm, we can accurately segment the entire retina contained in the macular cube to an accuracy of at least 4.3 microns for any of the nine boundaries. Experiments were carried out on both healthy and multiple sclerosis subjects, with no difference in the accuracy of our algorithm found between the groups. PMID:23847738
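A minimal sketch of the boundary-classification idea is given below: a random forest assigns per-pixel boundary probabilities, and each boundary is taken as the most probable row in every A-scan (column). The hand-crafted features and the training arrays train_X/train_y are illustrative assumptions, not the paper's exact pipeline.

```python
# Minimal sketch: per-pixel boundary classification on an OCT B-scan.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def pixel_features(bscan):
    gy, gx = np.gradient(bscan.astype(float))                    # intensity gradients
    rows = np.repeat(np.arange(bscan.shape[0])[:, None], bscan.shape[1], axis=1)
    return np.stack([bscan.ravel(), gy.ravel(), gx.ravel(), rows.ravel()], axis=1)

def segment_boundary(bscan, clf, boundary_class):
    # boundary_class: column index into clf.classes_ for the wanted boundary label
    proba = clf.predict_proba(pixel_features(bscan))[:, boundary_class]
    prob_map = proba.reshape(bscan.shape)
    return prob_map.argmax(axis=0)                               # one boundary row per A-scan

# clf = RandomForestClassifier(n_estimators=100).fit(train_X, train_y)  # training data assumed
```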

Lang, Andrew; Carass, Aaron; Hauser, Matthew; Sotirchos, Elias S.; Calabresi, Peter A.; Ying, Howard S.; Prince, Jerry L.

2013-01-01

322

Light sheet adaptive optics microscope for 3D live imaging  

NASA Astrophysics Data System (ADS)

We report on the incorporation of adaptive optics (AO) into the imaging arm of a selective plane illumination microscope (SPIM). SPIM has recently emerged as an important tool for life science research due to its ability to deliver high-speed, optically sectioned, time-lapse microscope images from deep within in vivo samples. SPIM provides a very interesting system for the incorporation of AO as the illumination and imaging paths are decoupled and AO may be useful in both paths. In this paper, we report the use of AO applied to the imaging path of a SPIM, demonstrating significant improvement in the image quality of a live GFP-labeled transgenic zebrafish embryo heart using a modal, wavefront-sensorless approach and a heart synchronization method. These experimental results are linked to a computational model showing that significant aberrations are produced by the tube holding the sample, in addition to the aberration from the biological sample itself.

Bourgenot, C.; Taylor, J. M.; Saunter, C. D.; Girkin, J. M.; Love, G. D.

2013-02-01

323

The Mathematical Foundations of 3D Compton Scatter Emission Imaging  

PubMed Central

The mathematical principles of tomographic imaging using detected (unscattered) X- or gamma-rays are based on the two-dimensional Radon transform and many of its variants. In this paper, we show that two new generalizations, called conical Radon transforms, are related to three-dimensional imaging processes based on detected Compton scattered radiation. The first class of conical Radon transform has been introduced recently to support imaging principles of collimated detector systems. The second class is new and is closely related to the Compton camera imaging principles and invertible under special conditions. As they are poised to play a major role in future designs of biomedical imaging systems, we present an account of their most important properties which may be relevant for active researchers in the field. PMID:18382608
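For reference, the classical two-dimensional Radon transform that these conical transforms generalize can be written in its standard form (not quoted from the paper) as

\[
(\mathcal{R}f)(\theta, s) = \int_{\mathbb{R}^2} f(\mathbf{x})\,\delta\big(s - \mathbf{x}\cdot\boldsymbol{\theta}\big)\,d\mathbf{x},
\qquad \boldsymbol{\theta} = (\cos\theta,\ \sin\theta),\quad s \in \mathbb{R},
\]

i.e. the integral of \(f\) over the line at signed distance \(s\) from the origin with normal direction \(\boldsymbol{\theta}\).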

Truong, T. T.; Nguyen, M. K.; Zaidi, H.

2007-01-01

324

Adaptive optics with a micromachined membrane deformable mirror for high resolution retinal imaging  

Microsoft Academic Search

The resolution of conventional retinal imaging technologies is limited by the optics of the human eye. In this dissertation, the aberrations of the eye and their compensation techniques are investigated for the purpose of high-resolution retinal imaging. Both computer modeling and adaptive optics experiments with the novel micromachined membrane deformable mirror (MMDM) device are performed. First, a new aspherical computer

Lijun Zhu

1999-01-01

325

Temporal registration for low-quality retinal images of the murine eye  

Microsoft Academic Search

This paper presents an investigation into different approaches for segmentation-driven retinal image registration. This constitutes an intermediate step towards detecting changes occurring in the topography of blood vessels, which are caused by disease progression. A temporal dataset of retinal images was collected from small animals (i.e. mice). The perceived low quality of the dataset employed favoured the implementation of a

Lenos Andreou; Alin Achim

2010-01-01

326

3D printing of intracranial artery stenosis based on the source images of magnetic resonance angiograph  

PubMed Central

Background and purpose Three-dimensional (3D) printing techniques for brain diseases have not been widely studied. We attempted to 'print' segments of intracranial arteries based on magnetic resonance imaging. Methods Three-dimensional magnetic resonance angiography (MRA) was performed on two patients with middle cerebral artery (MCA) stenosis. Using scale-adaptive vascular modeling, 3D vascular models were constructed from the MRA source images. The magnified (ten times) regions of interest (ROI) of the stenotic segments were selected and fabricated by a 3D printer with a resolution of 30 μm. A survey of 8 clinicians was performed to evaluate the accuracy of the 3D printing results as compared with the MRA findings (4 grades, grade 1: consistent with MRA and provides additional visual information; grade 2: consistent with MRA; grade 3: not consistent with MRA; grade 4: not consistent with MRA and provides probably misleading information). If a 3D printed vessel segment was ideally matched to the MRA findings (grade 2 or 1), the 3D printing was defined as successful. Results Seven responders marked "grade 1" for the 3D printing results, while one marked "grade 4". Therefore, 87.5% of the clinicians considered the 3D printing successful. Conclusions Our pilot study confirms the feasibility of using 3D printing techniques in the research field of intracranial artery diseases. Further investigations are warranted to optimize this technique and translate it into clinical practice. PMID:25333049

Liu, Jia; Li, Ming-Li; Sun, Zhao-Yong; Chen, Jie

2014-01-01

327

Semi-implicit finite volume scheme for image processing in 3D cylindrical geometry  

NASA Astrophysics Data System (ADS)

Nowadays, 3D echocardiography is a well-known technique in medical diagnosis. Inexpensive echocardiographic acquisition devices are applied to scan 2D slices rotated along a prescribed direction. Then the discrete 3D image information is given on a cylindrical grid. Usually, this original discrete image intensity function is interpolated to a uniform rectangular grid and then numerical schemes for 3D image processing operations (e.g. nonlinear smoothing) in the uniform rectangular geometry are used. However, due to the generally large amount of noise present in echocardiographic images, the interpolation step can yield undesirable results. In this paper, we avoid this step and suggest a 3D finite volume method for image selective smoothing directly in the cylindrical image geometry. Specifically, we study a semi-implicit 3D cylindrical finite volume scheme for solving a Perona-Malik-type nonlinear diffusion equation and apply the scheme to 3D cylindrical echocardiographic images. The L∞-stability and convergence of the scheme to the weak solution of the regularized Perona-Malik equation is proved.
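For orientation, the regularized Perona-Malik model mentioned above is commonly written as follows; the diffusivity g shown is the usual choice and the exact form used in the paper may differ:

\[
\partial_t u = \nabla \cdot \Big( g\big(|\nabla G_\sigma * u|\big)\,\nabla u \Big),
\qquad g(s) = \frac{1}{1 + (s/K)^2},
\]

where \(G_\sigma\) is a Gaussian smoothing kernel and \(K > 0\) is an edge-sensitivity parameter; the regularization by \(G_\sigma\) is what makes the weak-solution analysis tractable.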

Mikula, Karol; Sgallari, Fiorella

2003-12-01

328

A low-cost defocus blur module for video rate quantified 3D imaging  

E-print Network

Existing three-dimensional surface imaging systems are expensive, difficult to use, time consuming, and do not always provide the best accuracy or resolution. By using an offset aperture on a rotating disc, the 3D Monocular ...

Ho, Leeway, 1982-

2004-01-01

329

Bio-medical imaging: Localization of main structures in retinal fundus images  

NASA Astrophysics Data System (ADS)

Retinal fundus images have three main structures: the optic disk, the fovea and the blood vessels. By examining fundus images, an ophthalmologist can diagnose various clinical disorders of the eye and the body, typically indicated by changes in the diameter, area, branching angles and tortuosity of the three main retinal structures. Knowledge of the optic disk position is an important diagnostic index for many diseases related to the retina. In this paper, localization of the optic disc is discussed. Optic disk detection is based on morphological operations and smoothing filters. Blood vessels are extracted from the green component of a colour retinal image with the help of a median filter. Maximum intensity values are validated against the blood vessels to localize the optic disk. The proposed method has shown significant improvements in results.
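The localization idea described above can be sketched as follows; the green-channel processing and median filtering follow the description, while the window size and the simple argmax criterion are illustrative simplifications rather than the authors' full method.

```python
# Minimal sketch: candidate optic-disc location from the green channel of a fundus image.
import numpy as np
from scipy.ndimage import median_filter

def localize_optic_disc(rgb_fundus, smooth=25):
    green = rgb_fundus[..., 1].astype(float)
    smoothed = median_filter(green, size=smooth)                  # suppress thin, dark vessels
    return np.unravel_index(np.argmax(smoothed), smoothed.shape)  # (row, col) of brightest region
```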

Basit, A.; Egerton, S. J.

2013-12-01

330

Binary 3D image interpolation algorithm based global information and adaptive curves fitting  

NASA Astrophysics Data System (ADS)

Interpolation is a necessary processing step in 3-D reconstruction because of non-uniform resolution. Conventional interpolation methods simply use two slices to obtain the missing slices between them. When a key slice is missing, those methods may fail to recover it using only local information. Moreover, the surface of a 3D object, especially for medical tissues, may be highly complicated, so a single interpolation curve can hardly yield a high-quality 3D image. We propose a novel binary 3D image interpolation algorithm. The proposed algorithm takes advantage of global information. It chooses the best curve adaptively from many candidate curves based on the complexity of the surface of the 3D object. The results of this algorithm are compared with other interpolation methods on artificial objects and a real breast cancer tumor to demonstrate its excellent performance.

Zhang, Tian-yi; Zhang, Jin-hao; Guan, Xiang-chen; Li, Qiu-ping; He, Meng

2013-08-01

331

In vivo Human 3D Cardiac Fibre Architecture: Reconstruction Using Curvilinear Interpolation of Diffusion Tensor Images  

Microsoft Academic Search

In vivo imaging of the cardiac 3D fibre architecture is still a challenge, but it would have many clinical applications, for instance to better understand pathologies and to follow up remodelling after therapy. Recently, cardiac MRI enabled the acquisition of Diffusion Tensor images (DTI) of 2D slices. We propose a method for the complete 3D reconstruction of cardiac fibre architecture

Nicolas Toussaint; Maxime Sermesant; Christian T. Stoeck; Sebastian Kozerke; Philip G. Batchelor

2010-01-01

332

Reconstructing 3D Human Body Pose from Stereo Image Sequences Using Hierarchical Human Body Model Learning  

Microsoft Academic Search

This paper presents a novel method for reconstructing a 3D human body pose using depth information based on top-down learning. The human body pose is represented by a linear combination of prototypes of 2D depth images and their corresponding 3D body models in terms of the position of a predetermined set of joints. In a 2D depth image, the optimal

Hee-deok Yang; Seong-whan Lee

2006-01-01

333

Remarks on 3D human body posture reconstruction from multiple camera images  

Microsoft Academic Search

This paper proposes a human body posture estimation method based on back projection of human silhouette images extracted from multi-camera images. To achieve real-time 3D human body posture estimation, a server-client system is introduced into the multi-camera system, and improvements of the background subtraction and back projection are investigated. To evaluate the feasibility of the proposed method, 3D estimation experiments of

Yusuke Nagasawa; Takako Ohta; Yukiko Mutsuji; Kazuhiko Takahashi; Masafumi Hashimoto

2007-01-01

334

3D object-oriented image analysis in 3D geophysical modelling: Analysing the central part of the East African Rift System  

NASA Astrophysics Data System (ADS)

Non-uniqueness of satellite gravity interpretation has traditionally been reduced by using a priori information from seismic tomography models. This reduction in the non-uniqueness has been based on velocity-density conversion formulas or user interpretation of the 3D subsurface structures (objects) based on the seismic tomography models and then forward modelling these objects. However, this form of object-based approach has been done without a standardized methodology on how to extract the subsurface structures from the 3D models. In this research, a 3D object-oriented image analysis (3D OOA) approach was implemented to extract the 3D subsurface structures from geophysical data. The approach was applied on a 3D shear wave seismic tomography model of the central part of the East African Rift System. Subsequently, the extracted 3D objects from the tomography model were reconstructed in the 3D interactive modelling environment IGMAS+, and their density contrast values were calculated using an object-based inversion technique to calculate the forward signal of the objects and compare it with the measured satellite gravity. Thus, a new object-based approach was implemented to interpret and extract the 3D subsurface objects from 3D geophysical data. We also introduce a new approach to constrain the interpretation of the satellite gravity measurements that can be applied using any 3D geophysical model.

Fadel, I.; van der Meijde, M.; Kerle, N.; Lauritsen, N.

2015-03-01

335

Mono and multistatic polarimetric sparse aperture 3D SAR imaging  

Microsoft Academic Search

SAR imaging at low center frequencies (UHF and L-band) offers advantages over imaging at more conventional (X-band) frequencies, including foliage penetration for target detection and scene segmentation based on polarimetric coherency. However, bandwidths typically available at these center frequencies are small, affording poor resolution. By exploiting extreme spatial diversity (partial hemispheric k-space coverage) and nonlinear bandwidth extrapolation/interpolation methods such as

Stuart DeGraaf; Charles Twigg; Louis Phillips

2008-01-01

336

RECONSTRUCTION OF ISOMETRICALLY DEFORMABLE FLAT SURFACES IN 3D FROM MULTIPLE CAMERA IMAGES  

E-print Network

The goal is to infer the structure of a non-rigid isometric surface observed in multiple images. An experiment consisting of real images of a sheet of paper is shown.

Instituto de Sistemas e Robotica

337

Exploring Novel Parallelization Technologies for 3-D Imaging Applications Diego Rivera Dana Schaa Micha Moffie  

E-print Network

Multi-dimensional imaging techniques involve the processing of high-resolution images, recovering the structure of objects by integrating multiple views.

Kaeli, David R.

338

6-DOF Pose Estimation from Single Ultrasound Image Using 3D IP Models

E-print Network

For diagnosis, various imaging modalities, such as CT, MRI, PET, and ultrasound (US), are widely used. Real-time pose estimation of a free-hand ultrasound (US) image without any position sensor

Tokyo, University of

339

MicroCT for Developmental Biology: A Versatile Tool for High-Contrast 3D Imaging  

E-print Network

Images were acquired with a commercial microCT system; quantitative comparisons were made of chick embryos treated with different contrast agents, and of a wide variety of species. These methods establish microCT imaging as a useful tool for comparative

Metscher, Brian

340

3-D digital surface recovery of the optic nerve head from stereo fundus images  

Microsoft Academic Search

A novel algorithm for 3D digital mapping of curved surfaces from a 2D stereo image pair is developed. This approach for visualization of curved surface topography involves fusion of a stereo depth map with a linearly stretched intensity image of the curved surface. Prior to fusion of the depth map with the intensity image, a cubic B-spline interpolation technique is

Manuel Ramirez; S. Mitra; Alok Kher; Jose Morales

1992-01-01

341

A stereo imaging framework in 3-D mapping of benthic habitats and seafloor structures  

Microsoft Academic Search

We address the deployment of stereovision imaging for underwater 3D mapping. A key component in system performance is the ability to determine the vehicle's position during data acquisition, ensuring that the images are acquired at desired positions along the pre-planned trajectory. We investigate the use of stereo images from the integration of incremental motions between consecutive frames. This is achieved

Shahriar Negahdaripour; H. Madjidi

2002-01-01

342

Recovery of 3D depth map from image shading for underwater applications  

Microsoft Academic Search

We study the problem of exploiting image shading for the recovery of an object's 3D shape underwater. This requires the employment of image models that take into account shading effects due to the medium optical properties, as well as the surface shape and reflectance properties. A simplified image model that incorporates the attenuation of the incident light due to beam

Shaomin Zhang; S. Negahdaripour

1997-01-01

343

Lossless compression of hyperspectral images based on 3D context prediction  

Microsoft Academic Search

Prediction algorithms play an important role in lossless compression of hyperspectral images. However, conventional lossless compression algorithms based on prediction are usually inefficient in exploiting correlation in hyperspectral images. In this paper, a new algorithm for lossless compression of hyperspectral images based on 3D context prediction is proposed. The proposed algorithm consists of three parts to exploit the high spectral

Lin Bai; Mingyi He; Yuchao Dai

2008-01-01

344

The application of camera calibration in range-gated 3D imaging technology  

NASA Astrophysics Data System (ADS)

Range-gated laser imaging technology was proposed in 1966 by L. F. Gillespie in the U.S. Army Night Vision Laboratory (NVL). Using a pulsed laser and an intensified charge-coupled device (ICCD) as light source and detector, respectively, range-gated laser imaging can realize space-slice imaging while suppressing atmospheric backscatter, and in turn detect the target effectively, by controlling the delay between the laser pulse and the strobe. Owing to the constraints on the development of key components such as narrow-pulse lasers and gated imaging devices, research progressed slowly over the following decades. Since the beginning of this century, as the hardware technology has matured, this technology has developed rapidly in fields such as night vision, underwater imaging, biomedical imaging and three-dimensional imaging, especially range-gated three-dimensional (3-D) laser imaging aimed at acquiring target spatial information. 3-D reconstruction is the process of restoring the geometric structure of the visible surface of 3-D objects from two-dimensional (2-D) images. Range-gated laser imaging can achieve gated imaging of a slice of space to form a slice image, and in turn provide the distance information corresponding to that slice image. But to invert the information of 3-D space, we need to obtain the imaging field of view of the system, that is, the focal length of the system. Then, based on the distance information of the space slice, the spatial information of each unit of space corresponding to each pixel can be inverted. Camera calibration is an indispensable step in 3-D reconstruction, covering estimation of both the camera's internal and external parameters. In order to meet the technical requirements of range-gated 3-D imaging, this paper studies the calibration of the zoom lens system. After comprehensively summarizing camera calibration techniques, a classic line-based calibration method is selected. A one-to-one correspondence between the visual field and the focal length of the system is obtained, providing effective field-of-view information for the matching of the imaging field and the illumination field in range-gated 3-D imaging. On the basis of the experimental results, combined with depth-of-field theory, the application of camera calibration in range-gated 3-D imaging technology is further studied.
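A minimal sketch of the back-projection step implied by the discussion above: once the focal length is calibrated, each pixel of a range-gated slice can be mapped to a 3-D point with a pinhole model. The principal-point default and the millimetre units are assumptions for illustration.

```python
# Minimal sketch: pinhole back-projection of one range-gated slice to 3-D points.
import numpy as np

def backproject_slice(depth_mm, shape, focal_px, principal=None):
    h, w = shape
    cx, cy = principal if principal is not None else (w / 2.0, h / 2.0)
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth_mm / focal_px          # lateral position at the gate distance
    y = (v - cy) * depth_mm / focal_px
    z = np.full_like(x, depth_mm, dtype=float)  # all pixels of this slice share one range
    return np.stack([x, y, z], axis=-1)         # (H, W, 3) points for this range slice
```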

Liu, Xiao-quan; Wang, Xian-wei; Zhou, Yan

2013-09-01

345

3D segmentation and image annotation for quantitative diagnosis in lung CT images with pulmonary lesions  

NASA Astrophysics Data System (ADS)

Pulmonary nodules and ground glass opacities are highly significant findings in high-resolution computed tomography (HRCT) of patients with pulmonary lesions. The appearances of pulmonary nodules and ground glass opacities show a relationship with different lung diseases. According to the corresponding characteristics of a lesion, pertinent segmentation methods and quantitative analysis are helpful for controlling and treating diseases at an earlier and potentially more curable stage. Currently, most studies have focused on two-dimensional quantitative analysis of these kinds of diseases. Compared to two-dimensional images, three-dimensional quantitative analysis can take full advantage of the isotropic image data acquired by thin-slice HRCT and has better quantitative precision for clinical diagnosis. This presentation describes a computer-aided diagnosis component to segment 3D disease areas of nodules and ground glass opacities in lung CT images, and uses AIML (Annotation and Image Markup Language) to annotate the segmented 3D pulmonary lesions with quantitative measurement information, which may provide more features and information to radiologists in clinical diagnosis.

Li, Suo; Zhu, Yanjie; Sun, Jianyong; Zhang, Jianguo

2013-03-01

346

Artificial intelligence (AI)-based relational matching and multimodal medical image fusion: generalized 3D approaches  

NASA Astrophysics Data System (ADS)

A 3D relational image matching/fusion algorithm is introduced. It is implemented in the domain of medical imaging and is based on Artificial Intelligence paradigms--in particular, knowledge base representation and tree search. The 2D reference and target images are selected from 3D sets and segmented into non-touching and non-overlapping regions, using iterative thresholding and/or knowledge about the anatomical shapes of human organs. Selected image region attributes are calculated. Region matches are obtained using a tree search, and the error is minimized by evaluating a `goodness' of matching function based on similarities of region attributes. Once the matched regions are found and the spline geometric transform is applied to regional centers of gravity, images are ready for fusion and visualization into a single 3D image of higher clarity.

Vajdic, Stevan M.; Katz, Henry E.; Downing, Andrew R.; Brooks, Michael J.

1994-09-01

347

FPGA-based real-time anisotropic diffusion filtering of 3D ultrasound images  

NASA Astrophysics Data System (ADS)

Three-dimensional ultrasonic imaging, especially the emerging real-time version of it, is particularly valuable in medical applications such as echocardiography, obstetrics and surgical navigation. A known problem with ultrasound images is their high level of speckle noise. Anisotropic diffusion filtering has been shown to be effective in enhancing the visual quality of 3D ultrasound images and as preprocessing prior to advanced image processing. However, due to its arithmetic complexity and the sheer size of 3D ultrasound images, it is not possible to perform online, real-time anisotropic diffusion filtering using standard software implementations. We present an FPGA-based architecture that allows performing anisotropic diffusion filtering of 3D images at acquisition rates, thus enabling the use of this filtering technique in real-time applications, such as visualization, registration and volume rendering.

Castro-Pareja, Carlos R.; Dandekar, Omkar S.; Shekhar, Raj

2005-02-01

348

OVERALL PROCEDURES PROTOCOL AND PATIENT ENROLLMENT PROTOCOL: TESTING FEASIBILITY OF 3D ULTRASOUND DATA ACQUISITION AND RELIABILITY OF DATA RETRIEVAL FROM STORED 3D IMAGES  

EPA Science Inventory

The purpose of this study is to examine the feasibility of collecting, transmitting, and analyzing 3-D ultrasound data in the context of a multi-center study of pregnant women. The study will also examine the reliability of measurements obtained from 3-D imag...

349

Space Radar Image Isla Isabela in 3-D  

NASA Technical Reports Server (NTRS)

This is a three-dimensional view of Isabela, one of the Galapagos Islands located off the western coast of Ecuador, South America. This view was constructed by overlaying a Spaceborne Imaging Radar-C/X-band Synthetic Aperture Radar (SIR-C/X-SAR) image on a digital elevation map produced by TOPSAR, a prototype airborne interferometric radar which produces simultaneous image and elevation data. The vertical scale in this image is exaggerated by a factor of 1.87. The SIR-C/X-SAR image was taken on the 40th orbit of space shuttle Endeavour. The image is centered at about 0.5 degree south latitude and 91 degrees west longitude and covers an area of 75 by 60 kilometers (47 by 37 miles). The radar incidence angle at the center of the image is about 20 degrees. The western Galapagos Islands, which lie about 1,200 kilometers (750 miles) west of Ecuador in the eastern Pacific, have six active volcanoes similar to the volcanoes found in Hawaii and reflect the volcanic processes that occur where the ocean floor is created. Since the time of Charles Darwin's visit to the area in 1835, there have been more than 60 recorded eruptions on these volcanoes. This SIR-C/X-SAR image of Alcedo and Sierra Negra volcanoes shows the rougher lava flows as bright features, while ash deposits and smooth pahoehoe lava flows appear dark. Vertical exaggeration of relief is a common tool scientists use to detect relationships between structure (for example, faults, and fractures) and topography. Spaceborne Imaging Radar-C and X-Synthetic Aperture Radar (SIR-C/X-SAR) is part of NASA's Mission to Planet Earth. The radars illuminate Earth with microwaves allowing detailed observations at any time, regardless of weather or sunlight conditions. SIR-C/X-SAR uses three microwave wavelengths: L-band (24 cm), C-band (6 cm) and X-band (3 cm). The multi-frequency data will be used by the international scientific community to better understand the global environment and how it is changing. The SIR-C/X-SAR data, complemented by aircraft and ground studies, will give scientists clearer insights into those environmental changes which are caused by nature and those changes which are induced by human activity. SIR-C was developed by NASA's Jet Propulsion Laboratory. X-SAR was developed by the Dornier and Alenia Spazio companies for the German space agency, Deutsche Agentur fuer Raumfahrtangelegenheiten (DARA), and the Italian space agency, Agenzia Spaziale Italiana (ASI).

1999-01-01

350

3D transvaginal ultrasound imaging for identification of endometrial abnormality  

NASA Astrophysics Data System (ADS)

A multi-center study has previously evaluated the use of 2-dimensional transvaginal ultrasound (TVS) to measure the thickness of the endometrium as a risk indicator for endometrial abnormality in women with postmenopausal bleeding. In this paper we present methods using 3-dimensional TVS in order to improve the measurement, shape analysis and visualization of the endometrium. Active contour techniques are applied to identify the endometrium in a 3D dataset. The shape of the endometrium is then visualized and utilized to do quantitative measurements of the thickness. The voxels inside the endometrium are volume rendered in order to emphasize inhomogeneities. Since these inhomogeneities can exist both on the outside and the inside of the endometrium, the rendering algorithm has a controllable opacity function. A 3-dimensional distance transform is performed on the data volume measuring the shortest distance to the detected endometrium border for each voxel. This distance is used as a basis for opacity computations which allows the user to emphasize different regions of the endometrium. In particular, the opacity function can be computed such that regions that violate the risk indicator for the endometrium thickness are highlighted.

Olstad, Bjoern; Berg, Sevald; Torp, Anders H.; Schipper, Klaus P.; Eik-Nes, Sturla H.

1995-05-01

351

Contactless operating table control based on 3D image processing.  

PubMed

Interaction with mobile consumer devices leads to a higher acceptance of and affinity for natural user interfaces and perceptual interaction possibilities. New interaction modalities become accessible and are capable of improving human-machine interaction even in complex and high-risk environments, like the operating room. Here, the many medical disciplines involved lead to a great variety of procedures and thus of staff and equipment. One universal challenge is to meet the sterility requirements, for which common contact-based remote interfaces always pose a potential risk and a hazard for the process. The proposed operating table control system overcomes this process risk and thus improves system usability significantly. The 3D sensor system, the Microsoft Kinect, captures the motion of the user, allowing touchless manipulation of an operating table. Three gestures enable the user to select, activate and manipulate all segments of the motorised system in a safe and intuitive way. The gesture dynamics are synchronised with the table movement. In a usability study, 15 participants evaluated the system, giving a system usability score (Brooke) of 79. This indicates a high potential for implementation and acceptance in interventional environments. In the near future, even processes with higher risks could be controlled with the proposed interface, as such interfaces become safer and more direct. PMID:25569978

Schroder, Stephan; Loftfield, Nina; Langmann, Benjamin; Frank, Klaus; Reithmeier, Eduard

2014-08-01

352

Radar Imaging of Spheres in 3D using MUSIC  

SciTech Connect

We have shown that multiple spheres can be imaged by linear and planar EM arrays using only one component of polarization. The imaging approach involves calculating the SVD of the scattering response matrix, selecting a subset of singular values that represents noise, and evaluating the MUSIC functional. The noise threshold applied to the spectrum of singular values for optimal performance is typically around 1%. The resulting signal subspace includes more than one singular value per sphere. The presence of reflections from the ground improves height localization, even for a linear array parallel to the ground. However, the interference between direct and reflected energy modulates the field, creating periodic nulls that can obscure targets in typical images. These nulls are largely eliminated by normalizing the MUSIC functional with the broadside beam pattern of the array. The resulting images show excellent localization for 1 and 2 spheres. The performance for the 3-sphere configurations is complicated by shadowing effects and the greater range of the 3rd sphere in case 2. Two of the three spheres are easily located by MUSIC but the third is difficult to distinguish from other local maxima of the complex imaging functional. Improvement is seen when the linear array is replaced with a planar array, which increases the effective aperture height. Further analysis of the singular values and their relationship to modes of scattering from the spheres, as well as better ways to exploit polarization, should improve performance. Work along these lines is currently being pursued by the authors.
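The SVD / noise-subspace / MUSIC-functional pipeline described above can be sketched as below; the free-space scalar steering vector and the 1% threshold default are simplifying assumptions rather than the authors' full electromagnetic model.

```python
# Minimal sketch: MUSIC imaging from a multistatic response matrix K.
import numpy as np

def music_image(K, array_xyz, grid_xyz, wavelength, noise_frac=0.01):
    U, s, _ = np.linalg.svd(K)
    signal_dim = int(np.sum(s > noise_frac * s[0]))   # singular values above the noise threshold
    U_noise = U[:, signal_dim:]                       # noise subspace
    k = 2 * np.pi / wavelength
    image = np.empty(len(grid_xyz))
    for i, r in enumerate(grid_xyz):
        d = np.linalg.norm(array_xyz - r, axis=1)     # element-to-trial-point distances
        g = np.exp(1j * k * d) / d                    # free-space scalar steering vector
        g /= np.linalg.norm(g)
        image[i] = 1.0 / np.linalg.norm(U_noise.conj().T @ g)   # MUSIC pseudospectrum
    return image
```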

Chambers, D H; Berryman, J G

2003-01-21

353

3D image reconstruction of fiber systems using electron tomography.  

PubMed

Over the past several years, electron microscopists and materials researchers have shown increased interest in electron tomography (reconstruction of three-dimensional information from a tilt series of bright field images obtained in a transmission electron microscope (TEM)). In this research, electron tomography has been used to reconstruct a three-dimensional image of fiber structures from secondary electron images in a scanning electron microscope (SEM). The implementation of this technique is used to examine the structure of a fiber system before and after deformation. A test sample of steel wool was tilted around a single axis from -10° to 60° in one-degree steps with images taken at every degree; three-dimensional images were reconstructed for the specimen of fine steel fibers. This method is capable of reconstructing the three-dimensional morphology of this type of lineal structure, and of obtaining features such as tortuosity, contact points, and linear density that are of importance in defining the mechanical properties of these materials. PMID:25464156

Fakron, Osama M; Field, David P

2015-02-01

354

Adaptive optics retinal imaging in the living mouse eye  

PubMed Central

Correction of the eye's monochromatic aberrations using adaptive optics (AO) can improve the resolution of in vivo mouse retinal images [Biss et al., Opt. Lett. 32(6), 659 (2007) and Alt et al., Proc. SPIE 7550, 755019 (2010)], but previous attempts have been limited by poor spot quality in the Shack-Hartmann wavefront sensor (SHWS). Recent advances in mouse eye wavefront sensing using an adjustable focus beacon with an annular beam profile have improved the wavefront sensor spot quality [Geng et al., Biomed. Opt. Express 2(4), 717 (2011)], and we have incorporated them into a fluorescence adaptive optics scanning laser ophthalmoscope (AOSLO). The performance of the instrument was tested on the living mouse eye, and images of multiple retinal structures, including the photoreceptor mosaic, nerve fiber bundles, fine capillaries and fluorescently labeled ganglion cells were obtained. The in vivo transverse and axial resolutions of the fluorescence channel of the AOSLO were estimated from the full width half maximum (FWHM) of the line and point spread functions (LSF and PSF), and were found to be better than 0.79 μm ± 0.03 μm (STD) (45% wider than the diffraction limit) and 10.8 μm ± 0.7 μm (STD) (two times the diffraction limit), respectively. The axial positional accuracy was estimated to be 0.36 μm. This resolution and positional accuracy has allowed us to classify many ganglion cell types, such as bistratified ganglion cells, in vivo. PMID:22574260

Geng, Ying; Dubra, Alfredo; Yin, Lu; Merigan, William H.; Sharma, Robin; Libby, Richard T.; Williams, David R.

2012-01-01

355

Dynamic reconstruction and rendering of 3D tomosynthesis images  

NASA Astrophysics Data System (ADS)

Dynamic Reconstruction and Rendering (DRR) is a fast and flexible tomosynthesis image reconstruction and display implementation. By leveraging the computational efficiency gains afforded by off-the-shelf GPU hardware, tomosynthesis reconstruction can be performed on demand at real-time, user-interactive frame rates. Dynamic multiplanar reconstructions allow the user to adjust reconstruction and display parameters interactively, including axial sampling, slice location, plane tilt, magnification, and filter selection. Reconstruction on-demand allows tomosynthesis images to be viewed as true three-dimensional data rather than just a stack of two-dimensional images. The speed and dynamic rendering capabilities of DRR can improve diagnostic accuracy and lead to more efficient clinical workflows.

Kuo, Johnny; Ringer, Peter A.; Fallows, Steven G.; Bakic, Predrag R.; Maidment, Andrew D. A.; Ng, Susan

2011-03-01

356

A Molecular Image-directed, 3D Ultrasound-guided Biopsy System for the Prostate  

PubMed Central

Systematic transrectal ultrasound (TRUS)-guided biopsy is the standard method for a definitive diagnosis of prostate cancer. However, this biopsy approach uses two-dimensional (2D) ultrasound images to guide biopsy and can miss up to 30% of prostate cancers. We are developing a molecular image-directed, three-dimensional (3D) ultrasound image-guided biopsy system for improved detection of prostate cancer. The system consists of a 3D mechanical localization system and a software workstation for image segmentation, registration, and biopsy planning. In order to plan biopsy in a 3D prostate, we developed an automatic segmentation method based on the wavelet transform. In order to incorporate PET/CT images into ultrasound-guided biopsy, we developed image registration methods to fuse TRUS and PET/CT images. The segmentation method was tested in ten patients with a DICE overlap ratio of 92.4% ± 1.1%. The registration method has been tested in phantoms. The biopsy system was tested in prostate phantoms and 3D ultrasound images were acquired from two human patients. We are integrating the system for PET/CT-directed, 3D ultrasound-guided, targeted biopsy in human patients. PMID:22708023

Fei, Baowei; Schuster, David M.; Master, Viraj; Akbari, Hamed; Fenster, Aaron; Nieh, Peter

2012-01-01

357

Digital breast tomosynthesis image reconstruction using 2D and 3D total variation minimization  

PubMed Central

Background Digital breast tomosynthesis (DBT) is an emerging imaging modality which produces three-dimensional radiographic images of the breast. DBT reconstructs tomographic images from a limited view angle, so the data acquired from DBT are not sufficient to reconstruct an exact image. It has been proven that a sparse image can be reconstructed from highly undersampled data via compressed sensing (CS) techniques. This can be done by minimizing the l1 norm of the gradient of the image, which can also be defined as total variation (TV) minimization. In the tomosynthesis imaging problem, this idea was utilized by minimizing the total variation of the image reconstructed by the algebraic reconstruction technique (ART). Previous studies have largely addressed 2-dimensional (2D) TV minimization and only a few of them have mentioned 3-dimensional (3D) TV minimization. However, a quantitative analysis of 2D and 3D TV minimization with ART in DBT imaging has not been studied. Methods In this paper two different DBT image reconstruction algorithms with total variation minimization have been developed and a comprehensive quantitative analysis of these two methods and ART has been carried out: the first method is ART + TV2D, where TV is applied to each slice independently. The other method is ART + TV3D, in which TV is applied by formulating the minimization problem in 3D, considering all slices. Results A 3D phantom which roughly simulates a breast tomosynthesis image was designed to evaluate the performance of the methods both quantitatively and qualitatively in terms of visual assessment, structural similarity (SSIM), root mean square error (RMSE) of a specific layer of interest (LOI), and total error values. Both methods show superior results in reducing out-of-focus slice blur compared to ART. Conclusions Computer simulations show that the ART + TV3D method substantially enhances the reconstructed image with fewer artifacts and smaller error rates than the other two algorithms under the same configuration and parameters, and it provides a faster convergence rate. PMID:24172584
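As a reference for the objective being minimized, a common discrete formulation of TV-regularized reconstruction reads as follows; the exact weighting and constraints used in the paper may differ:

\[
\min_{u \ge 0}\ \tfrac{1}{2}\lVert A u - b \rVert_2^2 + \lambda\,\mathrm{TV}(u),
\qquad
\mathrm{TV}(u) = \sum_{i,j,k} \sqrt{(\nabla_x u)_{ijk}^2 + (\nabla_y u)_{ijk}^2 + (\nabla_z u)_{ijk}^2},
\]

where \(A\) is the projection operator, \(b\) the measured projections, and \(\lambda\) a regularization weight; the 2D variant simply drops the \(z\)-difference term and is applied slice by slice.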

2013-01-01

358

Snapshot 3D optical coherence tomography system using image mapping spectrometry  

PubMed Central

A snapshot 3-Dimensional Optical Coherence Tomography system was developed using Image Mapping Spectrometry. This system can give depth information (Z) at different spatial positions (X, Y) within one camera integration time to potentially reduce motion artifact and enhance throughput. The current (x, y, λ) datacube of (85×356×117) provides a 3D visualization of the sample with 400 μm depth and 13.4 μm transverse resolution. An axial resolution of 16.0 μm can also be achieved in this proof-of-concept system. We present an analysis of the theoretical constraints which will guide development of future systems with increased imaging depth and improved axial and lateral resolutions. PMID:23736629

Nguyen, Thuc-Uyen; Pierce, Mark C; Higgins, Laura; Tkaczyk, Tomasz S

2013-01-01

359

3-D capacitance density imaging of fluidized bed  

DOEpatents

A three-dimensional capacitance density imaging of a gasified bed or the like in a containment vessel is achieved using a plurality of electrodes provided circumferentially about the bed in levels and along the bed in channels. The electrodes are individually and selectively excited electrically at each level to produce a plurality of current flux field patterns generated in the bed at each level. The current flux field patterns are suitably sensed and a density pattern of the bed at each level determined. By combining the determined density patterns at each level, a three-dimensional density image of the bed is achieved.

Fasching, George E. (653 Vista Pl., Morgantown, WV 26505)

1990-01-01

360

An adaptive-optics scanning laser ophthalmoscope for imaging murine retinal microstructure  

NASA Astrophysics Data System (ADS)

In vivo retinal imaging is an outstanding tool to observe biological processes unfold in real time. The ability to image microstructure in vivo can greatly enhance our understanding of function in retinal microanatomy under normal conditions and in disease. Transgenic mice are frequently used for mouse models of retinal diseases. However, commercially available retinal imaging instruments lack the optical resolution and spectral flexibility necessary to visualize detail comprehensively. We developed an adaptive optics scanning laser ophthalmoscope (AO-SLO) specifically for mouse eyes. Our SLO is a sensor-less adaptive optics system (no Shack-Hartmann sensor) that employs a stochastic parallel gradient descent algorithm to modulate a deformable mirror, ultimately aiming to correct wavefront aberrations by optimizing confocal image sharpness. The resulting resolution allows detailed observation of retinal microstructure. The AO-SLO can resolve retinal microglia and their moving processes, demonstrating that microglia processes are highly motile, constantly probing their immediate environment. Similarly, retinal ganglion cells are imaged along with their axons and sprouting dendrites. Retinal blood vessels are imaged using both Evans blue fluorescence and backscattering contrast.
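A minimal sketch of a stochastic parallel gradient descent (SPGD) loop of the kind referred to above is given below; apply_dm and capture_sharpness are hypothetical callbacks standing in for the real mirror driver and camera, and the gain and perturbation values are illustrative.

```python
# Minimal sketch: SPGD wavefront-sensorless optimization of an image-sharpness metric.
import numpy as np

def spgd_optimize(apply_dm, capture_sharpness, n_actuators,
                  iters=200, perturb=0.02, gain=0.5):
    u = np.zeros(n_actuators)                         # deformable-mirror command vector
    for _ in range(iters):
        delta = perturb * np.random.choice([-1.0, 1.0], size=n_actuators)
        apply_dm(u + delta)
        j_plus = capture_sharpness()
        apply_dm(u - delta)
        j_minus = capture_sharpness()
        u += gain * (j_plus - j_minus) * delta        # two-sided parallel gradient estimate
        apply_dm(u)
    return u
```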

Alt, Clemens; Biss, David P.; Tajouri, Nadja; Jakobs, Tatjana C.; Lin, Charles P.

2010-02-01

361

Space Radar Image of Kilauea, Hawaii in 3-D  

NASA Technical Reports Server (NTRS)

This is a three-dimensional perspective view of a false-color image of the eastern part of the Big Island of Hawaii. It was produced using all three radar frequencies -- X-band, C-band and L-band -- from the Spaceborne Imaging Radar-C/X-Band Synthetic Aperture Radar (SIR-C/X-SAR) flying on the space shuttle Endeavour, overlaid on a U.S. Geological Survey digital elevation map. Visible in the center of the image in blue are the summit crater (Kilauea Caldera) which contains the smaller Halemaumau Crater, and the line of collapse craters below them that form the Chain of Craters Road. The image was acquired on April 12, 1994 during orbit 52 of the space shuttle. The area shown is approximately 34 by 57 kilometers (21 by 35 miles) with the top of the image pointing toward northwest. The image is centered at about 155.25 degrees west longitude and 19.5 degrees north latitude. The false colors are created by displaying three radar channels of different frequency. Red areas correspond to high backscatter at L-HV polarization, while green areas exhibit high backscatter at C-HV polarization. Finally, blue shows high return at X-VV polarization. Using this color scheme, the rain forest appears bright on the image, while the green areas correspond to lower vegetation. The lava flows have different colors depending on their types and are easily recognizable due to their shapes. The flows at the top of the image originated from the Mauna Loa volcano. Kilauea volcano has been almost continuously active for more than the last 11 years. Field teams that were on the ground specifically to support these radar observations report that there was vigorous surface activity about 400 meters (one-quarter mile) inland from the coast. A moving lava flow about 200 meters (650 feet) in length was observed at the time of the shuttle overflight, raising the possibility that subsequent images taken during this mission will show changes in the landscape. Currently, most of the lava that is erupted travels the 8 kilometers (5 miles) from the Pu'u O'o crater (the active vent) just outside this image to the coast through a series of lava tubes, but in the past there have been many large lava flows that have traveled this distance, destroying houses and parts of the Hawaii Volcanoes National Park. This SIR-C/X-SAR image shows two types of lava flows that are common to Hawaiian volcanoes. Pahoehoe lava flows are relatively smooth, and appear very dark blue because much of the radar energy is reflected away from the radar. In contrast other lava flows are relatively rough and bounce much of the radar energy back to the radar, making that part of the image bright blue. This radar image is valuable because it allows scientists to study an evolving lava flow field from the Pu'u O'o vent. Much of the area on the northeast side (right) of the volcano is covered with tropical rain forest, and because trees reflect a lot of the radar energy, the forest appears bright in this radar scene. The linear feature running from Kilauea Crater to the right of the image is Highway 11 leading to the city of Hilo which is located just beyond the right edge of this image. Spaceborne Imaging Radar-C and X-Synthetic Aperture Radar (SIR-C/X-SAR) is part of NASA's Mission to Planet Earth. The radars illuminate Earth with microwaves allowing detailed observations at any time, regardless of weather or sunlight conditions. SIR-C/X-SAR uses three microwave wavelengths: L-band (24 cm), C-band (6 cm) and X-band (3 cm). 
The multi-frequency data will be used by the international scientific community to better understand the global environment and how it is changing. The SIR-C/X-SAR data, complemented by aircraft and ground studies, will give scientists clearer insights into those environmental changes which are caused by nature and those changes which are induced by human activity. SIR-C was developed by NASA's Jet Propulsion Laboratory. X-SAR was developed by the Dornier and Alenia Spazio companies for the German space agency, Deutsche Agentur fuer Raumfahrtangelegenheiten (DARA).

1999-01-01

362

HIGH PERFORMANCE 3-D IMAGE RECONSTRUCTION FOR MOLECULAR STRUCTURE DETERMINATION  

E-print Network

samples are prepared and how their images are taken. In Cryo-EM, an aqueous solution that contains for accurate interpretations of biological functions at the molecular level. Proteins and protein complexes of frozen hydrated samples (a technique often referred to as Cryo-EM [6]) has many advantages over other

Geddes, Cameron Guy Robinson

363

Nondestructive imaging of stem cell in 3D scaffold  

NASA Astrophysics Data System (ADS)

We have developed a line-scanning angled fluorescent laminar optical tomography (LS-aFLOT) system. This system enables three-dimensional imaging of fluorescent-labeled stem cell distribution within engineered tissue scaffold over a several-millimeter field-of-view.

Chen, Chao-Wei; Yeatts, Andrew B.; Fisher, John P.; Chen, Yu

2012-06-01

364

Determining 3-D motion and structure from image sequences  

NASA Technical Reports Server (NTRS)

A method of determining three-dimensional motion and structure from two image frames is presented. The method requires eight point correspondences between the two frames, from which the motion and structure parameters are determined by solving a set of eight linear equations and computing a singular value decomposition of a 3x3 matrix. It is shown that the solution thus obtained is unique.
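
For readers who want to see the linear formulation the abstract refers to, here is a minimal Python/NumPy sketch of the eight-point estimate of the essential matrix and its decomposition; normalized image coordinates are assumed, and the function names are illustrative rather than taken from the paper.

```python
import numpy as np

def essential_from_eight_points(x1, x2):
    """Estimate the essential matrix E from >= 8 normalized point
    correspondences x1 <-> x2 (each of shape (N, 2)) by solving the
    linear system x2^T E x1 = 0 and enforcing the rank-2 constraint."""
    N = x1.shape[0]
    A = np.zeros((N, 9))
    for i, ((u1, v1), (u2, v2)) in enumerate(zip(x1, x2)):
        A[i] = [u2*u1, u2*v1, u2, v2*u1, v2*v1, v2, u1, v1, 1.0]
    # Least-squares solution: right singular vector of the smallest singular value
    _, _, Vt = np.linalg.svd(A)
    E = Vt[-1].reshape(3, 3)
    # Project onto the space of essential matrices (two equal singular values, rank 2)
    U, S, Vt = np.linalg.svd(E)
    S = np.array([1.0, 1.0, 0.0]) * (S[0] + S[1]) / 2.0
    return U @ np.diag(S) @ Vt

def decompose_essential(E):
    """Recover rotation/translation candidates from E (sign fixes and the
    cheirality check that resolves the four-fold ambiguity are omitted)."""
    U, _, Vt = np.linalg.svd(E)
    W = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]], dtype=float)
    R1, R2 = U @ W @ Vt, U @ W.T @ Vt
    t = U[:, 2]
    return R1, R2, t
```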

Huang, T. S.

1982-01-01

365

The pulsed all fiber laser application in the high-resolution 3D imaging LIDAR system  

NASA Astrophysics Data System (ADS)

An all fiber laser with a master-oscillator-power-amplifier (MOPA) configuration at 1064 nm/1550 nm for a high-resolution three-dimensional (3D) imaging light detection and ranging (LIDAR) system is reported. The pulse width and the repetition frequency can be tuned arbitrarily over 1 ns to 10 ns and 10 kHz to 1 MHz, respectively, and a peak power exceeding 100 kW can be obtained with the laser. Using this all fiber laser in the high-resolution 3D imaging LIDAR system, an image resolution of 1024 x 1024 and a distance precision of +/-1.5 cm were obtained at an imaging distance of 1 km.

Gao, Cunxiao; Zhu, Shaolan; Niu, Linquan; Feng, Li; He, Haodong; Cao, Zongying

2014-05-01

366

Simultaneous whole-animal 3D-imaging of neuronal activity using light field microscopy  

E-print Network

3D functional imaging of neuronal activity in entire organisms at single cell level and physiologically relevant time scales faces major obstacles due to trade-offs between the size of the imaged volumes and spatial and temporal resolution. Here, using light-field microscopy in combination with 3D deconvolution, we demonstrate intrinsically simultaneous volumetric functional imaging of neuronal population activity at single neuron resolution for an entire organism, the nematode Caenorhabditis elegans. The simplicity of our technique and the possibility of integration into epi-fluorescence microscopes make it an attractive tool for high-speed volumetric calcium imaging.

Prevedel, R; Hoffmann, M; Pak, N; Wetzstein, G; Kato, S; Schrödel, T; Raskar, R; Zimmer, M; Boyden, E S; Vaziri, A

2014-01-01

367

Augmented reality navigation with automatic marker-free image registration using 3-D image overlay for dental surgery.  

PubMed

Computer-assisted oral and maxillofacial surgery (OMS) has been rapidly evolving over the last decade. State-of-the-art surgical navigation in OMS still suffers from bulky tracking sensors, troublesome image registration procedures, patient movement, loss of depth perception in visual guidance, and low navigation accuracy. We present an augmented reality navigation system with automatic marker-free image registration using 3-D image overlay and stereo tracking for dental surgery. A customized stereo camera is designed to track both the patient and the instrument. Image registration is performed by patient tracking and real-time 3-D contour matching, without requiring any fiducial or reference markers. Real-time autostereoscopic 3-D imaging is implemented with the help of a consumer-level graphics processing unit. The resulting 3-D image of the patient's anatomy is overlaid on the surgical site by a half-silvered mirror using image registration and IP-camera registration to guide the surgeon by exposing hidden critical structures. The 3-D image of the surgical instrument is also overlaid over the real one for an augmented display. The 3-D images present both stereo and motion parallax from which depth perception can be obtained. Experiments were performed to evaluate various aspects of the system; the overall image overlay error of the proposed system was 0.71 mm. PMID:24658253

Wang, Junchen; Suenaga, Hideyuki; Hoshi, Kazuto; Yang, Liangjing; Kobayashi, Etsuko; Sakuma, Ichiro; Liao, Hongen

2014-04-01

368

Space Radar Image of Long Valley, California in 3-D  

NASA Technical Reports Server (NTRS)

This three-dimensional perspective view of Long Valley, California was created from data taken by the Spaceborne Imaging Radar-C/X-band Synthetic Aperture Radar on board the space shuttle Endeavour. This image was constructed by overlaying a color composite SIR-C radar image on a digital elevation map. The digital elevation map was produced using radar interferometry, a process by which radar data are acquired on different passes of the space shuttle. The two data passes are compared to obtain elevation information. The interferometry data were acquired on April 13,1994 and on October 3, 1994, during the first and second flights of the SIR-C/X-SAR instrument. The color composite radar image was taken in October and was produced by assigning red to the C-band (horizontally transmitted and vertically received) polarization; green to the C-band (vertically transmitted and received) polarization; and blue to the ratio of the two data sets. Blue areas in the image are smooth and yellow areas are rock outcrops with varying amounts of snow and vegetation. The view is looking north along the northeastern edge of the Long Valley caldera, a volcanic collapse feature created 750,000 years ago and the site of continued subsurface activity. Crowley Lake is the large dark feature in the foreground. Spaceborne Imaging Radar-C and X-band Synthetic Aperture Radar (SIR-C/X-SAR) is part of NASA's Mission to Planet Earth. The radars illuminate Earth with microwaves, allowing detailed observations at any time, regardless of weather or sunlight conditions. SIR-C/X-SAR uses three microwave wavelengths: L-band (24 cm), C-band (6 cm) and X-band (3 cm). The multi-frequency data will be used by the international scientific community to better understand the global environment and how it is changing. The SIR-C/X-SAR data, complemented by aircraft and ground studies, will give scientists clearer insights into those environmental changes which are caused by nature and those changes which are induced by human activity. SIR-C was developed by NASA's Jet Propulsion Laboratory. X-SAR was developed by the Dornier and Alenia Spazio companies for the German space agency, Deutsche Agentur fuer Raumfahrtangelegenheiten (DARA), and the Italian space agency, Agenzia Spaziale Italiana (ASI), with the Deutsche Forschungsanstalt fuer Luft und Raumfahrt e.v. (DLR), the major partner in science, operations and data processing of X-SAR.

1994-01-01

369

Space Radar Image of Long Valley, California - 3D view  

NASA Technical Reports Server (NTRS)

This is a three-dimensional perspective view of Long Valley, California by the Spaceborne Imaging Radar-C/X-band Synthetic Aperture Radar on board the space shuttle Endeavour. This view was constructed by overlaying a color composite SIR-C image on a digital elevation map. The digital elevation map was produced using radar interferometry, a process by which radar data are acquired on different passes of the space shuttle and, which then, are compared to obtain elevation information. The data were acquired on April 13, 1994 and on October 3, 1994, during the first and second flights of the SIR-C/X-SAR radar instrument. The color composite radar image was produced by assigning red to the C-band (horizontally transmitted and vertically received) polarization; green to the C-band (vertically transmitted and received) polarization; and blue to the ratio of the two data sets. Blue areas in the image are smooth and yellow areas are rock outcrops with varying amounts of snow and vegetation. The view is looking north along the northeastern edge of the Long Valley caldera, a volcanic collapse feature created 750,000 years ago and the site of continued subsurface activity. Crowley Lake is off the image to the left. Spaceborne Imaging Radar-C and X-band Synthetic Aperture Radar (SIR-C/X-SAR) is part of NASA's Mission to Planet Earth. The radars illuminate Earth with microwaves, allowing detailed observations at any time, regardless of weather or sunlight conditions. SIR-C/X-SAR uses three microwave wavelengths: L-band (24 cm), C-band (6 cm) and X-band (3 cm). The multi-frequency data will be used by the international scientific community to better understand the global environment and how it is changing. The SIR-C/X-SAR data, complemented by aircraft and ground studies, will give scientists clearer insights into those environmental changes which are caused by nature and those changes which are induced by human activity. SIR-C was developed by NASA's Jet Propulsion Laboratory. X-SAR was developed by the Dornier and Alenia Spazio companies for the German space agency, Deutsche Agentur fuer Raumfahrtangelegenheiten (DARA), and the Italian space agency, Agenzia Spaziale Italiana (ASI), with the Deutsche Forschungsanstalt fuer Luft und Raumfahrt e.V.(DLR), the major partner in science, operations and data processing of X-SAR.

1994-01-01

370

Space Radar Image of Mammoth, California in 3-D  

NASA Technical Reports Server (NTRS)

This is a three-dimensional perspective of Mammoth Mountain, California. This view was constructed by overlaying a Spaceborne Imaging Radar-C (SIR-C) radar image on a U.S. Geological Survey digital elevation map. Vertical exaggeration is 1.87 times. The image is centered at 37.6 degrees north, 119.0 degrees west. It was acquired from the Spaceborne Imaging Radar C/X-Band Synthetic Aperture Radar (SIR-C/X-SAR) onboard space shuttle Endeavour on its 67th orbit on April 13, 1994. In this color representation, red is C-band HV-polarization, green is C-band VV-polarization and blue is the ratio of C-band VV to C-band HV. Blue areas are smooth, and yellow areas are rock out-crops with varying amounts of snow and vegetation. Crowley Lake is in the foreground, and Highway 395 crosses in the middle of the image. Mammoth Mountain is shown in the upper right. Spaceborne Imaging Radar-C and X-Synthetic Aperture Radar (SIR-C/X-SAR) is part of NASA's Mission to Planet Earth. The radars illuminate Earth with microwaves allowing detailed observations at any time, regardless of weather or sunlight conditions. SIR-C/X-SAR uses three microwave wavelengths: L-band (24 cm), C-band (6 cm) and X-band (3 cm). The multi-frequency data will be used by the international scientific community to better understand the global environment and how it is changing. The SIR-C/X-SAR data, complemented by aircraft and ground studies, will give scientists clearer insights into those environmental changes which are caused by nature and those changes which are induced by human activity. SIR-C was developed by NASA's Jet Propulsion Laboratory. X-SAR was developed by the Dornier and Alenia Spazio companies for the German space agency, Deutsche Agentur fuer Raumfahrtangelegenheiten (DARA), and the Italian space agency, Agenzia Spaziale Italiana (ASI).

1999-01-01

371

3D and 4D magnetic susceptibility tomography based on complex MR images  

DOEpatents

Magnetic susceptibility is the physical property for T2*-weighted magnetic resonance imaging (T2*MRI). The invention relates to methods for reconstructing an internal distribution (3D map) of magnetic susceptibility values, .chi. (x,y,z), of an object, from 3D T2*MRI phase images, by using Computed Inverse Magnetic Resonance Imaging (CIMRI) tomography. The CIMRI technique solves the inverse problem of the 3D convolution by executing a 3D Total Variation (TV) regularized iterative convolution scheme, using a split Bregman iteration algorithm. The reconstruction of .chi. (x,y,z) can be designed for low-pass, band-pass, and high-pass features by using a convolution kernel that is modified from the standard dipole kernel. Multiple reconstructions can be implemented in parallel, and averaging the reconstructions can suppress noise. 4D dynamic magnetic susceptibility tomography can be implemented by reconstructing a 3D susceptibility volume from a 3D phase volume by performing 3D CIMRI magnetic susceptibility tomography at each snapshot time.
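
For context, the forward model that this kind of susceptibility tomography inverts is the standard k-space dipole convolution; the sketch below shows only that forward step (the TV-regularized split Bregman inversion itself is considerably more involved). Array names and the B0-axis convention are assumptions.

```python
import numpy as np

def dipole_kernel(shape, voxel_size=(1.0, 1.0, 1.0), b0_axis=2):
    """Standard k-space dipole kernel D(k) = 1/3 - k_z^2 / |k|^2."""
    ks = [np.fft.fftfreq(n, d=d) for n, d in zip(shape, voxel_size)]
    kx, ky, kz = np.meshgrid(*ks, indexing="ij")
    k = np.stack([kx, ky, kz])
    k2 = kx**2 + ky**2 + kz**2
    with np.errstate(divide="ignore", invalid="ignore"):
        D = 1.0 / 3.0 - k[b0_axis] ** 2 / k2
    D[k2 == 0] = 0.0  # remove the undefined DC term
    return D

def susceptibility_to_field(chi, voxel_size=(1.0, 1.0, 1.0)):
    """Forward model: field perturbation = IFFT( D(k) * FFT(chi) )."""
    D = dipole_kernel(chi.shape, voxel_size)
    return np.real(np.fft.ifftn(D * np.fft.fftn(chi)))
```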

Chen, Zikuan; Calhoun, Vince D

2014-11-11

372

An open-source deconvolution software package for 3-D quantitative fluorescence microscopy imaging  

PubMed Central

Summary Deconvolution techniques have been widely used for restoring the 3-D quantitative information of an unknown specimen observed using a wide-field fluorescence microscope. Deconv, an open-source deconvolution software package, was developed for 3-D quantitative fluorescence microscopy imaging and was released under the GNU Public License. Deconv provides numerical routines for simulation of a 3-D point spread function and deconvolution routines implementing three constrained iterative deconvolution algorithms: one based on a Poisson noise model and two others based on a Gaussian noise model. These algorithms are presented and evaluated using synthetic images and experimentally obtained microscope images, and the use of the library is explained. Deconv allows users to assess the utility of these deconvolution algorithms and to determine which are suited for a particular imaging application. The design of Deconv makes it easy for deconvolution capabilities to be incorporated into existing imaging applications. PMID:19941558
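
The Poisson-noise algorithm is only named here, not specified; the classic constrained iteration for a Poisson model is Richardson-Lucy, sketched below for illustration. This is not the Deconv package's API, just a generic NumPy version.

```python
import numpy as np

def _centered_otf(psf, shape):
    """Zero-pad the PSF (assumed no larger than the image) to the image shape
    with its centre at the array centre, shift to the origin, return its rFFT."""
    padded = np.zeros(shape)
    slices = tuple(slice((s - p) // 2, (s - p) // 2 + p)
                   for s, p in zip(shape, psf.shape))
    padded[slices] = psf / psf.sum()
    return np.fft.rfftn(np.fft.ifftshift(padded))

def richardson_lucy(image, psf, n_iter=50, eps=1e-12):
    """Constrained iterative deconvolution under a Poisson noise model
    (Richardson-Lucy); non-negativity is preserved by the multiplicative update."""
    otf = _centered_otf(psf, image.shape)
    estimate = np.full(image.shape, image.mean(), dtype=float)
    for _ in range(n_iter):
        blurred = np.fft.irfftn(np.fft.rfftn(estimate) * otf, s=image.shape)
        ratio = image / (blurred + eps)
        estimate *= np.fft.irfftn(np.fft.rfftn(ratio) * np.conj(otf), s=image.shape)
    return estimate
```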

SUN, Y.; DAVIS, P.; KOSMACEK, E. A.; IANZINI, F.; MACKEY, M. A.

2010-01-01

373

Radon transform based automatic metal artefacts generation for 3D threat image projection  

NASA Astrophysics Data System (ADS)

Threat Image Projection (TIP) plays an important role in aviation security. In order to evaluate human security screeners in determining threats, TIP systems project images of realistic threat items into the images of the passenger baggage being scanned. In this proof of concept paper, we propose a 3D TIP method which can be integrated within new 3D Computed Tomography (CT) screening systems. In order to make the threat items appear as if they were genuinely located in the scanned bag, appropriate CT metal artefacts are generated in the resulting TIP images according to the scan orientation, the passenger bag content and the material of the inserted threat items. This process is performed in the projection domain using a novel methodology based on the Radon Transform. The obtained results using challenging 3D CT baggage images are very promising in terms of plausibility and realism.
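
A toy 2D illustration of the projection-domain idea, using scikit-image's radon/iradon: the threat is added to the bag in sinogram space before reconstruction, so it inherits reconstruction artefacts consistent with the scan geometry. The paper's actual CT metal-artefact model (beam hardening, scatter, photon starvation) is much richer; function and variable names here are assumptions.

```python
import numpy as np
from skimage.transform import radon, iradon

def insert_threat_in_projection_domain(bag_slice, threat_slice, n_angles=180):
    """Toy 2D sketch: forward-project the bag and the threat separately,
    combine the sinograms (attenuation adds along each ray), then reconstruct
    so the inserted threat appears with geometry-consistent artefacts."""
    theta = np.linspace(0.0, 180.0, n_angles, endpoint=False)
    sino_bag = radon(bag_slice, theta=theta, circle=False)
    sino_threat = radon(threat_slice, theta=theta, circle=False)
    combined = sino_bag + sino_threat
    return iradon(combined, theta=theta, circle=False)
```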

Megherbi, Najla; Breckon, Toby P.; Flitton, Greg T.; Mouton, Andre

2013-10-01

374

A novel 3D resolution measure for optical microscopes with applications to single molecule imaging  

NASA Astrophysics Data System (ADS)

The advent of single molecule microscopy has generated significant interest in imaging single biomolecular interactions within a cellular environment in three dimensions. It is widely believed that the classical 2D (3D) resolution limit of optical microscopes precludes the study of single molecular interactions at distances of less than 200 nm (1 micron). However, it is well known that the classical resolution limit is based on heuristic notions. In fact, recent single molecule experiments have shown that the 2D resolution limit, i.e. Rayleigh's criterion, can be surpassed in an optical microscope setup. This illustrates that Rayleigh's criterion is inadequate for modern imaging approaches, thereby necessitating a re-assessment of the resolution limits of optical microscopes. Recently, we proposed a new modern resolution measure that overcomes the limitations of Rayleigh's criterion. Known as the fundamental resolution measure FREM, the new result predicts that distances well below the classical 2D resolution limit can be resolved in an optical microscope. By imaging closely spaced single molecules, it was experimentally verified that the new resolution measure can be attained in an optical microscope setup. In the present work, we extend this result to the 3D case and propose a 3D fundamental resolution measure 3D FREM that overcomes the limitations of the classical 3D resolution limit. We obtain an analytical expression for the 3D FREM. We show how the photon count of the single molecules affects the 3D FREM. We also investigate the effect of deteriorating experimental factors such as pixelation of the detector and extraneous noise sources on the new resolution measure. In contrast to the classical 3D resolution criteria, our new result predicts that distances well below the classical limit can be resolved. We expect that our results would provide novel tools for the design and analysis of 3D single molecule imaging experiments.

Ram, Sripad; Abraham, Anish V.; Ward, E. Sally; Ober, Raimund J.

2007-02-01

375

Effect of retinal ischemia on the non-image forming visual system.  

PubMed

Retinal ischemic injury is an important cause of visual impairment. The loss of retinal ganglion cells (RGCs) is a key sign of retinal ischemic damage. A subset of RGCs expressing the photopigment melanopsin (mRGCs) regulates non-image-forming visual functions such as the pupillary light reflex (PLR), and circadian rhythms. We studied the effect of retinal ischemia on mRGCs and the non-image-forming visual system function. For this purpose, transient ischemia was induced by raising intraocular pressure to 120 mm Hg for 40 min followed by retinal reperfusion by restoring normal pressure. At 4 weeks post-treatment, animals were subjected to electroretinography and histological analysis. Ischemia induced a significant retinal dysfunction and histological alterations. At this time point, a significant decrease in the number of Brn3a(+) RGCs and in the anterograde transport from the retina to the superior colliculus and lateral geniculate nucleus was observed, whereas no differences in the number of mRGCs, melanopsin levels, and retinal projections to the suprachiasmatic nuclei and the olivary pretectal nucleus were detected. At low light intensity, a decrease in pupil constriction was observed in intact eyes contralateral to ischemic eyes, whereas at high light intensity, retinal ischemia did not affect the consensual PLR. Animals with ischemia in both eyes showed a conserved locomotor activity rhythm and a photoentrainment rate which did not differ from control animals. These results suggest that the non-image forming visual system was protected against retinal ischemic damage. PMID:25238585

González Fleitas, María Florencia; Bordone, Melina; Rosenstein, Ruth E; Dorfman, Damián

2014-09-19

376

Effects of point configuration on the accuracy in 3D reconstruction from biplane images  

SciTech Connect

Two or more angiograms are being used frequently in medical imaging to reconstruct locations in three-dimensional (3D) space, e.g., for reconstruction of 3D vascular trees, implanted electrodes, or patient positioning. A number of techniques have been proposed for this task. In this simulation study, we investigate the effect of the shape of the configuration of the points in 3D (the 'cloud' of points) on reconstruction errors for one of these techniques developed in our laboratory. Five types of configurations (a ball, an elongated ellipsoid (cigar), flattened ball (pancake), flattened cigar, and a flattened ball with a single distant point) are used in the evaluations. For each shape, 100 random configurations were generated, with point coordinates chosen from Gaussian distributions having a covariance matrix corresponding to the desired shape. The 3D data were projected into the image planes using a known imaging geometry. Gaussian distributed errors were introduced in the x and y coordinates of these projected points. Gaussian distributed errors were also introduced into the gantry information used to calculate the initial imaging geometry. The imaging geometries and 3D positions were iteratively refined using the enhanced-Metz-Fencil technique. The image data were also used to evaluate the feasible R-t solution volume. The 3D errors between the calculated and true positions were determined. The effects of the shape of the configuration, the number of points, the initial geometry error, and the input image error were evaluated. The results for the number of points, initial geometry error, and image error are in agreement with previously reported results, i.e., increasing the number of points and reducing initial geometry and/or image error, improves the accuracy of the reconstructed data. The shape of the 3D configuration of points also affects the error of reconstructed 3D configuration; specifically, errors decrease as the 'volume' of the 3D configuration increases, as would be intuitively expected, and shapes with larger spread, such as spherical shapes, yield more accurate reconstructions. These results are in agreement with an analysis of the solution volume of feasible geometries and could be used to guide selection of points for reconstruction of 3D configurations from two views.
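
To make the simulation setup concrete, a minimal sketch of how shaped Gaussian point configurations can be generated and projected with image noise is given below (a subset of the five shapes; the pinhole-style projection and all parameter values are illustrative assumptions, not the enhanced-Metz-Fencil implementation).

```python
import numpy as np

rng = np.random.default_rng(0)

SHAPES = {                 # principal standard deviations (mm) along x, y, z
    "ball":       (30.0, 30.0, 30.0),
    "cigar":      (60.0, 15.0, 15.0),
    "pancake":    (30.0, 30.0, 5.0),
    "flat_cigar": (60.0, 15.0, 5.0),
}

def random_configuration(shape_name, n_points=20):
    """Draw a 3D point cloud from a zero-mean Gaussian whose covariance
    matches the requested configuration shape."""
    cov = np.diag(np.square(SHAPES[shape_name]))
    return rng.multivariate_normal(np.zeros(3), cov, size=n_points)

def project(points, focal_len=1000.0, source_to_obj=750.0, image_sigma=0.1):
    """Simple perspective projection onto one image plane plus Gaussian
    image-coordinate error (a stand-in for one view of the biplane geometry)."""
    z = source_to_obj + points[:, 2]
    uv = focal_len * points[:, :2] / z[:, None]
    return uv + rng.normal(scale=image_sigma, size=uv.shape)

pts = random_configuration("pancake", n_points=20)
uv = project(pts)
```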

Dmochowski, Jacek; Hoffmann, Kenneth R.; Singh, Vikas; Xu Jinhui; Nazareth, Daryl P. [Department of Mathematics and Statistics, UNC Charlotte, 9201 University City Boulevard, Charlotte, North Carolina 28223-0001 (United States); Department of Neurosurgery, Toshiba Stroke Center, University at Buffalo, Buffalo, New York 14214 (United States); Department of Computer Science, University at Buffalo, Buffalo, New York 14260 (United States); Toshiba Stroke Center, University at Buffalo, Buffalo, New York 14214 (United States)

2005-09-15

377

Application of the Kohonen network for automatic point correspondence in retinal images.  

PubMed

In this paper, an algorithm for automatic point correspondence is proposed towards retinal image registration. Given a pair of corresponding retinal images and a set of bifurcations or other salient points in one of the images, the algorithm detects effectively the set of corresponding points in the second image, by exploiting the properties of Kohonen's Self Organizing Maps and embedding them in a stochastic optimization procedure. The proposed algorithm was tested on 20 unimodal retinal pairs and the obtained results show an enhanced performance in terms of accuracy and robustness compared to the existing algorithm, on which it is based. PMID:18003506

Markaki, V E; Asvestas, P A; Matsopoulos, G K; Uzunoglu, N K

2007-01-01

378

Retinal imaging using commercial broadband optical coherence tomography  

PubMed Central

Aims To examine the practical improvement in image quality afforded by a broadband light source in a clinical setting and to define image quality metrics for future use in evaluating spectral domain optical coherence tomography (SD-OCT) images. Methods A commercially available SD-OCT system, configured with a standard source as well as an external broadband light source, was used to acquire 4 mm horizontal line scans of the right eye of 10 normal subjects. Scans were averaged to reduce speckling and multiple retinal layers were analysed in the resulting images. Results For all layers there was a significant improvement in the mean local contrast (average improvement by a factor of 1.66) when using the broadband light source. Intersession variability was shown not to be a major contributing factor to the observed improvement in image quality obtained with the broadband light source. We report the first observation of sublamination within the inner plexiform layer visible with SD-OCT. Conclusion The practical improvement with the broadband light source was significant, although it remains to be seen what the utility will be for diagnostic pathology. The approach presented here serves as a model for a more quantitative analysis of SD-OCT images, allowing for more meaningful comparisons between subjects, clinics and SD-OCT systems. PMID:19770161
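
The paper's exact mean-local-contrast metric is not reproduced in this record; a generic sliding-window local-contrast measure of the kind used for such comparisons might look like the following sketch (window size and normalization are assumptions).

```python
import numpy as np
from scipy.ndimage import uniform_filter

def mean_local_contrast(bscan, window=9, eps=1e-6):
    """Generic local contrast: per-window standard deviation divided by the
    window mean, averaged over the B-scan (one of many possible definitions)."""
    bscan = bscan.astype(float)
    mean = uniform_filter(bscan, size=window)
    mean_sq = uniform_filter(bscan ** 2, size=window)
    std = np.sqrt(np.maximum(mean_sq - mean ** 2, 0.0))
    return float(np.mean(std / (mean + eps)))
```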

Tanna, Hitesh; Dubis, Adam M; Ayub, Nazia; Tait, Diane M; Rha, Jungtae; Stepien, Kimberly E; Carroll, Joseph

2012-01-01

379

3D super-resolution imaging by localization microscopy.  

PubMed

Fluorescence microscopy is an important tool in all fields of biology to visualize structures and monitor dynamic processes and distributions. Contrary to conventional microscopy techniques such as confocal microscopy, which are limited by their spatial resolution, super-resolution techniques such as photoactivated localization microscopy (PALM) and stochastic optical reconstruction microscopy (STORM) have made it possible to observe and quantify structure and processes on the single molecule level. Here, we describe a method to image and quantify the molecular distribution of membrane-associated proteins in two and three dimensions with nanometer resolution. PMID:25331133

Magenau, Astrid; Gaus, Katharina

2015-01-01

380

An Ultrasonic Imaging System For 3-D Object Recognition  

NASA Astrophysics Data System (ADS)

An ultrasonic imaging system for use in factory automation and computer vision is proposed in this paper. In order to maintain effective range measurement at large incidence angles, multiple ultrasonic sensors are used in this system. A three-element delay-sum beamformer has been designed, and the circuit of the ultrasonic signal processor is described. The ultrasonic system is fully programmable and serves as an interface to the IBM PC. An X-Y axis mover, controlled via RS232, is installed so that the range profile, which is presented in this paper, can be obtained.

Chen, Yung-Chang; Yang, C. W.; Chen, C. F.

1987-10-01

381

Clinical Application of 3D-FIESTA Image in Patients with Unilateral Inner Ear Symptom  

PubMed Central

Background and Objectives Unilateral auditory dysfunction such as tinnitus and hearing loss could be a warning sign of a retrocochlear lesion. Auditory brainstem response (ABR) and internal auditory canal magnetic resonance image (MRI) are suggested as novel diagnostic tools for retrocochlear lesions. However, the high cost of MRI and the low sensitivity of the ABR test could be an obstacle when assessing patients with unilateral ear symptoms. The purpose of this study was to introduce the clinical usefulness of three-dimensional fast imaging employing steady-state acquisition (3D-FIESTA) MRI in patients with unilateral ear symptoms. Subjects and Methods Two hundred and fifty-three patients with unilateral tinnitus or unilateral hearing loss who underwent 3D-FIESTA temporal bone MRI as a screening test were enrolled. We reviewed the abnormal findings in the 3D-FIESTA images and ear symptoms using the medical records. Results In patients with unilateral ear symptoms, 51.0% of the patients had tinnitus and 32.8% patients were assessed to have sudden sensory neural hearing loss. With 3D-FIESTA imaging, twelve patients were diagnosed with acoustic neuroma, four with enlarged vestibular aqueduct syndrome, and two with posterior inferior cerebellar artery aneurysm. Inner ear anomalies and vestibulocochlear nerve aplasia could be diagnosed with 3D-FIESTA imaging. Conclusions 3D-FIESTA imaging is a highly sensitive method for the diagnosis of cochlear or retrocochlear lesions. 3D-FIESTA imaging is a useful screening tool for patients with unilateral ear symptoms. PMID:24653918

Oh, Jae Ho; Chung, Jae Ho; Min, Hyun Jung; Cho, Seok Hyun; Park, Chul Won

2013-01-01

382

A Novel 3D Building Damage Detection Method Using Multiple Overlapping UAV Images  

NASA Astrophysics Data System (ADS)

In this paper, a novel approach is presented that applies multiple overlapping UAV images to building damage detection. Traditional building damage detection methods focus on 2D change detection (i.e., changes in image appearance only), but the 2D information delivered by the images is often insufficient and inaccurate for building damage detection. Therefore, detecting building damage from the 3D structure of the scene is desirable. The key idea of 3D building damage detection is 3D change detection using 3D point clouds obtained from aerial images through Structure from Motion (SfM) techniques. The approach discussed in this paper not only uses height changes in the 3D scene but also utilizes the images' shape and texture features, fully combining the 2D and 3D information of the real world to detect building damage. The results, tested through a field study, demonstrate that this method is feasible and effective for building damage detection. It has also been shown that the proposed method is easily applicable and well suited for rapid damage assessment after natural disasters.
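
The height-change component of such a pipeline can be illustrated very simply: difference the pre- and post-event surface models derived from the UAV imagery and threshold the drop. The threshold value and array names below are assumptions.

```python
import numpy as np

def height_change_mask(dsm_before, dsm_after, threshold=1.5):
    """Flag cells whose height dropped by more than `threshold` metres between
    the pre- and post-event digital surface models (same grid assumed)."""
    drop = dsm_before - dsm_after
    return drop > threshold
```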

Sui, H.; Tu, J.; Song, Z.; Chen, G.; Li, Q.

2014-09-01

383

Dynamic visual image modeling for 3D synthetic scenes in agricultural engineering  

NASA Astrophysics Data System (ADS)

This paper addresses dynamic visual image modeling of 3D synthetic scenes using dynamic multichannel binocular visual images over a mobile self-organizing network. Technologies for 3D modeling of synthetic scenes have been widely used in many industries. The main purpose of this paper is to use multiple networks of dynamic visual monitors and sensors to observe an unattended area, exploiting the advantages of mobile networks in rural areas to further improve existing mobile network information services and to provide personalized information services. The goal of the display is to provide a faithful representation of the synthetic scene. Using low-power dynamic visual monitors and temperature/humidity sensors or GPS installed in the node equipment, monitoring data are sent at scheduled times. Then, through the mobile self-organizing network, the 3D model is rebuilt by synthesizing the returned images. On this basis, we formalize a novel algorithm for multichannel binocular visual 3D images based on fast 3D modeling. Taking advantage of these low-priced mobile devices, mobile self-organizing networks can gather large amounts of video from locations that are unsuitable for human observation or impossible to reach, and accurately synthesize the 3D scene. This approach has considerable potential for promoting such applications in agriculture.

Gao, Li; Yan, Juntao; Li, Xiaobo; Ji, Yatai; Li, Xin

384

Real-time 3D surface-image-guided beam setup in radiotherapy of breast cancer  

SciTech Connect

We describe an approach for external beam radiotherapy of breast cancer that utilizes the three-dimensional (3D) surface information of the breast. The surface data of the breast are obtained from a 3D optical camera that is rigidly mounted on the ceiling of the treatment vault. This 3D camera utilizes light in the visible range; therefore, it introduces no ionizing radiation to the patient. In addition to the surface topographical information of the treated area, the camera also captures gray-scale information that is overlaid on the 3D surface image. This allows us to visualize the skin markers and automatically determine the isocenter position and the beam angles in the breast tangential fields. The field sizes and shapes of the tangential, supraclavicular, and internal mammary gland fields can all be determined according to the 3D surface image of the target. A least-squares method is first introduced for the tangential-field setup that is useful for compensation of target shape changes. The entire process of capturing the 3D surface data and subsequently calculating the beam parameters typically requires less than 1 min. Our tests on phantom experiments and patient images have achieved an accuracy of 1 mm in shift and 0.5 deg. in rotation. Importantly, the target shape and position changes in each treatment session can both be corrected through this real-time image-guided system.
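
The least-squares step mentioned for the tangential-field setup is of the rigid point-set alignment kind; a generic SVD-based (Kabsch/Procrustes) least-squares fit is sketched below as an illustration of that computation, not the authors' specific algorithm.

```python
import numpy as np

def rigid_least_squares(source, target):
    """Least-squares rigid transform (R, t) mapping corresponding 3D surface
    points `source` onto `target`, via the SVD (Kabsch) solution."""
    src_c, tgt_c = source.mean(axis=0), target.mean(axis=0)
    H = (source - src_c).T @ (target - tgt_c)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = tgt_c - R @ src_c
    return R, t
```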

Djajaputra, David; Li Shidong [Department of Radiation Oncology and Molecular Radiation Sciences, Johns Hopkins University, 401 North Broadway, Baltimore, Maryland 21231 (United States)

2005-01-01

385

Registration of 3-D images using weighted geometrical features  

SciTech Connect

In this paper, the authors present a weighted geometrical features (WGF) registration algorithm. Its efficacy is demonstrated by combining points and a surface. The technique is an extension of Besl and McKay's iterative closest point (ICP) algorithm. The authors use the WGF algorithm to register X-ray computed tomography (CT) and T2-weighted magnetic resonance (MR) volume head images acquired from eleven patients that underwent craniotomies in a neurosurgical clinical trial. Each patient had five external markers attached to transcutaneous posts screwed into the outer table of the skull. The authors define registration error as the distance between positions of corresponding markers that are not used for registration. The CT and MR images are registered using fiducial points (marker positions) only, a surface only, and various weighted combinations of points and a surface. The CT surface is derived from contours corresponding to the inner surface of the skull. The MR surface is derived from contours corresponding to the cerebrospinal fluid (CSF)-dura interface. Registration using points and a surface is found to be significantly more accurate than registration using only points or a surface.
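
The WGF idea of mixing fiducial points and surface points in one registration can be sketched as a weighted ICP loop: surface correspondences come from closest-point lookup, fiducial correspondences are fixed, and both enter a single weighted rigid fit. Weights, helper names, and the KD-tree matching below are assumptions.

```python
import numpy as np
from scipy.spatial import cKDTree

def weighted_rigid_fit(src, dst, w):
    """Weighted least-squares rigid transform (R, t) mapping src onto dst."""
    w = w / w.sum()
    src_c = (w[:, None] * src).sum(axis=0)
    dst_c = (w[:, None] * dst).sum(axis=0)
    H = (src - src_c).T @ (w[:, None] * (dst - dst_c))
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))        # avoid reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, dst_c - R @ src_c

def weighted_points_and_surface_icp(ct_points, mr_points, ct_surface, mr_surface,
                                    w_point=10.0, w_surface=1.0, n_iter=30):
    """Each iteration: find closest MR surface points for the transformed CT
    surface points, pool them with the fixed fiducial-point correspondences,
    and re-solve one weighted rigid fit."""
    tree = cKDTree(mr_surface)
    R, t = np.eye(3), np.zeros(3)
    weights = np.concatenate([np.full(len(ct_points), w_point),
                              np.full(len(ct_surface), w_surface)])
    for _ in range(n_iter):
        _, idx = tree.query(ct_surface @ R.T + t)   # closest-point matches
        src = np.vstack([ct_points, ct_surface])
        dst = np.vstack([mr_points, mr_surface[idx]])
        R, t = weighted_rigid_fit(src, dst, weights)
    return R, t
```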

Maurer, C.R. Jr.; Aboutanos, G.B.; Dawant, B.M.; Maciunas, R.J.; Fitzpatrick, J.M. [Vanderbilt Univ., Nashville, TN (United States)]

1996-12-01

386

Sample preparation for 3D SIMS chemical imaging of cells.  

PubMed

Time-of-flight secondary ion mass spectrometry (ToF-SIMS) is an emerging technique for the characterization of biological systems. With the development of novel ion sources such as cluster ion beams, ionization efficiency has been increased, allowing for greater amounts of information to be obtained from the sample of interest. This enables the plotting of the distribution of chemical compounds against position with submicrometer resolution, yielding a chemical map of the material. In addition, by combining imaging with molecular depth profiling, a complete 3-dimensional rendering of the object is possible. The study of single biological cells presents significant challenges due to the fundamental complexity associated with any biological material. Sample preparation is of critical importance in controlling this complexity, owing to the fragile nature of biological cells and to the need to characterize them in their native state, free of chemical or physical changes. Here, we describe the four most widely used sample preparation methods for cellular imaging using ToF-SIMS, and provide guidance for data collection and analysis procedures. PMID:25361662

Winograd, Nicholas; Bloom, Anna

2015-01-01

387

3D+t Biventricular Strain from Tagged Magnetic Resonance Images by Phase-Unwrapped HARP  

PubMed Central

Purpose To validate a method called bi-ventricular strain unwrapped phase (BiSUP) for reconstructing three-dimensional plus time (3D+t) biventricular strain maps from phase-unwrapped harmonic phase (HARP) images derived from tagged cardiac magnetic resonance imaging (MRI). Materials and Methods A set of 30 human subjects were imaged with tagged MRI. In each study, HARP phase was computed and unwrapped in each short-axis and long-axis image. Inconsistencies in unwrapped phase were resolved using branch cuts manually placed with a graphical user interface. 3D strain maps were computed independently in each imaged time frame through systole and mid diastole in each study. The BiSUP strain and displacements were compared to those estimated by a 3D feature-based (FB) technique and a 2D+t HARP technique. Results The standard deviation of the difference between strains measured by the FB and the BiSUP methods was less than 4% of the average of the strains from the two methods. The correlation between peak minimum principal strain measured using the BiSUP and HARP techniques was over 83%. Conclusion The BiSUP technique can reconstruct full 3D+t strain maps from tagged MR images through the cardiac cycle in a reasonable amount of time and user interaction compared to other 3D analysis methods. PMID:21769965

Ambale, Bharath Venkatesh; Schiros, Chun G.; Gupta, Himanshu; Lloyd, Steven G.; Dell'Italia, Louis; Denney, Thomas S.

2011-01-01

388

Optimized Protocol for Retinal Wholemount Preparation for Imaging and Immunohistochemistry  

PubMed Central

Working with delicate tissue can be a complicating factor when performing immunohistochemical assessment. Here, we present a method that utilizes a ring-supported hydrophilized PTFE membrane to provide structural support to both living and fixed tissue during immunohistochemical processing, which allows for the use of a variety of protocols that would otherwise cause damage to the tissue. First, this is demonstrated with bolus loading of fluorescent markers into living retinal tissue. This method allows for quick visualization of targeted structures, while the membrane support maintains tissue integrity during the injection and allows for easy transfer of the preparation for further imaging or processing. Second, a procedure for antibody staining in tissue fixed with carbodiimide is described. Though paraformaldehyde fixation is more common, carbodiimide fixation provides a superior substrate for the visualization of synaptic proteins. A limitation of carbodiimide is that the resulting fixed tissue is relatively fragile; however, this is overcome with the use of the supporting membrane. Retinal tissue is used to demonstrate these techniques, but they may be applied to any fragile tissue. PMID:24379013

Ivanova, Elena; Toychiev, Abduqodir H; Yee, Christopher W; Sagdullaev, Botir T

2014-01-01

389

Distortion-free wide-angle 3D imaging and visualization using off-axially distributed image sensing.  

PubMed

We propose a new off-axially distributed image sensing (ODIS) system using a wide-angle lens for reconstructing distortion-free wide-angle slice images computationally. In the proposed system, the wide-angle image sensor captures a wide-angle 3D scene, and thus the collected information of the 3D objects is severely distorted. To correct this distortion, we introduce a new correction process involving the wide-angle lens into the computational reconstruction in ODIS. This enables us to reconstruct distortion-free, wide-angle slice images for visualization of 3D objects. Experiments were carried out to verify the proposed method. To the best of our knowledge, this is the first time the use of a wide-angle lens in a multiple-perspective 3D imaging system is described. PMID:25121689

Zhang, Miao; Piao, Yongri; Kim, Nam-Woo; Kim, Eun-Soo

2014-07-15

390

Spectral domain optical coherence tomography imaging in optic disk pit associated with outer retinal dehiscence  

PubMed Central

A 37-year-old Bangladeshi male presented with an inferotemporal optic disk pit and serous macular detachment in the left eye. Imaging with spectral domain optical coherence tomography (OCT) revealed a multilayer macular schisis pattern with a small subfoveal outer retinal dehiscence. This case illustrates a rare phenotype of optic disk maculopathy with macular schisis and a small outer retinal layer dehiscence. Spectral domain OCT was a useful adjunct in delineating the retinal layers in optic disk pit maculopathy, and revealed a small area of outer retinal layer dehiscence that could only have been detected on high-resolution OCT. PMID:25349471

Wong, Chee Wai; Wong, Doric; Mathur, Ranjana

2014-01-01

391

Terahertz Lasers Reveal Information for 3D Images  

NASA Technical Reports Server (NTRS)

After taking off her shoes and jacket, she places them in a bin. She then takes her laptop out of its case and places it in a separate bin. As the items move through the x-ray machine, the woman waits for a sign from security personnel to pass through the metal detector. Today, she was lucky; she did not encounter any delays. The man behind her, however, was asked to step inside a large circular tube, raise his hands above his head, and have his whole body scanned. If you have ever witnessed a full-body scan at the airport, you may have witnessed terahertz imaging. Terahertz wavelengths are located between microwave and infrared on the electromagnetic spectrum. When exposed to these wavelengths, certain materials such as clothing, thin metal, sheet rock, and insulation become transparent. At airports, terahertz radiation can illuminate guns, knives, or explosives hidden underneath a passenger's clothing. At NASA's Kennedy Space Center, terahertz wavelengths have assisted in the inspection of materials like insulating foam on the external tanks of the now-retired space shuttle. "The foam we used on the external tank was a little denser than Styrofoam, but not much," says Robert Youngquist, a physicist at Kennedy. The problem, he explains, was that "we lost a space shuttle by having a chunk of foam fall off from the external fuel tank and hit the orbiter." To uncover any potential defects in the foam covering, such as voids or air pockets, that could keep the material from staying in place, NASA employed terahertz imaging to see through the foam. For many years, the technique ensured the integrity of the material on the external tanks.

2013-01-01

392

Real-time multispectral 3-D photoacoustic imaging of blood phantoms  

NASA Astrophysics Data System (ADS)

Photoacoustic imaging is exquisitely sensitive to blood and can infer blood oxygenation based on multispectral images. In this work we present multispectral real-time 3D photoacoustic imaging of blood phantoms. We used a custom-built 128-channel hemispherical transducer array coupled to two Nd:YAG pumped OPO laser systems synchronized to provide double pulse excitation at 680 nm and 1064 nm wavelengths, all during a triggered series of ultrasound pressure measurements lasting less than 300 µs. The results demonstrated that 3D PAI is capable of differentiating between oxygenated and deoxygenated blood at high speed at mm-level resolution.
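
Inferring oxygenation from two wavelengths reduces to a 2x2 linear unmixing of the absorption coefficients; a sketch follows. The extinction coefficients are placeholder values for illustration only (real estimates should use tabulated spectra).

```python
import numpy as np

# Placeholder molar extinction coefficients for HbO2 and Hb at the two
# excitation wavelengths -- assumed, illustrative values only; substitute
# tabulated data for real oxygenation estimates.
EPSILON = np.array([
    # HbO2,   Hb
    [  288.0, 2407.0],   # 680 nm  (assumed)
    [ 1058.0,  206.0],   # 1064 nm (assumed)
])

def unmix_so2(mu_a_680, mu_a_1064):
    """Solve mu_a(lambda) = eps_HbO2 * C_HbO2 + eps_Hb * C_Hb for the two
    chromophore concentrations, then return sO2 = C_HbO2 / (C_HbO2 + C_Hb)."""
    mu = np.array([mu_a_680, mu_a_1064], dtype=float)
    c_hbo2, c_hb = np.linalg.solve(EPSILON, mu)
    return c_hbo2 / (c_hbo2 + c_hb)
```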

Kosik, Ivan; Carson, Jeffrey J. L.

2013-03-01

393

[A 3D real-time visualization system of medical image].  

PubMed

Based on the integration of the Visualization Toolkit (VTK) and the real-time rendering system VolumePro, 4D View, a 3D real-time visualization system for medical images that supports real-time visualization of and interaction with 3D medical images (such as MR and CT), was developed. First, VTK, VolumePro, and their integration are introduced briefly; then the system design and function template of the 4D View system are discussed in detail; finally, some visualization results acquired with 4D View are illustrated. According to the results, the 4D View system can effectively address the poor real-time performance of 3D medical image visualization and interaction, so it should find wide application in clinical diagnosis, therapy, and medical research. PMID:12557541

Liu, Jiquan; Feng, Jingyi; Cai, Chao; Duan, Huilong

2002-09-01

394

2D imaging and 3D sensing data acquisition and mutual registration for painting conservation  

NASA Astrophysics Data System (ADS)

We describe the application of 2D and 3D data acquisition and mutual registration to the conservation of paintings. RGB color image acquisition, IR and UV fluorescence imaging, together with the more recent hyperspectral imaging (32 bands) are among the most useful techniques in this field. They generally are meant to provide information on the painting materials, on the employed techniques and on the object state of conservation. However, only when the various images are perfectly registered on each other and on the 3D model, no ambiguity is possible and safe conclusions may be drawn. We present the integration of 2D and 3D measurements carried out on two different paintings: "Madonna of the Yarnwinder" by Leonardo da Vinci, and "Portrait of Lionello d'Este", by Pisanello, both painted in the XV century.

Fontana, Raffaella; Gambino, Maria Chiara; Greco, Marinella; Marras, Luciano; Pampaloni, Enrico M.; Pelagotti, Anna; Pezzati, Luca; Poggi, Pasquale

2004-12-01

395

2D imaging and 3D sensing data acquisition and mutual registration for painting conservation  

NASA Astrophysics Data System (ADS)

We describe the application of 2D and 3D data acquisition and mutual registration to the conservation of paintings. RGB color image acquisition, IR and UV fluorescence imaging, together with the more recent hyperspectral imaging (32 bands) are among the most useful techniques in this field. They generally are meant to provide information on the painting materials, on the employed techniques and on the object state of conservation. However, only when the various images are perfectly registered on each other and on the 3D model, no ambiguity is possible and safe conclusions may be drawn. We present the integration of 2D and 3D measurements carried out on two different paintings: "Madonna of the Yarnwinder" by Leonardo da Vinci, and "Portrait of Lionello d'Este", by Pisanello, both painted in the XV century.

Fontana, Raffaella; Gambino, Maria Chiara; Greco, Marinella; Marras, Luciano; Pampaloni, Enrico M.; Pelagotti, Anna; Pezzati, Luca; Poggi, Pasquale

2005-01-01

396

3D topography of biologic tissue by multiview imaging and structured light illumination  

NASA Astrophysics Data System (ADS)

Obtaining three-dimensional (3D) information of biologic tissue is important in many medical applications. This paper presents two methods for reconstructing 3D topography of biologic tissue: multiview imaging and structured light illumination. For each method, the working principle is introduced, followed by experimental validation on a diabetic foot model. To compare the performance characteristics of these two imaging methods, a coordinate measuring machine (CMM) is used as a standard control. The wound surface topography of the diabetic foot model is measured by multiview imaging and structured light illumination methods respectively and compared with the CMM measurements. The comparison results show that the structured light illumination method is a promising technique for 3D topographic imaging of biologic tissue.

Liu, Peng; Zhang, Shiwu; Xu, Ronald

2014-02-01

397

Marginal blind deconvolution of adaptive optics retinal images.  

PubMed

Adaptive Optics corrected flood imaging of the retina has been in use for more than a decade and is now a well-developed technique. Nevertheless, raw AO flood images are usually of poor contrast because of the three-dimensional nature of the imaging, meaning that the image contains information coming from both the in-focus plane and the out-of-focus planes of the object, which also leads to a loss in resolution. Interpretation of such images is therefore difficult without an appropriate post-processing, which typically includes image deconvolution. The deconvolution of retina images is difficult because the point spread function (PSF) is not well known, a problem known as blind deconvolution. We present an image model for dealing with the problem of imaging a 3D object with a 2D conventional imager in which the recorded 2D image is a convolution of an invariant 2D object with a linear combination of 2D PSFs. The blind deconvolution problem boils down to estimating the coefficients of the PSF linear combination. We show that the conventional method of joint estimation fails even for a small number of coefficients. We derive a marginal estimation of the unknown parameters (PSF coefficients, object Power Spectral Density and noise level) followed by a MAP estimation of the object. We show that the marginal estimation has good statistical convergence properties and we present results on simulated and experimental data. PMID:22109201
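
The image model described here, a 2D object convolved with a linear combination of defocus-plane PSFs plus noise, can be simulated in a few lines; the marginal estimator itself is not reproduced. PSFs are assumed to be pre-shifted to the origin and of the same shape as the object, and all names are illustrative.

```python
import numpy as np

def simulate_ao_flood_image(obj, psfs, alphas, read_noise=1.0, rng=None):
    """Forward model: i = obj * (sum_k alpha_k h_k) + n, with the convolution
    done in Fourier space. `psfs` is a stack of 2D PSFs already shifted to the
    origin; `alphas` are the plane weights (normalised here)."""
    rng = rng or np.random.default_rng()
    alphas = np.asarray(alphas, dtype=float)
    alphas = alphas / alphas.sum()
    combined_psf = np.tensordot(alphas, psfs, axes=1)     # sum_k alpha_k h_k
    otf = np.fft.rfftn(combined_psf, s=obj.shape)
    image = np.fft.irfftn(np.fft.rfftn(obj) * otf, s=obj.shape)
    return image + rng.normal(scale=read_noise, size=obj.shape)
```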

Blanco, L; Mugnier, L M

2011-11-01

398

Midsagittal plane extraction from brain images based on 3D SIFT  

NASA Astrophysics Data System (ADS)

Midsagittal plane (MSP) extraction from 3D brain images is considered a promising technique for human brain symmetry analysis. In this paper, we present a fast and robust MSP extraction method based on the 3D scale-invariant feature transform (SIFT). Unlike existing brain MSP extraction methods, which mainly rely on gray-level similarity, 3D edge registration, or parameterized surface matching to determine the fissure plane, our proposed method is based on distinctive 3D SIFT features, in which the fissure plane is determined by parallel 3D SIFT matching and iterative least-median-of-squares plane regression. By considering the relative scales, orientations, and flipped descriptors between two 3D SIFT features, we propose a novel metric to measure the symmetry magnitude for 3D SIFT features. By clustering and indexing the extracted SIFT features using a k-dimensional tree (KD-tree) implemented on graphics processing units, we can match multiple pairs of 3D SIFT features in parallel and solve for the optimal MSP on-the-fly. The proposed method is evaluated on synthetic and in vivo datasets, of normal and pathological cases, and validated by comparisons with state-of-the-art methods. Experimental results demonstrate that our method achieves real-time performance with better accuracy, yielding an average yaw angle error below 0.91 degrees and an average roll angle error of no more than 0.89 degrees.
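
The least-median-of-squares plane regression mentioned above can be sketched as a simple random-sampling estimator (the 3D SIFT symmetry metric and GPU KD-tree matching are not reproduced; trial count and tolerances are assumptions).

```python
import numpy as np

def lmeds_plane(points, n_trials=500, rng=None):
    """Fit a plane n.x = d to 3D points by least median of squares:
    repeatedly fit a plane to 3 random points and keep the candidate
    whose median squared point-to-plane distance is smallest."""
    rng = rng or np.random.default_rng()
    best = (np.inf, None, None)
    for _ in range(n_trials):
        p0, p1, p2 = points[rng.choice(len(points), size=3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(n)
        if norm < 1e-12:                      # degenerate (collinear) sample
            continue
        n = n / norm
        d = n @ p0
        med = np.median((points @ n - d) ** 2)
        if med < best[0]:
            best = (med, n, d)
    return best[1], best[2]                   # plane normal and offset
```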

Wu, Huisi; Wang, Defeng; Shi, Lin; Wen, Zhenkun; Ming, Zhong

2014-03-01

399

From 3-D Sonar Images to Augmented Reality Models for Objects Buried on the Seafloor  

Microsoft Academic Search

The investigation of man-made objects lying on or embedded in the sea floor can be carried out with acoustic imaging techniques and subsequent data processing. In this paper, we describe a processing chain that starts with a 3-D acoustic image of the object to be examined and ends with an augmented reality model, which requires minimal user involvement. Essentially, the

Maria Palmese; Andrea Trucco

2008-01-01

400

A global optimization strategy for 3D-2D registration of vascular images  

E-print Network

set of sample test points are systematically generated. The values of dissimilarity to the registered, for example, a radiologist relies on 2D projection images (e.g. Digital subtraction Angiogram) to visualize the operation. Often, a preoperative 3D image (e.g. 3DRA) of the vasculature is produced for the radiologist

Chung, Albert C. S.

401

Intensity-Based 2D-3D Spine Image Registration Incorporating One Fiducial Marker  

E-print Network

in the cervical spine where the vertebral structures are small and fragile. We are particularly interestedIntensity-Based 2D-3D Spine Image Registration Incorporating One Fiducial Marker Daniel B spine image data. The use of one fiducial marker substantially improves registration accuracy

Pratt, Vaughan

402

An Object-Oriented Simulator for 3D Digital Breast Tomosynthesis Imaging System  

PubMed Central

Digital breast tomosynthesis (DBT) is an innovative imaging modality that provides 3D reconstructed images of the breast to detect breast cancer. Projections obtained with an X-ray source moving over a limited angular interval are used to reconstruct the 3D image of the breast. Several reconstruction algorithms are available for DBT imaging. The filtered back projection algorithm has traditionally been used to reconstruct images from projections. Iterative reconstruction algorithms such as the algebraic reconstruction technique (ART) were later developed. Recently, compressed sensing based methods have been proposed for the tomosynthesis imaging problem. We have developed an object-oriented simulator for a 3D digital breast tomosynthesis (DBT) imaging system using the C++ programming language. The simulator is capable of implementing different iterative and compressed sensing based reconstruction methods on 3D digital tomosynthesis data sets and phantom models. A user friendly graphical user interface (GUI) helps users to select and run the desired methods on the designed phantom models or real data sets. The simulator has been tested on a phantom study that simulates the breast tomosynthesis imaging problem. Results obtained with various methods including the algebraic reconstruction technique (ART) and total variation regularized reconstruction technique (ART+TV) are presented. The reconstruction results of the methods are compared both visually and quantitatively by evaluating the performance of the methods using mean structural similarity (MSSIM) values. PMID:24371468
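
Among the reconstruction methods named, ART is the simplest to sketch; the Kaczmarz-style row update it is built on looks roughly like the following (a dense-matrix Python sketch for illustration, whereas the simulator itself is written in C++ and a real DBT system would use an on-the-fly projector).

```python
import numpy as np

def art(A, b, n_sweeps=10, relax=0.2, nonneg=True):
    """Algebraic reconstruction technique: cycle through the rows of the
    system A x = b and project the current estimate onto each row's
    hyperplane, optionally clamping the estimate to non-negative values."""
    x = np.zeros(A.shape[1])
    row_norms = np.einsum("ij,ij->i", A, A)
    for _ in range(n_sweeps):
        for i in range(A.shape[0]):
            if row_norms[i] == 0:
                continue
            x += relax * (b[i] - A[i] @ x) / row_norms[i] * A[i]
            if nonneg:
                np.maximum(x, 0.0, out=x)
    return x
```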

Cengiz, Kubra

2013-01-01

403

3D tensor factorization approach to single-frame model-free blind-image deconvolution.  

PubMed

By applying a bank of 2D Gabor filters to a blurred image, single-frame blind-image deconvolution (SF BID) is formulated as a 3D tensor factorization (TF) problem, with the key contribution that neither origin nor size of the spatially invariant blurring kernel is required to be known or estimated. Mixing matrix, the original image, and its spatial derivatives are identified from the factors in the Tucker3 model of the multichannel version of the blurred image. Previous approaches to 2D Gabor-filter-bank-based SF BID relied on 2D representation of the multichannel version of the blurred image and matrix factorization methods such as nonnegative matrix factorization (NMF) and independent component analysis (ICA). Unlike matrix factorization-based methods 3D TF preserves local structure in the image. Moreover, 3D TF based on the PARAFAC model is unique up to permutation and scales under very mild conditions. To achieve this, NMF and ICA respectively require enforcement of sparseness and statistical independence constraints on the original image and its spatial derivatives. These constraints are generally not satisfied. The 3D TF-based SF BID method is demonstrated on an experimental defocused red-green-blue image. PMID:19756121

Kopriva, Ivica

2009-09-15

404

Tracking brain deformations in time-sequences of 3D US images  

E-print Network

Tracking brain deformations in time-sequences of 3D US images X. Pennec, P. Cachier and N. Ayache a neurosurgical intervention, the brain tissues shift and warp. In order to keep an accurate positioning of the surgical instruments, one has to estimate this deformation from intra-operative images. We present

Boyer, Edmond

405

3D building facade model reconstruction using parallel images acquired by line scan cameras  

Microsoft Academic Search

This paper proposes a stereo method robust against occlusions that can reconstruct 3D building facades from images captured from a vehicle. When using ground-based views of facades, the occlusion problem caused by roadside trees and telegraph poles in urban areas is one of the most difficult barriers. In this study, we show the stereo matching based on parallel images

Kaori Kataoka; Tatsuya Osawa; Shiro Ozawa; Kaoru Wakabayashi; Kenichi Arakawa

2005-01-01

406

Data fusion method for 3-D object reconstruction from range images  

Microsoft Academic Search

A method of data fusion from a set of range images for 3-D object surface reconstruction is presented. The two major steps (multiview registration and data integration) of data fusion are carefully discussed. Firstly, the range images taken from multiple views are accurately registered through a set of translation and rotation matrices whose coefficients are carefully calculated through the developed

Xiaokun Li; William G. Wee

2005-01-01

407

AN APPROACH FOR INTERSUBJECT ANALYSIS OF 3D BRAIN IMAGES BASED ON CONFORMAL GEOMETRY  

E-print Network

AN APPROACH FOR INTERSUBJECT ANALYSIS OF 3D BRAIN IMAGES BASED ON CONFORMAL GEOMETRY Guangyu Zou Emission Tomography (PET) and Diffusion Tensor Imaging (DTI) have accelerated brain research in many aspects. In order to better understand the synergy of the many processes involved in normal brain function

Hua, Jing

408

Using specularities in comparing 3D models and 2D images  

Microsoft Academic Search

We aim to create systems that identify and locate objects by comparing known, 3D shapes to intensity images that they have produced. To do this we focus on verification methods that determine whether a known model in a specific pose is consistent with an image. We build on prior work that has done this successfully for Lambertian objects, to handle

Margarita Osadchy; David Jacobs; Ravi Ramamoorthi; David Tucker

2008-01-01

409

Restoration of 3D medical images with total variation scheme on wavelet domains (TVW).  

E-print Network

Restoration of 3D medical images with total variation scheme on wavelet domains (TVW). Arnaud Ogier. Keywords: wavelets, total variation, restoration, ultrasound. Introduction: New applications of medical imaging restoration schemes have been proposed. It is clear that only an approach balancing between the mathematical

Paris-Sud XI, Université de
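
The record above combines total variation with wavelet-domain processing, but the snippet gives no details. Purely to illustrate the wavelet side, the sketch below (assuming PyWavelets) soft-thresholds the detail coefficients of a 3D volume; the wavelet, decomposition level, and threshold are arbitrary placeholders, not values from the cited work.

    import pywt

    def wavelet_soft_threshold(volume, wavelet="db2", level=2, thresh=0.05):
        """Soft-threshold the detail coefficients of a 3-D volume (illustrative only)."""
        coeffs = pywt.wavedecn(volume, wavelet=wavelet, level=level)
        out = [coeffs[0]]                          # keep the approximation untouched
        for detail_level in coeffs[1:]:            # each level is a dict of detail subbands
            out.append({k: pywt.threshold(v, thresh, mode="soft")
                        for k, v in detail_level.items()})
        return pywt.waverecn(out, wavelet=wavelet)
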

410

Fusion of laser and image sensory data for 3-D modeling of the free navigation space  

NASA Technical Reports Server (NTRS)

A fusion technique which combines two different types of sensory data for 3-D modeling of a navigation space is presented. The sensory data are generated by a vision camera and a laser scanner. The problem of the different resolutions of these sensory data was solved by reducing the image resolution, fusing the different data, and using a fuzzy image segmentation technique.

Mass, M.; Moghaddamzadeh, A.; Bourbakis, N.

1994-01-01

411

COMPARING THREE PCA-BASED METHODS FOR THE 3D VISUALIZATION OF IMAGING SPECTROSCOPY DATA  

E-print Network

Comparing three PCA-based methods for the 3D visualization of imaging spectroscopy data. We discuss three criteria for judging the quality of features in these visualizations. Keywords: imaging spectroscopy, principal component analysis and multidimensional. Introduction: Direct volume

Liere, Robert van
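
The record above compares PCA-based ways of turning imaging spectroscopy data into something that can be visualized in 3D. As one plain illustration of the PCA step only, the sketch below projects a hyperspectral cube onto its first three principal components and rescales them to [0, 1] for display as color channels; this is a generic baseline, not one of the three compared methods.

    import numpy as np

    def pca_to_rgb(cube):
        """Project a (rows, cols, bands) spectral cube onto its first three
        principal components and rescale each to [0, 1] for display."""
        rows, cols, bands = cube.shape
        X = cube.reshape(-1, bands).astype(float)
        X -= X.mean(axis=0)                        # center each band
        # Principal directions from the SVD of the centered data matrix
        _U, _S, Vt = np.linalg.svd(X, full_matrices=False)
        scores = X @ Vt[:3].T                      # first three PC scores
        lo, hi = scores.min(axis=0), scores.max(axis=0)
        scores = (scores - lo) / np.maximum(hi - lo, 1e-12)
        return scores.reshape(rows, cols, 3)
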

412

Using Spin Images for Efficient Object Recognition in Cluttered 3D Scenes  

Microsoft Academic Search

We present a 3D shape-based object recognition system for simultaneous recognition of multiple objects in scenes containing clutter and occlusion. Recognition is based on matching surfaces by matching points using the spin image representation. The spin image is a data level shape descriptor that is used to match surfaces represented as surface meshes. We present a compression scheme for spin

Andrew Edie Johnson; Martial Hebert

1999-01-01
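
The spin-image representation referenced above maps every surface point into a 2D (alpha, beta) histogram attached to an oriented basis point. The sketch below computes one such histogram from a point cloud; the bin size, image width, and the omission of bilinear spreading are simplifications for illustration rather than the exact descriptor of the cited system.

    import numpy as np

    def spin_image(points, p, n, bin_size=0.01, image_width=32):
        """Simplified spin-image histogram for one oriented point (p, n).

        points : (N, 3) surface vertices
        p, n   : basis point position and unit surface normal
        alpha  : radial distance from the line through p along n
        beta   : signed height along n (beta = 0 maps to the middle row)
        """
        d = points - p
        beta = d @ n
        alpha = np.sqrt(np.maximum(np.einsum("ij,ij->i", d, d) - beta**2, 0.0))
        i = np.floor(image_width / 2 - beta / bin_size).astype(int)   # row index
        j = np.floor(alpha / bin_size).astype(int)                    # column index
        img = np.zeros((image_width, image_width))
        valid = (i >= 0) & (i < image_width) & (j >= 0) & (j < image_width)
        np.add.at(img, (i[valid], j[valid]), 1.0)     # accumulate point counts
        return img
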

413

Accurate 3D face reconstruction from weakly calibrated wide baseline images with profile contours  

Microsoft Academic Search

We propose a method to generate a highly accurate 3D face model from a set of wide-baseline images in a weakly calibrated setup. Our approach is purely data driven and produces faithful 3D models without any pre-defined models, unlike other statistical model-based approaches. Our results rely neither on a critical initialization step nor on parameters for optimization steps. We process

Yuping Lin; Gérard G. Medioni; Jongmoo Choi

2010-01-01

414

Semi-Automated 3D City Modeling Using Stereo High-Resolution Satellite Images  

Microsoft Academic Search

This paper presents a methodology for generating semi-automated 3D models of urban areas from stereo high-resolution satellite images. The prevalent adoption of the Rational Function Model (RFM) during recent years as a replacement for the rigorous satellite orientation sensor model by most commercial imagery providers makes 3D information extraction fast and reliable for most end-users. This paper presents an application of RFM

Pooya Sarabandi; Anne S. Kiremidjian; Ronald T. Eguchi

415

Terrain Dependent Correspondence Search for 3D Urban Modeling Using Multiple High-Resolution Satellite Images  

Microsoft Academic Search

This paper presents a methodology for generating semi-automated 3D models of urban areas from multiple high-resolution satellite images. The adoption of Rational Function Models (RFMs) during recent years as a replacement for the satellite's rigorous sensor model by many commercial imagery providers makes 3D information extraction fast and reliable for most end-users. This paper presents an application of RFM in

Pooya Sarabandi; A. S. Kiremidjian

2007-01-01

416

3D MR imaging of dental cavities: an in vitro study  

Microsoft Academic Search

A 3D spin-echo (3D SE) pulse sequence was used on a 4.7 T research MRI system to produce images of an extracted human first molar tooth placed in a CuSO4 water solution. The maximal resolution achieved was 35×63×300 μm³ in the read and the two phase directions, respectively. The high-intensity signal from water in the solution together with the lack of signal from mineralized tooth tissue produce

Władysław P. Węglarz; Marta Tanasiewicz; Tomasz Kupka; Tomasz Skórka; Zenon Sułek; Andrzej Jasiński

2004-01-01

417

Imaging and tracking elements of the International Space Station using a 3D autosynchronized scanner  

Microsoft Academic Search

The Neptec Design Group has developed a new 3D auto-synchronized laser scanner for space applications, based on a principle from the National Research Council of Canada. In imaging mode, the Laser Camera System (LCS) raster scans objects and computes high-resolution 3D maps of their surface features. In centroid acquisition mode, the LCS determines the position of discrete target points on

Claire Samson; Chad E. English; Adam M. DesLauriers; Iain Christie; Francois Blais

2002-01-01

418

Space Radar Image of Death Valley in 3-D  

NASA Technical Reports Server (NTRS)

This picture is a three-dimensional perspective view of Death Valley, California. This view was constructed by overlaying a SIR-C radar image on a U.S. Geological Survey digital elevation map. The SIR-C image is centered at 36.629 degrees north latitude and 117.069 degrees west longitude. We are looking at Stove Pipe Wells, which is the bright rectangle located in the center of the picture frame. Our vantage point is located atop a large alluvial fan centered at the mouth of Cottonwood Canyon. In the foreground on the left, we can see the sand dunes near Stove Pipe Wells. In the background on the left, the Valley floor gradually falls in elevation toward Badwater, the lowest spot in the United States. In the background on the right, we can see Tucki Mountain. This SIR-C/X-SAR supersite is an area of extensive field investigations and has been visited by both Space Radar Lab astronaut crews. Elevations in the Valley range from 70 meters (230 feet) below sea level, the lowest in the United States, to more than 3,300 meters (10,800 feet) above sea level. Scientists are using SIR-C/X-SAR data from Death Valley to help answer a number of different questions about Earth's geology. One question concerns how alluvial fans are formed and change through time under the influence of climatic changes and earthquakes. Alluvial fans are gravel deposits that wash down from the mountains over time. They are visible in the image as circular, fan-shaped bright areas extending into the darker valley floor from the mountains. Information about the alluvial fans helps scientists study Earth's ancient climate. Scientists know the fans are built up through climatic and tectonic processes and they will use the SIR-C/X-SAR data to understand the nature and rates of weathering processes on the fans, soil formation and the transport of sand and dust by the wind. SIR-C/X-SAR's sensitivity to centimeter-scale (inch-scale) roughness provides detailed maps of surface texture. Such information can be used to study the occurrence and movement of dust storms and sand dunes. The goal of these studies is to gain a better understanding of the record of past climatic changes and the effects of those changes on a sensitive environment. This may lead to a better ability to predict the future response of the land to different potential global climate-change scenarios. Vertical exaggeration is 1.87 times; exaggeration of relief is a common tool scientists use to detect relationships between structure (for example, faults and fractures) and topography. Death Valley is also one of the primary calibration sites for SIR-C/X-SAR. In the lower right quadrant of the picture frame, two bright dots can be seen, which form a line extending to Stove Pipe Wells. These dots are corner reflectors that have been set up to calibrate the radar as the shuttle passes overhead. Thirty triangular-shaped reflectors (they look like aluminum pyramids) have been deployed by the calibration team from JPL over a 40- by 40-kilometer (25- by 25-mile) area in and around Death Valley. The signatures of these reflectors were analyzed by JPL scientists to calibrate the image used in this picture. The calibration team here also deployed transponders (electronic reflectors) and receivers to measure the radar signals from SIR-C/X-SAR on the ground. SIR-C/X-SAR radars illuminate Earth with microwaves, allowing detailed observations at any time, regardless of weather or sunlight conditions. 
SIR-C/X-SAR uses three microwave wavelengths: L-band (24 cm), C-band (6 cm) and X-band (3 cm). The multi-frequency data will be used by the international scientific community to better understand the global environment and how it is changing. The SIR-C/X-SAR data, in conjunction with aircraft and ground studies, will give scientists clearer insights into those environmental changes which are caused by nature and those changes which are induced by human activity. SIR-C was developed by NASA's Jet Propulsion Laboratory. X-SAR was developed by the Dornier and Alenia Spazio companies for the German space agency, Deutsche

1999-01-01