Note: This page contains sample records for the topic 3-d retinal imaging from Science.gov.
While these samples are representative of the content of Science.gov,
they are not comprehensive nor are they the most current set.
We encourage you to perform a real-time search of Science.gov
to obtain the most current and comprehensive results.
Last update: August 15, 2014.
1

2-D Registration and 3-D Shape Inference of the Retinal Fundus from Fluorescein Images  

PubMed Central

This study presents methods for 2-D registration of retinal image sequences and 3-D shape inference from fluorescein images. The Y-feature is a robust geometric entity that is largely invariant across modalities as well as across the temporal grey level variations induced by the propagation of the dye in the vessels. We first present a Y-feature extraction method that finds a set of Y-feature candidates using local image gradient information. A gradient-based approach is then used to align an articulated model of the Y-feature to the candidates more accurately while optimizing a cost function. Using mutual information, fitted Y-features are subsequently matched across images, including color and fluorescein angiographic frames, for registration. To reconstruct the retinal fundus in 3-D, the extracted Y-features are used to estimate the epipolar geometry with a plane-and-parallax approach. The proposed solution provides a robust estimation of the fundamental matrix suitable for plane-like surfaces, such as the retinal fundus. The mutual information criterion is used to accurately estimate the dense disparity map, while the Y-features are used to estimate the bounds of the range space. Our experimental results validate the proposed method on a set of difficult fluorescein image pairs.
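
The mutual-information criterion used above to match Y-features across modalities can be estimated from a joint grey-level histogram. A minimal sketch (the function name, bin count, and test patches are our own illustration, not from the paper):

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Mutual information (in nats) between two equally sized image patches,
    estimated from their joint grey-level histogram."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of patch a
    py = pxy.sum(axis=0, keepdims=True)   # marginal of patch b
    nz = pxy > 0                          # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

rng = np.random.default_rng(0)
patch = rng.random((64, 64))
noisy = patch + 0.05 * rng.standard_normal((64, 64))   # correct correspondence
unrelated = rng.random((64, 64))                       # wrong correspondence
```

A correct correspondence (`noisy`) scores markedly higher than a random one (`unrelated`), which is what makes the criterion usable for matching across color and angiographic frames.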

Choe, Tae Eun; Medioni, Gerard; Cohen, Isaac; Walsh, Alexander C.; Sadda, SriniVas R.

2008-01-01

2

3D OCT imaging in clinical settings: toward quantitative measurements of retinal structures  

NASA Astrophysics Data System (ADS)

The acquisition speed of current FD-OCT (Fourier-domain optical coherence tomography) instruments allows rapid screening of three-dimensional (3D) volumes of human retinas in clinical settings. Taking advantage of this ability requires software used by physicians that is capable of displaying and accessing volumetric data, as well as supporting post-processing to access important quantitative information such as thickness maps and segmented volumes. We describe our clinical FD-OCT system used to acquire 3D data from the human retina over the macula and optic nerve head. B-scans are registered to remove motion artifacts and post-processed with customized 3D visualization and analysis software. Our analysis software includes standard 3D visualization techniques along with a machine learning support vector machine (SVM) algorithm that allows a user to semi-automatically segment different retinal structures and layers. Our program makes possible measurements of retinal layer thickness as well as volumes of structures of interest, despite the presence of noise and structural deformations associated with retinal pathology. Our software has been tested successfully in clinical settings for its efficacy in assessing 3D retinal structures in healthy as well as diseased cases. Our tool facilitates diagnosis and treatment monitoring of retinal diseases.

Zawadzki, Robert J.; Fuller, Alfred R.; Zhao, Mingtao; Wiley, David F.; Choi, Stacey S.; Bower, Bradley A.; Hamann, Bernd; Izatt, Joseph A.; Werner, John S.

2006-03-01

3

Probabilistic intra-retinal layer segmentation in 3-D OCT images using global shape regularization.  

PubMed

With the introduction of spectral-domain optical coherence tomography (OCT), resulting in a significant increase in acquisition speed, the fast and accurate segmentation of 3-D OCT scans has become ever more important. This paper presents a novel probabilistic approach that models the appearance of retinal layers as well as the global shape variations of layer boundaries. Given an OCT scan, the full posterior distribution over segmentations is approximately inferred using a variational method, enabling efficient probabilistic inference in terms of computationally tractable model components: segmenting a full 3-D volume takes around a minute. Accurate segmentations demonstrate the benefit of using global shape regularization: we segmented 35 fovea-centered 3-D volumes with an average unsigned error of 2.46±0.22 μm, as well as 80 normal and 66 glaucomatous 2-D circular scans with errors of 2.92±0.5 μm and 4.09±0.98 μm, respectively. Furthermore, we utilized the inferred posterior distribution to rate the quality of the segmentation, point out potentially erroneous regions, and discriminate normal from pathological scans. No pre- or postprocessing was required and we used the same set of parameters for all data sets, underlining the robustness and out-of-the-box nature of our approach. PMID:24835184

Rathke, Fabian; Schmidt, Stefan; Schnörr, Christoph

2014-07-01

4

A statistical model for 3D segmentation of retinal choroid in optical coherence tomography images  

NASA Astrophysics Data System (ADS)

The choroid is a densely vascularized layer under the retinal pigment epithelium (RPE). Its deeper boundary is formed by the sclera, the outer fibrous shell of the eye. However, the inhomogeneity within the choroidal layers of Optical Coherence Tomography (OCT) tomograms presents a significant challenge to existing segmentation algorithms. In this paper, we performed a statistical study of retinal OCT data to extract the choroid. The model fits a Gaussian mixture model (GMM) to image intensities with the Expectation Maximization (EM) algorithm. The goodness of fit of the proposed GMM, computed with a Chi-square measure, is below 0.04 for our dataset. After fitting the GMM to the OCT data, a Bayesian classification method is employed to segment the upper and lower boundaries of the retinal choroid. Our simulations show signed and unsigned errors of -1.44 +/- 0.5 and 1.6 +/- 0.53 for the upper boundary, and -5.7 +/- 13.76 and 6.3 +/- 13.4 for the lower boundary, respectively.
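
The core of the approach described above, fitting a 1-D Gaussian mixture to OCT intensities with EM and then assigning each intensity by maximum posterior, can be sketched as follows (the quantile initialization and all parameter values are our own choices, not from the paper):

```python
import numpy as np

def fit_gmm_em(x, k=2, iters=100):
    """Fit a 1-D Gaussian mixture to intensity samples x with plain EM."""
    mu = np.quantile(x, (np.arange(k) + 0.5) / k)  # spread initial means over the data
    var = np.full(k, x.var())
    w = np.full(k, 1.0 / k)
    for _ in range(iters):
        # E-step: responsibility of each component for each sample
        dens = w * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means and variances
        n = resp.sum(axis=0)
        w, mu = n / len(x), (resp * x[:, None]).sum(axis=0) / n
        var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / n
    return w, mu, var

def classify(x, w, mu, var):
    """Bayesian (maximum-posterior) assignment of intensities to components."""
    dens = w * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
    return dens.argmax(axis=1)

rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(0.2, 0.05, 500), rng.normal(0.7, 0.05, 500)])
w, mu, var = fit_gmm_em(x)
```

On this synthetic bimodal intensity distribution, the recovered means land near 0.2 and 0.7, and the MAP classifier separates the two populations.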

Ghasemi, F.; Rabbani, H.

2014-03-01

5

Automated foveola localization in retinal 3D-OCT images using structural support vector machine prediction.  

PubMed

We develop an automated method to determine the foveola location in macular 3D-OCT images in either healthy or pathological conditions. Structural Support Vector Machine (S-SVM) is trained to directly predict the location of the foveola, such that the score at the ground truth position is higher than that at any other position by a margin scaling with the associated localization loss. This S-SVM formulation directly minimizes the empirical risk of localization error, and makes efficient use of all available training data. It deals with the localization problem in a more principled way compared to the conventional binary classifier learning that uses zero-one loss and random sampling of negative examples. A total of 170 scans were collected for the experiment. Our method localized 95.1% of testing scans within the anatomical area of the foveola. Our experimental results show that the proposed method can effectively identify the location of the foveola, facilitating diagnosis around this important landmark. PMID:23285565
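
The margin-rescaled constraint the S-SVM above enforces, that the score at the ground-truth position exceeds the score at every other position by a margin scaling with the localization loss, can be illustrated with a 1-D toy (score values and the absolute-distance loss are invented for illustration):

```python
import numpy as np

def margin_violations(scores, y_star):
    """Positions violating the margin-rescaling constraint
        score[y_star] >= score[y] + loss(y, y_star),
    with loss(y, y_star) = |y - y_star| (a toy 1-D localization loss)."""
    ys = np.arange(len(scores))
    loss = np.abs(ys - y_star).astype(float)
    viol = (scores[y_star] < scores + loss) & (ys != y_star)
    return ys[viol]

# A score map peaked at the true position y* = 5 and falling off faster
# than the loss grows: the constraint holds at every other position.
scores = -2.0 * np.abs(np.arange(11) - 5).astype(float)
scores[5] = 1.0
```

Prediction is simply the argmax of the score map; a flat (untrained) score map, by contrast, violates the constraint everywhere, which is what training penalizes.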

Liu, Yu-Ying; Ishikawa, Hiroshi; Chen, Mei; Wollstein, Gadi; Schuman, Joel S; Rehg, James M

2012-01-01

6

3D photoacoustic imaging  

Microsoft Academic Search

Our group has concentrated on development of a 3D photoacoustic imaging system for biomedical imaging research. The technology employs a sparse parallel detection scheme and specialized reconstruction software to obtain 3D optical images using a single laser pulse. With the technology we have been able to capture 3D movies of translating point targets and rotating line targets. The current limitation

Jeffrey J. L. Carson; Michael Roumeliotis; Govind Chaudhary; Robert Z. Stodilka; Mark A. Anastasio

2010-01-01

7

3D Imaging.  

ERIC Educational Resources Information Center

Discusses 3-D imaging as it relates to digital representations in virtual library collections. Highlights include X-ray computed tomography (X-ray CT); the National Science Foundation (NSF) Digital Library Initiatives; output peripherals; image retrieval systems, including metadata; and applications of 3-D imaging for libraries and museums. (LRW)

Hastings, S. K.

2002-01-01

8

Retinal imaging and image analysis.  

PubMed

Many important eye diseases as well as systemic diseases manifest themselves in the retina. While a number of other anatomical structures contribute to the process of vision, this review focuses on retinal imaging and image analysis. Following a brief overview of the most prevalent causes of blindness in the industrialized world that includes age-related macular degeneration, diabetic retinopathy, and glaucoma, the review is devoted to retinal imaging and image analysis methods and their clinical implications. Methods for 2-D fundus imaging and techniques for 3-D optical coherence tomography (OCT) imaging are reviewed. Special attention is given to quantitative techniques for analysis of fundus photographs with a focus on clinically relevant assessment of retinal vasculature, identification of retinal lesions, assessment of optic nerve head (ONH) shape, building retinal atlases, and to automated methods for population screening for retinal diseases. A separate section is devoted to 3-D analysis of OCT images, describing methods for segmentation and analysis of retinal layers, retinal vasculature, and 2-D/3-D detection of symptomatic exudate-associated derangements, as well as to OCT-based analysis of ONH morphology and shape. Throughout the paper, aspects of image acquisition, image analysis, and clinical relevance are treated together considering their mutually interlinked relationships. PMID:22275207

Abràmoff, Michael D; Garvin, Mona K; Sonka, Milan

2010-01-01

9

Retinal Imaging and Image Analysis  

PubMed Central

Many important eye diseases as well as systemic diseases manifest themselves in the retina. While a number of other anatomical structures contribute to the process of vision, this review focuses on retinal imaging and image analysis. Following a brief overview of the most prevalent causes of blindness in the industrialized world that includes age-related macular degeneration, diabetic retinopathy, and glaucoma, the review is devoted to retinal imaging and image analysis methods and their clinical implications. Methods for 2-D fundus imaging and techniques for 3-D optical coherence tomography (OCT) imaging are reviewed. Special attention is given to quantitative techniques for analysis of fundus photographs with a focus on clinically relevant assessment of retinal vasculature, identification of retinal lesions, assessment of optic nerve head (ONH) shape, building retinal atlases, and to automated methods for population screening for retinal diseases. A separate section is devoted to 3-D analysis of OCT images, describing methods for segmentation and analysis of retinal layers, retinal vasculature, and 2-D/3-D detection of symptomatic exudate-associated derangements, as well as to OCT-based analysis of ONH morphology and shape. Throughout the paper, aspects of image acquisition, image analysis, and clinical relevance are treated together considering their mutually interlinked relationships.

Abramoff, Michael D.; Garvin, Mona K.; Sonka, Milan

2011-01-01

10

Adaptive-optics optical coherence tomography for high-resolution and high-speed 3D retinal in vivo imaging  

Microsoft Academic Search

We have combined Fourier-domain optical coherence tomography (FD-OCT) with a closed-loop adaptive optics (AO) system using a Hartmann-Shack wavefront sensor and a bimorph deformable mirror. The adaptive optics system measures and corrects the wavefront aberration of the human eye for improved lateral resolution (~4 μm) of retinal images, while maintaining the high axial resolution (~6 μm) of stand-alone OCT.

Robert J. Zawadzki; Steven M. Jones; Scot S. Olivier; Mingtao Zhao; Bradley A. Bower; Joseph A. Izatt; Stacey Choi; Sophie Laut; John S. Werner

2005-01-01

11

3D-Imaging  

NASA Astrophysics Data System (ADS)

Despite increasing traffic density, the number of road accidents involving personal injury has declined in recent years. To make future vehicles even safer, both for their occupants and for other road users, increasingly three-dimensional sensing of the vehicle's surroundings is becoming necessary. A suitable 3D sensor system is able to recognize dangerous situations in advance, support the driver as effectively as possible, and thus prevent accidents. Even in the case of an accident that can no longer be avoided, the risk of injury for everyone involved can be minimized.

Buxbaum, Bernd; Lange, Robert; Ringbeck, Thorsten

12

Autofocus for 3D imaging  

Microsoft Academic Search

Three dimensional (3D) autofocus remains a significant challenge for the development of practical 3D multipass radar imaging. The current 2D radar autofocus methods are not readily extendable across sensor passes. We propose a general framework that allows a class of data adaptive solutions for 3D auto-focus across passes with minimal constraints on the scene contents. The key enabling assumption is

Forest Lee-Elkin

2008-01-01

13

3-D threat image projection  

NASA Astrophysics Data System (ADS)

Automated Explosive Detection Systems utilizing Computed Tomography perform a series of X-ray scans of passenger bags checked in at the airport, and produce various 2-D projection images and 3-D volumetric images of each bag. The determination as to whether a passenger bag contains an explosive and needs to be searched manually is made by trained Transportation Security Administration screeners following an approved protocol. In order to keep screeners vigilant with regard to screening quality, the Transportation Security Administration has mandated the use of Threat Image Projection on the 2-D projection X-ray screening equipment used at all US airports. These algorithms insert artificial visual threats into images of normal passenger bags in order to test the screeners' efficiency and quality in detecting threats. This technology for 2-D X-ray systems is proven and widespread among multiple manufacturers of X-ray projection systems. Until now, Threat Image Projection has not been introduced successfully into 3-D Automated Explosive Detection Systems, for numerous reasons. These prior attempts failed mainly because of imaging cues that screeners pick up on, making it easy for them to discern the presence of the threat image and thus defeating the intended purpose. This paper presents a novel approach to 3-D Threat Image Projection for 3-D Automated Explosive Detection Systems. The method presented here is a projection-based approach in which both the threat object and the bag remain in projection sinogram space. Novel approaches have been developed for projection-based object segmentation, projection-based streak reduction for threat object isolation with scan-orientation independence, and projection-based streak generation for an overall realistic 3-D image. The algorithms are prototyped in MATLAB and C++ and demonstrate non-discernible 3-D threat image insertion into various luggage, and non-discernible streak patterns in 3-D images when compared with actual scanned images.
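
The sinogram-space insertion described above rests on the linearity of the X-ray projection operator: scanning the bag with the threat in place yields the same sinogram as adding the bag's and the threat's sinograms. A toy check at just two projection angles (row and column sums stand in for a full parallel-beam sinogram; the data are synthetic):

```python
import numpy as np

def projections_0_90(img):
    """Parallel-beam projections at 0 and 90 degrees, i.e. column and row
    sums: two columns of a (very coarse) sinogram."""
    return np.stack([img.sum(axis=0), img.sum(axis=1)])

rng = np.random.default_rng(2)
bag = rng.random((32, 32))          # synthetic bag attenuation map
threat = np.zeros((32, 32))
threat[10:14, 20:24] = 5.0          # synthetic dense threat object

# Linearity: inserting the threat in projection space equals scanning
# the combined bag.
combined = projections_0_90(bag + threat)
inserted = projections_0_90(bag) + projections_0_90(threat)
```

The two results agree exactly, which is the property that lets both the bag and the threat remain in sinogram space during insertion.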

Yildiz, Yesna O.; Abraham, Douglas Q.; Agaian, Sos; Panetta, Karen

2008-03-01

14

A framework for retinal layer intensity analysis for retinal artery occlusion patient based on 3D OCT  

NASA Astrophysics Data System (ADS)

Occlusion of the retinal artery leads to severe ischemia and dysfunction of the retina. Quantitative analysis of reflectivity in the retina is needed for quantitative assessment of the severity of retinal ischemia. In this paper, we propose a framework for retinal layer intensity analysis of retinal artery occlusion (RAO) patients based on 3D OCT images. The proposed framework consists of five main steps. First, a pre-processing step is applied to the input OCT images. Second, a graph search method is applied to segment multiple surfaces in the OCT images. Third, the RAO region is detected with a texture classification method. Fourth, the layer segmentation is refined using the detected RAO regions. Finally, the retinal layer intensity analysis is performed. The proposed method was tested on 27 clinical spectral-domain OCT images. The preliminary results show the feasibility and efficacy of the proposed method.

Liao, Jianping; Chen, Haoyu; Zhou, Chunlei; Chen, Xinjian

2014-03-01

15

Autofocus for 3D imaging  

NASA Astrophysics Data System (ADS)

Three dimensional (3D) autofocus remains a significant challenge for the development of practical 3D multipass radar imaging. The current 2D radar autofocus methods are not readily extendable across sensor passes. We propose a general framework that allows a class of data adaptive solutions for 3D auto-focus across passes with minimal constraints on the scene contents. The key enabling assumption is that portions of the scene are sparse in elevation which reduces the number of free variables and results in a system that is simultaneously solved for scatterer heights and autofocus parameters. The proposed method extends 2-pass interferometric synthetic aperture radar (IFSAR) methods to an arbitrary number of passes allowing the consideration of scattering from multiple height locations. A specific case from the proposed autofocus framework is solved and demonstrates autofocus and coherent multipass 3D estimation across the 8 passes of the "Gotcha Volumetric SAR Data Set" X-Band radar data.

Lee-Elkin, Forest

2008-05-01

16

ATR for 3D medical imaging  

Microsoft Academic Search

This paper presents a novel concept of Automatic Target Recognition (ATR) for 3D medical imaging. Such 3D imaging can be obtained from X-ray Computerized Tomography (CT), Magnetic Resonance Imaging (MRI), Positron Emission Tomography (PET), Ultrasonography (USG), functional MRI, and others. In the case of CT, such 3D imaging can be derived from 3D-mapping of X-ray linear attenuation coefficients, related to

Tomasz Jannson; Andrew Kostrzewski; P. Paki Amouzou

2007-01-01

17

3-D segmentation of retinal blood vessels in spectral-domain OCT volumes of the optic nerve head  

NASA Astrophysics Data System (ADS)

Segmentation of retinal blood vessels can provide important information for detecting and tracking retinal vascular diseases including diabetic retinopathy, arterial hypertension, arteriosclerosis and retinopathy of prematurity (ROP). Many studies on 2-D segmentation of retinal blood vessels from a variety of medical images have been performed. However, 3-D segmentation of retinal blood vessels from spectral-domain optical coherence tomography (OCT) volumes, which is capable of providing geometrically accurate vessel models, to the best of our knowledge, has not been previously studied. The purpose of this study is to develop and evaluate a method that can automatically detect 3-D retinal blood vessels from spectral-domain OCT scans centered on the optic nerve head (ONH). The proposed method utilized a fast multiscale 3-D graph search to segment retinal surfaces as well as a triangular mesh-based 3-D graph search to detect retinal blood vessels. An experiment on 30 ONH-centered OCT scans (15 right eye scans and 15 left eye scans) from 15 subjects was performed, and the mean unsigned error in 3-D of the computer segmentations compared with the independent standard obtained from a retinal specialist was 3.4 +/- 2.5 voxels (0.10 +/- 0.07 mm).

Lee, Kyungmoo; Abràmoff, Michael D.; Niemeijer, Meindert; Garvin, Mona K.; Sonka, Milan

2010-03-01

18

Consistent stylization of stereoscopic 3D images  

Microsoft Academic Search

The application of stylization filters to photographs is common; Instagram is a popular recent example. These image manipulation applications work well for 2D images. However, stereoscopic 3D cameras are increasingly available to consumers (Nintendo 3DS, Fuji W3 3D, HTC Evo 3D). How will users apply these same stylizations to stereoscopic images?

Lesley Northam; Paul Asente; Craig S. Kaplan

2012-01-01

19

Estimation of 3D shape from image orientations  

PubMed Central

One of the main functions of vision is to estimate the 3D shape of objects in our environment. Many different visual cues, such as stereopsis, motion parallax, and shading, are thought to be involved. One important cue that remains poorly understood comes from surface texture markings. When a textured surface is slanted in 3D relative to the observer, the surface patterns appear compressed in the retinal image, providing potentially important information about 3D shape. What is not known, however, is how the brain actually measures this information from the retinal image. Here, we explain how the key information could be extracted by populations of cells tuned to different orientations and spatial frequencies, like those found in the primary visual cortex. To test this theory, we created stimuli that selectively stimulate such cell populations, by “smearing” (filtering) images of 2D random noise into specific oriented patterns. We find that the resulting patterns appear vividly 3D, and that increasing the strength of the orientation signals progressively increases the sense of 3D shape, even though the filtering we apply is physically inconsistent with what would occur with a real object. This finding suggests we have isolated key mechanisms used by the brain to estimate shape from texture. Crucially, we also find that adapting the visual system's orientation detectors to orthogonal patterns causes unoriented random noise to look like a specific 3D shape. Together these findings demonstrate a crucial role of orientation detectors in the perception of 3D shape.
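
The "smearing" stimulus construction described above can be approximated by filtering 2D noise along one orientation; a hedged sketch (the moving-average smear and the gradient-energy anisotropy measure are our own simplifications, not the authors' exact filter):

```python
import numpy as np

def smear(noise, axis, length=15):
    """'Smear' noise along one axis with a 1-D moving average, producing
    oriented structure (axis=1 gives horizontal streaks)."""
    kernel = np.ones(length) / length
    return np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="same"),
                               axis, noise)

def orientation_energy(img):
    """Ratio of vertical to horizontal gradient energy; values near 1 mean
    isotropic texture, large values mean dominant horizontal structure."""
    gy, gx = np.gradient(img)
    return float((gy ** 2).sum() / (gx ** 2).sum())

rng = np.random.default_rng(3)
noise = rng.standard_normal((128, 128))
horizontal = smear(noise, axis=1)   # oriented pattern from unoriented noise
```

The unfiltered noise is isotropic (ratio near 1), while the smeared version carries a strong orientation signal of the kind the stimuli exploit.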

Fleming, Roland W.; Holtmann-Rice, Daniel; Bulthoff, Heinrich H.

2011-01-01

20

Minimizing camera-eye optical aberrations during the 3D reconstruction of retinal structures  

NASA Astrophysics Data System (ADS)

3D reconstruction of blood vessels is a powerful visualization tool for physicians, since it allows them to refer to a qualitative representation of their subject of study. In this paper we propose a 3D reconstruction method for retinal vessels from fundus images. The reconstruction method proposed herein uses images of the same retinal structure in epipolar geometry. Images are preprocessed by the RISA system to segment blood vessels and obtain feature points for correspondence. The correspondence problem is solved using correlation. LMedS analysis and the Graph Transformation Matching algorithm are used for outlier suppression. Camera projection matrices are computed with the normalized eight-point algorithm. Finally, we retrieve the 3D positions of the retinal tree points by linear triangulation. To increase the power of visualization, 3D tree skeletons are represented by surfaces via generalized cylinders whose radii correspond to morphological measurements obtained by RISA. In this paper the complete calibration process, including the fundus camera and the optical properties of the eye (the so-called camera-eye system), is proposed. On one hand, the internal parameters of the fundus camera are obtained by classical algorithms using a reference pattern. On the other hand, we minimize the undesirable effects of the aberrations induced by the eyeball's optical system, assuming that a contact enlarging lens corrects astigmatism, spherical and coma aberrations are reduced by changing the aperture size, and eye refractive errors are suppressed by adjusting camera focus during image acquisition. Evaluations of two self-calibration proposals and results of 3D blood vessel surface reconstruction are presented.
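
The linear triangulation step mentioned above recovers a 3D point from its projections in two views via the standard DLT construction. A self-contained sketch with synthetic cameras (the projection matrices and test point are chosen for illustration only):

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one point from two views.
    P1, P2: 3x4 camera projection matrices; x1, x2: (u, v) image points.
    Returns the 3D point in inhomogeneous coordinates."""
    A = np.stack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)      # null vector of A = homogeneous 3D point
    X = vt[-1]
    return X[:3] / X[3]

# Synthetic check: project a known point with two cameras, then recover it.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])  # translated view
X = np.array([0.3, -0.2, 4.0, 1.0])
x1h, x2h = P1 @ X, P2 @ X
x1, x2 = x1h[:2] / x1h[2], x2h[:2] / x2h[2]
```

With noise-free correspondences the SVD solution recovers the original point exactly; with real, noisy vessel correspondences it gives the least-squares estimate.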

Aldana-Iuit, Javier; Martinez-Perez, M. Elena; Espinosa-Romero, Arturo; Diaz-Uribe, Rufino

2010-04-01

21

3D Cardiac Deformation from Ultrasound Images  

Microsoft Academic Search

The quantitative estimation of regional cardiac deformation from 3D image sequences has important clinical implications for the assessment of viability in the heart wall. Such estimates have so far been obtained almost exclusively from Magnetic Resonance (MR) im- ages, speciflcally MR tagging. In this paper we describe a methodology for estimating cardiac deformations from 3D ultrasound images. The images are

Xenophon Papademetris; Albert J. Sinusas; Donald P. Dione; James S. Duncan

1999-01-01

22

Transplantation of Embryonic and Induced Pluripotent Stem Cell-Derived 3D Retinal Sheets into Retinal Degenerative Mice  

PubMed Central

Summary In this article, we show that mouse embryonic stem cell- or induced pluripotent stem cell-derived 3D retinal tissue developed a structured outer nuclear layer (ONL) with complete inner and outer segments even in an advanced retinal degeneration model (rd1) that lacked ONL. We also observed host-graft synaptic connections by immunohistochemistry. This study provides a “proof of concept” for retinal sheet transplantation therapy for advanced retinal degenerative diseases.

Assawachananont, Juthaporn; Mandai, Michiko; Okamoto, Satoshi; Yamada, Chikako; Eiraku, Mototsugu; Yonemura, Shigenobu; Sasai, Yoshiki; Takahashi, Masayo

2014-01-01

23

ATR for 3D medical imaging  

NASA Astrophysics Data System (ADS)

This paper presents a novel concept of Automatic Target Recognition (ATR) for 3D medical imaging. Such 3D imaging can be obtained from X-ray Computerized Tomography (CT), Magnetic Resonance Imaging (MRI), Positron Emission Tomography (PET), Ultrasonography (USG), functional MRI, and others. In the case of CT, such 3D imaging can be derived from 3D-mapping of X-ray linear attenuation coefficients, related to 3D Fourier transform of Radon transform, starting from frame segmentation (or contour definition) into an object and background. Then, 3D template matching is provided, based on inertial tensor invariants, adopted from rigid body mechanics, by comparing the mammographic data base with a real object of interest, such as a malignant breast tumor. The method is more general than CAD breast mammography.
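
The inertia-tensor invariants mentioned above are useful for 3D template matching precisely because they are unchanged by rotation and translation of the object. A small sketch computing the eigenvalues of a voxel object's inertia tensor (our own minimal formulation, not the authors' implementation):

```python
import numpy as np

def inertia_invariants(volume):
    """Sorted eigenvalues of the voxel-mass inertia tensor about the
    centroid: rotation- and translation-invariant shape descriptors."""
    idx = np.argwhere(volume > 0).astype(float)   # voxel coordinates
    w = volume[volume > 0]                        # voxel "masses"
    c = (idx * w[:, None]).sum(axis=0) / w.sum()  # centroid
    r = idx - c
    # Inertia tensor I = sum_m m * (|r|^2 E - r r^T)
    I = (w[:, None, None] * ((r ** 2).sum(1)[:, None, None] * np.eye(3)
                             - r[:, :, None] * r[:, None, :])).sum(axis=0)
    return np.sort(np.linalg.eigvalsh(I))

vol = np.zeros((20, 20, 20))
vol[5:9, 3:15, 7:10] = 1.0            # synthetic binary object
```

Rotating the volume by 90° leaves the invariants unchanged, so a template and a rotated instance of it produce the same descriptor.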

Jannson, Tomasz; Kostrzewski, Andrew; Paki Amouzou, P.

2007-10-01

24

Compositing 3-D rendered images  

Microsoft Academic Search

The complexity of anti-aliased 3-D rendering systems can be controlled by using a tool-building approach like that of the UNIX™ text-processing tools. Such an approach requires a simple picture representation amenable to anti-aliasing that all rendering programs can produce, a compositing algorithm for that representation and a command language to piece together scenes. This paper advocates a representation that combines

Tom Duff

1985-01-01

25

Image-Based 3D Face Modeling  

Microsoft Academic Search

We present an image-based 3D face modeling algorithm. Unlike the traditional, complex stereo vision procedure, our new method needs only two orthogonal images for fast 3D modeling, without any camera calibration. The proposed method has two steps. First, according to the MPEG-4 protocol for 3D face structure, we appoint and deform feature points by radial basis functions (RBF) in the input

Mandun Zhang; Linna Ma; Xiangyong Zeng; Yangsheng Wang

2004-01-01

26

Hierarchical segmentation of 3-D range images  

Microsoft Academic Search

The authors present a novel approach for segmentation of dense three-dimensional range images. In this approach, four local properties, namely the 3-D coordinate, the surface normal, the Gaussian curvature, and the mean curvature of each data point, are combined in a hierarchical data structure to segment a given 3-D dense range map into surface patches. This algorithm is applicable to

Farshid Arman; Bikash Sabata; J. K. Aggarwal

1989-01-01

27

Hadamard camera for 3D imaging  

NASA Astrophysics Data System (ADS)

The paper at hand describes in detail the work carried out to fuse a commercial micro-mirror sampling element with TOF (time-of-flight) acquisition methods and known Hadamard multiplexing techniques, for the implementation of fast and SNR-optimized 3D image capture. The theoretical basics of the TOF and Hadamard techniques are presented and complemented by a theoretical explanation of their use for 3D volumetric image generation. Finally, measurement results of scene image acquisition are demonstrated and discussed, expanded by considerations of possible applications in THz imaging and the research steps to follow.
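
The Hadamard multiplexing referenced above measures ± combinations of scene elements in each shot and demultiplexes with the transposed matrix, which is where the SNR advantage comes from. In practice optical masks use 0/1 S-matrices rather than ±1 entries, but the principle can be sketched with a Sylvester-constructed Hadamard matrix:

```python
import numpy as np

def hadamard(n):
    """Sylvester construction of an n x n Hadamard matrix (n a power of two)."""
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

rng = np.random.default_rng(4)
scene = rng.random(8)                  # unknown per-element signal
H = hadamard(8)
measurements = H @ scene               # each shot measures a +/- combination
recovered = (H.T @ measurements) / 8   # H is orthogonal: H.T @ H = 8 I
```

Because every element contributes to every measurement, detector noise is averaged down on demultiplexing relative to measuring elements one at a time.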

Romasew, Eugen; Barenz, Joachim; Tholl, Hans Dieter

2007-11-01

28

Retinal imaging in uveitis  

PubMed Central

Ancillary investigations are the backbone of the uveitis workup for posterior segment inflammations. They help in establishing the differential diagnosis and in making a definite diagnosis by ruling out certain pathologies, and they are a useful aid in monitoring response to therapy during follow-up. These investigations include fundus photography (including ultra-wide-field angiography), fundus autofluorescence imaging, fluorescein angiography, optical coherence tomography, and multimodal imaging. This review aims to be an overview describing the role of these retinal investigations in posterior uveitis.

Gupta, Vishali; Al-Dhibi, Hassan A.; Arevalo, J. Fernando

2014-01-01

29

Hyperspectral image coding using 3D transforms  

Microsoft Academic Search

This work considers the efficient coding of hyperspectral images. The shape-adaptive DCT is extended to the three-dimensional case. Both the 3D-SA-DCT and the conventional 3D-DCT are combined with either of two alternative techniques for coding the transform coefficients. The proposed schemes are compared with two state of the art coding algorithms, which serve as benchmarks, and are found to have

Dmitry Markman; David Malah

2001-01-01

30

On Simulating 3D Fluorescent Microscope Images  

Microsoft Academic Search

In recent years many biomedical image segmentation methods have appeared. Though typically presented as successful, the majority of them were not properly tested against ground-truth images. The obvious way of testing the quality of a new segmentation was based on visual inspection by a specialist in the given field. A novel 3D biomedical image data simulator is presented

David Svoboda; Marek Kasík; Martin Maska; Jan Hubený; Stanislav Stejskal; Michal Zimmermann

2007-01-01

31

Acquisition and applications of 3D images  

NASA Astrophysics Data System (ADS)

The moiré fringe method and its analysis, with applications ranging from medicine to entertainment, are discussed in this paper. We describe the procedure of capturing 3D images with an Inspeck camera, a real-time 3D shape acquisition system based on structured-light techniques. The method is a high-resolution one. After processing the images on a computer, we can use the data to create fashionable objects by laser engraving with a Q-switched Nd:YAG. In the medical field we mention plastic surgery and the replacement of X-rays, especially in pediatric use.

Sterian, Paul; Mocanu, Elena

2007-08-01

32

3D Biological Tissue Image Rendering Software  

Cancer.gov

Available for commercial development is software that provides automatic visualization of features inside biological image volumes in 3D. The software provides a simple and interactive visualization for the exploration of biological datasets through dataset-specific transfer functions and direct volume rendering.

33

Tilted planes in 3D image analysis  

NASA Astrophysics Data System (ADS)

Reliable 3D whole-body scanners which output digitized 3D images of a complete human body are now commercially available. This paper describes a software package, called 3DM, being developed by researchers at Clemson University, which manipulates and extracts measurements from such images. The focus of this paper is on tilted planes, a 3DM tool which allows a user to define a plane through a scanned image, tilt it in any direction, and effectively define three disjoint regions on the image: the points on the plane and the points on either side of the plane. With tilted planes, the user can accurately take measurements required in applications such as apparel manufacturing. The user can manually segment the body rather precisely. Tilted planes assist the user in analyzing the form of the body and classifying the body in terms of body shape. Finally, tilted planes allow the user to eliminate extraneous and unwanted points often generated by a 3D scanner. This paper describes the user interface for tilted planes, the equations defining the plane as the user moves it through the scanned image, an overview of the algorithms, and the interaction of the tilted plane feature with other tools in 3DM.
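
The three disjoint regions a tilted plane defines can be computed from the signed distance of each scanned point to the plane; a minimal sketch (the function name, tolerance, and test points are our own, not from 3DM):

```python
import numpy as np

def partition_by_plane(points, normal, offset, tol=1e-9):
    """Split scanned points into (on-plane, positive side, negative side)
    using the signed distance to the plane n . x = offset (n normalized)."""
    n = np.asarray(normal, float)
    n = n / np.linalg.norm(n)
    d = points @ n - offset           # signed distance of every point
    return points[np.abs(d) <= tol], points[d > tol], points[d < -tol]

pts = np.array([[0.0, 0.0,  0.0],
                [0.0, 0.0,  1.0],
                [0.0, 0.0, -2.0],
                [1.0, 1.0,  0.0]])
on, above, below = partition_by_plane(pts, normal=[0, 0, 1], offset=0.0)
```

Tilting the plane just means changing `normal` and `offset`; the same signed-distance test then segments the scan, takes cross-section measurements, or discards points on one side.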

Pargas, Roy P.; Staples, Nancy J.; Malloy, Brian F.; Cantrell, Ken; Chhatriwala, Murtuza

1998-03-01

34

Texture anisotropy in 3-D images  

Microsoft Academic Search

Two approaches to the characterization of three-dimensional (3-D) textures are presented: one based on gradient vectors and one on generalized co-occurrence matrices. They are investigated with the help of simulated data for their behavior in the presence of noise and for various values of the parameters they depend on. They are also applied to several medical volume images characterized by
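The gradient-vector approach to 3-D texture anisotropy can be illustrated with a structure-tensor sketch: average the outer products of the per-voxel gradients and compare the eigenvalues of the resulting 3x3 tensor. This is an illustrative stand-in, not the authors' exact measure (their paper also uses generalized co-occurrence matrices):

```python
import numpy as np

def gradient_anisotropy(vol):
    """Anisotropy of a 3-D texture from the structure tensor of its gradients:
    1 - (smallest/largest eigenvalue); near 0 for isotropic texture."""
    g0, g1, g2 = np.gradient(vol.astype(float))       # gradients along the 3 axes
    g = np.stack([g0.ravel(), g1.ravel(), g2.ravel()])
    tensor = g @ g.T / g.shape[1]                     # averaged 3x3 outer product
    w = np.linalg.eigvalsh(tensor)                    # ascending eigenvalues
    return 1.0 - w[0] / w[-1]

rng = np.random.default_rng(1)
iso = rng.normal(size=(20, 20, 20))                            # isotropic noise
layered = np.tile(rng.normal(size=(20, 1, 1)), (1, 20, 20))    # varies along one axis
a_iso = gradient_anisotropy(iso)
a_layered = gradient_anisotropy(layered)
```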

Vassili A. Kovalev; Maria Petrou; Yaroslav S. Bondar

1999-01-01

35

Imaging chemical reactions - 3D velocity mapping  

NASA Astrophysics Data System (ADS)

Visualising a collision between an atom or a molecule or a photodissociation (half-collision) of a molecule on a single particle and single quantum level is like watching the collision of billiard balls on a pool table: Molecular beams or monoenergetic photodissociation products provide the colliding reactants at controlled velocity before the reaction products velocity is imaged directly with an elaborate camera system, where one should keep in mind that velocity is, in general, a three-dimensional (3D) vectorial property which combines scattering angles and speed. If the processes under study have no cylindrical symmetry, then only this 3D product velocity vector contains the full information of the elementary process under study.

Chichinin, A. I.; Gericke, K.-H.; Kauczok, S.; Maul, C.

36

Pattern based 3D image Steganography  

NASA Astrophysics Data System (ADS)

This paper proposes a new high-capacity steganographic scheme using 3D geometric models. The novel algorithm re-triangulates part of a triangle mesh and embeds the secret information at the newly added vertex positions. Up to nine bits of secret data can be embedded into the vertices of a triangle without causing any perceptible change in the visual quality or the geometric properties of the cover model. Experimental results show that the proposed algorithm is secure, with high capacity and a low distortion rate. The algorithm also resists uniform affine transformations such as cropping, rotation, and scaling. Its performance is compared with other existing 3D steganography algorithms.
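The core embedding idea, hiding bits in vertex positions below the visual and geometric noise floor, can be illustrated with a much simpler parity-of-quantized-coordinate scheme. This is not the paper's re-triangulation method, and `STEP` is an assumed quantization step controlling the capacity/distortion trade-off:

```python
STEP = 1e-4  # assumed quantization step; distortion stays below 2 * STEP

def embed_bit(coord, bit, step=STEP):
    """Embed one bit in the parity of a quantized vertex coordinate."""
    q = int(round(coord / step))
    if q % 2 != bit:          # nudge to the required parity
        q += 1
    return q * step

def extract_bit(coord, step=STEP):
    """Recover the embedded bit from the coordinate's quantized parity."""
    return int(round(coord / step)) % 2

v = 0.123456                  # one coordinate of a cover-mesh vertex
stego0 = embed_bit(v, 0)
stego1 = embed_bit(v, 1)
```

A real mesh scheme would apply this per coordinate of selected vertices; robustness to rotation/scaling (as claimed by the paper) requires embedding in transformation-invariant quantities instead of raw coordinates.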

Thiyagarajan, P.; Natarajan, V.; Aghila, G.; Prasanna Venkatesan, V.; Anitha, R.

2013-03-01

37

Two-photon in vivo imaging of retinal microstructures  

NASA Astrophysics Data System (ADS)

Non-invasive fluorescence retinal imaging in small animals is an important requirement in an array of translational vision applications. Two-photon imaging has the potential for long-term investigation of healthy and diseased retinal function and structure in vivo. Here, we demonstrate that two-photon microscopy through a mouse's pupil can yield high-quality optically sectioned fundus images. By remotely scanning using an electronically tunable lens we acquire highly-resolved 3D fluorescein angiograms. These results provide an important step towards various applications that will benefit from the use of infrared light, including functional imaging of retinal responses to light stimulation.

Schejter, Adi; Farah, Nairouz; Shoham, Shy

2014-02-01

38

3D GPR Imaging of Wooden Logs  

NASA Astrophysics Data System (ADS)

There has been a lack of an effective NDE technique to locate internal defects within wooden logs. The few available elastic wave propagation based techniques are limited to predicting E values. Other techniques such as X-rays have not been very successful in detecting internal defects in logs. If defects such as embedded metals could be identified before the sawing process, sawmills could significantly increase their production by reducing the probability of damage to the saw blade and the associated downtime and repair cost. Also, if internal defects such as knots and decayed areas could be identified in logs, the saw blade could be oriented to exclude the defective portion and optimize the volume of high-valued lumber that can be obtained from the logs. In this research, GPR has been successfully used to locate internal defects (knots, decays and embedded metals) within logs. This paper discusses GPR imaging and mapping of the internal defects using both 2D and 3D interpretation methodology. Metal pieces were inserted in a log and the reflection patterns from these metals were interpreted from the radargrams acquired using a 900 MHz antenna. Also, GPR was able to accurately identify the location of knots and decays. Scans from several orientations of the log were collected to generate a 3D cylindrical volume. The actual location of the defects showed good correlation with the interpreted defects in the 3D volume. The time/depth slices from the 3D cylindrical volume data were useful in understanding the extent of defects inside the log.

Halabe, Udaya B.; Pyakurel, Sandeep

2007-03-01

39

3-D face structure extraction and recognition from images using 3-D morphing and distance mapping  

Microsoft Academic Search

We describe a novel approach for creating a three-dimensional (3-D) face structure from multiple image views of a human face taken at a priori unknown poses by appropriately morphing a generic 3-D face. A cubic explicit polynomial in 3-D is used to morph a generic face into the specific face structure. The 3-D face structure allows for accurate pose estimation

Chongzhen Zhang; Fernand S. Cohen

2002-01-01

40

Ames Lab 101: Real-Time 3D Imaging  

ScienceCinema

Ames Laboratory scientist Song Zhang explains his real-time 3-D imaging technology. The technique can be used to create high-resolution, real-time, precise, 3-D images for use in healthcare, security, and entertainment applications.

41

Ames Lab 101: Real-Time 3D Imaging  

SciTech Connect

Ames Laboratory scientist Song Zhang explains his real-time 3-D imaging technology. The technique can be used to create high-resolution, real-time, precise, 3-D images for use in healthcare, security, and entertainment applications.

Zhang, Song

2010-01-01

42

Ames Lab 101: Real-Time 3D Imaging  

ScienceCinema

Ames Laboratory scientist Song Zhang explains his real-time 3-D imaging technology. The technique can be used to create high-resolution, real-time, precise, 3-D images for use in healthcare, security, and entertainment applications.

Zhang, Song

2012-08-29

43

Filling in the retinal image  

NASA Technical Reports Server (NTRS)

The optics of the eye form an image on a surface at the back of the eyeball called the retina. The retina contains the photoreceptors that sample the image and convert it into a neural signal. The spacing of the photoreceptors in the retina is not uniform and varies with retinal locus. The central retinal field, called the macula, is densely packed with photoreceptors. The packing density falls off rapidly as a function of retinal eccentricity with respect to the macular region, and there are regions in which there are no photoreceptors at all. The retinal regions without photoreceptors are called blind spots or scotomas. The neural transformations which convert retinal image signals into percepts fill in the gaps and regularize the inhomogeneities of the retinal photoreceptor sampling mosaic. The filling-in mechanism plays an important role in understanding visual performance, yet it is not well understood. A systematic collaborative research program at the Ames Research Center and SRI in Menlo Park, California, was designed to explore this mechanism. It was shown that the perceived fields, which due to filling-in differ from the image on the retina, control some aspects of performance and not others. Researchers have linked these mechanisms to putative mechanisms of color coding and color constancy.

Larimer, James; Piantanida, Thomas

1990-01-01

44

Retinal Image Quality During Accommodation  

PubMed Central

Purpose We asked if retinal image quality is maximum during accommodation, or sub-optimal due to accommodative error, when subjects perform an acuity task. Methods Subjects viewed a monochromatic (552nm), high-contrast letter target placed at various viewing distances. Wavefront aberrations of the accommodating eye were measured near the endpoint of an acuity staircase paradigm. Refractive state, defined as the optimum target vergence for maximising retinal image quality, was computed by through-focus wavefront analysis to find the power of the virtual correcting lens that maximizes visual Strehl ratio. Results Despite changes in ocular aberrations and pupil size during binocular viewing, retinal image quality and visual acuity typically remain high for all target vergences. When accommodative errors lead to sub-optimal retinal image quality, acuity and measured image quality both decline. However, the effect of accommodative errors on visual acuity is mitigated by the pupillary constriction associated with accommodation and binocular convergence, and by binocular summation of dissimilar retinal image blur. Under monocular viewing conditions some subjects displayed significant accommodative lag that reduced visual performance, an effect that was exacerbated by pharmacological dilation of the pupil. Conclusions Spurious measurement of accommodative error can be avoided when the image quality metric used to determine refractive state is compatible with the focusing criteria used by the visual system to control accommodation. Real focusing errors of the accommodating eye do not necessarily produce a reliably measurable loss of image quality or clinically significant loss of visual performance, probably because of increased depth-of-focus due to pupil constriction. When retinal image quality is close to the maximum achievable (given the eye's higher-order aberrations), acuity is also near maximum. A combination of accommodative lag, reduced image quality, and reduced visual function may be a useful sign for diagnosing functionally significant accommodative errors indicating the need for therapeutic intervention.

Lopez-Gil, N.; Martin, J.; Liu, T.; Bradley, A.; Diaz-Munoz, D.; Thibos, L.

2013-01-01

45

3-D Volume Imaging for Dentistry: A New Dimension  

Microsoft Academic Search

The use of computed tomography for dental imaging procedures has increased recently. Use of CT for even seemingly routine diagnosis and treatment procedures suggests that the desire for 3-D imaging is more than a current trend, but rather a shift toward a future of dimensional volume imaging. Recognizing this shift, several imaging manufacturers recently have developed 3-D imaging devices

Robert A. Danforth; Ivan Dus; James Mah

2003-01-01

46

A 3D parylene scaffold cage for culturing retinal pigment epithelial cells  

Microsoft Academic Search

This work reports a 3D parylene scaffold cage for culturing stem-cell-differentiated retinal pigment epithelial (RPE) cells for the therapy of age-related macular degeneration, which is caused first by degraded permeability of Bruch's membrane and then by the loss of RPE cells. The reported cage can support sufficient nutrient permeation to nourish the RPE cells inside, with in vivo-like morphology.

Bo Lu; Danhong Zhu; David Hinton; Mark S. Humayun; Yu-Chong Tai

2012-01-01

47

Use of a transputer system for fast 3-D image reconstruction in 3-D PET  

Microsoft Academic Search

A transputer system was used for fast 3-D image reconstruction in 3-D PET (positron emission tomography). The optimization of the algorithm (written in the Occam language) and its parallelization on the transputer system for use in the HISPET (high spatial resolution PET) project are discussed. It is projected that a 100-transputer machine would suffice to process online data at

S. Barresi; D. Bollini; A. Del Guerra

1990-01-01

48

Image performance evaluation of a 3D surgical imaging platform  

Microsoft Academic Search

The O-arm (Medtronic Inc.) is a multi-dimensional surgical imaging platform. The purpose of this study was to perform a quantitative evaluation of the imaging performance of the O-arm in an effort to understand its potential for future nonorthopedic applications. Performance of the reconstructed 3D images was evaluated, using a custom-built phantom, in terms of resolution, linearity, uniformity and geometrical accuracy.

Ivailo E. Petrov; Hristo N. Nikolov; David W. Holdsworth; Maria Drangova

2011-01-01

49

Interactive display and analysis of 3-D medical images  

Microsoft Academic Search

The ANALYZE software system, which permits detailed investigation and evaluation of 3-D biomedical images, is discussed. ANALYZE can be used with 3-D imaging modalities based on X-ray computed tomography, radionuclide emission tomography, ultrasound tomography, and magnetic resonance imaging. The package is unique in its synergistic integration of fully interactive modules for direct display, manipulation, and measurement of multidimensional image data.

R. A. Robb; C. Barillot

1989-01-01

50

3-D Imaging of Partly Concealed Targets by Laser Radar.  

National Technical Information Service (NTIS)

Imaging laser radar can provide the capability of high-resolution 3-D imaging at long ranges. In contrast to conventional passive imaging systems, such as CCD and infrared (IR) techniques, laser radar provides both intensity and range information which a...

D. Letalick; H. Larsson; T. Chevalier

2005-01-01

51

Infrastructure for 3D Imaging Test Bed.  

National Technical Information Service (NTIS)

In this report, we describe an experimental test bed we constructed with a variety of sensor modalities for data generation and model validation. Computer generated 3D target models are now routinely used in graphics and computer aided design, and increas...

H. Krim

2007-01-01

52

Dynamic contrast-enhanced 3D photoacoustic imaging  

NASA Astrophysics Data System (ADS)

Photoacoustic imaging (PAI) is a hybrid imaging modality that integrates the strengths from both optical imaging and acoustic imaging while simultaneously overcoming many of their respective weaknesses. In previous work, we reported on a real-time 3D PAI system composed of a 32-element hemispherical array of transducers. Using the system, we demonstrated the ability to capture photoacoustic data, reconstruct a 3D photoacoustic image, and display select slices of the 3D image every 1.4 s, where each 3D image resulted from a single laser pulse. The present study aimed to exploit the rapid imaging speed of an upgraded 3D PAI system by evaluating its ability to perform dynamic contrast-enhanced imaging. The contrast dynamics can provide rich datasets that contain insight into perfusion, pharmacokinetics and physiology. We captured a series of 3D PA images of a flow phantom before and during injection of piglet and rabbit blood. Principal component analysis was utilized to classify the data according to its spatiotemporal information. The results suggested that this technique can be used to separate a sequence of 3D PA images into a series of images representative of main features according to spatiotemporal flow dynamics.

Wong, Philip; Kosik, Ivan; Carson, Jeffrey J. L.

2013-03-01

53

3D passive integral imaging using compressive sensing.  

PubMed

Passive 3D sensing using integral imaging techniques has been well studied in the literature. It has been shown that a scene can be reconstructed at various depths using several 2D elemental images. This provides the ability to reconstruct objects in the presence of occlusions, and passively estimate their 3D profile. However, high resolution 2D elemental images are required for high quality 3D reconstruction. Compressive Sensing (CS) provides a way to dramatically reduce the amount of data that needs to be collected to form the elemental images, which in turn can reduce the storage and bandwidth requirements. In this paper, we explore the effects of CS in acquisition of the elemental images, and ultimately on passive 3D scene reconstruction and object recognition. Our experiments show that the performance of passive 3D sensing systems remains robust even when elemental images are recovered from very few compressive measurements. PMID:23187517
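The depth-wise reconstruction from 2D elemental images that this work builds on can be sketched for a one-dimensional camera row: back-shift each elemental image by its parallax at the target depth and average, so points at that depth align while others blur. The geometry (pitch, focal length, camera count) is hypothetical:

```python
import numpy as np

def reconstruct_at_depth(elementals, pitch, f, depth):
    """Computational integral-imaging reconstruction for a 1-D camera row:
    back-shift each elemental image by its parallax at `depth`, then average."""
    n, H, W = elementals.shape
    out = np.zeros((H, W))
    for k in range(n):
        shift = int(round(k * pitch * f / depth))   # pixel parallax of camera k
        out += np.roll(elementals[k], -shift, axis=1)
    return out / n

# A single scene feature at depth 5 produces shifted copies in each camera.
pitch, f, depth_true = 2.0, 10.0, 5.0
base = np.zeros((8, 32)); base[:, 10] = 1.0
elementals = np.stack([np.roll(base, int(round(k * pitch * f / depth_true)), axis=1)
                       for k in range(4)])
recon_good = reconstruct_at_depth(elementals, pitch, f, depth_true)
recon_bad = reconstruct_at_depth(elementals, pitch, f, 2 * depth_true)
```

Reconstructing at the true depth realigns all copies (a sharp peak); a wrong depth smears the feature, which is how occluded objects are brought into focus at their own depth plane.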

Cho, Myungjin; Mahalanobis, Abhijit; Javidi, Bahram

2012-11-19

54

3D Imaging with Holographic Tomography  

NASA Astrophysics Data System (ADS)

There are two main types of tomography that enable the 3D internal structure of an object to be reconstructed from scattered data. The commonly known computed tomography (CT) gives good results in the x-ray wavelength range, where the filtered back-projection theorem and Radon transform can be used. These techniques rely on the Fourier projection-slice theorem, where rays are considered to propagate straight through the object. Another type of tomography, called 'diffraction tomography', applies in optics and acoustics, where diffraction and scattering effects must be taken into account. The latter proves to be a more difficult problem, as light no longer travels straight through the sample. Holographic tomography is a popular way of performing diffraction tomography, and there has recently been active experimental research on reconstructing complex refractive index data using this approach. However, there are two distinct ways of doing tomography: either by rotation of the object or by rotation of the illumination while fixing the detector. The difference between these two setups is intuitive but needs to be quantified. From a Fourier-optics and information-transformation point of view, we use 3D transfer function analysis to describe quantitatively how spatial frequencies of the object are mapped to the Fourier domain. We first employ a paraxial treatment by calculating the Fourier transform of the defocused OTF. The 3D CTF calculated for tomography by scanning the illumination in one direction only takes on a form that we might call a 'peanut', compared to the case of object rotation, where a diabolo is formed; the peanut exhibits significant differences and non-isotropy. In particular, there is a line singularity along one transverse direction. Under high numerical aperture conditions the paraxial treatment is not accurate, and so we make use of 3D analytical geometry to calculate the behaviour in the non-paraxial case. This time, we obtain a similar peanut, but without the line singularity.

Sheppard, Colin J. R.; Kou, Shan Shan

2010-04-01

55

Developing 3-D Imaging Mass Spectrometry  

Microsoft Academic Search

Using Photoshop, the downloaded images are converted to a series of model sections by color coding the section periphery and the corpus callosum of each image blue and red, respectively. The colored regions are extracted from the original image and printed at a 1:1 scale on paper. A digital camera is used to record an optical image from each of the

Anna C. Crecelius; D. Shannon Cornett; Betsy Williams; Bobby Bodenheimer; Benoit Dawant; Richard M. Caprioli

2003-01-01

56

Automated multilayer segmentation and characterization in 3D spectral-domain optical coherence tomography images  

NASA Astrophysics Data System (ADS)

Spectral-domain optical coherence tomography (SD-OCT) is a 3-D imaging technique, allowing direct visualization of retinal morphology and architecture. The various layers of the retina may be affected differentially by various diseases. In this study, an automated graph-based multilayer approach was developed to sequentially segment eleven retinal surfaces including the inner retinal bands to the outer retinal bands in normal SD-OCT volume scans at three different stages. For stage 1, the four most detectable and/or distinct surfaces were identified in the four-times-downsampled images and were used as a priori positional information to limit the graph search for other surfaces at stage 2. Eleven surfaces were then detected in the two-times-downsampled images at stage 2, and refined in the original image space at stage 3 using the graph search integrating the estimated morphological shape models. Twenty macular SD-OCT (Heidelberg Spectralis) volume scans from 20 normal subjects (one eye per subject) were used in this study. The overall mean and absolute mean differences in border positions between the automated and manual segmentation for all 11 segmented surfaces were -0.20 ± 0.53 voxels (-0.76 ± 2.06 µm) and 0.82 ± 0.64 voxels (3.19 ± 2.46 µm). Intensity and thickness properties in the resultant retinal layers were investigated. This investigation in normal subjects may provide a comparative reference for subsequent investigations in eyes with disease.
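The graph search over surface positions can be illustrated in a single B-scan with a dynamic program: choose one depth per column that minimizes total cost under a smoothness constraint. This 2-D sketch omits the paper's 3-D graph construction, multi-resolution stages, and shape priors:

```python
import numpy as np

def dp_surface(cost, max_jump=1):
    """Minimal-cost surface (one depth per column) via dynamic programming.

    cost: 2D array (depth x columns); adjacent columns may differ by at most
    `max_jump` pixels in depth (the smoothness constraint)."""
    depth, cols = cost.shape
    acc = cost.copy()
    back = np.zeros((depth, cols), dtype=int)
    for c in range(1, cols):
        for z in range(depth):
            lo, hi = max(0, z - max_jump), min(depth, z + max_jump + 1)
            k = int(np.argmin(acc[lo:hi, c - 1]))
            acc[z, c] += acc[lo + k, c - 1]
            back[z, c] = lo + k
    # Trace the optimal path back from the best depth in the last column.
    z = int(np.argmin(acc[:, -1]))
    path = [z]
    for c in range(cols - 1, 0, -1):
        z = int(back[z, c])
        path.append(z)
    return path[::-1]

# Synthetic B-scan: a dark band (low cost) along a known surface.
cost = np.ones((20, 30))
true_z = [5 + (c // 10) for c in range(30)]     # gentle step profile
for c, z in enumerate(true_z):
    cost[z, c] = 0.0
surface = dp_surface(cost)
```

In practice the cost image comes from gradient or intensity features, and already-found surfaces constrain the search range for the next one, as in the staged approach above.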

Hu, Zhihong; Wu, Xiaodong; Hariri, Amirhossein; Sadda, SriniVas R.

2013-03-01

57

Sparse Signal Methods for 3-D Radar Imaging  

Microsoft Academic Search

Synthetic aperture radar (SAR) imaging is a valuable tool in a number of defense surveillance and monitoring applica- tions. There is increasing interest in 3-D reconstruction of objects from radar measurements. Traditional 3-D SAR image formation requires data collection over a densely sampled azimuth-elevation sector. In practice, such a dense measurement set is difficult or impossible to obtain, and effective

Christian D. Austin; Emre Ertin; Randolph L. Moses

2011-01-01

58

An efficient parallel architecture for 3D PET image reconstruction  

Microsoft Academic Search

Positron emission tomography (PET) is a functional imaging modality which provides information on in vivo physiological and metabolic functions in the human body. However, the long reconstruction time has prevented 3D PET from being practical. To make 3D PET feasible in a medical environment, in this paper, we propose an efficient parallel architecture for PET image reconstruction. The proposed parallel

Chung-Ming Chen; Cheng-Yi Wang

1996-01-01

59

AOTF-based 3D spectral imaging system  

NASA Astrophysics Data System (ADS)

The problem of 3D spectral imaging with random spectral access is discussed. The proposed solution is based on a dual-channel double acousto-optical (AO) monochromator. Each of its two AO cells has two spatially separated entrance pupils for transmission of stereoscopic images. In such a scheme no spectral drift of the image appears, while spectral and spatial distortions are minimal. A 3D spectral imaging system based on this monochromator and an Abbe stereomicroscope is described. Possible applications of the proposed AOTF-based 3D imaging spectrometer are discussed.

Pozhar, Vitold; Machihin, Alexander

2012-05-01

60

Reconstruction of Realistic 3D Surface Model and 3D Animation from Range Images Obtained by Real Time 3D Measurement System  

Microsoft Academic Search

We have developed a new type of 3D measurement system which enabled one to obtain successive 3D range data at video rate with an error within ±0.3%. In this paper, we reconstruct realistic colored 3D surface model and 3D animation of the moving target from range images obtained by our 3D measurement system. We synthesize video images with the range

Takeo Miyasaka; Kazuhiro Kuroda; Makoto Hirose; Kazuo Araki

2000-01-01

61

Multifocus synthesis and its application to 3D image capturing  

NASA Astrophysics Data System (ADS)

A new technique for high-resolution image synthesis, called multifocus synthesis, is presented. As an important extension, 3-D real-world image capture is discussed. In this approach, the object image is taken at a number of focal distances by a single camera placed at a fixed position. Each of these images is then converted into a multiresolution representation using optimized quadrature mirror filters (QMFs). The resultant volume of coefficients is then analyzed and 3-D distance information is computed.
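The focal-stack analysis that recovers 3-D distance can be sketched with a plain depth-from-focus baseline: compute a per-pixel focus measure in each slice and take the argmax. This is a stand-in for the paper's QMF multiresolution analysis, not a reproduction of it:

```python
import numpy as np

def focus_measure(img):
    """Squared discrete Laplacian -- large where the image is locally sharp."""
    lap = (-4.0 * img
           + np.roll(img, 1, 0) + np.roll(img, -1, 0)
           + np.roll(img, 1, 1) + np.roll(img, -1, 1))
    return lap ** 2

def depth_from_focus(stack):
    """stack: (n_focus, H, W) images of one scene at different focal distances.
    Returns, per pixel, the index of the slice with the highest focus measure."""
    return np.argmax(np.stack([focus_measure(s) for s in stack]), axis=0)

# Toy focal stack: slice 1 is in focus (textured), slice 0 is defocused (flat).
y, x = np.mgrid[0:16, 0:16]
sharp = ((x + y) % 2).astype(float)     # checkerboard texture
flat = np.full((16, 16), 0.5)           # texture blurred away
depth_map = depth_from_focus(np.stack([flat, sharp]))
```

Mapping slice indices back through the lens equation converts the index map into metric 3-D distance.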

Yamaguchi, Hirohisa

1993-10-01

62

3D Cell Culture Imaging with Digital Holographic Microscopy  

NASA Astrophysics Data System (ADS)

Cells in higher organisms naturally exist in a three dimensional (3D) structure, a fact sometimes ignored by in vitro biological research. Confinement to a two dimensional culture imposes significant deviations from the native 3D state. One of the biggest obstacles to wider use of 3D cultures is the difficulty of 3D imaging. The confocal microscope, the dominant 3D imaging instrument, is expensive, bulky, and light-intensive; live cells can be observed for only a short time before they suffer photodamage. We present an alternative 3D imaging technique, digital holographic microscopy, which can capture 3D information with axial resolution better than 2 µm in a 100 µm deep volume. Capturing a 3D image requires only a single camera exposure with a sub-millisecond laser pulse, allowing us to image cell cultures using five orders of magnitude less light energy than with confocal microscopy. This can be done with hardware costing ~$1000. We use the instrument to image growth of MCF7 breast cancer cells and P. pastoris yeast.

Dimiduk, Thomas; Nyberg, Kendra; Almeda, Dariela; Koshelva, Ekaterina; McGorty, Ryan; Kaz, David; Gardel, Emily; Auguste, Debra; Manoharan, Vinothan

2011-03-01

63

Survey of Retinal Image Segmentation and Registration  

Microsoft Academic Search

Diagnosis and treatment of several disorders affecting the retina and the choroid behind it require capturing a sequence of fundus images using the fundus camera. These images are to be processed for better diagnosis and planning of treatment. Retinal image segmentation is greatly required to extract certain features that may help in diagnosis and treatment. Also registration of retinal images

Mai S. Mabrouk; Nahed H. Solouma; Yasser M. Kadah

2006-01-01

64

3D beam reconstruction by fluorescence imaging.  

PubMed

We present a technique for mapping the complete 3D spatial intensity profile of a laser beam from its fluorescence in an atomic vapour. We propagate shaped light through a rubidium vapour cell and record the resonant scattering from the side. From a single measurement we obtain a camera-limited resolution of 200 × 200 transverse points and 659 longitudinal points. In contrast to invasive methods in which the camera is placed in the beam path, our method is capable of measuring patterns formed by counterpropagating laser beams. It has high resolution in all three dimensions, is fast, and can be completely automated. The technique has applications in areas which require complex beam shapes, such as optical tweezers, atom trapping and pattern formation. PMID:24104113

Radwell, N; Boukhet, M A; Franke-Arnold, S

2013-09-23

65

3D scene reconstruction from multi-aperture images  

NASA Astrophysics Data System (ADS)

With the development of virtual reality, there is a growing demand for 3D modeling of real scenes. This paper proposes a novel 3D scene reconstruction framework based on multi-aperture images. Our framework consists of four parts. First, images with different apertures are captured via a programmable aperture. Second, we use the SIFT method for feature point matching. Then we exploit binocular stereo vision to calculate camera parameters and the 3D positions of matching points, forming a sparse 3D scene model. Finally, we apply patch-based multi-view stereo to obtain a dense 3D scene model. Experimental results show that our method is practical and effective for reconstructing dense 3D scene models.
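The step that turns matched feature points from two views into 3D positions is classically done by linear (DLT) triangulation. A self-contained sketch with a hypothetical camera pair (the paper does not specify its triangulation variant):

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one point from two views.
    P1, P2: 3x4 projection matrices; x1, x2: pixel coordinates (u, v)."""
    A = np.stack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                      # homogeneous point in A's null space
    return X[:3] / X[3]

# Two axis-aligned cameras with a 1-unit baseline (hypothetical setup).
K = np.array([[500.0, 0.0, 320.0], [0.0, 500.0, 240.0], [0.0, 0.0, 1.0]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.2, -0.1, 4.0])
x1 = P1 @ np.append(X_true, 1.0); x1 = x1[:2] / x1[2]
x2 = P2 @ np.append(X_true, 1.0); x2 = x2[:2] / x2[2]
X_hat = triangulate(P1, P2, x1, x2)
```

Triangulating every SIFT match this way yields the sparse model that the dense multi-view stereo stage then refines.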

Mao, Miao; Qin, Kaihuai

2014-04-01

66

Image performance evaluation of a 3D surgical imaging platform  

NASA Astrophysics Data System (ADS)

The O-arm (Medtronic Inc.) is a multi-dimensional surgical imaging platform. The purpose of this study was to perform a quantitative evaluation of the imaging performance of the O-arm in an effort to understand its potential for future nonorthopedic applications. Performance of the reconstructed 3D images was evaluated, using a custom-built phantom, in terms of resolution, linearity, uniformity and geometrical accuracy. Both the standard (SD, 13 s) and high definition (HD, 26 s) modes were evaluated, with the imaging parameters set to image the head (120 kVp, 100 mAs and 150 mAs, respectively). For quantitative noise characterization, the images were converted to Hounsfield units (HU) off-line. Measurement of the modulation transfer function revealed a limiting resolution (at 10% level) of 1.0 mm-1 in the axial dimension. Image noise varied between 15 and 19 HU for the HD and SD modes, respectively. Image intensities varied linearly over the measured range, up to 1300 HU. Geometric accuracy was maintained in all three dimensions over the field of view. The present study has evaluated the performance characteristics of the O-arm, and demonstrates feasibility for use in interventional applications and quantitative imaging tasks outside those currently targeted by the manufacturer. Further improvements to the reconstruction algorithms may further enhance performance for lower-contrast applications.
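The limiting resolution quoted above is the frequency at which the measured MTF falls to the 10% level; given sampled MTF data it can be located by linear interpolation. The Gaussian MTF below is synthetic, chosen only so that the 10% point lands near the abstract's 1.0 cycles/mm:

```python
import numpy as np

def limiting_resolution(freqs, mtf, level=0.10):
    """First frequency at which a decreasing MTF falls to `level`,
    by linear interpolation between samples. Returns None if never reached."""
    below = np.where(mtf <= level)[0]
    if len(below) == 0:
        return None
    i = below[0]
    if i == 0:
        return float(freqs[0])
    f0, f1 = freqs[i - 1], freqs[i]
    m0, m1 = mtf[i - 1], mtf[i]
    return float(f0 + (m0 - level) * (f1 - f0) / (m0 - m1))

# Synthetic Gaussian-shaped MTF sampled on a frequency grid (cycles/mm).
freqs = np.linspace(0.0, 2.0, 201)
mtf = np.exp(-(freqs / 0.66) ** 2)
f10 = limiting_resolution(freqs, mtf)   # close to 1.0 cycles/mm by construction
```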

Petrov, Ivailo E.; Nikolov, Hristo N.; Holdsworth, David W.; Drangova, Maria

2011-03-01

67

3D Model Acquisition from Extended Image Sequences  

Microsoft Academic Search

This paper describes the extraction of 3D geometrical data from image sequences, for the purpose of creating 3D models of objects in the world. The approach is uncalibrated - camera internal parameters and camera motion are not known or required. Processing an image sequence is underpinned by token correspondences between images. We utilise matching techniques which are both robust (detecting and discarding mismatches) and fully

Paul A. Beardsley; Philip H. S. Torr; Andrew Zisserman

1996-01-01

68

Extraction of 3-D information from sonar image sequences  

Microsoft Academic Search

This paper describes a set of methods that make it possible to estimate the position of a feature inside a three-dimensional (3D) space by starting from a sequence of two-dimensional (2D) acoustic images of the seafloor acquired with a sonar system. Typical sonar imaging systems are able to generate just 2D images, and the acquisition of 3D information involves sharp

Andrea Trucco; Simone Curletto

2003-01-01

69

Real-time 3D image registration for functional MRI.  

PubMed

Subject head movements are one of the main practical difficulties with brain functional MRI. A fast, accurate method for rotating and shifting a three-dimensional (3D) image using a shear factorization of the rotation matrix is described. Combined with gradient descent (repeated linearization) on a least squares objective function, 3D image realignment for small movements can be computed as rapidly as whole brain images can be acquired on current scanners. Magn Reson Med 42:1014-1018, 1999. PMID:10571921
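The shear factorization the method relies on can be checked in 2-D, where a rotation factors into three shears (Paeth's decomposition); the paper's 3-D factorization follows the same principle, which lets each shear be applied as a fast one-dimensional resampling pass:

```python
import numpy as np

def shear_x(a):
    """Shear along x: [[1, a], [0, 1]]."""
    return np.array([[1.0, a], [0.0, 1.0]])

def shear_y(b):
    """Shear along y: [[1, 0], [b, 1]]."""
    return np.array([[1.0, 0.0], [b, 1.0]])

theta = np.deg2rad(7.0)           # a small head rotation, as in fMRI realignment
a = -np.tan(theta / 2.0)          # shear parameters of the Paeth decomposition
b = np.sin(theta)
R_shears = shear_x(a) @ shear_y(b) @ shear_x(a)
R_exact = np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])
```

Because each shear only shifts rows (or columns) by a position-dependent amount, a rotation implemented this way reduces to a few passes of 1-D interpolation, which is what makes real-time volume realignment feasible.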

Cox, R W; Jesmanowicz, A

1999-12-01

70

Automatic Detection, Segmentation and Classification of Retinal Horizontal Neurons in Large-scale 3D Confocal Imagery  

SciTech Connect

Automatic analysis of neuronal structure from wide-field-of-view 3D image stacks of retinal neurons is essential for statistically characterizing neuronal abnormalities that may be causally related to neural malfunctions or may be early indicators for a variety of neuropathies. In this paper, we study classification of neuron fields in large-scale 3D confocal image stacks, a challenging neurobiological problem because of the low spatial resolution imagery and presence of intertwined dendrites from different neurons. We present a fully automated, four-step processing approach for neuron classification with respect to the morphological structure of their dendrites. In our approach, we first localize each individual soma in the image by using morphological operators and active contours. By using each soma position as a seed point, we automatically determine an appropriate threshold to segment dendrites of each neuron. We then use skeletonization and network analysis to generate the morphological structures of segmented dendrites, and shape-based features are extracted from network representations of each neuron to characterize the neuron. Based on qualitative results and quantitative comparisons, we show that we are able to automatically compute relevant features that clearly distinguish between normal and abnormal cases for postnatal day 6 (P6) horizontal neurons.

Karakaya, Mahmut [ORNL]; Kerekes, Ryan A [ORNL]; Gleason, Shaun Scott [ORNL]; Martins, Rodrigo [St. Jude Children's Research Hospital]; Dyer, Michael [St. Jude Children's Research Hospital]

2011-01-01

71

Automatic detection, segmentation and characterization of retinal horizontal neurons in large-scale 3D confocal imagery  

NASA Astrophysics Data System (ADS)

Automatic analysis of neuronal structure from wide-field-of-view 3D image stacks of retinal neurons is essential for statistically characterizing neuronal abnormalities that may be causally related to neural malfunctions or may be early indicators for a variety of neuropathies. In this paper, we study classification of neuron fields in large-scale 3D confocal image stacks, a challenging neurobiological problem because of the low spatial resolution imagery and presence of intertwined dendrites from different neurons. We present a fully automated, four-step processing approach for neuron classification with respect to the morphological structure of their dendrites. In our approach, we first localize each individual soma in the image by using morphological operators and active contours. By using each soma position as a seed point, we automatically determine an appropriate threshold to segment dendrites of each neuron. We then use skeletonization and network analysis to generate the morphological structures of segmented dendrites, and shape-based features are extracted from network representations of each neuron to characterize the neuron. Based on qualitative results and quantitative comparisons, we show that we are able to automatically compute relevant features that clearly distinguish between normal and abnormal cases for postnatal day 6 (P6) horizontal neurons.

Karakaya, Mahmut; Kerekes, Ryan A.; Gleason, Shaun S.; Martins, Rodrigo A. P.; Dyer, Michael A.

2011-03-01

72

Autofocus for 3D imaging with multipass SAR  

NASA Astrophysics Data System (ADS)

The emergence of 3D imaging from multipass radar collections motivates the need for 3D autofocus. While several effective methods exist to coherently align radar pulses for 2D image formation from a single elevation pass, further methods are needed to appropriately align radar collection surfaces from pass to pass. We propose one such method of 3D autofocus involving the optimization of a coherence factor metric for the dominant scatterers in an image scene. This method is demonstrated using a diffuse target from a multipass collection of circular SAR data.
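A minimal numpy sketch of the idea, assuming the pass-to-pass error is a pure phase offset on the returns of a few dominant scatterers. The closed-form phase alignment below is a simplified editorial stand-in for the paper's optimization of the coherence factor metric; all names are illustrative:

```python
import numpy as np

def coherence_factor(s):
    # s: (n_passes, n_scatterers) complex returns from dominant scatterers.
    # CF = |sum over passes|^2 / (N * sum of |.|^2), averaged over scatterers;
    # CF = 1 means the passes are perfectly phase-aligned.
    num = np.abs(s.sum(axis=0)) ** 2
    den = s.shape[0] * (np.abs(s) ** 2).sum(axis=0)
    return float((num / den).mean())

def autofocus(s):
    # Phase-align every pass to pass 0 using the dominant scatterers; for a
    # pure per-pass phase error this maximizes the coherence factor.
    corrections = np.exp(-1j * np.angle((s * s[0].conj()).sum(axis=1)))
    return s * corrections[:, None]

rng = np.random.default_rng(0)
amps = rng.normal(size=8) + 1j * rng.normal(size=8)   # dominant scatterers
phase_err = np.array([0.0, 2.0, 4.0])                  # unknown per-pass errors
s = np.exp(1j * phase_err)[:, None] * amps[None, :]
s_focused = autofocus(s)
```

In a real collection the correction would vary across the aperture and with geometry; this sketch only shows why coherence across passes is a usable focus criterion.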

Boss, Noah; Ertin, Emre; Moses, Randolph

2010-04-01

73

Plague and anthrax bacteria cell ultra structure 3D images  

NASA Astrophysics Data System (ADS)

The vast majority of information about cells and cell organelle structure was obtained by means of transmission electron microscopy investigation of serial thin sections of cells. However, it is often very difficult to derive information about the 3D structure of specimens from such electron micrographs. A new program that restores 3D images of cells from serial thin-section micrographs has been developed in our lab. The program makes it possible to visualize a 3D image of a cell and to obtain images of the inner cell structure in an arbitrary plane. Plague bacteria and anthrax cells with spores were visualized at a resolution of about 70 nm by means of the program.

Volkov, Uryi P.; Konnov, Nikolai P.; Novikova, Olga V.; Yakimenko, Roman A.

2002-07-01

74

Hyperspectral image compression with modified 3D SPECK  

Microsoft Academic Search

A hyperspectral image consists of a set of contiguous image bands collected by a hyperspectral sensor. The large data volume of hyperspectral images emphasizes the importance of efficient compression for storage and transmission. This paper proposes a simplified version of the three dimensional Set Partitioning Embedded bloCK (3D SPECK) algorithm for lossy compression of hyperspectral images. A three dimensional discrete

Ruzelita Ngadiran; Said Boussakta; Ahmed Bouridane; Bayan Syarif

2010-01-01

75

Image plane interaction techniques in 3D immersive environments  

Microsoft Academic Search

This paper presents a set of interaction techniques for use in head-tracked immersive virtual environments. With these techniques, the user interacts with the 2D projections that 3D objects in the scene make on his image plane. The desktop analog is the use of a mouse to interact with objects in a 3D scene based on their projections on the

Jeffrey S. Pierce; Andrew S. Forsberg; Matthew J. Conway; Seung Hong; Robert C. Zeleznik; Mark R. Mine

1997-01-01

76

Plague and anthrax bacteria cell ultra structure 3D images  

Microsoft Academic Search

The vast majority of information about cells and cell organelle structure was obtained by means of transmission electron microscopy investigation of serial thin sections of cells. However, it is often very difficult to derive information about the 3D structure of specimens from such electron micrographs. A new program that restores 3D images of cells from serial thin-section micrographs has been developed

Uryi P. Volkov; Nikolai P. Konnov; Olga V. Novikova; Roman A. Yakimenko

2002-01-01

77

3-D capacitance density imaging system  

DOEpatents

Three-dimensional capacitance density imaging of a gasified bed or the like in a containment vessel is achieved using a plurality of electrodes provided circumferentially about the bed in levels and along the bed in channels. The electrodes are individually and selectively excited electrically at each level to produce a plurality of current flux field patterns generated in the bed at each level. The current flux field patterns are suitably sensed and a density pattern of the bed at each level determined. By combining the determined density patterns at each level, a three-dimensional density image of the bed is achieved. 7 figs.

Fasching, G.E.

1988-03-18

78

Locating the Optic Disc in Retinal Images  

Microsoft Academic Search

We present a method to automatically outline the optic disc in a retinal image. Our method for finding the optic disc is based on the properties of the optic disc using simple image processing algorithms which include thresholding, detection of object roundness and circle detection by Hough transformation. Our method is able to recognize the retinal images with general properties
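Circle detection by Hough transformation, the last of the listed ingredients, can be sketched with a bare-bones voting accumulator in numpy. This is an editorial illustration on a synthetic edge map, not the authors' detector, which also uses thresholding and roundness tests on real fundus images:

```python
import numpy as np
from scipy import ndimage as ndi

def hough_circle(edge_points, shape, radii, n_angles=90):
    # Each edge point (y, x) votes for every centre lying at distance r
    # from it; the (centre, radius) pair with the strongest (smoothed)
    # vote peak wins. Smoothing pools quantization jitter in the votes.
    angles = np.linspace(0, 2 * np.pi, n_angles, endpoint=False)
    best = (0.0, None, None)  # (votes, centre, radius)
    for r in radii:
        acc = np.zeros(shape, dtype=float)
        for y, x in edge_points:
            cy = np.round(y - r * np.sin(angles)).astype(int)
            cx = np.round(x - r * np.cos(angles)).astype(int)
            ok = (cy >= 0) & (cy < shape[0]) & (cx >= 0) & (cx < shape[1])
            np.add.at(acc, (cy[ok], cx[ok]), 1.0)
        acc = ndi.uniform_filter(acc, size=3)
        peak = np.unravel_index(acc.argmax(), shape)
        if acc.max() > best[0]:
            best = (acc.max(), peak, r)
    return best[1], best[2]

# Synthetic edge map: a circle of radius 12 centred at (40, 50).
t = np.linspace(0, 2 * np.pi, 200, endpoint=False)
edges = list(zip(np.round(40 + 12 * np.sin(t)).astype(int),
                 np.round(50 + 12 * np.cos(t)).astype(int)))
centre, radius = hough_circle(edges, (80, 100), radii=[8, 12, 16])
```

For an optic disc the edge points would come from a gradient-magnitude threshold of the retinal image, and the radius range from the expected disc size in pixels.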

Mira Park; Jesse S. Jin; Suhuai Luo

2006-01-01

79

Accommodation response measurements for integral 3D image  

NASA Astrophysics Data System (ADS)

We measured accommodation responses under integral photography (IP), binocular stereoscopic, and real-object display conditions, in both binocular and monocular viewing. The equipment we used was an optometric device and a 3D display. We developed the 3D display for IP and binocular stereoscopic images; it comprises a high-resolution liquid crystal display (LCD) and a high-density lens array. The LCD has a resolution of 468 dpi and a diagonal size of 4.8 inches. The high-density lens array comprises 106 x 69 micro lenses that have a focal length of 3 mm and a diameter of 1 mm. The lenses are arranged in a honeycomb pattern. The 3D display was positioned 60 cm from the observer under the IP and binocular stereoscopic display conditions. The target was presented at eight depth positions relative to the 3D display: 15, 10, and 5 cm in front of the 3D display, on the 3D display panel, and 5, 10, 15, and 30 cm behind the 3D display under the IP and binocular stereoscopic display conditions. Under the real-object display condition, the target was displayed on the 3D display panel, and the 3D display was placed at the eight positions. The results suggest that the IP image induced more natural accommodation responses than the binocular stereoscopic image. The accommodation responses to the IP image were weaker than those to a real object; however, they showed a similar tendency to those of the real object under the two viewing conditions. Therefore, IP can induce accommodation to the depth positions of 3D images.

Hiura, H.; Mishina, T.; Arai, J.; Iwadate, Y.

2014-03-01

80

Image processing techniques in 3-D foot shape measurement system  

NASA Astrophysics Data System (ADS)

The 3-D foot-shape measurement system based on the laser-line-scanning principle was designed, and 3-D foot-shape measurements without blind areas and automatic extraction of foot parameters were achieved. The paper focuses on the system structure and principle and on image processing techniques. The key image processing techniques for the 3-D foot-shape measurement system include laser stripe extraction, transformation of laser stripe coordinates from the CCD camera image coordinate system to the laser plane coordinate system, assembly of the laser stripes from the eight CCD cameras, and elimination of image noise and disturbance. 3-D foot-shape measurement makes custom shoe-making possible and shows great promise in shoe design, foot orthopaedic treatment, shoe size standardization, and the establishment of a foot database for consumers.

Liu, Guozhong; Li, Ping; Wang, Boxiong; Shi, Hui; Luo, Xiuzhi

2008-11-01

81

3D Imaging and analysis system using terahertz waves  

Microsoft Academic Search

We have developed the “3D Imaging Analysis System” that uses terahertz waves, the world's first such system for practical applications. This system has an unprecedented capability for nondestructive three-dimensional spectroscopic analysis of the spatial distribution of constituents.

M. Imamura; S. Nishina; A. Irisawa; T. Yamashita; E. Kato

2010-01-01

82

Development of a Postprocessing and 3D Graphical Imaging Facility.  

National Technical Information Service (NTIS)

This grant supported the acquisition of equipment towards the development of what has been termed a Postprocessing and 3D Graphical Imaging Facility. The primary function of the facility is in the analysis of numerical and experimental data, perhaps creat...

J. G. Brasseur

1990-01-01

83

Image based 3D city modeling : Comparative study  

NASA Astrophysics Data System (ADS)

A 3D city model is a digital representation of the Earth's surface and its related objects, such as buildings, trees, vegetation, and man-made features belonging to urban areas. The demand for 3D city modeling is increasing rapidly for various engineering and non-engineering applications. Generally, four main image-based approaches are used to generate virtual 3D city models: sketch-based modeling, procedural-grammar-based modeling, close-range-photogrammetry-based modeling, and computer-vision-based modeling. SketchUp, CityEngine, Photomodeler and Agisoft Photoscan are the main software packages representing these approaches, respectively. These packages take different approaches and methods to image-based 3D city modeling. A literature study shows that, to date, no comprehensive comparative study exists on creating complete 3D city models from images. This paper gives a comparative assessment of these four image-based 3D modeling approaches, based mainly on data acquisition methods, data processing techniques, and the output 3D model products. The study area for this work is the campus of the civil engineering department, Indian Institute of Technology, Roorkee (India); this 3D campus acts as a prototype for a city. The study also explains various governing parameters and factors, reports work experiences, and gives a brief introduction to the strengths and weaknesses of the four image-based techniques, including comments on what can and cannot be done with each software package. The study concludes that every package has advantages and limitations, and that the choice of software depends on the user's requirements for the 3D project. For ordinary visualization projects, SketchUp is a good option. For 3D documentation records, Photomodeler gives good results. For large city reconstruction, CityEngine is a good product. Agisoft Photoscan creates a much better 3D model with good texture quality and automatic processing. This image-based comparative study is therefore useful for the 3D city user community, and it provides a roadmap for the geomatics community to create photo-realistic virtual 3D city models using image-based techniques.

Singh, S. P.; Jain, K.; Mandla, V. R.

2014-06-01

84

3D Model Streaming based on a JPEG 2000 Image  

Microsoft Academic Search

For PC and even mobile devices, video and image streaming technologies, such as H.264 and JPEG\\/JPEG 2000, are already mature. However, the 3D model streaming technology is still far from practical use. Therefore, we wonder if 3D model streaming can directly benefit from current image and video streaming technologies. Hence, in this paper, we propose a mesh streaming method based

Nein-Hsiung Lin; Ting-Hao Huang; Bing-Yu Chen

2007-01-01

85

Spatial 3D imaging by synthetic and digitized holography  

Microsoft Academic Search

A novel method named digitized holography is proposed for 3D display systems. This technique replaces the whole process of classical holography with digital processing of optical wave-fields. Digitized holography allows us to edit holograms and reconstruct spatial 3D images including real, existing objects and CG-modeled virtual objects. Index Terms—Holography, Digital recording, Optical imaging

Yasuaki Arima; Kyoji Matsushima; Sumio Nakahara

2011-01-01

86

3-D optical and electrical simulation for CMOS image sensors  

Microsoft Academic Search

The optical and electrical characteristics of CMOS image sensors, such as readout, saturation, reset, charge-voltage conversion, and crosstalk characteristics, are analyzed by a three-dimensional (3-D) device simulator SPECTRA and a 3-D optical simulator TOCCATA which were developed for the analysis of CCD image sensors. The model of readout operation for a buried photodiode with potential barrier and dip is discussed

Hideki Mutoh

2003-01-01

87

3D-3D tubular organs registration based on bifurcations for the CT images.  

PubMed

The registration of tubular organs (the pulmonary tracheobronchial tree or vasculature) in 3D medical images is critical in clinical applications such as surgical planning and radiotherapy. In this paper, we present a novel method for tubular organ registration based on automatically detected bifurcation points of the tubular organs. We first perform a 3D tubular organ segmentation method to extract the centerlines of the tubular organs and estimate their radii in both planning and respiration-correlated CT (RCCT) images. This segmentation method automatically detects the bifurcation points by applying the AdaBoost algorithm with specially designed filters. We then apply a rigid registration method that minimizes the least square error of the corresponding bifurcation points between the planning CT images and the respiration-correlated CT images. Our method has an over 96% success rate for detecting bifurcation points. We present very promising results of our method applied to the registration of the planning and respiration-correlated CT images. On average, the mean distance and the root-mean-square error (RMSE) of the corresponding bifurcation points between the respiration-correlated images and the registered planning images are less than 2.7 mm. PMID:19163937
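The rigid step, least-squares alignment of corresponding point pairs, has a standard closed-form solution (the Kabsch/Procrustes method via SVD). A numpy sketch with synthetic point sets standing in for the detected bifurcation points:

```python
import numpy as np

def rigid_register(P, Q):
    # Least-squares rigid transform (R, t) mapping points P onto Q:
    # minimizes sum ||R @ p + t - q||^2 (Kabsch / Procrustes solution).
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cq - R @ cp
    return R, t

# Synthetic stand-ins for matched bifurcation points in the two scans.
rng = np.random.default_rng(1)
P = rng.uniform(-50, 50, size=(12, 3))                 # planning scan
theta = np.deg2rad(10.0)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
Q = P @ R_true.T + np.array([3.0, -2.0, 5.0])          # respiration-correlated
R_est, t_est = rigid_register(P, Q)
rmse = np.sqrt(((P @ R_est.T + t_est - Q) ** 2).sum(axis=1).mean())
```

With noise-free correspondences the recovery is exact; in practice the RMSE reported by the paper reflects detection and correspondence errors at the bifurcations.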

Zhou, Jinghao; Chang, Sukmoon; Metaxas, Dimitris; Mageras, Gig

2008-01-01

88

Imaging fault zones using 3D seismic image processing techniques  

NASA Astrophysics Data System (ADS)

Significant advances in the structural analysis of deep-water structures, salt tectonics, and extensional rift basins have come from descriptions of fault system geometries imaged in 3D seismic data. However, even where seismic data are excellent, in most cases the trajectory of thrust faults is highly conjectural, and significant uncertainty remains as to the patterns of deformation that develop between the main fault segments, and even as to the fault architectures themselves. Moreover, structural interpretations that conventionally define faults by breaks and apparent offsets of seismic reflectors are commonly conditioned by a narrow range of theoretical models of fault behavior. For example, almost all interpretations of thrust geometries in seismic data rely on theoretical "end-member" behaviors in which concepts such as strain localization or multilayer mechanics are simply avoided. Yet analogue outcrop studies confirm that such descriptions are commonly unsatisfactory and incomplete. To fill these gaps and improve the 3D visualization of deformation in the subsurface, seismic attribute methods are developed here in conjunction with conventional mapping of reflector amplitudes (Marfurt & Chopra, 2007). These signal processing techniques, recently developed and applied especially by the oil industry, use variations in the amplitude and phase of the seismic wavelet. The resulting seismic attributes improve signal interpretation and are calculated over, and applied to, the entire 3D seismic dataset. In this contribution we show 3D seismic examples of fault structures from gravity-driven deep-water thrust structures and extensional basin systems to indicate how 3D seismic image processing methods can not only build better geometrical interpretations of the faults but also begin to map both strain and damage through the amplitude/phase properties of the seismic signal. This is done by quantifying and delineating short-range anomalies in the intensity of reflector amplitudes and collecting these into "disturbance geobodies". These seismic image processing methods represent a first efficient step toward a robust technique for investigating sub-seismic strain and for mapping noisy deformed zones and displacement within subsurface geology (Dutzer et al., 2011; Iacopini et al., 2012). In all these cases, accurate fault interpretation is critical in applied geology to building a robust and reliable reservoir model, and is essential for further study of fault-seal behavior and reservoir compartmentalization. It is also fundamental for understanding how deformation localizes within sedimentary basins, including the processes associated with active seismogenic faults and mega-thrust systems in subduction zones. Dutzer, J.F., Basford, H., Purves, S., 2009. Investigating fault sealing potential through fault relative seismic volume analysis. Petroleum Geology Conference Series 2010, 7:509-515; doi:10.1144/0070509. Marfurt, K.J., Chopra, S., 2007. Seismic attributes for prospect identification and reservoir characterization. SEG Geophysical Developments. Iacopini, D., Butler, R.W.H., Purves, S., 2012. Seismic imaging of thrust faults and structural damage: a visualization workflow for deepwater thrust belts. First Break, 30(5):39-46.

Iacopini, David; Butler, Rob; Purves, Steve

2013-04-01

89

3D Finite Element Meshing from Imaging Data  

PubMed Central

This paper describes an algorithm to extract adaptive and quality 3D meshes directly from volumetric imaging data. The extracted tetrahedral and hexahedral meshes are extensively used in the Finite Element Method (FEM). A top-down octree subdivision coupled with the dual contouring method is used to rapidly extract adaptive 3D finite element meshes with correct topology from volumetric imaging data. The edge contraction and smoothing methods are used to improve the mesh quality. The main contribution is extending the dual contouring method to crack-free interval volume 3D meshing with feature sensitive adaptation. Compared to other tetrahedral extraction methods from imaging data, our method generates adaptive and quality 3D meshes without introducing any hanging nodes. The algorithm has been successfully applied to constructing the geometric model of a biomolecule in finite element calculations.

Zhang, Yongjie; Bajaj, Chandrajit; Sohn, Bong-Soo

2009-01-01

90

3-D recognition and shape estimation from image contours using invariant 3-D object curve models  

Microsoft Academic Search

The problem of recognizing and estimating the shape of objects with special markings (text, symbols, drawings, etc.) on their surfaces using B-spline curve modeling and a pair of binocular images is considered. As a direct consequence of the invariance of the B-splines to affine transformations the computation of the 3-D coordinates of the object curve points from a pair of

Femand S. Cohen; Jin-Yinn Wang

1992-01-01

91

Low Dose, Low Energy 3d Image Guidance during Radiotherapy  

NASA Astrophysics Data System (ADS)

Patient kilo-voltage X-ray cone beam volumetric imaging for radiotherapy was first demonstrated on an Elekta Synergy mega-voltage X-ray linear accelerator. Subsequently low dose, reduced profile reconstruction imaging was shown to be practical for 3D geometric setup registration to pre-treatment planning images without compromising registration accuracy. Reconstruction from X-ray profiles gathered between treatment beam deliveries was also introduced. The innovation of zonal cone beam imaging promises significantly reduced doses to patients and improved soft tissue contrast in the tumour target zone. These developments coincided with the first dynamic 3D monitoring of continuous body topology changes in patients, at the moment of irradiation, using a laser interferometer. They signal the arrival of low dose, low energy 3D image guidance during radiotherapy itself.

Moore, C. J.; Marchant, T.; Amer, A.; Sharrock, P.; Price, P.; Burton, D.

2006-04-01

92

EISCAT Aperture Synthesis Imaging (EASI _3D) for the EISCAT_3D Project  

NASA Astrophysics Data System (ADS)

Aperture Synthesis Imaging Radar (ASIR) is one of the technologies adopted by the EISCAT_3D project to endow it with imaging capabilities in 3-dimensions that includes sub-beam resolution. Complemented by pulse compression, it will provide 3-dimensional images of certain types of incoherent scatter radar targets resolved to about 100 metres at 100 km range, depending on the signal-to-noise ratio. This ability will open new research opportunities to map small structures associated with non-homogeneous, unstable processes such as aurora, summer and winter polar radar echoes (PMSE and PMWE), Natural Enhanced Ion Acoustic Lines (NEIALs), structures excited by HF ionospheric heating, meteors, space debris, and others. The underlying physico-mathematical principles of the technique are the same as the technique employed in radioastronomy to image stellar objects; both require sophisticated inversion techniques to obtain reliable images.
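The shared principle with radioastronomy, recovering an image from sparse samples of its spatial-frequency (visibility) plane, can be shown in a toy numpy experiment. The grid size and sampling fraction below are arbitrary choices, and real ASIR inversion is far more sophisticated than this direct "dirty image" transform:

```python
import numpy as np

rng = np.random.default_rng(3)
truth = np.zeros((32, 32))
truth[12, 20] = 1.0                      # a single point target in the beam

vis = np.fft.fft2(truth)                 # ideal "visibilities" of the scene
mask = rng.random((32, 32)) < 0.25       # sparse (u, v) sampling by the array
dirty = np.fft.ifft2(vis * mask).real    # dirty image: truth convolved with
                                         # the point-spread function of the
                                         # incomplete sampling pattern

peak = np.unravel_index(dirty.argmax(), dirty.shape)
```

The point source survives at the correct position, but the sparse sampling spreads energy into sidelobes; suppressing those sidelobes is what the "sophisticated inversion techniques" mentioned above (e.g. deconvolution) address.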

La Hoz, Cesar; Belyey, Vasyl

2012-07-01

93

Accelerated 3D catheter visualization from triplanar MR projection images.  

PubMed

One major obstacle for MR-guided catheterizations is long acquisition times associated with visualizing interventional devices. Therefore, most techniques presented hitherto rely on single-plane imaging to visualize the catheter. Recently, accelerated three-dimensional (3D) imaging based on compressed sensing has been proposed to reduce acquisition times. However, frame rates with this technique remain low, and the 3D reconstruction problem yields a considerable computational load. In X-ray angiography, it is well understood that the shape of interventional devices can be derived in 3D space from a limited number of projection images. In this work, this fact is exploited to develop a method for 3D visualization of active catheters from multiplanar two-dimensional (2D) projection MR images. This is favorable to 3D MRI as the overall number of acquired profiles, and consequently the acquisition time, is reduced. To further reduce measurement times, compressed sensing is employed. Furthermore, a novel single-channel catheter design is presented that combines a solenoidal tip coil in series with a single-loop antenna, enabling simultaneous tip tracking and shape visualization. The tracked tip and catheter properties provide constraints for compressed sensing reconstruction and subsequent 2D/3D curve fitting. The feasibility of the method is demonstrated in phantoms and in an in vivo pig experiment. PMID:20572136
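The geometric core, recovering a 3D curve from two orthogonal 2D projections, can be sketched as follows. This assumes the catheter runs monotonically along the shared z axis so that z serves as the common parameter, a simplification of the paper's constrained 2D/3D curve fitting; the plane names are illustrative:

```python
import numpy as np

def curve_from_two_projections(xz, yz, n_samples=50):
    # xz: (N, 2) samples (x, z) from one projection plane (e.g. coronal),
    # yz: (M, 2) samples (y, z) from an orthogonal plane (e.g. sagittal).
    # Assumes z increases monotonically along both sampled curves, so z can
    # act as the common parameter: resample on a shared z grid and merge.
    z = np.linspace(max(xz[:, 1].min(), yz[:, 1].min()),
                    min(xz[:, 1].max(), yz[:, 1].max()), n_samples)
    x = np.interp(z, xz[:, 1], xz[:, 0])
    y = np.interp(z, yz[:, 1], yz[:, 0])
    return np.column_stack([x, y, z])

# Synthetic catheter: a gentle helix, projected onto the two planes.
z0 = np.linspace(0.0, 10.0, 101)
xz = np.column_stack([np.cos(z0), z0])
yz = np.column_stack([np.sin(z0), z0])
curve = curve_from_two_projections(xz, yz)
```

Two views suffice here because each projection fixes one transverse coordinate per z; the tracked tip in the paper additionally anchors the curve's endpoint.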

Schirra, Carsten Oliver; Weiss, Steffen; Krueger, Sascha; Caulfield, Denis; Pedersen, Steen F; Razavi, Reza; Kozerke, Sebastian; Schaeffter, Tobias

2010-07-01

94

Prostate Mechanical Imaging: 3-D Image Composition and Feature Calculations  

PubMed Central

We have developed a method and a device entitled prostate mechanical imager (PMI) for the real-time imaging of prostate using a transrectal probe equipped with a pressure sensor array and position tracking sensor. PMI operation is based on measurement of the stress pattern on the rectal wall when the probe is pressed against the prostate. Temporal and spatial changes in the stress pattern provide information on the elastic structure of the gland and allow two-dimensional (2-D) and three-dimensional (3-D) reconstruction of prostate anatomy and assessment of prostate mechanical properties. The data acquired allow the calculation of prostate features such as size, shape, nodularity, consistency/hardness, and mobility. The PMI prototype has been validated in laboratory experiments on prostate phantoms and in a clinical study. The results obtained on model systems and in vivo images from patients prove that PMI has potential to become a diagnostic tool that could largely supplant DRE through its higher sensitivity, quantitative record storage, ease-of-use and inherent low cost.

Egorov, Vladimir; Ayrapetyan, Suren; Sarvazyan, Armen P.

2008-01-01

95

Automated curved planar reformation of 3D spine images  

Microsoft Academic Search

Traditional techniques for visualizing anatomical structures are based on planar cross-sections from volume images, such as images obtained by computed tomography (CT) or magnetic resonance imaging (MRI). However, planar cross-sections taken in the coordinate system of the 3D image often do not provide sufficient or qualitative enough diagnostic information, because planar cross-sections cannot follow curved anatomical structures (e.g. arteries, colon,

Tomaz Vrtovec; Bostjan Likar; Franjo Pernus

2005-01-01

96

Optical microscopy system for 3D dynamic imaging  

NASA Astrophysics Data System (ADS)

We describe a prototype 3D optical microscopy system that utilizes parallel computing and high-speed networks to address a major obstacle to successful implementation of 3D dynamic microscopy: the huge computational demand of real-time dynamic 3D acquisition, reconstruction, and display, and the high-bandwidth demand of data transfer for remote processing and display. The system comprises image acquisition hardware and software, high-speed networks between acquisition and processing environments, parallel restoration using wavelet algorithms, and volume rendering and display in a virtual environment.

Hudson, Randy; Aarsvold, John N.; Chen, Chin-Tu; Chen, Jie; Davies, Peter; Disz, Terry; Foster, Ian; Griem, Melvin; Kwong, Man K.; Lin, Biquan

1996-04-01

97

A 3D surface imaging system for assessing human obesity  

NASA Astrophysics Data System (ADS)

The increasing prevalence of obesity suggests a need to develop a convenient, reliable and economical tool for assessment of this condition. Three-dimensional (3D) body surface imaging has emerged as an exciting technology for estimation of body composition. This paper presents a new 3D body imaging system, which was designed for enhanced portability, affordability, and functionality. In this system, stereo vision technology was used to satisfy the requirements for a simple hardware setup and fast image acquisitions. The portability of the system was created via a two-stand configuration, and the accuracy of body volume measurements was improved by customizing stereo matching and surface reconstruction algorithms that target specific problems in 3D body imaging. Body measurement functions dedicated to body composition assessment also were developed. The overall performance of the system was evaluated in human subjects by comparison to other conventional anthropometric methods, as well as air displacement plethysmography, for body fat assessment.
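For a parallel-axis camera pair, the stereo-vision ranging underlying such a system reduces to one triangulation formula. A minimal sketch (the parameter values are illustrative, not from this system):

```python
def depth_from_disparity(disparity_px, focal_px, baseline_m):
    # Parallel-axis stereo triangulation: Z = f * B / d. Depth precision
    # degrades with distance and improves with a wider camera baseline,
    # which is why stereo body scanners work at close range.
    return focal_px * baseline_m / disparity_px

# A feature shifted 200 px between views, f = 1000 px, baseline 0.4 m:
z = depth_from_disparity(200.0, 1000.0, 0.4)   # 2.0 m
```

The customized stereo matching mentioned in the abstract is what supplies a dense disparity map, pixel by pixel, to feed such a formula; the surface and volume estimates follow from the resulting point cloud.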

Xu, B.; Yu, W.; Yao, M.; Yao, X.; Li, Q.; Pepper, M. R.; Freeland-Graves, J. H.

2009-08-01

98

Heresy: a virtual image-space 3D rasterization architecture  

Microsoft Academic Search

With the advent of virtual reality and other visualapplications that require photo and cinemarealism, 3D graphics hardware has started to enterinto the main stream. This paper describesthe design and evaluation of a cost-effective highperformance3D graphics system called Heresythat is based on virtual image-space architecture. Heresy features three novel architecturalmechanisms. First, the lazy shading mechanismrenders the shading computation effort to be

Tzi-cker Chiueh

1997-01-01

99

Estimating 3D Egomotion from Perspective Image Sequence  

Microsoft Academic Search

The computation of sensor motion from sets of displacement vectors obtained from consecutive pairs of images is discussed. The problem is investigated with emphasis on its application to autonomous robots and land vehicles. The effects of 3D camera rotation and translation upon the observed image are discussed, particularly the concept of the focus of expansion (FOE). It is shown that

Wilhelm Burger; Bir Bhanu

1990-01-01

100

Image segmentation and 3D visualization for MRI mammography  

Microsoft Academic Search

MRI mammography has a number of advantages, including the tomographic, and therefore three-dimensional (3-D) nature, of the images. It allows the application of MRI mammography to breasts with dense tissue, post operative scarring, and silicon implants. However, due to the vast quantity of images and subtlety of difference in MR sequence, there is a need for reliable computer diagnosis to

Lihua Li; Yong Chu; Angela F. Salem; Robert A. Clark

2002-01-01

101

Prototype of Video Endoscopic Capsule With 3-D Imaging Capabilities  

Microsoft Academic Search

Wireless video capsules can now carry out gastroenterological examinations. The images make it possible to analyze some diseases during postexamination, but the gastroenterologist could make a direct diagnosis if the video capsule integrated vision algorithms. The first step toward in situ diagnosis is the implementation of 3-D imaging techniques in the video capsule. By transmitting only the diagnosis instead of

Anthony Kolar; Olivier Romain; Jad Ayoub; Sylvain Viateur; Bertrand Granado

2010-01-01

102

3D image analysis of abdominal aortic aneurysm  

NASA Astrophysics Data System (ADS)

In this paper we propose a technique for 3-D segmentation of abdominal aortic aneurysm (AAA) from computed tomography angiography (CTA) images. Output data (a 3-D model) from the proposed method can be used for measurement of aortic shape and dimensions. Knowledge of aortic shape and size is very important in the planning of the minimally invasive procedure, that is, for selection of an appropriate stent graft device for treatment of AAA. The technique is based on a 3-D deformable model and utilizes the level-set algorithm for implementation of the method. The method performs 3-D segmentation of CTA images and extracts a 3-D model of the aortic wall. Once the 3-D model of the aortic wall is available, it is easy to perform all required measurements for appropriate stent graft selection. The method proposed in this paper uses the level-set algorithm for deformable models, instead of the classical snake algorithm. The main advantage of the level-set algorithm is that it enables easy segmentation of complex structures, surpassing most of the drawbacks of the classical approach. We have extended the deformable model to incorporate a priori knowledge about the shape of the AAA. This helps direct the evolution of the deformable model to correctly segment the aorta. The algorithm has been implemented in the IDL and C languages. Experiments have been performed using real patient CTA images and have shown good results.
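The flavor of such a deformable-model segmentation can be conveyed with a minimal two-phase, piecewise-constant iteration in the spirit of Chan-Vese. This is an editorial sketch, not the authors' level-set implementation: pixels are reassigned to whichever region mean they are closer to, with a median filter standing in for the curvature regularization a level-set formulation provides:

```python
import numpy as np
from scipy import ndimage as ndi

def two_phase_segment(image, n_iter=20):
    # Alternately fit inside/outside means and reassign pixels to the
    # closer mean; the median filter keeps the region boundary smooth.
    mask = image > image.mean()                 # crude initialization
    for _ in range(n_iter):
        c_in = image[mask].mean()
        c_out = image[~mask].mean()
        mask = (image - c_in) ** 2 < (image - c_out) ** 2
        mask = ndi.median_filter(mask.astype(np.uint8), size=3).astype(bool)
    return mask

# Synthetic "vessel cross-section": bright disc on a darker background.
yy, xx = np.mgrid[:64, :64]
truth = (yy - 32) ** 2 + (xx - 32) ** 2 < 14 ** 2
img = (np.where(truth, 0.8, 0.2)
       + 0.05 * np.random.default_rng(2).normal(size=(64, 64)))
seg = two_phase_segment(img)
dice = 2 * (seg & truth).sum() / (seg.sum() + truth.sum())
```

A true level-set method evolves a signed distance function instead of a binary mask, which is what lets it handle topology changes and incorporate shape priors such as the AAA prior described above.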

Subasic, Marko; Loncaric, Sven; Sorantin, Erich

2001-07-01

103

3D image analysis of abdominal aortic aneurysm  

NASA Astrophysics Data System (ADS)

This paper presents a method for 3-D segmentation of abdominal aortic aneurysm from computed tomography angiography (CTA) images. The proposed method is automatic and requires minimal user assistance. Segmentation is performed in two steps: first the inner and then the outer aortic border is segmented. The two steps differ because image conditions differ at the two aortic borders. Together, the two segmentations yield a complete 3-D model of the abdominal aorta, which is used in measurements of the aneurysm area. The deformable model is implemented using the level-set algorithm because of its ability to describe, in a natural manner, the complex shapes that frequently occur in pathology. For segmentation of the outer aortic boundary, we introduced knowledge-based preprocessing to enhance and reconstruct the low-contrast aortic boundary. The method has been implemented in the IDL and C languages. Experiments performed on real patient CTA images have shown good results.

Subasic, Marko; Loncaric, Sven; Sorantin, Erich

2002-05-01

104

Single 3D cell segmentation from optical CT microscope images  

NASA Astrophysics Data System (ADS)

The automated segmentation of the nucleus and cytoplasm regions in 3D optical CT microscope images has been achieved with two methods: a global-threshold gradient-based approach and a graph-cut approach. For the first method, the first two peaks of a gradient figure-of-merit curve are selected as the thresholds for cytoplasm and nucleus segmentation. The second method applies a graph-cut segmentation twice: the first identifies the nucleus region and the second identifies the cytoplasm region. Image segmentation of single cells is important for automated disease diagnostic systems. The segmentation methods were evaluated with 200 3D images consisting of 40 samples of 5 different cell types: columnar, macrophage, metaplastic, and squamous human cells, and cultured A549 cancer cells. The segmented cells were compared with both 2D and 3D reference images, and the quality of segmentation was determined by the Dice Similarity Coefficient (DSC). In general, the graph-cut method outperformed the gradient-based method. The graph-cut method achieved an average DSC of 86% and 72% for nucleus and cytoplasm segmentation, respectively, against the 2D reference images, and 83% and 75% against the 3D reference images. The gradient method achieved an average DSC of 72% and 51% for nucleus and cytoplasm segmentation against the 2D reference images, and 71% and 51% against the 3D reference images. The DSC of cytoplasm segmentation was significantly lower than that of the nucleus because the cytoplasm was not as well differentiated from the background by image intensity.
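The Dice Similarity Coefficient used for the evaluation above is straightforward to compute; a minimal NumPy version (the toy masks below are illustrative, not data from the study):

```python
import numpy as np

def dice_coefficient(seg, ref):
    """Dice Similarity Coefficient between two binary masks: 2|A∩B| / (|A| + |B|)."""
    seg, ref = np.asarray(seg, bool), np.asarray(ref, bool)
    intersection = np.logical_and(seg, ref).sum()
    return 2.0 * intersection / (seg.sum() + ref.sum())

# A segmentation that overlaps the reference on 1 pixel out of 2 in each mask
print(dice_coefficient([1, 1, 0, 0], [1, 0, 1, 0]))  # → 0.5
```

A DSC of 1.0 means the two masks coincide exactly; values like the 86% and 51% reported above indicate strong and weak agreement with the reference, respectively.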

Xie, Yiting; Reeves, Anthony P.

2014-03-01

105

3D TRUS Image Segmentation in Prostate Brachytherapy.  

PubMed

Brachytherapy is a minimally invasive interventional surgery used to treat prostate cancer. It is composed of three steps: dose pre-planning, implantation of radioactive seeds, and dose post-planning. In these procedures, it is crucial to determine the positions of the needles and seeds and to measure the volume of the prostate gland. Three-dimensional transrectal ultrasound (TRUS) imaging has been demonstrated to be a useful technique for performing such tasks. Compared to CT, MRI, or X-ray imaging, US images suffer from low contrast, speckle, and shadows, making segmentation of the needles, the prostate, and the seeds in 3D TRUS images challenging. In this paper, we review 3D TRUS image segmentation methods used in prostate brachytherapy, covering segmentation of the needles, the prostate, and the seeds. Furthermore, experimental results with agar, turkey, and chicken phantoms, as well as patient data, are reported. PMID:17281931

Ding, Mingyue; Gardi, Lori; Wei, Zhouping; Fenster, Aaron

2005-01-01

106

3D shape initialization of objects in multiview image sequences  

NASA Astrophysics Data System (ADS)

The ultimate goal of future telecommunication is highly effective interpersonal information exchange. The effectiveness of telecommunication is greatly enhanced by 3-D telepresence. This requires that visual information be presented in such a way that the viewer is under the impression of actually being physically close to the party with whom the communication takes place. One way to achieve a natural 3-D impression is to encode image sequences using 3-D model objects and animate them again by computer-graphic means according to the observer's eye positions. This concept uses a parametric 3-D scene description to model a scene. The parameters of the model objects are estimated from trinocular input image sequences by means of image analysis. This paper starts with an overview of the European ACTS project PANORAMA, in which the above-mentioned concept will be realized and evaluated. The main part discusses the shape initialization of physical objects from a multiview image sequence. For this, the range information given by three disparity maps from different stereo views is backprojected into 3-D space. The resulting cloud of 3-D points is then approximated by a flexible triangular net using a technique named discrete smooth interpolation. Discrete smooth interpolation is a particular surface-interpolation technique, solved by an iterative approach. It generates a surface, defined as a wireframe mesh, that fits (or interpolates) a given set of 3-D points while observing given constraints on the surface characteristics, such as roughness and behavior at the boundaries. Finally, results are presented that show the capabilities of this approach in video communication.
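Backprojecting a disparity map into 3-D space, as done above before mesh fitting, follows the standard rectified pinhole-stereo relations; the focal length, baseline, and principal point below are illustrative assumptions, not PANORAMA parameters:

```python
import numpy as np

def backproject_disparity(disp, f, baseline, cx, cy):
    """Turn a disparity map (in pixels) into a cloud of 3-D points.

    Rectified-stereo model: Z = f*B/d, X = (u-cx)*Z/f, Y = (v-cy)*Z/f.
    """
    v, u = np.indices(disp.shape)
    valid = disp > 0                       # zero disparity carries no depth
    Z = f * baseline / disp[valid]
    X = (u[valid] - cx) * Z / f
    Y = (v[valid] - cy) * Z / f
    return np.column_stack([X, Y, Z])

# A fronto-parallel plane: constant disparity maps to constant depth
disp = np.full((4, 4), 8.0)
pts = backproject_disparity(disp, f=400.0, baseline=0.1, cx=2.0, cy=2.0)
# Every point lies at Z = 400 * 0.1 / 8 = 5.0 m
```

The resulting point cloud is what a technique like discrete smooth interpolation would then approximate with a triangular mesh.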

Riegel, Thomas B.; Pedersini, Federico; Manzotti, Roberto

1998-03-01

107

Integrated optical 3D digital imaging based on DSP scheme  

NASA Astrophysics Data System (ADS)

We present a scheme of integrated optical 3-D digital imaging (IO3DI) based on a digital signal processor (DSP), which can acquire range images independently, without PC support. The scheme is based on a parallel hardware structure built around the DSP and a field-programmable gate array (FPGA) to realize 3-D imaging. In this integrated 3-D imaging scheme, phase measurement profilometry is adopted. To realize pipelined processing of fringe projection, image acquisition, and fringe-pattern analysis, we present a multi-threaded application program developed under the DSP/BIOS RTOS (real-time operating system). The RTOS provides a preemptive kernel and a powerful configuration tool, with which we achieve real-time scheduling and synchronization. To accelerate automatic fringe analysis and phase unwrapping, we make use of software optimization techniques. The proposed scheme reaches a performance of 39.5 f/s (frames per second), so it is well suited to real-time fringe-pattern analysis and fast 3-D imaging. Experimental results are also presented to show the validity of the proposed scheme.
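The fringe-analysis core of phase measurement profilometry can be illustrated with the classic four-step phase-shifting formula. This is a generic sketch with a synthetic phase map, not the authors' DSP/FPGA pipeline; amplitudes and the test signal are assumptions:

```python
import numpy as np

def four_step_phase(I0, I1, I2, I3):
    """Recover the wrapped phase from four fringe images shifted by 0, pi/2, pi, 3pi/2.

    With I_k = A + B*cos(phi + k*pi/2):
      I3 - I1 = 2B*sin(phi),  I0 - I2 = 2B*cos(phi).
    """
    return np.arctan2(I3 - I1, I0 - I2)

# Synthesize four phase-shifted fringe profiles for a known phase map
x = np.linspace(0, 2 * np.pi, 128)
phi = 0.8 * np.sin(x)                               # "object" phase, |phi| < pi
A, B = 2.0, 1.0
frames = [A + B * np.cos(phi + k * np.pi / 2) for k in range(4)]
recovered = four_step_phase(*frames)                # equals phi (already unwrapped here)
```

In a real profilometer the recovered phase is wrapped into (-pi, pi] and must go through the phase-unwrapping step the abstract mentions before it can be converted to height.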

Wang, Xiaodong; Peng, Xiang; Gao, Bruce Z.

2008-03-01

108

Imaging of Buried 3D Magnetic Rolled-up Nanomembranes  

PubMed Central

Increasing performance and enabling novel functionalities of microelectronic devices, such as three-dimensional (3D) on-chip architectures in optics, electronics, and magnetics, calls for new approaches in both fabrication and characterization. Up to now, 3D magnetic architectures had mainly been studied by integral means without providing insight into local magnetic microstructures that determine the device performance. We prove a concept that allows for imaging magnetic domain patterns in buried 3D objects, for example, magnetic tubular architectures with multiple windings. The approach is based on utilizing the shadow contrast in transmission X-ray magnetic circular dichroism (XMCD) photoemission electron microscopy and correlating the observed 2D projection of the 3D magnetic domains with simulated XMCD patterns. That way, we are not only able to assess magnetic states but also monitor the field-driven evolution of the magnetic domain patterns in individual windings of buried magnetic rolled-up nanomembranes.

2014-01-01

109

Imaging of Buried 3D Magnetic Rolled-up Nanomembranes.  

PubMed

Increasing performance and enabling novel functionalities of microelectronic devices, such as three-dimensional (3D) on-chip architectures in optics, electronics, and magnetics, calls for new approaches in both fabrication and characterization. Up to now, 3D magnetic architectures had mainly been studied by integral means without providing insight into local magnetic microstructures that determine the device performance. We prove a concept that allows for imaging magnetic domain patterns in buried 3D objects, for example, magnetic tubular architectures with multiple windings. The approach is based on utilizing the shadow contrast in transmission X-ray magnetic circular dichroism (XMCD) photoemission electron microscopy and correlating the observed 2D projection of the 3D magnetic domains with simulated XMCD patterns. That way, we are not only able to assess magnetic states but also monitor the field-driven evolution of the magnetic domain patterns in individual windings of buried magnetic rolled-up nanomembranes. PMID:24849571

Streubel, Robert; Han, Luyang; Kronast, Florian; Unal, Ahmet A; Schmidt, Oliver G; Makarov, Denys

2014-07-01

110

3D ultrasound imaging of the human corpus luteum.  

PubMed

The aim of this article was to present the extent to which state-of-the-art ultrasonographic imaging can be used to visualize the features of the human corpus luteum (CL). In the late 1970s, the first ultrasonographic images of human CLs were published. The advent of transvaginal, high-resolution transducers has greatly improved the quality of imaging, as did the subsequent introduction of color Doppler and 3D ultrasonography. In the present technical note, examples of the various technical and imaging modalities used to examine the human CL are shown. The CL is a short-lived structure with a highly variable morphological appearance, and the 3D ultrasonographic technique is an ideal tool for performing standardized measurements on it. The introduction of new imaging techniques in clinical reproductive medicine can only be successful if operators are properly trained. PMID:24856469

Brezinka, Christoph

2014-04-01

111

3D EFT imaging with planar electrode array: Numerical simulation  

NASA Astrophysics Data System (ADS)

Electric field tomography (EFT) is a new modality of quasistatic electromagnetic sounding of conductive media, recently investigated theoretically and realized experimentally. The demonstrated results pertain to 2D imaging with circular or linear arrays of electrodes (the linear array provides rather poor imaging quality). In many applications 3D imaging is essential or can significantly increase the value of the investigation. In this report we present the first results of numerical simulation of an EFT imaging system with a planar array of electrodes, which allows 3D visualization of the subsurface conductivity distribution. The geometry of the system is similar to that of our EIT breast imaging system, providing 3D conductivity imaging in the form of a set of cross-sections at different depths from the surface. The EFT principle of operation and reconstruction approach differ significantly from those of the EIT system, so the results of numerical simulation are important to estimate whether comparable imaging quality is possible with the new contactless method. The EFT forward problem is solved using the finite-difference time-domain (FDTD) method for an 8×8 array of square electrodes. The calculated measurement results are then used to reconstruct conductivity distributions by filtered backprojection along electric field lines. Reconstructed images of simple test objects are presented.

Tuykin, T.; Korjenevsky, A.

2010-04-01

112

Imaging of retinal and choroidal vascular tumours  

PubMed Central

The most common intraocular vascular tumours are choroidal haemangiomas, vasoproliferative tumours, and retinal haemangioblastomas. Rarer conditions include cavernous retinal angioma and arteriovenous malformations. Options for ablating the tumour include photodynamic therapy, argon laser photocoagulation, trans-scleral diathermy, cryotherapy, anti-angiogenic agents, plaque radiotherapy, and proton beam radiotherapy. Secondary effects are common and include retinal exudates, macular oedema, epiretinal membranes, retinal fibrosis, as well as serous and tractional retinal detachment, which are treated using standard methods (ie, intravitreal anti-angiogenic agents or steroids as well as vitreoretinal procedures, such as epiretinal membrane peeling and release of retinal traction). The detection, diagnosis, and monitoring of vascular tumours and their complications have improved considerably thanks to advances in imaging. These include spectral domain and enhanced depth imaging optical coherence tomography (SD-OCT and EDI-OCT, respectively), wide-angle photography and angiography as well as wide-angle fundus autofluorescence. Such novel imaging has provided new diagnostic clues and has profoundly influenced therapeutic strategies so that vascular tumours and secondary effects are now treated concurrently instead of sequentially, enhancing any opportunities for conserving vision and the eye. In this review, we describe how SD-OCT, EDI-OCT, autofluorescence, wide-angle photography and wide-angle angiography have facilitated the evaluation of eyes with the more common vascular tumours, that is, choroidal haemangioma, retinal vasoproliferative tumours, and retinal haemangioblastoma.

Heimann, H; Jmor, F; Damato, B

2013-01-01

113

3D FaceCam: a fast and accurate 3D facial imaging device for biometrics applications  

NASA Astrophysics Data System (ADS)

Human faces are fundamentally three-dimensional (3D) objects, and each face has its unique 3D geometric profile. The 3D geometric features of a human face can be used, together with its 2D texture, for rapid and accurate face recognition purposes. Due to the lack of low-cost and robust 3D sensors and effective 3D facial recognition (FR) algorithms, almost all existing FR systems use 2D face images. Genex has developed 3D solutions that overcome the inherent problems in 2D while also addressing limitations in other 3D alternatives. One important aspect of our solution is a unique 3D camera (the 3D FaceCam) that combines multiple imaging sensors within a single compact device to provide instantaneous, ear-to-ear coverage of a human face. This 3D camera uses three high-resolution CCD sensors and a color-encoded pattern projection system. The RGB color information from each pixel is used to compute the range data and generate an accurate 3D surface map. The imaging system uses no moving parts and combines multiple 3D views to provide detailed and complete 3D coverage of the entire face. Images are captured within a fraction of a second and full-frame 3D data is produced within a few seconds. The described method provides much better data coverage and accuracy in areas with sharp features or details (such as the nose and eyes). Using this 3D data, we have been able to demonstrate that a 3D approach can significantly improve the performance of facial recognition. We have conducted tests in which we have varied the lighting conditions and angle of image acquisition in the "field." These tests have shown that the matching results are significantly improved when enrolling a 3D image rather than a single 2D image. With its 3D solutions, Genex is working toward unlocking the promise of powerful 3D FR and transferring FR from a lab technology into a real-world biometric solution.

Geng, Jason; Zhuang, Ping; May, Patrick; Yi, Steven; Tunnell, David

2004-08-01

114

Reconstruction of 3D scenes from sequences of images  

NASA Astrophysics Data System (ADS)

Reconstruction of three-dimensional (3D) scenes is an active research topic in the fields of computer vision and 3D display. It is a challenge to model 3D objects rapidly and effectively. A 3D model can be extracted from multiple images. The system requires only a sequence of images taken with cameras whose parameters are unknown, which provides a high degree of flexibility. We focus on quickly merging point clouds of the object from depth-map sequences. The whole system combines algorithms from different areas of computer vision, such as camera calibration, stereo correspondence, point-cloud splicing, and surface reconstruction. The procedure of 3D reconstruction is decomposed into a number of successive steps. First, image sequences are acquired by a camera moving freely around the object. Second, pairwise matching is realized with the Scale Invariant Feature Transform (SIFT) algorithm: an initial matching is made for the first two images of the sequence, and for each subsequent image, the points of interest corresponding to those in previous images are refined or corrected, eliminating the vertical parallax between the images. The next step is to calibrate the camera, computing its intrinsic and external parameters and thereby obtaining its relative position and orientation. A sequence of depth maps is then acquired by using a non-local cost-aggregation method for stereo matching. From the scene depths, a point-cloud sequence is obtained and merged into a point-cloud model using the external parameters of the camera. The point-cloud model is then approximated by a triangular wireframe mesh to reduce geometric complexity and to tailor the model to the requirements of computer-graphics visualization systems. Finally, the texture is mapped onto the wireframe model, which can also be used for 3D display. According to the experimental results, we can reconstruct a 3D point-cloud model more quickly and efficiently than other methods.
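The two-view geometry step in pipelines like the one above is usually estimated through the fundamental matrix. Below is a minimal normalized eight-point sketch on synthetic data; the cameras and points are illustrative assumptions, and a real pipeline would run this inside RANSAC over SIFT matches rather than on exact correspondences:

```python
import numpy as np

def eight_point(x1, x2):
    """Normalized eight-point estimate of the fundamental matrix.

    x1, x2: (N, 2) arrays of corresponding image points, N >= 8.
    """
    def normalize(pts):
        c = pts.mean(axis=0)
        s = np.sqrt(2) / np.mean(np.linalg.norm(pts - c, axis=1))
        T = np.array([[s, 0, -s * c[0]], [0, s, -s * c[1]], [0, 0, 1]])
        h = np.column_stack([pts, np.ones(len(pts))])
        return (T @ h.T).T, T

    p1, T1 = normalize(x1)
    p2, T2 = normalize(x2)
    # Each correspondence contributes one row of the linear system A f = 0
    A = np.column_stack([
        p2[:, 0] * p1[:, 0], p2[:, 0] * p1[:, 1], p2[:, 0],
        p2[:, 1] * p1[:, 0], p2[:, 1] * p1[:, 1], p2[:, 1],
        p1[:, 0], p1[:, 1], np.ones(len(p1))])
    F = np.linalg.svd(A)[2][-1].reshape(3, 3)
    U, S, Vt = np.linalg.svd(F)               # enforce rank 2
    F = U @ np.diag([S[0], S[1], 0]) @ Vt
    F = T2.T @ F @ T1                         # undo the normalization
    return F / np.linalg.norm(F)

# Synthetic two-view setup: random 3-D points seen by two translated cameras
rng = np.random.default_rng(0)
X = rng.uniform([-1, -1, 4], [1, 1, 6], size=(20, 3))

def project(X, t):                            # unit focal length, camera offset -t
    Xc = X + t
    return Xc[:, :2] / Xc[:, 2:3]

x1 = project(X, np.zeros(3))
x2 = project(X, np.array([0.2, 0.0, 0.0]))
F = eight_point(x1, x2)
# The epipolar constraint x2^T F x1 ≈ 0 holds for every correspondence
```

With exact synthetic correspondences the algebraic residuals are at machine-precision level; with real SIFT matches, normalization plus robust outlier rejection is what keeps the estimate stable.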

Niu, Bei; Sang, Xinzhu; Chen, Duo; Cai, Yuanfa

2013-08-01

115

Processing of 3D DIC microscopy images for data visualization  

Microsoft Academic Search

Differential interference contrast (DIC) microscopy is a popular method for studying the three-dimensional structure of living cells. Currently, no volume-rendering tools exist which support visualisation of 3D DIC data. The authors develop a transformation method which removes the differential appearance of DIC imagery, producing data which is suitable for volume rendering, and compare this method to an edge-enhancing
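A DIC image approximates a directional derivative of the specimen's optical path. The abstract does not detail the authors' transformation, so as a hypothetical, crude stand-in, the sketch below shows line integration along the shear direction, which undoes an idealized derivative-like transfer:

```python
import numpy as np

def integrate_dic(dic_img, axis=1):
    """Approximately invert a derivative-like (differential) DIC transfer
    by cumulative summation along the assumed shear axis."""
    return np.cumsum(dic_img, axis=axis)

# If an image were an exact forward difference along axis 1,
# cumulative summation recovers it exactly:
rng = np.random.default_rng(1)
img = rng.random((8, 8))
forward_diff = np.diff(img, axis=1, prepend=0.0)   # idealized "DIC" image
recovered = integrate_dic(forward_diff)            # equals img
```

Real DIC data also contains additive bias and noise, which pure integration amplifies along the scan direction; that is why published methods add filtering or regularization on top of this idea.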

P. A. Feineigle; Andrew P. Witkin; Virginia L. Stonick

1996-01-01

116

3D: The next generation near-infrared imaging spectrometer  

Microsoft Academic Search

The new MPE near-infrared imaging spectrometer 3D represents a new generation of astronomical instrumentation. It is based on a 256×256 NICMOS-3 Rockwell array and can simultaneously obtain 256 H- or K-band spectra at R=1100 or 2100 from a square 16×16 pixel field on the sky. Typical pixel scales are 0.3

L. Weitzel; A. Krabbe; H. Kroker; N. Thatte; L. E. Tacconi-Garman; M. Cameron; R. Genzel

1996-01-01

117

Segmentation of 3D range images using pyramidal data structures  

Microsoft Academic Search

Given a 3D range image of a scene containing multiple arbitrarily shaped objects, the authors segment the scene into homogeneous surface patches. A novel modular framework for the segmentation task is proposed. In the first module, over-segmentation is achieved using zeroth and first order local surface properties. The segmentation is then refined in the second module using high order surface

Bikash Sabata; Farshid Arman; J. K. Aggarwal

1990-01-01

118

Segmentation of 3D Range Images Using Pyramidal Data Structures  

Microsoft Academic Search

Given a 3-D range image of a scene containing multiple arbitrarily shaped objects, we segment the scene into homogeneous surface patches. A new modular framework for the segmentation task is proposed. In the first module, over-segmentation is achieved using zeroth and first order local surface properties. The segmentation is then refined in the second module using high order

Bikash Sabata; Farshid Arman; J. K. Aggarwal

1993-01-01

119

3D imaging lidar for lunar robotic exploration  

NASA Astrophysics Data System (ADS)

Part of the requirements of the future Constellation program is to optimize lunar surface operations and reduce hazards to astronauts. Toward this end, many robotic platforms, rovers in particular, are being sought to carry out a multitude of missions involving survey of potential EVA sites, surface reconnaissance, path planning, and obstacle detection and classification. 3D imaging lidar technology provides an enabling capability that allows fast, accurate, and detailed collection of three-dimensional information about the rover's environment. The lidar images the region of interest by scanning a laser beam and measuring the pulse time-of-flight and the bearing. The accumulated set of laser ranges and bearings constitutes the three-dimensional image. As part of the ongoing NASA Ames Research Center activities in lunar robotics, the utility of 3D imaging lidar was evaluated by testing Optech's ILRIS-3D lidar on board the K-10 Red rover during the recent Human-Robotic Systems (HRS) field trials in Moses Lake, WA. This paper examines the results of the ILRIS-3D trials, presents the data obtained, and discusses its application in lunar surface robotic surveying and scouting.
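Accumulating laser ranges and bearings into a 3-D image, as described above, amounts to a spherical-to-Cartesian conversion per return. The axis convention below is one common robotics choice (x forward, y left, z up), not necessarily the ILRIS-3D's:

```python
import numpy as np

def lidar_returns_to_xyz(rng_m, az_rad, el_rad):
    """Convert range (m), azimuth, and elevation (rad) to Cartesian points.

    Range comes from the pulse time of flight (r = c*t/2); azimuth and
    elevation come from the scanner bearing at the moment of the pulse.
    """
    r, az, el = map(np.asarray, (rng_m, az_rad, el_rad))
    x = r * np.cos(el) * np.cos(az)
    y = r * np.cos(el) * np.sin(az)
    z = r * np.sin(el)
    return np.column_stack([x, y, z])

# A 10 m return straight ahead maps to the point (10, 0, 0)
pts = lidar_returns_to_xyz([10.0], [0.0], [0.0])
```

Stacking such points over a full scan produces the point cloud used for terrain mapping and obstacle classification.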

Hussein, Marwan W.; Tripp, Jeffrey W.

2009-05-01

120

A kinetic and 3D image input device  

Microsoft Academic Search

Gesture recognition in real time can bridge a gap between humans and computers. Object segmentation from the background is a critical problem in the conventional gesture recognition technology. We have developed a new input device which can detect a kinetic and 3D image of a hand in real time. We call it “Motion Processor”. The Motion Processor with infrared light

Shunichi Numazaki; Akira Morishita; Naoko Umeki; Minoru Ishikawa; Miwako Doi

1998-01-01

121

3-D transformations of images in scanline order  

Microsoft Academic Search

Currently, texture mapping onto projections of 3-D surfaces is time consuming and subject to considerable aliasing errors. Usually the procedure is to perform some inverse mapping from the area of the pixel onto the surface texture. It is difficult to do this correctly. There is an alternate approach where the texture surface is transformed as a 2-D image until it

Ed Catmull; Alvy Ray Smith

1980-01-01

122

Towards statistically optimal interpolation for 3D medical imaging  

Microsoft Academic Search

The use of a statistical estimation technique called kriging, which produces estimation-error measurements and analyzes the volumetric grid to determine sample-value variability, is described. The uses of interpolation in 3D medical imaging are first reviewed. Several different interpolation techniques, including linear, trilinear, and tricubic interpolation, are described and assessed. The kriging statistical estimation process is presented, and

Rob W. Parrott; Martin R. Stytz; Philli Amburn; D. Robinson

1993-01-01

123

Management of Impacted Cuspids Using 3-D Volumetric Imaging  

Microsoft Academic Search

Management of impacted cuspids is a complex clinical problem involving proper assessment and interdisciplinary treatment planning. In this paper, we describe the use of 3-D volumetric imaging in the management of impacted cuspids and illustrate this application in case reports of maxillary and mandibular impacted cuspids.

James Mah; Reyes Enciso; Michael Jorgensen

2003-01-01

124

3-D IMAGE PROCESSING IN THE FUTURE OF IMMERSIVE MEDIA  

Microsoft Academic Search

This survey paper discusses the 3D image-processing challenges posed by present and future immersive telecommunications, especially immersive videoconferencing and television. We introduce the concepts of presence, immersion, and co-presence, and discuss their relation to virtual collaborative environments in the context of communications. Several examples are used to illustrate the current state of the art. We highlight the crucial

Francesco Isgrò; Emanuele Trucco; Peter Kauff; Oliver Schreer

125

3D wavefront image formation for NIITEK GPR  

NASA Astrophysics Data System (ADS)

The U.S. Department of Defense Humanitarian Demining (HD) Research and Development Program focuses on developing, testing, demonstrating, and validating new technology for immediate use in humanitarian demining operations around the globe. Beginning in the late 1990's, the U.S. Army Countermine Division funded the development of the NIITEK ground penetrating radar (GPR) for detection of anti-tank (AT) landmines. This work is concerned with signal processing algorithms to suppress sources of artifacts in the NIITEK GPR, and formation of three-dimensional (3D) imagery from the resultant data. We first show that the NIITEK GPR data correspond to a 3D Synthetic Aperture Radar (SAR) database. An adaptive filtering method is utilized to suppress ground return and self-induced resonance (SIR) signals that are generated by the interaction of the radar-carrying platform and the transmitted radar signal. We examine signal processing methods to improve the fidelity of imagery for this 3D SAR system using pre-processing methods that suppress Doppler aliasing as well as other side lobe leakage artifacts that are introduced by the radar radiation pattern. The algorithm, known as digital spotlighting, imposes a filtering scheme on the azimuth-compressed SAR data, and manipulates the resultant spectral data to achieve a higher PRF to suppress the Doppler aliasing. We also present the 3D version of the Fourier-based wavefront reconstruction, a computationally-efficient and approximation-free SAR imaging method, for image formation with the NIITEK 3D SAR database.

Soumekh, Mehrdad; Ton, Tuan; Howard, Pete

2009-05-01

126

Holography of 3D surface reconstructed CT images.  

PubMed

A multiplex hologram (cylindrical holographic stereogram) was successfully made from three-dimensional (3D) surface-reconstructed CT images of a child with plagiocephaly. This method appears suitable as one of the projectional aids for 3D surface-reconstructed CT images, which are proving useful in plastic and reconstructive surgery. The principle of the method is described. Also discussed is the possibility of developing a computer-aided hologram-synthesizing system that could be used for images obtained with U-arm X-ray equipment (by cinefilm, videotape, or digital subtraction angiography), by CT, or by MR. For practical use, it is necessary for the hologram to be synthesized in a short time. One of the key problems in developing such a machine is the need for an incoherent-to-coherent image converter. PMID:3335666

Fujioka, M; Ohyama, N; Honda, T; Tsujiuchi, J; Suzuki, M; Hashimoto, S; Ikeda, S

1988-01-01

127

3-D object-oriented image analysis of geophysical data  

NASA Astrophysics Data System (ADS)

Geophysical data are the main source of information about the subsurface. Geophysical techniques are, however, highly non-unique in determining specific physical parameters and boundaries of subsurface objects. To obtain actual physical information, an inversion process is often applied, in which measurements at or above the Earth surface are inverted into a 2- or 3-D subsurface spatial distribution of the physical property. Interpreting these models into structural objects, related to physical processes, requires a priori knowledge and expert analysis which is susceptible to subjective choices and is therefore often non-repeatable. In this research, we implemented a recently introduced object-based approach to interpret the 3-D inversion results of a single geophysical technique using the available a priori information and the physical and geometrical characteristics of the interpreted objects. The introduced methodology is semi-automatic and repeatable, and allows the extraction of subsurface structures using 3-D object-oriented image analysis (3-D OOA) in an objective knowledge-based classification scheme. The approach allows for a semi-objective setting of thresholds that can be tested and, if necessary, changed in a very fast and efficient way. These changes require only changing the thresholds used in a so-called ruleset, which is composed of algorithms that extract objects from a 3-D data cube. The approach is tested on a synthetic model, which is based on a priori knowledge on objects present in the study area (Tanzania). Object characteristics and thresholds were well defined in a 3-D histogram of velocity versus depth, and objects were fully retrieved. The real model results showed how 3-D OOA can deal with realistic 3-D subsurface conditions in which the boundaries become fuzzy, the object extensions become unclear and the model characteristics vary with depth due to the different physical conditions. 
As expected, the 3-D histogram of the real data was substantially more complex. Still, the 3-D OOA-derived objects were extracted based on their velocity and their depth location. Spatially defined boundaries, based on physical variations, can improve the modelling with spatially dependent parameter information. With 3-D OOA, the non-uniqueness in the location of objects and in their physical properties can potentially be reduced significantly.
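The ruleset idea above can be pictured as boolean predicates over a 3-D attribute cube: an object class is a conjunction of per-attribute thresholds, which is what makes the thresholds easy to change and the extraction repeatable. A toy sketch follows; the velocity and depth values are purely hypothetical, not numbers from the study:

```python
import numpy as np

def classify_voxels(velocity, depth, v_min, depth_range):
    """Label voxels belonging to a high-velocity object within a depth window.

    A 'ruleset' in this style is just a conjunction of per-attribute
    thresholds applied to the whole 3-D data cube at once.
    """
    d_lo, d_hi = depth_range
    return (velocity >= v_min) & (depth >= d_lo) & (depth <= d_hi)

# Toy 3-D cube: velocity grows with the depth index along the last axis
depth = np.tile(np.arange(5.0), (4, 4, 1))          # depth index cube, shape (4, 4, 5)
velocity = 5.0 + depth                              # hypothetical km/s values
mask = classify_voxels(velocity, depth, v_min=7.0, depth_range=(1.0, 3.0))
```

Re-running the extraction with a different `v_min` or depth window is a one-line change, which mirrors the fast threshold re-testing the abstract describes.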

Fadel, I.; Kerle, N.; Meijde, M. van der

2014-07-01

128

3-D object-oriented image analysis of geophysical data  

NASA Astrophysics Data System (ADS)

Geophysical data are the main source of information about the subsurface. Geophysical techniques are, however, highly non-unique in determining specific physical parameters and boundaries of subsurface objects. To obtain actual physical information, an inversion process is often applied, in which measurements at or above the Earth surface are inverted into a 2- or 3-D subsurface spatial distribution of the physical property. Interpreting these models into structural objects, related to physical processes, requires a priori knowledge and expert analysis which is susceptible to subjective choices and is therefore often non-repeatable. In this research, we implemented a recently introduced object-based approach to interpret the 3-D inversion results of a single geophysical technique using the available a priori information and the physical and geometrical characteristics of the interpreted objects. The introduced methodology is semi-automatic and repeatable, and allows the extraction of subsurface structures using 3-D object-oriented image analysis (3-D OOA) in an objective knowledge-based classification scheme. The approach allows for a semi-objective setting of thresholds that can be tested and, if necessary, changed in a very fast and efficient way. These changes require only changing the thresholds used in a so-called ruleset, which is composed of algorithms that extract objects from a 3-D data cube. The approach is tested on a synthetic model, which is based on a priori knowledge on objects present in the study area (Tanzania). Object characteristics and thresholds were well defined in a 3-D histogram of velocity versus depth, and objects were fully retrieved. The real model results showed how 3-D OOA can deal with realistic 3-D subsurface conditions in which the boundaries become fuzzy, the object extensions become unclear and the model characteristics vary with depth due to the different physical conditions. 
As expected, the 3-D histogram of the real data was substantially more complex. Still, the 3-D OOA-derived objects were extracted based on their velocity and their depth location. Spatially defined boundaries, based on physical variations, can improve the modelling with spatially dependent parameter information. With 3-D OOA, the non-uniqueness on the location of objects and their physical properties can be potentially significantly reduced.
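The ruleset idea described above — thresholds applied to a 3-D property cube, followed by grouping of in-range voxels into objects — can be sketched in a few lines. The velocity window and the 6-connectivity choice below are illustrative assumptions, not the authors' actual ruleset:

```python
import numpy as np
from collections import deque

def extract_objects(cube, v_min, v_max):
    """Label 6-connected groups of voxels whose value lies in [v_min, v_max].
    cube: 3-D array of a physical property (e.g. seismic velocity, km/s).
    Returns an integer label volume (0 = background)."""
    mask = (cube >= v_min) & (cube <= v_max)
    labels = np.zeros(cube.shape, dtype=int)
    nbrs = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    current = 0
    for seed in zip(*np.nonzero(mask)):
        if labels[seed]:
            continue
        current += 1                      # start a new object
        labels[seed] = current
        queue = deque([seed])
        while queue:                      # breadth-first flood fill
            x, y, z = queue.popleft()
            for dx, dy, dz in nbrs:
                n = (x + dx, y + dy, z + dz)
                if (all(0 <= n[i] < cube.shape[i] for i in range(3))
                        and mask[n] and not labels[n]):
                    labels[n] = current
                    queue.append(n)
    return labels

# Toy cube: two separate high-velocity bodies among background voxels.
cube = np.zeros((4, 4, 4))
cube[0, 0, 0] = cube[0, 0, 1] = 6.5
cube[3, 3, 3] = 6.8
labels = extract_objects(cube, 6.0, 7.0)  # two distinct objects, labelled 1 and 2
```

Changing a threshold in such a ruleset only means re-running the classification, which mirrors the "fast and efficient" retuning the abstract describes.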

Fadel, I.; Kerle, N.; Meijde, M. van der

2014-05-01

129

Automatic Feature Extraction from 3D Range Images of Skulls  

Microsoft Academic Search

The extraction of a representative set of features has always been a challenging research topic in image analysis. This interest is even more important when dealing with 3D images. The huge size of these datasets, together with the complexity of the tasks where they are needed, demands new approaches to the feature extraction problem. The need of an automatic procedure

Lucia Ballerini; Marcello Calisti; Oscar Cordón; Sergio Damas; Jose Santamaría

2008-01-01

130

Digital holography particle image velocimetry for 3D flow measurement  

Microsoft Academic Search

A digital in-line holographic recording system was used for holographic particle image velocimetry in a 3D flow measurement, forming a new full-field experimental technique in fluid mechanics--DHPIV. In this experiment, the traditional holographic film was replaced by a CCD chip that records the interference fringes directly, without darkroom processing, and the virtual image slices

Runjie Wei; Gongxin Shen; Hanquan Ding

2003-01-01

131

Photoacoustic ophthalmoscopy for in vivo retinal imaging.  

PubMed

We have developed a non-invasive photoacoustic ophthalmoscopy (PAOM) for in vivo retinal imaging. PAOM detects the photoacoustic signal induced by pulsed laser light shined onto the retina. By using a stationary ultrasonic transducer in contact with the eyelids and scanning only the laser light across the retina, PAOM provides volumetric imaging of the retinal micro-vasculature and retinal pigment epithelium at a high speed. For B-scan frames containing 256 A-lines, the current PAOM has a frame rate of 93 Hz, which is comparable with state-of-the-art commercial spectral-domain optical coherence tomography (SD-OCT). By integrating PAOM with SD-OCT, we further achieved OCT-guided PAOM, which can provide multi-modal retinal imaging simultaneously. The capabilities of this novel technology were demonstrated by imaging both the microanatomy and microvasculature of the rat retina in vivo. PMID:20389409

Jiao, Shuliang; Jiang, Minshan; Hu, Jianming; Fawzi, Amani; Zhou, Qifa; Shung, K Kirk; Puliafito, Carmen A; Zhang, Hao F

2010-02-15

132

3D interfractional patient position verification using 2D-3D registration of orthogonal images  

SciTech Connect

Reproducible positioning of the patient during fractionated external beam radiation therapy is imperative to ensure that the delivered dose distribution matches the planned one. In this paper, we expand on a 2D-3D image registration method to verify a patient's setup in three dimensions (rotations and translations) using orthogonal portal images and megavoltage digitally reconstructed radiographs (MDRRs) derived from CT data. The accuracy of 2D-3D registration was improved by employing additional image preprocessing steps and a parabolic fit to interpolate the parameter space of the cost function utilized for registration. Using a humanoid phantom, precision for registration of three-dimensional translations was found to be better than 0.5 mm (1 s.d.) for any axis when no rotations were present. Three-dimensional rotations about any axis were registered with a precision of better than 0.2 deg. (1 s.d.) when no translations were present. Combined rotations and translations of up to 4 deg. and 15 mm were registered with 0.4 deg. and 0.7 mm accuracy for each axis. The influence of setup translations on registration of rotations and vice versa was also investigated and mostly agrees with a simple geometric model. Additionally, the dependence of registration accuracy on three cost functions, angular spacing between MDRRs, pixel size, and field-of-view was examined. Best results were achieved by mutual information using 0.5 deg. angular spacing and a 10x10 cm² field-of-view with 140x140 pixels. Approximating patient motion as a rigid transformation, the registration method is applied to two treatment plans and the patients' setup errors are determined. Their magnitude was found to be ≤6.1 mm and ≤2.7 deg. for any axis in all of the six fractions measured for each treatment plan.
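The parabolic fit mentioned above is, in spirit, the standard sub-sample refinement: fit a parabola through the cost values bracketing the discrete optimum and take its vertex. A minimal sketch (the exact interpolation scheme in the paper may differ):

```python
def parabolic_minimum(x, c):
    """Vertex abscissa of the parabola through three samples (x[i], c[i])
    of a cost function bracketing its discrete minimum."""
    (x0, x1, x2), (c0, c1, c2) = x, c
    # Vertex of the Lagrange parabola through the three points
    num = (x1 - x0) ** 2 * (c1 - c2) - (x1 - x2) ** 2 * (c1 - c0)
    den = (x1 - x0) * (c1 - c2) - (x1 - x2) * (c1 - c0)
    return x1 - 0.5 * num / den

# Cost sampled on a 1-unit grid; the continuum minimum sits at 2.3.
samples = [1.0, 2.0, 3.0]
costs = [(a - 2.3) ** 2 for a in samples]
best = parabolic_minimum(samples, costs)  # recovers 2.3 exactly for a parabola
```

This is how registration parameters finer than the MDRR angular spacing can be reported without computing additional DRRs.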

Jans, H.-S.; Syme, A.M.; Rathee, S.; Fallone, B.G. [Department of Medical Physics, Cross Cancer Institute, Departments of Oncology and Physics, University of Alberta, 11560 University Avenue, Edmonton, Alberta T6G IZ2 (Canada); Department of Medical Physics, Cross Cancer Institute, Department of Oncology, University of Alberta, 11560 University Avenue, Edmonton, Alberta T6G IZ2 (Canada); Department of Medical Physics, Cross Cancer Institute, Departments of Oncology and Physics, University of Alberta, 11560 University Avenue, Edmonton, Alberta T6G IZ2 (Canada)

2006-05-15

133

Refraction Correction in 3D Transcranial Ultrasound Imaging  

PubMed Central

We present the first correction of refraction in three-dimensional (3D) ultrasound imaging using an iterative approach that traces propagation paths through a two-layer planar tissue model, applying Snell’s law in 3D. This approach is applied to real-time 3D transcranial ultrasound imaging by precomputing delays offline for several skull thicknesses, allowing the user to switch between three sets of delays for phased array imaging at the push of a button. Simulations indicate that refraction correction may be expected to increase sensitivity, reduce beam steering errors, and partially restore lost spatial resolution, with the greatest improvements occurring at the largest steering angles. Distorted images of cylindrical lesions were created by imaging through an acrylic plate in a tissue-mimicking phantom. As a result of correcting for refraction, lesions were restored to 93.6% of their original diameter in the lateral direction and 98.1% of their original shape along the long axis of the cylinders. In imaging two healthy volunteers, the mean brightness increased by 8.3% and showed no spatial dependency.
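The core geometric step — Snell's law applied in 3D at a planar layer boundary — can be written in vector form. A hedged sketch (the function and the generic index ratio are illustrative; for ultrasound the effective "index" of each layer is inversely proportional to its sound speed):

```python
import numpy as np

def refract(d, n, n1, n2):
    """Vector form of Snell's law in 3D. d: incident unit direction,
    n: unit normal of the planar interface, n1 -> n2: effective indices.
    Returns the refracted unit vector, or None on total internal reflection."""
    d = d / np.linalg.norm(d)
    n = n / np.linalg.norm(n)
    cos_i = -float(n @ d)
    if cos_i < 0.0:                 # flip the normal to face the incoming ray
        n, cos_i = -n, -cos_i
    r = n1 / n2
    k = 1.0 - r * r * (1.0 - cos_i * cos_i)
    if k < 0.0:
        return None                 # total internal reflection
    return r * d + (r * cos_i - np.sqrt(k)) * n

# 30-degree incidence onto a horizontal interface, index ratio 1.0 -> 1.5
inc = np.array([np.sin(np.radians(30.0)), 0.0, -np.cos(np.radians(30.0))])
t = refract(inc, np.array([0.0, 0.0, 1.0]), 1.0, 1.5)
# sin(theta_t) = sin(30 deg)/1.5, as Snell's law requires
```

Tracing each beamforming path through such an interface, for a few assumed skull thicknesses, is what allows the corrected delays to be precomputed offline.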

Lindsey, Brooks D.; Smith, Stephen W.

2014-01-01

134

Refraction correction in 3D transcranial ultrasound imaging.  

PubMed

We present the first correction of refraction in three-dimensional (3D) ultrasound imaging using an iterative approach that traces propagation paths through a two-layer planar tissue model, applying Snell's law in 3D. This approach is applied to real-time 3D transcranial ultrasound imaging by precomputing delays offline for several skull thicknesses, allowing the user to switch between three sets of delays for phased array imaging at the push of a button. Simulations indicate that refraction correction may be expected to increase sensitivity, reduce beam steering errors, and partially restore lost spatial resolution, with the greatest improvements occurring at the largest steering angles. Distorted images of cylindrical lesions were created by imaging through an acrylic plate in a tissue-mimicking phantom. As a result of correcting for refraction, lesions were restored to 93.6% of their original diameter in the lateral direction and 98.1% of their original shape along the long axis of the cylinders. In imaging two healthy volunteers, the mean brightness increased by 8.3% and showed no spatial dependency. PMID:24275538

Lindsey, Brooks D; Smith, Stephen W

2014-01-01

135

Integration of real-time 3D image acquisition and multiview 3D display  

NASA Astrophysics Data System (ADS)

Seamless integration of 3D acquisition and 3D display systems offers an enhanced experience in 3D visualization of real-world objects or scenes. The vivid representation of captured 3D objects displayed on a glasses-free 3D display screen can bring a realistic viewing experience to viewers, as if they were viewing a real-world scene. Although the technologies in 3D acquisition and 3D display have advanced rapidly in recent years, little effort has gone into studying the seamless integration of these two different aspects of 3D technology. In this paper, we describe our recent progress on integrating a light-field 3D acquisition system and an autostereoscopic multiview 3D display for real-time light field capture and display. This paper focuses on both the architecture design and the implementation of the hardware and the software of this integrated 3D system. A prototype of the integrated 3D system is built to demonstrate the real-time 3D acquisition and 3D display capability of our proposed system.

Zhang, Zhaoxing; Geng, Zheng; Li, Tuotuo; Li, Wei; Wang, Jingyi; Liu, Yongchun

2014-03-01

136

Generation of photorealistic 3D image using optical digitizer.  

PubMed

A technique to generate a photorealistic three-dimensional (3D) image and color-textured model using a dedicated optical digitizer is presented. The proposed technique is started with the range and texture image acquisition from different viewpoints, followed by the registration and integration of multiple range images to get a complete and nonredundant point cloud that represents a real-life object. The accuracy of the range image and the precision of correspondence between the range image and texture image are guaranteed by sensor system calibration. Based on the point cloud, a geometric model is established by considering the connectivity of adjacent range image points. In order to enhance the photorealistic effect, we suggest a texture blending technique that utilizes a composite-weight strategy to blend the texture images within the overlapped region. This technique allows more efficient removal of the artifacts existing in the registered texture image, leading to a 3D image with photorealistic quality and color-texture modeling. Experimental results are also presented to testify to the validity of the proposed method. PMID:22441476
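The composite-weight blending strategy can be illustrated as a per-pixel weighted average over the overlap region, with weights that fall off toward each image's boundary so seams fade out. The ramp weights below are an illustrative choice, not the paper's exact weighting:

```python
import numpy as np

def blend_textures(textures, weights):
    """Composite-weight blending: per-pixel weighted average of the
    overlapping texture images. Pixels covered by one image keep its
    value; overlap pixels mix according to the weights."""
    textures = np.asarray(textures, dtype=float)
    weights = np.asarray(weights, dtype=float)
    wsum = weights.sum(axis=0)
    wsum[wsum == 0] = 1.0            # avoid division by zero outside coverage
    return (weights * textures).sum(axis=0) / wsum

# Two 1x6 texture strips overlapping in the middle two pixels.
tex_a = np.array([[10, 10, 10, 10, 0, 0]], float)
tex_b = np.array([[0, 0, 20, 20, 20, 20]], float)
# Ramp weights: confidence falls toward each image's boundary.
w_a = np.array([[1.0, 1.0, 0.75, 0.25, 0.0, 0.0]])
w_b = np.array([[0.0, 0.0, 0.25, 0.75, 1.0, 1.0]])
blended = blend_textures([tex_a, tex_b], [w_a, w_b])
```

In the overlap, the result ramps smoothly from one texture's values to the other's, which is what suppresses visible artifacts at patch boundaries.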

Liu, X M; Peng, X; Yin, Y K; Li, A M; Liu, X L; Wu, W

2012-03-20

137

Prototype of video endoscopic capsule with 3-d imaging capabilities.  

PubMed

Wireless video capsules can now carry out gastroenterological examinations. The images make it possible to analyze some diseases during post-examination review, but the gastroenterologist could make a direct diagnosis if the video capsule integrated vision algorithms. The first step toward in situ diagnosis is the implementation of 3-D imaging techniques in the video capsule. By transmitting only the diagnosis instead of the images, the video capsule autonomy is increased. This paper focuses on the Cyclope project, an embedded active vision system that is able to provide 3-D and texture data in real time. The challenge is to realize this integrated sensor with constraints on size, consumption, and processing, which are inherent limitations of the video capsule. We present the hardware and software development of a wireless multispectral vision sensor which enables the transmission of the 3-D reconstruction of a scene in real time. An FPGA-based prototype has been designed to show the proof of concept. Experiments in the laboratory, in vitro, and in vivo on a pig have been performed to determine the performance of the 3-D vision system. A roadmap toward the integrated system is set out. PMID:23853370

Kolar, Anthony; Romain, Olivier; Ayoub, Jad; Viateur, Sylvain; Granado, Bertrand

2010-08-01

138

Optical-CT imaging of complex 3D dose distributions  

PubMed Central

The limitations of conventional dosimeters restrict the comprehensiveness of verification that can be performed for advanced radiation treatments, presenting an immediate and substantial problem for clinics attempting to implement these techniques. In essence, the rapid advances in the technology of radiation delivery have not been paralleled by corresponding advances in the ability to verify these treatments. Optical-CT gel dosimetry is a relatively new technique with the potential to address this imbalance by providing high resolution 3D dose maps in polymer and radiochromic gel dosimeters. We have constructed a first-generation optical-CT scanner capable of high resolution 3D dosimetry and applied it to a number of simple and increasingly complex dose distributions including intensity-modulated radiation therapy (IMRT). Prior to application to IMRT, the robustness of optical-CT gel dosimetry was investigated on geometry and variable attenuation phantoms. Physical techniques and image processing methods were developed to minimize the deleterious effects of refraction, reflection, and scattering of laser light. Here we present results of investigations into achieving accurate high-resolution 3D dosimetry with optical-CT, and show clinical examples of 3D IMRT dosimetry verification. In conclusion, optical-CT gel dosimetry can provide high resolution 3D dose maps that greatly facilitate comprehensive verification of complex 3D radiation treatments. Good agreement was observed at high dose levels (>50%) between planned and measured dose distributions. Some systematic discrepancies were observed, however (rms discrepancy 3% at high dose levels), indicating that further work is required to eliminate confounding factors presently compromising the accuracy of optical-CT 3D gel dosimetry.

Oldham, Mark; Kim, Leonard; Hugo, Geoffrey

2006-01-01

139

Computation of 3-D velocity fields from 3-D cine CT images of a human heart  

Microsoft Academic Search

A method of computing the three-dimensional (3-D) velocity field from 3-D cine computer tomographs (CTs) of a beating heart is proposed. Using continuum theory, the authors develop two constraints on the 3-D velocity field generated by a beating heart. With these constraints, the computation of the 3-D velocity field is formulated as an optimization problem and a solution to the

Samuel M. Song; Richard M. Leahy

1991-01-01

140

3D imaging of fetus vertebra by synchrotron radiation microtomography  

NASA Astrophysics Data System (ADS)

A synchrotron radiation computed microtomography system allowing high resolution 3D imaging of bone samples has been developed at the ESRF. The system uses a high resolution 2D detector based on a CCD camera coupled to a fluorescent screen through light optics. The spatial resolution of the device is particularly well adapted to imaging bone structure. In view of studying growth, vertebra samples from fetuses of different gestational ages were imaged. The first results show that fetal vertebrae are quite different from adult bone, both in terms of density and organization.

Peyrin, Francoise; Salome, Murielle; Denis, Frederic; Braillon, Pierre; Laval-Jeantet, Anne-Marie; Cloetens, Peter

1997-10-01

141

3D imaging of the mesospheric emissive layer  

NASA Astrophysics Data System (ADS)

A new and original stereo-imaging method is introduced to measure the altitude of the OH airglow layer and provide a 3D map of the altitude of the layer centroid. Near-IR photographs of the layer are taken at two sites 645 km apart. Each photograph is processed in order to invert the perspective effect and provide a satellite-type view of the layer. When superposed, the two views present a common diamond-shaped area. Pairs of matched points that correspond to a physical emissive point in the common area are identified by calculating a normalized cross-correlation coefficient. This method is suitable for obtaining 3D representations in the case of low-contrast objects. An observational campaign was conducted in July 2006 in Peru. The images were taken simultaneously at Cerro Cosmos (12° 09' 08.2" S, 75° 33' 49.3" W, altitude 4630 m) close to Huancayo and Cerro Verde Tellolo (16° 33' 17.6" S, 71° 39' 59.4" W, altitude 2330 m) close to Arequipa. 3D maps of the layer surface are retrieved. They are compared with pseudo-relief intensity maps of the same region. The mean altitude of the emission barycenter is located at 87.1 km on July 26 and 89.5 km on July 28. Comparable relief wavy features appear in the 3D and intensity maps.
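The matching criterion — a normalized cross-correlation coefficient — subtracts the patch means and divides by the patch norms, making the score insensitive to brightness and contrast changes; this is what makes it usable on low-contrast airglow images. A minimal sketch:

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation coefficient of two same-size patches,
    in [-1, 1]. Mean subtraction and norm division remove additive and
    multiplicative intensity differences between the two views."""
    a = np.asarray(a, float).ravel()
    b = np.asarray(b, float).ravel()
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a @ a) * (b @ b))
    return float(a @ b / denom) if denom else 0.0

patch = np.array([[1.0, 2.0], [3.0, 4.0]])
score = ncc(patch, 2 * patch + 5)   # affine intensity change: coefficient is 1.0
```

In the stereo method, the pixel pairing that maximizes this coefficient over the common diamond-shaped area defines the matched points used for triangulation.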

Nadjib Kouahla, Mohamed; Faivre, Michael; Moreels, Guy; Clairemidi, Jacques; Mougin-Sisini, Davy; Meriwether, John W.; Lehmacher, Gerald A.; Vidal, Erick; Veliz, Oskar

142

Linear tracking for 3-D medical ultrasound imaging.  

PubMed

As its clinical applications grow, 3-D ultrasound imaging is undergoing rapid technical development. Compared with 2-D ultrasound imaging, 3-D ultrasound imaging can provide improved qualitative and quantitative information for various clinical applications. In this paper, we proposed a novel tracking method for a freehand 3-D ultrasound imaging system with improved portability, reduced degrees of freedom, and lower cost. We designed a sliding track with a linear position sensor attached, and it transmitted positional data via a wireless communication module based on Bluetooth, resulting in a wireless spatial tracking modality. A traditional 2-D ultrasound probe fixed to the position sensor on the sliding track was used to obtain real-time B-scans, and the positions of the B-scans were simultaneously acquired when moving the probe along the track in a freehand manner. In the experiments, the proposed method was applied to ultrasound phantoms and real human tissues. The results demonstrated that the new system outperformed a previously developed freehand system based on a traditional six-degree-of-freedom spatial sensor in phantom and in vivo studies, indicating its merit in clinical applications for human tissues and organs. PMID:23757592

Huang, Qing-Hua; Yang, Zhao; Hu, Wei; Jin, Lian-Wen; Wei, Gang; Li, Xuelong

2013-12-01

143

3D imaging: how to achieve highest accuracy  

NASA Astrophysics Data System (ADS)

The generation of 3D information from images is a key technology in many different areas, e.g. in 3D modeling and representation of architectural or heritage objects, in human body motion tracking and scanning, in 3D scene analysis of traffic scenes, in industrial applications and many more. The basic concepts rely on mathematical representations of central perspective viewing as they are widely known from photogrammetry or computer vision approaches. The objectives of these methods differ, more or less, from high precision and well-structured measurements in (industrial) photogrammetry to fully-automated non-structured applications in computer vision. Accuracy and precision is a critical issue for the 3D measurement of industrial, engineering or medical objects. As state of the art, photogrammetric multi-view measurements achieve relative precisions in the order of 1:100000 to 1:200000, and relative accuracies with respect to retraceable lengths in the order of 1:50000 to 1:100000 of the largest object diameter. In order to obtain these figures a number of influencing parameters have to be optimized. These are, besides others: physical representation of object surface (targets, texture), illumination and light sources, imaging sensors, cameras and lenses, calibration strategies (camera model), orientation strategies (bundle adjustment), image processing of homologue features (target measurement, stereo and multi-image matching), representation of object or workpiece coordinate systems and object scale. The paper discusses the above mentioned parameters and offers strategies for obtaining highest accuracy in object space. Practical examples of high-quality stereo camera measurements and multi-image applications are used to prove the relevance of high accuracy in different applications, ranging from medical navigation to static and dynamic industrial measurements. 
In addition, standards for accuracy verifications are presented and demonstrated by practical examples and tests.
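The quoted relative figures translate directly into object-space numbers; for example, a relative accuracy of 1:100000 applied to a 2 m object diameter corresponds to 0.02 mm:

```python
def object_space_accuracy(relative, diameter_m):
    """Absolute object-space accuracy (in mm) from a relative figure such
    as 1:100000, referred to the largest object diameter."""
    return diameter_m * 1000.0 / relative

acc = object_space_accuracy(100000, 2.0)  # 2 m object at 1:100000 -> 0.02 mm
```

The same arithmetic shows why the influencing parameters listed above matter: halving the relative figure doubles the achievable object-space error budget.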

Luhmann, Thomas

2011-06-01

144

Method for extracting the aorta from 3D CT images  

NASA Astrophysics Data System (ADS)

Bronchoscopic biopsy of the central-chest lymph nodes is vital in the staging of lung cancer. Three-dimensional multi-detector CT (MDCT) images provide vivid anatomical detail for planning bronchoscopy. Unfortunately, many lymph nodes are situated close to the aorta, and an inadvertent needle biopsy could puncture the aorta, causing serious harm. As an eventual aid for more complete planning of lymph-node biopsy, it is important to define the aorta. This paper proposes a method for extracting the aorta from a 3D MDCT chest image. The method has two main phases: (1) Off-line Model Construction, which provides a set of training cases for fitting new images, and (2) On-Line Aorta Construction, which is used for new incoming 3D MDCT images. Off-Line Model Construction is done once using several representative human MDCT images and consists of the following steps: construct a likelihood image, select control points of the medial axis of the aortic arch, and recompute the control points to obtain a constant-interval medial-axis model. On-Line Aorta Construction consists of the following operations: construct a likelihood image, perform global fitting of the precomputed models to the current case's likelihood image to find the best fitting model, perform local fitting to adjust the medial axis to local data variations, and employ a region recovery method to arrive at the complete constructed 3D aorta. The region recovery method consists of two steps: model-based and region-growing steps. This region growing method can recover regions outside the model coverage and non-circular tube structures. In our experiments, we used three models and achieved satisfactory results on twelve of thirteen test cases.

Taeprasartsit, Pinyo; Higgins, William E.

2007-03-01

145

Comparison of 3D Set Partitioning Methods in Hyperspectral Image Compression Featuring an Improved 3D-SPIHT  

Microsoft Academic Search

Summary form only given. Hyperspectral images were generated through the collection of hundreds of narrow and contiguously spaced spectral bands of data producing a highly correlated long sequence of images. An investigation and comparison was made on the performance of several three-dimensional embedded wavelet algorithms for compression of hyperspectral images. These algorithms include 3D-SPIHT, AT-3DSPIHT, 3D-SPECK (three-dimensional set partitioned embedded

Xiaoli Tang; Sungdae Cho; William A. Pearlman

2003-01-01

146

Image Appraisal for 2D and 3D Electromagnetic Inversion  

SciTech Connect

Linearized methods are presented for appraising image resolution and parameter accuracy in images generated with two and three dimensional non-linear electromagnetic inversion schemes. When direct matrix inversion is employed, the model resolution and posterior model covariance matrices can be directly calculated. A method to examine how the horizontal and vertical resolution varies spatially within the electromagnetic property image is developed by examining the columns of the model resolution matrix. Plotting the square root of the diagonal of the model covariance matrix yields an estimate of how errors in the inversion process such as data noise and incorrect a priori assumptions about the imaged model map into parameter error. This type of image is shown to be useful in analyzing spatial variations in the image sensitivity to the data. A method is analyzed for statistically estimating the model covariance matrix when the conjugate gradient method is employed rather than a direct inversion technique (for example in 3D inversion). A method for calculating individual columns of the model resolution matrix using the conjugate gradient method is also developed. Examples of the image analysis techniques are provided on 2D and 3D synthetic cross well EM data sets, as well as a field data set collected at the Lost Hills Oil Field in Central California.
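For the direct-inversion case described above, both appraisal quantities follow from the generalized inverse. A sketch for a damped least-squares inverse (the damping form is an illustrative assumption; the paper's regularization may differ):

```python
import numpy as np

def resolution_and_covariance(J, lam, sigma_d):
    """Appraisal for the damped generalized inverse
    G = (J^T J + lam*I)^{-1} J^T of a linearized problem d = J m.
    Returns the model resolution matrix R = G J (how the estimate smears
    the true model) and the posterior covariance C = sigma_d^2 * G G^T
    (how data noise maps into parameter error)."""
    JtJ = J.T @ J
    G = np.linalg.solve(JtJ + lam * np.eye(JtJ.shape[0]), J.T)
    return G @ J, sigma_d ** 2 * (G @ G.T)

rng = np.random.default_rng(0)
J = rng.standard_normal((20, 5))       # hypothetical Jacobian: 20 data, 5 params
R, C = resolution_and_covariance(J, lam=0.0, sigma_d=0.1)
# Undamped, full-rank case: R is the identity (perfect resolution).
# Adding damping shrinks trace(R) below the parameter count, and the
# square roots of diag(C) give the parameter-error image described above.
```

Plotting individual columns of R is exactly the spatial-resolution analysis the abstract describes; for conjugate-gradient inversions, those columns must instead be estimated iteratively.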

Alumbaugh, D.L.; Newman, G.A.

1999-01-28

147

Phantom image results of an optimized full 3D USCT  

NASA Astrophysics Data System (ADS)

A promising candidate for improved imaging of breast cancer is ultrasound computer tomography (USCT). Current experimental USCT systems are still focused in elevation dimension resulting in a large slice thickness, limited depth of field, loss of out-of-plane reflections, and a large number of movement steps to acquire a stack of images. 3D USCT emitting and receiving spherical wave fronts overcomes these limitations. We built an optimized 3D USCT with nearly isotropic 3D PSF, realizing for the first time the full benefits of a 3D system. In this paper results of the 3D point spread function measured with a dedicated phantom and images acquired with a clinical breast phantom are presented. The point spread function could be shown to be nearly isotropic in 3D, to have very low spatial variability and fit the predicted values. The contrast of the phantom images is very satisfactory in spite of imaging with a sparse aperture. The resolution and imaged details of the reflectivity reconstruction are comparable to a 3 Tesla MRI volume of the breast phantom. Image quality and resolution is isotropic in all three dimensions, confirming the successful optimization experimentally.

Ruiter, Nicole V.; Zapf, Michael; Hopp, Torsten; Dapp, Robin; Gemmeke, Hartmut

2012-02-01

148

Automated Recognition of 3D Features in GPIR Images  

NASA Technical Reports Server (NTRS)

A method of automated recognition of three-dimensional (3D) features in images generated by ground-penetrating imaging radar (GPIR) is undergoing development. GPIR 3D images can be analyzed to detect and identify such subsurface features as pipes and other utility conduits. Until now, much of the analysis of GPIR images has been performed manually by expert operators who must visually identify and track each feature. The present method is intended to satisfy a need for more efficient and accurate analysis by means of algorithms that can automatically identify and track subsurface features, with minimal supervision by human operators. In this method, data from multiple sources (for example, data on different features extracted by different algorithms) are fused together for identifying subsurface objects. The algorithms of this method can be classified in several different ways. In one classification, the algorithms fall into three classes: (1) image-processing algorithms, (2) feature- extraction algorithms, and (3) a multiaxis data-fusion/pattern-recognition algorithm that includes a combination of machine-learning, pattern-recognition, and object-linking algorithms. The image-processing class includes preprocessing algorithms for reducing noise and enhancing target features for pattern recognition. The feature-extraction algorithms operate on preprocessed data to extract such specific features in images as two-dimensional (2D) slices of a pipe. Then the multiaxis data-fusion/ pattern-recognition algorithm identifies, classifies, and reconstructs 3D objects from the extracted features. In this process, multiple 2D features extracted by use of different algorithms and representing views along different directions are used to identify and reconstruct 3D objects. 
In object linking, which is an essential part of this process, features identified in successive 2D slices and located within a threshold radius of identical features in adjacent slices are linked in a directed-graph data structure. Relative to past approaches, this multiaxis approach offers the advantages of more reliable detections, better discrimination of objects, and provision of redundant information, which can be helpful in filling gaps in feature recognition by one of the component algorithms. The image-processing class also includes postprocessing algorithms that enhance identified features to prepare them for further scrutiny by human analysts (see figure). Enhancement of images as a postprocessing step is a significant departure from traditional practice, in which enhancement of images is a preprocessing step.
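The object-linking step described above — connecting features in adjacent slices that fall within a threshold radius — can be sketched as a simple directed-graph construction (the feature positions and the brute-force neighbor search are illustrative):

```python
import math

def link_features(slices, radius):
    """Link 2-D feature detections in successive slices into a directed
    graph: a feature in slice k is connected to any feature in slice k+1
    lying within `radius` of it.

    slices : list of lists of (x, y) feature positions, one list per slice.
    Returns edges as ((slice, index), (slice + 1, index)) pairs."""
    edges = []
    for k in range(len(slices) - 1):
        for i, (x0, y0) in enumerate(slices[k]):
            for j, (x1, y1) in enumerate(slices[k + 1]):
                if math.hypot(x1 - x0, y1 - y0) <= radius:
                    edges.append(((k, i), (k + 1, j)))
    return edges

# A pipe cross-section drifting slowly across three slices, plus one clutter hit.
slices = [[(10.0, 10.0)], [(10.5, 10.2), (40.0, 40.0)], [(11.0, 10.4)]]
edges = link_features(slices, radius=2.0)
# The drifting feature is chained through all three slices; the clutter
# detection at (40, 40) acquires no edges and stays isolated.
```

Long chains in this graph are the candidate 3D objects (pipes), while isolated nodes are the spurious single-slice detections the fusion step discards.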

Park, Han; Stough, Timothy; Fijany, Amir

2007-01-01

149

Retinal imaging using adaptive optics technology?  

PubMed Central

Adaptive optics (AO) is a technology used to improve the performance of optical systems by reducing the effect of wavefront distortions. Retinal imaging using AO aims to compensate for higher-order aberrations originating from the cornea and the lens by using a deformable mirror. The main application of AO retinal imaging has been to assess photoreceptor cell density, spacing, and mosaic regularity in normal and diseased eyes. Apart from photoreceptors, the retinal pigment epithelium, retinal nerve fiber layer, retinal vessel wall and lamina cribrosa can also be visualized with AO technology. Recent interest in AO technology in eye research has resulted in a growing number of reports and publications utilizing this technology in both animals and humans. With the availability of the first commercially available instruments, AO technology is being transformed from a research tool into a diagnostic instrument. The current challenges include imaging eyes with less than perfect optical media, the formation of normative databases for acquired images such as cone mosaics, and the cost of the technology. The opportunities for AO will include more detailed diagnosis, with description of some new findings in retinal diseases and glaucoma, as well as expansion of AO into clinical trials, which has already started.

Kozak, Igor

2014-01-01

150

Retinal imaging using adaptive optics technology.  

PubMed

Adaptive optics (AO) is a technology used to improve the performance of optical systems by reducing the effect of wavefront distortions. Retinal imaging using AO aims to compensate for higher-order aberrations originating from the cornea and the lens by using a deformable mirror. The main application of AO retinal imaging has been to assess photoreceptor cell density, spacing, and mosaic regularity in normal and diseased eyes. Apart from photoreceptors, the retinal pigment epithelium, retinal nerve fiber layer, retinal vessel wall and lamina cribrosa can also be visualized with AO technology. Recent interest in AO technology in eye research has resulted in a growing number of reports and publications utilizing this technology in both animals and humans. With the availability of the first commercially available instruments, AO technology is being transformed from a research tool into a diagnostic instrument. The current challenges include imaging eyes with less than perfect optical media, the formation of normative databases for acquired images such as cone mosaics, and the cost of the technology. The opportunities for AO will include more detailed diagnosis, with description of some new findings in retinal diseases and glaucoma, as well as expansion of AO into clinical trials, which has already started. PMID:24843304

Kozak, Igor

2014-04-01

151

Inter-Image Statistics for 3D Environment Modeling  

Microsoft Academic Search

In this article we present a method for automatically recovering complete and dense depth maps of an indoor environment by fusing incomplete data for the 3D environment modeling problem. The geometry of indoor environments is usually extracted by acquiring a huge amount of range data and registering it. By acquiring a small set of intensity images and a very limited

Luz Abril Torres-méndez; Gregory Dudek

2008-01-01

152

3-D GPR IMAGING OF THE NEODANI FAULT, CENTRAL JAPAN  

Microsoft Academic Search

GPR data collected across a segment of the Neodani Fault in the Tokai region of central Japan represents the first 3-D GPR data successfully collected across a major seismogenic fault in Japan. Despite the inherent difficulty of GPR to significantly penetrate wet, clay-rich soils, a 3 meter bedrock offset across the fault was imaged through 3-6 meters of saturated unconsolidated

Ernest C. Hauser; Daiei Inoue

153

An Approach to Finding and Refinement Planes in 3D Points Cloud, Obtained Under 3D Recovery from Image Set  

Microsoft Academic Search

An algorithm for structure analysis of an input 3D point cloud and discrimination of planes is proposed. The algorithm is based on the hierarchical and randomized Hough transform. The algorithm allows detecting image regions corresponding to planes instead of separate points, and partially converting the 3D model from a cloud of points to a refined mesh.

Ekaterina V. Semeikina; Dmitry V. Yurin

154

Triangulation Based 3D Laser Imaging for Fracture Orientation Analysis  

NASA Astrophysics Data System (ADS)

Laser imaging has recently been identified as a potential tool for rock mass characterization. This contribution focuses on the application of triangulation based, short-range laser imaging to determine fracture orientation and surface texture. This technology measures the distance to the target by triangulating the projected and reflected laser beams, and also records the reflection intensity. In this study, we acquired 3D laser images of rock faces using the Laser Camera System (LCS), a portable instrument developed by Neptec Design Group (Ottawa, Canada). The LCS uses an infrared laser beam and is immune to the lighting conditions. The maximum image resolution is 1024 x 1024 volumetric image elements. Depth resolution is 0.5 mm at 5 m. An above ground field trial was conducted at a blocky road cut with well defined joint sets (Kingston, Ontario). An underground field trial was conducted at the Inco 175 Ore body (Sudbury, Ontario) where images were acquired in the dark and the joint set features were more subtle. At each site, from a distance of 3 m away from the rock face, a grid of six images (approximately 1.6 m by 1.6 m) was acquired at maximum resolution with 20% overlap between adjacent images. This corresponds to a density of 40 image elements per square centimeter. Polyworks, a high density 3D visualization software tool, was used to align and merge the images into a single digital triangular mesh. The conventional method of determining fracture orientations is by manual measurement using a compass. In order to be accepted as a substitute for this method, the LCS should be capable of performing at least to the capabilities of manual measurements. To compare fracture orientation estimates derived from the 3D laser images to manual measurements, 160 inclinometer readings were taken at the above ground site. Three prominent joint sets (strike/dip: 236/09, 321/89, 325/01) were identified by plotting the joint poles on a stereonet. 
Underground, two main joint sets (strike/dip: 060/00, 114/86) were identified from 49 manual inclinometer measurements. A stereonet of joint poles from the 3D laser data was generated using the commercial software Split-FX. Joint sets were identified successfully and their orientations correlated well with the hand measurements. However, Split-FX overlays a simple 2D grid of equal-sized triangles onto the 3D surface and requires significant user input. In a more automated approach, we have developed a MATLAB script which directly imports the Polyworks 3D triangular mesh. A typical mesh is composed of over 1 million triangles of variable sizes: smooth regions are represented by large triangles, whereas rough surfaces are captured by several smaller triangles. Using the triangle vertices, the script computes the strike and dip of each triangle. This approach opens possibilities for statistical analysis of a large population of fracture orientation estimates, including surface texture. The methodology will be used to evaluate both synthetic and field data.
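The per-triangle strike/dip computation performed by such a script can be sketched in Python (a hypothetical re-implementation, assuming east/north/up coordinates and the right-hand-rule strike convention; the actual MATLAB script is not published here):

```python
import math

def strike_dip(p1, p2, p3):
    """Strike and dip (degrees) of the plane through three mesh vertices.

    Coordinates are assumed (x=east, y=north, z=up); strike follows the
    right-hand rule (dip direction = strike + 90 degrees).
    """
    u = [p2[i] - p1[i] for i in range(3)]
    v = [p3[i] - p1[i] for i in range(3)]
    # Plane normal from the cross product; flip it to point upward.
    n = [u[1]*v[2] - u[2]*v[1],
         u[2]*v[0] - u[0]*v[2],
         u[0]*v[1] - u[1]*v[0]]
    if n[2] < 0:
        n = [-c for c in n]
    horiz = math.hypot(n[0], n[1])
    dip = math.degrees(math.atan2(horiz, n[2]))
    # The horizontal part of the upward normal points down-dip.
    dip_direction = math.degrees(math.atan2(n[0], n[1])) % 360.0
    strike = (dip_direction - 90.0) % 360.0
    return strike, dip
```

Applied to every triangle of the mesh, this yields the population of orientation estimates that can be plotted on a stereonet for comparison with compass measurements.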

Mah, J.; Claire, S.; Steve, M.

2009-05-01

155

3D pupil plane imaging of opaque targets  

NASA Astrophysics Data System (ADS)

Correlography is a technique that allows image formation from non-imaged speckle patterns via their relationship to the autocorrelation of the scene. Algorithms designed to form images from this type of data represent a particular type of phase retrieval algorithm, since the autocorrelation function is related to the Fourier magnitude of the scene but not the Fourier phase. Methods for forming 2-D images from far-field intensity measurements have been explored previously, but no 3-D methods have been put forward for forming range images of a scene from this kind of measurement. Far-field intensity measurements are attractive because large focusing optics are not required to form images. Pupil plane intensity imaging is also attractive because the effects of atmospheric turbulence close to the imaging system are mitigated by the cancellation of phase errors in the intensity operation. This paper suggests a method for obtaining 3-D images of a scene through the use of successive 2-D pupil plane intensity measurements sampled with an APD (avalanche photodiode) array. The 2-D array samples the returning pulse from a laser at a fast enough rate to avoid aliasing of the pulse shape in time. The spatial pattern received by the array allows the autocorrelation of the scene to be determined as a function of time. The temporal autocorrelation function contains range information for each point in the scene illuminated by the pulsed laser. The proposed algorithm uses a model for the LADAR pulse and its relation to the autocorrelation of the scene as a function of time to estimate the range to every point in the reconstructed scene, assuming that all surfaces are opaque (meaning a second return from the same point in the scene is not anticipated). The method is demonstrated using a computer simulation.
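The relationship the technique exploits, namely that the Fourier magnitude alone determines the scene autocorrelation (the Wiener-Khinchin theorem), can be illustrated with a minimal 1-D sketch using a plain DFT (an illustration only, not the authors' code):

```python
import cmath

def dft(x):
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N) for k in range(N)) / N
            for n in range(N)]

def autocorrelation_from_magnitude(scene):
    """Recover the circular autocorrelation of `scene` from its Fourier
    magnitude alone -- the scene's Fourier phase never enters, which is
    exactly the situation faced in correlography."""
    power = [abs(v) ** 2 for v in dft(scene)]
    return [c.real for c in idft(power)]
```

The recovered sequence matches the directly computed circular autocorrelation, which is why phase retrieval is needed to get back to an image of the scene itself.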

Cain, Stephen C.

2010-08-01

156

Automated extraction of lymph nodes from 3-D abdominal CT images using 3-D minimum directional difference filter.  

PubMed

This paper presents a method for extracting lymph node regions from 3-D abdominal CT images using a 3-D minimum directional difference filter. In the case of surgery for colonic cancer, resection of metastatic lesions is performed together with resection of the primary lesion. Lymph nodes are the main route of metastasis and are quite important for deciding the resection area. Diagnosis of enlarged lymph nodes is an important process in surgical planning. However, manual detection of enlarged lymph nodes on CT images is quite a burdensome task. Thus, the development of a lymph node detection process is very helpful for assisting such surgical planning. Although there are several reports that present lymph node detection, these methods detect lymph nodes primarily from PET images or detect them by 2-D image processing. There is no method that detects lymph nodes directly from 3-D images. The purpose of this paper is to show an automated method for detecting lymph nodes from 3-D abdominal CT images. This method employs a 3-D minimum directional difference filter for enhancing blob structures while suppressing line structures. After that, false positive regions caused by residue and veins are eliminated using several kinds of information such as size, blood vessels, and air in the colon. We applied the proposed method to three cases of 3-D abdominal CT images. The experimental results showed that the proposed method could detect 57.0% of enlarged lymph nodes with 58 FPs per case. PMID:18044586
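The blob-versus-line behaviour of a minimum directional difference filter can be sketched in a deliberately simplified form (13 axis/diagonal directions, plain nested-list volume; the paper's actual filter design may differ in its direction set and sampling):

```python
def min_directional_difference(vol, r=1):
    """Simplified 3-D minimum directional difference filter.

    For each voxel, compute the difference between the centre value and
    the mean of the two samples at distance r along each of 13
    directions, and keep the minimum. A blob stays bright in every
    direction; a line scores ~0 along its own axis and is suppressed.
    """
    dirs = [(1,0,0), (0,1,0), (0,0,1), (1,1,0), (1,-1,0), (1,0,1), (1,0,-1),
            (0,1,1), (0,1,-1), (1,1,1), (1,1,-1), (1,-1,1), (-1,1,1)]
    Z, Y, X = len(vol), len(vol[0]), len(vol[0][0])
    out = [[[0.0] * X for _ in range(Y)] for _ in range(Z)]
    for z in range(r, Z - r):
        for y in range(r, Y - r):
            for x in range(r, X - r):
                c = vol[z][y][x]
                diffs = [c - 0.5 * (vol[z + r*dz][y + r*dy][x + r*dx]
                                    + vol[z - r*dz][y - r*dy][x - r*dx])
                         for dz, dy, dx in dirs]
                out[z][y][x] = max(min(diffs), 0.0)
    return out
```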

Kitasaka, Takayuki; Tsujimura, Yukihiro; Nakamura, Yoshihiko; Mori, Kensaku; Suenaga, Yasuhito; Ito, Masaaki; Nawano, Shigeru

2007-01-01

157

3D segmentation of breast tumor in ultrasound images  

NASA Astrophysics Data System (ADS)

This paper proposes a three-dimensional (3D) region-based segmentation algorithm for extracting a diagnostic tumor from ultrasound images by using split-and-merge and seeded region growing with a distortion-based homogeneity cost. In the proposed algorithm, 2D cutting planes are first obtained by the equiangular revolution of a cross-sectional plane about a reference axis of the 3D volume data. In each cutting plane, an elliptic seed mask that fits tightly inside the tumor of interest is set. At the same time, each plane is finely segmented using split-and-merge with a distortion-based cost. In the finely segmented result, all of the regions that intersect or are contained in the elliptic seed mask are then merged. The merged region is taken as a seed region for seeded region growing. In the seeded region growing, the seed region is recursively merged with adjacent regions until a predefined condition is reached. Then, the contour of the final seed region is extracted as the contour of the tumor. Finally, a 3D volume of the tumor is rendered from the set of tumor contours obtained for all of the cutting planes. Experimental results for 3D artificial volume data show that the proposed method yields up to a three-fold reduction in error rate over Krivanek's method. For real 3D ultrasonic volume data, the error rates of the proposed method are shown to be lower than 17% when results obtained manually are used as reference data. It is also found that the contours of the tumor extracted by the proposed algorithm coincide closely with those estimated by human vision.
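The seeded region growing step can be illustrated with a toy 2-D sketch (the paper operates per cutting plane with a distortion-based homogeneity cost; here a simple running-mean intensity tolerance stands in for that cost):

```python
from collections import deque

def seeded_region_grow(img, seed, tol):
    """Toy 2-D seeded region growing: grow from `seed` while a
    4-neighbour's intensity stays within `tol` of the running region
    mean. Returns the set of (row, col) pixels in the grown region."""
    H, W = len(img), len(img[0])
    region = {seed}
    total = float(img[seed[0]][seed[1]])
    queue = deque([seed])
    while queue:
        y, x = queue.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < H and 0 <= nx < W and (ny, nx) not in region:
                mean = total / len(region)
                if abs(img[ny][nx] - mean) <= tol:
                    region.add((ny, nx))
                    total += img[ny][nx]
                    queue.append((ny, nx))
    return region
```

The boundary of the final region plays the role of the tumor contour extracted in each cutting plane.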

Kwak, Jong In; Jung, Mal Nam; Kim, Sang Hyun; Kim, Nam Chul

2003-05-01

158

Quantitative 3D Optical Imaging: Applications in Dosimetry and Biophysics  

NASA Astrophysics Data System (ADS)

Optical-CT has been shown to be a potentially useful imaging tool for the two very different spheres of biologists and radiation therapy physicists, but it has yet to live up to that potential. In radiation therapy, researchers have used optical-CT for the readout of 3D dosimeters, but it is yet to be a clinically relevant tool as the technology is too slow to be considered practical. Biologists have used the technique for structural imaging, but have struggled with emission tomography as the reality of photon attenuation for both excitation and emission has made the images quantitatively unreliable. Dosimetry. The DLOS (Duke Large field of view Optical-CT Scanner) was designed and constructed to make 3D dosimetry utilizing optical-CT a fast and practical tool while maintaining the readout accuracy of the previous, slower technologies. Upon construction/optimization/implementation of several components, including a diffuser, band pass filter, registration mount and fluid filtration system, the dosimetry system provides high quality data comparable to or exceeding that of commercial products. In addition, a stray light correction algorithm was tested and implemented. The DLOS, in combination with the 3D dosimeter it was designed for, PRESAGE™, then underwent rigorous commissioning and benchmarking tests validating its performance against gold standard data, including a set of 6 irradiations. DLOS commissioning tests resulted in sub-mm isotropic spatial resolution (MTF >0.5 for frequencies of 1.5 lp/mm) and a dynamic range of ˜60 dB. Flood field uniformity was 10% and stable after 45 minutes. Stray light proved to be small, due to telecentricity, but even the residual can be removed through deconvolution. 
Benchmarking tests showed the mean 3D passing gamma rate (3%, 3mm, 5% dose threshold) over the 6 benchmark data sets was 97.3% +/- 0.6% (range 96%-98%), with scans totaling ˜10 minutes, indicating excellent ability to perform 3D dosimetry while improving the speed of readout. Noise was low at ˜2% for 2mm reconstructions. The DLOS/PRESAGE™ benchmark tests show consistently excellent performance, with very good agreement to simple known distributions. The telecentric design was critical to enabling fast (˜15 min) imaging with minimal stray light artifacts. The system produces accurate isotropic 2 mm³ dose data over clinical volumes (e.g. 16 cm diameter phantoms, 12 cm height), and represents a uniquely useful and versatile new tool for commissioning complex radiotherapy techniques. The system also has wide versatility, and has successfully been used in preliminary tests with protons and with kV irradiations. Biology. Attenuation corrections for optical-emission-CT were done by modeling physical parameters in the imaging setup within the framework of an ordered subset expectation maximization (OSEM) iterative reconstruction algorithm. This process has a well documented history in single photon emission computed tomography (SPECT), but is inherently simpler due to the lack of excitation photons to account for. Excitation source strength distribution and excitation and emission attenuation were modeled. The accuracy of the correction was investigated by imaging phantoms containing known distributions of attenuation and fluorophores. The correction was validated on a manufactured phantom designed to give uniform emission in a central cuboidal region and later applied to a cleared mouse brain with GFP (green fluorescent protein) labeled vasculature and a cleared 4T1 xenograft flank tumor with constitutive RFP (red fluorescent protein). Reconstructions were compared to corresponding slices imaged with a fluorescent dissection microscope. 
Significant optical-ECT attenuation artifacts were observed in the uncorrected phantom images, which appeared up to 80% less intense than the verification image in the central region. The corrected phantom images showed excellent agreement with the verification image, with only slight variations. The corrected tissue sample reconstructions showed general agreement with the verification images. Comp

Thomas, Andrew Stephen

159

3D super-resolution imaging with blinking quantum dots.  

PubMed

Quantum dots are promising candidates for single molecule imaging due to their exceptional photophysical properties, including their intense brightness and resistance to photobleaching. They are also notorious for their blinking. Here we report a novel way to take advantage of quantum dot blinking to develop an imaging technique in three dimensions with nanometric resolution. We first applied this method to simulated images of quantum dots and then to quantum dots immobilized on microspheres. We achieved imaging resolutions (FWHM) of 8-17 nm in the x-y plane and 58 nm (on coverslip) or 81 nm (deep in solution) in the z-direction, approximately 3-7 times better than what has been achieved previously with quantum dots. This approach was applied to resolve the 3D distribution of epidermal growth factor receptor (EGFR) molecules at, and inside of, the plasma membrane of resting basal breast cancer cells. PMID:24093439
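The localization principle underlying such super-resolution imaging, estimating an emitter's position to sub-pixel precision from its diffraction-limited spot, can be sketched minimally (an intensity-weighted centroid standing in for the 2-D Gaussian fitting actually used in practice):

```python
def centroid_localize(spot):
    """Sub-pixel (x, y) position of an isolated emitter, estimated as
    the intensity-weighted centroid of its spot. `spot` is a 2-D grid
    of pixel intensities containing a single emitter's image."""
    total = sum(sum(row) for row in spot)
    y = sum(i * sum(row) for i, row in enumerate(spot)) / total
    x = sum(j * v for row in spot for j, v in enumerate(row)) / total
    return x, y
```

Repeating such localizations over many blinking cycles, and only for frames in which a single emitter is on, is what allows overlapping emitters to be resolved far below the diffraction limit.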

Wang, Yong; Fruhwirth, Gilbert; Cai, En; Ng, Tony; Selvin, Paul R

2013-11-13

160

The 3D model control of image processing  

NASA Technical Reports Server (NTRS)

Telerobotics studies remote control of distant robots by a human operator using supervisory or direct control. Even if the robot manipulator has vision or other senses, problems arise involving control, communications, and delay. The communication delays that may be expected with telerobots working in space stations while being controlled from an Earth lab have led to a number of experiments attempting to circumvent the problem. This delay in communication is a main motivating factor in moving from well understood instantaneous hands-on manual control to less well understood supervisory control; the ultimate step would be the realization of a fully autonomous robot. The 3-D model control plays a crucial role in resolving many conflicting image processing problems that are inherent in the bottom-up approach of most current machine vision processes. The 3-D model control approach is also capable of providing the necessary visual feedback information for both the control algorithms and the human operator.

Nguyen, An H.; Stark, Lawrence

1989-01-01

161

3-D Seismic Methods for Shallow Imaging Beneath Pavement  

NASA Astrophysics Data System (ADS)

The research presented in this dissertation focuses on survey design and acquisition of near-surface 3D seismic reflection and surface wave data on pavement. Increased efficiency for mapping simple subsurface interfaces through a combined use of modified land survey designs and a hydraulically driven acquisition device is demonstrated. Using these techniques, subsurface reflectors can be quickly and efficiently imaged in the course of an afternoon. The use of surface waves to analyze the upper several tens of meters of the subsurface has become an important technique for near-surface investigations. A new method for acquiring and visualizing surface wave information in three dimensions is demonstrated. As will be shown, a volume of shear wave velocities can be created by acquiring surface waves along multiple, coincident lines. Using a series of computer algorithms, the data can then be graphed in 2D or 3D space, providing a method of visualization not previously available.

Miller, Brian

162

A new technique for 3D gamma-ray imaging: Conceptual study of a 3D camera  

NASA Astrophysics Data System (ADS)

A novel technique for 3D gamma-ray imaging is presented. This method combines the positron annihilation Compton scattering imaging technique with a supplementary position sensitive detector, which registers gamma-rays scattered in the object at angles of about 90°. The 3D coordinates of the scattering location can be determined rather accurately by applying the Compton principle. This method requires access to the object from two orthogonal sides and allows one to achieve a position resolution of a few mm in all three space coordinates. A feasibility study for a 3D camera, based on Monte Carlo calculations, is presented.

Domingo-Pardo, C.

2012-05-01

163

Image to Point Cloud Method of 3D-MODELING  

NASA Astrophysics Data System (ADS)

This article describes a method of constructing 3D models of objects (buildings, monuments) based on digital images and a point cloud obtained by a terrestrial laser scanner. The first step is the automated determination of the exterior orientation parameters of a digital image. To do this, corresponding points between the image and the point cloud must be found. Before the search for corresponding points, a quasi-image of the point cloud is generated. After that, the SIFT algorithm is applied to the quasi-image and the real image to find corresponding points. The exterior orientation parameters of the image are calculated from the corresponding points. The second step is the construction of the vector object model. Vectorization is performed by a PC operator in an interactive mode using a single image. Spatial coordinates of the model are calculated automatically from the point cloud. In addition, automatic edge detection with interactive editing is available. Edge detection is performed on the point cloud and on the image, with subsequent identification of the correct edges. Experimental studies of the method have demonstrated its efficiency in the case of building facade modeling.
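The correspondence step between SIFT descriptors of the quasi-image and the real image can be illustrated with a minimal Lowe-style ratio-test matcher (a generic sketch over plain feature vectors; the descriptors and the 0.8 threshold here are placeholders, not the authors' configuration):

```python
def match_descriptors(d1, d2, ratio=0.8):
    """Pair each descriptor in d1 with its nearest neighbour in d2,
    accepting the match only if the nearest distance is clearly smaller
    than the second-nearest (Lowe's ratio test, applied to squared
    Euclidean distances)."""
    def dist2(a, b):
        return sum((u - v) ** 2 for u, v in zip(a, b))
    matches = []
    for i, a in enumerate(d1):
        scored = sorted((dist2(a, b), j) for j, b in enumerate(d2))
        if len(scored) > 1 and scored[0][0] < (ratio ** 2) * scored[1][0]:
            matches.append((i, scored[0][1]))
        elif len(scored) == 1:
            matches.append((i, scored[0][1]))
    return matches
```

The accepted pairs are the corresponding points from which the exterior orientation parameters of the image can then be computed.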

Chibunichev, A. G.; Galakhov, V. P.

2012-07-01

164

3-D respiratory motion compensation during EP procedures by image-based 3-D lasso catheter model generation and tracking.  

PubMed

Radio-frequency catheter ablation of the pulmonary veins attached to the left atrium is usually carried out under fluoroscopy guidance. Two-dimensional X-ray navigation may involve overlay images derived from a static pre-operative 3-D volumetric data set to add anatomical details. However, respiratory motion may impair the utility of static overlay images for catheter navigation. We developed a system for image-based 3-D motion estimation and compensation as a solution to this problem, for which no previous solution was known. It is based on 3-D catheter tracking involving 2-D/3-D registration. A biplane X-ray C-arm system is used to image a special circumferential (lasso) catheter from two directions. In the first step of the method, a 3-D model of the device is reconstructed. 3-D respiratory motion at the site of ablation is then estimated by tracking the reconstructed model in 3-D from biplane fluoroscopy. In our experiments, the circumferential catheter was tracked in 231 biplane fluoro frames (462 monoplane fluoro frames) with an average 2-D tracking error of 1.0 mm +/- 0.5 mm. PMID:20426012

Brost, Alexander; Liao, Rui; Hornegger, Joachim; Strobel, Norbert

2009-01-01

165

3D-imaging using micro-PIXE  

NASA Astrophysics Data System (ADS)

We have developed a 3D-imaging system using characteristic X-rays produced by proton micro-beam bombardment. The 3D-imaging system consists of a micro-beam and an X-ray CCD camera of 1 mega pixels (Hamamatsu Photonics C8800X), and has a spatial resolution of 4 μm when using characteristic Ti K X-rays (4.558 keV) produced by 3 MeV protons with a beam spot size of ˜1 μm. We applied this system, namely a micron-CT, to observe the inside of the head of a living small ant of ˜1 mm diameter. The ant was inserted into a small polyimide tube, the inside diameter and wall thickness of which are 1000 and 25 μm, respectively, and scanned by the micron-CT. Three-dimensional images of the ant's head were obtained with a spatial resolution of 4 μm. It was found that, in accordance with the strong dependence of photoionization cross-sections on atomic number, the mandibular gland of the ant contains heavier elements; moreover, the CT image of a living ant anaesthetized with chloroform is quite different from that of a dead ant dipped in formalin.

Ishii, K.; Matsuyama, S.; Watanabe, Y.; Kawamura, Y.; Yamaguchi, T.; Oyama, R.; Momose, G.; Ishizaki, A.; Yamazaki, H.; Kikuchi, Y.

2007-02-01

166

Retinal image analysis: Concepts, applications and potential  

Microsoft Academic Search

As digital imaging and computing power increasingly develop, so too does the potential to use these technologies in ophthalmology. Image processing, analysis and computer vision techniques are increasing in prominence in all fields of medical science, and are especially pertinent to modern ophthalmology, as it is heavily dependent on visually oriented signs. The retinal microvasculature is unique in that it

Niall Patton; Tariq M. Aslam; Thomas MacGillivray; Ian J. Deary; Baljean Dhillon; Robert H. Eikelboom; Kanagasingam Yogesan; Ian J. Constable

2006-01-01

167

Neural Network Based Retinal Image Analysis  

Microsoft Academic Search

Diabetic retinopathy contributes to serious health problems in many parts of the world. Motivated by the medical community's need for early screening of diabetes and other diseases, a computer-aided diagnosis system is proposed. This work aims to develop an automated system to analyze retinal images for important features of diabetic retinopathy using image

J. David; Rekha Krishnan

2008-01-01

168

Location of Optical Disc in Retinal Image  

Microsoft Academic Search

This paper proposes a method to automatically locate the optic disc in a retinal image. Our method of finding the optic disc is based on the properties of the optic disc, using simple image processing algorithms which include multilevel thresholding, morphological processing, detection of object roundness, and circle detection by a circle fitting method. The proposed method is able to recognize
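A minimal sketch of the roundness test such a pipeline might use: the classical 4πA/P² circularity measure on a binary candidate mask (the paper's exact morphological processing is not specified, so this is an illustrative stand-in):

```python
import math

def roundness(mask):
    """Circularity score 4*pi*A/P^2 for a binary 0/1 mask: a filled
    disc-like blob scores close to pi/4 with this pixel-side perimeter,
    while elongated vessel fragments score much lower. The perimeter is
    counted as the number of exposed pixel sides."""
    H, W = len(mask), len(mask[0])
    area = 0
    per = 0
    for y in range(H):
        for x in range(W):
            if mask[y][x]:
                area += 1
                for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                    if not (0 <= ny < H and 0 <= nx < W) or not mask[ny][nx]:
                        per += 1  # one exposed side
    return 4 * math.pi * area / (per * per) if per else 0.0
```

Candidate bright regions from multilevel thresholding whose roundness exceeds a chosen threshold would then be passed on to the circle-fitting stage.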

D. Santhi; D. Manimegalai

2007-01-01

169

Digital imaging-based retinal photocoagulation system  

NASA Astrophysics Data System (ADS)

Researchers at the USAF Academy and the University of Texas are developing a computer-assisted retinal photocoagulation system for the treatment of retinal disorders (i.e. diabetic retinopathy, retinal tears). Currently, ophthalmologists manually place therapeutic retinal lesions, an acquired technique that is tiring for both the patient and physician. The computer-assisted system under development can rapidly and safely place multiple therapeutic lesions at desired locations on the retina in a matter of seconds. Separate prototype subsystems have been developed to control lesion depth during irradiation and lesion placement to compensate for retinal movement. Both subsystems have been successfully demonstrated in vivo on pigmented rabbits using an argon continuous wave laser. Two different design approaches are being pursued to combine the capabilities of both subsystems: a digital imaging-based system and a hybrid analog-digital system. This paper will focus on progress with the digital imaging-based prototype system. A separate paper on the hybrid analog-digital system, `Hybrid Retinal Photocoagulation System', is also presented in this session.

Barrett, Steven F.; Wright, Cameron H. G.; Oberg, Erik D.; Rockwell, Benjamin A.; Cain, Clarence P.; Rylander, Henry G., III; Welch, Ashley J.

1997-05-01

170

3D Wavelet Subbands Mixing for Image Denoising  

PubMed Central

A critical issue in image restoration is the problem of noise removal while keeping the integrity of relevant image information. The method proposed in this paper is a fully automatic 3D blockwise version of the nonlocal (NL) means filter with wavelet subbands mixing. The proposed wavelet subbands mixing is based on a multiresolution approach for improving the quality of the image denoising filter. Quantitative validation was carried out on synthetic datasets generated with the BrainWeb simulator. The results show that our NL-means filter with wavelet subbands mixing outperforms the classical implementation of the NL-means filter in terms of denoising quality and computation time. Comparison with well-established methods, such as the nonlinear diffusion filter and total variation minimization, shows that the proposed NL-means filter produces better denoising results. Finally, qualitative results on real data are presented.

Coupe, Pierrick; Hellier, Pierre; Prima, Sylvain; Kervrann, Charles; Barillot, Christian

2008-01-01

171

3D set partitioned embedded zero block coding algorithm for hyperspectral image compression  

NASA Astrophysics Data System (ADS)

In this paper, a three-dimensional Set Partitioned Embedded Zero Block Coding (3D SPEZBC) algorithm for hyperspectral image compression is proposed, which is motivated by the EZBC and SPECK algorithms. Experimental results show that the 3D SPEZBC algorithm obviously outperforms 3D SPECK, 3D SPIHT and AT-3D SPIHT, and is slightly better than JPEG2000-MC in the compression performances. Moreover, the 3D SPEZBC algorithm can save considerable memory requirement in comparison with 3D EZBC.

Hou, Ying; Liu, Guizhong

2007-11-01

172

Ultra-realistic 3-D imaging based on colour holography  

NASA Astrophysics Data System (ADS)

A review of recent progress in colour holography is provided, with new applications. Colour holography recording techniques in silver-halide emulsions are discussed. Both analogue colour holograms, mainly of the Denisyuk type, and digitally-printed colour holograms are described, along with their recent improvements. Panchromatic photopolymer materials, such as the DuPont and Bayer photopolymers, are covered as an alternative to silver-halide materials. The light sources used to illuminate the recorded holograms are very important for obtaining ultra-realistic 3-D images. In particular, the new light sources based on RGB LEDs are described; they show improved image quality over today's commonly used halogen lights. Recent work in colour holography by holographers and companies in different countries around the world is included. Recording and displaying ultra-realistic 3-D images with perfect colour rendering is highly dependent on the correct recording technique using the optimal recording laser wavelengths, the availability of improved panchromatic recording materials, and new display light sources.

Bjelkhagen, H. I.

2013-02-01

173

A 3D DCT architecture for compression of integral 3D images  

Microsoft Academic Search

A VLSI architecture for the three-dimensional discrete cosine transform (3D DCT) is proposed. The 3D DCT is decomposed into 1D DCTs computed in each of the three dimensions. The focus of this paper is in the design of the matrix transpose required prior to the computation of the final 1D DCT which corresponds to the third dimension. This matrix transpose

I. Jalloh; A. Aggoun; M. McCormick

2000-01-01

174

3D Multispectral Light Propagation Model For Subcutaneous Veins Imaging  

SciTech Connect

In this paper, we describe a new 3D light propagation model aimed at understanding the effects of various physiological properties on subcutaneous vein imaging. In particular, we build upon the well known MCML (Monte Carlo Multi Layer) code and present a tissue model that improves upon the current state-of-the-art by: incorporating physiological variation, such as melanin concentration, fat content, and layer thickness; including veins of varying depth and diameter; using curved surfaces from real arm shapes; and modeling the vessel wall interface. We describe our model, present results from the Monte Carlo modeling, and compare these results with those obtained with other Monte Carlo methods.
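The core Monte Carlo mechanics that codes like MCML build on, exponential free paths and partial weight deposition at each interaction, can be sketched in a deliberately stripped-down 1-D form (no scattering deflection, layers, or boundaries; under these assumptions the mean absorption depth should come out near 1/mu_a):

```python
import math
import random

def mc_absorption_depth(mu_a, mu_s, n_photons=20000, seed=1):
    """Toy 1-D photon-transport Monte Carlo: each photon takes free path
    lengths s = -ln(xi)/mu_t and deposits the fraction mu_a/mu_t of its
    remaining weight at every interaction site. Returns the
    deposition-weighted mean depth of absorption."""
    rng = random.Random(seed)
    mu_t = mu_a + mu_s
    depth_sum = weight_sum = 0.0
    for _ in range(n_photons):
        z, w = 0.0, 1.0
        while w > 1e-4:  # terminate once the remaining weight is negligible
            z += -math.log(rng.random()) / mu_t  # sampled free path
            dep = w * mu_a / mu_t                # absorbed weight
            depth_sum += dep * z
            weight_sum += dep
            w -= dep
    return depth_sum / weight_sum
```

Real MCML-style codes add the pieces dropped here: scattering-angle sampling from a phase function, layered optical properties, and boundary reflection/refraction, which is where the physiological tissue model described above comes in.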

Paquit, Vincent C [ORNL; Price, Jeffery R [ORNL; Meriaudeau, Fabrice [ORNL; Tobin Jr, Kenneth William [ORNL

2008-01-01

175

3D laser optoacoustic ultrasonic imaging system for preclinical research  

NASA Astrophysics Data System (ADS)

In this work, we introduce a novel three-dimensional imaging system for in vivo high-resolution anatomical and functional whole-body visualization of small animal models, developed for preclinical and other types of biomedical research. The system (LOUIS-3DM) combines multi-wavelength optoacoustic and ultra-wideband laser ultrasound tomography to obtain coregistered maps of tissue optical absorption and acoustic properties, displayed within the skin outline of the studied animal. The most promising applications of the LOUIS-3DM include 3D angiography, cancer research, and longitudinal studies of the biological distribution of optoacoustic contrast agents (carbon nanotubes, metal plasmonic nanoparticles, etc.).

Ermilov, Sergey A.; Conjusteau, André; Hernandez, Travis; Su, Richard; Nadvoretskiy, Vyacheslav; Tsyboulski, Dmitri; Anis, Fatima; Anastasio, Mark A.; Oraevsky, Alexander A.

2013-03-01

176

Spectral Geometry Image: Image Based 3D Models for Digital Broadcasting Applications  

Microsoft Academic Search

The use of 3D models for progressive transmission and broadcasting applications is an interesting challenge due to the nature and complexity of such content. In this paper, a new image format for the representation of 3D progressive models is proposed. Powerful spectral analysis is combined with the state-of-the-art Geometry Image (GI) to encode static 3D models into

Boon-Seng Chew; Lap-Pui Chau; Ying He; Dayong Wang; Steven C. H. Hoi

2011-01-01

177

Application of 3D surface imaging in breast cancer radiotherapy  

NASA Astrophysics Data System (ADS)

Purpose: Accurate dose delivery in deep-inspiration breath-hold (DIBH) radiotherapy for patients with breast cancer relies on precise treatment setup and monitoring of the depth of the breath hold. This study entailed performance evaluation of a 3D surface imaging system for image guidance in DIBH radiotherapy by comparison with cone-beam computed tomography (CBCT). Materials and Methods: Fifteen patients, treated with DIBH radiotherapy after breast-conserving surgery, were included. The performance of surface imaging was compared to the use of CBCT for setup verification. Retrospectively, breast surface registrations were performed for CBCT to planning CT, as well as for a 3D surface, captured concurrently with CBCT, to planning CT. The resulting setup errors were compared with linear regression analysis. For the differences between setup errors, the group mean, systematic and random errors were calculated. Furthermore, a residual error after registration (RRE) was assessed for both systems by investigating the root-mean-square distance between the planning CT surface and the registered CBCT/captured surface. Results: Good correlation between setup errors was found: R²=0.82, 0.86, 0.82 in the left-right, cranio-caudal and anterior-posterior directions, respectively. Systematic and random errors were <=0.16 cm and <=0.13 cm in all directions, respectively. RRE values for surface imaging and CBCT were on average 0.18 versus 0.19 cm, with standard deviations of 0.10 and 0.09 cm, respectively. Wilcoxon signed-rank testing showed that CBCT registrations resulted in higher RRE values than surface imaging registrations (p=0.003). Conclusion: This performance evaluation study shows very promising results
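The population setup-error statistics reported above follow the standard radiotherapy convention: the group mean is the mean of the per-patient mean errors, the systematic error is the SD of those per-patient means, and the random error is the RMS of the per-patient SDs. A sketch for one direction, assuming per-patient lists of setup errors:

```python
import math

def population_setup_errors(per_patient_errors):
    """Group mean M, systematic error Sigma (SD of per-patient means)
    and random error sigma (RMS of per-patient SDs) for one direction.
    `per_patient_errors` is a list of per-patient error lists (cm)."""
    means, sds = [], []
    for errs in per_patient_errors:
        m = sum(errs) / len(errs)
        means.append(m)
        sds.append(math.sqrt(sum((e - m) ** 2 for e in errs) / (len(errs) - 1)))
    M = sum(means) / len(means)
    Sigma = math.sqrt(sum((m - M) ** 2 for m in means) / (len(means) - 1))
    sigma = math.sqrt(sum(s * s for s in sds) / len(sds))
    return M, Sigma, sigma
```

Running this per direction (left-right, cranio-caudal, anterior-posterior) on the differences between the two systems' setup errors reproduces the kind of Sigma/sigma summary quoted in the results.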

Alderliesten, Tanja; Sonke, Jan-Jakob; Betgen, Anja; Honnef, Joeri; van Vliet-Vroegindeweij, Corine; Remeijer, Peter

2012-02-01

178

Retinal image quality assessment using generic features  

NASA Astrophysics Data System (ADS)

Retinal image quality assessment is an important step in automated eye disease diagnosis. Diagnosis accuracy is highly dependent on the quality of retinal images, because poor image quality might prevent the observation of significant eye features and disease manifestations. A robust algorithm is therefore required in order to evaluate the quality of images in a large database. We developed an algorithm for retinal image quality assessment based on generic features that is independent of segmentation methods. It exploits local sharpness and texture features by applying the cumulative probability of blur detection metric and the run-length encoding algorithm, respectively. The quality features are combined to evaluate the image's suitability for diagnosis purposes. Based on the recommendations of medical experts and our experience, we compared a global and a local approach. A support vector machine with radial basis functions was used as a nonlinear classifier in order to classify images into gradable and ungradable groups. We applied our methodology to 65 images of size 2592×1944 pixels that had been graded by a medical expert. The expert evaluated 38 images as gradable and 27 as ungradable. The results indicate very good agreement between the proposed algorithm's predictions and the medical expert's judgment: the sensitivity and specificity for the local approach are respectively 92% and 94%. The algorithm demonstrates sufficient robustness to identify relevant images for automated diagnosis.
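For reference, the reported sensitivity and specificity follow directly from a classifier's confusion counts. The counts below are hypothetical (the abstract gives only the totals of 38 gradable and 27 ungradable images), so this is just a sketch of the computation:

```python
# Hypothetical confusion counts for a gradable/ungradable classifier.
tp, fn = 35, 3   # gradable images correctly / incorrectly classified
tn, fp = 25, 2   # ungradable images correctly / incorrectly classified

sensitivity = tp / (tp + fn)   # fraction of gradable images detected
specificity = tn / (tn + fp)   # fraction of ungradable images rejected

print(f"sensitivity = {sensitivity:.1%}, specificity = {specificity:.1%}")
```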

Fasih, Mahnaz; Langlois, J. M. Pierre; Ben Tahar, Houssem; Cheriet, Farida

2014-03-01

179

3-D visualization and animation technologies in anatomical imaging.  

PubMed

This paper explores a 3-D computer artist's approach to the creation of three-dimensional computer-generated imagery (CGI) derived from clinical scan data. Interpretation of scientific imagery, such as magnetic resonance imaging (MRI), is restricted to the eye of the trained medical practitioner in a clinical or scientific context. In the research work described here, MRI data are visualized and interpreted by a 3-D computer artist using the tools of the digital animator to navigate image complexity and widen interaction. In this process, the artefact moves across disciplines; it is no longer tethered to its diagnostic origins. It becomes an object that has visual attributes such as light, texture and composition, and a visual aesthetic of its own. The introduction of these visual attributes provides a platform for improved accessibility by a lay audience. The paper argues that this more artisan approach to clinical data visualization has a potential real-world application as a communicative tool for clinicians and patients during consultation. PMID:20002229

McGhee, John

2010-02-01

180

Computing 3D head orientation from a monocular image sequence  

NASA Astrophysics Data System (ADS)

An approach for estimating 3D head orientation in a monocular image sequence is proposed. The approach employs recently developed image-based parameterized tracking of the face and face features to locate the area in which a sub-pixel parameterized shape estimation of the eye's boundary is performed. This involves tracking five points (four at the eye corners and the fifth at the tip of the nose). We describe an approach that relies on the coarse structure of the face to compute orientation relative to the camera plane. Our approach employs projective invariance of the cross-ratios of the eye corners and anthropometric statistics to estimate the head yaw, roll and pitch. Analytical and experimental results are reported.
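The projective invariance the authors exploit can be illustrated with the cross-ratio of four collinear points, which is preserved by any 1D projective map; the map coefficients below are arbitrary:

```python
def cross_ratio(a, b, c, d):
    """Cross-ratio (a,b; c,d) of four collinear points given as 1D coordinates."""
    return ((c - a) * (d - b)) / ((c - b) * (d - a))

def projective(x, m=(2.0, 1.0, 0.5, 3.0)):
    """A 1D projective map x -> (ax + b)/(cx + d), with arbitrary coefficients."""
    a, b, c, d = m
    return (a * x + b) / (c * x + d)

pts = [0.0, 1.0, 2.0, 4.0]
before = cross_ratio(*pts)
after = cross_ratio(*(projective(x) for x in pts))
print(before, after)  # equal up to floating-point error
```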

Horprasert, Thanarat; Yacoob, Yaser; Davis, Larry S.

1997-02-01

181

3D painting documentation: evaluation of conservation conditions with 3D imaging and ranging techniques  

NASA Astrophysics Data System (ADS)

The monitoring of paintings, both on canvas and on wooden supports, is a crucial issue for the preservation and conservation of this kind of artwork. Many environmental factors (e.g. humidity, temperature, illumination, etc.), as well as bad conservation practices (e.g. wrong restorations, inappropriate locations, etc.), can compromise the material conditions over time and deteriorate an artwork. The article presents an on-going project realized by a multidisciplinary team composed of the ENEA UTICT 3D GraphLab, the 3D Optical Metrology Unit of the Bruno Kessler Foundation and the Soprintendenza per i Beni Storico Artistici ed Etnoantropologici of Bologna (Italy). The goal of the project is the multi-temporal 3D documentation and monitoring of paintings - currently in a poor state of conservation - and the provision of metrics to quantify their deformations and damage.

Abate, D.; Menna, F.; Remondino, F.; Gattari, M. G.

2014-06-01

182

High Resolution 3D Radar Imaging of Comet Interiors  

NASA Astrophysics Data System (ADS)

Knowing the interiors of comets and other primitive bodies is fundamental to our understanding of how planets formed. We have developed a Discovery-class mission formulation, Comet Radar Explorer (CORE), based on the use of previously flown planetary radar sounding techniques, with the goal of obtaining high resolution 3D images of the interior of a small primitive body. We focus on the Jupiter-Family Comets (JFCs) as these are among the most primitive bodies reachable by spacecraft. Scattered in from far beyond Neptune, they are ultimate targets of a cryogenic sample return mission according to the Decadal Survey. Other suitable targets include primitive NEOs, Main Belt Comets, and Jupiter Trojans. The approach is optimal for small icy bodies ~3-20 km diameter with spin periods faster than about 12 hours, since (a) navigation is relatively easy, (b) radar penetration is global for decameter wavelengths, and (c) repeated overlapping ground tracks are obtained. The science mission can be as short as ~1 month for a fast-rotating JFC. Bodies smaller than ~1 km can be globally imaged, but the navigation solutions are less accurate and the relative resolution is coarse. Larger comets are more interesting, but radar signal is unlikely to be reflected from depths greater than ~10 km. So, JFCs are excellent targets for a variety of reasons. We furthermore focus on the use of Solar Electric Propulsion (SEP) to rendezvous shortly after the comet's perihelion. This approach leaves us with ample power for science operations under dormant conditions beyond ~2-3 AU. This leads to a natural mission approach of distant observation, followed by closer inspection, terminated by a dedicated radar mapping orbit. Radar reflections are obtained from a polar orbit about the icy nucleus, which spins underneath. Echoes are obtained from a sounder operating at dual frequencies 5 and 15 MHz, with 1 and 10 MHz bandwidths respectively. 
The dense network of echoes is used to obtain global 3D images of interior structure to ~20 m, and to map dielectric properties (related to internal composition) to better than 200 m throughout. This is comparable in detail to modern 3D medical ultrasound, although we emphasize that the techniques are somewhat different. An interior mass distribution is obtained through spacecraft tracking, using data acquired during the close, quiet radar orbits. This is aligned with the radar-based images of the interior, and the shape model, to contribute to the global 3D view. High-resolution visible imaging provides boundary conditions and geologic context to these interior views. An infrared spectroscopy and imaging campaign upon arrival reveals the time-evolving activity of the nucleus and the structure and composition of the inner coma, and defines surface units. CORE is designed to obtain a total view of a comet, from the coma to the active and evolving surface to the deep interior. Its primary science goal is to obtain clear images of internal structure and dielectric composition. These will reveal how the comet was formed, what it is made of, and how it 'works'. By making global yet detailed connections from interior to exterior, this knowledge will be an important complement to the Rosetta mission, and will lay the foundation for comet nucleus sample return by revealing the areas of shallow depth to 'bedrock', and relating accessible deposits to their originating provenances within the nucleus.

Asphaug, E. I.; Gim, Y.; Belton, M.; Brophy, J.; Weissman, P. R.; Heggy, E.

2012-12-01

183

Optic Disc Segmentation in Retinal Images  

Microsoft Academic Search

Abstract: Retinal images give unique diagnostic information not only about eye disease but about other organs as well [1]. To give physicians a tool for objective quantitative assessment of the retina, automated methods have been developed. In this paper an automated method for optic disc segmentation is presented. The method consists of 4 steps: localization of the optic disc, nonlinear filtering, Canny edge detector and

Radim Chrástek; Matthias Wolf; Klaus Donath; Georg Michelson; Heinrich Niemann

2002-01-01

184

Complex Resistivity 3D Imaging for Ground Reinforcement Site  

NASA Astrophysics Data System (ADS)

The induced polarization (IP) method is used for mineral exploration and is generally classified into two categories: time-domain and frequency-domain methods. The frequency-domain IP method measures amplitude and absolute phase relative to the transmitted currents, and is often called spectral induced polarization (SIP) when measurements are made over a wide frequency band. Our research group has been studying modeling and inversion algorithms for the complex resistivity method for several years and has recently started to apply the method to various field problems. We have completed the development of a 2D/3D modeling and inversion program and are developing another algorithm that uses the wide-band data jointly. Until now, the complex resistivity (CR) method was mainly used for surface or tomographic surveys in mineral exploration. Through this experience we found that the resistivity section from the CR method is very similar to that of the conventional resistivity method, and that interpretation of the phase section generally matches the geological information of the survey area well. However, because most survey areas have very rough and complex terrain, 2D surveys and interpretation are generally used. In this study, we introduce a case study of a 3D CR survey conducted at a site where ground reinforcement had been carried out to prevent subsidence. Data were acquired with the Zeta system, a complex resistivity measurement system produced by Zonge Co., using 8 frequencies from 0.125 to 16 Hz. 2D surveys were conducted along 6 lines with 5 m dipole spacing and 20 electrodes; each line is 95 m long. Among the 8 frequencies, only data below 1 Hz were used, considering their quality. A 3D inversion was then conducted with the 6 lines of data. First, a 2D interpretation was made with the acquired data and its results were compared with those of a resistivity survey; the resulting resistivity image sections of the CR and resistivity methods were very similar.
Anomalies in the phase image section showed good agreement with those identified by 4D interpretation of resistivity monitoring data. These phase anomalies arise because the cement mortar used as grouting material has a very strong IP response. With the 3D inversion, anomalies that were somewhat obscure in the 2D interpretation were discriminated more clearly, and the phase anomalies again matched the 4D interpretation of the resistivity monitoring data well. In the 2D interpretation the phase anomalies extended to greater depth and their boundaries were unclear, but in the 3D inverted result we clearly identified their lower boundary and location. The CR method is very effective when the target anomaly has a strong IP response and can be used for various purposes, but data acquisition takes more time and effort than a normal resistivity survey. If these difficulties are overcome, it will be a very effective and prominent method in some areas. In this study we show only results from single-frequency data, but more information could be inferred if all multi-frequency data were used in the inversion. We will continue to develop a 3D multi-frequency inversion algorithm in the near future.

Son, J.; Kim, J.; Park, S.

2012-12-01

185

3D Image Analysis of Geomaterials using Confocal Microscopy  

NASA Astrophysics Data System (ADS)

Confocal microscopy is one of the most significant advances in optical microscopy of the last century. It is widely used in biological sciences but its application to geomaterials lingers due to a number of technical problems. Potentially the technique can perform non-invasive testing on a laser-illuminated sample that fluoresces, using a unique optical sectioning capability that rejects out-of-focus light reaching the confocal aperture. Fluorescence in geomaterials is commonly induced using epoxy doped with a fluorochrome that is impregnated into the sample to enable discrimination of various features such as void space or material boundaries. However, for many geomaterials, this method cannot be used because they do not naturally fluoresce and because epoxy cannot be impregnated into inaccessible parts of the sample due to lack of permeability. As a result, confocal images of most geomaterials that have not been pre-processed with extensive sample preparation techniques are of poor quality and lack the image and edge contrast necessary to apply commonly used segmentation techniques for quantitative study of features such as vesicularity, internal structure, etc. In our present work, we are developing a methodology to conduct a quantitative 3D analysis of images of geomaterials collected using a confocal microscope with a minimal amount of prior sample preparation and no added fluorescence. Two sample geomaterials, a volcanic melt sample and a crystal chip containing fluid inclusions, are used to assess the feasibility of the method. The step-by-step image analysis includes image filtration to enhance edges or material interfaces and is based on two segmentation techniques: geodesic active contours and region competition. Both techniques have been applied extensively to the analysis of medical MRI images to segment anatomical structures.
Preliminary analysis suggests that there is distortion in the shapes of the segmented vesicles, vapor bubbles, and void spaces due to the optical measurements, so corrective actions are being explored. This will establish a practical and reliable framework for an adaptive 3D image processing technique for the analysis of geomaterials using confocal microscopy.

Mulukutla, G.; Proussevitch, A.; Sahagian, D.

2009-05-01

186

Method of Comparing 3-D Image Consistency and Quality Between Commercially Available 3-D Scanners.  

National Technical Information Service (NTIS)

With a number of 3-D scanners now available commercially, little work has been done to directly compare their capabilities. This study was designed to characterize differences between the Vitronic Vitus Pro scanner owned by TNO in the Netherlands and the ...

C. R. Harrison; D. B. Burnsides

2003-01-01

187

Vector Acoustics, Vector Sensors, and 3D Underwater Imaging  

NASA Astrophysics Data System (ADS)

Vector acoustic data has two more dimensions of information than pressure data and may allow for 3D underwater imaging with much less data than with hydrophone data. A vector acoustic sensor measures the particle motion due to passing sound waves and, in conjunction with a collocated hydrophone, the direction of travel of the sound waves. When using a controlled source with known source and sensor locations, the reflection points of the sound field can be determined with a simple trigonometric calculation. I demonstrate this concept with an experiment that used an accelerometer-based vector acoustic sensor in a water tank with a short-pulse source and passive scattering targets. The sensor consists of a three-axis accelerometer and a matched hydrophone. The sound source was a standard transducer driven by a short 7 kHz pulse. The sensor was suspended in a fixed location and the source was moved about the tank by a robotic arm to insonify the tank from many locations. Several floats were placed in the tank as acoustic targets at diagonal ranges of approximately one meter. The accelerometer data show the direct source wave as well as the target-scattered waves and reflections from the nearby water surface, tank bottom and sides. Without resorting to the usual methods of seismic imaging, which in this case is only two dimensional and relied entirely on the use of a synthetic source aperture, the two targets, the tank walls, the tank bottom, and the water surface were imaged. A directional ambiguity inherent to vector sensors is removed by using the collocated hydrophone data. Although this experiment was in a very simple environment, it suggests that 3-D seismic surveys may be achieved with vector sensors using the same logistics as a 2-D survey that uses conventional hydrophones. This work was supported by the Office of Naval Research, program element 61153N.
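The "simple trigonometric calculation" is not detailed in the abstract, but a plausible version intersects the measured arrival direction at the sensor with the ellipsoid of constant total travel time (source-to-scatterer-to-receiver path length). The geometry and sound speed below are invented for the sketch:

```python
import numpy as np

def reflection_point(src, rcv, u, travel_time, c=1500.0):
    """Locate a scatterer from a fixed receiver: P = rcv + d*u, where u is the
    unit arrival direction and |src - P| + |P - rcv| = c * travel_time.
    Solving |src - rcv - d*u|^2 = (c*t - d)^2 gives a closed form for d."""
    v = src - rcv
    L = c * travel_time
    d = (L**2 - v @ v) / (2.0 * (L - v @ u))
    return rcv + d * u

# Hypothetical geometry (metres): forward-compute the travel time from a
# known target, then verify the inversion recovers the target location.
src = np.array([0.0, 0.0, 0.0])
rcv = np.array([1.0, 0.0, 0.0])
tgt = np.array([0.8, 0.6, 0.0])
t = (np.linalg.norm(tgt - src) + np.linalg.norm(tgt - rcv)) / 1500.0
u = (tgt - rcv) / np.linalg.norm(tgt - rcv)
print(reflection_point(src, rcv, u, t))  # recovers tgt
```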

Lindwall, D.

2007-12-01

188

3D geometric analysis of the aorta in 3D MRA follow-up pediatric image data  

NASA Astrophysics Data System (ADS)

We introduce a new model-based approach for the segmentation of the thoracic aorta and its main branches from follow-up pediatric 3D MRA image data. For robust segmentation of vessels even in difficult cases (e.g., neighboring structures), we propose a new extended parametric cylinder model which requires only relatively few model parameters. The new model is used in conjunction with a two-step fitting scheme for refining the segmentation result yielding an accurate segmentation of the vascular shape. Moreover, we include a novel adaptive background masking scheme and we describe a spatial normalization scheme to align the segmentation results from follow-up examinations. We have evaluated our proposed approach using different 3D synthetic images and we have successfully applied the approach to follow-up pediatric 3D MRA image data.

Wörz, Stefan; Alrajab, Abdulsattar; Arnold, Raoul; Eichhorn, Joachim; von Tengg-Kobligk, Hendrik; Schenk, Jens-Peter; Rohr, Karl

2014-03-01

189

Automatic registration of multiple texel images (fused lidar/digital imagery) for 3D image creation  

NASA Astrophysics Data System (ADS)

Creation of 3D images through remote sensing is a topic of interest in many applications such as terrain / building modeling and automatic target recognition (ATR). Several photogrammetry-based methods have been proposed that derive 3D information from digital images from different perspectives, and lidar-based methods have been proposed that merge lidar point clouds and texture the merged point clouds with digital imagery. Image registration alone has difficulty with smooth regions with low contrast, whereas point cloud merging alone has difficulty with outliers and lack of proper convergence in the merging process. This paper presents a method to create 3D images that uses the unique properties of texel images (pixel-fused lidar and digital imagery) to improve the quality and robustness of fused 3D images. The proposed method uses both image processing and point-cloud merging to combine texel images in an iterative technique. Since the digital image pixels and the lidar 3D points are fused at the sensor level, more accurate 3D images are generated because registration of image data automatically improves the merging of the point clouds, and vice versa. Examples illustrate the value of this method over other methods.

Budge, Scott E.; Badamikar, Neeraj

2013-05-01

190

Imaging the 3D geometry of pseudotachylyte-bearing faults  

NASA Astrophysics Data System (ADS)

Dynamic friction experiments in granitoid or gabbroic rocks that achieve earthquake slip velocities reveal significant weakening by melt-lubrication of the sliding surfaces. Extrapolation of these experimental results to seismic source depths (> 7 km) suggests that the slip weakening distance (Dw) over which this transition occurs is < 10 cm. The physics of this lubrication in the presence of a fluid (melt) is controlled by surface micro-topography. In order to characterize fault surface microroughness and its evolution during dynamic slip events on natural faults, we have undertaken an analysis of three-dimensional (3D) fault surface microtopography and its causes on a suite of pseudotachylyte-bearing fault strands from the Gole Larghe fault zone, Italy. The solidification of frictional melt soon after seismic slip ceases "freezes in" earthquake source geometries; however, it also precludes the development of extensive fault surface exposures that have enabled direct studies of fault surface roughness. We have overcome this difficulty by imaging the intact 3D geometry of the fault using high-resolution X-ray computed tomography (CT). We collected a suite of 2-3.5 cm diameter cores (2-8 cm long) from individual faults within the Gole Larghe fault zone with a range of orientations (+/- 45 degrees from average strike) and slip magnitudes (0-1 m). Samples were scanned at the University of Texas High Resolution X-ray CT Facility, using an Xradia MicroCT scanner with a 70 kV X-ray source. Individual voxels (3D pixels) are ~36 μm across. Fault geometry is thus imaged over ~4 orders of magnitude from the micron scale up to ~Dw. Pseudotachylyte-bearing fault zones are imaged as tabular bodies of intermediate X-ray attenuation crosscutting high attenuation biotite and low attenuation quartz and feldspar of the surrounding tonalite.
We extract the fault surfaces (contact between the pseudotachylyte bearing fault zone and the wall rock) using integrated manual mapping, automated edge detection, and statistical evaluation. This approach results in a digital elevation model for each side of the fault zone that we use to quantify melt thickness and volume as well as surface microroughness and explore the relationship between these properties and the geometry, slip magnitude, and wall rock mineralogy of the fault.

Resor, Phil; Shervais, Katherine

2013-04-01

191

Radiometric modeling of a 3D imaging laser scanner  

NASA Astrophysics Data System (ADS)

Active imaging systems allow obtaining data in more than two dimensions. In addition to spatial information, these systems are able to provide the intensity distribution of a scene. From this data channel a number of physical magnitudes describing features of the illuminated surface can be recovered. The different behaviours of scene elements with respect to the directionality, wavelength, or polarization of the optical radiation improve the ability to discriminate them. In this work, the capabilities of a 3D imaging laser scanner have been tested from both dimensional and radiometric points of view. To do this, a simple model of the observing system and the scene, in which only the directional propagation of the energy is taken into account, has been developed. Selected parameters corresponding to the transmission, reception and optomechanical components of the active imaging system describe the full sensor. The surfaces of a non-complex scene have been divided into different elements with a defined geometry and directional reflectance. In order to measure the directional reflectance of several materials at the specific wavelength at which the laser scanner works, a laboratory bench has been developed. The calculation of the signal received by the sensor has been carried out using several radiative transfer models. These models were validated by experiments in a laboratory with controlled conditions of illumination and reflectance. To do this, a number of images (angle, angle, range and intensity) were acquired by a commercial laser scanner using several standard targets calibrated in geometry and directional reflectance.

Ortiz, Sergio; Diaz-Caro, Jose; Pareja, Rosario

2005-10-01

192

High-resolution 3D coherent laser radar imaging  

NASA Astrophysics Data System (ADS)

The Super-resolution Sensor System (S3) program is an ambitious effort to exploit the maximum information a laser-based sensor can obtain. At Lockheed Martin Coherent Technologies (LMCT), we are developing methods of incorporating multi-function operation (3D imaging, vibrometry, polarimetry, aperture synthesis, etc.) into a single device. The waveforms will be matched to the requirements of both hardware (e.g., optical amplifiers, modulators) and the targets being imaged. The first successful demonstrations of this program have produced high-resolution, three-dimensional images at intermediate stand-off ranges. In addition, heavy camouflage penetration has been successfully demonstrated. The resolution of a ladar sensor scales with the bandwidth as dR = c/(2B), with a corresponding scaling of the range precision. Therefore, the ability to achieve large bandwidths is crucial to developing a high-resolution sensor. While there are many methods of achieving the benefit of large bandwidths while using lower bandwidth electronics (e.g., an FMCW implementation), the S3 system produces and detects the full waveform bandwidth, enabling a large set of adaptive waveforms for applications requiring large range search intervals (RSI) and short duration waveforms. This paper highlights the three-dimensional imaging and camo penetration.
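The quoted scaling dR = c/(2B) is easy to evaluate directly; a minimal sketch:

```python
C = 299_792_458.0  # speed of light, m/s

def range_resolution(bandwidth_hz):
    """Range resolution of a ladar/radar waveform: dR = c / (2B)."""
    return C / (2.0 * bandwidth_hz)

# e.g. a 1 GHz bandwidth waveform resolves ~15 cm in range
print(f"{range_resolution(1e9) * 100:.1f} cm")  # -> 15.0 cm
```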

Buck, Joseph; Malm, Andrew; Zakel, Andrew; Krause, Brian; Tiemann, Bruce

2007-05-01

193

Image appraisal for 2D and 3D electromagnetic inversion  

SciTech Connect

Linearized methods are presented for appraising image resolution and parameter accuracy in images generated with two and three dimensional non-linear electromagnetic inversion schemes. When direct matrix inversion is employed, the model resolution and model covariance matrices can be directly calculated. The columns of the model resolution matrix are shown to yield empirical estimates of the horizontal and vertical resolution throughout the imaging region. Plotting the square root of the diagonal of the model covariance matrix yields an estimate of how the estimated data noise maps into parameter error. When the conjugate gradient method is employed rather than a direct inversion technique (for example in 3D inversion), an iterative method can be applied to statistically estimate the model covariance matrix, as well as a regularization covariance matrix. The latter estimates the error in the inverted results caused by small variations in the regularization parameter. A method for calculating individual columns of the model resolution matrix using the conjugate gradient method is also developed. Examples of the image analysis techniques are provided on a synthetic cross well EM data set.
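A minimal sketch of the direct-inversion case described above, assuming a Tikhonov-regularized linearized inversion with a random stand-in Jacobian: the model resolution matrix is R = (JᵀJ + λI)⁻¹JᵀJ, and for independent data noise of standard deviation σ_d the model covariance is σ_d²(JᵀJ + λI)⁻¹JᵀJ(JᵀJ + λI)⁻¹. All numbers are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
J = rng.normal(size=(20, 8))   # hypothetical Jacobian (n_data x n_model)
lam = 0.5                      # regularization parameter
sigma_d = 0.01                 # assumed data-noise standard deviation

Ginv = np.linalg.inv(J.T @ J + lam * np.eye(8))
R = Ginv @ J.T @ J             # model resolution: m_est ~ R @ m_true
Cm = sigma_d**2 * Ginv @ (J.T @ J) @ Ginv  # model covariance for iid noise

# Diagonal of R near 1 means a parameter is well resolved; sqrt of the
# diagonal of Cm estimates how data noise maps into parameter error.
print("resolution diagonal:", np.round(np.diag(R), 2))
print("parameter std dev:  ", np.round(np.sqrt(np.diag(Cm)), 4))
```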

Alumbaugh, D.L.; Newman, G.A.

1998-04-01

194

Performance assessment of 3D surface imaging technique for medical imaging applications  

NASA Astrophysics Data System (ADS)

Recent developments in optical 3D surface imaging technologies provide better ways to digitize a 3D surface and its motion in real time. The non-invasive 3D surface imaging approach has great potential for many medical imaging applications, such as motion monitoring in radiotherapy and pre/post evaluation in plastic surgery and dermatology, to name a few. Various commercial 3D surface imaging systems have appeared on the market with different dimensions, speeds and accuracies. For clinical applications, accuracy, reproducibility and robustness across widely heterogeneous skin colors, tones, textures, shape properties, and ambient lighting are crucial. Until now, no systematic approach for evaluating the performance of different 3D surface imaging systems has existed. In this paper, we present a systematic performance assessment approach for 3D surface imaging systems in medical applications. We use this assessment approach to examine a new real-time surface imaging system we developed, dubbed the "Neo3D Camera", for image-guided radiotherapy (IGRT). The assessments include accuracy, field of view, coverage, repeatability, speed, and sensitivity to environment, texture and color.

Li, Tuotuo; Geng, Jason; Li, Shidong

2013-03-01

195

3D Chemical and Elemental Imaging by STXM Spectrotomography  

SciTech Connect

Spectrotomography based on the scanning transmission x-ray microscope (STXM) at the 10ID-1 spectromicroscopy beamline of the Canadian Light Source was used to study two selected unicellular microorganisms. Spatial distributions of sulphur globules, calcium, protein, and polysaccharide in sulphur-metabolizing bacteria (Allochromatium vinosum) were determined at the S 2p, C 1s, and Ca 2p edges. 3D chemical mapping showed that the sulphur globules are located inside the bacteria with a strong spatial correlation with calcium ions (it is most probably calcium carbonate from the medium; however, with STXM the distribution and localization in the cell can be made visible, which is very interesting for a biologist) and polysaccharide-rich polymers, suggesting an influence of the organic components on the formation of the sulphur and calcium deposits. A second study investigated copper accumulating in yeast cells (Saccharomyces cerevisiae) treated with copper sulphate. 3D elemental imaging at the Cu 2p edge showed that Cu(II) is reduced to Cu(I) on the yeast cell wall. A novel needle-like wet cell sample holder for STXM spectrotomography studies of fully hydrated samples is discussed.

Wang, J.; Karunakaran, C.; Lu, Y.; Hormes, J. [Canadian Light Source Inc., University of Saskatchewan, Saskatoon, SK S7N 0X4 (Canada); Hitchcock, A. P. [BIMR, McMaster University, Hamilton, ON L8S 4M1 (Canada); Prange, A. [Microbiology and Food Hygiene, Niederrhein University of Applied Sciences, Moenchengladbach (Germany); Institute for Microbiology and Virology, University of Witten/Herdecke, Witten (Germany); Center for Advanced Microstructures and Devices (CAMD), Louisiana State University, Baton Rouge, LA (United States); Franz, B. [Microbiology and Food Hygiene, Niederrhein University of Applied Sciences, Moenchengladbach (Germany); Harkness, T. [College of Medicine, University of Saskatchewan, Saskatoon, SK S7N 5E5 (Canada); Obst, M. [Center for Applied Geoscience, Tuebingen University, Tuebingen (Germany)

2011-09-09

196

3D Chemical and Elemental Imaging by STXM Spectrotomography  

NASA Astrophysics Data System (ADS)

Spectrotomography based on the scanning transmission x-ray microscope (STXM) at the 10ID-1 spectromicroscopy beamline of the Canadian Light Source was used to study two selected unicellular microorganisms. Spatial distributions of sulphur globules, calcium, protein, and polysaccharide in sulphur-metabolizing bacteria (Allochromatium vinosum) were determined at the S 2p, C 1s, and Ca 2p edges. 3D chemical mapping showed that the sulphur globules are located inside the bacteria with a strong spatial correlation with calcium ions (it is most probably calcium carbonate from the medium; however, with STXM the distribution and localization in the cell can be made visible, which is very interesting for a biologist) and polysaccharide-rich polymers, suggesting an influence of the organic components on the formation of the sulphur and calcium deposits. A second study investigated copper accumulating in yeast cells (Saccharomyces cerevisiae) treated with copper sulphate. 3D elemental imaging at the Cu 2p edge showed that Cu(II) is reduced to Cu(I) on the yeast cell wall. A novel needle-like wet cell sample holder for STXM spectrotomography studies of fully hydrated samples is discussed.

Wang, J.; Hitchcock, A. P.; Karunakaran, C.; Prange, A.; Franz, B.; Harkness, T.; Lu, Y.; Obst, M.; Hormes, J.

2011-09-01

197

Ultra wide band millimeter wave holographic '3-D' imaging of concealed targets on mannequins.  

National Technical Information Service (NTIS)

Ultra wide band (chirp frequency) millimeter wave "3-D" holography is a unique technique for imaging concealed targets on human subjects with extremely high lateral and depth resolution. Recent "3-D" holographic images of full size mannequins with con...

Collins, H. D.; Hall, T. E.; Gribble, R. P.

1994-01-01

198

Automated 3D segmentation of intraretinal layers from optic nerve head optical coherence tomography images  

NASA Astrophysics Data System (ADS)

Optical coherence tomography (OCT), being a noninvasive imaging modality, has begun to find vast use in the diagnosis and management of ocular diseases such as glaucoma, where the retinal nerve fiber layer (RNFL) has been known to thin. Furthermore, the recent availability of considerably larger volumetric data with spectral-domain OCT has increased the need for new processing techniques. In this paper, we present an automated 3-D graph-theoretic approach for the segmentation of 7 surfaces (6 layers) of the retina from 3-D spectral-domain OCT images centered on the optic nerve head (ONH). The multiple surfaces are detected simultaneously through the computation of a minimum-cost closed set in a vertex-weighted graph constructed using edge/regional information, and subject to a priori determined varying surface interaction and smoothness constraints. The method also addresses the challenges posed by the presence of large blood vessels and the optic disc. The algorithm was compared to the average manual tracings of two observers on a total of 15 volumetric scans, and the border positioning error was found to be 7.25 ± 1.08 µm and 8.94 ± 3.76 µm for the normal and glaucomatous eyes, respectively. The RNFL thickness was also computed for 26 normal and 70 glaucomatous scans, where the glaucomatous eyes showed a significant thinning (p < 0.01; mean thickness 73.7 ± 32.7 µm in normal eyes versus 60.4 ± 25.2 µm in glaucomatous eyes).
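Once two of the segmented surfaces are available, the reported RNFL thickness statistics follow directly. A minimal sketch of that last step (the axial voxel size below is a hypothetical assumption, not a value from the paper):

```python
import numpy as np

def rnfl_thickness(top, bottom, voxel_um=3.87):
    """Thickness map (micrometres) between two segmented surfaces.

    top, bottom: 2-D arrays of surface depths (voxel indices) per A-scan,
    as produced by any layer-segmentation step.  voxel_um is a hypothetical
    axial voxel size; real scanners differ."""
    return (np.asarray(bottom, float) - np.asarray(top, float)) * voxel_um

# toy surfaces: the lower boundary sits 20 voxels below the upper one
top = np.zeros((4, 4))
bottom = np.full((4, 4), 20.0)
t = rnfl_thickness(top, bottom)
print(t.mean())  # 77.4
```

Per-scan means and standard deviations of such maps are what the group comparison in the abstract is computed from.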

Antony, Bhavna J.; Abràmoff, Michael D.; Lee, Kyungmoo; Sonkova, Pavlina; Gupta, Priya; Kwon, Young; Niemeijer, Meindert; Hu, Zhihong; Garvin, Mona K.

2010-03-01

199

3D Slicer as an Image Computing Platform for the Quantitative Imaging Network  

PubMed Central

Quantitative analysis has tremendous but mostly unrealized potential in healthcare to support objective and accurate interpretation of clinical imaging. In 2008, the National Cancer Institute began building the Quantitative Imaging Network (QIN) initiative with the goal of advancing quantitative imaging in the context of personalized therapy and evaluation of treatment response. Computerized analysis is an important component contributing to the reproducibility and efficiency of quantitative imaging techniques. The success of quantitative imaging is contingent on robust analysis methods and software tools to bring these methods from bench to bedside. 3D Slicer is a free open source software application for medical image computing. As a clinical research tool, 3D Slicer is similar to a radiology workstation that supports versatile visualizations but also provides advanced functionality such as automated segmentation and registration for a variety of application domains. Unlike a typical radiology workstation, 3D Slicer is free and is not tied to specific hardware. As a programming platform, 3D Slicer facilitates translation and evaluation of new quantitative methods by allowing the biomedical researcher to focus on the implementation of the algorithm, and by providing abstractions for the common tasks of data communication, visualization and user interface development. Compared to other tools that provide aspects of this functionality, 3D Slicer is fully open source and can be readily extended and redistributed. In addition, 3D Slicer is designed to facilitate the development of new functionality in the form of 3D Slicer extensions. In this paper, we present an overview of 3D Slicer as a platform for prototyping, development and evaluation of image analysis tools for clinical research applications. To illustrate the utility of the platform in the scope of QIN, we discuss several use cases of 3D Slicer by the existing QIN teams, and we elaborate on the future directions that can further facilitate development and validation of imaging biomarkers using 3D Slicer.

Fedorov, Andriy; Beichel, Reinhard; Kalpathy-Cramer, Jayashree; Finet, Julien; Fillion-Robin, Jean-Christophe; Pujol, Sonia; Bauer, Christian; Jennings, Dominique; Fennessy, Fiona; Sonka, Milan; Buatti, John; Aylward, Stephen; Miller, James V.; Pieper, Steve; Kikinis, Ron

2012-01-01

200

3D set partitioned embedded zero block coding algorithm for hyperspectral image compression  

Microsoft Academic Search

In this paper, a three-dimensional Set Partitioned Embedded Zero Block Coding (3D SPEZBC) algorithm for hyperspectral image compression is proposed, which is motivated by the EZBC and SPECK algorithms. Experimental results show that the 3D SPEZBC algorithm obviously outperforms 3D SPECK, 3D SPIHT and AT-3D SPIHT, and is slightly better than JPEG2000-MC in the compression performances. Moreover, the 3D SPEZBC

Ying Hou; Guizhong Liu

2007-01-01

201

Retinal imaging after corneal inlay implantation.  

PubMed

We report 2 cases of implantation with the Kamra corneal inlay to describe central and peripheral retinal visibility and the quality of optical coherence tomography (OCT) scans. Under pharmacological mydriasis, the central and peripheral retina was explored without disturbance by an experienced retinal ophthalmologist. Central color imaging was done without difficulty, and peripheral imaging was accurate despite a small bright shadow in every image. The quality scores of the OCT scans for the macular line, macular 3-dimensional cube, and macular radial protocols were 156.51, 77.49, and 84.35, respectively, in patient 1 and 106.66, 63.03, and 64.69, respectively, in patient 2, with no scanning artifacts. The inlay allowed normal visualization of the central and peripheral fundus, as well as good-quality central and peripheral imaging and OCT scans. PMID:21855770

Casas-Llera, Pilar; Ruiz-Moreno, José M; Alió, Jorge L

2011-09-01

202

3D imaging of semiconductor components by discrete laminography  

NASA Astrophysics Data System (ADS)

X-ray laminography is a powerful technique for quality control of semiconductor components. Despite the advantages of nondestructive 3D imaging over 2D techniques based on sectioning, the acquisition time is still a major obstacle for practical use of the technique. In this paper, we consider the application of Discrete Tomography to laminography data, which can potentially reduce the scanning time while still maintaining a high reconstruction quality. By incorporating prior knowledge in the reconstruction algorithm about the materials present in the scanned object, far more accurate reconstructions can be obtained from the same measured data compared to classical reconstruction methods. We present a series of simulation experiments that illustrate the potential of the approach.

Batenburg, K. J.; Palenstijn, W. J.; Sijbers, J.

2014-06-01

203

Unsupervised fuzzy segmentation of 3D magnetic resonance brain images  

NASA Astrophysics Data System (ADS)

Unsupervised fuzzy methods are proposed for segmentation of 3D Magnetic Resonance images of the brain. Fuzzy c-means (FCM) has shown promising results for segmentation of single slices. FCM has been investigated for volume segmentations, both by combining results of single slices and by segmenting the full volume. Different strategies and initializations have been tried. In particular, two approaches have been used: (1) a method by which, iteratively, the furthest sample is split off to form a new cluster center, and (2) the traditional FCM in which the membership grade matrix is initialized in some way. Results have been compared with volume segmentations by k-means and with two supervised methods, k-nearest neighbors and region growing. Results of individual segmentations are presented as well as comparisons on the application of the different methods to a number of tumor patient data sets.
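The core algorithm investigated above, fuzzy c-means, can be sketched in its textbook form (this is plain FCM, not the paper's volume-segmentation pipeline or its cluster-splitting initialization):

```python
import numpy as np

def fcm(X, c=2, m=2.0, iters=100, tol=1e-6, seed=0):
    """Textbook fuzzy c-means.  X: (n, d) samples.
    Returns cluster centers and the (c, n) membership-grade matrix."""
    rng = np.random.default_rng(seed)
    U = rng.random((c, X.shape[0]))
    U /= U.sum(axis=0)                      # memberships sum to 1 per sample
    for _ in range(iters):
        W = U ** m                          # fuzzified weights
        centers = (W @ X) / W.sum(axis=1, keepdims=True)
        d = np.maximum(np.linalg.norm(X[None] - centers[:, None], axis=2), 1e-12)
        inv = d ** (-2.0 / (m - 1.0))       # standard membership update
        Unew = inv / inv.sum(axis=0)
        if np.abs(Unew - U).max() < tol:
            U = Unew
            break
        U = Unew
    return centers, U

# two obvious 1-D "tissue classes"
X = np.array([[0.0], [0.1], [0.2], [5.0], [5.1], [5.2]])
centers, U = fcm(X)
labels = U.argmax(axis=0)
```

For volume segmentation as discussed above, X would be the stacked voxel intensities of all slices rather than a per-slice sample set.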

Velthuizen, Robert P.; Hall, Lawrence O.; Clarke, Laurence P.; Bensaid, Amine M.; Arrington, J. A.; Silbiger, M. L.

1993-07-01

204

3D Zernike Moments and Zernike Affine Invariants for 3D Image Analysis and Recognition  

Microsoft Academic Search

Guided by the results of much research work done in the past on the performance of 2D image moments and moment invariants in the presence of noise, suggesting that by using orthogonal 2D Zernike rather than regular geometrical moments one gets many advantages regarding noise effects, information suppression at low radii and redundancy, we have worked out and introduce a complete set of 3D polynomials orthonormal within the unit

N. Canterakis

1999-01-01

205

A compression algorithm of hyperspectral remote sensing image based on 3-D Wavelet transform and fractal  

Microsoft Academic Search

In this paper, 3-D wavelet-fractal coding was used to compress hyperspectral remote sensing images. The classical eight affine transformations of 2-D fractal image compression were generalized to nineteen for 3-D fractal image compression. The hyperspectral image data cube was first transformed by a 3-D wavelet and then 3-D fractal compression coding was applied to the lowest frequency
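The 3-D wavelet step described above can be illustrated with a single-level separable Haar transform applied along each axis of the data cube (an illustrative stand-in, not the paper's codec):

```python
import numpy as np

def haar3d(vol):
    """One level of a separable 3-D Haar transform: along each axis the
    volume is split into an average (low-pass) and a detail (high-pass)
    band.  Assumes even dimensions."""
    v = np.asarray(vol, float)
    for axis in range(3):
        v = np.moveaxis(v, axis, 0)
        lo = (v[0::2] + v[1::2]) / np.sqrt(2)   # average band
        hi = (v[0::2] - v[1::2]) / np.sqrt(2)   # detail band
        v = np.moveaxis(np.concatenate([lo, hi], axis=0), 0, axis)
    return v

cube = np.arange(8, dtype=float).reshape(2, 2, 2)
coeffs = haar3d(cube)
# the transform is orthonormal, so signal energy is preserved
print(np.allclose((coeffs ** 2).sum(), (cube ** 2).sum()))  # True
```

The fractal coder would then operate on the resulting low-frequency band, as the abstract describes.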

Pan Wei; Zou Yi; Ao Lu

2008-01-01

206

Needle placement for piriformis injection using 3-D imaging.  

PubMed

Piriformis syndrome is a pain syndrome originating in the buttock and is attributed to 6% - 8% of patients referred for the treatment of back and leg pain. Treatment of piriformis syndrome using fluoroscopy, computed tomography (CT), electromyography (EMG), and ultrasound (US) has become standard practice, and has evolved to include fluoroscopy and EMG with CT guidance. We present a case study of 5 successful piriformis injections using 3-D computer-assisted electromagnetic needle tracking coupled with ultrasound. A 6-degree-of-freedom electromagnetic position tracker was attached to the ultrasound probe, allowing the system to detect the position and orientation of the probe in the magnetic field. The tracked ultrasound probe was used to find the posterior superior iliac spine. Subsequently, 3 points were captured to register the ultrasound image with the CT or magnetic resonance image scan. Once registration was obtained, the navigation system visualized the tracked needle relative to the CT scan in real time using 2 orthogonal multi-planar reconstructions centered at the tracked needle tip. By comparison, a recent study revealed that fluoroscopically guided injections achieved 30% accuracy, roughly one third that of ultrasound-guided injections. This novel technique exhibited an accurate needle-guidance precision of 98% while advancing to the piriformis muscle and avoiding the sciatic nerve. The mean (± SD) procedure time was 19.08 (± 4.9) minutes. The technique allows electromagnetic instrument-tip tracking with real-time 3-D guidance to the selected target. As with any new technique, a learning curve is expected; however, this technique could offer an alternative that minimizes radiation exposure. PMID:23703429
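The 3-point registration step mentioned above is, at heart, a least-squares rigid alignment of corresponding landmarks. A standard Kabsch-style sketch of that idea (not the navigation system's actual implementation; with 3 non-collinear points the fit is exact up to noise):

```python
import numpy as np

def rigid_register(P, Q):
    """Least-squares rigid transform (rotation R, translation t) mapping
    landmark set P onto Q, via the Kabsch algorithm."""
    P, Q = np.asarray(P, float), np.asarray(Q, float)
    cp, cq = P.mean(0), Q.mean(0)
    H = (P - cp).T @ (Q - cq)               # cross-covariance of centred sets
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # no reflection
    R = Vt.T @ D @ U.T
    t = cq - R @ cp
    return R, t

# three ultrasound-space landmarks and their CT-space counterparts
P = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0]], float)
Rz = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]], float)  # 90° about z
Q = P @ Rz.T + np.array([1.0, 2.0, 3.0])
R, t = rigid_register(P, Q)
print(np.allclose(R @ P[1] + t, Q[1]))  # True
```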

Clendenen, Steven R; Candler, Shawn A; Osborne, Michael D; Palmer, Scott C; Duench, Stephanie; Glynn, Laura; Ghazi, Salim M

2013-01-01

207

X-ray polycapillary characterization and 3D imaging properties  

NASA Astrophysics Data System (ADS)

Polycapillary optics are highly efficient devices for focusing high-energy photons and thermal neutrons. Here we present our studies modeling and simulating X-ray propagation through cylindrical polycapillary optics using PolyCAD, an original ray-tracing package developed by our group. PolyCAD is a CAD program designed for X-ray photon tracing in polycapillary optics. PolyCAD allows the simulation of any type of X-ray source, such as an X-ray tube of finite beam dimensions or an astrophysical object, in combination with different kinds of polycapillary optics. Experimental data have been compared with theoretical predictions; in particular, the focusing properties of a cylindrical lens have been visualized by collecting 3D images and reconstructed using PolyCAD simulations. The acquired images show that the focal spot profiles, including intensity and widths, at different projection distances agree with calculations. In the second part of this work, we present some characterization methodologies used for studying several kinds of polycapillary optics. The procedure is divided principally into two kinds of measurements: "angular measurements" for studying the lens's transmission coefficient and focusing properties, and "CCD images" for characterizing the lens's focal spot.

Hampai, Dariush; Cappuccio, Giorgio; Cibin, Giannantonio; Dabagov, Sultan B.; Sessa, Vito

2007-05-01

208

3D imaging studies of rigid-fiber sedimentation  

NASA Astrophysics Data System (ADS)

Fibers are industrially important particles that experience coupling between rotational and translational motion during sedimentation. This leads to helical trajectories that have yet to be accurately predicted or measured. Sedimentation experiments and hydrodynamic analysis were performed on 11 copper "fibers" of average length 10.3 mm and diameter 0.20 mm. Each fiber contained three linear but non-coplanar segments. Fiber dimensions were measured by imaging their 2D projections on three planes. The fibers were sequentially released into silicone oil contained in a transparent cylinder of square cross section. Identical, synchronized cameras were mounted to a moveable platform and imaged the cylinder from orthogonal directions. The cameras were fixed in position during the time that a fiber remained in the field of view. Subsequently, the cameras were controllably moved to the next lower field of view. The trajectories of descending fibers were followed over distances up to 250 mm. Custom software was written to extract fiber orientation and trajectory from the 3D images. Fibers with similar terminal velocity often had significantly different terminal angular velocities. Both were well-predicted by theory. The radius of the helical trajectory was hard to predict when angular velocity was high, probably reflecting uncertainties in fiber shape, initial velocity, and fluid conditions associated with launch. Nevertheless, lateral excursion of fibers during sedimentation was reasonably predicted by fiber curl and asymmetry, suggesting the possibility of sorting fibers according to their shape.

Vahey, David W.; Tozzi, Emilio J.; Scott, C. Tim; Klingenberg, Daniel J.

2011-01-01

209

3D Segmentation of Prostate Ultrasound images Using Wavelet Transform.  

PubMed

The current definitive diagnosis of prostate cancer is transrectal ultrasound (TRUS) guided biopsy. However, the current procedure is limited by using 2D biopsy tools to target 3D biopsy locations. This paper presents a new method for automatic segmentation of the prostate in three-dimensional transrectal ultrasound images, by extracting texture features and by statistically matching the geometrical shape of the prostate. A set of wavelet-based support vector machines (W-SVMs) are located and trained at different regions of the prostate surface. The W-SVMs capture texture priors of ultrasound images for classification of prostate and non-prostate tissues in different zones around the prostate boundary. In the segmentation procedure, these W-SVMs are trained in the three sagittal, coronal, and transverse planes. The pre-trained W-SVMs are employed to tentatively label each voxel around the surface of the model as a prostate or non-prostate voxel by texture matching. The labeled voxels in the three planes are post-processed and then overlaid on a prostate probability model, which is created from 10 segmented prostate data sets. Consequently, each voxel has four labels: one from each of the sagittal, coronal, and transverse planes, plus one probability label. By defining a weight function for the labeling in each region, each voxel is finally labeled as a prostate or non-prostate voxel. Experimental results using real patient data show the good performance of the proposed model in segmenting the prostate from ultrasound images. PMID:22468205

Akbari, Hamed; Yang, Xiaofeng; Halig, Luma V; Fei, Baowei

2011-01-01

210

Multispectral retinal image analysis: a novel non-invasive tool for retinal imaging  

PubMed Central

Purpose To develop a non-invasive method for quantification of blood and pigment distributions across the posterior pole of the fundus from multispectral images using a computer-generated reflectance model of the fundus. Methods A computer model was developed to simulate light interaction with the fundus at different wavelengths. The distribution of macular pigment (MP) and retinal haemoglobins in the fundus was obtained by comparing the model predictions with multispectral image data at each pixel. Fundus images were acquired from 16 healthy subjects from various ethnic backgrounds, and parametric maps showing the distribution of MP and of retinal haemoglobins throughout the posterior pole were computed. Results The relative distributions of MP and retinal haemoglobins in the subjects were successfully derived from multispectral images acquired at wavelengths 507, 525, 552, 585, 596, and 611 nm, provided that certain conditions were met and eye movement between exposures was minimal. Recovery of other fundus pigments was not feasible, and further development of the imaging technique and refinement of the software are necessary to understand the full potential of multispectral retinal image analysis. Conclusion The distributions of MP and retinal haemoglobins obtained in this preliminary investigation are in good agreement with published data on normal subjects. The ongoing development of the imaging system should allow for absolute parameter values to be computed. A further study will investigate subjects with known pathologies to determine the effectiveness of the method as a screening and diagnostic tool.
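The per-pixel comparison of model predictions with multispectral data can be illustrated, in a greatly simplified linear form, as spectral unmixing: solving for pigment abundances that best explain the measured per-wavelength signal. The signature matrix below is invented for illustration; the study inverts a full computational reflectance model, not a linear one:

```python
import numpy as np

wavelengths = [507, 525, 552, 585, 596, 611]   # nm, as used in the study
# hypothetical per-wavelength "signatures" for two chromophores
# (columns: macular pigment, haemoglobin) -- made-up numbers
S = np.array([[0.90, 0.10],
              [0.70, 0.30],
              [0.40, 0.60],
              [0.20, 0.80],
              [0.15, 0.85],
              [0.10, 0.90]])

def unmix(pixel_signal):
    """Least-squares pigment abundances for one pixel."""
    a, *_ = np.linalg.lstsq(S, pixel_signal, rcond=None)
    return a

true = np.array([0.3, 0.7])
measured = S @ true                 # noiseless synthetic pixel
print(np.allclose(unmix(measured), true))  # True
```

Applying `unmix` to every pixel yields parametric maps of the kind described in the Results.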

Calcagni, A; Gibson, J M; Styles, I B; Claridge, E; Orihuela-Espina, F

2011-01-01

211

Multimodal Imaging in Hereditary Retinal Diseases  

PubMed Central

Introduction. In this retrospective study we evaluated the multimodal visualization of retinal genetic diseases to better understand their natural course. Material and Methods. We reviewed the charts of 70 consecutive patients with different genetic retinal pathologies who had previously undergone multimodal imaging analyses. Genomic DNA was extracted from peripheral blood and genotyped at the known locus for the different diseases. Results. The medical records of 3 families of a 4-generation pedigree affected by North Carolina macular dystrophy were reviewed. A total of 8 patients with Stargardt disease were evaluated for their two main defining clinical characteristics, yellow subretinal flecks and central atrophy. Nine male patients with a previous diagnosis of choroideremia and eleven female carriers were evaluated. Fourteen patients with Best vitelliform macular dystrophy and 6 family members with autosomal recessive bestrophinopathy were included. Seven patients with enhanced s-cone syndrome were ascertained. Lastly, we included 3 unrelated patients with fundus albipunctatus. Conclusions. In hereditary retinal diseases, clinical examination is often not sufficient for evaluating the patient's condition. Retinal imaging then becomes important in making the diagnosis, in monitoring the progression of disease, and as a surrogate outcome measure of the efficacy of an intervention.

Morara, Mariachiara; Veronese, Chiara; Nucci, Paolo; Ciardella, Antonio P.

2013-01-01

212

Adaptive optics retinal imaging: emerging clinical applications.  

PubMed

The human retina is a uniquely accessible tissue. Tools like scanning laser ophthalmoscopy and spectral domain-optical coherence tomography provide clinicians with remarkably clear pictures of the living retina. Although the anterior optics of the eye permit such non-invasive visualization of the retina and associated pathology, the same optics induce significant aberrations that preclude cellular-resolution imaging in most cases. Adaptive optics (AO) imaging systems use active optical elements to compensate for aberrations in the optical path between the object and the camera. When applied to the human eye, AO allows direct visualization of individual rod and cone photoreceptor cells, retinal pigment epithelium cells, and white blood cells. AO imaging has changed the way vision scientists and ophthalmologists see the retina, helping to clarify our understanding of retinal structure, function, and the etiology of various retinal pathologies. Here, we review some of the advances that were made possible with AO imaging of the human retina and discuss applications and future prospects for clinical imaging. PMID:21057346

Godara, Pooja; Dubis, Adam M; Roorda, Austin; Duncan, Jacque L; Carroll, Joseph

2010-12-01

213

Scale Matching of 3D Point Clouds by Finding Keyscales with Spin Images  

Microsoft Academic Search

In this paper we propose a method for matching the scales of 3D point clouds. 3D point sets of the same scene obtained by 3D reconstruction techniques usually differ in scales. To match scales, we propose a keyscale that characterizes the scale of a given 3D point cloud. By performing PCA of spin images over different scales, a keyscale is

Toru Tamaki; Shunsuke Tanigawa; Yuji Ueno; Bisser Raytchev; Kazufumi Kaneda

2010-01-01

214

ROIC for gated 3D imaging LADAR receiver  

NASA Astrophysics Data System (ADS)

Time-of-flight laser range finding, deep-space communications and scanning video imaging are three applications requiring very low noise optical receivers to achieve detection of fast and weak optical signals. HgCdTe electron-initiated avalanche photodiodes (e-APDs) in linear multiplication mode are the detector of choice thanks to their high quantum efficiency, high gain at low bias, high bandwidth and low noise factor. In this project, a readout integrated circuit (ROIC) for a hybrid e-APD focal plane array (FPA) with 100 µm pitch was designed for a gated 3D-LADAR optical receiver. The ROIC works at 77 K and comprises the unit cell circuit, column-level circuit, timing control, bias circuit and output driver. The unit cell circuit is the key component, consisting of a preamplifier, correlated double sampling (CDS), bias circuit and timing control module. Specifically, the preamplifier uses a capacitor-feedback transimpedance amplifier (CTIA) structure with two capacitors offering switchable capacitance for passive/active dual-mode imaging. The main element of the column-level circuit is a precision multiply-by-two circuit implemented with switched capacitors; switched-capacitor circuits are well suited to the signal processing of a ROIC owing to their working characteristics. The output driver is a simple unity-gain buffer; because the signal is amplified in the column-level circuit, the amplifier in the buffer is a rail-to-rail design. In active imaging mode, the integration time is 80 ns; for integrated currents from 200 nA to 4 µA, the circuit shows nonlinearity of less than 1%. In passive imaging mode, the integration time is 150 ns; for integrated currents from 1 nA to 20 nA, the nonlinearity is likewise less than 1%.

Chen, Guoqiang; Zhang, Junling; Wang, Pan; Zhou, Jie; Gao, Lei; Ding, Ruijun

2013-09-01

215

3D Imaging of Enzymes Working in Situ.  

PubMed

Today, the development of slowly digestible food with positive health impact and the production of biofuels are matters of intense research. The latter is achieved via enzymatic hydrolysis of starch or biomass such as lignocellulose. Label-free imaging, using UV autofluorescence, provides a great tool to follow a single enzyme acting on a non-UV-fluorescent substrate. In this article, we report synchrotron DUV fluorescence for 3-dimensional imaging to visualize in situ the diffusion of enzymes on a solid substrate. The degradation pathway of single starch granules by two amylases optimized for biofuel production and industrial starch hydrolysis was followed by tryptophan autofluorescence (excitation at 280 nm, emission filter at 350 nm). The new setup has been specially designed and developed for a 3D representation of the enzyme-substrate interaction during hydrolysis. Thus, this tool is particularly effective for improving knowledge and understanding of enzymatic hydrolysis of solid substrates such as starch and lignocellulosic biomass. It could open the way to new routes in the field of green chemistry and sustainable development, that is, in biotechnology, biorefining, or biofuels. PMID:24796213

Jamme, F; Bourquin, D; Tawil, G; Viksø-Nielsen, A; Buléon, A; Réfrégiers, M

2014-06-01

216

Automated segmentation of outer retinal layers in macular OCT images of patients with retinitis pigmentosa.  

PubMed

To provide a tool for quantifying the effects of retinitis pigmentosa (RP) seen on spectral domain optical coherence tomography images, an automated layer segmentation algorithm was developed. This algorithm, based on dual-gradient information and a shortest path search strategy, delineates the inner limiting membrane and three outer retinal boundaries in optical coherence tomography images from RP patients. In addition, an automated inner segment (IS)/outer segment (OS) contour detection method based on the segmentation results is proposed to quantify the locus of points at which the OS thickness goes to zero in a 3D volume scan. The segmentation algorithm and the IS/OS contour were validated with manual segmentation data. The segmentation and IS/OS contour results on repeated measures showed good within-day repeatability, while the results on data acquired on average 22.5 months afterward demonstrated a possible means to follow disease progression. In particular, the automatically generated IS/OS contour provided a possible objective structural marker for RP progression. PMID:21991543
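The shortest-path boundary search at the heart of the method above can be sketched as column-wise dynamic programming with a ±1-row smoothness constraint (a simplified 2-D sketch of the idea, not the published dual-gradient algorithm):

```python
def shortest_path_boundary(cost):
    """Minimum-cost boundary through a 2-D cost image, one row per column,
    allowing the row to change by at most 1 between adjacent columns.

    cost: list of columns, each a list of per-row costs.
    Returns the chosen row index for every column."""
    ncols, nrows = len(cost), len(cost[0])
    acc = [list(cost[0])]                     # accumulated cost per row
    back = []                                 # back-pointers for traceback
    for j in range(1, ncols):
        prev = acc[-1]
        col, bk = [], []
        for i in range(nrows):
            best = min((k for k in (i - 1, i, i + 1) if 0 <= k < nrows),
                       key=lambda k: prev[k])
            col.append(cost[j][i] + prev[best])
            bk.append(best)
        acc.append(col)
        back.append(bk)
    i = min(range(nrows), key=lambda r: acc[-1][r])   # cheapest end row
    path = [i]
    for bk in reversed(back):
        i = bk[i]
        path.append(i)
    path.reverse()
    return path

# a low-cost "layer boundary" along row 1
path = shortest_path_boundary([[5, 0, 5], [5, 0, 5], [5, 0, 5]])
print(path)  # [1, 1, 1]
```

In practice the cost image would be built from the dual-gradient information mentioned in the abstract.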

Yang, Qi; Reisman, Charles A; Chan, Kinpui; Ramachandran, Rithambara; Raza, Ali; Hood, Donald C

2011-09-01

217

Automated segmentation of outer retinal layers in macular OCT images of patients with retinitis pigmentosa  

PubMed Central

To provide a tool for quantifying the effects of retinitis pigmentosa (RP) seen on spectral domain optical coherence tomography images, an automated layer segmentation algorithm was developed. This algorithm, based on dual-gradient information and a shortest path search strategy, delineates the inner limiting membrane and three outer retinal boundaries in optical coherence tomography images from RP patients. In addition, an automated inner segment (IS)/outer segment (OS) contour detection method based on the segmentation results is proposed to quantify the locus of points at which the OS thickness goes to zero in a 3D volume scan. The segmentation algorithm and the IS/OS contour were validated with manual segmentation data. The segmentation and IS/OS contour results on repeated measures showed good within-day repeatability, while the results on data acquired on average 22.5 months afterward demonstrated a possible means to follow disease progression. In particular, the automatically generated IS/OS contour provided a possible objective structural marker for RP progression.

Yang, Qi; Reisman, Charles A.; Chan, Kinpui; Ramachandran, Rithambara; Raza, Ali; Hood, Donald C.

2011-01-01

218

3D image processing architecture for camera phones  

NASA Astrophysics Data System (ADS)

Putting high-quality and easy-to-use 3D technology into the hands of regular consumers has become a recent challenge as interest in 3D technology has grown. Making 3D technology appealing to the average user requires that it be made fully automatic and foolproof. Designing a fully automatic 3D capture and display system requires: 1) identifying critical 3D technology issues like camera positioning, disparity control rationale, and screen geometry dependency, and 2) designing methodology to automatically control them. Implementing 3D capture functionality on phone cameras necessitates designing algorithms that fit within the processing capabilities of the device. Various constraints, like sensor position tolerances, sensor 3A tolerances, post-processing, 3D video resolution and frame rate, should be carefully considered for their influence on the 3D experience. Issues with migrating functions such as zoom and pan from the 2D usage model (both during capture and display) to 3D need to be resolved to ensure the highest level of user experience. It is also very important that the 3D usage scenario (including interactions between the user and the capture/display device) is carefully considered. Finally, both the processing power of the device and the practicality of the scheme need to be taken into account while designing the calibration and processing methodology.
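The disparity-control rationale mentioned above rests on the basic stereo relation between disparity and depth, Z = f·B/d. A minimal sketch (units and values are illustrative only):

```python
def depth_from_disparity(focal_px, baseline_mm, disparity_px):
    """Stereo depth from disparity: Z = f * B / d.

    focal_px: focal length in pixels; baseline_mm: camera separation;
    disparity_px: horizontal shift of a feature between the two views."""
    return focal_px * baseline_mm / disparity_px

# e.g. f = 1000 px, baseline 30 mm, disparity 10 px -> 3000 mm of depth
print(depth_from_disparity(1000, 30, 10))  # 3000.0
```

Because depth is inversely proportional to disparity, a capture pipeline must bound disparities so that, on a given screen geometry, the rendered depth range stays comfortable for the viewer.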

Atanassov, Kalin; Ramachandra, Vikas; Goma, Sergio R.; Aleksic, Milivoje

2011-01-01

219

Retrospective Illumination Correction of Retinal Images  

PubMed Central

A method for correction of nonhomogeneous illumination based on optimization of the parameters of a B-spline shading model with respect to Shannon's entropy is presented. The evaluation of Shannon's entropy is based on the Parzen windowing method (Mangin, 2000) with the spline-based shading model. This allows us to express the derivatives of the entropy criterion analytically, which enables efficient use of gradient-based optimization algorithms. Seven different gradient- and nongradient-based optimization algorithms were initially tested on a set of 40 simulated retinal images, generated by a model of the respective image acquisition system. Among the tested optimizers, the gradient-based optimizer with varying step was shown to have the fastest convergence while providing the best precision. The final algorithm proved able to suppress approximately 70% of the artificially introduced nonhomogeneous illumination. To assess the practical utility of the method, it was qualitatively tested on a set of 336 real retinal images; it substantially eliminated the illumination inhomogeneity in most cases. The primary application of this method is in preprocessing of retinal images, as preparation for reliable segmentation or registration.
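The entropy criterion being optimized can be illustrated with the simplest estimator, a grey-level histogram (the paper uses a smoother Parzen-window estimate precisely so that the criterion becomes differentiable in the shading-model parameters):

```python
import math

def shannon_entropy(pixels, nbins=32):
    """Histogram estimate of the Shannon entropy of an image's grey levels.
    A shading artefact spreads the histogram and raises this value, which
    is why entropy is a sensible criterion to minimise."""
    lo, hi = min(pixels), max(pixels)
    width = (hi - lo) / nbins or 1.0          # guard against a flat image
    counts = [0] * nbins
    for p in pixels:
        counts[min(int((p - lo) / width), nbins - 1)] += 1
    n = len(pixels)
    return -sum(c / n * math.log2(c / n) for c in counts if c)

flat = [10.0] * 100                            # uniform region: one grey level
shaded = [10.0 + i * 0.5 for i in range(100)]  # same region with a shading ramp
print(shannon_entropy(shaded) > shannon_entropy(flat))  # True
```

The correction step would search for shading-model parameters whose removal drives this criterion down.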

Kubecka, Libor; Jan, Jiri; Kolar, Radim

2010-01-01

220

3D quantitative Fourier analysis of second harmonic generation microscopy images of collagen structure in cartilage  

NASA Astrophysics Data System (ADS)

One of the main advantages of nonlinear microscopy is that it provides 3D imaging capability. Second harmonic generation is widely used to image the 3D structure of collagen fibers, and several works have highlighted the modification of the collagen fiber fabric in important diseases. By using an ellipsoid-specific fitting technique on the Fourier-transformed image, we show, using both synthetic images and SHG images from cartilage, that the 3D direction of the collagen fibers can be robustly determined.

Romijn, Elisabeth I.; Lilledahl, Magnus B.

2013-02-01

221

A joint estimation detection of Glaucoma progression in 3D spectral domain optical coherence tomography optic nerve head images  

NASA Astrophysics Data System (ADS)

Glaucoma is an ocular disease characterized by distinctive changes in the optic nerve head (ONH) and visual field. Glaucoma can strike without symptoms and causes blindness if left untreated. Therefore, early disease detection is important so that treatment can be initiated and blindness prevented. In this context, important advances in technology for non-invasive imaging of the eye have been made, providing quantitative tools to measure structural changes in ONH topography, an essential element for glaucoma detection and monitoring. 3D spectral domain optical coherence tomography (SD-OCT), an optical imaging technique, has been commonly used to discriminate glaucomatous from healthy subjects. In this paper, we present a new framework for detection of glaucoma progression using 3D SD-OCT images. In contrast to previous works that rely on the retinal nerve fiber layer (RNFL) thickness measurement provided by commercially available spectral-domain optical coherence tomographs, we consider the whole 3D volume for change detection. To integrate a priori knowledge, and in particular the spatial voxel dependency in the change detection map, we propose the use of a Markov random field to handle this dependency. To accommodate the presence of false positive detections, the estimated change detection map is then used to classify a 3D SD-OCT image into the "non-progressing" and "progressing" glaucoma classes, based on a fuzzy logic classifier. We compared the diagnostic performance of the proposed framework to existing methods of progression detection.
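The MRF-regularized change map can be sketched with iterated conditional modes (ICM) on a 2-D, 4-neighbour Potts model; this is a small stand-in for the spatial-dependency idea, not the authors' estimator (which works in 3-D on SD-OCT volumes):

```python
def icm_change_map(diff, beta=1.0, thresh=0.5, sweeps=5):
    """ICM smoothing of a binary change map.

    diff: 2-D list of per-pixel change evidence in [0, 1].  Each pixel's
    label trades off data fidelity against agreement with its neighbours."""
    h, w = len(diff), len(diff[0])
    lab = [[1 if diff[i][j] > thresh else 0 for j in range(w)] for i in range(h)]
    for _ in range(sweeps):
        for i in range(h):
            for j in range(w):
                nb = [lab[a][b]
                      for a, b in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1))
                      if 0 <= a < h and 0 <= b < w]
                def energy(l):
                    data = (diff[i][j] - l) ** 2       # data fidelity
                    smooth = sum(l != n for n in nb)   # Potts smoothness
                    return data + beta * smooth
                lab[i][j] = min((0, 1), key=energy)
    return lab

diff = [[0.9] * 3 for _ in range(3)]
diff[1][1] = 0.1                      # one noisy pixel inside a changed region
lab = icm_change_map(diff)
print(lab)  # [[1, 1, 1], [1, 1, 1], [1, 1, 1]]
```

The spatial prior overrules the isolated low-evidence pixel, exactly the false-positive suppression the abstract attributes to the MRF.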

Belghith, Akram; Bowd, Christopher; Weinreb, Robert N.; Zangwill, Linda M.

2014-03-01

222

Vessel segmentation in retinal images  

NASA Astrophysics Data System (ADS)

Detection of the papilla region and vessel detection in images of the retina are problems that can be solved with pattern recognition techniques. Topographic images, as provided e.g. by the HRT device, as well as fundus images can be used as sources for the detection. It is of diagnostic importance to separate vessels inside the papilla area from those outside this area. Therefore, detection of the papilla is also important for vessel segmentation. In this contribution we present state-of-the-art methods for automatic disk segmentation and compare their results. Vessels detected with matched filters (wavelets, derivatives of the Gaussian, etc.) are shown, as well as vessel segmentation using image morphology. We present our own method for vessel segmentation based on a special matched filter followed by image morphology, and we argue for a new matched filter that is suited for large vessels in HRT images.
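A matched filter of the kind described, a Gaussian cross-profile rotated over several orientations with the per-pixel maximum response kept, can be sketched as follows; `sigma`, `length`, and the orientation count are illustrative, and the morphological post-processing step is omitted.

```python
import numpy as np
from scipy.ndimage import convolve

def vessel_response(img, sigma=1.5, length=7, n_angles=8):
    """Matched-filter response for dark line-like structures: a zero-mean,
    inverted Gaussian cross-profile kernel rotated over several
    orientations; the per-pixel maximum response is kept."""
    half = length // 2
    ys, xs = np.mgrid[-half:half + 1, -half:half + 1]
    best = np.full(img.shape, -np.inf)
    for theta in np.linspace(0.0, np.pi, n_angles, endpoint=False):
        u = -xs * np.sin(theta) + ys * np.cos(theta)   # distance across the vessel
        kern = -np.exp(-u ** 2 / (2.0 * sigma ** 2))   # negative: vessels are dark
        kern -= kern.mean()                            # zero-mean matched filter
        best = np.maximum(best, convolve(img, kern))
    return best

img = np.ones((21, 21))
img[10, :] = 0.0                   # dark horizontal "vessel"
resp = vessel_response(img)
print(resp[10, 10] > resp[5, 10])  # strongest response on the vessel
```

In a full pipeline the response map would be thresholded and cleaned with morphological opening/closing, as the abstract describes.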

Paulus, Dietrich; Chastel, Serge; Feldmann, Tobias

2005-04-01

223

Image-Based 3d Reconstruction and Analysis for Orthodontia  

NASA Astrophysics Data System (ADS)

Among the main tasks of orthodontia are the analysis of teeth arches and treatment planning to provide the correct position for every tooth. The treatment plan is based on measurement of teeth parameters and on designing the ideal dental arch curve which the teeth are to form after treatment. The most common technique for moving teeth uses standard brackets, which are fixed on the teeth, and a wire of given shape which is clamped by these brackets to produce the forces necessary to move every tooth in a given direction. The disadvantages of the standard bracket technique are the low accuracy of tooth dimension measurements and the difficulty of applying the standard approach to a wide variety of complex orthodontic cases. An image-based technique for orthodontic planning, treatment and documentation aimed at overcoming these disadvantages is proposed. The proposed approach provides accurate measurement of the teeth parameters needed for adequate planning, design of the correct teeth positions and monitoring of the treatment process. The developed technique applies photogrammetric means for teeth arch 3D model generation, bracket position determination and teeth shifting analysis.

Knyaz, V. A.

2012-08-01

224

3D Shear wave imaging: A simulation and experimental study  

Microsoft Academic Search

The wave equation describing shear wave propagation in three-dimensional (3-D) viscoelastic media is solved numerically with a finite differences time domain (FDTD) method. Solutions are simulated in terms of scatterer velocity waves and verified via comparison to 3-D experimentally acquired wave fields in a heterogeneous hydrogel phantom. The numerical algorithm is used as a tool to study wave

Marko Orescanin; Yue Wang; Michael F. Insana

2010-01-01

225

Multimodality Image Fusion for 3-D Model Building with Applications.  

National Technical Information Service (NTIS)

In this investigation, the authors propose a methodology for 3-D model building through the fusion of multimodality data provided from space- borne and/or air-borne sensors. A 3-D model of a target area can be built using different data types (e.g., Lands...

A. A. Farag

2004-01-01

226

3-D Reconstruction From 2-D Radiographic Images and Its Application to Clinical Veterinary Medicine  

NASA Astrophysics Data System (ADS)

3D imaging techniques are very important and indispensable in diagnosis. The mainstream approach is one in which the 3D image is reconstructed from a set of slice images, as in X-ray CT and MRI. However, these systems require large space and high costs. On the other hand, a low-cost, small-size 3D imaging system is needed in clinical veterinary medicine, for example for diagnosis in an X-ray car or in a pasture area. We propose a novel 3D imaging technique using 2-D X-ray radiographic images. This system can be realized much more cheaply than X-ray CT and makes it possible to obtain 3D images in an X-ray car or with portable X-ray equipment. In this paper, a 3D visualization technique from 2-D radiographic images is proposed and several reconstructions are shown. These reconstructions are evaluated by veterinarians.

Hamamoto, Kazuhiko; Sato, Motoyoshi

227

Segmented images and 3D images for studying the anatomical structures in MRIs  

NASA Astrophysics Data System (ADS)

To identify pathological findings in MRIs, the anatomical structures in MRIs should be identified in advance. For studying the anatomical structures in MRIs, an educational tool that includes the horizontal, coronal and sagittal MRIs of the entire body, corresponding segmented images, 3D images, and browsing software is necessary. Such an educational tool, however, is hard to obtain. Therefore, in this research, such an educational tool, which helps medical students and doctors study the anatomical structures in MRIs, was made as follows. A healthy, young Korean male adult with standard body shape was selected. Six hundred thirteen horizontal MRIs of the entire body were scanned and input to a personal computer. Sixty anatomical structures in the horizontal MRIs were segmented to make horizontal segmented images. Coronal and sagittal MRIs and coronal and sagittal segmented images were made. 3D images of the anatomical structures in the segmented images were reconstructed by the surface rendering method. Browsing software for the MRIs, segmented images, and 3D images was composed. This educational tool, which includes horizontal, coronal and sagittal MRIs of the entire body, corresponding segmented images, 3D images, and browsing software, is expected to help medical students and doctors study anatomical structures in MRIs.

Lee, Yong Sook; Chung, Min Suk; Cho, Jae Hyun

2004-05-01

228

Retinal detachment repair - series (image)  

MedlinePLUS

The retina is the internal layer of the eye that receives and transmits images that have passed through and ... associated with a tear or hole in the retina through which the internal fluids of the eye ...

229

DLP-based structured light 3D imaging technologies and applications  

NASA Astrophysics Data System (ADS)

In this paper, we provide a thorough overview of recent advances in 3D surface imaging technologies. We focus particularly on non-contact 3D surface measurement techniques based on structured illumination. The high-speed and high-resolution pattern projection capability offered by the digital light processing (DLP) technology, together with the recent advances in imaging sensor technologies, may enable new generation systems for 3D surface measurement applications that provide much better functionality and performance than existing ones, in terms of speed, accuracy, resolution, size, cost, and ease of use. Performance indexes of 3D imaging systems in general are discussed and various 3D surface imaging schemes are categorized, illustrated, and compared. Calibration techniques are also discussed since they play critical roles in achieving the required precision. Benefits and challenges of using DLP technology in 3D imaging applications are discussed. Numerous applications of 3D technologies are discussed with several examples.
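One widely used structured-light scheme in DLP-based scanners is three-step phase shifting, where three fringe patterns shifted by 120° are projected and the wrapped phase is recovered per pixel. The sketch below is a generic illustration of that formula on synthetic fringes, not tied to any particular system surveyed in the paper.

```python
import numpy as np

def three_step_phase(i1, i2, i3):
    """Wrapped phase from three fringe images with -120/0/+120 degree
    shifts: phi = atan2(sqrt(3) * (I1 - I3), 2*I2 - I1 - I3)."""
    return np.arctan2(np.sqrt(3.0) * (i1 - i3), 2.0 * i2 - i1 - i3)

# synthetic fringes over a known phase ramp
phi = np.linspace(0.0, np.pi / 2, 100)
i1, i2, i3 = (np.cos(phi + k * 2.0 * np.pi / 3.0) for k in (-1, 0, 1))
rec = three_step_phase(i1, i2, i3)
print(np.allclose(rec, phi))  # True: phase recovered from the three shots
```

In a full scanner the wrapped phase is then unwrapped and converted to height by triangulation against the calibrated projector-camera geometry.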

Geng, Jason

2011-03-01

230

Digital Tracking and Control of Retinal Images.  

National Technical Information Service (NTIS)

Laser induced retinal lesions are used to treat a variety of eye diseases such as diabetic retinopathy and retinal tears or breaks. Both the location and size of the retinal lesions are critical for effective treatment and minimal complications. Currently...

S. F. Barrett

1993-01-01

231

Assessing 3D tunnel position in ACL reconstruction using a novel single image 3D-2D registration  

NASA Astrophysics Data System (ADS)

The routinely used procedure for evaluating tunnel positions following anterior cruciate ligament (ACL) reconstructions based on standard X-ray images is known to pose difficulties in terms of obtaining accurate measures, especially in providing three-dimensional tunnel positions. This is largely due to the variability in individual knee joint pose relative to X-ray plates. Accurate results were reported using postoperative CT. However, its extensive usage in clinical routine is hampered by its major requirement of having CT scans of individual patients, which are not available for most ACL reconstructions. These difficulties are addressed through the proposed method, which aligns a knee model to X-ray images using our novel single-image 3D-2D registration method and then estimates the 3D tunnel position. The alignment is achieved by using a novel contour-based 3D-2D registration method wherein image contours are treated as a set of oriented points. However, instead of using some form of orientation weighting function and multiplying it with a distance function, we formulate the 3D-2D registration as a probability density estimation using a mixture of von Mises-Fisher-Gaussian (vMFG) distributions and solve it through an expectation-maximization (EM) algorithm. Compared with the ground truth established from postoperative CT, our registration method in an experiment using a plastic phantom showed accurate results, with errors of (-0.43°+/-1.19°, 0.45°+/-2.17°, 0.23°+/-1.05°) and (0.03+/-0.55, -0.03+/-0.54, -2.73+/-1.64) mm. As for the entry point of the ACL tunnel, one of the key measurements, it was obtained with a high accuracy of 0.53+/-0.30 mm in distance error.

Kang, X.; Yau, W. P.; Otake, Y.; Cheung, P. Y. S.; Hu, Y.; Taylor, R. H.

2012-02-01

232

Prostate boundary segmentation from 3D ultrasound images.  

PubMed

Segmenting, or outlining the prostate boundary is an important task in the management of patients with prostate cancer. In this paper, an algorithm is described for semiautomatic segmentation of the prostate from 3D ultrasound images. The algorithm uses model-based initialization and mesh refinement using an efficient deformable model. Initialization requires the user to select only six points from which the outline of the prostate is estimated using shape information. The estimated outline is then automatically deformed to better fit the prostate boundary. An editing tool allows the user to edit the boundary in problematic regions and then deform the model again to improve the final results. The algorithm requires less than 1 min on a Pentium III 400 MHz PC. The accuracy of the algorithm was assessed by comparing the algorithm results, obtained from both local and global analysis, to the manual segmentations on six prostates. The local difference was mapped on the surface of the algorithm boundary to produce a visual representation. Global error analysis showed that the average difference between manual and algorithm boundaries was -0.20 +/- 0.28 mm, the average absolute difference was 1.19 +/- 0.14 mm, the average maximum difference was 7.01 +/- 1.04 mm, and the average volume difference was 7.16% +/- 3.45%. Variability in manual and algorithm segmentation was also assessed: Visual representations of local variability were generated by mapping variability on the segmentation mesh. The mean variability in manual segmentation was 0.98 mm and in algorithm segmentation was 0.63 mm and the differences of about 51.5% of the points comprising the average algorithm boundary are insignificant (P < or = 0.01) to the manual average boundary. PMID:12906182

Hu, Ning; Downey, Dónal B; Fenster, Aaron; Ladak, Hanif M

2003-07-01

233

Focus image feedback-controlled 3D laser microstructuring  

Microsoft Academic Search

The availability of reliable ultrafast laser systems and their unique properties for material processing are the basis for new lithographic methods in the sector of micro- and nanofabrication processes such as two-photon 3D-lithography. Beside its flexibility, one of the most powerful features of this technology is the true 3D structuring capability, which allows fabrication with higher efficiency and with higher

Volker Schmidt; Ladislav Kuna; Georg Jakopic; Ernst Wildling; Gregor Langer; Günther Leising

2005-01-01

234

Adaptive Optics Retinal Imaging: Emerging Clinical Applications  

PubMed Central

The human retina is a uniquely accessible tissue. Tools like scanning laser ophthalmoscopy (SLO) and spectral domain optical coherence tomography (SD-OCT) provide clinicians with remarkably clear pictures of the living retina. While the anterior optics of the eye permit such non-invasive visualization of the retina and associated pathology, these same optics induce significant aberrations that in most cases preclude cellular-resolution imaging. Adaptive optics (AO) imaging systems use active optical elements to compensate for aberrations in the optical path between the object and the camera. Applied to the human eye, AO allows direct visualization of individual rod and cone photoreceptor cells, RPE cells, and white blood cells. AO imaging has changed the way vision scientists and ophthalmologists see the retina, helping to clarify our understanding of retinal structure, function, and the etiology of various retinal pathologies. Here we review some of the advances made possible with AO imaging of the human retina, and discuss applications and future prospects for clinical imaging.

Godara, Pooja; Dubis, Adam M.; Roorda, Austin; Duncan, Jacque L.; Carroll, Joseph

2010-01-01

235

3D holographic display with enlarged image using a concave reflecting mirror  

NASA Astrophysics Data System (ADS)

We propose a method to enlarge the 3D image in a holographic display using a concave reflecting mirror, based on the optical reversibility theorem. The holograms are computed using the look-up table (LUT) method, and the common data of the 3D objects are compressed to reduce the memory usage of the LUT. Optical experiments are performed and the results show that the 3D image can be magnified without any distortion at a shortened image distance, and that the memory usage of the LUT is reduced.

Keywords: computer holography; holographic display; magnification of 3D image size; distortion of the image; compensation of the distortion.

Jia, Jia; Wang, Yongtian; Liu, Juan; Li, Xin; Pan, Yijie

2012-11-01

236

Automated segmentation by pixel classification of retinal layers in ophthalmic OCT images.  

PubMed

Current OCT devices provide three-dimensional (3D) in-vivo images of the human retina. The resulting very large data sets are difficult to manually assess. Automated segmentation is required to automatically process the data and produce images that are clinically useful and easy to interpret. In this paper, we present a method to segment the retinal layers in these images. Instead of using complex heuristics to define each layer, simple features are defined and machine learning classifiers are trained based on manually labeled examples. When applied to new data, these classifiers produce labels for every pixel. After regularization of the 3D labeled volume to produce a surface, this results in consistent, three-dimensionally segmented layers that match known retinal morphology. Six labels were defined, corresponding to the following layers: Vitreous, retinal nerve fiber layer (RNFL), ganglion cell layer & inner plexiform layer, inner nuclear layer & outer plexiform layer, photoreceptors & retinal pigment epithelium and choroid. For both normal and glaucomatous eyes that were imaged with a Spectralis (Heidelberg Engineering) OCT system, the five resulting interfaces were compared between automatic and manual segmentation. RMS errors for the top and bottom of the retina were between 4 and 6 µm, while the errors for intra-retinal interfaces were between 6 and 15 µm. The resulting total retinal thickness maps corresponded with known retinal morphology. RNFL thickness maps were compared to GDx (Carl Zeiss Meditec) thickness maps. Both maps were mostly consistent but local defects were better visualized in OCT-derived thickness maps. PMID:21698034
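The pixel-classification idea, simple per-pixel features plus a trained classifier instead of layer-specific heuristics, can be illustrated on a toy two-class B-scan. This is an assumption-laden sketch: the random forest, the three features, and the synthetic data are stand-ins for the paper's actual features, labels, and classifiers.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

def pixel_features(bscan):
    """Per-pixel features: raw intensity, axially smoothed intensity and
    normalized depth -- a minimal stand-in for hand-crafted OCT features."""
    depth = np.tile(np.linspace(0, 1, bscan.shape[0])[:, None], (1, bscan.shape[1]))
    smooth = np.apply_along_axis(
        lambda col: np.convolve(col, np.ones(5) / 5, mode="same"), 0, bscan)
    return np.stack([bscan.ravel(), smooth.ravel(), depth.ravel()], axis=1)

# synthetic two-class B-scan: dark "vitreous" above, brighter "retina" below
labels = (np.arange(64)[:, None] >= 24) * np.ones((1, 64), int)
bscan = 0.2 + 0.6 * labels + 0.1 * rng.standard_normal((64, 64))

clf = RandomForestClassifier(n_estimators=30, random_state=0)
clf.fit(pixel_features(bscan[:, :32]), labels[:, :32].ravel())  # train: left half
pred = clf.predict(pixel_features(bscan[:, 32:]))               # test: right half
acc = (pred == labels[:, 32:].ravel()).mean()
print(acc > 0.95)
```

The paper's regularization step, turning the per-pixel labels into smooth layer surfaces, would follow the prediction stage and is omitted here.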

Vermeer, K A; van der Schoot, J; Lemij, H G; de Boer, J F

2011-06-01

237

Adaptive Optics Technology for High-Resolution Retinal Imaging  

PubMed Central

Adaptive optics (AO) is a technology used to improve the performance of optical systems by reducing the effects of optical aberrations. The direct visualization of the photoreceptor cells, capillaries and nerve fiber bundles represents the major benefit of adding AO to retinal imaging. Adaptive optics is opening a new frontier for clinical research in ophthalmology, providing new information on the early pathological changes of the retinal microstructures in various retinal diseases. We have reviewed AO technology for retinal imaging, providing information on the core components of an AO retinal camera. The most commonly used wavefront sensing and correcting elements are discussed. Furthermore, we discuss current applications of AO imaging to a population of healthy adults and to the most frequent causes of blindness, including diabetic retinopathy, age-related macular degeneration and glaucoma. We conclude our work with a discussion on future clinical prospects for AO retinal imaging.

Lombardo, Marco; Serrao, Sebastiano; Devaney, Nicholas; Parravano, Mariacristina; Lombardo, Giuseppe

2013-01-01

238

Comparison of the depth of an optic nerve head obtained using stereo retinal images and HRT  

NASA Astrophysics Data System (ADS)

The analysis of the optic nerve head (ONH) in the retinal fundus is important for the early detection of glaucoma. In this study, we investigate an automatic reconstruction method for producing the 3-D structure of the ONH from a stereo retinal image pair; the depth value of the ONH measured by using this method was compared with the measurement results determined from the Heidelberg Retina Tomograph (HRT). We propose a technique to obtain the depth value from the stereo image pair, which mainly consists of four steps: (1) cutout of the ONH region from the retinal images, (2) registration of the stereo pair, (3) disparity detection, and (4) depth calculation. In order to evaluate the accuracy of this technique, the shape of the depression of an eyeball phantom with a circular dent, used to model the ONH, was generated from the stereo image pair and compared with physically measured values. The measurement results obtained with the eyeball phantom were approximately consistent with the physical measurements. The depth of the ONH obtained using the stereo retinal images was in accordance with the results obtained using the HRT. These results indicate that stereo retinal images could be useful for assessing the depth of the ONH for the diagnosis of glaucoma.
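Steps (3) and (4), disparity detection and depth calculation, can be sketched with brute-force SAD block matching and the standard triangulation relation z = f * B / d. This is a generic stereo sketch, not the authors' method; the focal length and baseline values are illustrative.

```python
import numpy as np

def block_match_disparity(left, right, block=5, max_disp=16):
    """Brute-force SAD block matching along scanlines: for each left-image
    pixel, find the horizontal shift into the right image with the
    minimal sum of absolute differences."""
    h, w = left.shape
    half = block // 2
    disp = np.zeros((h, w))
    for y in range(half, h - half):
        for x in range(half + max_disp, w - half):
            patch = left[y - half:y + half + 1, x - half:x + half + 1]
            costs = [np.abs(patch - right[y - half:y + half + 1,
                                          x - d - half:x - d + half + 1]).sum()
                     for d in range(max_disp)]
            disp[y, x] = np.argmin(costs)
    return disp

def depth_from_disparity(disp, focal_px, baseline_mm):
    """Depth from disparity via the triangulation relation z = f * B / d."""
    with np.errstate(divide="ignore"):
        return np.where(disp > 0, focal_px * baseline_mm / disp, 0.0)

rng = np.random.default_rng(1)
right = rng.random((32, 48))
left = np.roll(right, 4, axis=1)        # ground-truth disparity: 4 px
disp = block_match_disparity(left, right, block=5, max_disp=8)
z = depth_from_disparity(disp, focal_px=700.0, baseline_mm=60.0)
print(int(disp[16, 24]), z[16, 24])     # 4 10500.0
```

Fundus stereo pairs have a very narrow baseline and uncalibrated geometry, so in practice the registration step (2) and sub-pixel matching matter far more than in this idealized example.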

Nakagawa, Toshiaki; Hayashi, Yoshinori; Hatanaka, Yuji; Aoyama, Akira; Hara, Takeshi; Kakogawa, Masakatsu; Fujita, Hiroshi; Yamamoto, Tetsuya

2007-03-01

239

Automatic target recognition using 3D passive sensing and imaging with independent component analysis  

Microsoft Academic Search

We present an overview of a method using Independent Component Analysis (ICA) and 3D Integral Imaging (II) technique to recognize 3D objects at different orientations. This method has been successfully applied to the recognition and classification of 3D scenes.

Cuong M. Do; Raul Martínez-Cuenca; Bahram Javidi

2009-01-01

240

Automatic target recognition using 3D passive sensing and imaging with independent component analysis  

NASA Astrophysics Data System (ADS)

We present an overview of a method using Independent Component Analysis (ICA) and 3D Integral Imaging (II) technique to recognize 3D objects at different orientations. This method has been successfully applied to the recognition and classification of 3D scenes.

Do, Cuong M.; Martínez-Cuenca, Raul; Javidi, Bahram

2009-04-01

241

Laser point cloud diluting and refined 3D reconstruction fusing with digital images  

Microsoft Academic Search

This paper shows a method to combine the imaged-based modeling technique and Laser scanning data to rebuild a realistic 3D model. Firstly use the image pair to build a relative 3D model of the object, and then register the relative model to the Laser coordinate system. Project the Laser points to one of the images and extract the feature lines

Jie Liu; Jianqing Zhang

2007-01-01

242

3D image reconstruction and human body tracking using stereo vision and Kinect technology  

Microsoft Academic Search

Kinect is a recent technology used for motion detection and human body tracking designed for a video game console. In this study, we explore two different types of 3D image reconstruction methods to achieve a new method for faster and higher quality 3D images. Generating depth perception information using high quality stereo image textures is computationally heavy and inefficient. On

Weidi Jia; Won-Jae Yi; Jafar Saniie; Erdal Oruklu

2012-01-01

243

DXSoil, a library for 3D image analysis in soil science  

Microsoft Academic Search

A comprehensive series of routines has been developed to extract structural and topological information from 3D images of porous media. The main application aims at feeding a pore network approach to simulate unsaturated hydraulic properties from soil core images. Beyond the application example, the successive algorithms presented in the paper allow, from any 3D object image, the extraction of the

Jean-fran-cois Delerue; Edith Perrier

2002-01-01

244

Quantitative 3-D imaging topogrammetry for telemedicine applications  

NASA Technical Reports Server (NTRS)

The technology to reliably transmit high-resolution visual imagery over short to medium distances in real time has led to the serious considerations of the use of telemedicine, telepresence, and telerobotics in the delivery of health care. These concepts may involve, and evolve toward: consultation from remote expert teaching centers; diagnosis; triage; real-time remote advice to the surgeon; and real-time remote surgical instrument manipulation (telerobotics with virtual reality). Further extrapolation leads to teledesign and telereplication of spare surgical parts through quantitative teleimaging of 3-D surfaces tied to CAD/CAM devices and an artificially intelligent archival data base of 'normal' shapes. The ability to generate 'topograms' or 3-D surface numerical tables of coordinate values capable of creating computer-generated virtual holographic-like displays, machine part replication, and statistical diagnostic shape assessment is critical to the progression of telemedicine. Any virtual reality simulation will remain in the 'video-game' realm until realistic dimensional and spatial relational inputs from real measurements in vivo during surgeries are added to an ever-growing statistical data archive. The challenges of managing and interpreting this 3-D data base, which would include radiographic and surface quantitative data, are considerable. As technology drives toward dynamic and continuous 3-D surface measurements, presenting millions of X, Y, Z data points per second of flexing, stretching, moving human organs, the knowledge base and interpretive capabilities of 'brilliant robots' to work as a surgeon's tireless assistants becomes imaginable. The brilliant robot would 'see' what the surgeon sees--and more, for the robot could quantify its 3-D sensing and would 'see' in a wider spectral range than humans, and could zoom its 'eyes' from the macro world to long-distance microscopy.
Unerring robot hands could rapidly perform machine-aided suturing with precision micro-sewing machines, splice neural connections with laser welds, micro-bore through constricted vessels, and computer combine ultrasound, microradiography, and 3-D mini-borescopes to quickly assess and trace vascular problems in situ. The spatial relationships between organs, robotic arms, and end-effector diagnostic, manipulative, and surgical instruments would be constantly monitored by the robot 'brain' using inputs from its multiple 3-D quantitative 'eyes' remote sensing, as well as by contact and proximity force measuring devices. Methods to create accurate and quantitative 3-D topograms at continuous video data rates are described.

Altschuler, Bruce R.

1994-01-01

245

3D INDUSTRIAL RECONSTRUCTION BY FITTING CSG MODELS TO A COMBINATION OF IMAGES AND POINT CLOUDS  

Microsoft Academic Search

We present a method for 3D reconstruction of industrial sites using a combination of images and point clouds with a motivation of achieving higher levels of automation, precision, and reliability. Recent advances in 3D scanning technologies have made possible rapid and cost-effective acquisition of dense point clouds for 3D reconstruction. As the point clouds provide explicit 3D information, they have

Tahir Rabbani; Frank van den Heuvel

246

Estimating Density Gradients and Drivers from 3D Ionospheric Imaging  

NASA Astrophysics Data System (ADS)

The transition regions at the edges of the ionospheric storm-enhanced density (SED) are important for a detailed understanding of the mid-latitude physical processes occurring during major magnetic storms. At the boundary, the density gradients are evidence of the drivers that link the larger processes of the SED, with its connection to the plasmasphere and prompt-penetration electric fields, to the smaller irregularities that result in scintillations. For this reason, we present our estimates of both the plasma variation with horizontal and vertical spatial scale of 10 - 100 km and the plasma motion within and along the edges of the SED. To estimate the density gradients, we use Ionospheric Data Assimilation Four-Dimensional (IDA4D), a mature data assimilation algorithm that has been developed over several years and applied to investigations of polar cap patches and space weather storms [Bust and Crowley, 2007; Bust et al., 2007]. We use the density specification produced by IDA4D with a new tool for deducing ionospheric drivers from 3D time-evolving electron density maps, called Estimating Model Parameters from Ionospheric Reverse Engineering (EMPIRE). The EMPIRE technique has been tested on simulated data from TIMEGCM-ASPEN and on IDA4D-based density estimates with ongoing validation from Arecibo ISR measurements [Datta-Barua et al., 2009a; 2009b]. We investigate the SED that formed during the geomagnetic super storm of November 20, 2003. We run IDA4D at low resolution continent-wide, and then re-run it at high (~10 km horizontal and ~5-20 km vertical) resolution locally along the boundary of the SED, where density gradients are expected to be highest. We input the high-resolution estimates of electron density to EMPIRE to estimate the ExB drifts and field-aligned plasma velocities along the boundaries of the SED. We expect that these drivers contribute to the density structuring observed along the SED during the storm.

Bust, G. S. and G. Crowley (2007), Tracking of polar cap patches using data assimilation, J. Geophys. Res., 112, A05307, doi:10.1029/2005JA011597.
Bust, G. S., G. Crowley, T. W. Garner, T. L. Gaussiran II, R. W. Meggs, C. N. Mitchell, P. S. J. Spencer, P. Yin, and B. Zapfe (2007), Four Dimensional GPS Imaging of Space-Weather Storms, Space Weather, 5, S02003, doi:10.1029/2006SW000237.
Datta-Barua, S., G. S. Bust, G. Crowley, and N. Curtis (2009a), Neutral wind estimation from 4-D ionospheric electron density images, J. Geophys. Res., 114, A06317, doi:10.1029/2008JA014004.
Datta-Barua, S., G. Bust, and G. Crowley (2009b), Estimating Model Parameters from Ionospheric Reverse Engineering (EMPIRE), presented at CEDAR, Santa Fe, New Mexico, July 1.

Datta-Barua, S.; Bust, G. S.; Curtis, N.; Reynolds, A.; Crowley, G.

2009-12-01

247

Millimetre Wave 3D Imaging for Industrial Applications  

Microsoft Academic Search

Within the next few years, improvements in graphic rendering and the application of millimetre wave radar technology will, together, allow operators of industrial processes in poor visibility environments to control their machines remotely. At the ACFR, we have been developing the 3D radar systems and displays to make this a reality in the mining industry. This paper discusses the principles

Graham Brooker; Ross Hennessey; Mark Bishop; Craig Lobsey; Andrew Maclean

2007-01-01

248

Corrosion evaluation by thermal image processing and 3D modelling  

Microsoft Academic Search

Quantitative transient IR thermography has been applied to the characterization of hidden corrosion in metals. A dedicated 3D numerical model of heat transfer has been used to solve the direct thermal problem and to simulate the test. Theoretical modelling allows the verification of the limits of the 1D solution and the derivation of coefficients which take heat diffusion into account. An

Ermanno Grinzato; Vladimir Vavilov

1998-01-01

249

QBISM: Extending a DBMS to Support 3D Medical Images  

Microsoft Academic Search

Describes the design and implementation of QBISM (Query By Interactive, Spatial Multimedia), a prototype for querying and visualizing 3D spatial data. The first application is in an area of medical research, in particular, Functional Brain Mapping. The system is built on top of the Starburst DBMS, extended to handle spatial data types, specifically, scalar fields and arbitrary regions of space

Manish Arya; William F. Cody; Christos Faloutsos; Joel E. Richardson; Arthur Toga

1994-01-01

250

Peripapillary retinal nerve fiber layer thickness distribution in Chinese with myopia measured by 3D-optical coherence tomography  

PubMed Central

AIM To assess the effect of myopia on the thickness of the retinal nerve fiber layer (RNFL) measured by 3D optical coherence tomography (3D-OCT) in a group of nonglaucomatous Chinese subjects. METHODS Two hundred and fifty-eight eyes of 258 healthy Chinese myopic individuals were recruited and four groups were classified according to their spherical equivalent (SE): low myopia (n=42, -0.5D3D-OCT. The RNFL thicknesses of the four sample groups were compared by one-way analysis of variance (one-way ANOVA) and the least significant difference test (LSD test). Correlations between RNFL thickness and axial length/spherical equivalent were assessed by linear regression analysis. RESULTS The overall RNFL parameters showed significant differences between groups, excluding the 7, 9, 10 and 11 o'clock thicknesses. The RNFL thicknesses of the superior, nasal, inferior and average measurements and of the 1, 2, 3, 4, 5, 6 and 12 o'clock sectors decreased with increasing axial length and higher degree of myopia. In contrast, as axial length and the degree of myopia increased, the temporal and 8, 9 o'clock sector thicknesses increased. A considerable proportion of myopic eyes were classified as outside the normal limits. The 6 o'clock sector was the most notable, with 43.4% outside the normal limits. CONCLUSION On the measurement of RNFL, the characteristics of the RNFL with the change of the degree of myopia were observed. As the degree of myopia increases, the RNFL thickness measured by 3D-OCT, including the average and the superior, nasal and inferior sectors, decreases. This change in RNFL thickness should be considered when using OCT to assess glaucomatous damage, especially in people with myopia.
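The statistical comparisons named in the methods, one-way ANOVA across myopia groups and linear regression of thickness against axial length, can be illustrated on hypothetical data (the group means, sample sizes, and slope below are invented for the sketch, not taken from the study):

```python
import numpy as np
from scipy.stats import f_oneway, linregress

rng = np.random.default_rng(2)

# hypothetical average RNFL thickness (um) for four myopia groups, n=40 each
groups = [rng.normal(mu, 8.0, 40) for mu in (105, 101, 97, 92)]
f_stat, p_value = f_oneway(*groups)
print(p_value < 0.05)  # True: thickness differs significantly across groups

# hypothetical thinning trend of RNFL thickness against axial length (mm)
axial = rng.uniform(23, 28, 160)
rnfl = 160 - 2.5 * axial + rng.normal(0, 4.0, 160)
fit = linregress(axial, rnfl)
print(fit.slope < 0)   # True: thinner RNFL with longer eyes
```

The LSD post-hoc test mentioned in the abstract would follow a significant ANOVA result to identify which group pairs differ.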

Zhao, Jing-Jing; Zhuang, Wen-Juan; Yang, Xue-Qiu; Li, Shan-Shan; Xiang, Wei

2013-01-01

251

Image analysis for microelectronic retinal prosthesis.  

PubMed

By way of extracellular, stimulating electrodes, a microelectronic retinal prosthesis aims to render discrete, luminous spots (so-called phosphenes) in the visual field, thereby providing a phosphene image (PI) as a rudimentary remediation of profound blindness. As part thereof, a digital camera, or some other photosensitive array, captures frames, frames are analyzed, and phosphenes are actuated accordingly by way of modulated charge injections. Here, we present a method that allows the assessment of image analysis schemes for integration with a prosthetic device, that is, the means of converting the captured image (high resolution) to modulated charge injections (low resolution). We use the mutual-information function to quantify the amount of information conveyed to the PI observer (device implantee), while accounting for the statistics of visual stimuli. We demonstrate an effective scheme involving overlapping, Gaussian kernels, and discuss extensions of the method to account for short-term visual memory in observers, and their perceptual errors of omission and commission. PMID:18232379
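The mutual-information function used to quantify conveyed information can be estimated from a joint intensity histogram. A generic sketch, with an illustrative bin count and without the paper's stimulus-statistics weighting:

```python
import numpy as np

def mutual_information(a, b, bins=8):
    """Mutual information (in bits) between two equally sized images,
    estimated from their joint intensity histogram."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal distribution of a
    py = pxy.sum(axis=0, keepdims=True)   # marginal distribution of b
    nz = pxy > 0                          # avoid log(0)
    return float((pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(3)
scene = rng.random((32, 32))
shuffled = rng.permutation(scene.ravel()).reshape(scene.shape)
# a faithful rendering conveys more information than a scrambled one
print(mutual_information(scene, scene) > mutual_information(scene, shuffled))
```

In the prosthesis setting, `a` would be the captured scene and `b` the low-resolution phosphene rendering, so the criterion ranks candidate image-analysis schemes by how much scene information survives the conversion.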

Hallum, L E; Cloherty, S L; Lovell, N H

2008-01-01

252

Automatic arteriovenous crossing phenomenon detection on retinal fundus images  

Microsoft Academic Search

Arteriolosclerosis is one cause of acquired blindness. Retinal fundus image examination is useful for early detection of arteriolosclerosis. In order to diagnose the presence of arteriolosclerosis, physicians look for silver-wire arteries, copper-wire arteries, and the arteriovenous crossing phenomenon on retinal fundus images. The focus of this study was to develop an automated detection method for the arteriovenous crossing phenomenon

Yuji Hatanaka; Chisako Muramatsu; Takeshi Hara; Hiroshi Fujita

2011-01-01

253

Three-dimensional pointwise comparison of human retinal optical property at 845 and 1060 nm using optical frequency domain imaging  

NASA Astrophysics Data System (ADS)

To compare the optical properties of the human retina, 3-D volumetric images of the same eye are acquired with two nearly identical optical coherence tomography (OCT) systems at center wavelengths of 845 and 1060 nm using optical frequency domain imaging (OFDI). To characterize the contrast of individual tissue layers in the retina at these two wavelengths, the 3-D volumetric data sets are carefully spatially matched. The relative scattering intensities from different layers such as the nerve fiber, photoreceptor, pigment epithelium, and choroid are measured and a quantitative comparison is presented. OCT retinal imaging at 1060 nm is found to have a significantly better depth penetration but a reduced contrast between the retinal nerve fiber, the ganglion cell, and the inner plexiform layers compared to the OCT retinal imaging at 845 nm.

Chen, Yueli; Burnes, Daina L.; de Bruin, Martijn; Mujat, Mircea; de Boer, Johannes F.

2009-03-01

254

Deformable M-Reps for 3D Medical Image Segmentation  

Microsoft Academic Search

M-reps (formerly called DSLs) are a multiscale medial means for modeling and rendering 3D solid geometry. They are particularly well suited to modeling anatomic objects and, in particular, to capturing prior geometric information effectively in deformable-model segmentation approaches. The representation is based on figural models, which define objects at coarse scale by a hierarchy of figures, each figure generally a

Stephen M. Pizer; P. Thomas Fletcher; Sarang C. Joshi; Andrew Thall; James Z. Chen; Yonatan Fridman; Daniel S. Fritsch; A. Graham Gash; John M. Glotzer; Michael R. Jiroutek; Conglin Lu; Keith E. Muller; Gregg Tracton; Paul A. Yushkevich; Edward L. Chaney

2003-01-01

255

3D Lunar Terrain Reconstruction from Apollo Images  

Microsoft Academic Search

Generating accurate three dimensional planetary models is becoming increasingly important as NASA plans manned missions to return to the Moon in the next decade. This paper describes a 3D surface reconstruction system called the Ames Stereo Pipeline that is designed to produce such models automatically by processing orbital stereo imagery. We discuss two important core aspects of this system: (1)

Michael J. Broxton; Ara V. Nefian; Zachary Moratto; Taemin Kim; Michael Lundy; Aleksandr V. Segal

2009-01-01

256

Silhouette-based 3-D model reconstruction from multiple images  

Microsoft Academic Search

The goal of this study is to investigate the reconstruction of 3D graphical models of real objects in a controlled imaging environment and present the work done in our group based on silhouette-based reconstruction. Although many parts of the whole system have been well known in the literature and in practice, the main contribution of the paper is that it describes a complete, end-to-end system explained in

Adem Yasar Mülayim; Ulas Yilmaz; Volkan Atalay

2003-01-01

257

Simulation of a new 3D imaging sensor for identifying difficult military targets  

Microsoft Academic Search

This paper reports the successful application of automatic target recognition and identification (ATR/I) algorithms to simulated 3D imagery of 'difficult' military targets. QinetiQ and Selex S&AS are engaged in a joint programme to build a new 3D laser imaging sensor for UK MOD. The sensor is a 3D flash system giving an image containing range and intensity information suitable for

Christophe Harvey; Jonathan Wood; Peter Randall; Graham Watson; Gordon Smith

2008-01-01

258

Hyperspectral image lossy-to-lossless compression using the 3D Embedded Zeroblock Coding algorithm  

Microsoft Academic Search

In this paper, we propose a hyperspectral image lossy-to-lossless compression coder based on the Three-Dimensional Embedded ZeroBlock Coding (3D EZBC) algorithm. This coder adopts the three-dimensional integer wavelet packet transform with unitary scaling to decorrelate and the 3D EZBC algorithm without motion compensation to process bitplane zeroblock coding. For hyperspectral image compression using the 3D EZBC algorithm, the lossy-to-lossless compression

Ying Hou; Guizhong Liu

2008-01-01

259

3D elasticity imaging on an open-chest dog heart  

Microsoft Academic Search

Myocardial ischemia and infarction alter myocardial viability and contractility. We have hypothesized that contractility changes can be detected by ultrasound strain imaging. Current ultrasound strain imaging methods are mainly 1D and 2D. However, heart motion is complex and 3D. Previous studies showed that 3D speckle tracking on a left ventricular (LV) phantom and a 3D LV simulation reduced low dimensional

Congxian Jia; Theodore J. Kolias; J. M. Rubin; Ping Yan; A. J. Sinusas; D. P. Dione; J. S. Duncan; Qifeng Wei; K. Thiele; Lingyun Huang; Sheng-Wen Huang; M. O'Donnell

2009-01-01

260

3D image reconstruction algorithms for cryo-electron-microscopy images of virus particles  

NASA Astrophysics Data System (ADS)

A statistical model for the object and the complete image formation process in cryo-electron microscopy of viruses is presented. Using this model, maximum-likelihood reconstructions of the 3D structure of viruses are computed using the expectation-maximization algorithm, and an example based on Cowpea mosaic virus is provided.
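The expectation-maximization algorithm named in this abstract can be illustrated, far removed from the actual 3D virus reconstruction, on a toy 1-D two-component Gaussian mixture. A sketch under that simplifying assumption:

```python
import numpy as np

def em_gaussian_mixture(x, n_iter=50):
    """Toy 1-D analogue of expectation-maximization: fit a two-component
    Gaussian mixture by alternating soft assignments (E-step) with
    parameter updates (M-step)."""
    mu = np.array([x.min(), x.max()], dtype=float)   # crude initialization
    sigma = np.array([x.std(), x.std()])
    pi = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: responsibility of each component for each sample
        pdf = np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2) / sigma
        r = pi * pdf
        r /= r.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, and spreads
        nk = r.sum(axis=0)
        pi = nk / len(x)
        mu = (r * x[:, None]).sum(axis=0) / nk
        sigma = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
    return pi, mu, sigma
```

In the cryo-EM setting the latent variables are particle orientations rather than cluster labels, but the alternating structure is the same.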

Doerschuk, Peter C.; Johnson, John E.

2000-11-01

261

3D image display of fetal ultrasonic images by thin shell  

NASA Astrophysics Data System (ADS)

Due to the properties of convenience and non-invasion, ultrasound has become an essential tool for diagnosis of fetal abnormality during pregnancy in obstetrics. However, the 'noisy and blurry' nature of ultrasound data makes the rendering of the data a challenge in comparison with MRI and CT images. In addition to the speckle noise, unwanted objects usually occlude the target to be observed. In this paper, we propose a new system that can effectively suppress the speckle noise, extract the target object, and clearly render the 3D fetal image in almost real time from 3D ultrasound image data. The system is based on a deformable model that detects contours of the object according to the local image features of ultrasound. Besides, in order to accelerate rendering speed, a thin shell is defined to separate the observed organ from unrelated structures depending on those detected contours. In this way, we can support quick 3D display of ultrasound, and the efficient visualization of 3D fetal ultrasound thus becomes possible.

Wang, Shyh-Roei; Sun, Yung-Nien; Chang, Fong-Ming; Jiang, Ching-Fen

1999-05-01

262

3D city site model extraction through point cloud generated from stereo images  

Microsoft Academic Search

It is a grand challenge to automatically extract 3D city site models from imagery. In the past three decades, researchers have used radiometric and spectral properties of 3D buildings and houses to extract them in digital imagery with limited success. This is because their radiometric and spectral properties vary considerably from image to image, from sensor to sensor, and from

Bingcai Zhang; William Smith

2011-01-01

263

Extensible visualization and analysis for multidimensional images using Vaa3D.  

PubMed

Open-Source 3D Visualization-Assisted Analysis (Vaa3D) is a software platform for the visualization and analysis of large-scale multidimensional images. In this protocol we describe how to use several popular features of Vaa3D, including (i) multidimensional image visualization, (ii) 3D image object generation and quantitative measurement, (iii) 3D image comparison, fusion and management, (iv) visualization of heterogeneous images and respective surface objects and (v) extension of Vaa3D functions using its plug-in interface. We also briefly demonstrate how to integrate these functions for complicated applications of microscopic image visualization and quantitative analysis using three exemplar pipelines, including an automated pipeline for image filtering, segmentation and surface generation; an automated pipeline for 3D image stitching; and an automated pipeline for neuron morphology reconstruction, quantification and comparison. Once a user is familiar with Vaa3D, visualization usually runs in real time and analysis takes less than a few minutes for a simple data set. PMID:24385149

Peng, Hanchuan; Bria, Alessandro; Zhou, Zhi; Iannello, Giulio; Long, Fuhui

2014-01-01

264

3D Elasticity imaging using principal stretches on an open-chest dog heart  

Microsoft Academic Search

Ultrasound strain imaging has demonstrated its ability to quantitatively assess myocardial viability and contractility altered by myocardial ischemia. However, current ultrasound strain imaging methods still use lower dimensional methods to monitor 3D heart motion. Some 3-D tracking algorithms have also been developed recently in different groups. Quantitative analysis using current methods depends on ultrasound probe orientation and selection of a

Congxian Jia; Ping Yan; Albert J. Sinusas; Donald P. Dione; Ben A. Lin; Qifeng Wei; Karl Thiele; Theodore J. Kolias; Jonathan M. Rubin; Lingyun Huang; James S. Duncan; Matthew O'Donnell

2010-01-01

265

2D Transducer Array for High-Speed 3D Imaging System.  

National Technical Information Service (NTIS)

A 3D imaging system was developed utilizing a combination of sparse 2D arrays, the synthetic aperture focusing technique, and coded excitation as a solution for high-speed 3D ultrasound imaging, since it could formulate multiple transmitting beams (as well ...

Okada, N.; Sato, M.; Ishihara, C.; Tamura, Y.

2003-01-01

266

Registration vs. Reconstruction: Incorporating Structural Constraint in Building 3-D Models from 2-D Microscopy Images  

Microsoft Academic Search

Registration is the key step for 3-D reconstruction of microanatomical structures from a large number of microscopy images of biomedical samples. However, in most current approaches, 3-D structural information is not incorporated in the registration process. We present a novel approach by integrating structural constraints into the reconstruction pipeline. Instead of registering each image to its neighbors, we transform

Lee Cooper; Kun Huang; Ashish Sharma; Kishore Mosaliganti; Tony Pan; Antony Trimboli; Michael Ostrowski

267

Fully 3D PET image reconstruction using a Fourier preconditioned conjugate-gradient algorithm  

Microsoft Academic Search

Since the data sizes in fully 3D PET imaging are very large, iterative image reconstruction algorithms must converge in very few iterations to be useful. One can improve the convergence rate of the conjugate-gradient (CG) algorithm by incorporating preconditioning operators that approximate the inverse of the Hessian of the objective function. If the 3D cylindrical PET geometry were not truncated

Jeffrey A. Fessler; Edward P. Ficaro

1996-01-01

268

Radiology Lab 0: Introduction to 2D and 3D Imaging  

NSDL National Science Digital Library

This is a self-directed learning module to introduce students to basic concepts of imaging technology as well as to give students practice going between 2D and 3D imaging using everyday objects.Annotated: true

Shaffer, Kitt

2008-10-02

269

Vessel Cross-Sectional Diameter Measurement on Color Retinal Image  

Microsoft Academic Search

Vessel cross-sectional diameter is an important feature for analyzing retinal vascular changes. In automated retinal image analysis, the measurement of vascular width is a complex process as most of the vessels are few pixels wide or suffering from lack of contrast. In this paper, we propose a new method to measure the retinal blood vessel diameter which can be used

Alauddin Bhuiyan; Baikunth Nath; Joselíto J. Chua; Ramamohanarao Kotagiri

2008-01-01

270

Enhancing retinal images by extracting structural information  

NASA Astrophysics Data System (ADS)

High-resolution imaging of the retina has significant importance for science: physics and optics, biology, and medicine. The enhancement of images with poor contrast and the detection of faint structures require objective methods for assessing perceptual image quality. Under the assumption that human visual perception is highly adapted for extracting structural information from a scene, we introduce a framework for quality assessment based on the degradation of structural information. We implemented a new processing technique on a long sequence of retinal images of subjects with normal vision. We were able to perform a precise shift-and-add at the sub-pixel level in order to resolve the structures of the size of single cells in the living human retina. Last, we quantified the restoration reliability of the distorted images using an improved quality assessment. To that purpose, we used the single image restoration method based on the ergodic principle, which has originated in solar astronomy, to deconvolve aberrations after adaptive optics compensation.
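The sub-pixel shift-and-add step described above can be sketched with the Fourier shift theorem. A minimal version, in which the registration offsets are assumed already known, whereas the paper estimates them from the image sequence:

```python
import numpy as np

def subpixel_shift(img, dy, dx):
    """Shift a 2-D image by a (possibly fractional) offset using the
    Fourier shift theorem: multiply the spectrum by a linear phase."""
    ny, nx = img.shape
    fy = np.fft.fftfreq(ny)[:, None]
    fx = np.fft.fftfreq(nx)[None, :]
    phase = np.exp(-2j * np.pi * (fy * dy + fx * dx))
    return np.real(np.fft.ifft2(np.fft.fft2(img) * phase))

def shift_and_add(frames, offsets):
    """Co-register a sequence of frames at sub-pixel precision and
    average them, lucky-imaging style, to raise SNR."""
    return np.mean([subpixel_shift(f, -dy, -dx)
                    for f, (dy, dx) in zip(frames, offsets)], axis=0)
```

The averaging suppresses frame-to-frame noise while the fractional-pixel registration preserves cell-scale detail that integer shifts would blur.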

Molodij, G.; Ribak, E. N.; Glanc, M.; Chenegros, G.

2014-02-01

271

3D visualization of myocardial motion and blood flow using cine-MR images  

Microsoft Academic Search

Describes a three-dimensional (3D) reconstruction and presentation technique to visualize myocardial motion and blood flow using cine-magnetic resonance (MR) images. First, myocardium and blood were extracted on a cine-MR image with certain threshold gray values. Second, with two successive slices of the cine-MR image, some slices were interpolated and inserted between the two slices to reconstruct a 3D image. Finally,

O. Oshiro; A. Matani; K. Chihara; T. Mikami; A. Kitabatake

1996-01-01

272

Sparse multipass 3D SAR imaging: applications to the GOTCHA data set  

Microsoft Academic Search

Typically in SAR imaging, there is insufficient data to form well-resolved three-dimensional (3D) images using traditional Fourier image reconstruction; furthermore, scattering centers do not persist over wide angles. In this work, we examine 3D non-coherent wide-angle imaging on the GOTCHA Air Force Research Laboratory (AFRL) data set; this data set consists of multipass complete circular aperture radar data from a scene

Christian D. Austin; Emre Ertin; Randolph L. Moses

2009-01-01

273

Distinguishing Among 3-D Distributions for Brain Image Data Classification  

Microsoft Academic Search

To facilitate the process of discovering brain structure-function associations from image and clinical data, we have developed classification tools for brain image data that are based on measures of dissimilarity between probability distributions. We propose statistical as well as non-statistical methods for classifying three dimensional probability distributions of regions of interest (ROIs) in brain images. The statistical methods are based

A. Lazarevic; D. Pokrajac; V. Megalooikonomou; Z. Obradovic

2001-01-01

274

3D pupil plane imaging of opaque targets  

Microsoft Academic Search

Correlography is a technique that allows image formation from non-imaged speckle patterns via their relationship to the autocorrelation of the scene. Algorithms designed to form images from this type of data represent a particular type of phase retrieval algorithm since the autocorrelation function is related to the Fourier magnitude of the scene but not the Fourier phase. Methods for forming

Stephen C. Cain

2010-01-01

275

Classification of left and right eye retinal images  

NASA Astrophysics Data System (ADS)

Retinal image analysis is used by clinicians to diagnose and identify any pathologies present in a patient's eye. The developments and applications of computer-aided diagnosis (CAD) systems in medical imaging have been increasing rapidly over the years. In this paper, we propose a system to classify left and right eye retinal images automatically. This paper describes our two-pronged approach to classify left and right retinal images by using the position of the central retinal vessel within the optic disc, and by the location of the macula with respect to the optic nerve head. We present a framework to automatically identify the locations of the key anatomical structures of the eye: the macula, the optic disc, the central retinal vessels within the optic disc, and the ISNT regions. An SVM model for left and right eye retinal image classification is trained based on the features from the detection and segmentation. An advantage of this is that other image processing algorithms can be focused on regions where diseases or pathologies are more likely to occur, thereby increasing the efficiency and accuracy of the retinal CAD system/pathology detection. We have tested our system on 102 retinal images, consisting of 51 left and 51 right images, and achieved an accuracy of 94.1176%. The high experimental accuracy and robustness of this system demonstrate that there is potential for this system to be integrated and applied with other retinal CAD systems, such as ARGALI, for a priori information in automatic mass screening and diagnosis of retinal diseases.
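The geometric cue named in this abstract, the macula's location relative to the optic nerve head, suggests a toy rule-based stand-in for the trained SVM. This is only an illustration of the cue, not the paper's classifier, and it assumes un-mirrored fundus photographs with the x coordinate increasing to the right:

```python
def classify_eye(optic_disc_x, macula_x):
    """Toy geometric rule: in a conventionally captured right-eye fundus
    photograph the macula lies temporal to (left of) the optic disc,
    while in a left-eye photograph it lies to the right of the disc.
    Arguments are image-space x positions in pixels."""
    return "right" if macula_x < optic_disc_x else "left"

# Example: disc detected near the right edge, macula near the center
eye = classify_eye(optic_disc_x=800, macula_x=500)
```

A real system would feed this relative position, along with vessel and ISNT features, into a trained classifier rather than a hard threshold.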

Tan, Ngan Meng; Liu, Jiang; Wong, Damon W. K.; Zhang, Zhuo; Lu, Shijian; Lim, Joo Hwee; Li, Huiqi; Wong, Tien Yin

2010-03-01

276

3D Image Segmentation Applied to Solar RHD Simulations  

NASA Astrophysics Data System (ADS)

3D simulation models based on Magneto-hydrodynamics (MHD) and Radiation-hydrodynamics (RHD) equations give insight into the evolution of magnetic fields and convective motions in the solar atmosphere. The analysis of huge amounts of data requires the development of automated segmentation algorithms. A newly developed 3D segmentation algorithm is introduced that extracts and traces convective downflows, and it is applied to output of the numerical simulation code ANTARES. The algorithm segments strong downflow velocities, resulting in tube-like structures that enable us to analyze the motions with respect to variations of physical parameters over height as well as their evolution with time. Analysis of the segmented structures shows that narrower parts tend to have higher velocities. High temporal variations in the lower model photosphere indicate less stable structures over time in this layer. The mean temperature within the downflows is cooler than in the horizontally averaged simulation box. The analysis of the behavior of vortex flows demonstrates a constant high vorticity within the segment and a linear dependence on the vertical velocity. It appears that vortex flows are strongly present within dominant convective downflows.

Lemmerer, B.; Utz, D.; Hanslmeier, A.; Veronig, A.; Grimm-Strele, H.; Thonhofer, S.; Muthsam, H.

277

Reconstruction of 3d Digital Image of Weepingforsythia Pollen  

NASA Astrophysics Data System (ADS)

Confocal microscopy, which is a major advance upon normal light microscopy, has been used in a number of scientific fields. With confocal microscopy techniques, cells and tissues can be visualized deeply, and three-dimensional images created. Compared with conventional microscopes, a confocal microscope improves the resolution of images by eliminating out-of-focus light. Moreover, a confocal microscope has a higher level of sensitivity due to highly sensitive light detectors and the ability to accumulate images captured over time. In the present study, a series of Weeping Forsythia pollen digital images (35 images in total) was acquired with a confocal microscope, and the three-dimensional digital image of the pollen was reconstructed. Our results indicate that three-dimensional digital images of the pollen can readily be analyzed with a confocal microscope and the probe acridine orange (AO).

Liu, Dongwu; Chen, Zhiwei; Xu, Hongzhi; Liu, Wenqi; Wang, Lina

278

IMPROMPTU: a system for automatic 3D medical image-analysis.  

PubMed

The utility of three-dimensional (3D) medical imaging is hampered by difficulties in extracting anatomical regions and making measurements in 3D images. Presently, a user is generally forced to use time-consuming, subjective, manual methods, such as slice tracing and region painting, to define regions of interest. Automatic image-analysis methods can ameliorate the difficulties of manual methods. This paper describes a graphical user interface (GUI) system for constructing automatic image-analysis processes for 3D medical-imaging applications. The system, referred to as IMPROMPTU, provides a user-friendly environment for prototyping, testing and executing complex image-analysis processes. IMPROMPTU can stand alone or it can interact with an existing graphics-based 3D medical image-analysis package (VIDA), giving a strong environment for 3D image-analysis, consisting of tools for visualization, manual interaction, and automatic processing. IMPROMPTU links to a large library of 1D, 2D, and 3D image-processing functions, referred to as VIPLIB, but a user can easily link in custom-made functions. 3D applications of the system are given for left-ventricular chamber, myocardial, and upper-airway extractions. PMID:7736412

Sundaramoorthy, G; Hoford, J D; Hoffman, E A; Higgins, W E

1995-01-01

279

Coupling 2D/3D registration method and statistical model to perform 3D reconstruction from partial x-rays images data  

Microsoft Academic Search

3D reconstruction of the spine from frontal and sagittal radiographs is extremely challenging. The overlying features of soft tissues and air cavities interfere with image processing. It is also difficult to obtain information that is accurate enough to reconstruct complete 3D models. To overcome these problems, the proposed method efficiently combines the partial information contained in two images from

T. Cresson; R. Chav; D. Branchaud; L. Humbert; B. Godbout; B. Aubert; W. Skalli; J. A. De Guise

2009-01-01

280

Photon counting passive 3D image sensing for automatic target recognition  

NASA Astrophysics Data System (ADS)

In this paper, we propose photon counting three-dimensional (3D) passive sensing and object recognition using integral imaging. The application of this approach to 3D automatic target recognition (ATR) is investigated using both linear and nonlinear matched filters. We find there is significant potential of the proposed system for 3D sensing and recognition with a low number of photons. The discrimination capability of the proposed system is quantified in terms of discrimination ratio, Fisher ratio, and receiver operating characteristic (ROC) curves. To the best of our knowledge, this is the first report on photon counting 3D passive sensing and ATR with integral imaging.
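The discrimination metrics this abstract quantifies, the Fisher ratio and ROC curves, can be computed directly from matched-filter scores. A generic sketch, not the authors' code:

```python
import numpy as np

def fisher_ratio(scores_target, scores_clutter):
    """Fisher discriminant ratio between matched-filter scores of the
    target class and the non-target (clutter) class."""
    m1, m2 = scores_target.mean(), scores_clutter.mean()
    v1, v2 = scores_target.var(), scores_clutter.var()
    return (m1 - m2) ** 2 / (v1 + v2)

def roc_curve(scores_target, scores_clutter):
    """Probability of detection vs. false alarm as the decision
    threshold sweeps over all observed scores."""
    thresholds = np.sort(np.concatenate([scores_target, scores_clutter]))[::-1]
    pd = np.array([(scores_target >= t).mean() for t in thresholds])
    pfa = np.array([(scores_clutter >= t).mean() for t in thresholds])
    return pfa, pd
```

A higher Fisher ratio, and an ROC curve hugging the top-left corner, both indicate that the filter separates targets from clutter even at low photon counts.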

Yeom, Seokwon; Javidi, Bahram; Watson, Edward

2005-11-01

281

Photon counting passive 3D image sensing for automatic target recognition.  

PubMed

In this paper, we propose photon counting three-dimensional (3D) passive sensing and object recognition using integral imaging. The application of this approach to 3D automatic target recognition (ATR) is investigated using both linear and nonlinear matched filters. We find there is significant potential of the proposed system for 3D sensing and recognition with a low number of photons. The discrimination capability of the proposed system is quantified in terms of discrimination ratio, Fisher ratio, and receiver operating characteristic (ROC) curves. To the best of our knowledge, this is the first report on photon counting 3D passive sensing and ATR with integral imaging. PMID:19503132

Yeom, Seokwon; Javidi, Bahram; Watson, Edward

2005-11-14

282

ICER-3D: A Progressive Wavelet-Based Compressor for Hyperspectral Images  

NASA Technical Reports Server (NTRS)

ICER-3D is a progressive, wavelet-based compressor for hyperspectral images. ICER-3D is derived from the ICER image compressor. ICER-3D can provide lossless and lossy compression, and incorporates an error-containment scheme to limit the effects of data loss during transmission. The three-dimensional wavelet decomposition structure used by ICER-3D exploits correlations in all three dimensions of hyperspectral data sets, while facilitating elimination of spectral ringing artifacts. Correlation is further exploited by a context modeler that effectively exploits spectral dependencies in the wavelet-transformed hyperspectral data. Performance results illustrating the benefits of these features are presented.

Kiely, A.; Klimesh, M.; Xie, H.; Aranki, N.

2005-01-01

283

Enhanced imaging colonoscopy facilitates dense motion-based 3D reconstruction.  

PubMed

We propose a novel approach for estimating a dense 3D model of neoplasia in colonoscopy using enhanced imaging endoscopy modalities. Estimating a dense 3D model of neoplasia is important to make 3D measurements and to classify the superficial lesions in standard frameworks such as the Paris classification. However, it is challenging to obtain decent dense 3D models using computer vision techniques such as Structure-from-Motion due to the lack of texture in conventional (white light) colonoscopy. Therefore, we propose to use enhanced imaging endoscopy modalities such as Narrow Band Imaging and chromoendoscopy to facilitate the 3D reconstruction process. Thanks to the use of these enhanced endoscopy techniques, visualization is improved, resulting in more reliable feature tracks and 3D reconstruction results. We first build a sparse 3D model of neoplasia using Structure-from-Motion from enhanced endoscopy imagery. Then, the sparse reconstruction is densified using a Multi-View Stereo approach, and finally the dense 3D point cloud is transformed into a mesh by means of Poisson surface reconstruction. The obtained dense 3D models facilitate classification of neoplasia in the Paris classification, in which the 3D size and the shape of the neoplasia play a major role in the diagnosis. PMID:24111442

Alcantarilla, Pablo F; Bartoli, Adrien; Chadebecq, Francois; Tilmant, Christophe; Lepilliez, Vincent

2013-01-01

284

3-D Target Location from Stereoscopic SAR Images  

SciTech Connect

SAR range-Doppler images are inherently 2-dimensional. Targets with a height offset lay over onto offset range and azimuth locations. Just which image locations are laid upon depends on the imaging geometry, including depression angle, squint angle, and target bearing. This is the well known layover phenomenon. Images formed with different aperture geometries will exhibit different layover characteristics. These differences can be exploited to ascertain target height information, in a stereoscopic manner. Depending on the imaging geometries, height accuracy can be on the order of horizontal position accuracies, thereby rivaling the best IFSAR capabilities in fine resolution SAR images. All that is required for this to work are two distinct passes with suitably different geometries from any plain old SAR.
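Under a simplified flat-terrain model, the layover displacement of a target at height h is roughly h times the tangent of the depression angle, so two passes at different depression angles determine h from the difference in apparent positions. A schematic calculation that idealizes away the squint, bearing, and geometry details the abstract says matter:

```python
import math

def height_from_layover(d1, d2, psi1_deg, psi2_deg):
    """Simplified stereo-SAR model: with layover displacement
    d = h * tan(depression angle), two passes give
    h = (d1 - d2) / (tan(psi1) - tan(psi2)).
    Displacements and the returned height share the same units."""
    t1 = math.tan(math.radians(psi1_deg))
    t2 = math.tan(math.radians(psi2_deg))
    return (d1 - d2) / (t1 - t2)
```

The accuracy of h then scales with how well the two apparent positions are measured and how different the two tangents are, which is why suitably different pass geometries are required.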

DOERRY,ARMIN W.

1999-10-01

285

Real-time 3D image-guided HIFU therapy  

Microsoft Academic Search

Real-time three-dimensional ultrasound imaging (4D US) was utilized to monitor the treatment site during high-intensity focused ultrasound (HIFU) treatment. To obtain real-time monitoring during HIFU sonication, a 4D US imaging system and HIFU were synchronized and interference on the US image adjusted so that the region of interest was visible during treatment. The system was tested using tissue mimicking phantom

Ali Ziadloo; Shahram Vaezy

2008-01-01

286

3D integral imaging using diffractive Fresnel lens arrays.  

PubMed

We present experimental results with binary amplitude Fresnel lens arrays and binary phase Fresnel lens arrays used to implement integral imaging systems. Their optical performance is compared with high quality refractive microlens arrays and pinhole arrays in terms of image quality, color distortion and contrast. Additionally, we show the first experimental results of lens arrays with different focal lengths in integral imaging, and discuss their ability to simultaneously increase both the depth of focus and the field of view. PMID:19488356

Hain, Mathias; von Spiegel, Wolff; Schmiedchen, Marc; Tschudi, Theo; Javidi, Bahram

2005-01-10

287

Image quality of a cone beam O-arm 3D imaging system  

NASA Astrophysics Data System (ADS)

The O-arm is a cone beam imaging system designed primarily to support orthopedic surgery and is also used for image-guided and vascular surgery. Using a gantry that can be opened or closed, the O-arm can function as a 2-dimensional (2D) fluoroscopy device or collect 3-dimensional (3D) volumetric imaging data like a CT system. Clinical applications of the O-arm in spine surgical procedures, assessment of pedicle screw position, and kyphoplasty procedures show that the O-arm 3D mode provides enhanced imaging information compared to radiographs or fluoroscopy alone. In this study, the image quality of an O-arm system was quantitatively evaluated. A 20 cm diameter CATPHAN 424 phantom was scanned using the pre-programmed head protocols: small/medium (120 kVp, 100 mAs), large (120 kVp, 128 mAs), and extra-large (120 kVp, 160 mAs) in 3D mode. High resolution reconstruction mode (512×512×0.83 mm) was used to reconstruct images for the analysis of low and high contrast resolution, and noise power spectrum. MTF was measured using the point spread function. The results show that the O-arm image is uniform but with a noise pattern which cannot be removed by simply increasing the mAs. The high contrast resolution of the O-arm system was approximately 9 lp/cm. The system has a 10% MTF at 0.45 mm. The low-contrast resolution could not be determined due to the noise pattern. For surgery where locations of a structure are emphasized over a survey of all image details, the image quality of the O-arm is well accepted clinically.
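Measuring MTF from a point spread function, as this study does, amounts to normalizing the magnitude of the PSF's Fourier transform. A sketch; the 2-D form and array sizes are our choice:

```python
import numpy as np

def mtf_from_psf(psf):
    """Modulation transfer function as the normalized magnitude of the
    optical transfer function (the Fourier transform of the PSF).
    Assumes the PSF is centered in the array."""
    # ifftshift moves the PSF center to the origin before the FFT;
    # fftshift puts zero frequency back at the array center afterwards
    otf = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(psf)))
    mtf = np.abs(otf)
    return mtf / mtf.max()
```

Reading off where the normalized curve crosses 0.10 along a frequency axis then gives the 10%-MTF figure of merit quoted in the abstract.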

Zhang, Jie; Weir, Victor; Lin, Jingying; Hsiung, Hsiang; Ritenour, E. Russell

2009-02-01

288

THE USE OF PANORAMIC IMAGES FOR 3-D ARCHAEOLOGICAL SURVEY  

Microsoft Academic Search

Panoramic images are efficiently used for documenting archaeological sites and objects. In our paper we present a new approach to developing the use of panoramic images for archaeological survey. The work is part of the Finnish Jabal Haroun Project in Petra, Jordan. The primary motivation has been to develop a procedure for field inventory, in which photogrammetric documentation could be

Henrik Haggrén; Hanne Junnilainen; Jaakko Järvinen; Terhi Nuutinen; Mika Lavento; Mika Huotari

289

An objective comparison of 3-D image interpolation methods  

Microsoft Academic Search

To aid in the display, manipulation, and analysis of biomedical image data, the data usually need to be converted to an isotropic discretization through the process of interpolation. Traditional techniques consist of direct interpolation of the grey values. When user interaction is called for in image segmentation, as a consequence of these interpolation methods, the user needs to segment a

George J. Grevera; Jayaram K. Udupa

1998-01-01

290

Multithreaded real-time 3D image processing software architecture and implementation  

NASA Astrophysics Data System (ADS)

Recently, 3D displays and videos have generated a lot of interest in the consumer electronics industry. To make 3D capture and playback popular and practical, a user friendly playback interface is desirable. Towards this end, we built a real time software 3D video player. The 3D video player displays user captured 3D videos, provides for various 3D specific image processing functions and ensures a pleasant viewing experience. Moreover, the player enables user interactivity by providing digital zoom and pan functionalities. This real time 3D player was implemented on the GPU using CUDA and OpenGL. The player provides user interactive 3D video playback. Stereo images are first read by the player from a fast drive and rectified. Further processing of the images determines the optimal convergence point in the 3D scene to reduce eye strain. The rationale for this convergence point selection takes into account scene depth and display geometry. The first step in this processing chain is identifying keypoints by detecting vertical edges within the left image. Regions surrounding reliable keypoints are then located on the right image through the use of block matching. The difference in the positions between the corresponding regions in the left and right images is then used to calculate disparity. The extrema of the disparity histogram give the scene disparity range. The left and right images are shifted based upon the calculated range, in order to place the desired region of the 3D scene at convergence. All the above computations are performed on one CPU thread which calls CUDA functions. Image upsampling and shifting is performed in response to user zoom and pan. The player also consists of a CPU display thread, which uses OpenGL rendering (quad buffers). This also gathers user input for digital zoom and pan and sends them to the processing thread.
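The disparity step described above, block matching around keypoints followed by taking the extrema of the disparity histogram, can be sketched in a few lines. The SAD cost and the parameter values are illustrative assumptions, and the player runs the real version on the GPU with CUDA:

```python
import numpy as np

def block_disparity(left, right, y, x, block=8, max_disp=32):
    """Disparity at a keypoint via 1-D block matching: slide a block
    from the left image across the same row of the right image and keep
    the shift with the smallest sum of absolute differences (SAD)."""
    ref = left[y:y + block, x:x + block].astype(float)
    best, best_d = np.inf, 0
    for d in range(0, max_disp + 1):
        if x - d < 0:
            break
        cand = right[y:y + block, x - d:x - d + block].astype(float)
        sad = np.abs(ref - cand).sum()
        if sad < best:
            best, best_d = sad, d
    return best_d

def disparity_range(disparities):
    """Scene disparity range from the extrema of the per-keypoint
    disparities; the player shifts the views by this range to place the
    scene at a comfortable convergence point."""
    return min(disparities), max(disparities)
```

In the player this would run only at the reliable vertical-edge keypoints, keeping the per-frame cost low enough for real-time playback.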

Ramachandra, Vikas; Atanassov, Kalin; Aleksic, Milivoje; Goma, Sergio R.

2011-02-01

291

The Mathematical Foundations of 3D Compton Scatter Emission Imaging  

PubMed Central

The mathematical principles of tomographic imaging using detected (unscattered) X- or gamma-rays are based on the two-dimensional Radon transform and many of its variants. In this paper, we show that two new generalizations, called conical Radon transforms, are related to three-dimensional imaging processes based on detected Compton scattered radiation. The first class of conical Radon transform has been introduced recently to support imaging principles of collimated detector systems. The second class is new and is closely related to the Compton camera imaging principles and invertible under special conditions. As they are poised to play a major role in future designs of biomedical imaging systems, we present an account of their most important properties which may be relevant for active researchers in the field.
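As a hedged illustration of the objects involved (the notation below is assumed for exposition, not taken from the paper), a conical Radon transform integrates the source distribution over cones whose half-opening angle is fixed by the Compton scattering angle:

```latex
% Conical Radon transform (illustrative notation): integral of the source
% density f over the cone with apex v, unit axis direction beta, and
% half-opening angle omega, which Compton kinematics ties to the detected
% energy loss.
\mathcal{C}f(\mathbf{v}, \boldsymbol{\beta}, \omega)
  = \int_{\{\mathbf{x}\,:\,\angle(\mathbf{x}-\mathbf{v},\,\boldsymbol{\beta}) = \omega\}}
      f(\mathbf{x})\, \mathrm{d}\sigma(\mathbf{x})
```

Inversion then amounts to recovering f from these cone integrals, which the paper addresses for the collimated-detector class and, under special conditions, for the Compton-camera class.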

Truong, T. T.; Nguyen, M. K.; Zaidi, H.

2007-01-01

292

Light sheet adaptive optics microscope for 3D live imaging  

NASA Astrophysics Data System (ADS)

We report on the incorporation of adaptive optics (AO) into the imaging arm of a selective plane illumination microscope (SPIM). SPIM has recently emerged as an important tool for life science research due to its ability to deliver high-speed, optically sectioned, time-lapse microscope images from deep within in vivo selected samples. SPIM provides a very interesting system for the incorporation of AO as the illumination and imaging paths are decoupled and AO may be useful in both paths. In this paper, we will report the use of AO applied to the imaging path of a SPIM, demonstrating significant improvement in image quality of a live GFP-labeled transgenic zebrafish embryo heart using a modal, wavefront sensorless approach and a heart synchronization method. These experimental results are linked to a computational model showing that significant aberrations are produced by the tube holding the sample in addition to the aberration from the biological sample itself.
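A modal, wavefront-sensorless correction loop of the kind mentioned above can be sketched as follows; this is a generic illustration (the metric callback, mode count, bias, and parabolic-search details are assumptions, not the authors' implementation).

```python
import numpy as np

def sensorless_modal_correct(metric, n_modes=5, bias=0.5, iters=2):
    """Modal, wavefront-sensorless AO sketch: for each correction mode, probe
    the image-quality metric at -bias, 0, +bias and fit a parabola to pick the
    coefficient that maximizes it (assumes the metric is locally quadratic).
    `metric(coeffs)` is a hypothetical callback returning image sharpness."""
    coeffs = np.zeros(n_modes)
    for _ in range(iters):
        for m in range(n_modes):
            probes = []
            for delta in (-bias, 0.0, bias):
                trial = coeffs.copy()
                trial[m] += delta
                probes.append(metric(trial))
            y_minus, y0, y_plus = probes
            denom = y_minus - 2 * y0 + y_plus
            if abs(denom) > 1e-12:  # vertex of the parabola through 3 probes
                coeffs[m] += 0.5 * bias * (y_minus - y_plus) / denom
    return coeffs

# Toy metric: sharpness peaks when the correction matches a fixed aberration.
true_ab = np.array([0.3, -0.2, 0.1, 0.0, 0.0])
sharpness = lambda c: np.exp(-np.sum((c - true_ab) ** 2))
est = sensorless_modal_correct(sharpness, n_modes=5)
```

With a smooth, single-peaked metric the loop converges toward the aberration in a couple of sweeps, which is why such schemes can work without a wavefront sensor.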

Bourgenot, C.; Taylor, J. M.; Saunter, C. D.; Girkin, J. M.; Love, G. D.

2013-02-01

293

3D lidar imaging for detecting and understanding plant responses and canopy structure.  

PubMed

Understanding and diagnosing plant responses to stress will benefit greatly from three-dimensional (3D) measurement and analysis of plant properties because plant responses are strongly related to their 3D structures. Light detection and ranging (lidar) has recently emerged as a powerful tool for direct 3D measurement of plant structure. Here the use of 3D lidar imaging to estimate plant properties such as canopy height, canopy structure, carbon stock, and species is demonstrated, and plant growth and shape responses are assessed by reviewing the development of lidar systems and their applications from the leaf level to canopy remote sensing. In addition, the recent creation of accurate 3D lidar images combined with natural colour, chlorophyll fluorescence, photochemical reflectance index, and leaf temperature images is demonstrated, thereby providing information on responses of pigments, photosynthesis, transpiration, stomatal opening, and shape to environmental stresses; these data can be integrated with 3D images of the plants using computer graphics techniques. Future lidar applications that provide more accurate dynamic estimation of various plant properties should improve our understanding of plant responses to stress and of interactions between plants and their environment. Moreover, combining 3D lidar with other passive and active imaging techniques will potentially improve the accuracy of airborne and satellite remote sensing, and make it possible to analyse 3D information on ecophysiological responses and levels of various substances in agricultural and ecological applications and in observations of the global biosphere. PMID:17030540
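As a toy illustration of one derived property mentioned above, canopy height can be estimated from a 3D lidar point cloud by gridding the points and differencing top returns against a crude per-cell ground estimate; this is a minimal sketch (real pipelines use proper ground filtering), and all names and values are illustrative.

```python
import numpy as np

def canopy_height_model(points, cell=1.0):
    """Crude canopy-height sketch from an (N, 3) lidar point cloud (x, y, z):
    bin points into square cells, then report max z minus min z per cell,
    using the lowest return as a stand-in for the ground surface."""
    xy = np.floor(points[:, :2] / cell).astype(int)
    cells = {}
    for key, z in zip(map(tuple, xy), points[:, 2]):
        zmin, zmax = cells.get(key, (np.inf, -np.inf))
        cells[key] = (min(zmin, z), max(zmax, z))
    return {k: zmax - zmin for k, (zmin, zmax) in cells.items()}

# Toy cloud: flat ground at z = 0 plus a 5 m "tree" return over cell (0, 0).
pts = np.array([[0.2, 0.3, 0.0],
                [0.5, 0.6, 5.0],
                [1.4, 0.2, 0.0]])
chm = canopy_height_model(pts)
```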

Omasa, Kenji; Hosoi, Fumiki; Konishi, Atsumi

2007-01-01

294

3D modeling and parameterization of the left ventricle in echocardiographic images using deformable superquadrics  

Microsoft Academic Search

This paper proposes a new method for the quantitative analysis of the mobility of the left ventricle (LV), based on a 3D dynamic model. One superquadric is used as the 3D global model. For data acquisition, an electromechanical device that allows acquisition of an image volume as 60 cross-sections with rotational cylindrical 3D symmetry is adapted to an echograph.
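For reference, a superquadric such as the global LV model mentioned above is conveniently described by its inside-outside function; the sketch below assumes the standard parameterization with axis lengths `a` and shape exponents `e1`, `e2` (illustrative values, not the paper's fitted parameters).

```python
import numpy as np

def superquadric_f(p, a=(1.0, 1.0, 1.0), e1=1.0, e2=1.0):
    """Inside-outside function of a superquadric: F < 1 inside the surface,
    F = 1 on it, F > 1 outside. Fitting a superquadric to data amounts to
    driving F toward 1 at the measured surface points."""
    x, y, z = np.abs(np.asarray(p, dtype=float)) / np.asarray(a, dtype=float)
    return (x ** (2 / e2) + y ** (2 / e2)) ** (e2 / e1) + z ** (2 / e1)

# With e1 = e2 = 1 and unit axes, the superquadric reduces to the unit sphere.
on_surface = superquadric_f([1.0, 0.0, 0.0])
inside = superquadric_f([0.2, 0.1, 0.1])
```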

A. Bosnjak; V. Burdin; V. Torrealba; G. Montilla; B. Solaiman; C. Roux

2001-01-01

295

Denoising for 3-D Photon-Limited Imaging Data Using Nonseparable Filterbanks  

Microsoft Academic Search

In this paper, we present a novel frame-based denoising algorithm for photon-limited 3D images. We first construct a new 3D nonseparable filterbank by adding elements to an existing frame in a structurally stable way. In contrast with the traditional 3D separable wavelet system, the new filterbank is capable of using edge information in multiple directions. We then propose a data-adaptive

Alberto Santamaría-pang; Teodor Stefan Bildea; Tan Shan; Ioannis A. Kakadiaris

2008-01-01

296

Photon counting passive 3D image sensing for automatic target recognition  

Microsoft Academic Search

In this paper, we propose photon-counting three-dimensional (3D) passive sensing and object recognition using integral imaging. The application of this approach to 3D automatic target recognition (ATR) is investigated using both linear and nonlinear matched filters. We find significant potential in the proposed system for 3D sensing and recognition with a low number of photons. The discrimination

Seokwon Yeom; Bahram Javidi; Edward Watson

2005-01-01

297

Hyperspectral Image Lossless Compression Using the 3D Set Partitioned Embedded Zero Block Coding Algorithm  

Microsoft Academic Search

In this paper, we propose a hyperspectral image lossless compression coder based on the three-dimensional set partitioned embedded zero block coding (3D SPEZBC) algorithm. This coder adopts the 3D integer wavelet packet transform for decorrelation and set-partitioned zero block coding for bitplane coding. It not only provides the same excellent coding performance as the 3D EZBC algorithm, but

Ying Hou; Guizhong Liu

2008-01-01

298

A second generation 3D integrated feature-extracting image sensor  

Microsoft Academic Search

This paper presents a second generation 3D integrated feature-extracting CMOS image sensor. This 64×96 pixel vision sensor was designed and fabricated using a 0.18 µm 3D FDSOI process. Each pixel implements a photodiode and computation circuits on three individual tiers, which are vertically stacked and connected through the 3D inter-tier vias. The photodiode is sited on the top tier with

Xiangyu Zhang; Shoushun Chen; Eugenio Culurciello

2011-01-01

299

Coherence-Based 3-D and Spectral Imaging and Laser-Scanning Microscopy  

Microsoft Academic Search

The basics of three-dimensional (3-D) and spectral imaging techniques that are based on the detection of coherence functions, together with other related techniques, are reviewed. The principle of 3-D source retrieval is based on the propagation law of the optical random field through free space. The 3-D and spectral information are retrieved from the cross-spectral density function of optical

KAZUYOSHI ITOH; WATARU WATANABE; HIDENOBU ARIMOTO; KEISUKE ISOBE

2006-01-01

300

Fast multicolor 3D imaging using aberration-corrected multifocus microscopy.  

PubMed

Conventional acquisition of three-dimensional (3D) microscopy data requires sequential z scanning and is often too slow to capture biological events. We report an aberration-corrected multifocus microscopy method capable of producing an instant focal stack of nine 2D images. Appended to an epifluorescence microscope, the multifocus system enables high-resolution 3D imaging in multiple colors with single-molecule sensitivity, at speeds limited by the camera readout time of a single image. PMID:23223154

Abrahamsson, Sara; Chen, Jiji; Hajj, Bassam; Stallinga, Sjoerd; Katsov, Alexander Y; Wisniewski, Jan; Mizuguchi, Gaku; Soule, Pierre; Mueller, Florian; Dugast Darzacq, Claire; Darzacq, Xavier; Wu, Carl; Bargmann, Cornelia I; Agard, David A; Dahan, Maxime; Gustafsson, Mats G L

2013-01-01

301

3d medical image segmentation approach based on multi-label front propagation  

Microsoft Academic Search

Many practical applications in the field of medical image processing require robust and valid 3D image segmentation results. In this paper, we present a semi-automatic iterative segmentation approach for 3D medical images that combines a 2D boundary tracking algorithm with a boundary mapping process. On each consecutive slice, the boundary tracking process is accomplished in an alternating procedure

Hua Li; Abderrahim Elmoataz; Jalal Fadili; Su Ruan; Barbara Romaniuk

2004-01-01

302

On Limits of Embedding in 3D Images Based on 2D Watson's Model  

NASA Astrophysics Data System (ADS)

We extend the Watson image quality metric to 3D images through the concept of integral imaging. In Watson's model, perceptual thresholds for changes to the DCT coefficients of a 2D image are given for information hiding. These thresholds are estimated such that the resulting distortion in the 2D image remains undetectable by the human eye. In this paper, the same perceptual thresholds are estimated for a 3D scene in the integral imaging method. These thresholds are obtained from Watson's model using the relation between the 2D elemental images and the resulting 3D image. The proposed model is evaluated through subjective tests in a typical image steganography scheme.

Kavehvash, Zahra; Ghaemmaghami, Shahrokh

303

Image processing and 3D visualization in forensic pathologic examination  

NASA Astrophysics Data System (ADS)

The use of image processing is becoming increasingly important in the evaluation of violent crime. While much work has been done in the use of these techniques for forensic purposes outside of forensic pathology, its use in the pathologic examination of wounding has been limited. We are investigating the use of image processing and three-dimensional visualization in the analysis of patterned injuries and tissue damage. While image processing will never replace classical understanding and interpretation of how injuries develop and evolve, it can be a useful tool in helping an observer notice features in an image, may help provide correlation of surface to deep tissue injury, and provide a mechanism for the development of a metric for analyzing how likely it may be that a given object may have caused a given wound. We are also exploring methods of acquiring three-dimensional data for such measurements, which is the subject of a second paper.

Oliver, William R.; Altschuler, Bruce R.

1996-02-01

304

Real Time 3-D Ultrasonic Diagnostic Imager for Battlefield Application.  

National Technical Information Service (NTIS)

This report presents progress achieved through the second 12 months of Phase II of a two phase technology development program. Developments include: a new ROIC specifically designed for ultrasound imaging; a 128 x 128 Transducer Hybrid Assembly (THA), tra...

T. White

1997-01-01

305

A new approach towards image based virtual 3D city modeling by using close range photogrammetry  

NASA Astrophysics Data System (ADS)

A 3D city model is a digital representation of the Earth's surface and its related objects, such as buildings, trees, vegetation, and other man-made features belonging to an urban area. The demand for 3D city modeling is increasing daily for various engineering and non-engineering applications. Generally, three main image-based approaches are used for virtual 3D city model generation: sketch-based modeling, procedural-grammar-based modeling, and close-range-photogrammetry-based modeling. A literature study shows that, to date, no complete solution is available to create a complete 3D city model from images, and these image-based methods also have limitations. This paper gives a new approach to image-based virtual 3D city modeling using close-range photogrammetry. The approach is divided into three sections: first, the data acquisition process; second, 3D data processing; and third, the data combination process. In the data acquisition process, a multi-camera setup was developed and used for video recording of an area, image frames were created from the video data, and the minimum required set of suitable frames was selected for 3D processing. In the second section, a 3D model of the area was created based on close-range photogrammetric principles and computer vision techniques. In the third section, this 3D model was exported for adding and merging with other pieces of the larger area; scaling and alignment of the 3D model were performed, and after texturing and rendering, a final photo-realistic textured 3D model was created. This 3D model was then transferred into a walk-through model or movie form. Most of the processing steps are automatic, so the method is cost-effective and less laborious, and its accuracy is good. For this research work, the study area is the campus of the Department of Civil Engineering, Indian Institute of Technology, Roorkee, which acts as a prototype for a city. 
Aerial photography is restricted in many countries, and high-resolution satellite images are costly. The proposed method, in contrast, is based only on simple video recording of the area and is thus well suited for 3D city modeling. A photo-realistic, scalable, geo-referenced virtual 3D city model is useful for many kinds of applications, such as planning, navigation, tourism, disaster management, transportation, municipal, urban, and environmental management, and the real-estate industry. This study therefore provides a good roadmap for the geomatics community to create photo-realistic virtual 3D city models using close-range photogrammetry.

Singh, S. P.; Jain, K.; Mandla, V. R.

2014-05-01

306

Analytic 3D image reconstruction using all detected events  

Microsoft Academic Search

The authors present the results of testing a previously presented algorithm for three-dimensional image reconstruction that uses all gamma-ray coincidence events detected by a PET volume-imaging scanner. By using two iterations of an analytic filter-backprojection method, the algorithm is not constrained by the requirement of a spatially invariant detector point spread function, which limits normal analytic techniques. Removing this constraint

P. E. Kinahan; J. G. Rogers

1989-01-01

307

Determining 3D Flow Fields via Multi-camera Light Field Imaging  

PubMed Central

In the field of fluid mechanics, the resolution of computational schemes has outpaced experimental methods and widened the gap between predicted and observed phenomena in fluid flows. Thus, a need exists for an accessible method capable of resolving three-dimensional (3D) data sets for a range of problems. We present a novel technique for performing quantitative 3D imaging of many types of flow fields. The 3D technique enables investigation of complicated velocity fields and bubbly flows. Measurements of these types present a variety of challenges to the instrument. For instance, optically dense bubbly multiphase flows cannot be readily imaged by traditional, non-invasive flow measurement techniques due to the bubbles occluding optical access to the interior regions of the volume of interest. By using Light Field Imaging we are able to reparameterize images captured by an array of cameras to reconstruct a 3D volumetric map for every time instance, despite partial occlusions in the volume. The technique makes use of an algorithm known as synthetic aperture (SA) refocusing, whereby a 3D focal stack is generated by combining images from several cameras post-capture (1). Light Field Imaging allows for the capture of angular as well as spatial information about the light rays, and hence enables 3D scene reconstruction. Quantitative information can then be extracted from the 3D reconstructions using a variety of processing algorithms. In particular, we have developed measurement methods based on Light Field Imaging for performing 3D particle image velocimetry (PIV), extracting bubbles in a 3D field and tracking the boundary of a flickering flame. We present the fundamentals of the Light Field Imaging methodology in the context of our setup for performing 3DPIV of the airflow passing over a set of synthetic vocal folds, and show representative results from application of the technique to a bubble-entraining plunging jet.
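The synthetic-aperture refocusing step can be illustrated with a minimal shift-and-average sketch; the per-camera shifts below are hypothetical integer parallaxes for one chosen depth plane, not a calibrated camera model.

```python
import numpy as np

def sa_refocus(images, shifts):
    """Synthetic-aperture refocus sketch: shift each camera image by its
    per-camera parallax for the chosen depth plane, then average. Objects on
    that plane align and stay sharp; off-plane objects blur out."""
    stack = [np.roll(np.roll(img, dy, axis=0), dx, axis=1)
             for img, (dy, dx) in zip(images, shifts)]
    return np.mean(stack, axis=0)

# Toy scene: a bright point seen by 3 cameras with 1-pixel parallax steps.
base = np.zeros((9, 9))
base[4, 4] = 1.0
views = [np.roll(base, k, axis=1) for k in (-1, 0, 1)]  # columns 3, 4, 5
focused = sa_refocus(views, shifts=[(0, 1), (0, 0), (0, -1)])  # undo parallax
```

Sweeping the shifts over a range of depth planes yields the 3D focal stack from which quantitative measurements are extracted.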

Truscott, Tadd T.; Belden, Jesse; Nielson, Joseph R.; Daily, David J.; Thomson, Scott L.

2013-01-01

308

Determining 3D flow fields via multi-camera light field imaging.  

PubMed

In the field of fluid mechanics, the resolution of computational schemes has outpaced experimental methods and widened the gap between predicted and observed phenomena in fluid flows. Thus, a need exists for an accessible method capable of resolving three-dimensional (3D) data sets for a range of problems. We present a novel technique for performing quantitative 3D imaging of many types of flow fields. The 3D technique enables investigation of complicated velocity fields and bubbly flows. Measurements of these types present a variety of challenges to the instrument. For instance, optically dense bubbly multiphase flows cannot be readily imaged by traditional, non-invasive flow measurement techniques due to the bubbles occluding optical access to the interior regions of the volume of interest. By using Light Field Imaging we are able to reparameterize images captured by an array of cameras to reconstruct a 3D volumetric map for every time instance, despite partial occlusions in the volume. The technique makes use of an algorithm known as synthetic aperture (SA) refocusing, whereby a 3D focal stack is generated by combining images from several cameras post-capture (1). Light Field Imaging allows for the capture of angular as well as spatial information about the light rays, and hence enables 3D scene reconstruction. Quantitative information can then be extracted from the 3D reconstructions using a variety of processing algorithms. In particular, we have developed measurement methods based on Light Field Imaging for performing 3D particle image velocimetry (PIV), extracting bubbles in a 3D field and tracking the boundary of a flickering flame. We present the fundamentals of the Light Field Imaging methodology in the context of our setup for performing 3DPIV of the airflow passing over a set of synthetic vocal folds, and show representative results from application of the technique to a bubble-entraining plunging jet. PMID:23486112

Truscott, Tadd T; Belden, Jesse; Nielson, Joseph R; Daily, David J; Thomson, Scott L

2013-01-01

309

3D-Holoscopic Imaging: A New Dimension to Enhance Imaging in Minimally Invasive Therapy in Urologic Oncology  

PubMed Central

Background and Purpose: Existing imaging modalities of urologic pathology are limited by three-dimensional (3D) representation on a two-dimensional screen. We present 3D-holoscopic imaging as a novel method of representing Digital Imaging and Communications in Medicine data images taken from CT and MRI to produce 3D-holographic representations of anatomy without special eyewear in natural light. 3D-holoscopic technology produces images that are true optical models. This technology is based on physical principles with duplication of light fields. The 3D content is captured in real time with the content viewed by multiple viewers independently of their position, without 3D eyewear. Methods: We display 3D-holoscopic anatomy relevant to minimally invasive urologic surgery without the need for 3D eyewear. Results: The results have demonstrated that medical 3D-holoscopic content can be displayed on a commercially available multiview auto-stereoscopic display. Conclusion: The next step is validation studies comparing 3D-holoscopic imaging with conventional imaging.

Aggoun, Amar; Swash, Mohammad; Grange, Philippe C.R.; Challacombe, Benjamin; Dasgupta, Prokar

2013-01-01

310

A Workstation for Interactive Display and Quantitative Analysis of 3-D and 4-D Biomedical Images  

PubMed Central

The capability to extract objective and quantitatively accurate information from 3-D radiographic biomedical images has not kept pace with the capabilities to produce the images themselves. This is rather an ironic paradox, since on the one hand the new 3-D and 4-D imaging capabilities promise significant potential for providing greater specificity and sensitivity (i.e., precise objective discrimination and accurate quantitative measurement of body tissue characteristics and function) in clinical diagnostic and basic investigative imaging procedures than ever possible before, but on the other hand, the momentous advances in computer and associated electronic imaging technology which have made these 3-D imaging capabilities possible have not been concomitantly developed for full exploitation of these capabilities. Therefore, we have developed a powerful new microcomputer-based system which permits detailed investigations and evaluation of 3-D and 4-D (dynamic 3-D) biomedical images. The system comprises a special workstation to which all the information in a large 3-D image data base is accessible for rapid display, manipulation, and measurement. The system provides important capabilities for simultaneously representing and analyzing both structural and functional data and their relationships in various organs of the body. This paper provides a detailed description of this system, as well as some of the rationale, background, theoretical concepts, and practical considerations related to system implementation.

Robb, R.A.; Heffeman, P.B.; Camp, J.J.; Hanson, D.P.

1986-01-01

311

Optimization of the open-loop liquid crystal adaptive optics retinal imaging system  

NASA Astrophysics Data System (ADS)

An open-loop adaptive optics (AO) system for retinal imaging was constructed using a liquid crystal spatial light modulator (LC-SLM) as the wavefront compensator. Due to the dispersion of the LC-SLM, there was only one illumination source for both aberration detection and retinal imaging in this system. To increase the field of view (FOV) for retinal imaging, a modified mechanical shutter was integrated into the illumination channel to control the size of the illumination spot on the fundus. The AO loop was operated in a pulsing mode, and the fundus was illuminated twice by two laser impulses in a single AO correction loop. As a result, the FOV for retinal imaging was increased to 1.7 deg without compromising the aberration detection accuracy. The correction precision of the open-loop AO system was evaluated in a closed-loop configuration; the residual error is approximately 0.0909λ (root-mean-square, RMS), and the Strehl ratio reaches 0.7217. Two subjects with differing degrees of myopia (-3 D and -5 D) were tested. High-resolution images of capillaries and photoreceptors were obtained.

Kong, Ningning; Li, Chao; Xia, Mingliang; Li, Dayu; Qi, Yue; Xuan, Li

2012-02-01

312

3D imaging from theory to practice: the Mona Lisa story  

NASA Astrophysics Data System (ADS)

The warped poplar panel and the technique developed by Leonardo to paint the Mona Lisa present a unique research and engineering challenge for the design of a complete optical 3D imaging system. This paper discusses the solution developed to precisely measure in 3D the world's most famous painting despite its highly contrasted paint surface and reflective varnish. The discussion focuses on the opto-mechanical design and the complete portable 3D imaging system used for this unique occasion. The challenges associated with obtaining 3D color images at a resolution of 0.05 mm and a depth precision of 0.01 mm are illustrated by exploring the virtual 3D model of the Mona Lisa.

Blais, Francois; Cournoyer, Luc; Beraldin, J.-Angelo; Picard, Michel

2008-08-01

313

Intensity-based segmentation and visualization of cells in 3D microscopic images using the GPU  

NASA Astrophysics Data System (ADS)

3D microscopy images contain an astronomical amount of data, rendering 3D microscopy image processing time-consuming and laborious on a central processing unit (CPU). To cope with this, users often crop a region of interest (ROI) of the input image to a small size. Although this reduces cost and time, it has drawbacks at the image-processing level: the selected ROI strongly depends on the user, and original image information is lost. To mitigate these problems, we developed a 3D microscopy image processing tool on a graphics processing unit (GPU). Our tool provides various efficient automatic thresholding methods to achieve intensity-based segmentation of 3D microscopy images, and users can select the algorithm to be applied. Further, the tool provides visualization of segmented volume data and can set the scale, translation, etc. using a keyboard and mouse. However, the rapidly visualized 3D objects still need to be analyzed to obtain information useful to biologists, which requires quantitative data from the images. Therefore, we label the segmented 3D objects within all 3D microscopic images and obtain quantitative information on each labeled object; this information can be used as classification features. A user can select the object to be analyzed, and our tool displays the selected object in a new window so that more details can be observed. Finally, we validate the effectiveness of our tool by comparing CPU and GPU processing times under matched specifications and configurations.
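Otsu's method is a typical example of the automatic thresholding such a tool might offer for intensity-based segmentation; the block below is a CPU sketch of the histogram computation, not the paper's GPU implementation, and the toy data are illustrative.

```python
import numpy as np

def otsu_threshold(volume, bins=256):
    """Otsu's automatic threshold: pick the intensity that maximizes the
    between-class variance of the histogram, separating background from
    foreground without user-chosen parameters."""
    hist, edges = np.histogram(volume.ravel(), bins=bins)
    p = hist / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(p)                # class-0 probability up to each bin
    mu = np.cumsum(p * centers)      # cumulative mean intensity
    mu_t = mu[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * w0 - mu) ** 2 / (w0 * (1 - w0))
    sigma_b[~np.isfinite(sigma_b)] = 0
    return centers[np.argmax(sigma_b)]

# Bimodal toy "volume": dark background around 0.1, bright cells around 0.9.
rng = np.random.default_rng(1)
vol = np.concatenate([rng.normal(0.1, 0.03, 4000),
                      rng.normal(0.9, 0.03, 1000)])
t = otsu_threshold(vol)
```

Thresholding the volume at `t` then yields the binary mask whose connected components can be labeled and measured.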

Kang, Mi-Sun; Lee, Jeong-Eom; Jeon, Woong-ki; Choi, Heung-Kook; Kim, Myoung-Hee

2013-02-01

314

A wide-field microscopy technique using a linear image sensor for obtaining 3D images  

Microsoft Academic Search

Three-dimensional (3D) microscopy is a huge field that includes several microscopy techniques. Among these techniques the confocal microscopy is widely known and used due to its improved axial resolution or optical sectioning ability. However its high cost and image acquisition time as well as low signal-to-noise ratio are important drawbacks. Other techniques use wide-field structured illumination to get depth information

Milton P. Macedo; Antonio J. Barata; Ana G. Fernandes; Carlos M. Correia

2005-01-01

315

Cultural Relic 3D Reconstruction from Digital Images and Laser Point Clouds  

Microsoft Academic Search

This paper proposes a method to combine digital images and laser point clouds to reconstruct a 3D model of an archaic glockenspiel. All stations of the laser point clouds are connected using the ICP algorithm. Then image matching is used to register the high-resolution digital images and the laser synchronous images to gain the corresponding texture
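A single ICP iteration of the kind used to connect the scan stations can be sketched as brute-force nearest neighbours followed by a Kabsch/SVD rigid fit; this is a generic illustration with illustrative names and data, not the paper's implementation.

```python
import numpy as np

def icp_step(src, dst):
    """One ICP iteration: match each source point to its nearest destination
    point, then solve the best rigid transform (Kabsch via SVD). Real ICP
    repeats this until the alignment stops improving."""
    d2 = ((src[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
    matched = dst[d2.argmin(axis=1)]          # nearest-neighbour pairs
    sc, mc = src.mean(0), matched.mean(0)
    H = (src - sc).T @ (matched - mc)         # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1, 1, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                        # proper rotation (det = +1)
    t = mc - R @ sc
    return R, t

# Toy clouds: dst is src rotated 10 degrees about z and shifted.
rng = np.random.default_rng(2)
src = rng.random((30, 3))
th = np.deg2rad(10)
R_true = np.array([[np.cos(th), -np.sin(th), 0],
                   [np.sin(th),  np.cos(th), 0],
                   [0, 0, 1]])
dst = src @ R_true.T + np.array([0.2, -0.1, 0.05])
R, t = icp_step(src, dst)
aligned = src @ R.T + t
err_before = ((src[:, None] - dst[None]) ** 2).sum(-1).min(1).mean()
err_after = ((aligned[:, None] - dst[None]) ** 2).sum(-1).min(1).mean()
```

Each iteration provably does not increase the mean squared nearest-neighbour error, which is why chaining such steps registers the stations.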

Jie Liu; Jianqing Zhang; Jia Xu

2008-01-01

316

An Optimized Blockwise Non Local Means Denoising Filter for 3D Magnetic Resonance Images  

Microsoft Academic Search

A critical issue in image restoration is the problem of noise removal while keeping the integrity of relevant image information. Denoising is a crucial step to increase image quality and to improve the performance of all the tasks needed for quantitative imaging analysis. The method proposed in this paper is based on a 3D optimized blockwise version of the
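The non-local means idea underlying the paper's optimized blockwise filter can be illustrated in 1D with an exhaustive, unoptimized sketch; the patch size and smoothing parameter below are illustrative assumptions.

```python
import numpy as np

def nlmeans_1d(signal, patch=3, h=0.1):
    """Non-local means sketch: replace each sample by a weighted average of
    all samples whose surrounding patches look similar, with weights decaying
    exponentially in the patch distance (bandwidth h)."""
    half = patch // 2
    pad = np.pad(signal, half, mode="reflect")
    patches = np.array([pad[i:i + patch] for i in range(len(signal))])
    out = np.empty_like(signal)
    for i in range(len(signal)):
        d2 = ((patches - patches[i]) ** 2).mean(axis=1)
        w = np.exp(-d2 / (h * h))
        out[i] = (w * signal).sum() / w.sum()
    return out

# A noisy constant signal: averaging similar patches suppresses the noise.
rng = np.random.default_rng(3)
noisy = 0.5 + 0.05 * rng.standard_normal(128)
den = nlmeans_1d(noisy)
```

The blockwise variant of the paper restores whole blocks at once rather than single samples, which is what makes the 3D version computationally tractable.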

Pierrick Coupe; Pierre Yger; Sylvain Prima; Pierre Hellier; Charles Kervrann; Christian Barillot

2008-01-01

317

Space Radar Image Isla Isabela in 3-D  

NASA Technical Reports Server (NTRS)

This is a three-dimensional view of Isabela, one of the Galapagos Islands located off the western coast of Ecuador, South America. This view was constructed by overlaying a Spaceborne Imaging Radar-C/X-band Synthetic Aperture Radar (SIR-C/X-SAR) image on a digital elevation map produced by TOPSAR, a prototype airborne interferometric radar which produces simultaneous image and elevation data. The vertical scale in this image is exaggerated by a factor of 1.87. The SIR-C/X-SAR image was taken on the 40th orbit of space shuttle Endeavour. The image is centered at about 0.5 degree south latitude and 91 degrees west longitude and covers an area of 75 by 60 kilometers (47 by 37 miles). The radar incidence angle at the center of the image is about 20 degrees. The western Galapagos Islands, which lie about 1,200 kilometers (750 miles) west of Ecuador in the eastern Pacific, have six active volcanoes similar to the volcanoes found in Hawaii and reflect the volcanic processes that occur where the ocean floor is created. Since the time of Charles Darwin's visit to the area in 1835, there have been more than 60 recorded eruptions on these volcanoes. This SIR-C/X-SAR image of Alcedo and Sierra Negra volcanoes shows the rougher lava flows as bright features, while ash deposits and smooth pahoehoe lava flows appear dark. Vertical exaggeration of relief is a common tool scientists use to detect relationships between structure (for example, faults and fractures) and topography. Spaceborne Imaging Radar-C and X-Synthetic Aperture Radar (SIR-C/X-SAR) is part of NASA's Mission to Planet Earth. The radars illuminate Earth with microwaves allowing detailed observations at any time, regardless of weather or sunlight conditions. SIR-C/X-SAR uses three microwave wavelengths: L-band (24 cm), C-band (6 cm) and X-band (3 cm). The multi-frequency data will be used by the international scientific community to better understand the global environment and how it is changing. 
The SIR-C/X-SAR data, complemented by aircraft and ground studies, will give scientists clearer insights into those environmental changes which are caused by nature and those changes which are induced by human activity. SIR-C was developed by NASA's Jet Propulsion Laboratory. X-SAR was developed by the Dornier and Alenia Spazio companies for the German space agency, Deutsche Agentur fuer Raumfahrtangelegenheiten (DARA), and the Italian space agency, Agenzia Spaziale Italiana (ASI).

1999-01-01

318

Radar Imaging of Spheres in 3D using MUSIC  

SciTech Connect

We have shown that multiple spheres can be imaged by linear and planar EM arrays using only one component of polarization. The imaging approach involves calculating the SVD of the scattering response matrix, selecting a subset of singular values that represents noise, and evaluating the MUSIC functional. The noise threshold applied to the spectrum of singular values for optimal performance is typically around 1%. The resulting signal subspace includes more than one singular value per sphere. The presence of reflections from the ground improves height localization, even for a linear array parallel to the ground. However, the interference between direct and reflected energy modulates the field, creating periodic nulls that can obscure targets in typical images. These nulls are largely eliminated by normalizing the MUSIC functional with the broadside beam pattern of the array. The resulting images show excellent localization for 1 and 2 spheres. The performance for the 3 sphere configurations are complicated by shadowing effects and the greater range of the 3rd sphere in case 2. Two of the three spheres are easily located by MUSIC but the third is difficult to distinguish from other local maxima of the complex imaging functional. Improvement is seen when the linear array is replace with a planar array, which increases the effective aperture height. Further analysis of the singular values and their relationship to modes of scattering from the spheres, as well as better ways to exploit polarization, should improve performance. Work along these lines is currently being pursued by the authors.
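The imaging approach described (SVD of the response matrix, noise-subspace selection, evaluation of the MUSIC functional) can be sketched for a toy 2D scalar problem; the free-space-like steering vector, array geometry, and Born-approximation response matrix below are assumptions for illustration, not the authors' full EM model.

```python
import numpy as np

def music_image(K, steering, grid, n_sig):
    """Time-reversal MUSIC sketch: SVD the multistatic response matrix K,
    keep the noise subspace (singular vectors beyond the n_sig largest), and
    image 1 / ||projection of the steering vector onto that subspace||,
    which peaks where the steering vector lies in the signal subspace."""
    U, s, _ = np.linalg.svd(K)
    Un = U[:, n_sig:]                       # noise subspace
    img = np.empty(len(grid))
    for i, p in enumerate(grid):
        g = steering(p)
        g = g / np.linalg.norm(g)
        img[i] = 1.0 / (np.linalg.norm(Un.conj().T @ g) + 1e-12)
    return img

# Toy setup: 8 sensors on a line, free-space-like phases, 2 point scatterers.
k = 2 * np.pi                               # wavenumber, wavelength = 1
sensors = np.stack([np.linspace(-2, 2, 8), np.zeros(8)], axis=1)
g_vec = lambda p: np.exp(1j * k * np.linalg.norm(sensors - p, axis=1))
scatterers = [np.array([-0.7, 3.0]), np.array([0.9, 3.0])]
K = sum(np.outer(g_vec(p), g_vec(p)) for p in scatterers)  # Born model
grid = [np.array([x, 3.0]) for x in np.linspace(-2, 2, 81)]
img = music_image(K, g_vec, grid, n_sig=2)
```

Choosing `n_sig` corresponds to the noise-threshold selection on the singular-value spectrum discussed in the abstract.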

Chambers, D H; Berryman, J G

2003-01-21

319

3D image-based scatter estimation and correction for multi-detector CT imaging  

NASA Astrophysics Data System (ADS)

The aim of this work is to implement and evaluate a 3D image-based approach for the estimation of scattered radiation in multi-detector CT. Based on a reconstructed CT image volume, the scattered radiation contribution is calculated in 3D fan-beam geometry in the framework of an extended point-scatter kernel (PSK) model of scattered radiation. The PSK model is based on the calculation of elemental scatter contributions, propagating the rays from the focal spot to the detector across the object for defined interaction points on a 3D fan-beam grid. Each interaction point in 3D leads to an individual elemental 2D scatter distribution on the detector. The sum of all elemental contributions represents the total scatter intensity distribution on the detector. Our proposed extended PSK depends on the scattering angle (defined by the interaction point and the considered detector channel) and the line integral between the interaction point on a 3D fan-beam ray and the intersection of the same ray with the detector. The PSK comprises single and multiple scattering as well as the angular selectivity characteristics of the anti-scatter grid on the detector. Our point-scatter kernels were obtained from a low-noise Monte-Carlo simulation of water-equivalent spheres with different radii for a particular CT scanner geometry. The model yields noise-free scatter intensity distribution estimates at a lower computational load than Monte-Carlo methods. In this work, we give a description of the algorithm and the proposed PSK. Furthermore, we compare the resulting scatter intensity distributions (obtained for numerical phantoms) to Monte-Carlo results.
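The core of the PSK model — total scatter as a sum of elemental contributions, each depending on a scattering angle and an attenuation line integral — can be sketched generically. The kernel here is a placeholder, not the authors' Monte-Carlo-derived kernel, and the angles and line integrals are assumed to be precomputed from the geometry:

```python
import numpy as np

def total_scatter(angles, line_integrals, kernel):
    """Sum elemental point-scatter-kernel (PSK) contributions.

    angles:         (P, D) scattering angle from interaction point p to channel d
    line_integrals: (P, D) attenuation line integral along the ray through p to d
    kernel:         callable(theta, L) -> elemental scatter intensity

    Returns the total scatter intensity on the D detector channels.
    """
    # each row of kernel(...) is one interaction point's elemental
    # 2D scatter distribution; the sum over points gives the total
    return kernel(angles, line_integrals).sum(axis=0)
```

A noise-free estimate follows directly because the kernel is evaluated analytically rather than sampled.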

Petersilka, M.; Allmendinger, T.; Stierstorfer, K.

2014-03-01

320

Spatial Mutual Information as Similarity Measure for 3-D Brain Image Registration  

PubMed Central

Information theoretic-based similarity measures, in particular mutual information, are widely used for intermodal/intersubject 3-D brain image registration. However, conventional mutual information does not consider spatial dependency between adjacent voxels in images, thus reducing its efficacy as a similarity measure in image registration. This paper first presents a review of the existing attempts to incorporate spatial dependency into the computation of mutual information (MI). Then, a recently introduced spatially dependent similarity measure, named spatial MI, is extended to 3-D brain image registration. This extension also eliminates its artifact for translational misregistration. Finally, the effectiveness of the proposed 3-D spatial MI as a similarity measure is compared with three existing MI measures by applying controlled levels of noise degradation to 3-D simulated brain images.
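For reference, the conventional (non-spatial) mutual information that the paper extends is computed from the joint intensity histogram of the two images; the binning choice below is an illustrative default, and this sketch does not include the spatial-dependency extension the abstract proposes:

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Conventional MI between two images from their joint histogram."""
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p = hist / hist.sum()                 # joint probability p(x, y)
    px = p.sum(axis=1, keepdims=True)     # marginal p(x)
    py = p.sum(axis=0, keepdims=True)     # marginal p(y)
    nz = p > 0
    # MI = sum_xy p(x,y) * log( p(x,y) / (p(x) p(y)) )
    return float((p[nz] * np.log(p[nz] / (px @ py)[nz])).sum())
```

As a registration criterion, MI is maximal when the two images are in intensity correspondence and drops for misaligned or independent images.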

RAZLIGHI, QOLAMREZA R.; KEHTARNAVAZ, NASSER

2014-01-01

321

Dynamic reconstruction and rendering of 3D tomosynthesis images  

NASA Astrophysics Data System (ADS)

Dynamic Reconstruction and Rendering (DRR) is a fast and flexible tomosynthesis image reconstruction and display implementation. By leveraging the computational efficiency gains afforded by off-the-shelf GPU hardware, tomosynthesis reconstruction can be performed on demand at real-time, user-interactive frame rates. Dynamic multiplanar reconstructions allow the user to adjust reconstruction and display parameters interactively, including axial sampling, slice location, plane tilt, magnification, and filter selection. Reconstruction on-demand allows tomosynthesis images to be viewed as true three-dimensional data rather than just a stack of two-dimensional images. The speed and dynamic rendering capabilities of DRR can improve diagnostic accuracy and lead to more efficient clinical workflows.

Kuo, Johnny; Ringer, Peter A.; Fallows, Steven G.; Bakic, Predrag R.; Maidment, Andrew D. A.; Ng, Susan

2011-03-01

322

Finite Element Methods for Active Contour Models and Balloons for 2D and 3D Images  

Microsoft Academic Search

The use of energy-minimizing curves, known as "snakes," to extract features of interest in images was introduced by Kass, Witkin and Terzopoulos [23]. A balloon model was introduced in [12] as a way to generalize and solve some of the problems encountered with the original method. We present a 3D generalization of the balloon model as a 3D deformable

Laurent D. Cohen; Isaac Cohen

1991-01-01

323

Fully 3D Monte Carlo image reconstruction in SPECT using functional regions  

Microsoft Academic Search

Image reconstruction in single photon emission computed tomography is affected by physical effects such as photon attenuation, Compton scatter and detector response. These effects can be compensated for by modeling the corresponding spread of photons in 3D within the system matrix used for tomographic reconstruction. The fully 3D Monte Carlo (F3DMC) reconstruction technique consists of calculating this system matrix using

Ziad El Bitar; Delphine Lazaro; Christopher Coello; Vincent Breton; David Hill; Irène Buvat

2006-01-01

324

Effective 3D object detection and regression using probabilistic segmentation features in CT images  

Microsoft Academic Search

3D object detection and importance regression/ranking are at the core of semantically interpreting 3D medical images for computer aided diagnosis (CAD). In this paper, we propose effective image segmentation features and a novel multiple instance regression method for solving the above challenges. We perform a supervised learning based segmentation algorithm on numerous lesion candidates (as 3D VOIs: Volumes Of

Le Lu; Jinbo Bi; Matthias Wolf; Marcos Salganicoff

2011-01-01

325

New applications for the touchscreen in 2D and 3D medical imaging workstations  

Microsoft Academic Search

We present a new interface technique which augments a 3D user interface based on the physical manipulation of tools, or props, with a touchscreen. This hybrid interface intuitively and seamlessly combines 3D input with more traditional 2D input in the same user interface. Example 2D interface tasks of interest include selecting patient images from a database, browsing through axial, coronal,

Ken Hinckley; John C. Goble; Randy Pausch; Neal F. Kassell

1995-01-01

326

New Applications for Touchscreen in 2D and 3D Medical Imaging Workstations  

Microsoft Academic Search

We present a new interface technique which augments a 3D user interface based on the physical manipulation of tools, or props, with a touchscreen. This hybrid interface intuitively and seamlessly combines 3D input with more traditional 2D input in the same user interface. Example 2D interface tasks of interest include selecting patient images from a database, browsing through axial, coronal,

John C. Goble; Ken Hinckley; Neal F. Kassell; Randy Pausch

1995-01-01

327

3D Modeling of High Numerical Aperture Imaging in Thin Films.  

National Technical Information Service (NTIS)

A modelling technique is described which is used to explore three dimensional (3D) image irradiance distributions formed by high numerical aperture (NA is greater than 0.5) lenses in homogeneous, linear films. This work uses a 3D modelling approach that i...

D. G. Flagello; T. Milster

1992-01-01

328

Fast 3D Iterative Reconstruction of PET Images Using PC Graphics Hardware  

Microsoft Academic Search

Using iterative reconstruction algorithms in 3D positron emission tomography (PET) studies produces images with superior quality; however, the run time is too long for these algorithms to be used routinely, especially for dynamic studies. Recently several new hardware architectures have become available to speed up 3D iterative reconstructions, including the graphics processing unit (GPU), which is very attractive due to its fast

Bing Bai; Anne M Smith

2006-01-01

329

GIST: an interactive, GPU-based level set segmentation tool for 3D medical images  

Microsoft Academic Search

While level sets have demonstrated a great potential for 3D medical image segmentation, their usefulness has been limited by two problems. First, 3D level sets are relatively slow to compute. Second, their formulation usually entails several free parameters which can be very difficult to correctly tune for specific applications. The second problem is compounded by the first. This paper describes

Joshua E. Cates; Aaron E. Lefohn; Ross T. Whitaker

2004-01-01

330

AUTOMATIC MULTI-IMAGE PHOTO-TEXTURING OF 3D SURFACE MODELS OBTAINED WITH LASER SCANNING  

Microsoft Academic Search

The basic photogrammetric deliverable in heritage conservation is orthophotography (and other suitable raster projections) - closely followed today by a growing demand for photo-textured 3D surface models. The fundamental limitation of conventional photogrammetric software is twofold: it can handle neither fully 3D surface descriptions nor the question of image visibility. As a consequence, software which ignores both surface and

L. Grammatikopoulos; I. Kalisperakis; G. Karras; T. Kokkinos; E. Petsa

2004-01-01

331

Snapshot 3D optical coherence tomography system using image mapping spectrometry  

PubMed Central

A snapshot 3-dimensional optical coherence tomography system was developed using Image Mapping Spectrometry. This system can give depth information (Z) at different spatial positions (X, Y) within one camera integration time to potentially reduce motion artifact and enhance throughput. The current (x, y, λ) datacube of (85×356×117) provides a 3D visualization of the sample with 400 µm depth and 13.4 µm transverse resolution. Axial resolution of 16.0 µm can also be achieved in this proof-of-concept system. We present an analysis of the theoretical constraints which will guide development of future systems with increased imaging depth and improved axial and lateral resolutions.

Nguyen, Thuc-Uyen; Pierce, Mark C; Higgins, Laura; Tkaczyk, Tomasz S

2013-01-01

332

3-D capacitance density imaging of fluidized bed  

DOEpatents

A three-dimensional capacitance density imaging of a gasified bed or the like in a containment vessel is achieved using a plurality of electrodes provided circumferentially about the bed in levels and along the bed in channels. The electrodes are individually and selectively excited electrically at each level to produce a plurality of current flux field patterns generated in the bed at each level. The current flux field patterns are suitably sensed and a density pattern of the bed at each level determined. By combining the determined density patterns at each level, a three-dimensional density image of the bed is achieved.

Fasching, George E. (653 Vista Pl., Morgantown, WV 26505)

1990-01-01

333

3D CARS image reconstruction and pattern recognition on SHG images  

NASA Astrophysics Data System (ADS)

Nonlinear optical imaging techniques based e.g. on coherent anti-Stokes Raman scattering (CARS) or second-harmonic generation (SHG) show great potential for in-vivo investigations of tissue. While the microspectroscopic imaging tools are established, automated data evaluation, i.e. image pattern recognition and automated image classification, of nonlinear optical images still bears great potential for future developments towards an objective clinical diagnosis. This contribution details the capability of nonlinear microscopy for both 3D visualization of human tissues and automated discrimination between healthy and diseased patterns using ex-vivo human skin samples. By means of CARS image alignment we show how to obtain a quasi-3D model of a skin biopsy, which allows us to trace the tissue structure in different projections. Furthermore, the potential of automated pattern and organization recognition to distinguish between healthy and keloidal skin tissue is discussed. A first classification algorithm employs the intrinsic geometrical features of collagen, which can be efficiently visualized by SHG microscopy. The shape of the collagen pattern allows conclusions about the physiological state of the skin, as the typical wavy collagen structure of healthy skin is disturbed e.g. in keloid formation. Based on the different collagen patterns, a quantitative score characterizing the collagen waviness - and hence reflecting the physiological state of the tissue - is obtained. Further, two additional scoring methods for collagen organization, respectively based on a statistical analysis of the mutual organization of fibers and on FFT, are presented.

Medyukhina, Anna; Vogler, Nadine; Latka, Ines; Dietzek, Benjamin; Cicchi, Riccardo; Pavone, Francesco S.; Popp, Jürgen

2012-05-01

334

Determining 3-D motion and structure from image sequences  

NASA Technical Reports Server (NTRS)

A method of determining three-dimensional motion and structure from two image frames is presented. The method requires eight point correspondences between the two frames, from which motion and structure parameters are determined by solving a set of eight linear equations and a singular value decomposition of a 3x3 matrix. It is shown that the solution thus obtained is unique.
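The recipe this abstract names — eight linear equations from eight point correspondences, then a singular value decomposition of a 3×3 matrix — is the classical linear eight-point algorithm. A minimal sketch, using an SVD of the stacked equations to find the null vector (one common formulation, not necessarily the paper's exact solver):

```python
import numpy as np

def essential_from_eight_points(x1, x2):
    """Linear eight-point estimate of the essential matrix.

    x1, x2: (N, 2) arrays of normalized image coordinates, N >= 8.
    Each correspondence gives one linear equation x2h^T E x1h = 0.
    """
    h1 = np.hstack([x1, np.ones((len(x1), 1))])
    h2 = np.hstack([x2, np.ones((len(x2), 1))])
    # row n holds the outer product of the homogeneous points, flattened
    A = (h2[:, :, None] * h1[:, None, :]).reshape(len(x1), 9)
    # E's entries are the null vector of A, found via SVD
    _, _, Vt = np.linalg.svd(A)
    E = Vt[-1].reshape(3, 3)
    # enforce the essential-matrix constraint via an SVD of the 3x3 matrix
    U, _, Vt = np.linalg.svd(E)
    return U @ np.diag([1.0, 1.0, 0.0]) @ Vt
```

Motion (rotation and translation up to scale) and then structure can be factored out of the recovered E.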

Huang, T. S.

1982-01-01

335

Recursive 3-D motion estimation from a monocular image sequence  

Microsoft Academic Search

Consideration is given to the design and application of a recursive algorithm to a sequence of images of a moving object to estimate both its structure and kinematics. The object is assumed to be rigid, and its motion is assumed to be smooth in the sense that it can be modeled by retaining an arbitrary number of terms in the

T. J. Broida; S. Chandrashekhar; R. Chellappa

1990-01-01

336

Task-specific evaluation of 3D image interpolation techniques  

NASA Astrophysics Data System (ADS)

Image interpolation is an important operation that is widely used in medical imaging, image processing, and computer graphics. A variety of interpolation methods are available in the literature. However, their systematic evaluation is lacking. At a previous meeting, we presented a framework for the task-independent comparison of interpolation methods based on a variety of medical image data pertaining to different parts of the human body taken from different modalities. In this new work, we present an objective, task-specific framework for evaluating interpolation techniques. The task considered is how the interpolation methods influence the accuracy of quantification of the total volume of lesions in the brain of Multiple Sclerosis (MS) patients. Sixty lesion detection experiments, coming from ten patient studies, two subsampling techniques and the original data, and 3 interpolation methods, are presented along with a statistical analysis of the results. This work comprises a systematic framework for the task-specific comparison of interpolation methods. Specifically, the influence of three interpolation methods on MS lesion quantification is compared.
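As a concrete baseline, the simplest of the compared families is linear grey-level interpolation between bracketing slices; this sketch is a generic illustration of slice interpolation, not one of the paper's specific evaluated methods:

```python
import numpy as np

def interpolate_slice(volume, z):
    """Linear grey-level interpolation at fractional axial position z.

    volume: (Z, H, W) image stack; returns the (H, W) slice at z.
    """
    z0 = int(np.floor(z))
    z1 = min(z0 + 1, volume.shape[0] - 1)
    w = z - z0
    # blend the two bracketing slices; w = 0 returns slice z0 exactly
    return (1.0 - w) * volume[z0] + w * volume[z1]
```

Task-specific evaluation then asks how the choice of such an interpolator changes a downstream measurement (here, total lesion volume) rather than a generic image-similarity score.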

Grevera, George J.; Udupa, Jayaram K.; Miki, Yukio

1998-06-01

337

Space Radar Image of Kilauea, Hawaii in 3-D  

NASA Technical Reports Server (NTRS)

This is a three-dimensional perspective view of a false-color image of the eastern part of the Big Island of Hawaii. It was produced using all three radar frequencies -- X-band, C-band and L-band -- from the Spaceborne Imaging Radar-C/X-Band Synthetic Aperture Radar (SIR-C/X-SAR) flying on the space shuttle Endeavour, overlaid on a U.S. Geological Survey digital elevation map. Visible in the center of the image in blue are the summit crater (Kilauea Caldera) which contains the smaller Halemaumau Crater, and the line of collapse craters below them that form the Chain of Craters Road. The image was acquired on April 12, 1994 during orbit 52 of the space shuttle. The area shown is approximately 34 by 57 kilometers (21 by 35 miles) with the top of the image pointing toward northwest. The image is centered at about 155.25 degrees west longitude and 19.5 degrees north latitude. The false colors are created by displaying three radar channels of different frequency. Red areas correspond to high backscatter at L-HV polarization, while green areas exhibit high backscatter at C-HV polarization. Finally, blue shows high return at X-VV polarization. Using this color scheme, the rain forest appears bright on the image, while the green areas correspond to lower vegetation. The lava flows have different colors depending on their types and are easily recognizable due to their shapes. The flows at the top of the image originated from the Mauna Loa volcano. Kilauea volcano has been almost continuously active for more than the last 11 years. Field teams that were on the ground specifically to support these radar observations report that there was vigorous surface activity about 400 meters (one-quarter mile) inland from the coast. A moving lava flow about 200 meters (650 feet) in length was observed at the time of the shuttle overflight, raising the possibility that subsequent images taken during this mission will show changes in the landscape.
Currently, most of the lava that is erupted travels the 8 kilometers (5 miles) from the Pu'u O'o crater (the active vent) just outside this image to the coast through a series of lava tubes, but in the past there have been many large lava flows that have traveled this distance, destroying houses and parts of the Hawaii Volcanoes National Park. This SIR-C/X-SAR image shows two types of lava flows that are common to Hawaiian volcanoes. Pahoehoe lava flows are relatively smooth, and appear very dark blue because much of the radar energy is reflected away from the radar. In contrast, other lava flows are relatively rough and bounce much of the radar energy back to the radar, making that part of the image bright blue. This radar image is valuable because it allows scientists to study an evolving lava flow field from the Pu'u O'o vent. Much of the area on the northeast side (right) of the volcano is covered with tropical rain forest, and because trees reflect a lot of the radar energy, the forest appears bright in this radar scene. The linear feature running from Kilauea Crater to the right of the image is Highway 11 leading to the city of Hilo which is located just beyond the right edge of this image. Spaceborne Imaging Radar-C and X-Synthetic Aperture Radar (SIR-C/X-SAR) is part of NASA's Mission to Planet Earth. The radars illuminate Earth with microwaves allowing detailed observations at any time, regardless of weather or sunlight conditions. SIR-C/X-SAR uses three microwave wavelengths: L-band (24 cm), C-band (6 cm) and X-band (3 cm). The multi-frequency data will be used by the international scientific community to better understand the global environment and how it is changing. The SIR-C/X-SAR data, complemented by aircraft and ground studies, will give scientists clearer insights into those environmental changes which are caused by nature and those changes which are induced by human activity. SIR-C was developed by NASA's Jet Propulsion Laboratory.
X-SAR was developed by the Dornier and Alenia Spazio companies for the German space agency, Deutsche Agentur fuer Raumfahrtangelegenheiten (DARA), and the Italian space agency, Agenzia Spaziale Italiana (ASI).

1999-01-01

338

Comparison of Bootstrap Resampling Methods for 3-D PET Imaging  

Microsoft Academic Search

Two groups of bootstrap methods have been proposed to estimate the statistical properties of positron emission tomography (PET) images by generating multiple statistically equivalent data sets from few data samples. The first group generates resampled data based on a parametric approach assuming that data from which resampling is performed follows a Poisson distribution while the second group consists of nonparametric
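The two groups of methods described can be sketched as follows; the Poisson-on-sinogram-bins and list-mode event-resampling forms shown here are generic illustrations of each group, not the exact implementations compared in the paper:

```python
import numpy as np

def bootstrap_parametric(sinogram, n, rng):
    """Parametric bootstrap (first group): each bin is redrawn as a
    Poisson variate whose mean is the measured count."""
    return [rng.poisson(sinogram) for _ in range(n)]

def bootstrap_nonparametric(events, n, rng):
    """Nonparametric bootstrap (second group): resample the recorded
    events (list-mode style) with replacement."""
    return [rng.choice(events, size=len(events), replace=True) for _ in range(n)]
```

The spread across replicate reconstructions then estimates the statistical properties (e.g. voxel variance) of the PET image from a single acquisition.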

Carole Lartizien; Jean-Baptiste Aubin; Irène Buvat

2010-01-01

339

Nondestructive imaging of stem cell in 3D scaffold  

NASA Astrophysics Data System (ADS)

We have developed a line-scanning angled fluorescent laminar optical tomography (LS-aFLOT) system. This system enables three-dimensional imaging of fluorescent-labeled stem cell distribution within engineered tissue scaffold over a several-millimeter field-of-view.

Chen, Chao-Wei; Yeatts, Andrew B.; Fisher, John P.; Chen, Yu

2012-05-01

340

On the imaging of slip boundaries using 3D elastography  

Microsoft Academic Search

Slip elastography is a new branch of elastography which incorporates shear strain imaging and force estimation, with a view to detecting and characterizing slip boundaries between tumors and their surroundings. This paper introduces the principles of slip elastography. It is hypothesized that apparent shear strains may arise due to shear motion across a slip boundary. This is investigated through FEM

Leo J. Garcia; Christopher Uff; Jérémie Fromageau; Jeffrey C. Bamber

2009-01-01

341

3D GRASE PROPELLER: Improved Image Acquisition Technique for Arterial Spin Labeling Perfusion Imaging  

PubMed Central

Arterial spin labeling (ASL) is a non-invasive technique that can quantitatively measure cerebral blood flow (CBF). While traditionally ASL employs 2D EPI or spiral acquisition trajectories, single-shot 3D GRASE is gaining popularity in ASL due to inherent SNR advantage and spatial coverage. However, a major limitation of 3D GRASE is through-plane blurring caused by T2 decay. A novel technique combining 3D GRASE and a PROPELLER trajectory (3DGP) is presented to minimize through-plane blurring without sacrificing perfusion sensitivity or increasing total scan time. Full brain perfusion images were acquired at a 3×3×5mm3 nominal voxel size with Q2TIPS-FAIR as the ASL preparation sequence. Data from 5 healthy subjects was acquired on a GE 1.5T scanner in less than 4 minutes per subject. While showing good agreement in CBF quantification with 3D GRASE, 3DGP demonstrated reduced through-plane blurring, improved anatomical details, high repeatability and robustness against motion, making it suitable for routine clinical use.

Tan, Huan; Hoge, W. Scott; Hamilton, Craig A.; Gunther, Matthias; Kraft, Robert A.

2014-01-01

342

3D imaging of cone photoreceptors over extended time periods using optical coherence tomography with adaptive optics  

NASA Astrophysics Data System (ADS)

Optical coherence tomography with adaptive optics (AO-OCT) is a highly sensitive, noninvasive method for 3D imaging of the microscopic retina. The purpose of this study is to advance AO-OCT technology by enabling repeated imaging of cone photoreceptors over extended periods of time (days). This sort of longitudinal imaging permits monitoring of 3D cone dynamics in both normal and diseased eyes, in particular the physiological processes of disc renewal and phagocytosis, which are disrupted by retinal diseases such as age related macular degeneration and retinitis pigmentosa. For this study, the existing AO-OCT system at Indiana underwent several major hardware and software improvements to optimize system performance for 4D cone imaging. First, ultrahigh speed imaging was realized using a Basler Sprint camera. Second, a light source with adjustable spectrum was realized by integration of an Integral laser (Femto Lasers, λc = 800 nm, Δλ = 160 nm) and spectral filters in the source arm. For cone imaging, we used a bandpass filter with λc = 809 nm and Δλ = 81 nm (2.6 µm nominal axial resolution in tissue, and 167 kHz A-line rate using 1,408 px), which reduced the impact of eye motion compared to previous AO-OCT implementations. Third, eye motion artifacts were further reduced by custom ImageJ plugins that registered (axially and laterally) the volume videos. In two subjects, cone photoreceptors were imaged and tracked over a ten day period and their reflectance and outer segment (OS) lengths measured. High-speed imaging and image registration/dewarping were found to reduce eye motion to a fraction of a cone width (1 µm root mean square). The pattern of reflections in the cones was found to change dramatically and occurred on a spatial scale well below the resolution of clinical instruments. 
Normalized reflectance of the connecting cilium (CC) and OS posterior tip (PT) of an exemplary cone was 54±4, 47±4, 48±6, 50±5, 56±1% and 46±4, 53±4, 52±6, 50±5, 44±1% for days 1, 3, 6, 8, and 10, respectively. OS length of the same cone was 28.9, 26.4, 26.4, 30.6, and 28.1 µm for days 1, 3, 6, 8, and 10, respectively. It is plausible these changes are an optical correlate of the natural process of OS renewal and shedding.

Kocaoglu, Omer P.; Lee, Sangyeol; Jonnal, Ravi S.; Wang, Qiang; Herde, Ashley E.; Besecker, Jason; Gao, Weihua; Miller, Donald T.

2011-02-01

343

2D and 3D MALDI-imaging: conceptual strategies for visualization and data mining.  

PubMed

3D imaging has a significant impact on many challenges in life sciences, because biology is a 3-dimensional phenomenon. Current 3D imaging technologies (various types of MRI, PET, SPECT) are labeled, i.e. they trace the localization of a specific compound in the body. In contrast, 3D MALDI mass spectrometry imaging (MALDI-MSI) is a label-free method imaging the spatial distribution of molecular compounds. It complements labeled 3D imaging methods, immunohistochemistry, and genetics-based methods. However, 3D MALDI-MSI cannot tap its full potential due to the lack of statistical methods for analysis and interpretation of large and complex 3D datasets. To overcome this, we established a complete and robust 3D MALDI-MSI pipeline combined with efficient computational data analysis methods for 3D edge-preserving image denoising, 3D spatial segmentation, and finding colocalized m/z values, which will be reviewed here in detail. Furthermore, we explain why the integration and correlation of the MALDI imaging data with other imaging modalities enhances the interpretation of the molecular data and provides visualization of molecular patterns that may otherwise not be apparent. Therefore, a 3D data acquisition workflow is described, generating a set of three image modalities representing the same anatomy. First, an in-vitro MRI measurement is performed, which results in a three-dimensional image modality representing the 3D structure of the measured object. After sectioning the 3D object into N consecutive slices, all N slices are scanned using an optical digital scanner, enabling the MS measurements. Scanning the individual sections results in low-resolution images, which define the base coordinate system for the whole pipeline. The scanned images combine the information from the spatial (MRI) and the mass spectrometric (MALDI-MSI) dimension and are used for the spatial three-dimensional reconstruction of the object, performed by image registration techniques. Different strategies for automatic serial image registration applied to MS datasets are outlined in detail. The third image modality is histology-driven, i.e. a digital scan of the histologically stained slices in high resolution. After fusion of the reconstructed scan images and the MRI, the slice-related coordinates of the mass spectra can be propagated into 3D space. After image registration of scan images and histologically stained images, the anatomical information from histology is fused with the mass spectra from MALDI-MSI. As a result of the described pipeline, we have a set of three-dimensional images representing the same anatomy, i.e. the reconstructed slice scans, the spectral images as well as corresponding clustering results, and the acquired MRI. Great emphasis is put on the fact that the co-registered MRI providing anatomical details improves the interpretation of 3D MALDI images. The ability to relate mass spectrometry derived molecular information with in vivo and in vitro imaging has potentially important implications. This article is part of a Special Issue entitled: Computational Proteomics in the Post-Identification Era. Guest Editors: Martin Eisenacher and Christian Stephan. PMID:23467008

Thiele, Herbert; Heldmann, Stefan; Trede, Dennis; Strehlow, Jan; Wirtz, Stefan; Dreher, Wolfgang; Berger, Judith; Oetjen, Janina; Kobarg, Jan Hendrik; Fischer, Bernd; Maass, Peter

2014-01-01

344

Space Radar Image of Long Valley, California - 3D view  

NASA Technical Reports Server (NTRS)

This is a three-dimensional perspective view of Long Valley, California by the Spaceborne Imaging Radar-C/X-band Synthetic Aperture Radar on board the space shuttle Endeavour. This view was constructed by overlaying a color composite SIR-C image on a digital elevation map. The digital elevation map was produced using radar interferometry, a process by which radar data are acquired on different passes of the space shuttle and, which then, are compared to obtain elevation information. The data were acquired on April 13, 1994 and on October 3, 1994, during the first and second flights of the SIR-C/X-SAR radar instrument. The color composite radar image was produced by assigning red to the C-band (horizontally transmitted and vertically received) polarization; green to the C-band (vertically transmitted and received) polarization; and blue to the ratio of the two data sets. Blue areas in the image are smooth and yellow areas are rock outcrops with varying amounts of snow and vegetation. The view is looking north along the northeastern edge of the Long Valley caldera, a volcanic collapse feature created 750,000 years ago and the site of continued subsurface activity. Crowley Lake is off the image to the left. Spaceborne Imaging Radar-C and X-band Synthetic Aperture Radar (SIR-C/X-SAR) is part of NASA's Mission to Planet Earth. The radars illuminate Earth with microwaves, allowing detailed observations at any time, regardless of weather or sunlight conditions. SIR-C/X-SAR uses three microwave wavelengths: L-band (24 cm), C-band (6 cm) and X-band (3 cm). The multi-frequency data will be used by the international scientific community to better understand the global environment and how it is changing. The SIR-C/X-SAR data, complemented by aircraft and ground studies, will give scientists clearer insights into those environmental changes which are caused by nature and those changes which are induced by human activity. SIR-C was developed by NASA's Jet Propulsion Laboratory. 
X-SAR was developed by the Dornier and Alenia Spazio companies for the German space agency, Deutsche Agentur fuer Raumfahrtangelegenheiten (DARA), and the Italian space agency, Agenzia Spaziale Italiana (ASI), with the Deutsche Forschungsanstalt fuer Luft und Raumfahrt e.V.(DLR), the major partner in science, operations and data processing of X-SAR.

1994-01-01

345

Thermal Plasma Imager (TPI): An Imaging Thermal Ion Mass and 3-D Velocity Analyzer  

NASA Astrophysics Data System (ADS)

The Thermal Plasma Imager (TPI) is an imaging thermal ion mass and 3-dimensional (3-D) velocity analyzer. It is designed to measure the instantaneous mass composition and detailed, mass-resolved, 3-dimensional, velocity distributions of thermal-energy (0.5-50 eV/q) ions on a 3-axis stabilized spacecraft. It consists of a pair of semi-toroidal deflection and fast-switching time-of-flight (TOF) electrodes, a hemispherical electrostatic analyzer (HEA), and a micro-channel plate (MCP) detector. It uses the TOF electrodes to clock the flight times of individual incident ions, and the HEA to focus ions of a given energy-per-charge and incident angle (elevation and azimuth) onto a single point on the MCP. The TOF/HEA combination produces an instantaneous and mass-resolved "image" of a 2-D cone of the 3-D velocity distribution for each ion species, and combines a sequence of concentric 2-D conical samples into a 3-D distribution covering 360° in azimuth and 120° in elevation. It is currently under development for the Enhanced Polar Outflow Probe (e-POP) and Planet-C Venus missions. It is an improved, "3-dimensional" version of the SS520-2 Thermal Suprathermal Analyzer (TSA), which samples ions in its entrance aperture plane and uses the spacecraft spin to achieve 3-D ion sampling. In this paper, we present its detailed design characteristics and prototype instrument performance, and compare these with the ion velocity measurement performances from its 2-D TSA predecessor on SS520-2.

Yau, A. W.; Amerl, P. V.; King, E. P.; Miyake, W.; Abe, T.

2003-04-01

346

3D fiber tractography with susceptibility tensor imaging  

Microsoft Academic Search

Gradient-echo MRI has revealed anisotropic magnetic susceptibility in the brain white matter. This magnetic susceptibility anisotropy can be measured and characterized with susceptibility tensor imaging (STI). In this study, a method of fiber tractography based on STI is proposed and demonstrated in the mouse brain. STI experiments of perfusion-fixed mouse brains were conducted at 7.0T. The magnetic susceptibility tensor was

Chunlei Liu; Wei Li; Bing Wu; Yi Jiang; G. Allan Johnson

347

QBISM: A Prototype 3-D Medical Image Database System  

Microsoft Academic Search

this paper. However, these automatic or semi-automatic warping algorithms are extremely important for this application. It is precisely this technology that permits anatomic structure-based access to acquired medical images as well as comparisons among studies, even of different patients, as long as they have been warped to the same atlas. Furthermore, it enables the database to grow, and be queryable, without time-consuming manual segmentation of

Manish Arya; William F. Cody; Christos Faloutsos; Joel E. Richardson; Arthur Toya

1993-01-01

348

Automated spectroscopic imaging of oxygen saturation in human retinal vessels  

NASA Astrophysics Data System (ADS)

A new automatic visualization procedure for oxygen saturation imaging from multi-spectral images of human retinal vessels is proposed. Two-wavelength retinal fundus images at 545 and 560 nm, which are oxygen insensitive and oxygen sensitive, respectively, were captured simultaneously by CCD cameras through a beam splitter and interference filters. A morphological processing technique was applied to estimate the distribution of incident light, including the vessel parts, and to compute an optical density (OD) image at each wavelength. The OD ratio (OD560/OD545) image was then calculated as a relative indicator of oxygen saturation. Furthermore, a line convergence index filter was adopted to identify the retinal vessels. The automated imaging method showed a clear difference between retinal arteries and veins. In addition, the decrease in oxygen saturation in the retinal artery during breath-holding could be monitored via the OD ratio. The method can potentially be applied to real-time monitoring of oxygen saturation in retinal vessels.
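The OD-ratio computation outlined in the abstract can be sketched as follows, assuming NumPy/SciPy and using a grey-level morphological closing as the vessel-free incident-light estimate (the function names and the closing-based background step are illustrative assumptions, not the authors' code):

```python
import numpy as np
from scipy.ndimage import grey_closing

def optical_density(img, background_size=15):
    # Morphological closing removes the (dark) vessels, leaving an
    # estimate of the incident-light distribution; OD = log10(I0 / I).
    bg = grey_closing(img, size=background_size)
    return np.log10((bg + 1e-6) / (img + 1e-6))

def od_ratio(i545, i560, background_size=15):
    # ODR = OD560 / OD545, a relative indicator of oxygen saturation:
    # 545 nm is oxygen insensitive, 560 nm is oxygen sensitive.
    od545 = optical_density(i545, background_size)
    od560 = optical_density(i560, background_size)
    return od560 / (od545 + 1e-6)
```

Because the background estimate equals the image away from vessels, the ODR is near zero there and carries signal only along the vessel tree.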

Nakamura, D.; Sueda, S.; Matsuoka, N.; Yoshinaga, Y.; Enaida, H.; Okada, T.; Ishibashi, T.

2009-02-01

349

Intra-retinal layer segmentation in optical coherence tomography images.  

PubMed

Retinal layer thickness, evaluated as a function of spatial position from optical coherence tomography (OCT) images, is an important diagnostic marker for many retinal diseases. However, due to factors such as speckle noise, low image contrast, and irregularly shaped morphological features such as retinal detachments, macular holes, and drusen, accurate segmentation of individual retinal layers is difficult. To address this issue, a computer method for retinal layer segmentation from OCT images is presented. An efficient two-step kernel-based optimization scheme is employed to first identify the approximate locations of the individual layers, which are then refined to obtain accurate segmentation results. The performance of the algorithm was tested on a set of retinal images acquired in vivo from healthy and diseased rodent models with a high-speed, high-resolution OCT system. Experimental results show that the proposed approach provides accurate segmentation for OCT images affected by speckle noise, even under sub-optimal conditions of low image contrast and in the presence of irregularly shaped structural features. PMID:20052083
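As a generic illustration of the first step (locating approximate layer positions), one can trace a strong dark-to-bright boundary per A-scan from the axial intensity gradient, with median filtering across A-scans to suppress speckle-induced outliers. This NumPy sketch is NOT the paper's kernel-based optimization scheme, only a simple stand-in for the idea of coarse boundary localization:

```python
import numpy as np

def trace_boundary(bscan, half_window=3):
    # bscan: 2-D array (depth x lateral position).
    grad = np.diff(bscan.astype(float), axis=0)   # axial intensity gradient
    rows = np.argmax(grad, axis=0)                # strongest dark-to-bright step
    # median filter across A-scans suppresses speckle-induced jumps
    padded = np.pad(rows, (half_window, half_window), mode='edge')
    w = 2 * half_window + 1
    return np.array([int(np.median(padded[i:i + w])) for i in range(rows.size)])
```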

Mishra, Akshaya; Wong, Alexander; Bizheva, Kostadinka; Clausi, David A

2009-12-21

350

A high-level 3D visualization API for Java and ImageJ  

PubMed Central

Background Current imaging methods such as Magnetic Resonance Imaging (MRI), Confocal microscopy, Electron Microscopy (EM) or Selective Plane Illumination Microscopy (SPIM) yield three-dimensional (3D) data sets in need of appropriate computational methods for their analysis. The reconstruction, segmentation and registration are best approached from the 3D representation of the data set. Results Here we present a platform-independent framework based on Java and Java 3D for accelerated rendering of biological images. Our framework is seamlessly integrated into ImageJ, a free image processing package with a vast collection of community-developed biological image analysis tools. Our framework enriches the ImageJ software libraries with methods that greatly reduce the complexity of developing image analysis tools in an interactive 3D visualization environment. In particular, we provide high-level access to volume rendering, volume editing, surface extraction, and image annotation. The ability to rely on a library that removes the low-level details enables concentrating software development efforts on the algorithm implementation parts. Conclusions Our framework enables biomedical image software development to be built with 3D visualization capabilities with very little effort. We offer the source code and convenient binary packages along with extensive documentation at http://3dviewer.neurofly.de.

2010-01-01

351

Automatic 3D ultrasound calibration for image guided therapy using intramodality image registration.  

PubMed

Many real time ultrasound (US) guided therapies can benefit from management of motion-induced anatomical changes with respect to a previously acquired computerized anatomy model. Spatial calibration is a prerequisite to transforming US image information to the reference frame of the anatomy model. We present a new method for calibrating 3D US volumes using intramodality image registration, derived from the 'hand-eye' calibration technique. The method is fully automated by implementing data rejection based on sensor displacements, automatic registration over overlapping image regions, and a self-consistency error metric evaluated continuously during calibration. We also present a novel method for validating US calibrations based on measurement of physical phantom displacements within US images. Both calibration and validation can be performed on arbitrary phantoms. Results indicate that normalized mutual information and localized cross correlation produce the most accurate 3D US registrations for calibration. Volumetric image alignment is more accurate and reproducible than point selection for validating the calibrations, yielding <1.5 mm root mean square error, a significant improvement relative to previously reported hand-eye US calibration results. Comparison of two different phantoms for calibration and for validation revealed significant differences for validation (p = 0.003) but not for calibration (p = 0.795). PMID:24099806
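The normalized mutual information metric that the study found most accurate for the intramodality registrations can be computed from a joint grey-level histogram. A generic sketch (not the authors' implementation), assuming NumPy:

```python
import numpy as np

def normalized_mutual_information(a, b, bins=32):
    # NMI = (H(A) + H(B)) / H(A, B); higher values mean better alignment,
    # with a maximum of 2 for identical images.
    hist, _, _ = np.histogram2d(np.ravel(a), np.ravel(b), bins=bins)
    pxy = hist / hist.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    def entropy(p):
        p = p[p > 0]
        return -np.sum(p * np.log(p))
    return (entropy(px) + entropy(py)) / entropy(pxy.ravel())
```

A registration loop would evaluate this over the overlapping region of two US volumes and maximize it with respect to the rigid transform parameters.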

Schlosser, Jeffrey; Kirmizibayrak, Can; Shamdasani, Vijay; Metz, Steve; Hristov, Dimitre

2013-11-01

352

Frequency Domain Beamformer for a 3-D Sediment Volume Imaging Synthetic Aperture Sonar.  

National Technical Information Service (NTIS)

A frequency domain beamforming approach is described for 3-D sediment volume imaging synthetic aperture sonars (SAS). The beamformer, designed for systems with receiver arrays oriented transverse to the vehicle, performs standard delay and sum processing ...

D. D. Sternlicht; J. R. Magoon; M. A. Nelson

2010-01-01

353

A novel approach for constructing a 3D model based on registering a mono image on a 3D model, applicable in Digital Earth  

Microsoft Academic Search

The effect of Digital Earth on our life is vital. Developing and updating Geospatial data in Digital Earth is also essential. This paper presents the application of a new approach of image registration in Digital Earth. The approach was developed based on registering a mono photograph on a master 3D model. The result is a 3D vector model, which can

Amir Saeed Homainejad

2011-01-01

354

A novel approach for constructing a 3D model based on registering a mono image on a 3D model, applicable in Digital Earth  

Microsoft Academic Search

The effect of Digital Earth on our life is vital. Developing and updating Geospatial data in Digital Earth is also essential. This paper presents the application of a new approach of image registration in Digital Earth. The approach was developed based on registering a mono photograph on a master 3D model. The result is a 3D vector model, which can

Amir Saeed Homainejad

2012-01-01

355

3D object recognition using kernel construction of phase wrapped images  

NASA Astrophysics Data System (ADS)

Kernel methods are effective machine learning techniques for many image-based pattern recognition problems, and incorporating 3D information is useful in such applications. Optical profilometry and interferometric techniques provide 3D information in an implicit form. Typically, a phase unwrapping process, which is often hindered by the presence of noise, spots of low intensity modulation, and instability of the solutions, is applied to retrieve the proper depth information. In certain applications such as pattern recognition problems, the goal is to classify the 3D objects in the image, rather than to simply display or reconstruct them. In this paper we present a technique for constructing kernels on the measured data directly, without explicit phase unwrapping. Such a kernel naturally incorporates the 3D depth information and can be used to improve systems involving 3D object analysis and classification.
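The central idea, building a kernel directly on wrapped phase, can be illustrated by embedding each phase map as the complex field exp(i*phi), which is identical for phi and phi + 2*pi*k; the kernel then depends only on the measured (wrapped) data. A hedged sketch of this idea (the paper's exact kernel construction may differ):

```python
import numpy as np

def wrapped_phase_kernel(phi1, phi2, sigma=1.0):
    # exp(i*phi) is unchanged by phi -> phi + 2*pi*k, so this Gaussian
    # kernel on the complex embedding needs no phase unwrapping step.
    z1 = np.exp(1j * np.asarray(phi1, dtype=float))
    z2 = np.exp(1j * np.asarray(phi2, dtype=float))
    d2 = np.sum(np.abs(z1 - z2) ** 2)     # squared distance, wrap-invariant
    return float(np.exp(-d2 / (2.0 * sigma ** 2)))
```

Such a kernel can be plugged directly into an SVM or kernel PCA, so classification operates on the raw profilometric measurements.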

Zhang, Hong; Su, Hongjun

2011-04-01

356

Use of enhancement algorithm to suppress reflections in 3-D reconstructed capsule endoscopy images  

PubMed Central

In capsule endoscopy (CE), research is under way to develop hardware that enables ‘‘real’’ three-dimensional (3-D) video. However, it should not be forgotten that ‘‘true’’ 3-D requires dual video images, and the inclusion of two cameras within the shell of a capsule endoscope might be unwieldy at present. Therefore, in an attempt to approximate a 3-D reconstruction of the digestive tract surface, software has been proposed that recovers information from monocular two-dimensional CE images using the gradual variation of shading. Light reflections on the surface of the digestive tract are still a significant problem, so a phantom model and simulator were constructed to check the validity of a highlight suppression algorithm. Our results confirm that the 3-D representation software performs better with simultaneous application of a highlight reduction algorithm. Furthermore, the 3-D representation gives a good approximation of the real distance to the lumen surface.

Koulaouzidis, Anastasios; Karargyris, Alexandros

2013-01-01

357

3D-RID - Micromaching for Radiation Imaging Detectors  

NASA Astrophysics Data System (ADS)

Recent advances in micro-machining technology have enabled novel topologies for radiation detector design to be proposed. This paper describes some of the work being carried out to develop detector structures with enhanced detection efficiency for x-rays, reduced cross-talk and edgeless operation. All of these are essential characteristics of detectors needed to make large-area arrays (20×40 cm) for x-ray imaging with better performance than the flat panels currently available. Other configurations, such as scintillator-filled structures and structures filled with other materials, will also be described.

O'Shea, V.

2004-07-01

358

Anesthesiology training using 3D imaging and virtual reality  

NASA Astrophysics Data System (ADS)

Current training for regional nerve block procedures by anesthesiology residents requires expert supervision and the use of cadavers; both of which are relatively expensive commodities in today's cost-conscious medical environment. We are developing methods to augment and eventually replace these training procedures with real-time and realistic computer visualizations and manipulations of the anatomical structures involved in anesthesiology procedures, such as nerve plexus injections (e.g., celiac blocks). The initial work is focused on visualizations: both static images and rotational renderings. From the initial results, a coherent paradigm for virtual patient and scene representation will be developed.

Blezek, Daniel J.; Robb, Richard A.; Camp, Jon J.; Nauss, Lee A.

1996-04-01

359

In vivo Human 3D Cardiac Fibre Architecture: Reconstruction Using Curvilinear Interpolation of Diffusion Tensor Images  

Microsoft Academic Search

In vivo imaging of the cardiac 3D fibre architecture is still a challenge, but it would have many clinical applications, for instance to better understand pathologies and to follow up remodelling after therapy. Recently, cardiac MRI enabled the acquisition of Diffusion Tensor images (DTI) of 2D slices. We propose a method for the complete 3D reconstruction of cardiac fibre architecture

Nicolas Toussaint; Maxime Sermesant; Christian T. Stoeck; Sebastian Kozerke; Philip G. Batchelor

2010-01-01

360

Reducing Non-Uniqueness in Satellite Gravity Inversion using 3D Object Oriented Image Analysis Techniques  

NASA Astrophysics Data System (ADS)

Non-uniqueness in satellite gravity interpretation has usually been reduced by using a priori information from various sources, e.g. seismic tomography models. The reduction in non-uniqueness has been based on velocity-density conversion formulas, or on user interpretation of 3D subsurface structures (objects) in seismic tomography models. However, these processes introduce additional uncertainty: the conversion relations depend on other physical parameters such as temperature and pressure, and the interpretation is biased by user choices and experience. In this research, a new methodology is introduced to extract 3D subsurface structures from 3D geophysical data using a state-of-the-art 3D Object Oriented Image Analysis (OOA) technique. 3D OOA is tested using a set of synthetic models that simulate the real situation in the study area of this research. Then, 3D OOA is used to extract 3D subsurface objects from a real 3D seismic tomography model. The extracted 3D objects are used to reconstruct a forward model, and its response is compared with the measured satellite gravity. Finally, the result of the forward modelling, based on the extracted 3D objects, is used to constrain the inversion process of satellite gravity data. Through this work, a new object-based approach is introduced to interpret and extract 3D subsurface objects from 3D geophysical data. This can be used to constrain modelling and inversion of potential field data using 3D subsurface structures extracted from other methods. In summary, a new approach is introduced to constrain inversion of satellite gravity measurements and enhance interpretation capabilities.

Fadel, I.; van der Meijde, M.; Kerle, N.

2013-12-01

361

Elastically deforming 3D atlas to match anatomical brain images.  

PubMed

To evaluate our system for elastically deforming a three-dimensional atlas to match anatomical brain images, six deformed versions of an atlas were generated. The deformed atlases were created by elastically mapping an anatomical brain atlas onto different MR brain image volumes. The mapping matches the edges of the ventricles and the surface of the brain; the resultant deformations are propagated through the atlas volume, deforming the remainder of the structures in the process. The atlas was then elastically matched to its deformed versions. The accuracy of the resultant matches was evaluated by determining the correspondence of 32 cortical and subcortical structures. The system on average matched the centroid of a structure to within 1 mm of its true position and fit a structure to within 11% of its true volume. The overlap between the matched and true structures, defined by the ratio between the volume of their intersection and the volume of their union, averaged 66%. When the gray-white interface was included for matching, the mean overlap improved to 78%; each structure was matched to within 0.6 mm of its true position and fit to within 6% of its true volume. Preliminary studies were also made to determine the effect of the compliance of the atlas on the resultant match. PMID:8454749
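The two evaluation measures used here, centroid distance and the overlap defined as intersection volume over union volume, are easy to reproduce. A minimal NumPy sketch under exactly those definitions (function names are illustrative):

```python
import numpy as np

def overlap_ratio(mask_a, mask_b):
    # Volume of intersection over volume of union (the abstract's overlap measure).
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    return (a & b).sum() / float((a | b).sum())

def centroid_error_mm(mask_a, mask_b, voxel_mm=1.0):
    # Euclidean distance between structure centroids, scaled to millimetres.
    ca = np.mean(np.argwhere(mask_a), axis=0)
    cb = np.mean(np.argwhere(mask_b), axis=0)
    return float(np.linalg.norm((ca - cb) * voxel_mm))
```

Applied to the matched and true binary masks of each of the 32 structures, these yield the 66% mean overlap and ~1 mm centroid error figures quoted above.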

Gee, J C; Reivich, M; Bajcsy, R

1993-01-01

362

Integration of virtual and real scenes within an integral 3D imaging environment  

NASA Astrophysics Data System (ADS)

The Imaging Technologies group at De Montfort University has developed an integral 3D imaging system, which is seen as the most likely vehicle for 3D television free of adverse psychological effects. To create engaging three-dimensional television programs, a virtual studio is required that performs the task of generating, editing and integrating 3D content involving virtual and real scenes. The paper presents, for the first time, the procedures, factors and methods of integrating computer-generated virtual scenes with real objects captured using the 3D integral imaging camera system. The method of computer generation of 3D integral images, in which the lens array is modelled instead of the physical camera, is described. In the model, each micro-lens that captures different elemental images of the virtual scene is treated as an extended pinhole camera. An integration process named integrated rendering is illustrated. Detailed discussion and deep investigation are focused on depth extraction from captured integral 3D images. The depth calculation method from disparity, and the multiple-baseline method used to improve the precision of depth estimation, are also presented. The concept of colour SSD and its further improvement in precision is proposed and verified.
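The pinhole model behind the described depth extraction gives the classic relation Z = f*b/d between depth, focal length, baseline and disparity; using several micro-lens baselines reduces the error of any single disparity measurement. A simplified sketch (the multiple-baseline method in the abstract combines SSD matching scores rather than averaging depth estimates, so this is only an illustration of the geometry):

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    # Classic pinhole relation: depth Z = f * b / d.
    return focal_px * baseline_m / disparity_px

def depth_multi_baseline(focal_px, baselines_m, disparities_px):
    # Average the per-baseline estimates; longer baselines give larger
    # disparities and hence proportionally smaller relative depth error.
    estimates = [depth_from_disparity(focal_px, b, d)
                 for b, d in zip(baselines_m, disparities_px)]
    return sum(estimates) / len(estimates)
```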

Ren, Jinsong; Aggoun, Amar; McCormick, Malcolm

2002-09-01

363

High-resolution 3D coherent laser radar imaging  

NASA Astrophysics Data System (ADS)

High range-resolution active imaging requires high-bandwidth transmitters and receivers. At Lockheed Martin Coherent Technologies (LMCT), we are developing both linear Frequency Modulated Continuous Wave (FMCW) and short pulse laser radar sensors to supply the needed bandwidth. FMCW waveforms are advantageous in many applications, since target returns can be optically demodulated, mitigating the need for high-speed detectors and receiver electronics, enabling the use of much lower bandwidth cameras. However, some of the penalties paid for these transceivers include a finite range search interval (RSI) and the requirement for slow chirp or long-duration waveforms, owing to the relatively slow sample frequency of the cameras used in the receiver. For applications requiring larger RSI's and short duration waveforms, LMCT is also developing high bandwidth pulsed ladar waveforms and receivers. This paper will include discussion of these two methods, their tradeoffs and sample imagery collected at LMCT.
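For the linear FMCW case, target range follows from the measured beat frequency as R = c*f_b*T/(2*B), and the finite range search interval mentioned above follows from the camera's Nyquist limit on the highest measurable beat frequency. A sketch using the standard FMCW relations (illustrative values only, not LMCT sensor parameters):

```python
C = 299_792_458.0  # speed of light (m/s)

def fmcw_range(beat_hz, bandwidth_hz, chirp_s):
    # Linear FMCW: beat frequency f_b = 2*R*B/(c*T)  =>  R = c*f_b*T/(2*B).
    return C * beat_hz * chirp_s / (2.0 * bandwidth_hz)

def range_search_interval(camera_rate_hz, bandwidth_hz, chirp_s):
    # A camera sampling at fs can only resolve beat frequencies up to
    # Nyquist (fs/2), which caps the unambiguous range: the finite RSI
    # penalty of camera-based FMCW receivers noted in the abstract.
    return fmcw_range(camera_rate_hz / 2.0, bandwidth_hz, chirp_s)
```

With a 1 GHz chirp over 1 ms, a 1 MHz beat corresponds to about 150 m of range, while a 2 kHz camera confines the RSI to roughly 15 cm, which is why slow chirps or long waveforms are needed.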

Krause, Brian; Gatt, Philip; Embry, Carl; Buck, Joseph

2006-06-01

364

High-resolution digital 3D imaging of large structures  

NASA Astrophysics Data System (ADS)

This talk summarizes the conclusions of a few of these laser scanning experiments on remote sites and the potential of the technology for imaging applications. Parameters to be considered for these types of activities are related to the design of a large-volume-of-view laser scanner, such as the depth of field, the ambient light interference (especially outdoors) and the scanning strategies. The first case reviewed is an inspection application performed in a coal-burning power station located in Alberta, Canada. The second case is the digitizing of the ODS (Orbiter Docking System) at the Kennedy Space Center in Florida and, the third case is the digitizing of a large sculpture located outside of the Canadian Museum of Civilisation in Ottawa-Hull, Canada.

Rioux, Marc; Beraldin, J.-Angelo; Godin, Guy; Blais, Francois; Cournoyer, Luc

1997-03-01

365

Terahertz Lasers Reveal Information for 3D Images  

NASA Technical Reports Server (NTRS)

After taking off her shoes and jacket, she places them in a bin. She then takes her laptop out of its case and places it in a separate bin. As the items move through the x-ray machine, the woman waits for a sign from security personnel to pass through the metal detector. Today, she was lucky; she did not encounter any delays. The man behind her, however, was asked to step inside a large circular tube, raise his hands above his head, and have his whole body scanned. If you have ever seen a full-body scan at the airport, you may have witnessed terahertz imaging. Terahertz wavelengths are located between microwave and infrared on the electromagnetic spectrum. When exposed to these wavelengths, certain materials such as clothing, thin metal, sheet rock, and insulation become transparent. At airports, terahertz radiation can illuminate guns, knives, or explosives hidden underneath a passenger's clothing. At NASA's Kennedy Space Center, terahertz wavelengths have assisted in the inspection of materials like insulating foam on the external tanks of the now-retired space shuttle. "The foam we used on the external tank was a little denser than Styrofoam, but not much," says Robert Youngquist, a physicist at Kennedy. The problem, he explains, was that "we lost a space shuttle by having a chunk of foam fall off from the external fuel tank and hit the orbiter." To uncover any potential defects in the foam covering, such as voids or air pockets, that could keep the material from staying in place, NASA employed terahertz imaging to see through the foam. For many years, the technique ensured the integrity of the material on the external tanks.

2013-01-01

366

3D segmentation and image annotation for quantitative diagnosis in lung CT images with pulmonary lesions  

NASA Astrophysics Data System (ADS)

Pulmonary nodules and ground-glass opacities are highly significant findings in high-resolution computed tomography (HRCT) of patients with pulmonary lesions. The appearance of pulmonary nodules and ground-glass opacities is related to different lung diseases. Given the corresponding characteristics of a lesion, pertinent segmentation methods and quantitative analysis are helpful for controlling and treating diseases at an earlier and potentially more curable stage. Currently, most studies have focused on two-dimensional quantitative analysis of these kinds of diseases. Compared to two-dimensional images, three-dimensional quantitative analysis can take full advantage of the isotropic image data acquired by thin-slice HRCT and has better quantitative precision for clinical diagnosis. This presentation describes a computer-aided diagnosis component to segment 3D disease areas of nodules and ground-glass opacities in lung CT images, and uses AIML (Annotation and Image Markup Language) to annotate the segmented 3D pulmonary lesions with quantitative measurement information, which may provide more features and information to radiologists in clinical diagnosis.

Li, Suo; Zhu, Yanjie; Sun, Jianyong; Zhang, Jianguo

2013-03-01

367

The application of camera calibration in range-gated 3D imaging technology  

NASA Astrophysics Data System (ADS)

Range-gated laser imaging technology was proposed in 1966 by L. F. Gillespie at the U.S. Army Night Vision Laboratory (NVL). Using a pulsed laser and an intensified charge-coupled device (ICCD) as light source and detector, respectively, range-gated laser imaging can realize space-slice imaging while suppressing atmospheric backscatter, and in turn detect the target effectively, by controlling the delay between the laser pulse and the strobe. Owing to constraints on the development of key components such as narrow-pulse lasers and gated imaging devices, research progressed slowly over the following decades. Since the beginning of this century, as the hardware technology has matured, the technique has developed rapidly in fields such as night vision, underwater imaging, biomedical imaging and three-dimensional imaging, especially range-gated three-dimensional (3-D) laser imaging aimed at acquiring target spatial information. 3-D reconstruction is the process of restoring the visible surface geometric structure of 3-D objects from two-dimensional (2-D) images. Range-gated laser imaging can achieve gated imaging of a slice of space to form a slice image, and in turn provide the distance information corresponding to that slice image. But to invert the information of 3-D space, we need to obtain the imaging field of view of the system, that is, the focal length of the system. Then, based on the distance information of the space slice, the spatial information of each unit of space corresponding to each pixel can be inverted. Camera calibration is an indispensable step in 3-D reconstruction, covering both the internal camera parameters and the external parameters. In order to meet the technical requirements of range-gated 3-D imaging, this paper studies the calibration of the zoom lens system. After comprehensively summarizing camera calibration techniques, a classic line-based calibration method is selected. A one-to-one correspondence between the field of view and the focal length of the system is obtained, which offers effective field-of-view information for matching the imaging field and the illumination field in range-gated 3-D imaging. On the basis of the experimental results, combined with depth-of-field theory, the application of camera calibration in range-gated 3-D imaging technology is further studied.

Liu, Xiao-quan; Wang, Xian-wei; Zhou, Yan

2013-09-01

368

Stochastic Tracking of 3D Human Figures Using 2D Image Motion  

Microsoft Academic Search

A probabilistic method for tracking 3D articulated human figures in monocular image sequences is presented. Within a Bayesian framework, we define a generative model of image appearance, a robust likelihood function based on image graylevel differences, and a prior probability distribution over pose and joint angles that models how humans move. The posterior probability distribution over model parameters is

Hedvig Sidenbladh; Michael J. Black; David J. Fleet

2000-01-01

369

MAPPING DIGITAL IMAGE TEXTURE ONTO 3D MODEL FROM LIDAR DATA  

Microsoft Academic Search

In this paper, an experimental system is developed to address the problem of mapping a digital image onto a 3D model from LIDAR data. First, corresponding points are chosen between the point cloud and the digital image; these corresponding points are then used to calculate the exterior and interior orientation elements and the systematic error corrections of the image. For the purpose of

Chunmei Hu; Yanmin Wang; Wentao Yu

370

Lossless compression of hyperspectral images based on 3D context prediction  

Microsoft Academic Search

Prediction algorithms play an important role in lossless compression of hyperspectral images. However, conventional lossless compression algorithms based on prediction are usually inefficient in exploiting correlation in hyperspectral images. In this paper, a new algorithm for lossless compression of hyperspectral images based on 3D context prediction is proposed. The proposed algorithm consists of three parts to exploit the high spectral
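A minimal stand-in for inter-band prediction, predicting each band from its predecessor with a least-squares gain/offset and keeping integer residuals for lossless entropy coding, can be sketched as follows. This is far simpler than the paper's 3D context predictor (which also uses spatial context); names are illustrative:

```python
import numpy as np

def interband_residuals(cube):
    # cube: (bands, rows, cols) integer hyperspectral data.
    cube = cube.astype(np.int64)
    residuals = [cube[0]]                        # reference band stored as-is
    for k in range(1, cube.shape[0]):
        # least-squares gain/offset between consecutive spectral bands
        a, b = np.polyfit(cube[k - 1].ravel(), cube[k].ravel(), 1)
        pred = np.rint(a * cube[k - 1] + b).astype(np.int64)
        residuals.append(cube[k] - pred)         # small integers: cheap to code
    return residuals
```

Because spectral bands are strongly correlated, the residuals have much lower entropy than the raw bands; decoding reverses the prediction exactly, so the scheme is lossless.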

Lin Bai; Mingyi He; Yuchao Dai

2008-01-01

371

Real-time 3D surface modeling for image based relighting  

Microsoft Academic Search

This paper proposes a method to obtain a 3D model of an object surface and to create a relit image in real time. The proposed algorithm uses a single camera and synchronized stereo lights to capture light field images by turning the lights on and off alternately. The light field images from the controlled lighting approximate the reflectance models on object surfaces. The surface

Sae-Woon Ryu; Sang Hwa Lee; Jong-Il Park

2009-01-01

372

A Kalman filter approach for denoising and deblurring 3-D microscopy images.  

PubMed

This paper proposes a new method for removing noise and blurring from 3D microscopy images. The main contribution is the definition of a space-variant generating model of a 3-D signal, which is capable of stochastically describing a wide class of 3-D images. Unlike other approaches, the space-variant structure allows the model to take into account information on edge locations, if available. A suitable description of the image acquisition process, including blurring and noise, is then associated with the model. A state-space realization is finally derived, which is amenable to the application of the standard Kalman filter as an image restoration algorithm. The resulting method is able to remove, at each spatial step, both blur and noise via a linear minimum-variance recursive one-shot procedure, which does not require simultaneous processing of the whole image. Numerical results on synthetic and real microscopy images confirm the merit of the approach. PMID:24122555
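The recursive estimation principle can be illustrated with a scalar analogue: model the clean signal as a random walk observed in noise and run the standard Kalman predict/update cycle. The paper's method uses a space-variant 3-D state-space model with blur; this 1-D NumPy sketch shows only the recursion:

```python
import numpy as np

def kalman_denoise_1d(y, process_var=1e-4, noise_var=1e-2):
    y = np.asarray(y, dtype=float)
    x, p = y[0], 1.0                  # initial state estimate and variance
    out = np.empty_like(y)
    out[0] = x
    for t in range(1, y.size):
        p = p + process_var           # predict: random-walk state model
        k = p / (p + noise_var)       # Kalman gain
        x = x + k * (y[t] - x)        # update with the innovation
        p = (1.0 - k) * p
        out[t] = x
    return out
```

As in the paper, each sample is processed once as it arrives; no batch processing of the whole signal is needed.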

Conte, Francesco; Germani, Alfredo; Iannello, Giulio

2013-12-01

373

FPGA-based real-time anisotropic diffusion filtering of 3D ultrasound images  

NASA Astrophysics Data System (ADS)

Three-dimensional ultrasonic imaging, especially the emerging real-time version of it, is particularly valuable in medical applications such as echocardiography, obstetrics and surgical navigation. A known problem with ultrasound images is their high level of speckle noise. Anisotropic diffusion filtering has been shown to be effective in enhancing the visual quality of 3D ultrasound images and as preprocessing prior to advanced image processing. However, due to its arithmetic complexity and the sheer size of 3D ultrasound images, it is not possible to perform online, real-time anisotropic diffusion filtering using standard software implementations. We present an FPGA-based architecture that allows performing anisotropic diffusion filtering of 3D images at acquisition rates, thus enabling the use of this filtering technique in real-time applications, such as visualization, registration and volume rendering.
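The arithmetic being accelerated is the classic Perona-Malik update, in which a conductance term suppresses diffusion across strong edges while smoothing speckle inside homogeneous regions. A 2-D NumPy sketch of one such scheme (the paper implements a 3-D version of this per-voxel computation in FPGA hardware):

```python
import numpy as np

def anisotropic_diffusion(img, n_iter=20, kappa=0.1, step=0.2):
    u = img.astype(float).copy()
    for _ in range(n_iter):
        # differences to the four in-plane neighbours (wrap-around borders)
        dn = np.roll(u, -1, axis=0) - u
        ds = np.roll(u, 1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        def g(d):
            # conductance: ~1 inside smooth regions, ~0 across strong edges
            return np.exp(-(d / kappa) ** 2)
        u = u + step * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u
```

Each iteration touches every voxel and its neighbours, which is why a software implementation cannot keep up with 3D acquisition rates and a pipelined hardware design pays off.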

Castro-Pareja, Carlos R.; Dandekar, Omkar S.; Shekhar, Raj

2005-02-01

374

Retinal image restoration by means of blind deconvolution  

NASA Astrophysics Data System (ADS)

Retinal imaging plays a key role in the diagnosis and management of ophthalmologic disorders, such as diabetic retinopathy, glaucoma, and age-related macular degeneration. Because of the acquisition process, retinal images often suffer from blurring and uneven illumination. This problem may seriously affect disease diagnosis and progression assessment. Here we present a method for color retinal image restoration by means of multichannel blind deconvolution. The method is applied to a pair of retinal images acquired within a lapse of time, ranging from several minutes to months. It consists of a series of preprocessing steps to adjust the images so they comply with the considered degradation model, followed by the estimation of the point-spread function and, ultimately, image deconvolution. The preprocessing is mainly composed of image registration, uneven illumination compensation, and segmentation of areas with structural changes. In addition, we have developed a procedure for the detection and visualization of structural changes. This enables the identification of subtle developments in the retina not caused by variation in illumination or blur. The method was tested on synthetic and real images. Encouraging experimental results show that the method is capable of significant restoration of degraded retinal images.
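Once a point-spread function estimate is available, the final deconvolution step can be illustrated with single-channel Wiener filtering in the frequency domain. Note this non-blind sketch is not the paper's multichannel blind method, and `nsr` (the assumed noise-to-signal power ratio) is an illustrative regularization parameter:

```python
import numpy as np

def wiener_deconvolve(blurred, psf, nsr=1e-3):
    H = np.fft.fft2(psf, s=blurred.shape)        # PSF transfer function
    G = np.fft.fft2(blurred)
    W = np.conj(H) / (np.abs(H) ** 2 + nsr)      # Wiener inverse filter
    return np.real(np.fft.ifft2(W * G))
```

The `nsr` term keeps the filter bounded where the PSF spectrum is weak, trading a little residual blur for robustness to noise amplification.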

Marrugo, Andrés G.; Šorel, Michal; Šroubek, Filip; Millán, María S.

2011-11-01

375

Retinal image restoration by means of blind deconvolution.  

PubMed

Retinal imaging plays a key role in the diagnosis and management of ophthalmologic disorders, such as diabetic retinopathy, glaucoma, and age-related macular degeneration. Because of the acquisition process, retinal images often suffer from blurring and uneven illumination. This problem may seriously affect disease diagnosis and progression assessment. Here we present a method for color retinal image restoration by means of multichannel blind deconvolution. The method is applied to a pair of retinal images acquired within a lapse of time, ranging from several minutes to months. It consists of a series of preprocessing steps to adjust the images so they comply with the considered degradation model, followed by the estimation of the point-spread function and, ultimately, image deconvolution. The preprocessing is mainly composed of image registration, uneven illumination compensation, and segmentation of areas with structural changes. In addition, we have developed a procedure for the detection and visualization of structural changes. This enables the identification of subtle developments in the retina not caused by variation in illumination or blur. The method was tested on synthetic and real images. Encouraging experimental results show that the method is capable of significant restoration of degraded retinal images. PMID:22112121

Marrugo, Andrés G; Sorel, Michal; Sroubek, Filip; Millán, María S

2011-11-01

376

A Molecular Image-directed, 3D Ultrasound-guided Biopsy System for the Prostate.  

PubMed

Systematic transrectal ultrasound (TRUS)-guided biopsy is the standard method for a definitive diagnosis of prostate cancer. However, this biopsy approach uses two-dimensional (2D) ultrasound images to guide biopsy and can miss up to 30% of prostate cancers. We are developing a molecular image-directed, three-dimensional (3D) ultrasound image-guided biopsy system for improved detection of prostate cancer. The system consists of a 3D mechanical localization system and a software workstation for image segmentation, registration, and biopsy planning. In order to plan biopsy in a 3D prostate, we developed an automatic segmentation method based on the wavelet transform. In order to incorporate PET/CT images into ultrasound-guided biopsy, we developed image registration methods to fuse TRUS and PET/CT images. The segmentation method was tested in ten patients with a DICE overlap ratio of 92.4% ± 1.1%. The registration method has been tested in phantoms. The biopsy system was tested in prostate phantoms, and 3D ultrasound images were acquired from two human patients. We are integrating the system for PET/CT-directed, 3D ultrasound-guided, targeted biopsy in human patients. PMID:22708023
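
The DICE overlap ratio used to evaluate the segmentation is straightforward to compute; a minimal sketch (function name illustrative):

```python
import numpy as np

def dice_coefficient(seg_a, seg_b):
    """DICE overlap between two binary segmentations: 2|A ∩ B| / (|A| + |B|)."""
    a = np.asarray(seg_a, dtype=bool)
    b = np.asarray(seg_b, dtype=bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0
```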

Fei, Baowei; Schuster, David M; Master, Viraj; Akbari, Hamed; Fenster, Aaron; Nieh, Peter

2012-01-01

377

A Molecular Image-directed, 3D Ultrasound-guided Biopsy System for the Prostate  

PubMed Central

Systematic transrectal ultrasound (TRUS)-guided biopsy is the standard method for a definitive diagnosis of prostate cancer. However, this biopsy approach uses two-dimensional (2D) ultrasound images to guide biopsy and can miss up to 30% of prostate cancers. We are developing a molecular image-directed, three-dimensional (3D) ultrasound image-guided biopsy system for improved detection of prostate cancer. The system consists of a 3D mechanical localization system and a software workstation for image segmentation, registration, and biopsy planning. In order to plan biopsy in a 3D prostate, we developed an automatic segmentation method based on the wavelet transform. In order to incorporate PET/CT images into ultrasound-guided biopsy, we developed image registration methods to fuse TRUS and PET/CT images. The segmentation method was tested in ten patients with a DICE overlap ratio of 92.4% ± 1.1%. The registration method has been tested in phantoms. The biopsy system was tested in prostate phantoms, and 3D ultrasound images were acquired from two human patients. We are integrating the system for PET/CT-directed, 3D ultrasound-guided, targeted biopsy in human patients.

Fei, Baowei; Schuster, David M.; Master, Viraj; Akbari, Hamed; Fenster, Aaron; Nieh, Peter

2012-01-01

378

Space Radar Image of Death Valley in 3-D  

NASA Technical Reports Server (NTRS)

This picture is a three-dimensional perspective view of Death Valley, California. This view was constructed by overlaying a SIR-C radar image on a U.S. Geological Survey digital elevation map. The SIR-C image is centered at 36.629 degrees north latitude and 117.069 degrees west longitude. We are looking at Stove Pipe Wells, which is the bright rectangle located in the center of the picture frame. Our vantage point is located atop a large alluvial fan centered at the mouth of Cottonwood Canyon. In the foreground on the left, we can see the sand dunes near Stove Pipe Wells. In the background on the left, the Valley floor gradually falls in elevation toward Badwater, the lowest spot in the United States. In the background on the right we can see Tucki Mountain. This SIR-C/X-SAR supersite is an area of extensive field investigations and has been visited by both Space Radar Lab astronaut crews. Elevations in the Valley range from 70 meters (230 feet) below sea level, the lowest in the United States, to more than 3,300 meters (10,800 feet) above sea level. Scientists are using SIR-C/X-SAR data from Death Valley to help answer a number of different questions about Earth's geology. One question concerns how alluvial fans are formed and change through time under the influence of climatic changes and earthquakes. Alluvial fans are gravel deposits that wash down from the mountains over time. They are visible in the image as circular, fan-shaped bright areas extending into the darker valley floor from the mountains. Information about the alluvial fans helps scientists study Earth's ancient climate. Scientists know the fans are built up through climatic and tectonic processes and they will use the SIR-C/X-SAR data to understand the nature and rates of weathering processes on the fans, soil formation and the transport of sand and dust by the wind. SIR-C/X-SAR's sensitivity to centimeter-scale (inch-scale) roughness provides detailed maps of surface texture.
Such information can be used to study the occurrence and movement of dust storms and sand dunes. The goal of these studies is to gain a better understanding of the record of past climatic changes and the effects of those changes on a sensitive environment. This may lead to a better ability to predict future response of the land to different potential global climate-change scenarios. Vertical exaggeration is 1.87 times; exaggeration of relief is a common tool scientists use to detect relationships between structure (for example, faults and fractures) and topography. Death Valley is also one of the primary calibration sites for SIR-C/X-SAR. In the lower right quadrant of the picture frame two bright dots can be seen which form a line extending to Stove Pipe Wells. These dots are corner reflectors that have been set up to calibrate the radar as the shuttle passes overhead. Thirty triangular-shaped reflectors (they look like aluminum pyramids) have been deployed by the calibration team from JPL over a 40- by 40-kilometer (25- by 25-mile) area in and around Death Valley. The signatures of these reflectors were analyzed by JPL scientists to calibrate the image used in this picture. The calibration team here also deployed transponders (electronic reflectors) and receivers to measure the radar signals from SIR-C/X-SAR on the ground. SIR-C/X-SAR radars illuminate Earth with microwaves allowing detailed observations at any time, regardless of weather or sunlight conditions. SIR-C/X-SAR uses three microwave wavelengths: L-band (24 cm), C-band (6 cm) and X-band (3 cm). The multi-frequency data will be used by the international scientific community to better understand the global environment and how it is changing. The SIR-C/X-SAR data, in conjunction with aircraft and ground studies, will give scientists clearer insights into those environmental changes which are caused by nature and those changes which are induced by human activity. 
SIR-C was developed by NASA's Jet Propulsion Laboratory. X-SAR was developed by the Dornier and Alenia Spazio companies for the German space agency, Deutsche

1999-01-01

379

Digital holography particle image velocimetry for the measurement of 3D t-3c flows  

NASA Astrophysics Data System (ADS)

In this paper, a digital in-line holographic recording and reconstruction system was set up and applied to particle image velocimetry for 3D t-3c flow measurements (three-component (3c) velocity vector fields in a three-dimensional (3D) space with time history (t)), forming a new full-flow-field experimental technique: digital holographic particle image velocimetry (DHPIV). The traditional holographic film was replaced by a CCD chip that records the interference fringes directly, without darkroom processing, and virtual image slices at different depths were reconstructed computationally from the digital hologram using the Fresnel-Kirchhoff integral method. A complex-field signal filter (an analyzing image computed from the intensity and phase of the real and imaginary parts via the fast Fourier transform (FFT)) was also applied during reconstruction to achieve a thin depth of focus, which strongly affects the resolution of the out-of-plane velocity component. Using a frame-straddling CCD, the 3c velocity vectors were computed by 3D cross-correlation, matching interrogation blocks in space across the reconstructed image slices with the digital complex-field filter. The 3D t-3c velocity field (about 20 000 vectors), 3D streamline and 3D vorticity fields, and time-evolution movies (30 fields/s) of the 3D t-3c flows were then obtained with this DHPIV method.
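
Numerical refocusing of hologram slices can be sketched with the angular-spectrum propagator, a standard FFT-based equivalent of the Fresnel-Kirchhoff integral used here (function name and parameters illustrative):

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, dx, z):
    """Propagate a complex optical field by distance z with the
    angular-spectrum method (FFT-based numerical refocusing)."""
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=dx)
    fy = np.fft.fftfreq(ny, d=dx)
    FX, FY = np.meshgrid(fx, fy)
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    propagating = arg > 0
    kz = 2j * np.pi / wavelength * np.sqrt(np.where(propagating, arg, 0.0))
    transfer = np.exp(kz * z) * propagating   # evanescent components dropped
    return np.fft.ifft2(np.fft.fft2(field) * transfer)
```

Calling this for a range of z values yields the stack of refocused slices that the 3D cross-correlation then operates on.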

Shen, Gongxin; Wei, Runjie

2005-10-01

380

Atherosclerosis imaging using 3D black blood TSE SPACE vs 2D TSE  

PubMed Central

AIM: To compare 3D Black Blood turbo spin echo (TSE) sampling perfection with application-optimized contrast using different flip angle evolution (SPACE) vs 2D TSE in evaluating atherosclerotic plaques in multiple vascular territories. METHODS: The carotid, aortic, and femoral arterial walls of 16 patients at risk for cardiovascular or atherosclerotic disease were studied using both 3D black blood magnetic resonance imaging SPACE and conventional 2D multi-contrast TSE sequences using a consolidated imaging approach in the same imaging session. Qualitative and quantitative analyses were performed on the images. Agreement of morphometric measurements between the two imaging sequences was assessed using a two-sample t-test, calculation of the intra-class correlation coefficient, and the method of linear regression and Bland-Altman analyses. RESULTS: No statistically significant qualitative differences were found between the 3D SPACE and 2D TSE techniques for images of the carotids and aorta. For images of the femoral arteries, however, there were statistically significant differences in all four qualitative scores between the two techniques. Using the current approach, 3D SPACE is suboptimal for femoral imaging. However, this may be due to coils not being optimized for femoral imaging. Quantitatively, in our study, higher mean total vessel area measurements for the 3D SPACE technique across all three vascular beds were observed. No significant differences in lumen area for both the right and left carotids were observed between the two techniques. Overall, a significant correlation existed between measures obtained with the two approaches. CONCLUSION: Qualitative and quantitative measurements between 3D SPACE and 2D TSE techniques are comparable. 3D-SPACE may be a feasible approach in the evaluation of cardiovascular patients.
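
The Bland-Altman analysis used to assess agreement reduces to a bias and 95% limits of agreement computed from paired differences; a minimal sketch (function name illustrative):

```python
import numpy as np

def bland_altman(measure_a, measure_b):
    """Bland-Altman agreement statistics for paired measurements:
    mean bias and 95% limits of agreement (bias ± 1.96 SD)."""
    diff = np.asarray(measure_a, float) - np.asarray(measure_b, float)
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)
```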

Wong, Stephanie K; Mobolaji-Iawal, Motunrayo; Arama, Leron; Cambe, Joy; Biso, Sylvia; Alie, Nadia; Fayad, Zahi A; Mani, Venkatesh

2014-01-01

381

Reconstructing photorealistic 3D models from image sequence using domain decomposition method  

NASA Astrophysics Data System (ADS)

In the fields of industrial design, artistic design and heritage conservation, physical objects are usually digitized by reverse engineering through 3D scanning. Structured light and photogrammetry are the two main methods to acquire 3D information, and both are expensive. Even when these expensive instruments are used, photorealistic 3D models are seldom obtained. In this paper, a new method to reconstruct photorealistic 3D models using a single camera is proposed. A square plate glued with coded marks is used to place the objects, and a sequence of about 20 images is taken. From the coded marks, the images are calibrated, and a snake algorithm is used to segment the object from the background. A rough 3D model is obtained using a shape-from-silhouettes algorithm. The silhouettes are decomposed into combinations of convex curves, which are used to partition the rough 3D model into convex mesh patches. For each patch, the multi-view photo-consistency constraints and smoothness regularization are expressed as a finite element formulation, which can be solved locally, with information exchanged along patch boundaries. The rough model is deformed into a fine 3D model through this domain decomposition finite element method. Textures are assigned to each mesh element, and a photorealistic 3D model is finally obtained. A toy pig is used to verify the algorithm, and the results are encouraging.
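
The shape-from-silhouettes step that produces the rough model can be illustrated with a toy orthographic voxel carver. This is a simplification: the paper works from about 20 calibrated perspective views, whereas this sketch assumes only three axis-aligned projections:

```python
import numpy as np

def visual_hull_orthographic(silhouettes):
    """Toy shape-from-silhouettes: carve the voxel visual hull from three
    orthographic silhouettes (projections along z, y and x). A voxel
    survives only if it projects inside every silhouette."""
    sil_z, sil_y, sil_x = silhouettes          # shapes (ny,nx), (nz,nx), (nz,ny)
    nz, ny, nx = sil_y.shape[0], sil_z.shape[0], sil_z.shape[1]
    hull = np.ones((nz, ny, nx), dtype=bool)
    hull &= sil_z[None, :, :]                  # carve against the z-view
    hull &= sil_y[:, None, :]                  # carve against the y-view
    hull &= sil_x[:, :, None]                  # carve against the x-view
    return hull
```

The hull always contains the true object; concavities invisible in every silhouette are what the subsequent photo-consistency refinement recovers.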

Xiong, Hanwei; Pan, Ming; Zhang, Xiangwei

2009-11-01

382

3D digital stereophotogrammetry: a practical guide to facial image acquisition  

PubMed Central

The use of 3D surface imaging technology is becoming increasingly common in craniofacial clinics and research centers. Due to fast capture speeds and ease of use, 3D digital stereophotogrammetry is quickly becoming the preferred facial surface imaging modality. These systems can serve as an unparalleled tool for craniofacial surgeons, providing an objective digital archive of the patient's face without exposure to radiation. Acquiring consistent high-quality 3D facial captures requires planning and knowledge of the limitations of these devices. Currently, there are few resources available to help new users of this technology with the challenges they will inevitably confront. To address this deficit, this report will highlight a number of common issues that can interfere with the 3D capture process and offer practical solutions to optimize image quality.

2010-01-01

383

Coupling 2D/3D registration method and statistical model to perform 3D reconstruction from partial x-rays images data.  

PubMed

3D reconstruction of the spine from frontal and sagittal radiographs is extremely challenging. The overlying features of soft tissues and air cavities interfere with image processing. It is also difficult to obtain information that is accurate enough to reconstruct complete 3D models. To overcome these problems, the proposed method efficiently combines the partial information contained in two images of a patient with a statistical 3D spine model generated from a database of scoliotic patients. The algorithm operates through two simultaneous iterating processes. The first generates a personalized vertebra model using a 2D/3D registration process with bone boundaries extracted from the radiographs, while the other infers the position and shape of the remaining vertebrae from the current estimate of the registration process using a statistical 3D model. Experimental evaluations have shown good performance of the proposed approach in terms of accuracy and robustness when compared to CT scans. PMID:19964494

Cresson, T; Chav, R; Branchaud, D; Humbert, L; Godbout, B; Aubert, B; Skalli, W; De Guise, J A

2009-01-01

384

Mechanically assisted 3D prostate ultrasound imaging and biopsy needle-guidance system  

NASA Astrophysics Data System (ADS)

Prostate biopsy procedures are currently limited to using 2D transrectal ultrasound (TRUS) imaging to guide the biopsy needle. Being limited to 2D causes ambiguity in needle guidance and provides an insufficient record to allow guidance to the same suspicious locations, or avoidance of regions that were negative, in previous biopsy sessions. We have developed a mechanically assisted 3D ultrasound imaging and needle tracking system, which supports a commercially available TRUS probe and integrated needle guide for prostate biopsy. The mechanical device is fixed to a cart, and the mechanical tracking linkage allows its joints to be manually manipulated while fully supporting the weight of the ultrasound probe. A computer interface is provided to track the needle trajectory and display its path on a corresponding 3D TRUS image, allowing the physician to aim the needle guide at predefined targets within the prostate. The system has been designed for use with several end-fired transducers that can be rotated about the longitudinal axis of the probe in order to generate a 3D image for navigation. Using the system, 3D TRUS prostate images can be generated in approximately 10 seconds. The system reduces most of the user variability of conventional hand-held probes, which makes them unsuitable for precision biopsy, while preserving some of the user familiarity and procedural workflow. In this paper, we describe the 3D TRUS guided biopsy system and report on the initial clinical use of this system for prostate biopsy.

Bax, Jeffrey; Williams, Jackie; Cool, Derek; Gardi, Lori; Montreuil, Jacques; Karnik, Vaishali; Sherebrin, Shi; Romagnoli, Cesare; Fenster, Aaron

2010-03-01

385

Simulation of a new 3D imaging sensor for identifying difficult military targets  

NASA Astrophysics Data System (ADS)

This paper reports the successful application of automatic target recognition and identification (ATR/I) algorithms to simulated 3D imagery of 'difficult' military targets. QinetiQ and Selex S&AS are engaged in a joint programme to build a new 3D laser imaging sensor for UK MOD. The sensor is a 3D flash system giving an image containing range and intensity information suitable for targeting operations from fast jet platforms, and is currently being integrated with an ATR/I suite for demonstration and testing. The sensor has been extensively modelled and a set of high fidelity simulated imagery has been generated using the CAMEO-SIM scene generation software tool. These include a variety of different scenarios (varying range, platform altitude, target orientation and environments), and some 'difficult' targets such as concealed military vehicles. The ATR/I algorithms have been tested on this image set and their performance compared to 2D passive imagery from the airborne trials using a Wescam MX-15 infrared sensor and real-time ATR/I suite. This paper outlines the principles behind the sensor model and the methodology of 3D scene simulation. An overview of the 3D ATR/I programme and algorithms is presented, and the relative performance of the ATR/I against the simulated image set is reported. Comparisons are made to the performance of typical 2D sensors, confirming the benefits of 3D imaging for targeting applications.

Harvey, Christophe; Wood, Jonathan; Randall, Peter; Watson, Graham; Smith, Gordon

2008-05-01

386

Improved 3D skeletonization of trabecular bone images derived from in vivo MRI  

NASA Astrophysics Data System (ADS)

Independent of overall bone density, 3D trabecular bone (TB) architecture has been shown to play an important role in conferring strength to the skeleton. Advances in imaging technologies such as micro-computed tomography (CT) and micro-magnetic resonance (MR) now permit in vivo imaging of the 3D trabecular network in the distal extremities. However, various experimental factors preclude a straightforward analysis of the 3D trabecular structure on the basis of these in vivo images. For MRI, these factors include blurring due to patient motion, partial volume effects, and measurement noise. While a variety of techniques have been developed to deal with the problem of patient motion, the second and third issues are inherent limitations of the modality. To address these issues, we have developed a series of robust processing steps applied to a 3D MR image and leading to a 3D skeleton that accurately represents the trabecular bone structure. Here we describe the algorithm, illustrate its use with both specimen and in vivo micro-MR images, and discuss its accuracy by quantifying the relationship between the original bone structure and the resulting 3D skeleton volume.

Magland, Jeremy F.; Wehrli, Felix W.

2008-04-01

387

Medical image retrieval system using multiple features from 3D ROIs  

NASA Astrophysics Data System (ADS)

Compared to retrieval using global image features, features extracted from regions of interest (ROIs) that reflect distribution patterns of abnormalities benefit content-based medical image retrieval (CBMIR) systems more. Currently, most CBMIR systems have been designed for 2D ROIs, which cannot comprehensively reflect 3D anatomical features and the regional distribution of lesions. To further improve the accuracy of image retrieval, we proposed a retrieval method with 3D features, including both geometric features such as Shape Index (SI) and Curvedness (CV) and texture features derived from the 3D gray-level co-occurrence matrix, extracted from 3D ROIs, building on our previous 2D medical image retrieval system. The system was evaluated with 20 volume CT datasets for colon polyp detection. Preliminary experiments indicated that the integration of morphological features with texture features could greatly improve retrieval performance. The retrieval results using features extracted from 3D ROIs accorded better with the diagnosis from optical colonoscopy than those based on features from 2D ROIs. With the test database of images, the average accuracy rate for the 3D retrieval method was 76.6%, indicating its potential value in clinical application.
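
A 3D gray-level co-occurrence matrix for a single voxel displacement, with two classic Haralick-style features, can be sketched as follows (the quantization scheme and the choice of contrast/energy as features are illustrative, not necessarily the paper's exact feature set):

```python
import numpy as np

def glcm_3d(roi, levels=8, offset=(0, 0, 1)):
    """3D gray-level co-occurrence matrix for one voxel displacement,
    plus two classic texture features (contrast and energy)."""
    q = np.clip((roi.astype(float) / (roi.max() + 1e-12) * levels).astype(int),
                0, levels - 1)
    def slices(d):  # paired source/destination slices along one axis
        if d > 0:
            return slice(None, -d), slice(d, None)
        if d < 0:
            return slice(-d, None), slice(None, d)
        return slice(None), slice(None)
    pairs = [slices(d) for d in offset]
    src = q[tuple(p[0] for p in pairs)]
    dst = q[tuple(p[1] for p in pairs)]
    glcm = np.zeros((levels, levels))
    np.add.at(glcm, (src.ravel(), dst.ravel()), 1)  # count co-occurring pairs
    glcm /= glcm.sum()
    i, j = np.indices(glcm.shape)
    contrast = ((i - j) ** 2 * glcm).sum()
    energy = (glcm ** 2).sum()
    return glcm, contrast, energy
```

Averaging such features over the 13 unique 3D directions gives a rotation-tolerant texture descriptor for each ROI.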

Lu, Hongbing; Wang, Weiwei; Liao, Qimei; Zhang, Guopeng; Zhou, Zhiming

2012-02-01

388

The pulsed all fiber laser application in the high-resolution 3D imaging LIDAR system  

NASA Astrophysics Data System (ADS)

An all-fiber laser with a master-oscillator-power-amplifier (MOPA) configuration at 1064 nm/1550 nm for a high-resolution three-dimensional (3D) imaging light detection and ranging (LIDAR) system is reported. The pulse width and repetition frequency can be tuned arbitrarily from 1 ns to 10 ns and from 10 kHz to 1 MHz, and a peak power exceeding 100 kW can be obtained. Using this all-fiber laser in the high-resolution 3D imaging LIDAR system, an image resolution of 1024 × 1024 and a distance precision of ±1.5 cm were obtained at an imaging distance of 1 km.
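
The quoted ±1.5 cm distance precision is consistent with roughly 100 ps of effective timing resolution in a round-trip time-of-flight system, since R = c·t/2. A quick check (the 100 ps figure is an inference for illustration, not stated in the abstract):

```python
C = 299_792_458.0  # speed of light (m/s)

def range_resolution(timing_jitter_s):
    """Round-trip time of flight: R = c*t/2, so dR = c*dt/2."""
    return C * timing_jitter_s / 2.0
```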

Gao, Cunxiao; Zhu, Shaolan; Niu, Linquan; Feng, Li; He, Haodong; Cao, Zongying

2014-05-01

389

Wavefront-coding technique for inexpensive and robust retinal imaging.  

PubMed

We propose a hybrid optical-digital imaging system that can provide high-resolution retinal images without wavefront sensing or correction of the spatial and dynamic variations of eye aberrations. A methodology based on wavefront coding is implemented in a fundus camera in order to obtain a high-quality image of retinal detail. Wavefront-coded systems rely simply on the use of a cubic-phase plate in the pupil of the optical system. The phase element is intended to blur images in such a way that invariance to optical aberrations is achieved. The blur is then removed by image postprocessing. Thus, the system can provide high-resolution retinal images, avoiding all the optics needed to sense and correct ocular aberration, i.e., wavefront sensors and deformable mirrors. PMID:24978788
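
The effect of the cubic-phase plate can be simulated by adding a cubic term to the pupil phase and comparing how sensitive the PSF is to defocus with and without it. Grid size, phase strength, and defocus amount below are illustrative values, not the paper's design parameters:

```python
import numpy as np

def psf(pupil_phase, defocus_waves=0.0):
    """PSF of a circular pupil carrying an extra phase mask plus defocus."""
    n = pupil_phase.shape[0]
    x = np.linspace(-1.0, 1.0, n)
    X, Y = np.meshgrid(x, x)
    r2 = X**2 + Y**2
    phase = pupil_phase + 2.0 * np.pi * defocus_waves * (2.0 * r2 - 1.0)
    field = (r2 <= 1.0) * np.exp(1j * phase)
    p = np.abs(np.fft.fftshift(np.fft.fft2(field))) ** 2
    return p / p.sum()

def similarity(a, b):
    """Normalized cross-correlation between two PSFs."""
    a = a - a.mean(); b = b - b.mean()
    return float((a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum()))

n = 128
x = np.linspace(-1.0, 1.0, n)
X, Y = np.meshgrid(x, x)
cubic = 60.0 * (X**3 + Y**3)   # cubic-phase mask strength in radians (illustrative)
```

The cubic-phase PSF is extended but nearly defocus-invariant, so a single deconvolution kernel can restore the whole aberrated image.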

Arines, Justo; Hernandez, Rene O; Sinzinger, Stefan; Grewe, A; Acosta, Eva

2014-07-01

390

Compressed Sensing Reconstruction for Whole-Heart Imaging with 3D Radial Trajectories: A GPU Implementation  

PubMed Central

A disadvantage of 3D isotropic acquisition in whole-heart coronary MRI is the prolonged data acquisition time. Isotropic 3D radial trajectories allow undersampling of k-space data in all three spatial dimensions, enabling accelerated acquisition of the volumetric data. Compressed sensing (CS) reconstruction can provide further acceleration in the acquisition by removing the incoherent artifacts due to undersampling and improving the image quality. However, the heavy computational overhead of the CS reconstruction has been a limiting factor for its application. In this paper, a parallelized implementation of an iterative CS reconstruction method for 3D radial acquisitions using a commercial graphics processing unit (GPU) is presented. The execution time of the GPU-implemented CS reconstruction was compared with that of the C++ implementation, and the efficacy of the undersampled 3D radial acquisition with CS reconstruction was investigated in both phantom and whole-heart coronary data sets. Subsequently, the efficacy of CS in suppressing streaking artifacts in 3D whole-heart coronary MRI with 3D radial imaging and its convergence properties were studied. The CS reconstruction provides improved image quality (in terms of vessel sharpness and suppression of noise-like artifacts) compared with the conventional 3D gridding algorithm, and the GPU implementation greatly reduces the execution time of CS reconstruction, yielding a 34–54 times speed-up compared with the C++ implementation.
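
A minimal CS reconstruction loop is iterative soft thresholding (ISTA) on a sparse linear model. The paper's reconstruction operates on 3D radial k-space data with a gridding operator, but the same iterative shrinkage structure applies; matrix sizes and the regularization weight below are illustrative:

```python
import numpy as np

def ista(A, y, lam=0.1, n_iter=500):
    """Iterative soft-thresholding (ISTA) for sparse recovery:
    min_x 0.5*||Ax - y||^2 + lam*||x||_1, the kind of
    sparsity-promoting iteration at the heart of CS reconstruction."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = x - A.T @ (A @ x - y) / L      # gradient step on the data term
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # shrinkage
    return x
```

Each iteration is dominated by two matrix-vector products (in MRI, a forward and adjoint gridding/FFT), which is exactly the part a GPU parallelizes well.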

Nam, Seunghoon; Akcakaya, Mehmet; Basha, Tamer; Stehning, Christian; Manning, Warren J.; Tarokh, Vahid; Nezafat, Reza

2012-01-01

391

Label free cell tracking in 3D tissue engineering constructs with high resolution imaging  

NASA Astrophysics Data System (ADS)

Within the field of tissue engineering there is an emphasis on studying 3-D live tissue structures. Consequently, to investigate and identify cellular activities and phenotypes in a 3-D environment for all in vitro experiments, including shape, migration/proliferation and axon projection, it is necessary to adopt an optical imaging system that enables monitoring 3-D cellular activities and morphology through the thickness of the construct for an extended culture period without cell labeling. This paper describes a new 3-D tracking algorithm developed for Cell-IQ®, an automated cell imaging platform, which has been equipped with an environmental chamber optimized to enable capturing time-lapse sequences of live cell images over a long-term period without cell labeling. As an integral part of the algorithm, a novel auto-focusing procedure was developed for phase contrast microscopy equipped with 20x and 40x objectives, to provide a more accurate estimation of cell growth/trajectories by allowing 3-D voxels to be computed at high spatiotemporal resolution and cell density. A pilot study was carried out in a phantom system consisting of horizontally aligned nanofiber layers (with precise spacing between them), to mimic features well exemplified in cellular activities of neuronal growth in a 3-D environment. This was followed by detailed investigations concerning axonal projections and dendritic circuitry formation in a 3-D tissue engineering construct. Preliminary work on primary animal neuronal cells in response to chemoattractant and topographic cue within the scaffolds has produced encouraging results.
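
An auto-focusing procedure of this kind typically ranks z-slices by a sharpness metric; a common choice is the variance of a Laplacian response. This sketch is illustrative only, not Cell-IQ's actual algorithm, and the function names are assumptions:

```python
import numpy as np

def focus_score(img):
    """Sharpness metric: variance of a discrete Laplacian response
    (sharp slices have strong second-derivative content)."""
    lap = (-4.0 * img
           + np.roll(img, 1, axis=0) + np.roll(img, -1, axis=0)
           + np.roll(img, 1, axis=1) + np.roll(img, -1, axis=1))
    return lap.var()

def best_focus(stack):
    """Index of the sharpest slice in a z-stack."""
    return int(np.argmax([focus_score(s) for s in stack]))
```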

Smith, W. A.; Lam, K.-P.; Dempsey, K. P.; Mazzocchi-Jones, D.; Richardson, J. B.; Yang, Y.

2014-02-01

392

Automated quantification of 3D regional myocardial wall thickening from gated Magnetic Resonance images  

PubMed Central

Purpose To develop 3D quantitative measures of regional myocardial wall motion and thickening using cardiac MRI and to validate them by comparison to standard visual scoring assessment. Materials and Methods 53 consecutive subjects with short-axis slices and mid-ventricular 2-chamber/4-chamber views were analyzed. After correction for breath-hold related misregistration, 3D myocardial boundaries were fitted to images, and edited by an imaging cardiologist. Myocardial thickness was quantified at end-diastole and end-systole by computing the 3D distances using Laplace’s equation. 3D thickening was represented using the standard 17-segment polar coordinates. 3D thickening was compared with 3D wall motion and with expert visual scores (6-point visual scoring of wall motion and wall thickening; 0=normal; 5=greatest abnormality) assigned by imaging cardiologists. Results Correlation between ejection fraction and thickening measurements was (r=0.84; p<0.001) compared to correlation between ejection fraction and motion measurements (r= 0.86; p<0.001). Good negative correlation between summed visual scores and global wall thickening and motion measurements were also obtained (rthick = -0.79; rmotion= -0.74). Additionally, overall good correlation between individual segmental visual scores with thickening/wall motion (rthick=-0.69; rmotion=-0.65) was observed (p<0.0001). Conclusion 3D quantitative regional thickening and wall motion measures obtained from MRI correlate strongly with expert clinical scoring.

Prasad, Mithun; Ramesh, Amit; Kavanagh, Paul; Tamarappoo, Balaji K.; Nakazato, Ryo; Gerlach, James; Cheng, Victor; Thomson, Louise E. J.; Berman, Daniel S.; Germano, Guido; Slomka, Piotr J.

2010-01-01

393

Real Time Quantitative 3-D Imaging of Diffusion Flame Species  

NASA Technical Reports Server (NTRS)

A low-gravity environment, in space or ground-based facilities such as drop towers, provides a unique setting for study of combustion mechanisms. Understanding the physical phenomena controlling the ignition and spread of flames in microgravity has importance for space safety as well as better characterization of dynamical and chemical combustion processes which are normally masked by buoyancy and other gravity-related effects. Even the use of so-called 'limiting cases' or the construction of 1-D or 2-D models and experiments fail to make the analysis of combustion simultaneously simple and accurate. Ideally, to bridge the gap between chemistry and fluid mechanics in microgravity combustion, species concentrations and temperature profiles are needed throughout the flame. However, restrictions associated with performing measurements in reduced gravity, especially size and weight considerations, have generally limited microgravity combustion studies to the capture of flame emissions on film or video, laser Schlieren imaging, and (intrusive) temperature measurements using thermocouples. Given the development of detailed theoretical models, more sophisticated studies are needed to provide the kind of quantitative data necessary to characterize the properties of microgravity combustion processes as well as provide accurate feedback to improve the predictive capabilities of the computational models. While there have been a myriad of fluid mechanical visualization studies in microgravity combustion, little experimental work has been completed to obtain reactant and product concentrations within a microgravity flame. This is largely due to the fact that traditional sampling methods (quenching microprobes using GC and/or mass spec analysis) are too heavy, slow, and cumbersome for microgravity experiments. Non-intrusive optical spectroscopic techniques have - up until now - also required excessively bulky, power-hungry equipment.
However, with the advent of near-IR diode lasers, the possibility now exists to obtain reactant and product concentrations and temperatures non-intrusively in microgravity combustion studies. Over the past ten years, Southwest Sciences has focused its research on the high sensitivity, quantitative detection of gas phase species using diode lasers. Our research approach combines three innovations in an experimental system resulting in a new capability for nonintrusive measurement of major combustion species. FM spectroscopy or high frequency Wavelength Modulation Spectroscopy (WMS) have recently been applied to sensitive absorption measurements at Southwest Sciences and in other laboratories using GaAlAs or InGaAsP diode lasers in the visible or near-infrared as well as lead-salt lasers in the mid-infrared spectral region. Because these lasers exhibit essentially no source noise at the high detection frequencies employed with this technique, the achievement of sensitivity approaching the detector shot noise limit is possible.
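
The wavelength-modulation detection scheme can be simulated directly: sweep the laser detuning sinusoidally across an absorption line, then demodulate the transmitted intensity at twice the modulation frequency, as a lock-in amplifier would. All line-shape and modulation parameters here are illustrative:

```python
import numpy as np

def wms_2f_signal(center_detuning, mod_depth=1.0, n=4096):
    """Simulated wavelength-modulation spectroscopy: modulate the laser
    detuning across a Lorentzian line (HWHM = 1, detuning in HWHM units)
    and demodulate the transmitted intensity at 2f."""
    t = np.linspace(0.0, 1.0, n, endpoint=False)
    detuning = center_detuning + mod_depth * np.cos(2 * np.pi * 5 * t)
    absorption = 1.0 / (1.0 + detuning ** 2)       # Lorentzian line shape
    intensity = 1.0 - 0.01 * absorption            # weak (1%) absorption
    return 2.0 * np.mean(intensity * np.cos(2 * np.pi * 10 * t))  # 2f lock-in
```

Because the 2f component tracks the curvature of the line shape, the signal peaks at line center and vanishes far from the line, which is what makes the technique nearly shot-noise limited at high detection frequencies.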

Kane, Daniel J.; Silver, Joel A.

1997-01-01

394

A molecular image-directed, 3D ultrasound-guided biopsy system for the prostate  

NASA Astrophysics Data System (ADS)

Systematic transrectal ultrasound (TRUS)-guided biopsy is the standard method for a definitive diagnosis of prostate cancer. However, this approach uses two-dimensional (2D) ultrasound images to guide the biopsy and can miss up to 30% of prostate cancers. We are developing a molecular image-directed, three-dimensional (3D) ultrasound image-guided biopsy system for improved detection of prostate cancer. The system consists of a 3D mechanical localization system and a software workstation for image segmentation, registration, and biopsy planning. In order to plan biopsies in a 3D prostate volume, we developed an automatic segmentation method based on the wavelet transform. In order to incorporate PET/CT images into ultrasound-guided biopsy, we developed image registration methods to fuse TRUS and PET/CT images. The segmentation method was tested in ten patients with a Dice overlap ratio of 92.4% +/- 1.1%. The registration method has been tested in phantoms. The biopsy system was tested in prostate phantoms, and 3D ultrasound images were acquired from two human patients. We are integrating the system for PET/CT-directed, 3D ultrasound-guided, targeted biopsy in human patients.
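The Dice overlap ratio used to validate the segmentation has a standard definition, 2|A ∩ B| / (|A| + |B|), for binary masks A and B. A minimal sketch (the masks here are toy data, not patient segmentations):

```python
import numpy as np

def dice_coefficient(seg, ref):
    """Dice overlap between two binary masks: 2|A ∩ B| / (|A| + |B|)."""
    seg, ref = seg.astype(bool), ref.astype(bool)
    denom = seg.sum() + ref.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(seg, ref).sum() / denom

# Toy 2D slice: an automatic segmentation shifted one pixel from the reference.
auto = np.zeros((10, 10), dtype=bool)
auto[2:8, 2:8] = True      # 36 pixels
manual = np.zeros((10, 10), dtype=bool)
manual[3:9, 3:9] = True    # 36 pixels; overlap is 5x5 = 25 pixels
print(round(dice_coefficient(auto, manual), 3))  # 2*25/(36+36) = 0.694
```

A ratio of 92.4% therefore means the automatic and reference volumes agree on well over nine-tenths of their combined extent.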

Fei, Baowei; Schuster, David M.; Master, Viraj; Akbari, Hamed; Fenster, Aaron; Nieh, Peter

2012-02-01

395

3D building reconstruction from aerial CCD image and sparse laser sample data  

NASA Astrophysics Data System (ADS)

An approach for automatic 3D building reconstruction from aerial CCD imagery and sparse laser scanning sample points is presented in this paper. Since the geometric shape of a building appears very clearly in a high-resolution aerial CCD image, we first apply a Laplacian sharpening operator and threshold segmentation to extract edges from the CCD image, and then use pixel connectivity to extract its linear features. A bi-directional projection histogram and line matching are proposed to extract the contours of buildings. The height of each building is determined from the sparse laser sample points that fall within its contour as extracted from the CCD image; the 3D information of each building is thereby reconstructed. Using real aerial CCD and sparse laser rangefinder data, we show that this approach reconstructs 3D buildings correctly.
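The edge-extraction step (Laplacian sharpening followed by thresholding) can be sketched as follows; the kernel, threshold value, and synthetic building image are illustrative assumptions, not the authors' parameters:

```python
import numpy as np

# 4-neighbour Laplacian kernel; its response is large at intensity steps.
LAPLACIAN = np.array([[0,  1, 0],
                      [1, -4, 1],
                      [0,  1, 0]], dtype=float)

def convolve2d(img, kernel):
    """Plain 'valid' 2D convolution (no external dependencies)."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def laplacian_edges(img, thresh):
    """Edge mask: threshold the magnitude of the Laplacian response."""
    return np.abs(convolve2d(img, LAPLACIAN)) >= thresh

# Synthetic scene: a bright rectangular building roof on dark ground.
img = np.zeros((12, 12))
img[3:9, 3:9] = 100.0
edges = laplacian_edges(img, thresh=50.0)
print(edges.sum())  # nonzero only along the building outline
```

The connected edge pixels would then be grouped into line segments and, via the projection histograms, into closed building contours.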

Hongjian, You; Shiqiang, Zhang

2006-06-01

396

Robust 3D reconstruction using LiDAR and N visual images  

NASA Astrophysics Data System (ADS)

3D image reconstruction is desirable in many applications, such as city planning, cartography, and computer vision. The accuracy of the 3D reconstruction plays a vital role in many real-world applications. We introduce a method that uses one LiDAR image and N conventional visual images to reduce the error and build a robust registration for 3D reconstruction. In this method, lines are used as features in both the LiDAR and visual images. Our proposed system consists of two steps. In the first step, we extract lines from the LiDAR and visual images using the Hough transform. In the second step, we estimate the camera matrices using a search algorithm combined with the fundamental matrices for the visual cameras. We demonstrate our method on a synthetic model that is an idealized representation of an urban environment.
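The Hough-transform line extraction in the first step can be sketched with a minimal (rho, theta) voting implementation; the accumulator resolution and peak selection here are simplifying assumptions:

```python
import numpy as np

def hough_lines(points, shape, n_theta=180, peak_count=1):
    """Vote edge pixels (y, x) into a (rho, theta) accumulator and
    return the strongest cells as detected lines."""
    h, w = shape
    diag = int(np.ceil(np.hypot(h, w)))            # max possible |rho|
    thetas = np.deg2rad(np.arange(n_theta))        # 0..179 degrees
    acc = np.zeros((2 * diag + 1, n_theta), dtype=int)
    for y, x in points:
        rhos = np.round(x * np.cos(thetas) + y * np.sin(thetas)).astype(int)
        acc[rhos + diag, np.arange(n_theta)] += 1
    order = np.argsort(acc.ravel())[::-1][:peak_count]
    peaks = [np.unravel_index(i, acc.shape) for i in order]
    return [(int(r) - diag, float(np.rad2deg(thetas[t]))) for r, t in peaks]

# A horizontal row of edge pixels at y = 5: all 20 points vote into the
# cell rho = 5, theta = 90 degrees (and its immediate angular neighbours).
pts = [(5, x) for x in range(20)]
rho, theta = hough_lines(pts, shape=(10, 20))[0]
print(rho, theta)  # rho = 5, theta within a degree or so of 90
```

Lines recovered this way in both modalities then serve as the correspondence features for estimating the camera matrices.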

Duraisamy, Prakash; Jackson, Stephen; Namuduri, Kamesh; Alam, Mohammed S.; Buckles, Bill

2013-03-01

397

Sparse multipass 3D SAR imaging: applications to the GOTCHA data set  

NASA Astrophysics Data System (ADS)

Typically in SAR imaging there is insufficient data to form well-resolved three-dimensional (3D) images using traditional Fourier image reconstruction; furthermore, scattering centers do not persist over wide angles. In this work, we examine 3D non-coherent wide-angle imaging on the GOTCHA Air Force Research Laboratory (AFRL) data set; this data set consists of multipass, complete circular aperture radar data from a scene at AFRL, with each pass varying in elevation as a result of aircraft flight dynamics. We compare two algorithms capable of forming well-resolved 3D images over this data set: regularized lp least-squares inversion and non-uniform multipass interferometric SAR (IFSAR).
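Regularized lp least-squares inversion can be illustrated for the p = 1 case with a basic iterative shrinkage-thresholding (ISTA) sketch on a toy underdetermined problem; the problem size, sparsity level, and regularization weight below are illustrative assumptions, not the GOTCHA radar data:

```python
import numpy as np

def ista(A, y, lam, steps=1000):
    """Iterative shrinkage-thresholding for min ||Ax - y||^2 + lam * ||x||_1."""
    L = np.linalg.norm(A, 2) ** 2              # ||A||_2^2; step size is 1/(2L)
    x = np.zeros(A.shape[1])
    for _ in range(steps):
        z = x - A.T @ (A @ x - y) / L          # gradient step on ||Ax - y||^2
        x = np.sign(z) * np.maximum(np.abs(z) - lam / (2.0 * L), 0.0)  # soft threshold
    return x

# Toy underdetermined problem: 25 'measurements' of a 50-element scene
# containing only 3 strong scatterers; l1 regularization recovers them.
rng = np.random.default_rng(0)
A = rng.standard_normal((25, 50)) / np.sqrt(25.0)
x_true = np.zeros(50)
x_true[[5, 17, 40]] = [3.0, -2.0, 4.0]
y = A @ x_true
x_hat = ista(A, y, lam=0.1)
support = set(np.argsort(np.abs(x_hat))[-3:])
print(sorted(support))  # the three largest entries sit on the true scatterers
```

This is why a sparsity-enforcing penalty can yield well-resolved images from data that are insufficient for Fourier inversion: the prior of few dominant scatterers substitutes for the missing measurements.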

Austin, Christian D.; Ertin, Emre; Moses, Randolph L.

2009-05-01

398

Detection of optic disc in retinal images by means of a geometrical model of vessel structure  

Microsoft Academic Search

We present here a new method to identify the position of the optic disc (OD) in retinal fundus images. The method is based on the preliminary detection of the main retinal vessels. All retinal vessels originate from the OD and their path follows a similar directional pattern (parabolic course) in all images. To describe the general direction of retinal vessels
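The parabolic-course idea can be sketched as a least-squares parabola fit to vessel centerline points, with the vertex of the fitted parabola serving as the OD estimate; the synthetic centerline and axis convention here are assumptions for illustration, not the authors' algorithm:

```python
import numpy as np

def fit_parabola_vertex(xs, ys):
    """Least-squares fit of x = a*y^2 + b*y + c to vessel centerline
    points; the vertex of the fitted parabola estimates the OD position."""
    design = np.column_stack([ys**2, ys, np.ones_like(ys)])
    a, b, c = np.linalg.lstsq(design, xs, rcond=None)[0]
    y_v = -b / (2.0 * a)                 # vertex row
    x_v = a * y_v**2 + b * y_v + c       # vertex column
    return x_v, y_v

# Synthetic main-vessel centerline opening rightward from an OD at (20, 50)
# (x = column, y = row, in pixels).
ys = np.linspace(0.0, 100.0, 41)
xs = 20.0 + 0.01 * (ys - 50.0) ** 2
x_v, y_v = fit_parabola_vertex(xs, ys)
print(round(x_v, 1), round(y_v, 1))  # 20.0 50.0
```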

Marco Foracchia; Enrico Grisan; Alfredo Ruggeri

2004-01-01

399