These are representative sample records from Science.gov related to your search topic.
For comprehensive and current results, perform a real-time search at Science.gov.
1

Adaptive-optics optical coherence tomography for high-resolution and high-speed 3D retinal in vivo imaging  

PubMed Central

We have combined Fourier-domain optical coherence tomography (FD-OCT) with a closed-loop adaptive optics (AO) system using a Hartmann-Shack wavefront sensor and a bimorph deformable mirror. The adaptive optics system measures and corrects the wavefront aberration of the human eye for improved lateral resolution (~4 µm) of retinal images, while maintaining the high axial resolution (~6 µm) of stand-alone OCT. The AO-OCT instrument enables the three-dimensional (3D) visualization of different retinal structures in vivo with high 3D resolution (4×4×6 µm). Using this system, we have demonstrated the ability to image microscopic blood vessels and the cone photoreceptor mosaic. PMID:19096728

Zawadzki, Robert J.; Jones, Steven M.; Olivier, Scot S.; Zhao, Mingtao; Bower, Bradley A.; Izatt, Joseph A.; Choi, Stacey; Laut, Sophie; Werner, John S.

2008-01-01

2

A statistical model for 3D segmentation of retinal choroid in optical coherence tomography images  

NASA Astrophysics Data System (ADS)

The choroid is a densely vascularized layer under the retinal pigment epithelium (RPE). Its deeper boundary is formed by the sclera, the outer fibrous shell of the eye. However, the inhomogeneity within the layers of choroidal Optical Coherence Tomography (OCT) tomograms presents a significant challenge to existing segmentation algorithms. In this paper, we performed a statistical study of retinal OCT data to extract the choroid. The model fits a Gaussian mixture model (GMM) to image intensities with the Expectation-Maximization (EM) algorithm. The goodness of fit of the proposed GMM, computed using a chi-square measure, is below 0.04 for our dataset. After fitting the GMM to the OCT data, a Bayesian classification method is employed to segment the upper and lower boundaries of the retinal choroid. Our simulations show signed and unsigned errors of -1.44 +/- 0.5 and 1.6 +/- 0.53 for the upper border, and -5.7 +/- 13.76 and 6.3 +/- 13.4 for the lower border, respectively.
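A minimal sketch of the GMM/EM intensity modeling and Bayesian (MAP) voxel classification steps summarized above, using scikit-learn. The three-component mixture, the synthetic stand-in volume, and the choice of the lowest-mean component as the choroid candidate are illustrative assumptions, not the authors' settings.

```python
# Rough sketch of GMM/EM intensity modeling followed by Bayesian classification.
# Assumptions (not from the paper): 3 components, a synthetic OCT volume, and
# labeling the darkest component as the choroid-candidate class.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
oct_volume = rng.normal(loc=120, scale=30, size=(16, 64, 64))  # placeholder B-scans

intensities = oct_volume.reshape(-1, 1)
gmm = GaussianMixture(n_components=3, covariance_type="full", random_state=0)
gmm.fit(intensities)  # EM runs inside fit()

# MAP classification of every voxel into one of the mixture components.
labels = gmm.predict(intensities).reshape(oct_volume.shape)

# Treat the component with the lowest mean intensity as a crude "choroid" class.
choroid_class = int(np.argmin(gmm.means_.ravel()))
choroid_mask = labels == choroid_class
print("choroid-candidate voxel fraction:", choroid_mask.mean())
```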

Ghasemi, F.; Rabbani, H.

2014-03-01

3

Retinal Imaging and Image Analysis  

PubMed Central

Many important eye diseases as well as systemic diseases manifest themselves in the retina. While a number of other anatomical structures contribute to the process of vision, this review focuses on retinal imaging and image analysis. Following a brief overview of the most prevalent causes of blindness in the industrialized world that includes age-related macular degeneration, diabetic retinopathy, and glaucoma, the review is devoted to retinal imaging and image analysis methods and their clinical implications. Methods for 2-D fundus imaging and techniques for 3-D optical coherence tomography (OCT) imaging are reviewed. Special attention is given to quantitative techniques for analysis of fundus photographs with a focus on clinically relevant assessment of retinal vasculature, identification of retinal lesions, assessment of optic nerve head (ONH) shape, building retinal atlases, and to automated methods for population screening for retinal diseases. A separate section is devoted to 3-D analysis of OCT images, describing methods for segmentation and analysis of retinal layers, retinal vasculature, and 2-D/3-D detection of symptomatic exudate-associated derangements, as well as to OCT-based analysis of ONH morphology and shape. Throughout the paper, aspects of image acquisition, image analysis, and clinical relevance are treated together considering their mutually interlinked relationships. PMID:21743764

Abràmoff, Michael D.; Garvin, Mona K.; Sonka, Milan

2011-01-01

4

3D Imaging.  

ERIC Educational Resources Information Center

Discusses 3-D imaging as it relates to digital representations in virtual library collections. Highlights include X-ray computed tomography (X-ray CT); the National Science Foundation (NSF) Digital Library Initiatives; output peripherals; image retrieval systems, including metadata; and applications of 3-D imaging for libraries and museums. (LRW)

Hastings, S. K.

2002-01-01

5

Adaptive-optics optical coherence tomography for high-resolution and high-speed 3D retinal in vivo imaging  

Microsoft Academic Search

We have combined Fourier-domain optical coherence tomography (FD-OCT) with a closed-loop adaptive optics (AO) system using a Hartmann-Shack wavefront sensor and a bimorph deformable mirror. The adaptive optics system measures and corrects the wavefront aberration of the human eye for improved lateral resolution (~4 µm) of retinal images, while maintaining the high axial resolution (~6 µm) of stand-alone OCT.

Robert J. Zawadzki; Steven M. Jones; Scot S. Olivier; Mingtao Zhao; Bradley A. Bower; Joseph A. Izatt; Stacey Choi; Sophie Laut; John S. Werner

2005-01-01

6

Static 3D image space  

Microsoft Academic Search

As three-dimensional (3D) techniques continue to evolve from their humble beginnings in nineteenth-century stereo photographs and twentieth-century movies and holographs, the urgency for advancement in 3D display is escalating, as the need for widespread application in medical imaging, baggage scanning, gaming, television and movie display, and military strategizing increases. The most recent 3D developments center upon volumetric displays, which generate

Badia Koudsi; Jim J. Sluss Jr.

2010-01-01

7

3-D threat image projection  

NASA Astrophysics Data System (ADS)

Automated Explosive Detection Systems utilizing Computed Tomography perform a series of X-ray scans of passenger bags being checked in at the airport, and produce various 2-D projection images and 3-D volumetric images of the bag. The determination as to whether the passenger bag contains an explosive and needs to be searched manually is performed by trained Transportation Security Administration screeners following an approved protocol. In order to keep the screeners vigilant with regard to screening quality, the Transportation Security Administration has mandated the use of Threat Image Projection on 2-D projection X-ray screening equipment used at all US airports. These algorithms insert artificial visual threats into images of normal passenger bags in order to test the screeners' efficiency and quality at determining threats. This technology for 2-D X-ray systems is proven and is widespread amongst multiple manufacturers of X-ray projection systems. Until now, Threat Image Projection has been unsuccessful at being introduced into 3-D Automated Explosive Detection Systems for numerous reasons. The failure of these prior attempts is mainly due to imaging cues that the screeners pick up on, which make it easy for them to discern the presence of the threat image and thus defeat the intended purpose. This paper presents a novel approach to 3-D Threat Image Projection for 3-D Automated Explosive Detection Systems. The method presented here is a projection-based approach where both the threat object and the bag remain in projection sinogram space. Novel approaches have been developed for projection-based object segmentation, projection-based streak reduction used for threat object isolation along with scan orientation independence, and projection-based streak generation for an overall realistic 3-D image. The algorithms are prototyped in MATLAB and C++ and demonstrate non-discernible 3-D threat image insertion into various luggage, and non-discernible streak patterns for 3-D images when compared to actual scanned images.

Yildiz, Yesna O.; Abraham, Douglas Q.; Agaian, Sos; Panetta, Karen

2008-02-01

8

Static 3D image space  

NASA Astrophysics Data System (ADS)

As three-dimensional (3D) techniques continue to evolve from their humble beginnings in nineteenth-century stereo photographs and twentieth-century movies and holographs, the urgency for advancement in 3D display is escalating, as the need for widespread application in medical imaging, baggage scanning, gaming, television and movie display, and military strategizing increases. The most recent 3D developments center upon volumetric displays, which generate 3D images within actual 3D space. More specifically, the CSpace volumetric display generates a truly natural 3D image consisting of perceived width, height, and depth within the confines of physical space. Wireframe graphics give viewers a 360-degree display without the use of additional visual aids. In this paper, research detailing the selection and testing of several rare-earth, single-doped fluoride crystals, namely 1%Er:NYF4, 2%Er:NYF4, 3%Er:NYF4, 2%Er:KY3F10, and 2%Er:YLF, is introduced. These materials are the basis for CSpace display in a two-step, two-frequency up-conversion process. Significant determinants were tested and identified to aid in the selection of a suitable medium. Results show that 2%Er:NYF4 demonstrates good optical emitted power. Its superior level of brightness makes it the most suitable candidate for CSpace display. Testing also showed that the 2%Er:KY3F10 crystal might be a viable medium.

Koudsi, Badia; Sluss, Jim J., Jr.

2010-02-01

9

Consistent stylization of stereoscopic 3D images  

Microsoft Academic Search

The application of stylization filters to photographs is common, Instagram being a popular recent example. These image manipulation applications work great for 2D images. However, stereoscopic 3D cameras are increasingly available to consumers (Nintendo 3DS, Fuji W3 3D, HTC Evo 3D). How will users apply these same stylizations to stereoscopic images?

Lesley Northam; Paul Asente; Craig S. Kaplan

2012-01-01

10

Minimizing camera-eye optical aberrations during the 3D reconstruction of retinal structures  

NASA Astrophysics Data System (ADS)

3D reconstruction of blood vessels is a powerful visualization tool for physicians, since it allows them to refer to a qualitative representation of their subject of study. In this paper we propose a 3D reconstruction method for retinal vessels from fundus images. The reconstruction method proposed herein uses images of the same retinal structure in epipolar geometry. Images are preprocessed by the RISA system to segment blood vessels and obtain feature points for correspondences. The point correspondence problem is solved using correlation. LMedS analysis and the Graph Transformation Matching algorithm are used for outlier suppression. Camera projection matrices are computed with the normalized eight-point algorithm. Finally, we retrieve the 3D positions of the retinal tree points by linear triangulation. In order to increase the power of visualization, 3D tree skeletons are represented by surfaces via generalized cylinders whose radii correspond to morphological measurements obtained by RISA. In this paper the complete calibration process, including the fundus camera and the optical properties of the eye, the so-called camera-eye system, is proposed. On one hand, the internal parameters of the fundus camera are obtained by classical algorithms using a reference pattern. On the other hand, we minimize the undesirable effects of the aberrations induced by the eyeball optical system, assuming that a contact enlarging lens corrects astigmatism, spherical and coma aberrations are reduced by changing the aperture size, and eye refractive errors are suppressed by adjusting camera focus during image acquisition. Evaluation of two self-calibration proposals and results of 3D blood vessel surface reconstruction are presented.
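As a rough, self-contained illustration of the chain described above (feature matching, LMedS outlier suppression, projection matrices, linear triangulation), the sketch below uses OpenCV on synthetic two-view data. The intrinsic matrix, camera pose, and point set are invented placeholders, and cv2.findFundamentalMat with the LMedS flag plus cv2.recoverPose stand in for the paper's RISA-based matching and normalized eight-point step.

```python
# Two-view reconstruction sketch: robust F estimation -> pose -> triangulation.
import numpy as np
import cv2

K = np.array([[800.0, 0, 256], [0, 800.0, 256], [0, 0, 1]])   # assumed intrinsics
R_true = cv2.Rodrigues(np.array([0.0, 0.05, 0.0]))[0]         # small rotation
t_true = np.array([[1.0], [0.0], [0.0]])                       # assumed baseline
X_true = np.random.rand(60, 3) * [40, 40, 10] + [0, 0, 100]    # points in front of cameras

def project(X, R, t):
    x = (K @ (R @ X.T + t)).T
    return (x[:, :2] / x[:, 2:3]).astype(np.float32)

pts1 = project(X_true, np.eye(3), np.zeros((3, 1)))
pts2 = project(X_true, R_true, t_true)

# Robust fundamental-matrix estimation with LMedS (outlier suppression), then pose.
F, mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_LMEDS)
E = K.T @ F @ K
_, R, t, _ = cv2.recoverPose(E, pts1, pts2, K)

# Projection matrices and linear triangulation of the 3D vessel points.
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([R, t])
X_h = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)
X = (X_h[:3] / X_h[3]).T
print("reconstructed", X.shape[0], "points")
```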

Aldana-Iuit, Javier; Martinez-Perez, M. Elena; Espinosa-Romero, Arturo; Diaz-Uribe, Rufino

2010-05-01

11

Transplantation of Embryonic and Induced Pluripotent Stem Cell-Derived 3D Retinal Sheets into Retinal Degenerative Mice.  

PubMed

In this article, we show that mouse embryonic stem cell- or induced pluripotent stem cell-derived 3D retinal tissue developed a structured outer nuclear layer (ONL) with complete inner and outer segments even in an advanced retinal degeneration model (rd1) that lacked ONL. We also observed host-graft synaptic connections by immunohistochemistry. This study provides a "proof of concept" for retinal sheet transplantation therapy for advanced retinal degenerative diseases. PMID:24936453

Assawachananont, Juthaporn; Mandai, Michiko; Okamoto, Satoshi; Yamada, Chikako; Eiraku, Mototsugu; Yonemura, Shigenobu; Sasai, Yoshiki; Takahashi, Masayo

2014-05-01

12

3D Modeling From 2D Images  

Microsoft Academic Search

This article gives an overview of methods for transforming a set of images into a 3D model. A direct method of creating a 3D model using 3D software will be described. Creating photorealistic 3D models from a set of photographs is a challenging problem in computer vision because the technology is still in its development stage while the demands for 3D

Lana Madracevic; Stjepan Sogoric

2010-01-01

13

Multispectral retinal image analysis: a  

E-print Network

of the imaging technique and refinement of the software are necessary to understand the full potential pigments in the fundus: macular pigment (MP), retinal blood, retinal pigment epithelium (RPE) melanin

Claridge, Ela

14

3D ultrafast ultrasound imaging in vivo  

NASA Astrophysics Data System (ADS)

Very high frame rate ultrasound imaging has recently allowed for the extension of the applications of echography to new fields of study such as the functional imaging of the brain, cardiac electrophysiology, and the quantitative imaging of the intrinsic mechanical properties of tumors, to name a few, non-invasively and in real time. In this study, we present the first implementation of Ultrafast Ultrasound Imaging in 3D based on the use of either diverging or plane waves emanating from a sparse virtual array located behind the probe. It achieves high contrast and resolution while maintaining imaging rates of thousands of volumes per second. A customized portable ultrasound system was developed to sample 1024 independent channels and to drive a 32 × 32 matrix-array probe. Its ability to track in 3D transient phenomena occurring in the millisecond range within a single ultrafast acquisition was demonstrated for 3D Shear-Wave Imaging, 3D Ultrafast Doppler Imaging, and, finally, 3D Ultrafast combined Tissue and Flow Doppler Imaging. The propagation of shear waves was tracked in a phantom and used to characterize its stiffness. 3D Ultrafast Doppler was used to obtain 3D maps of Pulsed Doppler, Color Doppler, and Power Doppler quantities in a single acquisition and revealed, at thousands of volumes per second, the complex 3D flow patterns occurring in the ventricles of the human heart during an entire cardiac cycle, as well as the 3D in vivo interaction of blood flow and wall motion during the pulse wave in the carotid at the bifurcation. This study demonstrates the potential of 3D Ultrafast Ultrasound Imaging for the 3D mapping of stiffness, tissue motion, and flow in humans in vivo and promises new clinical applications of ultrasound with reduced intra- and inter-observer variability.

Provost, Jean; Papadacci, Clement; Esteban Arango, Juan; Imbault, Marion; Fink, Mathias; Gennisson, Jean-Luc; Tanter, Mickael; Pernot, Mathieu

2014-10-01

15

Elastic registration of 3D ultrasound images.  

PubMed

3D registration of ultrasound images is an important and fast-growing research area with various medical applications, such as image-guided radiotherapy and surgery. However, this registration process is extremely challenging due to the deformation of soft tissue and the existence of speckles in these images. This paper presents a novel intra-modality elastic registration technique for 3D ultrasound images. It uses the general concept of attribute vectors to find the corresponding voxels in the fixed and moving images. The method does not require any pre-segmentation and does not employ any numerical optimization procedure. Therefore, the computational requirements are very low and it has the potential to be used for real-time applications. The technique is implemented and tested for 3D ultrasound images of liver, captured by a 3D ultrasound transducer. The results show that the method is sufficiently accurate and robust and does not easily get trapped with local minima. PMID:16685832

Foroughi, Pezhman; Abolmaesumi, Purang

2005-01-01

16

Standoff 3D Gamma-Ray Imaging  

Microsoft Academic Search

We present a new standoff imaging technique able to provide 3-dimensional (3D) images of gamma-ray sources distributed in the environment. Unlike standard 3D tomographic methods, this technique does not require the radioactive sources to be bounded within a predefined physical space. In the present implementation, the gamma-ray imaging system is based on two large planar HPGe double sided segmented detectors,

Lucian Mihailescu; Kai Vetter; Daniel Chivers

2009-01-01

17

Retinal image blood vessel segmentation  

Microsoft Academic Search

The appearance and structure of blood vessels in retinal images play an important role in diagnosis of eye diseases. This paper proposes a method for segmentation of blood vessels in color retinal images. We present a method that uses 2-D Gabor wavelet to enhance the vascular pattern. We locate and segment the blood vessels using adaptive thresholding. The technique is
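A hedged sketch of the two steps named in this abstract (Gabor-based vessel enhancement followed by adaptive thresholding), using OpenCV. The filter-bank parameters and the synthetic stand-in for the fundus green channel are assumptions, not the authors' values.

```python
# Gabor vessel enhancement + adaptive thresholding, in the spirit of the abstract.
import numpy as np
import cv2

# Placeholder for the green channel of a fundus photograph.
green = (np.random.rand(256, 256) * 255).astype(np.uint8)

# Bank of Gabor filters at several orientations; keep the maximum response.
response = np.zeros((256, 256), dtype=np.float32)
for theta in np.arange(0, np.pi, np.pi / 8):
    kernel = cv2.getGaborKernel((15, 15), 3.0, theta, 8.0, 0.5, 0)
    filtered = cv2.filter2D(green.astype(np.float32), cv2.CV_32F, kernel)
    response = np.maximum(response, filtered)

# Normalize the enhanced vessel map and binarize with adaptive thresholding.
norm = cv2.normalize(response, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
vessels = cv2.adaptiveThreshold(norm, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                                cv2.THRESH_BINARY, 25, -5)
print("vessel pixels:", int((vessels > 0).sum()))
```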

M. Usman Akram; Anam Tariq; Shoab A. Khan

2009-01-01

18

Acquisition and applications of 3D images  

NASA Astrophysics Data System (ADS)

The moiré fringes method and its analysis, up to medical and entertainment applications, are discussed in this paper. We describe the procedure of capturing 3D images with an Inspeck camera, a real-time 3D shape acquisition system based on structured light techniques. The method is a high-resolution one. After processing the images on a computer, we can use the data to create fashionable objects by engraving them with a Q-switched Nd:YAG laser. In the medical field we mention plastic surgery and the replacement of X-ray imaging, especially in pediatric use.

Sterian, Paul; Mocanu, Elena

2007-08-01

19

A New Classification Mechanism for Retinal Images  

Microsoft Academic Search

In this paper, we propose a classification mechanism for retinal images so that the retinal images can be successfully distinguished from nonretinal images, egg yolk images for example. The proposed classification mechanism consists of two procedures: training and classification. The image features of retinal images and nonretinal images are extracted at the beginning of the training procedure to make sure

Chin-Chen Chang; Yen-Chang Chen; Chia-Chen Lin

2009-01-01

20

3D Biological Tissue Image Rendering Software  

Cancer.gov

Available for commercial development is software that provides automatic visualization of features inside biological image volumes in 3D. The software provides a simple and interactive visualization for the exploration of biological datasets through dataset-specific transfer functions and direct volume rendering.

21

3D imaging finds more breast cancers.  

PubMed

In the largest such study to date, researchers have found that breast cancer screening using a combination of traditional 2D digital mammography and 3D imaging detects more invasive breast cancers and reduces false alarms compared with traditional mammography alone. PMID:25185169

2014-09-01

22

Analysis of posterior retinal layers in spectral optical coherence tomography images of the normal retina and retinal pathologies  

E-print Network

We present a computationally efficient, semiautomated method for analysis of posterior retinal layers in three-dimensional (3-D) images obtained by spectral optical coherence tomography (SOCT). The method consists of two ...

Szkulmowski, Maciej

23

Geometric Smoothing of 3D Surfaces and Nonlinear Diffusion of 3D Images  

E-print Network

Geometric Smoothing of 3D Surfaces and Non-linear Diffusion of 3D Images, Technical Report LEMS-144 for them. Keywords: Shape representation, deformation, scale, 3D smoothing, curvature dependent flow and formal theorems about its smoothing properties, the development of a similar process in 3D has been

24

3D imaging system for biometric applications  

NASA Astrophysics Data System (ADS)

There is a growing interest in the use of 3D data for many new applications beyond traditional metrology areas. In particular, using 3D data to obtain shape information of both people and objects for applications ranging from identification to game inputs does not require high degrees of calibration or resolutions in the tens of micron range, but does require a means to quickly and robustly collect data in the millimeter range. Systems using methods such as structured light or stereo have seen wide use in measurements, but due to the use of a triangulation angle, and thus the need for a separated second viewpoint, may not be practical for looking at a subject 10 meters away. Even when working close to a subject, such as capturing hands or fingers, the triangulation angle causes occlusions, shadows, and a physically large system that may get in the way. This paper will describe methods to collect medium-resolution 3D data, plus high-resolution 2D images, using a line-of-sight approach. The methods use no moving parts and as such are robust to movement (for portability), reliable, and potentially very fast at capturing 3D data. This paper will describe the optical methods considered, variations on these methods, and present experimental data obtained with the approach.

Harding, Kevin; Abramovich, Gil; Paruchura, Vijay; Manickam, Swaminathan; Vemury, Arun

2010-04-01

25

Pattern based 3D image Steganography  

NASA Astrophysics Data System (ADS)

This paper proposes a new high-capacity steganographic scheme using 3D geometric models. The novel algorithm re-triangulates a part of a triangle mesh and embeds the secret information into the newly added positions of the triangle mesh. Up to nine bits of secret data can be embedded into the vertices of a triangle without causing any changes in the visual quality or the geometric properties of the cover image. Experimental results show that the proposed algorithm is secure, with high capacity and a low distortion rate. Our algorithm also resists uniform affine transformations such as cropping, rotation and scaling. The performance of the method is also compared with other existing 3D steganography algorithms.
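The sketch below is only a simplified illustration of hiding payload bits in mesh vertex coordinates; the paper's actual scheme embeds data in vertices added by re-triangulation, which is not reproduced here. The quantization step and parity rule are invented for the example.

```python
# Toy mesh steganography: hide one bit per coordinate via quantization parity.
import numpy as np

def embed_bits(vertices, bits, step=1e-4):
    """Embed one bit per coordinate by forcing the parity of round(coord/step)."""
    v = vertices.copy().ravel()
    for i, bit in enumerate(bits):
        q = np.round(v[i] / step)
        if int(q) % 2 != bit:      # flip parity when it disagrees with the payload bit
            q += 1
        v[i] = q * step
    return v.reshape(vertices.shape)

def extract_bits(vertices, n_bits, step=1e-4):
    v = vertices.ravel()
    return [int(np.round(v[i] / step)) % 2 for i in range(n_bits)]

mesh = np.random.rand(100, 3)               # placeholder cover mesh vertices
payload = [1, 0, 1, 1, 0, 0, 1, 0, 1]       # nine bits, matching the abstract's capacity figure
stego = embed_bits(mesh, payload)
assert extract_bits(stego, len(payload)) == payload
print("max vertex displacement:", np.abs(stego - mesh).max())
```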

Thiyagarajan, P.; Natarajan, V.; Aghila, G.; Prasanna Venkatesan, V.; Anitha, R.

2013-03-01

26

Composite model of a 3-D image  

NASA Technical Reports Server (NTRS)

This paper presents a composite model of a moving (3-D) image especially useful for the sequential image processing and encoding. A non-linear predictor based on the composite model is described. The performance of this predictor is used as a measure of the validity of the model for a real image source. The minimization of a total mean square prediction error provides an inequality which determines a condition for the profitable use of the composite model and can serve as a decision device for the selection of the number of subsources within the model. The paper also describes statistical properties of the prediction error and contains results of computer simulation of two non-linear predictors in the case of perfect classification between subsources.

Dukhovich, I. J.

1980-01-01

27

Model Based Segmentation for Retinal Fundus Images  

E-print Network

of twenty digital fundus retinal images. 1 Introduction: The detection and measurement of blood vessels ... linear and branching structures. Fig. 1: (a) sample of a digital fundus retinal image (605 x 700); (b)-(c) part of the inverted grey level

Bhalerao, Abhir

28

Ames Lab 101: Real-Time 3D Imaging  

SciTech Connect

Ames Laboratory scientist Song Zhang explains his real-time 3-D imaging technology. The technique can be used to create high-resolution, real-time, precise, 3-D images for use in healthcare, security, and entertainment applications.

Zhang, Song

2010-01-01

29

Computational 3D and reflectivity imaging with high photon efficiency  

E-print Network

Imaging the 3D structure and reflectivity of a scene can be done using photon-counting detectors. Traditional imagers of this type typically require hundreds of detected photons per pixel for accurate 3D and reflectivity ...

Shin, Dongeek

2014-01-01

30

Layer extraction in rodent retinal images acquired by optical coherence tomography  

Microsoft Academic Search

Optical coherence tomography (OCT) is a modern technique that allows for in vivo, fast, high-resolution 3D imaging. OCT can be efficiently used in eye research and diagnostics, when retinal images are processed to extract borders of retinal layers. In this paper, we present two novel algorithms for delineation of three main borders in rodent retinal images. The first, fast algorithm

József Molnár; Dmitry Chetverikov; Delia Cabrera DeBuc; Wei Gao; Gábor Márk Somfai

31

3D Reconstruction of CT Images Based on Isosurface Construction  

Microsoft Academic Search

With the development of modern medical imaging and computer technology, rapid prototyping manufacturing and 3D visualized medical accessory systems have been achieved based on CT. Aiming at the key technology of 3D reconstruction from medical CT images, a 3D medical imaging surface reconstruction scheme was proposed, which integrates segmentation with the marching cubes (MC) algorithm. Firstly, the shortcomings of the standard MC algorithm
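A small sketch of the segmentation-plus-marching-cubes idea summarized above, using scikit-image on a synthetic volume; the spherical phantom and the 0.5 iso-level are placeholders, not the authors' CT data or parameters.

```python
# Threshold segmentation followed by marching-cubes isosurface extraction.
import numpy as np
from skimage import measure

# Synthetic "CT" volume: a bright sphere inside a dark background.
z, y, x = np.mgrid[-32:32, -32:32, -32:32]
volume = (np.sqrt(x**2 + y**2 + z**2) < 20).astype(np.float32)

# Extract the triangulated surface at the chosen iso-level.
verts, faces, normals, values = measure.marching_cubes(volume, level=0.5)
print(f"{len(verts)} vertices, {len(faces)} triangular faces")
```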

Hongjian Wang; Fen Luo; Jianshan Jiang

2008-01-01

32

USERS GUIDE FOR ASPIRE 3D IMAGE RECONSTRUCTION SOFTWARE  

E-print Network

USERS GUIDE FOR ASPIRE 3D IMAGE RECONSTRUCTION SOFTWARE. Jeffrey A. Fessler, Communications & Signal ... Approved for public release; distribution unlimited. This is a user's guide for the iterative 3D image reconstruction portion of the ASPIRE software suite.

Fessler, Jeffrey A.

33

Imaging a Sustainable Future in 3D  

NASA Astrophysics Data System (ADS)

It is the intention of this paper to contribute to a sustainable future by providing objective object information based on 3D photography, as well as by promoting 3D photography not only for scientists but also for amateurs. Because this article is presented by CIPA Task Group 3 on "3D Photographs in Cultural Heritage", the presented samples are masterpieces of historic as well as of current 3D photography, concentrating on cultural heritage. In addition to a report on exemplary access to international archives of 3D photographs, samples of new 3D photographs taken with modern 3D cameras, as well as with a ground-based high-resolution XLITE staff camera, and also 3D photographs taken from a captive balloon and with civil drone platforms, are dealt with. To advise on optimally suited 3D methodology, as well as to catch new trends in 3D, an updated synoptic overview of 3D visualization technology, without claiming completeness, has been carried out as the result of a systematic survey. In this respect, e.g., today's lasered crystals might be "early bird" products in 3D which, due to their lack of resolution, contrast and color, recall the stage of the invention of photography.

Schuhr, W.; Lee, J. D.; Kanngieser, E.

2012-07-01

34

Progress in 3D imaging and display by integral imaging  

NASA Astrophysics Data System (ADS)

Three-dimensionality is currently considered an important added value in imaging devices, and therefore the search for an optimum 3D imaging and display technique is a hot topic that is attracting important research efforts. As their main added value, 3D monitors should provide observers with different perspectives of a 3D scene by simply varying the head position. Three-dimensional imaging techniques have the potential to establish a future mass market in the fields of entertainment and communications. Integral imaging (InI), which can capture true 3D color images, has been seen as the right technology for 3D viewing by audiences of more than one person. Due to its advanced degree of development, InI technology could be ready for commercialization in the coming years. This development is the result of a strong research effort performed over the past few years by many groups. Since integral imaging is still an emerging technology, the first aim of the "3D Imaging and Display Laboratory" at the University of Valencia has been a thorough study of the principles that govern its operation. It is remarkable that some of these principles have been recognized and characterized by our group. Other contributions of our research have addressed some of the classical limitations of InI systems, like the limited depth of field (in pickup and in display), the poor axial and lateral resolution, the pseudoscopic-to-orthoscopic conversion, the production of 3D images with continuous relief, and the limited range of viewing angles of InI monitors.

Martinez-Cuenca, R.; Saavedra, G.; Martinez-Corral, M.; Pons, A.; Javidi, B.

2009-05-01

35

Volumetric Image Mining Based on Decomposition and Graph Analysis: An Application to Retinal  

E-print Network

the classification of 3-D Optical Coherence Tomography (OCT) retinal images according to whether ... Volumetric Image Mining Based on Decomposition and Graph Analysis: An Application to Retinal Optical Coherence Tomography. Abdulrahman Albarrak, Frans Coenen, Yalin Zheng and Wen Yu, Department

Coenen, Frans

36

3-D Volume Imaging for Dentistry: A New Dimension  

Microsoft Academic Search

The use of computed tomography for dental imaging procedures has increased recently. Use of CT for even seemingly routine diagnosis and treatment procedures suggests that the desire for 3-D imaging is more than a current trend but rather a shift toward a future of dimensional volume imaging. Recognizing this shift, several imaging manufacturers recently have developed 3-D imaging devices

Robert A. Danforth; Ivan Dus; James Mah

2003-01-01

37

Toward a compact underwater structured light 3-D imaging system  

E-print Network

A compact underwater 3-D imaging system based on the principles of structured light was created for classroom demonstration and laboratory research purposes. The 3-D scanner design was based on research by the Hackengineer ...

Dawson, Geoffrey E

2013-01-01

38

3D imaging system for biometric applications  

Microsoft Academic Search

There is a growing interest in the use of 3D data for many new applications beyond traditional metrology areas. In particular, using 3D data to obtain shape information of both people and objects for applications ranging from identification to game inputs does not require high degrees of calibration or resolutions in the tens of micron range, but does require a

Kevin Harding; Gil Abramovich; Vijay Paruchura; Swaminathan Manickam; Arun Vemury

2010-01-01

39

Human body 3D imaging by speckle texture projection photogrammetry  

Microsoft Academic Search

Describes a non-contact optical sensing technology called C3D that is based on speckle texture projection photogrammetry. C3D has been applied to capturing all-round 3D models of the human body of high dimensional accuracy and photorealistic appearance. The essential strengths and limitation of the C3D approach are presented and the basic principles of this stereo-imaging approach are outlined, from image capture

J. Paul Siebert; Stephen J. Marshall

2000-01-01

40

3D reconstruction of the operating field for image overlay in 3D-endoscopic surgery.  

E-print Network

and are not designed for robotically-assisted surgery. Because of the displacement of organs during operation ... 3D reconstruction of the operating field for image overlay in 3D-endoscopic surgery. Fabien Mourgues, Frédéric Devernay, Ève Coste-Manière, CHIR Medical Robotics Team, www.inria.fr/chir, INRIA, BP

Boyer, Edmond

41

Digital tracking and control of retinal images  

NASA Astrophysics Data System (ADS)

Laser-induced retinal lesions are used to treat a variety of eye disorders such as diabetic retinopathy and retinal tears. An instrumentation system has been developed to track a specific lesion coordinate on the retinal surface and provide corrective signals to maintain laser position on the coordinate. High-resolution retinal images are acquired via a CCD camera coupled to a fundus camera and video frame grabber. Optical filtering and histogram modification are used to enhance the retinal vessel network against the lighter retinal background. Six distinct retinal landmarks are tracked on the high contrast image obtained from the frame grabber using 2D blood vessel templates. An overview of the robotic laser system design is followed by implementation and testing of a development system for proof of concept and, finally, specifications for a real-time system are provided.
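A minimal sketch of the template-based landmark tracking described above: a small vessel-pattern template is located in a new retinal frame by normalized cross-correlation and the displacement is reported. The frames, the template location, and the simulated eye motion are placeholder assumptions.

```python
# Template-matching sketch for tracking a retinal landmark between frames.
import numpy as np
import cv2

frame_prev = (np.random.rand(480, 640) * 255).astype(np.uint8)   # placeholder retinal image
template = frame_prev[200:232, 300:332].copy()                    # 32x32 vessel landmark patch
frame_next = np.roll(frame_prev, shift=(3, -2), axis=(0, 1))      # simulated eye motion

# Normalized cross-correlation search for the landmark in the new frame.
score = cv2.matchTemplate(frame_next, template, cv2.TM_CCOEFF_NORMED)
_, max_val, _, max_loc = cv2.minMaxLoc(score)
dx, dy = max_loc[0] - 300, max_loc[1] - 200
print(f"landmark moved by ({dx}, {dy}) pixels, score={max_val:.2f}")
```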

Barrett, Steven F.; Jerath, Maya R.; Rylander, Henry G.; Welch, Ashley J.

1994-01-01

42

3D ultrasound imaging for prosthesis fabrication and diagnostic imaging  

SciTech Connect

The fabrication of a prosthetic socket for a below-the-knee amputee requires knowledge of the underlying bone structure in order to provide pressure relief for sensitive areas and support for load bearing areas. The goal is to enable the residual limb to bear pressure with greater ease and utility. Conventional methods of prosthesis fabrication are based on limited knowledge about the patient's underlying bone structure. A 3D ultrasound imaging system was developed at Sandia National Laboratories. The imaging system provides information about the location of the bones in the residual limb along with the shape of the skin surface. Computer assisted design (CAD) software can use this data to design prosthetic sockets for amputees. Ultrasound was selected as the imaging modality. A computer model was developed to analyze the effect of the various scanning parameters and to assist in the design of the overall system. The 3D ultrasound imaging system combines off-the-shelf technology for image capturing, custom hardware, and control and image processing software to generate two types of image data -- volumetric and planar. Both volumetric and planar images reveal definition of skin and bone geometry with planar images providing details on muscle fascial planes, muscle/fat interfaces, and blood vessel definition. The 3D ultrasound imaging system was tested on 9 unilateral below-the-knee amputees. Image data was acquired from both the sound limb and the residual limb. The imaging system was operated in both volumetric and planar formats. An x-ray CT (Computed Tomography) scan was performed on each amputee for comparison. Results of the test indicate beneficial use of ultrasound to generate databases for fabrication of prostheses at a lower cost and with better initial fit as compared to manually fabricated prostheses.

Morimoto, A.K.; Bow, W.J.; Strong, D.S. [and others]

1995-06-01

43

3D Shaded Graphics in Radiotherapy and Diagnostic Imaging

E-print Network

of Computer Science, New West Hall 035 A, Chapel Hill, N.C. 27514. 3D Shaded Graphics in Radiotherapy ... slice-by-slice techniques, we are developing methods that operate directly in 3D (or in 2D for single slices) and begin ... 3D Shaded Graphics in Radiotherapy and Diagnostic Imaging, Technical Report 86-004, S. M. Pi

North Carolina at Chapel Hill, University of

44

Original Research Accelerated 3D MERGE Carotid Imaging Using  

E-print Network

Original Research: Accelerated 3D MERGE Carotid Imaging Using Compressed Sensing With a Hidden Markov Tree model ..., PhD, and Krishna S. Nayak, PhD. Purpose: To determine the potential for accelerated 3D carotid ... analysis along with image quality. Results: Rate-4.5 acceleration with HMT model-based CS provided image

Southern California, University of

45

A 3D image analysis tool for SPECT imaging  

NASA Astrophysics Data System (ADS)

We have developed semi-automated and fully-automated tools for the analysis of 3D single-photon emission computed tomography (SPECT) images. The focus is on the efficient boundary delineation of complex 3D structures that enables accurate measurement of their structural and physiologic properties. We employ intensity based thresholding algorithms for interactive and semi-automated analysis. We also explore fuzzy-connectedness concepts for fully automating the segmentation process. We apply the proposed tools to SPECT image data capturing variation of gastric accommodation and emptying. These image analysis tools were developed within the framework of a noninvasive scintigraphic test to measure simultaneously both gastric emptying and gastric volume after ingestion of a solid or a liquid meal. The clinical focus of the particular analysis was to probe associations between gastric accommodation/emptying and functional dyspepsia. Employing the proposed tools, we outline effectively the complex three dimensional gastric boundaries shown in the 3D SPECT images. We also perform accurate volume calculations in order to quantitatively assess the gastric mass variation. This analysis was performed both with the semi-automated and fully-automated tools. The results were validated against manual segmentation performed by a human expert. We believe that the development of an automated segmentation tool for SPECT imaging of the gastric volume variability will allow for other new applications of SPECT imaging where there is a need to evaluate complex organ function or tumor masses.
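A simple sketch of the intensity-thresholding delineation and volume measurement mentioned above; the synthetic SPECT volume, the 40%-of-maximum threshold, and the 4 mm voxel size are illustrative assumptions, not the study's protocol.

```python
# Intensity thresholding, largest-component selection, and volume estimation.
import numpy as np
from scipy import ndimage

voxel_volume_ml = 0.4 ** 3                       # assumed isotropic 4 mm voxels, in mL
spect = np.random.rand(64, 64, 64) ** 4          # placeholder SPECT count volume

# Threshold at a fraction of the maximum intensity (a common SPECT heuristic).
mask = spect > 0.4 * spect.max()

# Keep only the largest connected component as the delineated organ region.
labels, n = ndimage.label(mask)
sizes = ndimage.sum(mask, labels, index=range(1, n + 1))
organ = labels == (int(np.argmax(sizes)) + 1)

print("estimated organ volume: %.2f mL" % (organ.sum() * voxel_volume_ml))
```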

Kontos, Despina; Wang, Qiang; Megalooikonomou, Vasileios; Maurer, Alan H.; Knight, Linda C.; Kantor, Steve; Fisher, Robert S.; Simonian, Hrair P.; Parkman, Henry P.

2005-04-01

46

Eye-safe laser radar 3D imaging  

NASA Astrophysics Data System (ADS)

This paper reviews the progress of Advanced Scientific Concepts, Inc. (ASC) flash ladar 3-D imaging systems and presents their newest single-pulse 128 x 128 flash ladar 3-D images. The heart of the system, a multifunction ROIC based upon both analog and digital processing, is described. Of particular interest is the obscuration penetration function, which is illustrated with a series of images. An image-tube-based low-laser-signal 3-D FPA is also presented. A small handheld version of the 3-D camera is illustrated which uses an InGaAs lensed PIN detector array indium bump bonded to the ROIC.

Stettner, Roger; Bailey, Howard; Richmond, Richard D.

2004-09-01

47

3D-image-based collaboration system for telemedicine  

NASA Astrophysics Data System (ADS)

This paper describes a 3D-image-based collaboration system for telemedicine. The system enables doctors to observe 3D images transferred from an image database over a network, and also has a collaboration function that shares each doctor's 3D viewing position through 3D image display and data transfer over the network. It is built in a server-client style: the server transfers data from a 3D image database and a collaboration record database, and also controls data transfer between doctors while the system is in use. The client has a user interface with an operation part for selecting viewing parameters and sending comments, and a display part that renders the 3D image by volume rendering and shows each observer's 3D position and direction using an avatar. Doctors can use this system to collaborate by sending comments to each other while viewing the 3D image. We implemented the system on different platforms, including UNIX workstations and PCs, and also supply a Web-browser-based user interface to accommodate various user environments. We applied it to 3D MRI images of the head for examining the structure of soft tissues and tumors in detail. We also evaluated the performance of the system over networks including a LAN and the Internet, and the experimental results show that the system is useful.

Kobayashi, Toshihiko; Satou, Syuji; Jiang, Hao; Fujii, Tetsuya; Sugou, Nobuo; Mito, Toshiaki; Shibata, Iekado

2000-10-01

48

Acquisition and reconstruction of 3-D ultrasonic imaging  

Microsoft Academic Search

This paper describes a three-dimensional liver imaging method based on orthogonal sectional images. These images were acquired by using a mechanical scanning transducer on the liver. The images were recorded and transferred to our computer. Contour curves were extracted from each image and reconstructed as a 3-D Arterial liver image. The segmental localization of hepatic metastases was evaluated based on

Ridha Ben Younes; Paul Rohmer; Khaldoun Elkhaldi; Annie Pousse; Roland Bidet

1992-01-01

49

Automatic detection, segmentation and characterization of retinal horizontal neurons in large-scale 3D confocal imagery  

NASA Astrophysics Data System (ADS)

Automatic analysis of neuronal structure from wide-field-of-view 3D image stacks of retinal neurons is essential for statistically characterizing neuronal abnormalities that may be causally related to neural malfunctions or may be early indicators for a variety of neuropathies. In this paper, we study classification of neuron fields in large-scale 3D confocal image stacks, a challenging neurobiological problem because of the low spatial resolution imagery and presence of intertwined dendrites from different neurons. We present a fully automated, four-step processing approach for neuron classification with respect to the morphological structure of their dendrites. In our approach, we first localize each individual soma in the image by using morphological operators and active contours. By using each soma position as a seed point, we automatically determine an appropriate threshold to segment dendrites of each neuron. We then use skeletonization and network analysis to generate the morphological structures of segmented dendrites, and shape-based features are extracted from network representations of each neuron to characterize the neuron. Based on qualitative results and quantitative comparisons, we show that we are able to automatically compute relevant features that clearly distinguish between normal and abnormal cases for postnatal day 6 (P6) horizontal neurons.
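A loose 2D sketch of the dendrite segmentation and skeletonization stages of the pipeline described above, using scikit-image; the synthetic image and the global Otsu threshold stand in for the paper's per-soma adaptive thresholds, and no network analysis is included.

```python
# Segment bright dendrite-like structures, then reduce them to 1-pixel skeletons.
import numpy as np
from skimage.filters import gaussian, threshold_otsu
from skimage.morphology import skeletonize

# Placeholder confocal slice: two bright branching lines on low-level noise.
img = np.random.rand(256, 256) * 0.1
rr = np.arange(256)
img[128, :] += 1.0                    # "dendrite" 1 (horizontal)
img[rr, rr] += 1.0                    # "dendrite" 2 (diagonal)
img = gaussian(img, sigma=1.5)

# Threshold the bright structures, then skeletonize the binary mask.
binary = img > threshold_otsu(img)
skeleton = skeletonize(binary)

# A crude morphology feature: total skeleton length in pixels.
print("skeleton pixels:", int(skeleton.sum()))
```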

Karakaya, Mahmut; Kerekes, Ryan A.; Gleason, Shaun S.; Martins, Rodrigo A. P.; Dyer, Michael A.

2011-03-01

50

3D Imaging with Holographic Tomography  

NASA Astrophysics Data System (ADS)

There are two main types of tomography that enable the 3D internal structures of objects to be reconstructed from scattered data. The commonly known computerized tomography (CT) gives good results in the x-ray wavelength range where the filtered back-projection theorem and Radon transform can be used. These techniques rely on the Fourier projection-slice theorem where rays are considered to propagate straight through the object. Another type of tomography, called 'diffraction tomography', applies in applications in optics and acoustics where diffraction and scattering effects must be taken into account. The latter proves to be a more difficult problem, as light no longer travels straight through the sample. Holographic tomography is a popular way of performing diffraction tomography and there has been active experimental research on reconstructing complex refractive index data using this approach recently. However, there are two distinct ways of doing tomography: either by rotation of the object or by rotation of the illumination while fixing the detector. The difference between these two setups is intuitive but needs to be quantified. From a Fourier optics and information transformation point of view, we use 3D transfer function analysis to quantitatively describe how spatial frequencies of the object are mapped to the Fourier domain. We first employ a paraxial treatment by calculating the Fourier transform of the defocused OTF. The shape of the calculated 3D CTF for tomography, by scanning the illumination in one direction only, takes on a form that we might call a 'peanut,' compared to the case of object rotation, where a diablo is formed, the peanut exhibiting significant differences and non-isotropy. In particular, there is a line singularity along one transverse direction. Under high numerical aperture conditions, the paraxial treatment is not accurate, and so we make use of 3D analytical geometry to calculate the behaviour in the non-paraxial case. This time, we obtain a similar peanut, but without the line singularity.
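For reference, the Fourier projection-slice theorem that underlies the straight-ray CT case mentioned above can be stated in 2D as follows; the notation is generic rather than taken from the paper.

```latex
% 2D Fourier projection-slice theorem (straight-ray CT case referred to above).
% P_\theta is the parallel projection (Radon transform) of f at angle \theta,
% and \hat{f} is the 2D Fourier transform of f.
\[
  P_\theta(s) = \int_{-\infty}^{\infty}
      f\bigl(s\cos\theta - t\sin\theta,\; s\sin\theta + t\cos\theta\bigr)\, dt ,
\qquad
  \widehat{P_\theta}(\omega)
  = \int_{-\infty}^{\infty} P_\theta(s)\, e^{-i 2\pi \omega s}\, ds
  = \hat{f}\bigl(\omega\cos\theta,\; \omega\sin\theta\bigr).
\]
```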

Sheppard, Colin J. R.; Kou, Shan Shan

2010-04-01

51

Integrated computational system for portable retinal imaging  

E-print Network

This thesis introduces a system to improve image quality obtained from a low-light CMOS camera-specifically designed to image the surface of the retina. The retinal tissue, as well as having various diseases of its own, ...

Boggess, Jason (Jason Robert)

2012-01-01

52

Reconstruction of 3D Surface Temperature from IR images  

Microsoft Academic Search

The aim of the present work is to develop a camera calibration technique for high-accuracy 3D vision metrology using IR thermal imagers. The final task is the reconstruction of 3D surface temperature from IR images (and, as a consequence, thermal fluxes) in wind tunnels. Particular attention is given to the application in hypersonic wind tunnels.

G. Cardone; S. Discetti

53

3D reconstruction of articulated objects from uncalibrated images  

Microsoft Academic Search

The goal of computing realistic 3-D models from image sequences is still a challenging problem. In recent years the demand for realistic reconstruction and modeling of objects and human bodies is increasing both for animation and medical applications. In this paper a system for the reconstruction of 3-D models of articulated objects, like human bodies, from uncalibrated images is presented.

Fabio Remondino

2002-01-01

54

Model-Based Interpretation of 3D Medical Images  

Microsoft Academic Search

The automatic segmentation and labelling of anatomical structures in 3D medical images is a challenging task of practical importance. We describe a model-based approach which allows robust and accurate interpretation using explicit anatomical knowledge. Our method is based on the extension to 3D of Point Distribution Mo- dels (PDMs) and associated image search algorithms. A combination of global, Genetic Algorithm

A. Hill; A. Thornham; C. J. Taylor

1993-01-01

55

Multiprocessing of Anisotropic Nonlinear Diffusion for Filtering 3D Images  

Microsoft Academic Search

This article describes and analyzes the parallelization of the Anisotropic Nonlinear Diffusion (AND) for filtering 3D images. AND is one of the most powerful denoising techniques in the field of computer vision. This technique consists in resolving the equation of diffusion tightly coupled with a massive set of eigensystems. Denoising large 3D images in biomedicine and structural cellular biology by

Siham Tabik; Ester M. Garzón; Inmaculada García; José-Jesús Fernández

2006-01-01

56

Recording of high-quality 3D images by holoprinter  

Microsoft Academic Search

A holographic (3-D) printer (holoprinter) is under development as a peripheral device for 3-D image processing systems. For the automatic printing of distortion-free images, a 'multidot' recording method has been developed. In this paper, the technique for the synthesis of high-quality and gray-level images is presented, and experimental results are demonstrated. The method to record full-color images is also introduced.

Masahiro Yamaguchi; Hideaki Endoh; Toshio Honda; Nagaaki Ohyama

1993-01-01

57

3D model-based still image object categorization  

NASA Astrophysics Data System (ADS)

This paper proposes a novel recognition scheme for semantic labeling of 2D objects present in still images. The principle consists of matching unknown 2D objects with categorized 3D models in order to transfer the semantics of the 3D model to the image object. We tested our new recognition framework by using the MPEG-7 and Princeton 3D model databases to label unknown images randomly selected from the web. The results obtained show promising performance, with recognition rates up to 84%, which opens interesting perspectives in terms of semantic metadata extraction from still images and videos.

Petre, Raluca-Diana; Zaharia, Titus

2011-09-01

58

3D laser scanner system using high dynamic range imaging  

NASA Astrophysics Data System (ADS)

Because of its high measuring speed, moderate accuracy, low cost and robustness in the industrial field, 3D laser scanning has been widely used in a variety of applications. However, the measurement of a 3D profile of a high dynamic range (HDR) brightness surface such as a partially highlighted object or a partial specular reflection remains one of the most challenging problems. This difficulty has limited the adoption of such scanner systems. In this paper, an optical imaging system based on a high-resolution liquid crystal on silicon (LCoS) device and an image sensor (CCD or CMOS) was built to adjust the image's brightness pixel by pixel as required. The radiance value of the image captured by the image sensor is constrained to lie within the dynamic range of the sensor after an adaptive algorithm of pixel mapping between the LCoS mask plane and image plane through the HDR imaging system is added. Thus, an HDR image was reconstructed by the LCoS mask and the CCD image on this system. The significant difference between the proposed system and a traditional 3D laser scanner system is that the HDR image was used to calibrate and calculate the 3D profile coordinate. Experimental results show that HDR imaging can enhance 3D laser scanner system environmental adaptability and improve the accuracy of 3D profile measurement.

Zhongdong, Yang; Peng, Wang; Xiaohui, Li; Changku, Sun

2014-03-01

59

Highway 3D model from image and lidar data  

NASA Astrophysics Data System (ADS)

We present a new method of highway 3-D model construction based on feature extraction in highway images and LIDAR data. We describe the processing of road coordinate data that connects the image frames to the coordinates of the elevation data. Image processing methods are used to extract sky, road, and ground regions as well as significant objects (such as signs and building fronts) in the roadside for the 3D model. LIDAR data are interpolated and processed to extract the road lanes as well as other features such as trees, ditches, and elevated objects to form the 3D model. 3D geometry reasoning is used to match the image features to the 3D model. Results from successive frames are integrated to improve the final model.

Chen, Jinfeng; Chu, Henry; Sun, Xiaoduan

2014-05-01

60

Holographic 3D imaging - methods and applications  

NASA Astrophysics Data System (ADS)

In this paper, different techniques of three-dimensional holographic imaging with respect to particular applications are discussed. The realization techniques based on image synthesis at the hologram plane and at the observer's eye pupil plane are presented and compared with classical image holography approach. Each technique is accompanied with an example of the hologram, that was created using some of the in-house developed devices.

Svoboda, J.; Škereň, M.; Květoň, M.; Fiala, P.

2013-02-01

61

3D laser imaging for concealed object identification  

NASA Astrophysics Data System (ADS)

This paper deals with new optical non-conventional 3D laser imaging. Optical non-conventional imaging explores the advantages of laser imaging to form a three-dimensional image of the scene. 3D laser imaging can be used for three-dimensional medical imaging, topography, surveillance, and robotic vision because of its ability to detect and recognize objects. In this paper, we present 3D laser imaging for concealed object identification. The objective of this new 3D laser imaging is to provide the user a complete 3D reconstruction of the concealed object from available 2D data that are limited in number and have low representativeness. The 2D laser data used in this paper come from simulations based on the calculation of the laser interactions with the different interfaces of the scene of interest, and from experimental results. We show the global 3D reconstruction procedures capable of separating objects from foliage and reconstructing a three-dimensional image of the considered object. In this paper, we present examples of reconstruction and completion of three-dimensional images and we analyse the different parameters of the identification process, such as resolution, camouflage scenario, noise impact and lacunarity degree.

Berechet, Ion; Berginc, Gérard; Berechet, Stefan

2014-09-01

62

Dynamic holographic 3-D image projection.  

PubMed

The display of dynamic holographic images is possible by computing the hologram of objects in a three-dimensional scene and then transcribing the two-dimensional digital hologram onto a digital micromirror system illuminated with coherent light. Proof-of-principle instruments that reconstruct real and virtual images are described. The underlying process, its characteristics, limitations and utility are discussed. PMID:19461750

Huebschman, Michael; Munjuluri, Bala; Garner, Harold

2003-03-10

63

Acoustic Imaging in 3D Frank Natterer  

E-print Network

of other imaging techniques such as CT or MRI. Recently, great efforts have been made to produce ultrasound images of higher quality. Scanners for ultrasound mammography have been developed by TechniScan in Salt Lake City and Karmanos Cancer Research in Detroit. These scanners produce high quality pictures

Münster, Westfälische Wilhelms-Universität

64

Speckle reducing anisotropic diffusion for 3D ultrasound images  

Microsoft Academic Search

This paper presents an approach for reducing speckle in three dimensional (3D) ultrasound images. A 2D speckle reduction technique, speckle reducing anisotropic diffusion (SRAD), is explored and extended to 3D. 3D SRAD is advantageous in that, like 2D SRAD, it keeps the advantages of the conventional anisotropic diffusion and the traditional speckle reducing filter, the Lee filter, by exploiting the

Qingling Sun; John A. Hossack; Jinshan Tang; Scott T. Acton

2004-01-01

65

Content-based 3D Neuroradiologic Image Retrieval: Preliminary Results  

E-print Network

(1) directly dealing with multimodal 3D images (MR, CT); (2) image similarity based on anatomical ... which is especially true in neurology. Likewise, neuroradiologic images become a natural candidate ... normal and pathological (bleed, stroke, tumor) neuroradiologic MR/CT images as an application domain

66

Digital Holography and 3D Imaging: introduction.  

PubMed

Twenty-six papers are presented, including several review articles, on topics such as integral imaging and holographic display systems, computer generated holograms, digital holographic microscopy, and biomedical and other applications in particle tracking, security, and materials science. PMID:25322115

Kim, Myung K; Cheng, Chau-Jern; Kim, Jinwoong; Osten, Wolfgang; Picart, Pascal; Yoshikawa, Hiroshi

2014-09-20

67

Image Processing Software for 3D Light Microscopy  

Microsoft Academic Search

Advances in microscopy now enable researchers to easily acquire multi-channel three-dimensional (3D) images and 3D time series (4D). However, processing, analyzing, and displaying this data can often be difficult and time- consuming. We discuss some of the software tools and techniques that are available to accomplish these tasks.

Jeffrey L. Clendenon; Jason M. Byars; Deborah P. Hyink

2006-01-01

68

Imaging hypoxia using 3D photoacoustic spectroscopy  

NASA Astrophysics Data System (ADS)

Purpose: The objective is to develop a multivariate in vivo hemodynamic model of tissue oxygenation (MiHMO2) based on 3D photoacoustic spectroscopy. Introduction: Low oxygen levels, or hypoxia, deprive cancer cells of oxygen and confer resistance to irradiation, some chemotherapeutic drugs, and oxygen-dependent therapies (phototherapy), leading to treatment failure and poor disease-free and overall survival. For example, clinical studies show that patients with breast carcinomas, cervical cancer, and head and neck carcinomas (HNC) are more likely to suffer local recurrence and metastasis if their tumors are hypoxic. A novel method to noninvasively measure tumor hypoxia, identify its type, and monitor its heterogeneity is devised by measuring tumor hemodynamics, MiHMO2. Material and Methods: Simulations are performed to compare tumor pO2 levels and hypoxia based on physiology (perfusion, fractional plasma volume, fractional cellular volume) and hemoglobin status (oxygen saturation and hemoglobin concentration), using in vivo measurements of breast, prostate, and ovarian tumors. Simulations of MiHMO2 are performed to assess the influence of scanner resolutions and different mathematical models of oxygen delivery. Results: Sensitivity of pO2 and hypoxic fraction to photoacoustic scanner resolution and dependencies on model complexity will be presented using hemodynamic parameters for different tumors. Conclusions: Photoacoustic CT spectroscopy provides a unique ability to monitor hemodynamic and cellular physiology in tissue, which can be used to longitudinally monitor tumor oxygenation and its response to anti-angiogenic therapies.

Stantz, Keith M.

2010-02-01

69

3-D capacitance density imaging system  

DOEpatents

A three-dimensional capacitance density imaging of a gasified bed or the like in a containment vessel is achieved using a plurality of electrodes provided circumferentially about the bed in levels and along the bed in channels. The electrodes are individually and selectively excited electrically at each level to produce a plurality of current flux field patterns generated in the bed at each level. The current flux field patterns are suitably sensed and a density pattern of the bed at each level determined. By combining the determined density patterns at each level, a three-dimensional density image of the bed is achieved. 7 figs.

Fasching, G.E.

1988-03-18

70

Accommodation response measurements for integral 3D image  

NASA Astrophysics Data System (ADS)

We measured accommodation responses under integral photography (IP), binocular stereoscopic, and real object display conditions, under both binocular and monocular viewing. The equipment we used was an optometric device and a 3D display. We developed the 3D display for IP and binocular stereoscopic images that comprises a high-resolution liquid crystal display (LCD) and a high-density lens array. The LCD has a resolution of 468 dpi and a diagonal size of 4.8 inches. The high-density lens array comprises 106 x 69 micro lenses that have a focal length of 3 mm and a diameter of 1 mm. The lenses are arranged in a honeycomb pattern. The 3D display was positioned 60 cm from the observer under the IP and binocular stereoscopic display conditions. The target was presented at eight depth positions relative to the 3D display: 15, 10, and 5 cm in front of the 3D display, on the 3D display panel, and 5, 10, 15 and 30 cm behind the 3D display under the IP and binocular stereoscopic display conditions. Under the real object display condition, the target was displayed on the 3D display panel, and the 3D display was placed at the eight positions. The results suggest that the IP image induced more natural accommodation responses compared to the binocular stereoscopic image. The accommodation responses to the IP image were weaker than those to a real object; however, they showed a tendency similar to that of the real object under the two viewing conditions. Therefore, IP can induce accommodation to the depth positions of 3D images.

Hiura, H.; Mishina, T.; Arai, J.; Iwadate, Y.

2014-03-01

71

3D reconstruction, visualization, and measurement of MRI images  

NASA Astrophysics Data System (ADS)

This paper primarily focuses on manipulating 2D medical image data, which often come from Magnetic Resonance imaging, and reconstructing them into 3D volumetric images. Clinical diagnosis and therapy planning using 2D medical images can become a torturous problem for a physician. For example, our 2D breast images of a patient mimic a breast carcinoma. In reality, the patient has 'fat necrosis', a benign breast lump. Physicians need powerful, accurate and interactive 3D visualization systems to extract anatomical details and examine the root cause of the problem. Our proposal overcomes the above mentioned limitations through the development of volume rendering algorithms and extensive use of parallel, distributed and neural network computing strategies. MRI coupled with 3D imaging provides a reliable method for quantifying 'fat necrosis' characteristics and progression. Our 3D interactive application enables a physician to compute spatial measurements and quantitative evaluations and, from a general point of view, to use all the 3D interactive tools that can help to plan a complex surgical operation. The capability of our medical imaging application can be extended to reconstruct and visualize 3D volumetric brain images. Our application promises to be an important tool in neurological surgery planning and in time and cost reduction.

Pandya, Abhijit S.; Patel, Pritesh P.; Desai, Mehul B.; Desai, Paramtap

1999-03-01

72

Data Processing for 3D Mass Spectrometry Imaging  

NASA Astrophysics Data System (ADS)

Data processing for three dimensional mass spectrometry (3D-MS) imaging was investigated, starting with a consideration of the challenges in its practical implementation using a series of sections of a tissue volume. The technical issues related to data reduction, 2D imaging data alignment, 3D visualization, and statistical data analysis were identified. Software solutions for these tasks were developed using functions in MATLAB. Peak detection and peak alignment were applied to reduce the data size, while retaining the mass accuracy. The main morphologic features of tissue sections were extracted using a classification method for data alignment. Data insertion was performed to construct a 3D data set with spectral information that can be used for generating 3D views and for data analysis. The imaging data previously obtained for a mouse brain using desorption electrospray ionization mass spectrometry (DESI-MS) imaging have been used to test and demonstrate the new methodology.

Xiong, Xingchuang; Xu, Wei; Eberlin, Livia S.; Wiseman, Justin M.; Fang, Xiang; Jiang, You; Huang, Zejian; Zhang, Yukui; Cooks, R. Graham; Ouyang, Zheng

2012-06-01

73

Acoustic 3D imaging of dental structures  

SciTech Connect

Our goals for the first year of this three-dimensional electrodynamic imaging project were to determine how to combine flexible, individually addressable arrays; preprocessing of array source signals; spectral extrapolation of received signals; acoustic tomography codes; and acoustic propagation modeling code. We investigated flexible, individually addressable acoustic array materials to find the best match in power, sensitivity and cost, and settled on PVDF sheet arrays and 3-1 composite material.

Lewis, D.K. [Lawrence Livermore National Lab., CA (United States); Hume, W.R. [California Univ., Los Angeles, CA (United States); Douglass, G.D. [California Univ., San Francisco, CA (United States)

1997-02-01

74

Retinal image quality, reading and myopia  

Microsoft Academic Search

Analysis was undertaken of the retinal image characteristics of the best-spectacle-corrected eyes of progressing myopes (n=20, mean age=22 years; mean spherical equivalent=-3.84 D) and a control group of emmetropes (n=20, mean age=23 years; mean spherical equivalent=0.00 D) before and after a 2 h reading task. Retinal image quality was calculated based upon wavefront measurements taken with a Hartmann-Shack sensor with fixation on both a

Michael J. Collins; Tobias Buehren; D. Robert Iskander

2006-01-01

75

Quality of Visual Experience for 3D Presentation - Stereoscopic Image  

Microsoft Academic Search

Three-dimensional television (3DTV) technology is becoming increasingly popular, as it can provide a high quality and immersive experience to end users. Stereoscopic imaging is a technique capable of recording 3D visual information or creating the illusion of depth. Most 3D compression schemes are developed for stereoscopic images, including applying traditional two-dimensional (2D) compression techniques and considering theories of binocular suppression as

Junyong You; Gangyi Jiang; Liyuan Xing; Andrew Perkis

76

Imaging and 3D morphological analysis of collagen fibrils.  

PubMed

The recent boom in multiphoton imaging of collagen fibrils by means of second harmonic generation microscopy generates the need for the development and automation of quantitative methods for image analysis. Standard approaches sequentially analyse two-dimensional (2D) slices to gain knowledge on the spatial arrangement and dimension of the fibrils, whereas the reconstructed three-dimensional (3D) image yields better information about these characteristics. In this work, a 3D analysis method is proposed for second harmonic generation images of collagen fibrils, based on a recently developed 3D fibre quantification method. This analysis uses operators from mathematical morphology. The fibril structure is scanned with a directional distance transform. Inertia moments of the directional distances yield the main fibre orientation, corresponding to the main inertia axis. The combination of directional distances and fibre orientation delivers a geometrical estimate of the fibre radius. The results include local maps as well as global distributions of orientation and radius of the fibrils over the 3D image. They also provide a segmentation of the image into foreground and background, as well as a classification of the foreground pixels into the preferred orientations. This accurate determination of the spatial arrangement of the fibrils within a 3D data set will be most relevant in biomedical applications. It brings the possibility to monitor remodelling of collagen tissues upon a variety of injuries and to guide tissue engineering, because biomimetic 3D organizations and densities are required for better integration of implants. PMID:22670759
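The orientation estimate described above relies on inertia moments; the sketch below shows only the general principle of extracting a dominant direction from the second-moment (inertia) matrix of segmented foreground voxels. It is a simplified illustration under that assumption, not the published directional-distance method.

import numpy as np

def principal_orientation(mask: np.ndarray) -> np.ndarray:
    """Dominant 3D direction of the foreground voxels in a binary mask."""
    coords = np.argwhere(mask).astype(float)
    coords -= coords.mean(axis=0)                      # centre the point cloud
    inertia = coords.T @ coords / len(coords)          # 3x3 second-moment matrix
    eigenvalues, eigenvectors = np.linalg.eigh(inertia)
    return eigenvectors[:, -1]                         # axis of largest spread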

Altendorf, H; Decencière, E; Jeulin, D; De sa Peixoto, P; Deniset-Besseau, A; Angelini, E; Mosser, G; Schanne-Klein, M-C

2012-08-01

77

High resolution 3D "snapshot" ISAR imaging and feature extraction  

Microsoft Academic Search

We have developed a new formulation for three dimensional (3D) radar imaging of inverse synthetic aperture radar (ISAR) data based on recent developments in high resolution spectral estimation theory. Typically for non real-time applications, image formation is a two step process consisting of motion determination and image generation. The technique presented focuses on this latter process, and assumes the motion

J. T. Mayhan; M. L. Burrows; K. M. Cuomo; J. E. Piou

2001-01-01

78

Digital tracking and control of retinal images  

NASA Astrophysics Data System (ADS)

Laser induced retinal lesions are used to treat a variety of eye diseases such as diabetic retinopathy and retinal detachment. An instrumentation system has been developed to track a specific lesion coordinate on the retinal surface and provide corrective signals to maintain laser position on the coordinate. High resolution retinal images are acquired via a CCD camera coupled to a fundus camera and video frame grabber. Optical filtering and histogram modification are used to enhance the retinal vessel network against the lighter retinal background. Six distinct retinal landmarks are tracked on the high contrast image obtained from the frame grabber using two-dimensional blood vessel templates. The frame grabber is hosted on a 486 PC. The PC performs correction signal calculations using an exhaustive search on selected image portions. An X and Y laser correction signal is derived from the landmark tracking information and provided to a pair of galvanometer steered mirrors via a data acquisition and control subsystem. This subsystem also responds to patient inputs and the system monitoring lesion growth. This paper begins with an overview of the robotic laser system design followed by implementation and testing of a development system for proof of concept. The paper concludes with specifications for a real time system.

Barrett, Steven F.; Jerath, Maya R.; Rylander, Henry G., III; Welch, Ashley J.

1993-06-01

79

Imaging fault zones using 3D seismic image processing techniques  

NASA Astrophysics Data System (ADS)

Significant advances in the structural analysis of deep water structures, salt tectonics and extensional rift basins come from descriptions of fault system geometries imaged in 3D seismic data. However, even where seismic data are excellent, in most cases the trajectory of thrust faults is highly conjectural and significant uncertainty still exists as to the patterns of deformation that develop between the main fault segments, and even as to the fault architectures themselves. Moreover, structural interpretations that conventionally define faults by breaks and apparent offsets of seismic reflectors are commonly conditioned by a narrow range of theoretical models of fault behavior. For example, almost all interpretations of thrust geometries on seismic data rely on theoretical "end-member" behaviors where concepts such as strain localization or multilayer mechanics are simply avoided. Yet analogue outcrop studies confirm that such descriptions are commonly unsatisfactory and incomplete. In order to fill these gaps and improve the 3D visualization of deformation in the subsurface, seismic attribute methods are developed here in conjunction with conventional mapping of reflector amplitudes (Marfurt & Chopra, 2007). These signal processing techniques, recently developed and applied especially by the oil industry, use variations in the amplitude and phase of the seismic wavelet. These seismic attributes improve signal interpretation and are calculated and applied to the entire 3D seismic dataset. In this contribution we show 3D seismic examples of fault structures from gravity-driven deep-water thrust structures and extensional basin systems to indicate how 3D seismic image processing methods can not only improve the geometrical interpretation of the faults but also begin to map both strain and damage through amplitude/phase properties of the seismic signal. This is done by quantifying and delineating the short-range anomalies in the intensity of reflector amplitudes and collecting these into "disturbance geobodies". These seismic image processing methods represent a first efficient step toward the construction of a robust technique to investigate sub-seismic strain, mapping noisy deformed zones and displacement within subsurface geology (Dutzer et al., 2011; Iacopini et al., 2012). In all these cases, accurate fault interpretation is critical in applied geology for building a robust and reliable reservoir model, and is essential for further study of fault seal behavior and reservoir compartmentalization. It is also fundamental for understanding how deformation localizes within sedimentary basins, including the processes associated with active seismogenic faults and mega-thrust systems in subduction zones. References: Dutzer, J.F., Basford, H., Purves, S., 2009. Investigating fault sealing potential through fault relative seismic volume analysis. Petroleum Geology Conference Series 2010, 7:509-515; doi:10.1144/0070509. Marfurt, K.J., Chopra, S., 2007. Seismic attributes for prospect identification and reservoir characterization. SEG Geophysical Developments. Iacopini, D., Butler, R.W.H., Purves, S., 2012. Seismic imaging of thrust faults and structural damage: a visualization workflow for deepwater thrust belts. First Break, 30(5), 39-46.

Iacopini, David; Butler, Rob; Purves, Steve

2013-04-01

80

Imaging retinal mosaics in the living  

E-print Network

The first cells to be imaged in the living eye using adaptive optics technology were the cones (D. R. Williams and co-authors). Adaptive optics imaging of cone photoreceptors has provided unique insights for both basic scientists and clinicians. Recent advances in adaptive optics retinal imaging

81

Retinal imaging using adaptive optics technology.  

PubMed

Adaptive optics (AO) is a technology used to improve the performance of optical systems by reducing the effect of wavefront distortions. Retinal imaging using AO aims to compensate for higher order aberrations originating from the cornea and the lens by using a deformable mirror. The main application of AO retinal imaging has been to assess photoreceptor cell density, spacing, and mosaic regularity in normal and diseased eyes. Apart from photoreceptors, the retinal pigment epithelium, retinal nerve fiber layer, retinal vessel wall and lamina cribrosa can also be visualized with AO technology. Recent interest in AO technology in eye research has resulted in a growing number of reports and publications utilizing this technology in both animals and humans. With the first commercially available instruments, AO technology is being transformed from a research tool into a diagnostic instrument. The current challenges include imaging eyes with less than perfect optical media, the formation of normative databases for acquired images such as cone mosaics, and the cost of the technology. The opportunities for AO will include more detailed diagnosis, with the description of new findings in retinal diseases and glaucoma, as well as the expansion of AO into clinical trials, which has already started. PMID:24843304

Kozak, Igor

2014-04-01

82

SNR analysis of 3D magnetic resonance tomosynthesis (MRT) imaging  

NASA Astrophysics Data System (ADS)

In conventional 3D Fourier transform (3DFT) MR imaging, signal-to-noise ratio (SNR) is governed by the well-known relationship of being proportional to the voxel size and square root of the imaging time. Here, we introduce an alternative 3D imaging approach, termed MRT (Magnetic Resonance Tomosynthesis), which can generate a set of tomographic MR images similar to multiple 2D projection images in x-ray. A multiple-oblique-view (MOV) pulse sequence is designed to acquire the tomography-like images used in tomosynthesis process and an iterative back-projection (IBP) reconstruction method is used to reconstruct 3D images. SNR analysis is performed and shows that resolution and SNR tradeoff is not governed as with typical 3DFT MR imaging case. The proposed method provides a higher SNR than the conventional 3D imaging method with a partial loss of slice-direction resolution. It is expected that this method can be useful for extremely low SNR cases.

Kim, Min-Oh; Kim, Dong-Hyun

2012-03-01

83

Digital holography particle image velocimetry for 3D flow measurement  

NASA Astrophysics Data System (ADS)

A digital in-line holography recording system was used in holography particle image velocimetry for 3D flow measurement, making up a new full-field fluid mechanics experimental technique, DHPIV. In this experiment, the traditional holography film was replaced by a CCD chip that records the interference stripes directly without darkroom processing, and the virtual image slices at different positions were reconstructed by computation using the Fresnel-Kirchhoff integral from the digital image. A complex field signal filter was also applied in image reconstruction to achieve a thin depth of image field, which strongly affects the resolution of the vertical velocity component. Using the frame-straddle CCD technique, the 3D velocity was computed by 3D cross-correlation through space interrogation block matching over the reconstructed image slices with a digital complex field signal filter. The 3D velocity, vortex and iso-surface details, and the time evolution movie of a 3D flow field were then displayed by numerical computation and experimental measurement using this DHPIV method.

Wei, Runjie; Shen, Gongxin; Ding, Hanquan

2003-04-01

84

Enhancing retinal images by nonlinear registration  

E-print Network

Being able to image the human retina in high resolution opens a new era in many important fields, such as pharmacological research for retinal diseases and research on human cognition, the nervous system, metabolism and blood flow, to name a few. In this paper, we propose to share the knowledge acquired in the fields of optics and imaging in solar astrophysics in order to improve retinal imaging at very high spatial resolution, with the perspective of performing medical diagnosis. The main purpose would be to assist health care practitioners by enhancing retinal images and detecting abnormal features. We apply a nonlinear registration method using local correlation tracking to increase the field of view and follow structure evolutions using correlation techniques borrowed from solar astronomy expertise. Another purpose is to define tracers of movement, after analyzing local correlations, to follow the proper motions of an image from one moment to another, such as changes in optical flows that would be o...

Molodij, Guillaume; Glanc, Marie; Chenegros, Guillaume

2014-01-01

85

A 3D surface imaging system for assessing human obesity  

NASA Astrophysics Data System (ADS)

The increasing prevalence of obesity suggests a need to develop a convenient, reliable and economical tool for assessment of this condition. Three-dimensional (3D) body surface imaging has emerged as an exciting technology for estimation of body composition. This paper presents a new 3D body imaging system, which was designed for enhanced portability, affordability, and functionality. In this system, stereo vision technology was used to satisfy the requirements for a simple hardware setup and fast image acquisitions. The portability of the system was created via a two-stand configuration, and the accuracy of body volume measurements was improved by customizing stereo matching and surface reconstruction algorithms that target specific problems in 3D body imaging. Body measurement functions dedicated to body composition assessment also were developed. The overall performance of the system was evaluated in human subjects by comparison to other conventional anthropometric methods, as well as air displacement plethysmography, for body fat assessment.

Xu, B.; Yu, W.; Yao, M.; Yao, X.; Li, Q.; Pepper, M. R.; Freeland-Graves, J. H.

2009-08-01

86

On Anisotropic Diffusion in 3D image processing and image sequence analysis  

Microsoft Academic Search

A morphological multiscale method in 3D image and 3D image sequence processing is discussed which identifies edges on level sets and the motion of features in time. Based on these indicator evaluations, the image data are processed by applying nonlinear diffusion and the theory of geometric evolution problems. The aim is to smooth level sets of a 3D image while simultaneously

Karol Mikula; Martin Rumpf; Fiorella Sgallari

87

On Anisotropic Geometric Diffusion in 3D Image Processing and Image Sequence Analysis  

Microsoft Academic Search

A morphological multiscale method in 3D image and 3D image sequence processing is discussed which identifies edges on level sets and the motion of features in time. Based on these indicator evaluations, the image data are processed by applying nonlinear diffusion and the theory of geometric evolution problems. The aim is to smooth level sets of a 3D image while preserving

Karol Mikula; Martin Rumpf; Fiorella Sgallari

2002-01-01

88

Realistic 3-D Scene Modeling from Uncalibrated Image Sequences  

Microsoft Academic Search

This contribution addresses the problem of obtaining photo-realistic 3D models of a scene from images alone with a structure-from-motion approach. The 3D scene is observed from multiple viewpoints by freely moving a camera around the object. No restrictions on camera movement and internal camera parameters like zoom are imposed, as the camera pose and intrinsic parameters are

Reinhard Koch; Marc Pollefeys; Luc J. Van Gool

1999-01-01

89

Computational analysis of 3D mouse embryo images  

E-print Network

Slides on the computational analysis of 3D mouse embryo images (Ruben Schilling, 26.11.2008): the introduction covers embryo staging and flexibility, notes that later developmental stages have a more direct impact on medical applications, describes the imaging modality (Sharpe et al., Science, 2002; EMAGE website), and sets out the goals of building an atlas and identifying functional groups.

Spang, Rainer

90

Getting in touch: 3D printing in Forensic Imaging  

Microsoft Academic Search

With the increasing use of medical imaging in forensics, as well as the technological advances in rapid prototyping, we suggest combining these techniques to generate displays of forensic findings. We used computed tomography (CT), CT angiography, magnetic resonance imaging (MRI) and surface scanning with photogrammetry in conjunction with segmentation techniques to generate 3D polygon meshes. Based on these data sets,

Lars Chr. Ebert; Michael J. Thali; Steffen Ross

2011-01-01

91

2D/3D Image Registration using Regression Learning  

PubMed Central

In computer vision and image analysis, image registration between 2D projections and a 3D image that achieves high accuracy and near real-time computation is challenging. In this paper, we propose a novel method that can rapidly detect an object's 3D rigid motion or deformation from a 2D projection image or a small set thereof. The method is called CLARET (Correction via Limited-Angle Residues in External Beam Therapy) and consists of two stages: registration preceded by shape space and regression learning. In the registration stage, linear operators are used to iteratively estimate the motion/deformation parameters based on the current intensity residue between the target projection(s) and the digitally reconstructed radiograph(s) (DRRs) of the estimated 3D image. The method determines the linear operators via a two-step learning process. First, it builds a low-order parametric model of the image region's motion/deformation shape space from its prior 3D images. Second, using learning-time samples produced from the 3D images, it formulates the relationships between the model parameters and the co-varying 2D projection intensity residues by multi-scale linear regressions. The calculated multi-scale regression matrices yield the coarse-to-fine linear operators used in estimating the model parameters from the 2D projection intensity residues in the registration. The method's application to Image-guided Radiation Therapy (IGRT) requires only a few seconds and yields good results in localizing a tumor under rigid motion in the head and neck and under respiratory deformation in the lung, using one treatment-time imaging 2D projection or a small set thereof. PMID:24058278
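To make the regression-learning idea concrete, here is a heavily simplified sketch (an assumed linear model with synthetic training pairs, not the CLARET code): learn a linear operator from projection intensity residues to parameter updates, then apply it iteratively at registration time.

import numpy as np

def learn_operator(residues: np.ndarray, param_offsets: np.ndarray) -> np.ndarray:
    """Least-squares fit of a (pixels x parameters) linear operator.

    residues: S x P matrix of training intensity residues
    param_offsets: S x K matrix of the known parameter offsets that produced them
    """
    operator, *_ = np.linalg.lstsq(residues, param_offsets, rcond=None)
    return operator

def register(residue_fn, operator: np.ndarray, n_iter: int = 10) -> np.ndarray:
    """Iteratively update parameters from the current residue (length-P vector)."""
    params = np.zeros(operator.shape[1])
    for _ in range(n_iter):
        params = params + residue_fn(params) @ operator
    return params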

Chou, Chen-Rui; Frederick, Brandon; Mageras, Gig; Chang, Sha; Pizer, Stephen

2013-01-01

92

3-D Terahertz Synthetic-Aperture Imaging and Spectroscopy  

NASA Astrophysics Data System (ADS)

Terahertz (THz) wavelengths have attracted recent interest in multiple disciplines within engineering and science. Situated between the infrared and the microwave region of the electromagnetic spectrum, THz energy can propagate through non-polar materials such as clothing or packaging layers. Moreover, many chemical compounds, including explosives and many drugs, reveal strong absorption signatures in the THz range. For these reasons, THz wavelengths have great potential for non-destructive evaluation and explosive detection. Three-dimensional (3-D) reflection imaging with considerable depth resolution is also possible using pulsed THz systems. While THz imaging (especially 3-D) systems typically operate in transmission mode, reflection offers the most practical configuration for standoff detection, especially for objects with high water content (like human tissue) which are opaque at THz frequencies. In this research, reflection-based THz synthetic-aperture (SA) imaging is investigated as a potential imaging solution. THz SA imaging results presented in this dissertation are unique in that a 2-D planar synthetic array was used to generate a 3-D image without relying on a narrow time-window for depth isolation [Shen, 2005]. Novel THz chemical detection techniques are developed and combined with broadband THz SA capabilities to provide concurrent 3-D spectral imaging. All algorithms are tested with various objects and pressed pellets using a pulsed THz time-domain system in the Northwest Electromagnetics and Acoustics Research Laboratory (NEAR-Lab).

Henry, Samuel C.

93

Computerized analysis of pelvic incidence from 3D images  

NASA Astrophysics Data System (ADS)

The sagittal alignment of the pelvis can be evaluated by the angle of pelvic incidence (PI), which is constant for an arbitrary subject position and orientation and can be therefore compared among subjects in standing, sitting or supine position. In this study, PI was measured from three-dimensional (3D) computed tomography (CT) images of normal subjects that were acquired in supine position. A novel computerized method, based on image processing techniques, was developed to automatically determine the anatomical references required to measure PI, i.e. the centers of the femoral heads in 3D, and the center and inclination of the sacral endplate in 3D. Multiplanar image reformation was applied to obtain perfect sagittal views with all anatomical structures completely in line with the hip axis, from which PI was calculated. The resulting PI (mean+/-standard deviation) was equal to 46.6+/-9.2 for male subjects (N = 189), 47.6+/-10.7 for female subjects (N = 181), and 47.1+/-10.0 for all subjects (N = 370). The obtained measurements of PI from 3D images were not biased by acquisition projection or structure orientation, because all anatomical structures were completely in line with the hip axis. The performed measurements in 3D therefore represent PI according to the actual geometrical relationships among anatomical structures of the sacrum, pelvis and hips, as observed from the perfect sagittal views.

Vrtovec, Tomaž; Janssen, Michiel M. A.; Pernuš, Franjo; Castelein, René M.; Viergever, Max A.

2012-02-01

94

3D TRUS Image Segmentation in Prostate Brachytherapy.  

PubMed

Brachytherapy is a minimally invasive interventional surgery used to treat prostate cancer. It is composed of three steps: dose pre-planning, implantation of radioactive seeds, and dose post-planning. In these procedures, it is crucial to determine the positions of needles and seeds and to measure the volume of the prostate gland. Three-dimensional transrectal ultrasound (TRUS) imaging has been demonstrated to be a useful technique for performing such tasks. Compared to CT, MRI or X-ray imaging, US images suffer from low contrast, speckle and shadows, making segmentation of needles, the prostate and seeds in 3D TRUS images challenging. In this paper, we review 3D TRUS image segmentation methods used in prostate brachytherapy, including segmentation of the needles, the prostate, and the seeds. Furthermore, some experimental results with an agar phantom, turkey and chicken phantoms, as well as patient data are reported. PMID:17281931

Ding, Mingyue; Gardi, Lori; Wei, Zhouping; Fenster, Aaron

2005-01-01

95

Single 3D cell segmentation from optical CT microscope images  

NASA Astrophysics Data System (ADS)

The automated segmentation of the nucleus and cytoplasm regions in 3D optical CT microscope images has been achieved with two methods, a global threshold gradient based approach and a graph-cut approach. For the first method, the first two peaks of a gradient figure of merit curve are selected as the thresholds for cytoplasm and nucleus segmentation. The second method applies a graph-cut segmentation twice: the first identifies the nucleus region and the second identifies the cytoplasm region. Image segmentation of single cells is important for automated disease diagnostic systems. The segmentation methods were evaluated with 200 3D images consisting of 40 samples of 5 different cell types. The cell types consisted of columnar, macrophage, metaplastic and squamous human cells and cultured A549 cancer cells. The segmented cells were compared with both 2D and 3D reference images and the quality of segmentation was determined by the Dice Similarity Coefficient (DSC). In general, the graph-cut method had a superior performance to the gradient-based method. The graph-cut method achieved an average DSC of 86% and 72% for nucleus and cytoplasm segmentations respectively for the 2D reference images and 83% and 75% for the 3D reference images. The gradient method achieved an average DSC of 72% and 51% for nucleus and cytoplasm segmentation for the 2D reference images and 71% and 51% for the 3D reference images. The DSC of cytoplasm segmentation was significantly lower than for the nucleus since the cytoplasm was not differentiated as well by image intensity from the background.
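The Dice Similarity Coefficient quoted above is straightforward to compute; a minimal sketch for binary masks (2D or 3D) follows, written independently of the authors' pipeline.

import numpy as np

def dice_coefficient(segmentation: np.ndarray, reference: np.ndarray) -> float:
    """Dice Similarity Coefficient between two binary masks."""
    seg = segmentation.astype(bool)
    ref = reference.astype(bool)
    overlap = np.logical_and(seg, ref).sum()
    total = seg.sum() + ref.sum()
    if total == 0:
        return 1.0   # both masks empty: treat as perfect agreement
    return 2.0 * overlap / total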

Xie, Yiting; Reeves, Anthony P.

2014-03-01

96

MULTIMODAL RETINAL IMAGING: NEW STRATEGIES FOR THE DETECTION OF GLAUCOMA (Paul L. Rosin and David Marshall)  

E-print Network

In particular we consider the problems of registering 3D laser data with a digital image. We introduce a new approach for combining modalities such as digital fundus photography and scanning laser ophthalmoscopy (SLO) [2, 3] that can provide objective

Martin, Ralph R.

97

3D texture analysis on MRI images of Alzheimer's disease.  

PubMed

This study investigated three-dimensional (3D) texture as a possible diagnostic marker of Alzheimer's disease (AD). T1-weighted magnetic resonance (MR) images were obtained from 17 AD patients and 17 age and gender-matched healthy controls. 3D texture features were extracted from the circular 3D ROIs placed using a semi-automated technique in the hippocampus and entorhinal cortex. We found that classification accuracies based on texture analysis of the ROIs varied from 64.3% to 96.4% due to different ROI selection, feature extraction and selection options, and that most 3D texture features selected were correlated with the mini-mental state examination (MMSE) scores. The results indicated that 3D texture could detect the subtle texture differences between tissues in AD patients and normal controls, and texture features of MR images in the hippocampus and entorhinal cortex might be related to the severity of AD cognitive impairment. These results suggest that 3D texture might be a useful aid in AD diagnosis. PMID:22101754

Zhang, Jing; Yu, Chunshui; Jiang, Guilian; Liu, Weifang; Tong, Longzheng

2012-03-01

98

Navigator motion correction of diffusion weighted 3D SSFP imaging  

Microsoft Academic Search

A diffusion weighted (DW) 3D steady state MR (SSFP) head imaging technique using navigator-echo motion correction is presented. This new scheme enables acquisition of DW images even in regions where severe susceptibility is present. Another advantage is the moderate gradient performance requirement. DW imaging methods are sensitive to any kind of motion; thus, most of these methods might suffer from

Elyakim Bosak; Paul R. Harvey

2001-01-01

99

3-D electronics interconnect for high-performance imaging detectors  

Microsoft Academic Search

We describe work that extends three-dimensional (3-D) patterned overlay high-density interconnect (HDI) to high-performance imaging applications. The work was motivated by the rigorous requirements of the multiple-pulse imager for dynamic proton radiography. The optical imager has to provide large (>90%) optical fill factor, high quantum efficiency, 200-ns inter-frame time interval, and storage for >32 frames. In order to accommodate the

Kris Kwiatkowski; Jim Lyke; Robert Wojnarowski; Chris Kapusta; Stuart Kleinfelder; Mark Wilke

2004-01-01

100

Integrated optical 3D digital imaging based on DSP scheme  

NASA Astrophysics Data System (ADS)

We present a scheme of integrated optical 3-D digital imaging (IO3DI) based on a digital signal processor (DSP), which can acquire range images independently without PC support. This scheme is based on a parallel hardware structure with the aid of a DSP and a field-programmable gate array (FPGA) to realize 3-D imaging. In this integrated scheme of 3-D imaging, phase measurement profilometry is adopted. To realize pipeline processing of the fringe projection, image acquisition and fringe pattern analysis, we present a multi-threaded application program developed under the DSP/BIOS RTOS (real-time operating system). Since the RTOS provides a preemptive kernel and a powerful configuration tool, we are able to achieve real-time scheduling and synchronization. To accelerate automatic fringe analysis and phase unwrapping, we make use of software optimization techniques. The proposed scheme can reach a performance of 39.5 f/s (frames per second), so it may well fit real-time fringe-pattern analysis and can implement fast 3-D imaging. Experimental results are also presented to show the validity of the proposed scheme.

Wang, Xiaodong; Peng, Xiang; Gao, Bruce Z.

2008-03-01

101

Automatic 3d Mapping Using Multiple Uncalibrated Close Range Images  

NASA Astrophysics Data System (ADS)

Automatic three-dimensional modeling of the real world has been an important research topic in the geomatics and computer vision fields for many years. With the development of commercial digital cameras and modern image processing techniques, close range photogrammetry is widely utilized in many fields such as structural measurement, topographic surveying, and architectural and archaeological surveying. Non-contact photogrammetry provides methods to determine the 3D locations of objects from two-dimensional (2D) images. The problem of estimating the locations of 3D points from multiple images often involves simultaneously estimating both 3D geometry (structure) and camera pose (motion); it is commonly known as structure from motion (SfM). In this research a step-by-step approach to generate the 3D point cloud of a scene is considered. After taking images with a camera, corresponding points must be detected in each pair of views. Here an efficient SIFT method is used for image matching across large baselines. Next, the camera motion and the 3D positions of the matched feature points are retrieved up to a projective transformation (projective reconstruction). Lacking additional information on the camera or the scene causes parallel lines to appear non-parallel. The results of the SfM computation are much more useful if a metric reconstruction is obtained; therefore, multiple-view Euclidean reconstruction is applied and discussed. To refine and achieve precise 3D points we use a more general and useful approach, namely bundle adjustment. At the end, two real cases have been considered for reconstruction (an excavation and a tower).

Rafiei, M.; Saadatseresht, M.

2013-09-01

102

Laboratory 3D Micro-XRF/Micro-CT Imaging System  

NASA Astrophysics Data System (ADS)

A prototype micro-XRF laboratory system based on pinhole imaging was developed to produce 3D elemental maps. The fluorescence x-rays are detected by a deep-depleted CCD camera operating in photon-counting mode. A charge-clustering algorithm, together with dynamically adjusted exposure times, ensures a correct energy measurement. The XRF component has a spatial resolution of 70 μm and an energy resolution of 180 eV at 6.4 keV. The system is augmented by a micro-CT imaging modality. This is used for attenuation correction of the XRF images and to co-register features in the 3D XRF images with morphological structures visible in the volumetric CT images of the object.

Bruyndonckx, P.; Sasov, A.; Liu, X.

2011-09-01

103

Speckle reducing anisotropic diffusion for 3D ultrasound images Qingling Suna  

E-print Network

Speckle reducing anisotropic diffusion for 3D ultrasound images Qingling Suna , John A. Hossackb anisotropic diffusion and the 3D Lee filter. The experimental results show that the quality of the 3D SRAD for speckle reduction in 3D ultrasound images improves upon that of 3D anisotropic diffusion and 3D Lee filter

Acton, Scott

104

3-D Display Of Magnetic Resonance Imaging Of The Spine  

NASA Astrophysics Data System (ADS)

The original data are produced through standard magnetic resonance imaging (MRI) procedures with a surface coil applied to the lower back of a normal human subject. The 3-D spine image data consist of twenty-six contiguous slices with 256 x 256 pixels per slice. Two methods for visualization of the 3-D spine are explored. One method utilizes a varifocal mirror system which creates a true 3-D virtual picture of the object. Another method uses a standard high resolution monitor to simultaneously show the three orthogonal sections which intersect at any user-selected point within the object volume. We discuss the application of these systems in the assessment of low back pain.

Nelson, Alan C.; Kim, Yongmin; Haralick, Robert M.; Anderson, Paul A.; Johnson, Roger H.; DeSoto, Larry A.

1988-06-01

105

Reconstruction of 3D scenes from sequences of images  

NASA Astrophysics Data System (ADS)

Reconstruction of three-dimensional (3D) scenes is an active research topic in the fields of computer vision and 3D display. It is a challenge to model 3D objects rapidly and effectively. A 3D model can be extracted from multiple images. The system only requires a sequence of images taken with a camera, without knowledge of the camera parameters, which provides a high degree of flexibility. We focus on quickly merging point clouds of the object from depth map sequences. The whole system combines algorithms from different areas of computer vision, such as camera calibration, stereo correspondence, point cloud splicing and surface reconstruction. The procedure of 3D reconstruction is decomposed into a number of successive steps. Firstly, image sequences are captured by a camera moving freely around the object. Secondly, the scene depth is obtained by a non-local stereo matching algorithm. Pairwise matching is realized with the Scale Invariant Feature Transform (SIFT) algorithm. An initial matching is then made for the first two images of the sequence. For each subsequent image, processed together with the previous image, the points of interest corresponding to those in previous images are refined or corrected, and the vertical parallax between the images is eliminated. The next step is camera calibration, in which the intrinsic and extrinsic parameters of the camera are calculated, yielding the relative position and orientation of the camera. A sequence of depth maps is acquired by using a non-local cost aggregation method for stereo matching. A point cloud sequence is then obtained from the scene depths and merged into a point cloud model using the external parameters of the camera. The point cloud model is then approximated by a triangular wire-frame mesh to reduce geometric complexity and to tailor the model to the requirements of computer graphics visualization systems. Finally, the texture is mapped onto the wire-frame model, which can also be used for 3D display. According to the experimental results, we can reconstruct a 3D point cloud model more quickly and efficiently than other methods.
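The pairwise SIFT matching step mentioned above can be illustrated with a short, hedged sketch using OpenCV's SIFT detector and a ratio test; the function names and thresholds here are generic assumptions, not the authors' implementation.

import cv2

def match_pair(gray1, gray2, ratio: float = 0.75):
    """Return corresponding point lists between two grayscale images."""
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(gray1, None)
    kp2, des2 = sift.detectAndCompute(gray2, None)
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    candidates = matcher.knnMatch(des1, des2, k=2)
    # keep matches that clearly beat their second-best alternative (ratio test)
    good = [m for m, n in candidates if m.distance < ratio * n.distance]
    points1 = [kp1[m.queryIdx].pt for m in good]
    points2 = [kp2[m.trainIdx].pt for m in good]
    return points1, points2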

Niu, Bei; Sang, Xinzhu; Chen, Duo; Cai, Yuanfa

2013-08-01

106

Gabor wavelet based vessel segmentation in retinal images  

Microsoft Academic Search

Retinal image vessel segmentation and the branching pattern of vessels are used for automated screening and diagnosis of diabetic retinopathy. The vascular pattern is normally not clearly visible in retinal images. We present a method that uses a 2-D Gabor wavelet and a sharpening filter to enhance and sharpen the vascular pattern, respectively. Our technique extracts the vessels from the sharpened retinal image using an edge detection algorithm
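A minimal sketch of the kind of 2-D Gabor filtering described above: taking the maximum response over a bank of oriented Gabor kernels highlights line-like vessels. The kernel size and parameters below are illustrative assumptions, not the authors' settings.

import cv2
import numpy as np

def gabor_vessel_response(green_channel: np.ndarray, n_orientations: int = 12) -> np.ndarray:
    """Maximum Gabor response over orientations (green channel of a fundus image)."""
    img = green_channel.astype(np.float32)
    response = np.full_like(img, -np.inf)
    for k in range(n_orientations):
        theta = k * np.pi / n_orientations
        kernel = cv2.getGaborKernel((15, 15), sigma=3.0, theta=theta,
                                    lambd=8.0, gamma=0.5, psi=0.0)
        filtered = cv2.filter2D(img, cv2.CV_32F, kernel)
        response = np.maximum(response, filtered)
    return response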

M. Usman Akram; Anam Tariq; Sarwat Nasir; Shoab A. Khan

2009-01-01

107

3D Image Viz-Analysis Tools and V3D Development Hackathon, July 26 -August 8, 2010  

E-print Network

Schedule excerpt from the 3D Image Viz-Analysis Tools and V3D Development Hackathon, July 26 - August 8, 2010, Janelia Farm, listing hacking sessions and sessions involving Zongcai Ruan and Luis Ibanez.

Peng, Hanchuan

108

3-D Facial Imaging for Identification Anselmo Lastra  

E-print Network

Presentation on 3-D facial imaging for identification (Anselmo Lastra, November 4, 2010), covering the team (UNC, with Henry Elkins, Ali Farsaie and Ping Zhuang), the vision of supporting programs such as Global Entry, NEXUS or SENTRI, and the basics of stereo vision using two (or more) cameras.

McShea, Daniel W.

109

Analyzing 3D Images of the Brain NICHOLAS AYACHE  

E-print Network

These images have the particularity of describing the physical or chemical properties at each point of a studied volume, complementing physical measuring devices like microelectrodes, for instance. New medical image analysis, graphics and robotics issues arise: to complete these new tasks automatically, it is necessary to solve a number of advanced 3D

Paris-Sud XI, Université de

110

3D ACQUISITION OF ARCHAEOLOGICAL HERITAGE FROM IMAGES  

Microsoft Academic Search

In this contribution an approach is proposed that can capture the 3D shape and appearance of objects, monuments or sites from photographs or video. The flexibility of the approach allows us to deal with uncalibrated hand-held camera images. In addition, through the use of advanced computer vision algorithms the process is largely automated. Both these factors make the approach ideally

Marc Pollefeys; Maarten Vergauwen; Kurt Cornelis; Frank Verbiest; Joris Schouteden; Jan Tops; Luc Van Gool

111

3-D transformations of images in scanline order  

Microsoft Academic Search

Currently, texture mapping onto projections of 3-D surfaces is time consuming and subject to considerable aliasing errors. Usually the procedure is to perform some inverse mapping from the area of the pixel onto the surface texture. It is difficult to do this correctly. There is an alternate approach where the texture surface is transformed as a 2-D image until it

Ed Catmull; Alvy Ray Smith

1980-01-01

112

Real Time 3D Imaging Systems for Bio-Photonics  

E-print Network

Technology summary: real-time 3D imaging systems for bio-photonics and cellular processes, designed for inverted or upright microscopes and tested in fluorescence, DIC, bright/dark field and phase contrast. Stage of development: proof of concept has been demonstrated on existing microscopes (e.g. Olympus IX71); a prototype

Painter, Kevin

113

Practical pseudo-3D registration for large tomographic images  

NASA Astrophysics Data System (ADS)

Image registration is a powerful tool in various tomographic applications. Our main focus is on microCT applications in which samples/animals can be scanned multiple times under different conditions or at different time points. For this purpose, a registration tool capable of handling fairly large volumes has been developed, using a novel pseudo-3D method to achieve fast and interactive registration with simultaneous 3D visualization. To reduce computation complexity in 3D registration, we decompose it into several 2D registrations, which are applied to the orthogonal views (transaxial, sagittal and coronal) sequentially and iteratively. After registration in each view, the next view is retrieved with the new transformation matrix for registration. This reduces the computation complexity significantly. For rigid transform, we only need to search for 3 parameters (2 shifts, 1 rotation) in each of the 3 orthogonal views instead of 6 (3 shifts, 3 rotations) for full 3D volume. In addition, the amount of voxels involved is also significantly reduced. For the proposed pseudo-3D method, image-based registration is employed, with Sum of Square Difference (SSD) as the similarity measure. The searching engine is Powell's conjugate direction method. In this paper, only rigid transform is used. However, it can be extended to affine transform by adding scaling and possibly shearing to the transform model. We have noticed that more information can be used in the 2D registration if Maximum Intensity Projections (MIP) or Parallel Projections (PP) is used instead of the orthogonal views. Also, other similarity measures, such as covariance or mutual information, can be easily incorporated. The initial evaluation on microCT data shows very promising results. Two application examples are shown: dental samples before and after treatment and structural changes in materials before and after compression. Evaluation on registration accuracy between pseudo-3D method and true 3D method has been performed.
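One 2D step of the pseudo-3D scheme described above can be sketched as follows (a hedged illustration: rigid in-plane registration of a single orthogonal view by minimizing SSD with Powell's method, not the authors' software).

import numpy as np
from scipy import ndimage, optimize

def ssd_cost(params, fixed, moving):
    """Sum of Squared Differences after a rigid 2D transform (dy, dx, angle in degrees)."""
    dy, dx, angle = params
    warped = ndimage.rotate(moving, angle, reshape=False, order=1)
    warped = ndimage.shift(warped, (dy, dx), order=1)
    return np.sum((fixed - warped) ** 2)

def register_view(fixed_slice, moving_slice):
    """Estimate 2 shifts and 1 rotation for one orthogonal view of the volume."""
    result = optimize.minimize(ssd_cost, x0=np.zeros(3),
                               args=(fixed_slice, moving_slice), method="Powell")
    return result.x   # fed back into the volume transform before the next view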

Liu, Xuan; Laperre, Kjell; Sasov, Alexander

2014-09-01

114

Optimizing 3D image quality and performance for stereoscopic gaming  

NASA Astrophysics Data System (ADS)

The successful introduction of stereoscopic TV systems, such as Samsung's 3D Ready Plasma, requires high quality 3D content to be commercially available to the consumer. Console and PC games provide the most readily accessible source of high quality 3D content. This paper describes innovative developments in a generic, PC-based game driver architecture that addresses the two key issues affecting 3D gaming: quality and speed. At the heart of the quality issue are the same considerations that studios face producing stereoscopic renders from CG movies: how best to perform the mapping from a geometric CG environment into the stereoscopic display volume. The major difference being that for game drivers this mapping cannot be choreographed by hand but must be automatically calculated in real-time without significant impact on performance. Performance is a critical issue when dealing with gaming. Stereoscopic gaming has traditionally meant rendering the scene twice with the associated performance overhead. An alternative approach is to render the scene from one virtual camera position and use information from the z-buffer to generate a stereo pair using Depth-Image-Based Rendering (DIBR). We analyze this trade-off in more detail and provide some results relating to both 3D image quality and render performance.
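A hedged, much-simplified sketch of the DIBR idea mentioned above: shift each pixel horizontally by a disparity derived from the depth buffer to synthesize one eye's view. Hole filling and proper occlusion handling, which a real driver needs, are omitted.

import numpy as np

def dibr_view(color: np.ndarray, depth: np.ndarray, max_disparity: int = 16) -> np.ndarray:
    """color: H x W x 3 image; depth: H x W array normalized to [0, 1] (1 = near)."""
    height, width = depth.shape
    out = np.zeros_like(color)
    disparity = (depth * max_disparity).astype(int)
    xs = np.arange(width)
    for y in range(height):
        x_shifted = np.clip(xs + disparity[y], 0, width - 1)
        out[y, x_shifted] = color[y, xs]   # simple forward warp; holes left black
    return out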

Flack, Julien; Sanderson, Hugh; Pegg, Steven; Kwok, Simon; Paterson, Daniel

2009-02-01

115

Digital holography particle image velocimetry for 3D flow measurement  

Microsoft Academic Search

A digital in-line holography recording system was used in holography particle image velocimetry for 3D flow measurement, making up a new full-field fluid mechanics experimental technique, DHPIV. In this experiment, the traditional holography film was replaced by a CCD chip that records the interference stripes directly without darkroom processing, and the virtual image slices

Runjie Wei; Gongxin Shen; Hanquan Ding

2003-01-01

116

3D Winding Number: Theory and Application to Medical Imaging  

PubMed Central

We develop a new, mathematically elegant formulation to detect critical points of 3D scalar images. It is based on a topological number, which is the generalization to three dimensions of the 2D winding number. We illustrate our method by considering three different biomedical applications, namely, detection and counting of ovarian follicles and neuronal cells, and estimation of cardiac motion from tagged MR images. Qualitative and quantitative evaluation emphasizes the reliability of the results. PMID:21317978
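For the 2D case that the construction above generalizes, the winding number of a vector field along a closed loop can be computed by accumulating wrapped angle differences; a small sketch (not the authors' code) follows.

import numpy as np

def winding_number(vectors_on_loop: np.ndarray) -> int:
    """vectors_on_loop: N x 2 samples (vx, vy) of a vector field along a closed loop."""
    angles = np.arctan2(vectors_on_loop[:, 1], vectors_on_loop[:, 0])
    dtheta = np.diff(np.append(angles, angles[0]))          # close the loop
    dtheta = (dtheta + np.pi) % (2 * np.pi) - np.pi         # wrap to (-pi, pi]
    return int(round(dtheta.sum() / (2 * np.pi)))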

Becciu, Alessandro; Fuster, Andrea; Pottek, Mark; van den Heuvel, Bart; ter Haar Romeny, Bart; van Assen, Hans

2011-01-01

117

Refraction Correction in 3D Transcranial Ultrasound Imaging  

PubMed Central

We present the first correction of refraction in three-dimensional (3D) ultrasound imaging using an iterative approach that traces propagation paths through a two-layer planar tissue model, applying Snell's law in 3D. This approach is applied to real-time 3D transcranial ultrasound imaging by precomputing delays offline for several skull thicknesses, allowing the user to switch between three sets of delays for phased array imaging at the push of a button. Simulations indicate that refraction correction may be expected to increase sensitivity, reduce beam steering errors, and partially restore lost spatial resolution, with the greatest improvements occurring at the largest steering angles. Distorted images of cylindrical lesions were created by imaging through an acrylic plate in a tissue-mimicking phantom. As a result of correcting for refraction, lesions were restored to 93.6% of their original diameter in the lateral direction and 98.1% of their original shape along the long axis of the cylinders. In imaging two healthy volunteers, the mean brightness increased by 8.3% and showed no spatial dependency. PMID:24275538
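The ray-bending operation at the core of the two-layer model above is the 3D vector form of Snell's law; a generic sketch is given below (an assumed standard formulation written with refractive indices, not the authors' transcranial delay code; for ultrasound the ratio of sound speeds plays the corresponding role).

import numpy as np

def refract(direction: np.ndarray, normal: np.ndarray, n1: float, n2: float):
    """Refract a unit ray direction at a planar interface (vector form of Snell's law).

    normal must point back into the incident medium; returns None on total
    internal reflection.
    """
    d = direction / np.linalg.norm(direction)
    n = normal / np.linalg.norm(normal)
    eta = n1 / n2
    cos_i = -float(np.dot(n, d))
    sin_t_sq = eta ** 2 * (1.0 - cos_i ** 2)
    if sin_t_sq > 1.0:
        return None
    return eta * d + (eta * cos_i - np.sqrt(1.0 - sin_t_sq)) * n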

Lindsey, Brooks D.; Smith, Stephen W.

2014-01-01

118

Single-shot retinal imaging with AO spectral OCT  

NASA Astrophysics Data System (ADS)

We demonstrate for the first time an adaptive optics (AO) spectral OCT retina camera that acquires single-shot B-scans of the living human retina with unprecedented 3D resolution (2.9 μm lateral; 5.5 μm axial). The camera centers on a Michelson interferometer that consists of a superluminescent diode for line-illuminating the subject's retina; a voice coil translator for controlling the optical path length of the reference channel; and an imaging spectrometer that is cascaded with a 12-bit area CCD array. The imaging spectrometer was designed with negligible off-axis aberrations and was constructed from stock optical components. AO was integrated into the detector channel of the interferometer and dynamically compensated for most of the ocular aberration across a 6 mm pupil. Short bursts of B-scans, with 100 A-scans each, were successfully acquired at 1 msec intervals. Camera sensitivity was found sufficient to detect reflections from all major retinal layers. Individual outer segments of photoreceptors at different retinal eccentricities were observed in vivo. The periodicity of the outer segments matched cone spacing as measured from AO flood-illuminated images of the same patches of retina.

Zhang, Yan; Rha, Jungtae; Jonnal, Ravi S.; Miller, Donald T.

2005-04-01

119

3D ultrasound image segmentation using wavelet support vector machines  

PubMed Central

Purpose: Transrectal ultrasound (TRUS) imaging is clinically used in prostate biopsy and therapy. Segmentation of the prostate on TRUS images has many applications. In this study, a three-dimensional (3D) segmentation method for TRUS images of the prostate is presented for 3D ultrasound-guided biopsy. Methods: This segmentation method utilizes a statistical shape, texture information, and intensity profiles. A set of wavelet support vector machines (W-SVMs) is applied to the images at various subregions of the prostate. The W-SVMs are trained to adaptively capture the features of the ultrasound images in order to differentiate the prostate and nonprostate tissue. This method consists of a set of wavelet transforms for extraction of prostate texture features and a kernel-based support vector machine to classify the textures. The voxels around the surface of the prostate are labeled in sagittal, coronal, and transverse planes. The weight functions are defined for each labeled voxel on each plane and on the model at each region. In the 3D segmentation procedure, the intensity profiles around the boundary between the tentatively labeled prostate and nonprostate tissue are compared to the prostate model. Consequently, the surfaces are modified based on the model intensity profiles. The segmented prostate is updated and compared to the shape model. These two steps are repeated until they converge. Manual segmentation of the prostate serves as the gold standard and a variety of methods are used to evaluate the performance of the segmentation method. Results: The results from 40 TRUS image volumes of 20 patients show that the Dice overlap ratio is 90.3% +/- 2.3% and that the sensitivity is 87.7% +/- 4.9%. Conclusions: The proposed method provides a useful tool in our 3D ultrasound image-guided prostate biopsy and can also be applied to other applications in the prostate. PMID:22755682

Akbari, Hamed; Fei, Baowei

2012-01-01

120

Integration of real-time 3D image acquisition and multiview 3D display  

NASA Astrophysics Data System (ADS)

Seamless integration of 3D acquisition and 3D display systems offers an enhanced experience in 3D visualization of real-world objects or scenes. The vivid representation of captured 3D objects displayed on a glasses-free 3D display screen could bring a realistic viewing experience to viewers, as if they were viewing a real-world scene. Although the technologies in 3D acquisition and 3D display have advanced rapidly in recent years, effort is lacking in studying the seamless integration of these two different aspects of 3D technologies. In this paper, we describe our recent progress on integrating a light-field 3D acquisition system and an autostereoscopic multiview 3D display for real-time light field capture and display. This paper focuses on both the architecture design and the implementation of the hardware and the software of this integrated 3D system. A prototype of the integrated 3D system is built to demonstrate the real-time 3D acquisition and 3D display capability of our proposed system.

Zhang, Zhaoxing; Geng, Zheng; Li, Tuotuo; Li, Wei; Wang, Jingyi; Liu, Yongchun

2014-03-01

121

1024 pixels single photon imaging array for 3D ranging  

NASA Astrophysics Data System (ADS)

Three-dimensional (3D) acquisition systems are driving applications in many research fields. Nowadays 3D acquisition systems are used in many applications, such as the cinema industry or automotive active safety systems. Depending on the application, systems present different features, for example color sensitivity, bi-dimensional image resolution, distance measurement accuracy and acquisition frame rate. The system we developed acquires 3D movies using indirect Time of Flight (iTOF), starting from phase delay measurements of sinusoidally modulated light. The system acquires live movies with a frame rate of up to 50 frames/s over a distance range between 10 cm and 7.5 m.
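The phase-to-distance relation behind the indirect time-of-flight measurement above is a standard one; a small worked sketch follows, with the 20 MHz modulation frequency an assumed example value (it is consistent with the 7.5 m maximum range quoted, since the unambiguous range is c / (2 f_mod)).

import math

SPEED_OF_LIGHT = 299_792_458.0   # m/s

def itof_distance(phase_rad: float, f_mod_hz: float) -> float:
    """Distance from the phase delay of sinusoidally modulated light."""
    return SPEED_OF_LIGHT * phase_rad / (4.0 * math.pi * f_mod_hz)

# Example (assumed 20 MHz modulation): a phase delay of pi radians maps to ~3.75 m,
# half of the ~7.5 m unambiguous range.
print(round(itof_distance(math.pi, 20e6), 2))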

Bellisai, S.; Guerrieri, F.; Tisa, S.; Zappa, F.; Tosi, A.; Giudice, A.

2011-01-01

122

3D acoustic imaging applied to the Baikal Neutrino Telescope  

E-print Network

A hydro-acoustic imaging system was tested in a pilot study on distant localization of elements of the Baikal underwater neutrino telescope. For this innovative approach, based on broadband acoustic echo signals and strictly avoiding any active acoustic elements on the telescope, the imaging system was temporarily installed just below the ice surface, while the telescope stayed in its standard position at 1100 m depth. The system comprised an antenna with four acoustic projectors positioned at the corners of a 50 meter square; acoustic pulses were "linear sweep-spread signals" - multiple-modulated wide-band signals (10-22 kHz) of 51.2 s duration. Three large objects (two string buoys and the central electronics module) were localized by the 3D acoustic imaging, with an accuracy of ~0.2 m (along the beam) and ~1.0 m (transverse). We discuss signal forms and parameters necessary for improved 3D acoustic imaging of the telescope, and suggest a layout of a possible stationary bottom-based 3D imaging setup. The presented technique may be of interest for neutrino telescopes of km3-scale and beyond, as a flexible temporary or as a stationary tool to localize basic telescope elements, while these are completely passive.
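The ranging principle behind the sweep-spread signals can be sketched with a matched filter: the received echo is cross-correlated with the known sweep and the correlation-peak delay gives the range. The sample rate, 1 s sweep length, target distance and noise level below are illustrative assumptions, not parameters of the Baikal system (whose sweeps lasted 51.2 s).

import numpy as np
from scipy.signal import chirp, correlate

FS = 96_000                  # sample rate, Hz (assumed)
T_SWEEP = 1.0                # sweep duration, s (assumed)
C_WATER = 1450.0             # approximate sound speed in water, m/s

t = np.arange(0, T_SWEEP, 1 / FS)
tx = chirp(t, f0=10e3, t1=T_SWEEP, f1=22e3)       # transmitted 10-22 kHz sweep

# Simulate a weak echo from a target 75 m away, buried in noise.
delay_s = 2 * 75.0 / C_WATER
rx = np.zeros(len(t) + 2 * FS)
i0 = int(delay_s * FS)
rx[i0:i0 + len(tx)] += 0.1 * tx
rx += 0.05 * np.random.randn(len(rx))

# Matched filter: cross-correlate the echo with the known sweep, pick the peak.
corr = correlate(rx, tx, mode="valid")
est_delay = np.argmax(np.abs(corr)) / FS
print("estimated range: %.2f m" % (est_delay * C_WATER / 2))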

K. G. Kebkal; R. Bannasch; O. G. Kebkal; A. I. Panfilov; R. Wischnewski

2008-11-07

123

The Interpretation of a Moving Retinal Image  

Microsoft Academic Search

It is shown that from a monocular view of a rigid, textured, curved surface it is possible, in principle, to determine the gradient of the surface at any point, and the motion of the eye relative to it, from the velocity field of the changing retinal image, and its first and second spatial derivatives. The relevant equations are redundant, thus

H. C. Longuet-Higgins; K. Prazdny

1980-01-01

124

Model Based Segmentation for Retinal Fundus Images  

Microsoft Academic Search

This paper presents a method for detecting and measuring the vascular structures of retinal images. Features are modelled as a superposition of Gaussian functions in a local region. The parameters, i.e., centroid, orientation, and width of the feature, are derived by a minimum mean square error (MMSE) type of spatial regression. We employ a penalised likelihood test, the Akaike Information

Li Wang; Abhir Bhalerao

2003-01-01

125

A microfabricated 3-D stem cell delivery scaffold for retinal regenerative therapy  

E-print Network

Diseases affecting the retina, such as Age-related Macular Degeneration (AMD) and Retinitis Pigmentosa (RP), result in the degeneration of the photoreceptor cells and can ultimately lead to blindness in patients. There is ...

Sodha, Sonal

2009-01-01

126

Large deformation 3D image registration in image-guided radiation therapy  

E-print Network

Mark Foskey, Brad Davis, et al.: this work concerns deformable registration and the processing of serial 3D CT images used in image-guided radiation therapy, where organ motion over the course of treatment is a central problem in radiation cancer therapy.

Utah, University of

127

Linear tracking for 3-D medical ultrasound imaging.  

PubMed

As its clinical applications grow, there is rapid technical development of 3-D ultrasound imaging. Compared with 2-D ultrasound imaging, 3-D ultrasound imaging can provide improved qualitative and quantitative information for various clinical applications. In this paper, we propose a novel tracking method for a freehand 3-D ultrasound imaging system with improved portability, reduced degrees of freedom, and reduced cost. We designed a sliding track with a linear position sensor attached, which transmitted positional data via a wireless communication module based on Bluetooth, resulting in a wireless spatial tracking modality. A traditional 2-D ultrasound probe fixed to the position sensor on the sliding track was used to obtain real-time B-scans, and the positions of the B-scans were simultaneously acquired when moving the probe along the track in a freehand manner. In the experiments, the proposed method was applied to ultrasound phantoms and real human tissues. The results demonstrated that the new system outperformed a previously developed freehand system based on a traditional six-degree-of-freedom spatial sensor in phantom and in vivo studies, indicating its merit in clinical applications for human tissues and organs. PMID:23757592
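A minimal sketch of how B-scans tagged with 1-D positions from such a linear track might be stacked into a volume. The slice pitch, the averaging of co-located scans, and the helper name assemble_volume are illustrative assumptions; the published system reconstructs volumes with its own pipeline.

import numpy as np

def assemble_volume(b_scans, positions_mm, pitch_mm=0.2):
    """b_scans: list of 2-D arrays (H, W); positions_mm: matching 1-D track positions."""
    positions_mm = np.asarray(positions_mm, dtype=float)
    positions_mm = positions_mm - positions_mm.min()
    n_slices = int(np.ceil(positions_mm.max() / pitch_mm)) + 1
    h, w = b_scans[0].shape
    vol = np.zeros((n_slices, h, w))
    count = np.zeros(n_slices)
    for scan, pos in zip(b_scans, positions_mm):
        k = int(round(pos / pitch_mm))     # nearest slice for this measured position
        vol[k] += scan                     # average scans that land on the same slice
        count[k] += 1
    filled = count > 0
    vol[filled] /= count[filled][:, None, None]
    return vol                             # empty slices could be interpolated afterwards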

Huang, Qing-Hua; Yang, Zhao; Hu, Wei; Jin, Lian-Wen; Wei, Gang; Li, Xuelong

2013-12-01

128

3D radio reflection imaging of asteroid interiors  

NASA Astrophysics Data System (ADS)

Imaging the interior structure of comets and asteroids in 3D holds the key for understanding early Solar System and planetary processes, aids mitigation of collisional hazards, and enables future space investigation. 3D wavefield extrapolation of time-domain finite differences, which is referred to as reverse-time migration (RTM), is a tool to provide high-quality images of the complex 3D internal structure of the target. Instead of a type of acquisition that separately deploys one orbiting and one landing satellite, I discuss dual orbiter systems, where transmitter and receiver satellites orbit around the asteroid target at different speeds. The dual orbiter acquisition can provide multi-offset data that improve the image quality by illuminating the target from different directions and by attenuating coherent noise caused by wavefield multi-pathing. Shot-record imaging requires dense and evenly distributed receiver coordinates to fully image the interior structure at every source location. I illustrate a 3D imaging method on a complex asteroid model based on the asteroid 433 Eros using realistic data generated from different acquisition designs for the dual orbiter system. In realistic 3D acquisition, the distribution and number of receivers are limited by the acquisition time, revolving speed and direction of both the transmitter and receiver satellites, and the rotation of the asteroid. The migrated image quality depends on different acquisition parameters (i.e., source frequency bandwidth, acquisition time, the spinning rate of the asteroid) and the intrinsic asteroid medium parameters (i.e., the asteroid attenuation factor and an accurate velocity model). A critical element in reconstructing the interior of an asteroid is to have different acquisition designs, where the transmitter and receivers revolve quasi-continuously in different inclinational and latitudinal directions and offer evenly distributed receiver coordinates in the shot-record domain. Among different acquisition designs, the simplest orbit (where the transmitter satellite is fixed in the longitudinal plane and the receiver plane gradually shifts in the latitudinal direction around the asteroid target) offers the best data coverage and requires the least energy to shift the satellite. To obtain reasonable coverage for successfully imaging the asteroid interior, the selected acquisition takes up to eight months. However, this mission is attainable because the propulsion requirements are small due to the slow (< 10 cm/s) orbital velocities around a kilometer-sized asteroid.

Ittharat, Detchai

129

Retinal imaging after corneal inlay implantation.  

PubMed

We report 2 cases of implantation with the Kamra corneal inlay to describe central and peripheral retinal visibility and the quality of optical coherence tomography (OCT) scans. Under pharmacological mydriasis, the central and peripheral retina was explored without disturbance by an experienced retinal ophthalmologist. Central color imaging was done without difficulty, and peripheral imaging was accurate despite a small bright shadow in every image. The quality scores of the OCT scans for the macular line, macular 3-dimensional cube, and macular radial protocols were 156.51, 77.49, and 84.35, respectively, in patient 1 and 106.66, 63.03, and 64.69, respectively, in patient 2, without scanning artifacts. The inlay allowed normal visualization of the central and peripheral fundus, as well as good-quality central and peripheral imaging and OCT scans. PMID:21855770

Casas-Llera, Pilar; Ruiz-Moreno, José M; Alió, Jorge L

2011-09-01

130

Validation of 3D ultrasound: CT registration of prostate images  

NASA Astrophysics Data System (ADS)

Worldwide, 20% of men are expected to develop prostate cancer at some time in their lives. In addition to surgery, the traditional treatment for cancer, radiation treatment is becoming more popular. The most interesting radiation treatment for prostate cancer is the brachytherapy procedure. For the safe delivery of that therapy, imaging is critically important. In cases where a CT device is available, a combination of the information provided by CT and 3D ultrasound (U/S) images offers advantages in recognizing the borders of the lesion and delineating the region of treatment. For these applications the CT and U/S scans should be registered and fused in a multi-modal dataset. The purpose of the present development is a registration tool (registration, fusion and validation) for available CT volumes with 3D U/S images of the same anatomical region, i.e. the prostate. The combination of these two imaging modalities interlinks the advantages of high-resolution CT imaging and low-cost real-time U/S imaging and offers a multi-modality imaging environment for further target and anatomy delineation. This tool has been integrated into the visualization software "InViVo" which has been developed over several years at Fraunhofer IGD in Darmstadt.

Firle, Evelyn A.; Wesarg, Stefan; Karangelis, Grigoris; Dold, Christian

2003-05-01

131

3D sound and 3D image interactions: a review of audio-visual depth perception  

NASA Astrophysics Data System (ADS)

There has been much research concerning visual depth perception in 3D stereoscopic displays and, to a lesser extent, auditory depth perception in 3D spatial sound systems. With 3D sound systems now available in a number of different forms, there is increasing interest in the integration of 3D sound systems with 3D displays. It therefore seems timely to review key concepts and results concerning depth perception in such display systems. We first present overviews of both visual and auditory depth perception, before focussing on cross-modal effects in audio-visual depth perception, which may be of direct interest to display and content designers.

Berry, Jonathan S.; Roberts, David A. T.; Holliman, Nicolas S.

2014-02-01

132

Optimal Point Spread Function Design for 3D Imaging  

NASA Astrophysics Data System (ADS)

To extract from an image of a single nanoscale object maximum physical information about its position, we propose and demonstrate a framework for pupil-plane modulation for 3D imaging applications requiring precise localization, including single-particle tracking and superresolution microscopy. The method is based on maximizing the information content of the system, by formulating and solving the appropriate optimization problem: finding the pupil-plane phase pattern that would yield a point spread function (PSF) with optimal Fisher information properties. We use our method to generate and experimentally demonstrate two example PSFs: one optimized for 3D localization precision over a 3 μm depth of field, and another with an unprecedented 5 μm depth of field, both designed to perform under physically common conditions of high background signals.
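The design criterion referenced here, the Fisher information of the PSF with respect to emitter position under Poisson pixel statistics, can be written down directly; the pupil-plane optimization itself is beyond a short sketch. The Gaussian spot, photon count and background level below are illustrative stand-ins for an engineered PSF.

import numpy as np

def psf(x0, y0, sigma=1.3, size=15, photons=2000.0, background=20.0):
    """Expected photon count per pixel: Gaussian spot plus uniform background."""
    yy, xx = np.mgrid[0:size, 0:size]
    g = np.exp(-((xx - x0) ** 2 + (yy - y0) ** 2) / (2 * sigma ** 2))
    return photons * g / g.sum() + background

def fisher_info_x(x0, y0, eps=1e-3):
    """Numerical Fisher information for x0 under Poisson pixel statistics."""
    mu = psf(x0, y0)
    dmu_dx = (psf(x0 + eps, y0) - psf(x0 - eps, y0)) / (2 * eps)
    return np.sum(dmu_dx ** 2 / mu)

info = fisher_info_x(7.2, 7.0)
print("localization bound (CRLB):", 1.0 / np.sqrt(info), "pixels")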

Shechtman, Yoav; Sahl, Steffen J.; Backer, Adam S.; Moerner, W. E.

2014-09-01

133

Getting in touch--3D printing in forensic imaging.  

PubMed

With the increasing use of medical imaging in forensics, as well as the technological advances in rapid prototyping, we suggest combining these techniques to generate displays of forensic findings. We used computed tomography (CT), CT angiography, magnetic resonance imaging (MRI) and surface scanning with photogrammetry in conjunction with segmentation techniques to generate 3D polygon meshes. Based on these data sets, a 3D printer created colored models of the anatomical structures. Using this technique, we could create models of bone fractures, vessels, cardiac infarctions, ruptured organs as well as bitemark wounds. The final models are anatomically accurate, fully colored representations of bones, vessels and soft tissue, and they demonstrate radiologically visible pathologies. The models are more easily understood by laypersons than volume rendering or 2D reconstructions. Therefore, they are suitable for presentations in courtrooms and for educational purposes. PMID:21602004

Ebert, Lars Chr; Thali, Michael J; Ross, Steffen

2011-09-10

134

Optimal Point Spread Function Design for 3D Imaging.  

PubMed

To extract from an image of a single nanoscale object maximum physical information about its position, we propose and demonstrate a framework for pupil-plane modulation for 3D imaging applications requiring precise localization, including single-particle tracking and superresolution microscopy. The method is based on maximizing the information content of the system, by formulating and solving the appropriate optimization problem: finding the pupil-plane phase pattern that would yield a point spread function (PSF) with optimal Fisher information properties. We use our method to generate and experimentally demonstrate two example PSFs: one optimized for 3D localization precision over a 3 μm depth of field, and another with an unprecedented 5 μm depth of field, both designed to perform under physically common conditions of high background signals. PMID:25302889

Shechtman, Yoav; Sahl, Steffen J; Backer, Adam S; Moerner, W E

2014-09-26

135

Automated Recognition of 3D Features in GPIR Images  

NASA Technical Reports Server (NTRS)

A method of automated recognition of three-dimensional (3D) features in images generated by ground-penetrating imaging radar (GPIR) is undergoing development. GPIR 3D images can be analyzed to detect and identify such subsurface features as pipes and other utility conduits. Until now, much of the analysis of GPIR images has been performed manually by expert operators who must visually identify and track each feature. The present method is intended to satisfy a need for more efficient and accurate analysis by means of algorithms that can automatically identify and track subsurface features, with minimal supervision by human operators. In this method, data from multiple sources (for example, data on different features extracted by different algorithms) are fused together for identifying subsurface objects. The algorithms of this method can be classified in several different ways. In one classification, the algorithms fall into three classes: (1) image-processing algorithms, (2) feature-extraction algorithms, and (3) a multiaxis data-fusion/pattern-recognition algorithm that includes a combination of machine-learning, pattern-recognition, and object-linking algorithms. The image-processing class includes preprocessing algorithms for reducing noise and enhancing target features for pattern recognition. The feature-extraction algorithms operate on preprocessed data to extract such specific features in images as two-dimensional (2D) slices of a pipe. Then the multiaxis data-fusion/pattern-recognition algorithm identifies, classifies, and reconstructs 3D objects from the extracted features. In this process, multiple 2D features extracted by use of different algorithms and representing views along different directions are used to identify and reconstruct 3D objects. In object linking, which is an essential part of this process, features identified in successive 2D slices and located within a threshold radius of identical features in adjacent slices are linked in a directed-graph data structure. Relative to past approaches, this multiaxis approach offers the advantages of more reliable detections, better discrimination of objects, and provision of redundant information, which can be helpful in filling gaps in feature recognition by one of the component algorithms. The image-processing class also includes postprocessing algorithms that enhance identified features to prepare them for further scrutiny by human analysts (see figure). Enhancement of images as a postprocessing step is a significant departure from traditional practice, in which enhancement of images is a preprocessing step.
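The object-linking step described above can be sketched as a simple chain-growing procedure: detections in successive 2D slices are linked when they fall within a threshold radius of a detection in the adjacent slice. The feature format, the 0.3 m radius and the function name link_features are illustrative assumptions, not the implementation reported in this record.

import numpy as np

def link_features(slices, radius=0.3):
    """slices: one (N_i, 2) array of x, y feature positions per depth slice.
    Returns chains of linked detections, each a list of (slice_index, feature_index)."""
    chains = [[(0, j)] for j in range(len(slices[0]))]
    open_chains = list(range(len(chains)))
    for k in range(1, len(slices)):
        pts = np.asarray(slices[k], dtype=float)
        extended = []
        for ci in open_chains:
            s_prev, j_prev = chains[ci][-1]
            prev = np.asarray(slices[s_prev][j_prev], dtype=float)
            d = np.linalg.norm(pts - prev, axis=1) if len(pts) else np.array([])
            j = int(np.argmin(d)) if len(d) else -1
            if j >= 0 and d[j] <= radius:
                chains[ci].append((k, j))      # extend the chain into this slice
                extended.append(ci)
        matched = {chains[ci][-1][1] for ci in extended}
        for j in range(len(pts)):              # unmatched detections start new chains
            if j not in matched:
                chains.append([(k, j)])
                extended.append(len(chains) - 1)
        open_chains = extended
    return chains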

Park, Han; Stough, Timothy; Fijany, Amir

2007-01-01

136

A 3D imaging radar for small unmanned airplanes - ARTINO  

Microsoft Academic Search

In this paper, a 3D imaging radar concept suitable for an unmanned aerial vehicle (UAV), and its status, are presented. The concept combines a real aperture, realized by a linear array of nadir-pointing antennas, and a synthetic aperture, which is spanned by the moving airplane. The radar front-end uses the frequency modulated continuous wave (FMCW) technique with direct down-conversion in

M. Weiß; J. H. G. Ender

2005-01-01

137

Target detection performance using 3-D laser radar images  

Microsoft Academic Search

Target detection theory is developed for 3-D pulsed imager operation of a coherent laser radar in a downlooking scenario. Generalized likelihood-ratio tests (GLRTs) and receiver operating characteristics (ROCs) are presented for range-only and joint-range-intensity processors. This work extends previous studies in three ways: (1) fine-range information is included; (2) maximum-likelihood estimation of an unknown range plane is performed; and (3)

Thomas J. Green; Jeffrey H. Shapiro; Murali M. Menon

1991-01-01

138

High-resolution 3D coherent laser radar imaging  

Microsoft Academic Search

The Super-resolution Sensor System (S3) program is an ambitious effort to exploit the maximum information a laser-based sensor can obtain. At Lockheed Martin Coherent Technologies (LMCT), we are developing methods of incorporating multi-function operation (3D imaging, vibrometry, polarimetry, aperture synthesis, etc.) into a single device. The waveforms will be matched to the requirements of both hardware (e.g., optical amplifiers, modulators)

Joseph Buck; Andrew Malm; Andrew Zakel; Brian Krause; Bruce Tiemann

2007-01-01

139

Combined registration of 3D tibia and femur implant models in 3D magnetic resonance images  

NASA Astrophysics Data System (ADS)

The most frequent reasons for revision of total knee arthroplasty are loosening and abnormal axial alignment leading to an unphysiological kinematic of the knee implant. To get an idea about the postoperative kinematic of the implant, it is essential to determine the position and orientation of the tibial and femoral prostheses. Therefore we developed a registration method for fitting 3D CAD models of knee joint prostheses into a 3D MR image. This rigid registration is the basis for a quantitative analysis of the kinematics of knee implants. Firstly, the surface data of the prosthesis models are converted into a voxel representation; a recursive algorithm determines all boundary voxels of the original triangular surface data. Secondly, an initial preconfiguration of the implants by the user is still necessary for the following step: the user has to perform a rough preconfiguration of both remaining prosthesis models, so that the fine matching process gets a reasonable starting point. After that, an automated gradient-based fine matching process determines the best absolute position and orientation: this iterative process changes all 6 parameters (3 rotational and 3 translational parameters) of a model by a minimal amount until a maximum value of the matching function is reached. To examine the spread of the final solutions of the registration, the interobserver variability was measured in a group of testers. This variability, calculated by the relative standard deviation, improved from about 50% (pure manual registration) to 0.5% (rough manual preconfiguration and subsequent fine registration with the automatic fine matching process).
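The fine-matching idea (iteratively adjusting three rotations and three translations of a surface model to maximize a matching function evaluated in the image) might be sketched as below. The gradient-magnitude score and the use of SciPy's optimizer and rotation utilities are illustrative assumptions, not the authors' algorithm.

import numpy as np
from scipy.ndimage import map_coordinates
from scipy.optimize import minimize
from scipy.spatial.transform import Rotation

def transform(points, params):
    """Apply a rotation (Euler angles, rad) and translation to (N, 3) points."""
    angles, t = params[:3], params[3:]
    return Rotation.from_euler("xyz", angles).apply(points) + t

def match_score(params, model_pts, grad_mag):
    """Negative mean image-gradient magnitude sampled at the transformed model surface."""
    p = transform(model_pts, params).T      # shape (3, N), voxel coordinates
    vals = map_coordinates(grad_mag, p, order=1, mode="constant")
    return -float(vals.mean())

# model_pts: (N, 3) boundary voxels of a prosthesis model (hypothetical array)
# grad_mag:  3-D gradient-magnitude image of the MR volume (hypothetical array)
# x0:        rough manual preconfiguration, six parameters
# res = minimize(match_score, x0, args=(model_pts, grad_mag), method="Powell")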

Englmeier, Karl-Hans; Siebert, Markus; von Eisenhart-Rothe, Ruediger; Graichen, Heiko

2008-03-01

140

Automated Identification of Fiducial Points on 3D Torso Images  

PubMed Central

Breast reconstruction is an important part of the breast cancer treatment process for many women. Recently, 2D and 3D images have been used by plastic surgeons for evaluating surgical outcomes. Distances between different fiducial points are frequently used as quantitative measures for characterizing breast morphology. Fiducial points can be directly marked on subjects for direct anthropometry, or can be manually marked on images. This paper introduces novel algorithms to automate the identification of fiducial points in 3D images. Automating the process will make measurements of breast morphology more reliable, reducing the inter- and intra-observer bias. Algorithms to identify three fiducial points, the nipples, sternal notch, and umbilicus, are described. The algorithms used for localization of these fiducial points are formulated using a combination of surface curvature and 2D color information. Comparison of the 3D co-ordinates of automatically detected fiducial points and those identified manually, and geodesic distances between the fiducial points are used to validate algorithm performance. The algorithms reliably identified the location of all three of the fiducial points. We dedicate this article to our late colleague and friend, Dr. Elisabeth K. Beahm. Elisabeth was both a talented plastic surgeon and physician-scientist; we deeply miss her insight and her fellowship. PMID:25288903

Kawale, Manas M; Reece, Gregory P; Crosby, Melissa A; Beahm, Elisabeth K; Fingeret, Michelle C; Markey, Mia K; Merchant, Fatima A

2013-01-01

141

Multimodal Imaging in Hereditary Retinal Diseases  

PubMed Central

Introduction. In this retrospective study we evaluated the multimodal visualization of retinal genetic diseases to better understand their natural course. Material and Methods. We reviewed the charts of 70 consecutive patients with different genetic retinal pathologies who had previously undergone multimodal imaging analyses. Genomic DNA was extracted from peripheral blood and genotyped at the known locus for the different diseases. Results. The medical records of 3 families of a 4-generation pedigree affected by North Carolina macular dystrophy were reviewed. A total of 8 patients with Stargardt disease were evaluated for their two main defining clinical characteristics, yellow subretinal flecks and central atrophy. Nine male patients with a previous diagnosis of choroideremia and eleven female carriers were evaluated. Fourteen patients with Best vitelliform macular dystrophy and 6 family members with autosomal recessive bestrophinopathy were included. Seven patients with enhanced s-cone syndrome were ascertained. Lastly, we included 3 unrelated patients with fundus albipunctatus. Conclusions. In hereditary retinal diseases, clinical examination is often not sufficient for evaluating the patient's condition. Retinal imaging then becomes important in making the diagnosis, in monitoring the progression of disease, and as a surrogate outcome measure of the efficacy of an intervention. PMID:23710333

Morara, Mariachiara; Veronese, Chiara; Nucci, Paolo; Ciardella, Antonio P.

2013-01-01

142

Pavement cracking measurements using 3D laser-scan images  

NASA Astrophysics Data System (ADS)

Pavement condition surveying is vital for pavement maintenance programs that ensure ride quality and traffic safety. This paper first introduces an automated pavement inspection system which uses a three-dimensional (3D) camera and a structured laser light to acquire dense transverse profiles of a pavement lane surface from a moving vehicle. After calibration, the 3D system can yield a depth resolution of 0.5 mm and a transverse resolution of 1.56 mm per pixel at 1.4 m camera height from the ground. The scanning rate of the camera can be set to a maximum of 5000 lines per second, allowing the density of scanned profiles to vary with the vehicle's speed. The paper then illustrates the algorithms that utilize 3D information to detect pavement distress, such as transverse, longitudinal and alligator cracking, and presents field tests on the system's repeatability when scanning a sample pavement in multiple runs at the same vehicle speed, at different vehicle speeds and under different weather conditions. The results show that this dedicated 3D system can capture accurate pavement images that detail surface distress, and obtain consistent crack measurements in repeated tests and under different driving and lighting conditions.

Ouyang, W.; Xu, B.

2013-10-01

143

Triangulation Based 3D Laser Imaging for Fracture Orientation Analysis  

NASA Astrophysics Data System (ADS)

Laser imaging has recently been identified as a potential tool for rock mass characterization. This contribution focuses on the application of triangulation-based, short-range laser imaging to determine fracture orientation and surface texture. This technology measures the distance to the target by triangulating the projected and reflected laser beams, and also records the reflection intensity. In this study, we acquired 3D laser images of rock faces using the Laser Camera System (LCS), a portable instrument developed by Neptec Design Group (Ottawa, Canada). The LCS uses an infrared laser beam and is immune to the lighting conditions. The maximum image resolution is 1024 x 1024 volumetric image elements. Depth resolution is 0.5 mm at 5 m. An above ground field trial was conducted at a blocky road cut with well-defined joint sets (Kingston, Ontario). An underground field trial was conducted at the Inco 175 Ore body (Sudbury, Ontario) where images were acquired in the dark and the joint set features were more subtle. At each site, from a distance of 3 m away from the rock face, a grid of six images (approximately 1.6 m by 1.6 m) was acquired at maximum resolution with 20% overlap between adjacent images. This corresponds to a density of 40 image elements per square centimeter. Polyworks, a high density 3D visualization software tool, was used to align and merge the images into a single digital triangular mesh. The conventional method of determining fracture orientations is by manual measurement using a compass. In order to be accepted as a substitute for this method, the LCS should be capable of performing at least to the capabilities of manual measurements. To compare fracture orientation estimates derived from the 3D laser images to manual measurements, 160 inclinometer readings were taken at the above ground site. Three prominent joint sets (strike/dip: 236/09, 321/89, 325/01) were identified by plotting the joint poles on a stereonet. Underground, two main joint sets (strike/dip: 060/00, 114/86) were identified from 49 manual inclinometer measurements. A stereonet of joint poles from the 3D laser data was generated using the commercial software Split-FX. Joint sets were identified successfully and their orientations correlated well with the hand measurements. However, Split-FX overlays a simple 2D grid of equal-sized triangles onto the 3D surface and requires significant user input. In a more automated approach, we have developed a MATLAB script which directly imports the Polyworks 3D triangular mesh. A typical mesh is composed of over 1 million triangles of variable sizes: smooth regions are represented by large triangles, whereas rough surfaces are captured by several smaller triangles. Using the triangle vertices, the script computes the strike and dip of each triangle. This approach opens possibilities for statistical analysis of a large population of fracture orientation estimates, including surface texture. The methodology will be used to evaluate both synthetic and field data.
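The per-triangle computation described above (strike and dip from the vertex coordinates) can be sketched in a few lines. The coordinate convention (x = east, y = north, z = up) and the right-hand-rule strike are assumptions, and this is an independent illustration rather than the MATLAB script mentioned in the record.

import numpy as np

def strike_dip(v0, v1, v2):
    """Strike (0-360 deg, right-hand rule) and dip (0-90 deg) of one triangle."""
    n = np.cross(np.asarray(v1, float) - v0, np.asarray(v2, float) - v0)
    if n[2] < 0:                                    # force the normal to point upward
        n = -n
    nx, ny, nz = n / np.linalg.norm(n)
    dip = np.degrees(np.arccos(nz))                 # angle of the plane from horizontal
    dip_dir = np.degrees(np.arctan2(nx, ny)) % 360  # azimuth of steepest descent
    strike = (dip_dir - 90.0) % 360                 # right-hand-rule strike
    return strike, dip

print(strike_dip([0, 0, 0], [1, 0, 0], [0, 1, 0]))  # horizontal triangle: dip = 0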

Mah, J.; Claire, S.; Steve, M.

2009-05-01

144

Detection of retinal nerve fiber layer defects on retinal fundus images for early diagnosis of glaucoma  

Microsoft Academic Search

Retinal nerve fiber layer defect (NFLD) is a major sign of glaucoma, which is the second leading cause of blindness in the world. Early detection of NFLDs is critical for improved prognosis of this progressive, blinding disease. We have investigated a computerized scheme for detection of NFLDs on retinal fundus images. In this study, 162 images, including 81 images with

Chisako Muramatsu; Yoshinori Hayashi; Akira Sawada; Yuji Hatanaka; Takeshi Hara; Tetsuya Yamamoto; Hiroshi Fujita

2010-01-01

145

Mesh generation from 3D multi-material images.  

PubMed

The problem of generating realistic computer models of objects represented by 3D segmented images is important in many biomedical applications. Labelled 3D images impose particular challenges for meshing algorithms because multi-material junctions form features such as surface patches, edges and corners which need to be preserved in the output mesh. In this paper, we propose a feature-preserving Delaunay refinement algorithm which can be used to generate high-quality tetrahedral meshes from segmented images. The idea is to explicitly sample corners and edges from the input image and to constrain the Delaunay refinement algorithm to preserve these features in addition to the surface patches. Our experimental results on segmented medical images have shown that, within a few seconds, the algorithm outputs a tetrahedral mesh in which each material is represented as a consistent submesh without gaps and overlaps. The optimization property of the Delaunay triangulation makes these meshes suitable for the purpose of realistic visualization or finite element simulations. PMID:20426123

Boltcheva, Dobrina; Yvinec, Mariette; Boissonnat, Jean-Daniel

2009-01-01

146

Sparse aperture 3D passive image sensing and recognition  

NASA Astrophysics Data System (ADS)

The way we perceive, capture, store, communicate and visualize the world has greatly changed in the past century. Novel three-dimensional (3D) imaging and display systems are being pursued both in academic and industrial settings. In many cases, these systems have revolutionized traditional approaches and/or enabled new technologies in other disciplines including medical imaging and diagnostics, industrial metrology, entertainment, robotics as well as defense and security. In this dissertation, we focus on novel aspects of sparse aperture multi-view imaging systems and their application in quantum-limited object recognition in two separate parts. In the first part, two concepts are proposed. First, a solution is presented that involves a generalized framework for 3D imaging using randomly distributed sparse apertures. Second, a method is suggested to extract the profile of objects in the scene through statistical properties of the reconstructed light field. In both cases, experimental results are presented that demonstrate the feasibility of the techniques. In the second part, the application of 3D imaging systems in sensing and recognition of objects is addressed. In particular, we focus on the scenario in which only tens of photons reach the sensor from the object of interest, as opposed to hundreds of billions of photons in normal imaging conditions. At this level, the quantum-limited behavior of light will dominate and traditional object recognition practices may fail. We suggest a likelihood-based object recognition framework that incorporates the physics of sensing at quantum-limited conditions. Sensor dark noise has been modeled and taken into account. This framework is applied to 3D sensing of thermal objects using visible spectrum detectors. Thermal objects as cold as 250 K are shown to provide enough signature photons to be sensed and recognized within background and dark noise with mature, visible band, image forming optics and detector arrays. The results suggest that one might not need to venture into exotic and expensive detector arrays and associated optics for sensing room-temperature thermal objects in complete darkness.

Daneshpanah, Mehdi

147

Feature detection on 3D images of dental imprints  

NASA Astrophysics Data System (ADS)

A computer vision approach for the extraction of feature points on 3D images of dental imprints is presented. The positions of feature points are needed for the measurement of a set of parameters for automatic diagnosis of malocclusion problems in orthodontics. The system for the acquisition of the 3D profile of the imprint, the procedure for the detection of the interstices between teeth, and the approach for the identification of the type of tooth are described, as well as the algorithm for the reconstruction of the surface of each type of tooth. A new approach for the detection of feature points, called the watershed algorithm, is described in detail. The algorithm is a two-stage procedure which tracks the position of local minima at four different scales and produces a final map of the position of the minima. Experimental results of the application of the watershed algorithm on actual 3D images of dental imprints are presented for molars, premolars and canines. The segmentation approach for the analysis of the shape of incisors is also described in detail.

Mokhtari, Marielle; Laurendeau, Denis

1994-09-01

148

Quantitative 3D Optical Imaging: Applications in Dosimetry and Biophysics  

NASA Astrophysics Data System (ADS)

Optical-CT has been shown to be a potentially useful imaging tool for the two very different spheres of biologists and radiation therapy physicists, but it has yet to live up to that potential. In radiation therapy, researchers have used optical-CT for the readout of 3D dosimeters, but it is yet to be a clinically relevant tool as the technology is too slow to be considered practical. Biologists have used the technique for structural imaging, but have struggled with emission tomography as the reality of photon attenuation for both excitation and emission has made the images quantitatively irrelevant. Dosimetry. The DLOS (Duke Large field of view Optical-CT Scanner) was designed and constructed to make 3D dosimetry utilizing optical-CT a fast and practical tool while maintaining the accuracy of readout of the previous, slower readout technologies. Upon construction/optimization/implementation of several components, including a diffuser, band-pass filter, registration mount, and fluid filtration system, the dosimetry system provides high-quality data comparable to or exceeding that of commercial products. In addition, a stray light correction algorithm was tested and implemented. The DLOS, in combination with the 3D dosimeter it was designed for, PRESAGE™, then underwent rigorous commissioning and benchmarking tests validating its performance against gold standard data including a set of 6 irradiations. DLOS commissioning tests resulted in sub-mm isotropic spatial resolution (MTF >0.5 for frequencies of 1.5 lp/mm) and a dynamic range of 60 dB. Flood field uniformity was 10% and stable after 45 minutes. Stray light proved to be small, due to telecentricity, but even the residual can be removed through deconvolution. Benchmarking tests showed the mean 3D passing gamma rate (3%, 3 mm, 5% dose threshold) over the 6 benchmark data sets was 97.3% ± 0.6% (range 96%-98%) for scans totaling 10 minutes, indicating excellent ability to perform 3D dosimetry while improving the speed of readout. Noise was low at 2% for 2 mm reconstructions. The DLOS/PRESAGE™ benchmark tests show consistently excellent performance, with very good agreement to simple known distributions. The telecentric design was critical to enabling fast (~15 min) imaging with minimal stray light artifacts. The system produces accurate isotropic 2 mm³ dose data over clinical volumes (e.g., 16 cm diameter phantoms, 12 cm height), and represents a uniquely useful and versatile new tool for commissioning complex radiotherapy techniques. The system also has wide versatility, and has successfully been used in preliminary tests with protons and with kV irradiations. Biology. Attenuation corrections for optical-emission-CT were done by modeling physical parameters in the imaging setup within the framework of an ordered subset expectation maximization (OSEM) iterative reconstruction algorithm. This process has a well-documented history in single photon emission computed tomography (SPECT), but is inherently simpler due to the lack of excitation photons to account for. Excitation source strength distribution, excitation and emission attenuation were modeled. The accuracy of the correction was investigated by imaging phantoms containing known distributions of attenuation and fluorophores.
The correction was validated on a manufactured phantom designed to give uniform emission in a central cuboidal region and later applied to a cleared mouse brain with GFP (green fluorescent protein)-labeled vasculature and a cleared 4T1 xenograft flank tumor with constitutive RFP (red fluorescent protein). Reconstructions were compared to corresponding slices imaged with a fluorescent dissection microscope. Significant optical-ECT attenuation artifacts were observed in the uncorrected phantom images and appeared up to 80% less intense than the verification image in the central region. The corrected phantom images showed excellent agreement with the verification image with only slight variations. The corrected tissue sample reconstructions showed general agreement with the verification images. Comp

Thomas, Andrew Stephen

149

Retrospective Illumination Correction of Retinal Images  

PubMed Central

A method for correction of nonhomogeneous illumination based on optimization of the parameters of a B-spline shading model with respect to Shannon's entropy is presented. The evaluation of Shannon's entropy is based on the Parzen windowing method (Mangin, 2000) with the spline-based shading model. This allows us to express the derivatives of the entropy criterion analytically, which enables efficient use of gradient-based optimization algorithms. Seven different gradient- and nongradient-based optimization algorithms were initially tested on a set of 40 simulated retinal images, generated by a model of the respective image acquisition system. Among the tested optimizers, the gradient-based optimizer with varying step was shown to have the fastest convergence while providing the best precision. The final algorithm proved capable of suppressing approximately 70% of the artificially introduced nonhomogeneous illumination. To assess the practical utility of the method, it was qualitatively tested on a set of 336 real retinal images; it proved able to substantially eliminate the illumination inhomogeneity in most cases. The main application field of this method is the preprocessing of retinal images, in preparation for reliable segmentation or registration. PMID:20671909

Kubecka, Libor; Jan, Jiri; Kolar, Radim

2010-01-01

150

The 3D model control of image processing  

NASA Technical Reports Server (NTRS)

Telerobotics studies remote control of distant robots by a human operator using supervisory or direct control. Even if the robot manipulators have vision or other senses, problems arise involving control, communications, and delay. The communication delays that may be expected with telerobots working in space stations while being controlled from an Earth lab have led to a number of experiments attempting to circumvent the problem. This delay in communication is a main motivating factor in moving from well-understood instantaneous hands-on manual control to less well understood supervisory control; the ultimate step would be the realization of a fully autonomous robot. The 3-D model control plays a crucial role in resolving many conflicting image-processing problems that are inherent in the bottom-up approach of most current machine vision processes. The 3-D model control approach is also capable of providing the necessary visual feedback information for both the control algorithms and the human operator.

Nguyen, An H.; Stark, Lawrence

1989-01-01

151

Development of an Image-Based Network Model of Retinal Vasculature  

Microsoft Academic Search

The paper presents an image-based network model of retinal vasculature taking account of the 3D vascular distribution of the retina. Mouse retinas were prepared using flat-mount technique and vascular images were obtained using confocal microscopy. The vascular morphometric information obtained from confocal images was used for the model development. The network model developed directly represents the vascular geometry of all

P. Ganesan; S. He; H. Xu

2010-01-01

152

Calibration of an intensity ratio system for 3D imaging  

NASA Astrophysics Data System (ADS)

An intensity ratio method for 3D imaging is proposed, with error analysis given for assessment and future improvements. The method is cheap and reasonably fast as it requires no mechanical scanning or laborious correspondence computation. One drawback of intensity ratio methods that hampers their widespread use is the undesirable change of image intensity. This is usually caused by the difference in reflection from different parts of an object surface and the automatic iris or gain control of the camera. In our method, the gray-level patterns used include a uniform pattern, a staircase pattern and a sawtooth pattern to make the system more robust against errors in intensity ratio. 3D information on the surface points of an object can be derived from the intensity ratios of the images by triangulation. A reference back plane is put behind the object to monitor the change in image intensity. Errors due to camera calibration, projector calibration, variations in intensity, imperfection of the slides, etc. are analyzed. Early experiments with the system using a Newvicon CCTV camera with back-plane intensity correction give a mean-square range error of about 0.5 percent. Extensive analysis of various errors is expected to yield methods for improving the accuracy.
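A minimal sketch of the intensity-ratio idea: each frame is first rescaled so that the reference back plane matches its calibration brightness (compensating for automatic iris or gain changes), the ratio of the sawtooth-pattern image to the uniform-pattern image then gives a per-pixel projector code, and depth follows from that code. The function names and the idealized linear code-to-depth mapping are illustrative assumptions standing in for the calibrated triangulation.

import numpy as np

def gain_corrected(img, backplane_region, backplane_reference):
    """Scale a frame so the reference back plane matches its calibration brightness,
    compensating for automatic iris / gain changes between exposures."""
    return img * (backplane_reference / (backplane_region.mean() + 1e-6))

def projector_code(img_sawtooth, img_uniform):
    """Intensity ratio of the sawtooth-pattern image to the uniform-pattern image."""
    return np.clip(img_sawtooth / (img_uniform + 1e-6), 0.0, 1.0)

def depth_from_code(code, z_near_mm, z_far_mm):
    """Idealized linear code-to-depth mapping, standing in for calibrated triangulation."""
    return z_near_mm + code * (z_far_mm - z_near_mm)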

Tsui, H. T.; Tang, K. C.

1989-03-01

153

FELIX 3D display: an interactive tool for volumetric imaging  

NASA Astrophysics Data System (ADS)

The FELIX 3D display belongs to the class of volumetric displays using the swept volume technique. It is designed to display images created by standard CAD applications, which can be easily imported and interactively transformed in real-time by the FELIX control software. The images are drawn on a spinning screen by acousto-optic, galvanometric or polygon mirror deflection units with integrated lasers and a color mixer. The modular design of the display enables the user to operate with several equal or different projection units in parallel and to use appropriate screens for the specific purpose. The FELIX 3D display is a compact, light, extensible and easy to transport system. It mainly consists of inexpensive standard, off-the-shelf components for an easy implementation. This setup makes it a powerful and flexible tool to keep track with the rapid technological progress of today. Potential applications include imaging in the fields of entertainment, air traffic control, medical imaging, computer aided design as well as scientific data visualization.

Langhans, Knut; Bahr, Detlef; Bezecny, Daniel; Homann, Dennis; Oltmann, Klaas; Oltmann, Krischan; Guill, Christian; Rieper, Elisabeth; Ardey, Goetz

2002-05-01

154

Imaging PVC gas pipes using 3-D GPR  

SciTech Connect

Over the years, many enhancements have been made by the oil and gas industry to improve the quality of seismic images. The GPR project at GTRI borrows heavily from these technologies in order to produce 3-D GPR images of PVC gas pipes. As will be demonstrated, improvements in GPR data acquisition, 3-D processing and visualization schemes yield good images of PVC pipes in the subsurface. Data have been collected in cooperation with the local gas company and at a test facility in Texas. Surveys were conducted over both a metal pipe and PVC pipes of diameters ranging from 1/2 in. to 4 in. at depths from 1 ft to 3 ft in different soil conditions. The metal pipe produced very good reflections and was used to fine tune and optimize the processing run stream. It was found that the following steps significantly improve the overall image: (1) Statics for drift and topography compensation, (2) Deconvolution, (3) Filtering and automatic gain control, (4) Migration for focusing and resolution, and (5) Visualization optimization. The processing flow implemented is relatively straightforward, simple to execute and robust under varying conditions. Future work will include testing resolution limits, effects of soil conditions, and leak detection.
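Step (3) of the processing flow, automatic gain control, can be sketched as a sliding-window RMS normalization of each trace so that late, weak reflections become visible. The window length and function names are illustrative assumptions, not the processing code described in this record.

import numpy as np

def agc(trace, window=51, eps=1e-12):
    """Divide a 1-D GPR trace by its sliding-window RMS amplitude."""
    power = np.convolve(trace ** 2, np.ones(window) / window, mode="same")
    return trace / (np.sqrt(power) + eps)

def agc_section(section, window=51):
    """Apply AGC trace by trace to a 2-D section (samples x traces)."""
    return np.apply_along_axis(agc, 0, section, window)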

Bradford, J.; Ramaswamy, M.; Peddy, C. [GTRI/HARC, Woodlands, TX (United States)]

1996-11-01

155

3D-LZ helicopter ladar imaging system  

NASA Astrophysics Data System (ADS)

A joint-service team led by the Air Force Research Laboratory's Munitions and Sensors Directorates completed a successful flight test demonstration of the 3D-LZ Helicopter LADAR Imaging System. This was a milestone demonstration in the development of technology solutions for a problem known as "helicopter brownout", the loss of situational awareness caused by swirling sand during approach and landing. The 3D-LZ LADAR was developed by H.N. Burns Engineering and integrated with the US Army Aeroflightdynamics Directorate's Brown-Out Symbology System aircraft state symbology aboard a US Army EH-60 Black Hawk helicopter. The combination of these systems provided an integrated degraded visual environment landing solution with landing zone situational awareness as well as aircraft guidance and obstacle avoidance information. Pilots from the U.S. Army, Air Force, Navy, and Marine Corps achieved a 77% landing rate in full brownout conditions at a test range at Yuma Proving Ground, Arizona. This paper will focus on the LADAR technology used in 3D-LZ and the results of this milestone demonstration.

Savage, James; Harrington, Walter; McKinley, R. Andrew; Burns, H. N.; Braddom, Steven; Szoboszlay, Zoltan

2010-04-01

156

3D laser optoacoustic ultrasonic imaging system for preclinical research  

NASA Astrophysics Data System (ADS)

In this work, we introduce a novel three-dimensional imaging system for in vivo high-resolution anatomical and functional whole-body visualization of small animal models, developed for preclinical or other types of biomedical research. The system (LOUIS-3DM) combines multi-wavelength optoacoustic and ultrawide-band laser ultrasound tomography to obtain coregistered maps of tissue optical absorption and acoustic properties, displayed within the skin outline of the studied animal. The most promising applications of the LOUIS-3DM include 3D angiography, cancer research, and longitudinal studies of the biological distribution of optoacoustic contrast agents (carbon nanotubes, metal plasmonic nanoparticles, etc.).

Ermilov, Sergey A.; Conjusteau, André; Hernandez, Travis; Su, Richard; Nadvoretskiy, Vyacheslav; Tsyboulski, Dmitri; Anis, Fatima; Anastasio, Mark A.; Oraevsky, Alexander A.

2013-03-01

157

Molecular Imaging of Retinal Disease  

PubMed Central

Abstract Imaging of the eye plays an important role in ocular therapeutic discovery and evaluation in preclinical models and patients. Advances in ophthalmic imaging instrumentation have enabled visualization of the retina at an unprecedented resolution. These developments have contributed toward early detection of the disease, monitoring of disease progression, and assessment of the therapeutic response. These powerful technologies are being further harnessed for clinical applications by configuring instrumentation to detect disease biomarkers in the retina. These biomarkers can be detected either by measuring the intrinsic imaging contrast in tissue, or by the engineering of targeted injectable contrast agents for imaging of the retina at the cellular and molecular level. Such approaches have promise in providing a window on dynamic disease processes in the retina such as inflammation and apoptosis, enabling translation of biomarkers identified in preclinical and clinical studies into useful diagnostic targets. We discuss recently reported and emerging imaging strategies for visualizing diverse cell types and molecular mediators of the retina in vivo during health and disease, and the potential for clinical translation of these approaches. PMID:23421501

Capozzi, Megan E.; Gordon, Andrew Y.; Penn, John S.

2013-01-01

158

3D Imaging of Tissue Integration with Porous Biomaterials  

PubMed Central

Porous biomaterials designed to support cellular infiltration and tissue formation play a critical role in implant fixation and engineered tissue repair. The purpose of this Leading Opinion Paper is to advocate the use of high resolution 3D imaging techniques as a tool to quantify extracellular matrix formation and vascular ingrowth within porous biomaterials and objectively compare different strategies for functional tissue regeneration. An initial over-reliance on qualitative evaluation methods may have contributed to the false perception that developing effective tissue engineering technologies would be relatively straightforward. Moreover, the lack of comparative studies with quantitative metrics in challenging pre-clinical models has made it difficult to determine which of the many available strategies to invest in or use clinically for companies and clinicians, respectively. This paper will specifically illustrate the use of microcomputed tomography (micro-CT) imaging with and without contrast agents to nondestructively quantify the formation of bone, cartilage, and vasculature within porous biomaterials. PMID:18635260

Guldberg, Robert E.; Duvall, Craig L.; Peister, Alexandra; Oest, Megan E.; Lin, Angela S.P.; Palmer, Ashley W.; Levenston, Marc E.

2008-01-01

159

Computing 3D head orientation from a monocular image sequence  

NASA Astrophysics Data System (ADS)

An approach for estimating 3D head orientation in a monocular image sequence is proposed. The approach employs recently developed image-based parameterized tracking for the face and face features to locate the area in which a sub-pixel parameterized shape estimation of the eye's boundary is performed. This involves tracking five points (four at the eye corners and one at the tip of the nose). We describe an approach that relies on the coarse structure of the face to compute orientation relative to the camera plane. Our approach employs projective invariance of the cross-ratios of the eye corners and anthropometric statistics to estimate the head yaw, roll and pitch. Analytical and experimental results are reported.
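The projective invariant used here, the cross-ratio of collinear points, is easy to state: it is preserved under perspective projection, so its value measured in the image can be compared with the corresponding anthropometric value on the face. The point format below is an assumption; the full pose recovery is not reproduced.

import numpy as np

def cross_ratio(p1, p2, p3, p4):
    """Cross-ratio (p1, p3; p2, p4) of four collinear 2-D points."""
    d = lambda a, b: np.linalg.norm(np.asarray(a, float) - np.asarray(b, float))
    return (d(p1, p3) * d(p2, p4)) / (d(p1, p4) * d(p2, p3))

# Equally spaced collinear points have cross-ratio 4/3, in the image as on the face:
print(cross_ratio([0, 0], [1, 0], [2, 0], [3, 0]))   # -> 1.333...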

Horprasert, Thanarat; Yacoob, Yaser; Davis, Larry S.

1997-02-01

160

Quantitative validation of 3D image registration techniques  

NASA Astrophysics Data System (ADS)

Multimodality images obtained from different medical imaging systems such as magnetic resonance (MR), computed tomography (CT), ultrasound (US), positron emission tomography (PET), single photon emission computed tomography (SPECT) provide largely complementary characteristic or diagnostic information. Therefore, it is an important research objective to `fuse' or combine this complementary data into a composite form which would provide synergistic information about the objects under examination. An important first step in the use of complementary fused images is 3D image registration, where multi-modality images are brought into spatial alignment so that the point-to-point correspondence between image data sets is known. Current research in the field of multimodality image registration has resulted in the development and implementation of several different registration algorithms, each with its own set of requirements and parameters. Our research has focused on the development of a general paradigm for measuring, evaluating and comparing the performance of different registration algorithms. Rather than evaluating the results of one algorithm under a specific set of conditions, we suggest a general approach to validation using simulation experiments, where the exact spatial relationship between data sets is known, along with phantom data, to characterize the behavior of an algorithm via a set of quantitative image measurements. This behavior may then be related to the algorithm's performance with real patient data, where the exact spatial relationship between multimodality images is unknown. Current results indicate that our approach is general enough to apply to several different registration algorithms. Our methods are useful for understanding the different sources of registration error and for comparing the results between different algorithms.

Holton Tainter, Kerrie S.; Taneja, Udita; Robb, Richard A.

1995-05-01

161

Non-rigid 2D-3D Medical Image Registration using Markov Random Fields  

E-print Network

2D-3D image registration is an important problem in medical imaging; this work addresses non-rigid 2D-3D medical image registration using Markov random fields and discrete optimization. Keywords: 2D-3D registration, medical imaging, Markov random fields, discrete optimization.

Ferrante, Enzo; Paragios, Nikos

162

Contrast Enhancement of Retinal Vasculature in Digital Fundus Image  

Microsoft Academic Search

Analyzing retinal fundus image is important for early detection of diseases related to the eye. However, in fundus images the contrast between retinal blood vessels and the background is very low. Hence, analyzing or visualizing tiny blood vessels is difficult. Fluorescein angiogram overcomes this imaging problem but it is an invasive procedure that leads to other physiological problems. In this

M. H. Ahmad Fadzil; H. Adi Nugroho; I. Lila Iznita

2009-01-01

163

Cone photoreceptor definition on adaptive optics retinal imaging  

E-print Network

Aims: To quantitatively analyse cone photoreceptor matrices on images captured on an adaptive optics (AO) retinal imaging system, a capability enabled by the recent emergence of high-resolution AO retinal imaging. (First author: Manickam Nick Muthiah.)

Guillas, Serge

164

Adaptive Optics Retinal Imaging: Emerging Clinical Applications  

PubMed Central

The human retina is a uniquely accessible tissue. Tools like scanning laser ophthalmoscopy (SLO) and spectral domain optical coherence tomography (SD-OCT) provide clinicians with remarkably clear pictures of the living retina. While the anterior optics of the eye permit such non-invasive visualization of the retina and associated pathology, these same optics induce significant aberrations that in most cases obviate cellular-resolution imaging. Adaptive optics (AO) imaging systems use active optical elements to compensate for aberrations in the optical path between the object and the camera. Applied to the human eye, AO allows direct visualization of individual rod and cone photoreceptor cells, RPE cells, and white blood cells. AO imaging has changed the way vision scientists and ophthalmologists see the retina, helping to clarify our understanding of retinal structure, function, and the etiology of various retinal pathologies. Here we review some of the advances made possible with AO imaging of the human retina, and discuss applications and future prospects for clinical imaging. PMID:21057346

Godara, Pooja; Dubis, Adam M.; Roorda, Austin; Duncan, Jacque L.; Carroll, Joseph

2010-01-01

165

Compensation of log-compressed images for 3-D ultrasound.  

PubMed

In this study, a Bayesian approach was used for 3-D reconstruction in the presence of multiplicative noise and nonlinear compression of the ultrasound (US) data. Ultrasound images are often considered as being corrupted by multiplicative noise (speckle). Several statistical models have been developed to represent the US data. However, commercial US equipment performs a nonlinear image compression that reduces the dynamic range of the US signal for visualization purposes. This operation changes the distribution of the image pixels, preventing a straightforward application of the models. In this paper, the nonlinear compression is explicitly modeled and considered in the reconstruction process, where the speckle noise present in the radio frequency (RF) US data is modeled with a Rayleigh distribution. The results obtained by considering the compression of the US data are then compared with those obtained assuming no compression. It is shown that the estimation performed using the nonlinear log-compression model leads to better results than those obtained with the Rayleigh reconstruction method. The proposed algorithm is tested with synthetic and real data and the results are discussed. The results have shown an improvement in the reconstruction results when the compression operation is included in the image formation model, leading to sharper images with enhanced anatomical details. PMID:12659912
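The observation model described in this record (Rayleigh-distributed speckle passed through a nonlinear log compression) can be sketched directly; the reconstruction itself is not reproduced. The compression gain and offset below are illustrative assumptions, as the true mapping is scanner dependent.

import numpy as np

rng = np.random.default_rng(0)

def observe(sigma, a=20.0, b=10.0):
    """Simulate one displayed pixel: Rayleigh speckle followed by log compression."""
    envelope = rng.rayleigh(scale=sigma)
    return a * np.log(envelope + 1e-12) + b

def invert_compression(y, a=20.0, b=10.0):
    """Undo the (assumed known) compression before fitting the Rayleigh model."""
    return np.exp((y - b) / a)

# Treating the displayed values as if they were Rayleigh data biases the estimate;
# inverting the compression first recovers the true scale (2.0 here).
samples = np.array([observe(2.0) for _ in range(10_000)])
sigma_hat = np.sqrt(np.mean(invert_compression(samples) ** 2) / 2)
print(sigma_hat)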

Sanches, João M; Marques, Jorge S

2003-02-01

166

High Resolution 3D Radar Imaging of Comet Interiors  

NASA Astrophysics Data System (ADS)

Knowing the interiors of comets and other primitive bodies is fundamental to our understanding of how planets formed. We have developed a Discovery-class mission formulation, Comet Radar Explorer (CORE), based on the use of previously flown planetary radar sounding techniques, with the goal of obtaining high resolution 3D images of the interior of a small primitive body. We focus on the Jupiter-Family Comets (JFCs) as these are among the most primitive bodies reachable by spacecraft. Scattered in from far beyond Neptune, they are ultimate targets of a cryogenic sample return mission according to the Decadal Survey. Other suitable targets include primitive NEOs, Main Belt Comets, and Jupiter Trojans. The approach is optimal for small icy bodies ~3-20 km diameter with spin periods faster than about 12 hours, since (a) navigation is relatively easy, (b) radar penetration is global for decameter wavelengths, and (c) repeated overlapping ground tracks are obtained. The science mission can be as short as ~1 month for a fast-rotating JFC. Bodies smaller than ~1 km can be globally imaged, but the navigation solutions are less accurate and the relative resolution is coarse. Larger comets are more interesting, but radar signal is unlikely to be reflected from depths greater than ~10 km. So, JFCs are excellent targets for a variety of reasons. We furthermore focus on the use of Solar Electric Propulsion (SEP) to rendezvous shortly after the comet's perihelion. This approach leaves us with ample power for science operations under dormant conditions beyond ~2-3 AU. This leads to a natural mission approach of distant observation, followed by closer inspection, terminated by a dedicated radar mapping orbit. Radar reflections are obtained from a polar orbit about the icy nucleus, which spins underneath. Echoes are obtained from a sounder operating at dual frequencies 5 and 15 MHz, with 1 and 10 MHz bandwidths respectively. The dense network of echoes is used to obtain global 3D images of interior structure to ~20 m, and to map dielectric properties (related to internal composition) to better than 200 m throughout. This is comparable in detail to modern 3D medical ultrasound, although we emphasize that the techniques are somewhat different. An interior mass distribution is obtained through spacecraft tracking, using data acquired during the close, quiet radar orbits. This is aligned with the radar-based images of the interior, and the shape model, to contribute to the multi-dimensional 3D global view. High-resolution visible imaging provides boundary conditions and geologic context to these interior views. An infrared spectroscopy and imaging campaign upon arrival reveals the time-evolving activity of the nucleus and the structure and composition of the inner coma, and the definition of surface units. CORE is designed to obtain a total view of a comet, from the coma to the active and evolving surface to the deep interior. Its primary science goal is to obtain clear images of internal structure and dielectric composition. These will reveal how the comet was formed, what it is made of, and how it 'works'. By making global yet detailed connections from interior to exterior, this knowledge will be an important complement to the Rosetta mission, and will lay the foundation for comet nucleus sample return by revealing the areas of shallow depth to 'bedrock', and relating accessible deposits to their originating provenances within the nucleus.

Asphaug, E. I.; Gim, Y.; Belton, M.; Brophy, J.; Weissman, P. R.; Heggy, E.

2012-12-01

167

Automated 3D segmentation of intraretinal layers from optic nerve head optical coherence tomography images  

NASA Astrophysics Data System (ADS)

Optical coherence tomography (OCT), being a noninvasive imaging modality, has begun to find vast use in the diagnosis and management of ocular diseases such as glaucoma, where the retinal nerve fiber layer (RNFL) has been known to thin. Furthermore, the recent availability of the considerably larger volumetric data with spectral-domain OCT has increased the need for new processing techniques. In this paper, we present an automated 3-D graph-theoretic approach for the segmentation of 7 surfaces (6 layers) of the retina from 3-D spectral-domain OCT images centered on the optic nerve head (ONH). The multiple surfaces are detected simultaneously through the computation of a minimum-cost closed set in a vertex-weighted graph constructed using edge/regional information, and subject to a priori determined varying surface interaction and smoothness constraints. The method also addresses the challenges posed by the presence of large blood vessels and the optic disc. The algorithm was compared to the average manual tracings of two observers on a total of 15 volumetric scans, and the border positioning error was found to be 7.25 +/- 1.08 µm and 8.94 +/- 3.76 µm for the normal and glaucomatous eyes, respectively. The RNFL thickness was also computed for 26 normal and 70 glaucomatous scans, where the glaucomatous eyes showed a significant thinning (p < 0.01, mean thickness 73.7 +/- 32.7 µm in normal eyes versus 60.4 +/- 25.2 µm in glaucomatous eyes).
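
The graph-based idea behind this kind of layer segmentation can be illustrated with a much simpler stand-in: detecting a single surface in a cost image under a smoothness constraint by dynamic programming. The sketch below uses a synthetic cost image and an assumed maximum jump of 2 pixels between neighboring A-scans; the paper's actual method solves the harder simultaneous multi-surface problem as a minimum-cost closed set.

```python
"""Simplified one-surface illustration of graph-based layer segmentation:
pick one depth per A-scan minimizing a cost image under a smoothness
constraint, solved by dynamic programming (a stand-in, not the paper's
multi-surface minimum-cost closed-set method)."""
import numpy as np

rng = np.random.default_rng(9)

# Synthetic cost image (rows = depth, cols = A-scans); low cost on the surface
rows, cols = 80, 120
cost = rng.random((rows, cols))
true_surface = (40 + 8 * np.sin(np.linspace(0, 3 * np.pi, cols))).astype(int)
cost[true_surface, np.arange(cols)] -= 2.0

max_jump = 2                                   # smoothness constraint (pixels)
acc = cost.copy()
back = np.zeros((rows, cols), dtype=int)
for c in range(1, cols):
    for r in range(rows):
        lo, hi = max(0, r - max_jump), min(rows, r + max_jump + 1)
        prev = acc[lo:hi, c - 1]
        back[r, c] = lo + int(np.argmin(prev))
        acc[r, c] += prev.min()

# Trace the minimum-cost path back from the last column
surf = np.zeros(cols, dtype=int)
surf[-1] = int(np.argmin(acc[:, -1]))
for c in range(cols - 1, 0, -1):
    surf[c - 1] = back[surf[c], c]

print("max abs error vs true surface (pixels):",
      int(np.abs(surf - true_surface).max()))
```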

Antony, Bhavna J.; Abràmoff, Michael D.; Lee, Kyungmoo; Sonkova, Pavlina; Gupta, Priya; Kwon, Young; Niemeijer, Meindert; Hu, Zhihong; Garvin, Mona K.

2010-03-01

168

Study of the performance of different subpixel image correlation methods in 3D digital image correlation.  

PubMed

The three-dimensional digital image correlation (3D-DIC) method is rapidly developing and is being widely applied in engineering and manufacturing. Despite its extensive use, the error caused by different image matching algorithms is seldom discussed. An algorithm for 3D speckle image generation is proposed, and the performance of different subpixel correlation algorithms is studied. Its advantage is that the simulated speckle images before and after deformation carry no texture interpolation bias, so the error from speckle interpolation can be neglected. An error criterion for 3D reconstruction is proposed. 3D speckle images were simulated, and the performance of four subpixel algorithms was assessed. Based on these results, a first-order Newton-Raphson iteration method and a gradient-based method are recommended for 3D-DIC measurement. PMID:20648187
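
A minimal, one-dimensional illustration of subpixel matching: integer-pixel normalized cross-correlation followed by a parabolic fit around the correlation peak. This is a simplified stand-in for the Newton-Raphson and gradient-based subpixel algorithms compared in the paper; the signal and the true shift are synthetic assumptions.

```python
"""Toy 1-D subpixel matching by correlation plus a parabolic peak fit
(a simplified stand-in for the subpixel DIC algorithms in the paper)."""
import numpy as np

rng = np.random.default_rng(1)
x = np.arange(256)
true_shift = 3.37                                   # assumed ground truth

ref = np.exp(-((x - 100.0) ** 2) / 200.0) + 0.02 * rng.standard_normal(256)
defm = (np.exp(-((x - 100.0 - true_shift) ** 2) / 200.0)
        + 0.02 * rng.standard_normal(256))

# Integer-pixel search by normalized cross-correlation
lags = np.arange(-10, 11)
ncc = [np.corrcoef(ref, np.roll(defm, -s))[0, 1] for s in lags]
k = int(np.argmax(ncc))

# Parabolic (quadratic) interpolation around the correlation peak
ym, y0, yp = ncc[k - 1], ncc[k], ncc[k + 1]
delta = 0.5 * (ym - yp) / (ym - 2 * y0 + yp)
print("estimated shift:", lags[k] + delta, " true shift:", true_shift)
```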

Hu, Zhenxing; Xie, Huimin; Lu, Jian; Hua, Tao; Zhu, Jianguo

2010-07-20

169

Performance assessment of 3D surface imaging technique for medical imaging applications  

NASA Astrophysics Data System (ADS)

Recent developments in optical 3D surface imaging technologies provide better ways to digitize 3D surfaces and their motion in real time. The non-invasive 3D surface imaging approach has great potential for many medical imaging applications, such as motion monitoring in radiotherapy and pre/post evaluation in plastic surgery and dermatology, to name a few. Various commercial 3D surface imaging systems have appeared on the market with different dimensions, speeds and accuracies. For clinical applications, accuracy, reproducibility and robustness across widely heterogeneous skin colors, tones, textures, shape properties and ambient lighting are crucial. To date, no systematic approach for evaluating the performance of different 3D surface imaging systems exists. In this paper, we present a systematic performance assessment approach for 3D surface imaging systems in medical applications. We use this assessment approach to examine a new real-time surface imaging system we developed, dubbed "Neo3D Camera", for image-guided radiotherapy (IGRT). The assessments include accuracy, field of view, coverage, repeatability, speed and sensitivity to environment, texture and color.
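
A sketch of how accuracy and repeatability of a 3D surface imaging system might be quantified from repeated scans of a known flat target. The data, noise level, and metric definitions here are assumptions for illustration and not the assessment protocol used for the Neo3D Camera.

```python
"""Sketch of accuracy / repeatability metrics for a 3D surface scanner,
using synthetic point clouds of a flat reference plate (assumed setup)."""
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(2)

# Dense reference surface: a flat plate at z = 0 (coordinates in mm)
gx, gy = np.meshgrid(np.linspace(0, 100, 200), np.linspace(0, 100, 200))
reference = np.column_stack([gx.ravel(), gy.ravel(), np.zeros(gx.size)])
tree = cKDTree(reference)

def scan(noise_mm=0.2, n=5000):
    """One simulated scan: random samples of the plate plus sensor noise."""
    pts = rng.uniform(0, 100, size=(n, 2))
    z = noise_mm * rng.standard_normal(n)
    return np.column_stack([pts, z])

scans = [scan() for _ in range(5)]

# Accuracy: RMS point-to-reference distance, pooled over repeated scans
d = np.concatenate([tree.query(s)[0] for s in scans])
print("accuracy (RMS, mm):", np.sqrt(np.mean(d ** 2)))

# Repeatability: spread of the per-scan mean deviation across repeats
per_scan = [tree.query(s)[0].mean() for s in scans]
print("repeatability (SD of per-scan mean, mm):", np.std(per_scan))
```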

Li, Tuotuo; Geng, Jason; Li, Shidong

2013-03-01

170

Adaptive Optics Technology for High-Resolution Retinal Imaging  

PubMed Central

Adaptive optics (AO) is a technology used to improve the performance of optical systems by reducing the effects of optical aberrations. The direct visualization of the photoreceptor cells, capillaries and nerve fiber bundles represents the major benefit of adding AO to retinal imaging. Adaptive optics is opening a new frontier for clinical research in ophthalmology, providing new information on the early pathological changes of the retinal microstructures in various retinal diseases. We have reviewed AO technology for retinal imaging, providing information on the core components of an AO retinal camera. The most commonly used wavefront sensing and correcting elements are discussed. Furthermore, we discuss current applications of AO imaging to a population of healthy adults and to the most frequent causes of blindness, including diabetic retinopathy, age-related macular degeneration and glaucoma. We conclude our work with a discussion on future clinical prospects for AO retinal imaging. PMID:23271600

Lombardo, Marco; Serrao, Sebastiano; Devaney, Nicholas; Parravano, Mariacristina; Lombardo, Giuseppe

2013-01-01

171

Vector Acoustics, Vector Sensors, and 3D Underwater Imaging  

NASA Astrophysics Data System (ADS)

Vector acoustic data has two more dimensions of information than pressure data and may allow for 3D underwater imaging with much less data than with hydrophone data. The vector acoustic sensors measures the particle motions due to passing sound waves and, in conjunction with a collocated hydrophone, the direction of travel of the sound waves. When using a controlled source with known source and sensor locations, the reflection points of the sound field can be determined with a simple trigonometric calculation. I demonstrate this concept with an experiment that used an accelerometer based vector acoustic sensor in a water tank with a short-pulse source and passive scattering targets. The sensor consists of a three-axis accelerometer and a matched hydrophone. The sound source was a standard transducer driven by a short 7 kHz pulse. The sensor was suspended in a fixed location and the hydrophone was moved about the tank by a robotic arm to insonify the tank from many locations. Several floats were placed in the tank as acoustic targets at diagonal ranges of approximately one meter. The accelerometer data show the direct source wave as well as the target scattered waves and reflections from the nearby water surface, tank bottom and sides. Without resorting to the usual methods of seismic imaging, which in this case is only two dimensional and relied entirely on the use of a synthetic source aperture, the two targets, the tank walls, the tank bottom, and the water surface were imaged. A directional ambiguity inherent to vector sensors is removed by using collocated hydrophone data. Although this experiment was in a very simple environment, it suggests that 3-D seismic surveys may be achieved with vector sensors using the same logistics as a 2-D survey that uses conventional hydrophones. This work was supported by the Office of Naval Research, program element 61153N.

Lindwall, D.

2007-12-01

172

3D Slicer as an Image Computing Platform for the Quantitative Imaging Network  

PubMed Central

Quantitative analysis has tremendous but mostly unrealized potential in healthcare to support objective and accurate interpretation of the clinical imaging. In 2008, the National Cancer Institute began building the Quantitative Imaging Network (QIN) initiative with the goal of advancing quantitative imaging in the context of personalized therapy and evaluation of treatment response. Computerized analysis is an important component contributing to reproducibility and efficiency of the quantitative imaging techniques. The success of quantitative imaging is contingent on robust analysis methods and software tools to bring these methods from bench to bedside. 3D Slicer is a free open source software application for medical image computing. As a clinical research tool, 3D Slicer is similar to a radiology workstation that supports versatile visualizations but also provides advanced functionality such as automated segmentation and registration for a variety of application domains. Unlike a typical radiology workstation, 3D Slicer is free and is not tied to specific hardware. As a programming platform, 3D Slicer facilitates translation and evaluation of the new quantitative methods by allowing the biomedical researcher to focus on the implementation of the algorithm, and providing abstractions for the common tasks of data communication, visualization and user interface development. Compared to other tools that provide aspects of this functionality, 3D Slicer is fully open source and can be readily extended and redistributed. In addition, 3D Slicer is designed to facilitate the development of new functionality in the form of 3D Slicer extensions. In this paper, we present an overview of 3D Slicer as a platform for prototyping, development and evaluation of image analysis tools for clinical research applications. To illustrate the utility of the platform in the scope of QIN, we discuss several use cases of 3D Slicer by the existing QIN teams, and we elaborate on the future directions that can further facilitate development and validation of imaging biomarkers using 3D Slicer. PMID:22770690

Fedorov, Andriy; Beichel, Reinhard; Kalpathy-Cramer, Jayashree; Finet, Julien; Fillion-Robin, Jean-Christophe; Pujol, Sonia; Bauer, Christian; Jennings, Dominique; Fennessy, Fiona; Sonka, Milan; Buatti, John; Aylward, Stephen; Miller, James V.; Pieper, Steve; Kikinis, Ron

2012-01-01

173

Acoustic 3-D Imaging Unveils Swimming Behavior of Microscopic Ocean Plankton  

NSF Publications Database

... Physics Press Release 05-069: Acoustic 3-D Imaging Unveils Swimming Behavior of Microscopic ... sea creatures. Now, using a newly developed 3-D imaging system called "Fish TV," an international ...

174

High-resolution 3-D refractive index imaging and Its biological applications  

E-print Network

This thesis presents a theory of 3-D imaging in partially coherent light under a non-paraxial condition. The transmission cross-coefficient (TCC) has been used to characterize partially coherent imaging in 2-D and 3-D ...

Sung, Yongjin

2011-01-01

175

Brain surface maps from 3-D medical images  

NASA Astrophysics Data System (ADS)

The anatomic and functional localization of brain lesions for neurologic diagnosis and brain surgery is facilitated by labeling the cortical surface in 3D images. This paper presents a method which extracts cortical contours from magnetic resonance (MR) image series and then produces a planar surface map which preserves important anatomic features. The resultant map may be used for manual anatomic localization as well as for further automatic labeling. Outer contours are determined on MR cross-sectional images by following the clear boundaries between gray matter and cerebral-spinal fluid, skipping over sulci. Carrying this contour below the surface by shrinking it along its normal produces an inner contour that alternately intercepts gray matter (sulci) and white matter along its length. This procedure is applied to every section in the set, and the image (grayscale) values along the inner contours are radially projected and interpolated onto a semi-cylindrical surface with axis normal to the slices and large enough to cover the whole brain. A planar map of the cortical surface results by flattening this cylindrical surface. The projection from inner contour to cylindrical surface is unique in the sense that different points on the inner contour correspond to different points on the cylindrical surface. As the outer contours are readily obtained by automatic segmentation, cortical maps can be made directly from an MR series.
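
The cylindrical projection and flattening step described above can be sketched as follows: for each slice, grayscale samples along the inner contour keep only their angle about the cylinder axis and their slice index, and are resampled onto a regular (angle, slice) grid. The contours, gray values, and grid sizes below are synthetic assumptions.

```python
"""Minimal sketch of projecting per-slice inner-contour samples onto a
cylinder and flattening it into a planar cortical map (synthetic data)."""
import numpy as np

n_slices, n_theta = 60, 180
flat_map = np.zeros((n_slices, n_theta))   # rows: slices, cols: angle bins
theta_grid = np.linspace(-np.pi, np.pi, n_theta, endpoint=False)

rng = np.random.default_rng(3)
for z in range(n_slices):
    # Synthetic inner contour: roughly circular, with grayscale samples that
    # alternate as the contour crosses sulci (low) and gyri (high)
    t = np.sort(rng.uniform(-np.pi, np.pi, 400))
    r = 60 + 5 * np.sin(6 * t)
    x, y = r * np.cos(t), r * np.sin(t)
    gray = 120 + 60 * np.sign(np.sin(6 * t))

    # Radial projection onto the cylinder: a contour point keeps only its
    # angle about the cylinder axis; flattening then uses (theta, slice).
    theta = np.arctan2(y, x)
    order = np.argsort(theta)
    flat_map[z] = np.interp(theta_grid, theta[order], gray[order],
                            period=2 * np.pi)

print("planar cortical map shape:", flat_map.shape)
```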

Lu, Jiuhuai; Hansen, Eric W.; Gazzaniga, Michael S.

1991-06-01

176

Fast 3-D Tomographic Microwave Imaging for Breast Cancer Detection  

PubMed Central

Microwave breast imaging (using electromagnetic waves of frequencies around 1 GHz) has mostly remained at the research level for the past decade, gaining little clinical acceptance. The major hurdles limiting patient use are both at the hardware level (challenges in collecting accurate and noncorrupted data) and software level (often plagued by unrealistic reconstruction times in the tens of hours). In this paper we report improvements that address both issues. First, the hardware is able to measure signals down to levels compatible with sub-centimeter image resolution while keeping an exam time under 2 min. Second, the software overcomes the enormous time burden and produces similarly accurate images in less than 20 min. The combination of the new hardware and software allows us to produce and report here the first clinical 3-D microwave tomographic images of the breast. Two clinical examples are selected out of 400+ exams conducted at the Dartmouth Hitchcock Medical Center (Lebanon, NH). The first example demonstrates the potential usefulness of our system for breast cancer screening while the second example focuses on therapy monitoring. PMID:22562726

Meaney, Paul M.; Kaufman, Peter A.; diFlorio-Alexander, Roberta M.; Paulsen, Keith D.

2013-01-01

177

3D Chemical and Elemental Imaging by STXM Spectrotomography  

SciTech Connect

Spectrotomography based on the scanning transmission x-ray microscope (STXM) at the 10ID-1 spectromicroscopy beamline of the Canadian Light Source was used to study two selected unicellular microorganisms. Spatial distributions of sulphur globules, calcium, protein, and polysaccharide in sulphur-metabolizing bacteria (Allochromatium vinosum) were determined at the S 2p, C 1s, and Ca 2p edges. 3D chemical mapping showed that the sulphur globules are located inside the bacteria with a strong spatial correlation with calcium ions (it is most probably calcium carbonate from the medium; however, with STXM the distribution and localization in the cell can be made visible, which is very interesting for a biologist) and polysaccharide-rich polymers, suggesting an influence of the organic components on the formation of the sulphur and calcium deposits. A second study investigated copper accumulating in yeast cells (Saccharomyces cerevisiae) treated with copper sulphate. 3D elemental imaging at the Cu 2p edge showed that Cu(II) is reduced to Cu(I) on the yeast cell wall. A novel needle-like wet cell sample holder for STXM spectrotomography studies of fully hydrated samples is discussed.

Wang, J.; Karunakaran, C.; Lu, Y.; Hormes, J. [Canadian Light Source Inc., University of Saskatchewan, Saskatoon, SK S7N 0X4 (Canada); Hitchcock, A. P. [BIMR, McMaster University, Hamilton, ON L8S 4M1 (Canada); Prange, A. [Microbiology and Food Hygiene, Niederrhein University of Applied Sciences, Moenchengladbach (Germany); Institute for Microbiology and Virology, University of Witten/Herdecke, Witten (Germany); Center for Advanced Microstructures and Devices (CAMD), Louisiana State University, Baton Rouge, LA (United States); Franz, B. [Microbiology and Food Hygiene, Niederrhein University of Applied Sciences, Moenchengladbach (Germany); Harkness, T. [College of Medicine, University of Saskatchewan, Saskatoon, SK S7N 5E5 (Canada); Obst, M. [Center for Applied Geoscience, Tuebingen University, Tuebingen (Germany)

2011-09-09

178

Myocardial strains from 3D displacement encoded magnetic resonance imaging  

PubMed Central

Background The ability to measure and quantify myocardial motion and deformation provides a useful tool to assist in the diagnosis, prognosis and management of heart disease. The recent development of magnetic resonance imaging methods, such as harmonic phase analysis of tagging and displacement encoding with stimulated echoes (DENSE), make detailed non-invasive 3D kinematic analyses of human myocardium possible in the clinic and for research purposes. A robust analysis method is required, however. Methods We propose to estimate strain using a polynomial function which produces local models of the displacement field obtained with DENSE. Given a specific polynomial order, the model is obtained as the least squares fit of the acquired displacement field. These local models are subsequently used to produce estimates of the full strain tensor. Results The proposed method is evaluated on a numerical phantom as well as in vivo on a healthy human heart. The evaluation showed that the proposed method produced accurate results and showed low sensitivity to noise in the numerical phantom. The method was also demonstrated in vivo by assessment of the full strain tensor and to resolve transmural strain variations. Conclusions Strain estimation within a 3D myocardial volume based on polynomial functions yields accurate and robust results when validated on an analytical model. The polynomial field is capable of resolving the measured material positions from the in vivo data, and the obtained in vivo strains values agree with previously reported myocardial strains in normal human hearts. PMID:22533791
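
A reduced 2-D sketch of the core idea: fit a low-order polynomial to a noisy displacement field by least squares and read the strain off the fitted displacement gradient. For brevity this uses a first-order polynomial and the small-strain tensor on synthetic data, rather than the paper's local 3-D models and full Lagrangian strain.

```python
"""Sketch of strain estimation from a displacement field via a least-squares
polynomial fit (2-D, first order, synthetic data; illustrative only)."""
import numpy as np

rng = np.random.default_rng(4)

# Synthetic displacement field (U, V) with known uniform strain
X, Y = np.meshgrid(np.linspace(0, 10, 30), np.linspace(0, 10, 30))
exx_true, eyy_true, exy_true = 0.05, -0.02, 0.01
U = exx_true * X + exy_true * Y + 0.01 * rng.standard_normal(X.shape)
V = exy_true * X + eyy_true * Y + 0.01 * rng.standard_normal(X.shape)

# Least-squares fit of a first-order polynomial u = a0 + a1*x + a2*y
A = np.column_stack([np.ones(X.size), X.ravel(), Y.ravel()])
cu, *_ = np.linalg.lstsq(A, U.ravel(), rcond=None)
cv, *_ = np.linalg.lstsq(A, V.ravel(), rcond=None)

# Displacement gradient and (small-strain) tensor from the fitted model
du_dx, du_dy = cu[1], cu[2]
dv_dx, dv_dy = cv[1], cv[2]
E = np.array([[du_dx, 0.5 * (du_dy + dv_dx)],
              [0.5 * (du_dy + dv_dx), dv_dy]])
print("estimated strain tensor:\n", E)
```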

2012-01-01

179

3D Chemical and Elemental Imaging by STXM Spectrotomography  

NASA Astrophysics Data System (ADS)

Spectrotomography based on the scanning transmission x-ray microscope (STXM) at the 10ID-1 spectromicroscopy beamline of the Canadian Light Source was used to study two selected unicellular microorganisms. Spatial distributions of sulphur globules, calcium, protein, and polysaccharide in sulphur-metabolizing bacteria (Allochromatium vinosum) were determined at the S 2p, C 1s, and Ca 2p edges. 3D chemical mapping showed that the sulphur globules are located inside the bacteria with a strong spatial correlation with calcium ions (it is most probably calcium carbonate from the medium; however, with STXM the distribution and localization in the cell can be made visible, which is very interesting for a biologist) and polysaccharide-rich polymers, suggesting an influence of the organic components on the formation of the sulphur and calcium deposits. A second study investigated copper accumulating in yeast cells (Saccharomyces cerevisiae) treated with copper sulphate. 3D elemental imaging at the Cu 2p edge showed that Cu(II) is reduced to Cu(I) on the yeast cell wall. A novel needle-like wet cell sample holder for STXM spectrotomography studies of fully hydrated samples is discussed.

Wang, J.; Hitchcock, A. P.; Karunakaran, C.; Prange, A.; Franz, B.; Harkness, T.; Lu, Y.; Obst, M.; Hormes, J.

2011-09-01

180

3D-Imaging of cardiac structures using 3D heart models for planning in heart surgery: a preliminary study.  

PubMed

The aim of the study was to create an anatomically correct 3D rapid prototyping model (RPT) for patients with complex heart disease and altered geometry of the atria or ventricles, to facilitate planning and execution of the surgical procedure. Based on computed tomography (CT) and magnetic resonance imaging (MRI) images, regions of interest were segmented using the Mimics 9.0 software (Materialise, Leuven, Belgium). The segmented regions were the target volume and structures at risk. After generating an STL (StereoLithography) file from the patient's data set, the 3D printer Z 510 (4D Concepts, Gross-Gerau, Germany) created a 3D plaster model. The patient-individual 3D-printed RPT models were used to plan the resection of a left ventricular aneurysm and a right ventricular tumor. The surgeon was able to identify risk structures, assess the ideal resection lines and determine the residual shape after a reconstructive procedure (LV remodelling, infiltrating tumor resection). Using a 3D print of the LV aneurysm, reshaping of the left ventricle while ensuring sufficient LV volume was easily accomplished. The use of the 3D rapid prototyping model (RPT model) during resection of ventricular aneurysms and malignant cardiac tumors may facilitate the surgical procedure due to better planning and improved orientation. PMID:17925319

Jacobs, Stephan; Grunert, Ronny; Mohr, Friedrich W; Falk, Volkmar

2008-02-01

181

3D imaging of semiconductor components by discrete laminography  

NASA Astrophysics Data System (ADS)

X-ray laminography is a powerful technique for quality control of semiconductor components. Despite the advantages of nondestructive 3D imaging over 2D techniques based on sectioning, the acquisition time is still a major obstacle for practical use of the technique. In this paper, we consider the application of Discrete Tomography to laminography data, which can potentially reduce the scanning time while still maintaining a high reconstruction quality. By incorporating prior knowledge in the reconstruction algorithm about the materials present in the scanned object, far more accurate reconstructions can be obtained from the same measured data compared to classical reconstruction methods. We present a series of simulation experiments that illustrate the potential of the approach.

Batenburg, K. J.; Palenstijn, W. J.; Sijbers, J.

2014-06-01

182

High-Resolution Isotropic 3D Diffusion Tensor Imaging of the Human Brain  

E-print Network

High-Resolution Isotropic 3D Diffusion Tensor Imaging of the Human Brain. Xavier Golay, Hangyi Jiang, Peter C. M. van Zijl, and Susumu Mori. High-resolution cardiac-gated 3D diffusion tensor imaging ... Keywords: diffusion tensor imaging; high resolution; 3D isotropic imaging; white matter; brainstem.

Jiang, Hangyi

183

Needle placement for piriformis injection using 3-D imaging.  

PubMed

Piriformis syndrome is a pain syndrome originating in the buttock and is attributed to 6% - 8% of patients referred for the treatment of back and leg pain. The treatment of piriformis syndrome using fluoroscopy, computed tomography (CT), electromyography (EMG), and ultrasound (US) has become standard practice, and has evolved to include fluoroscopy and EMG with CT guidance. We present a case study of 5 successful piriformis injections using 3-D computer-assisted electromagnetic needle tracking coupled with ultrasound. A 6-degree-of-freedom electromagnetic position tracker was attached to the ultrasound probe, allowing the system to detect the position and orientation of the probe in the magnetic field. The tracked ultrasound probe was used to find the posterior superior iliac spine. Subsequently, 3 points were captured to register the ultrasound image with the CT or magnetic resonance image scan. After the registration was obtained, the navigation system visualized the tracked needle relative to the CT scan in real time using 2 orthogonal multi-planar reconstructions centered at the tracked needle tip. By comparison, a recent study revealed that fluoroscopically guided injections had 30% accuracy, whereas ultrasound-guided injections roughly tripled that accuracy. This novel technique exhibited a needle-guidance precision of 98% while advancing to the piriformis muscle and avoiding the sciatic nerve. The mean (±SD) procedure time was 19.08 (±4.9) minutes. This technique allows for electromagnetic instrument tip tracking with real-time 3-D guidance to the selected target. As with any new technique, a learning curve is expected; however, this technique could offer an alternative that minimizes radiation exposure. PMID:23703429
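
The landmark-based registration step mentioned above (capturing a few corresponding points to align ultrasound with CT/MR) can be sketched with the standard SVD-based rigid alignment of corresponding point sets. The landmark coordinates and the test transform below are assumed values; this is not the navigation system's implementation.

```python
"""Rigid (rotation + translation) alignment from a handful of corresponding
landmarks, as used to register tracked ultrasound to CT/MR; a generic
SVD-based sketch."""
import numpy as np

def rigid_from_landmarks(P, Q):
    """Find R, t minimizing ||R @ P_i + t - Q_i|| over corresponding points."""
    Pc, Qc = P.mean(axis=0), Q.mean(axis=0)
    H = (P - Pc).T @ (Q - Qc)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = Qc - R @ Pc
    return R, t

# Three landmarks picked in ultrasound space (mm); assumed example values
us_pts = np.array([[10.0, 5.0, 0.0], [40.0, 8.0, 2.0], [25.0, 30.0, -3.0]])

# The same landmarks in CT space: a known rotation/translation for testing
angle = np.deg2rad(20)
R_true = np.array([[np.cos(angle), -np.sin(angle), 0],
                   [np.sin(angle),  np.cos(angle), 0],
                   [0, 0, 1]])
ct_pts = us_pts @ R_true.T + np.array([5.0, -2.0, 12.0])

R, t = rigid_from_landmarks(us_pts, ct_pts)
print("residual (mm):", np.abs(us_pts @ R.T + t - ct_pts).max())
```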

Clendenen, Steven R; Candler, Shawn A; Osborne, Michael D; Palmer, Scott C; Duench, Stephanie; Glynn, Laura; Ghazi, Salim M

2013-01-01

184

Enhanced visualization of MR angiogram with modified MIP and 3D image fusion  

Microsoft Academic Search

We have developed a 3D image processing and display technique that includes image resampling, modification of MIP, volume rendering, and fusion of the MIP image with the volume-rendered image. This technique facilitates visualization of the 3D spatial relationship between vasculature and surrounding organs by overlapping the MIP image on the volume-rendered image of the organ. We applied this technique

Jong H. Kim; Kyoung M. Yeon; Man C. Han; Dong Hyuk Lee; Han I. Cho

1997-01-01

185

3D CHIRP SUB-BOTTOM IMAGING SYSTEM: DESIGN AND FIRST 3D VOLUME  

Microsoft Academic Search

Chirp sub-bottom profilers are marine acoustic devices that use a known and repeatable source signature (1 - 24 kHz) to produce decimetre vertical resolution cross-sections of the sub-seabed. Here the design and development of the first true 3D Chirp system is described. When developing the design, critical factors that had to be considered included spatial aliasing, and precise positioning

Jonathan M. Bull; Martin Gutowski; Justin K. Dix; Timothy J. Henstock; Peter Hogarth; Timothy G. Leighton; Paul R. White

186

Fractal analysis of the retinal vascular network in fundus images  

Microsoft Academic Search

Complexity of the retinal vascular network is quantified through the measurement of fractal dimension. A computerized approach enhances and segments the retinal vasculature in digital fundus images with an accuracy of 94% in comparison to the gold standard of manual tracing. Fractal analysis was performed on skeletonized versions of the network in 40 images from a study of stroke. Mean
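
Box counting is the usual way to estimate the fractal dimension of a skeletonized vessel pattern; the sketch below applies it to a synthetic binary image. Box sizes and the test pattern are assumptions for illustration.

```python
"""Box-counting estimate of fractal dimension for a binary (skeletonized)
vessel pattern; a generic sketch on synthetic data."""
import numpy as np

def box_count_dimension(img, sizes=(2, 4, 8, 16, 32, 64)):
    counts = []
    for s in sizes:
        # Number of s-by-s boxes containing at least one foreground pixel
        h, w = img.shape[0] // s * s, img.shape[1] // s * s
        blocks = img[:h, :w].reshape(h // s, s, w // s, s)
        counts.append(np.count_nonzero(blocks.any(axis=(1, 3))))
    # Slope of log(count) vs log(1/size) gives the box-counting dimension
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(sizes)), np.log(counts), 1)
    return slope

# Synthetic "vasculature": a few one-pixel-wide lines on a grid
img = np.zeros((256, 256), dtype=bool)
img[128, :] = True
img[:, 128] = True
for k in range(1, 256):
    img[k, k // 2 + 64] = True                # a slanted branch

print("box-counting dimension:", round(box_count_dimension(img), 2))
```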

T. J. MacGillivray; N. Patton; F. N. Doubal; C. Graham; J. M. Wardlaw

2007-01-01

187

Adaptive optics with pupil tracking for high resolution retinal imaging  

E-print Network

Adaptive optics with pupil tracking for high resolution retinal imaging. Betul Sahin et al. Abstract: Adaptive optics, when integrated into retinal imaging systems, compensates for rapidly changing ... with a compact adaptive optics flood illumination fundus camera where it was possible to compensate

Dainty, Chris

188

3D Fiber Tractography with Susceptibility Tensor Imaging  

PubMed Central

Gradient-echo MRI has revealed anisotropic magnetic susceptibility in the brain white matter. This magnetic susceptibility anisotropy can be measured and characterized with susceptibility tensor imaging (STI). In this study, a method of fiber tractography based on STI is proposed and demonstrated in the mouse brain. STI experiments of perfusion-fixed mouse brains were conducted at 7.0 T. The magnetic susceptibility tensor was calculated for each voxel with regularization and decomposed into its eigensystem. The major eigenvector is found to be aligned with the underlying fiber orientation. Following the orientation of the major eigenvector, we are able to map distinctive fiber pathways in 3D. As a comparison, diffusion tensor imaging (DTI) and DTI fiber tractography were also conducted on the same specimens. The relationship between STI and DTI fiber tracts was explored with similarities and differences identified. It is anticipated that the proposed method of STI tractography may provide a new way to study white matter fiber architecture. As STI tractography is based on physical principles that are fundamentally different from DTI, it may also be valuable for the ongoing validation of DTI tractography. PMID:21867759
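
Streamline tractography of the kind described, whether driven by a diffusion or a susceptibility tensor, amounts to repeatedly stepping along the major eigenvector of the local tensor. The toy sketch below does this on a synthetic 2-D tensor field with a fixed step size; it is a generic illustration, not the STI pipeline.

```python
"""Toy streamline tracking along the major eigenvector of a tensor field
(synthetic 2-D field, fixed step size; a generic sketch)."""
import numpy as np

def major_eigvec(tensor):
    w, v = np.linalg.eigh(tensor)
    return v[:, -1]                       # eigenvector of the largest eigenvalue

def track(seed, tensor_at, step=0.5, n_steps=200):
    pts = [np.asarray(seed, float)]
    p = np.asarray(seed, float)
    direction = major_eigvec(tensor_at(p))
    for _ in range(n_steps):
        e = major_eigvec(tensor_at(p))
        if np.dot(e, direction) < 0:      # keep a consistent orientation
            e = -e
        p = p + step * e
        direction = e
        pts.append(p.copy())
    return np.array(pts)

# Synthetic anisotropic field whose major axis rotates slowly with position
def tensor_at(p):
    ang = 0.02 * p[0]
    d = np.array([np.cos(ang), np.sin(ang)])
    return 2.0 * np.outer(d, d) + 0.2 * np.eye(2)

streamline = track(seed=(0.0, 0.0), tensor_at=tensor_at)
print("tracked", len(streamline), "points; end point:", streamline[-1])
```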

Liu, Chunlei; Li, Wei; Wu, Bing; Jiang, Yi; Johnson, G. Allan

2011-01-01

189

A joint estimation detection of Glaucoma progression in 3D spectral domain optical coherence tomography optic nerve head images  

NASA Astrophysics Data System (ADS)

Glaucoma is an ocular disease characterized by distinctive changes in the optic nerve head (ONH) and visual field. Glaucoma can strike without symptoms and causes blindness if it remains untreated. Therefore, early disease detection is important so that treatment can be initiated and blindness prevented. In this context, important advances in technology for non-invasive imaging of the eye have been made, providing quantitative tools to measure structural changes in ONH topography, an essential element for glaucoma detection and monitoring. 3D spectral-domain optical coherence tomography (SD-OCT), an optical imaging technique, has been commonly used to discriminate glaucomatous from healthy subjects. In this paper, we present a new framework for detection of glaucoma progression using 3D SD-OCT images. In contrast to previous works that rely on the retinal nerve fiber layer (RNFL) thickness measurement provided by commercially available spectral-domain optical coherence tomographs, we consider the whole 3D volume for change detection. To integrate a priori knowledge, and in particular the spatial voxel dependency in the change detection map, we propose the use of a Markov Random Field to handle such dependency. To accommodate the presence of false positive detections, the estimated change detection map is then used to classify a 3D SD-OCT image into the "non-progressing" and "progressing" glaucoma classes, based on a fuzzy logic classifier. We compared the diagnostic performance of the proposed framework to existing methods of progression detection.
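
The spatial regularization of the change map can be illustrated with a toy Potts-style MRF smoothed by an ICM-like parallel update; the evidence image, smoothness weight beta, and the synchronous update scheme are all simplifying assumptions and not the paper's exact formulation.

```python
"""Toy smoothing of a noisy binary change map under a Potts-style MRF prior
using an ICM-like parallel update (a simplified stand-in)."""
import numpy as np

rng = np.random.default_rng(5)

# Noisy per-voxel change evidence: positive inside a "progressing" region
evidence = -0.5 + rng.standard_normal((64, 64))
evidence[20:44, 20:44] += 1.5

labels = (evidence > 0).astype(int)          # initial (noisy) change map
beta = 1.0                                   # assumed smoothness weight

for _ in range(5):                           # a few update sweeps
    padded = np.pad(labels, 1, mode="edge")
    neigh = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
             padded[1:-1, :-2] + padded[1:-1, 2:])
    # Energy of labelling a voxel "change" (1) vs "no change" (0): a data
    # term from the evidence plus a Potts penalty for disagreeing neighbours;
    # pick the lower-energy label at every voxel.
    e1 = -evidence + beta * (4 - neigh)
    e0 = +evidence + beta * neigh
    labels = (e1 < e0).astype(int)

print("changed voxels after smoothing:", int(labels.sum()))
```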

Belghith, Akram; Bowd, Christopher; Weinreb, Robert N.; Zangwill, Linda M.

2014-03-01

190

Blood vessel segmentation methodologies in retinal images--a survey.  

PubMed

Retinal vessel segmentation algorithms are a fundamental component of automatic retinal disease screening systems. This work examines the blood vessel segmentation methodologies in two dimensional retinal images acquired from a fundus camera and a survey of techniques is presented. The aim of this paper is to review, analyze and categorize the retinal vessel extraction algorithms, techniques and methodologies, giving a brief description, highlighting the key points and the performance measures. We intend to give the reader a framework for the existing research; to introduce the range of retinal vessel segmentation algorithms; to discuss the current trends and future directions and summarize the open problems. The performance of algorithms is compared and analyzed on two publicly available databases (DRIVE and STARE) of retinal images using a number of measures which include accuracy, true positive rate, false positive rate, sensitivity, specificity and area under receiver operating characteristic (ROC) curve. PMID:22525589
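
The per-pixel performance measures listed in this survey (accuracy, sensitivity/TPR, specificity, FPR) can be computed directly from the confusion counts of a predicted vessel mask against a manual ground truth, as in this small sketch on synthetic masks.

```python
"""Pixel-wise evaluation measures commonly reported for retinal vessel
segmentation, computed on synthetic masks for illustration."""
import numpy as np

rng = np.random.default_rng(6)
truth = rng.random((128, 128)) < 0.12           # "manual" vessel mask (~12% vessels)
noise = rng.random((128, 128))
pred = truth.copy()
pred[noise < 0.03] = ~pred[noise < 0.03]        # flip a few pixels as errors

tp = np.count_nonzero(pred & truth)
tn = np.count_nonzero(~pred & ~truth)
fp = np.count_nonzero(pred & ~truth)
fn = np.count_nonzero(~pred & truth)

accuracy = (tp + tn) / truth.size
sensitivity = tp / (tp + fn)                    # true positive rate
specificity = tn / (tn + fp)
fpr = fp / (fp + tn)

print(f"accuracy={accuracy:.3f}  TPR={sensitivity:.3f}  "
      f"specificity={specificity:.3f}  FPR={fpr:.3f}")
```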

Fraz, M M; Remagnino, P; Hoppe, A; Uyyanonvara, B; Rudnicka, A R; Owen, C G; Barman, S A

2012-10-01

191

Segmentation, registration,and selective watermarking of retinal images  

E-print Network

In this dissertation, I investigated some fundamental issues related to medical image segmentation, registration, and watermarking. I used color retinal fundus images to perform my study because of the rich representation of different objects (blood...

Wu, Di

2006-08-16

192

Vessel Cross-Sectional Diameter Measurement on Color Retinal Image  

Microsoft Academic Search

Vessel cross-sectional diameter is an important feature for analyzing retinal vascular changes. In automated retinal image analysis, the measurement of vascular width is a complex process, as most of the vessels are only a few pixels wide or suffer from lack of contrast. In this paper, we propose a new method to measure the retinal blood vessel diameter which can be used

Alauddin Bhuiyan; Baikunth Nath; Joselito J. Chua; Ramamohanarao Kotagiri

2008-01-01

193

Analysis and dynamic 3D visualization of cerebral blood flow combining 3D and 4D MR image sequences  

NASA Astrophysics Data System (ADS)

In this paper we present a method for the dynamic visualization of cerebral blood flow. Spatio-temporal 4D magnetic resonance angiography (MRA) image datasets and 3D MRA datasets with high spatial resolution were acquired for the analysis of arteriovenous malformations (AVMs). One of the main tasks is the combination of the information from the 3D and 4D MRA image sequences. Initially, the vessel system is segmented in the 3D MRA dataset and a 3D surface model is generated. Then, temporal intensity curves are analyzed voxelwise in the 4D MRA image sequences. A curve fitting of the temporal intensity curves to a patient-individual reference curve is used to extract the bolus arrival times in the 4D MRA sequences. After non-linear registration of both MRA datasets, the extracted hemodynamic information is transferred to the surface model, where the time points of inflow can be visualized, color-coded, dynamically over time. The dynamic visualizations computed using the curve fitting method for the estimation of the bolus arrival times were rated superior compared to those computed using conventional approaches for bolus arrival time estimation. In summary, the suggested procedure allows a dynamic visualization of the individual hemodynamic situation and a better understanding during the visual evaluation of cerebral vascular diseases.
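
The bolus-arrival-time estimation by curve fitting can be sketched as a least-squares fit of a shifted and scaled reference curve to each voxel's temporal intensity curve; the gamma-variate reference shape, noise level, and parameter names below are assumptions for illustration, not the paper's exact model.

```python
"""Sketch of bolus-arrival-time estimation by fitting a shifted/scaled
reference curve to a voxel's temporal intensity curve (synthetic data)."""
import numpy as np
from scipy.optimize import curve_fit

t = np.linspace(0, 20, 80)                       # seconds

def reference(t):
    """Assumed patient reference bolus curve (gamma-variate shape)."""
    tt = np.clip(t, 0, None)
    return tt ** 2 * np.exp(-tt / 1.5)

def model(t, arrival, scale, baseline):
    return baseline + scale * reference(t - arrival)

rng = np.random.default_rng(7)
true_arrival = 6.3
voxel_curve = (model(t, true_arrival, 1.8, 10.0)
               + 0.3 * rng.standard_normal(t.size))

popt, _ = curve_fit(model, t, voxel_curve,
                    p0=(4.0, 1.0, np.median(voxel_curve)))
print("estimated bolus arrival time (s):", round(popt[0], 2))
```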

Forkert, Nils Daniel; Säring, Dennis; Fiehler, Jens; Illies, Till; Möller, Dietmar; Handels, Heinz

2009-02-01

194

REALISTIC 3-D SCENE MODELING FROM UNCALIBRATED IMAGE SEQUENCES Reinhard Koch  

E-print Network

REALISTIC 3-D SCENE MODELING FROM UNCALIBRATED IMAGE SEQUENCES. Reinhard Koch. Addresses the problem of obtaining photo-realistic 3D models of a scene from images alone with a structure-from-motion approach. The 3D scene is observed from multiple viewpoints by freely moving a camera around the object

Pollefeys, Marc

195

Reconstructing Plants in 3D from a Single Image using Analysis-by-Synthesis  

E-print Network

Reconstructing Plants in 3D from a Single Image using Analysis-by-Synthesis. Jérôme Guénard et al. ... from images. However, due to the high complexity of plant topology, dedicated methods for generating 3D plant models must be devised. We propose to generate a 3D model of a plant, using an analysis

Paris-Sud XI, Université de

196

3d Space-Varying Coefficient Models with Application to Diffusion Tensor Imaging  

E-print Network

3d Space-Varying Coefficient Models with Application to Diffusion Tensor Imaging. S. Heim et al. ... regressions is reformulated as a 3d space-varying coefficient model (SVCM) for the entire set of diffusion tensor images recorded on a 3d voxel grid. The SVCM unifies the three-step cascade of standard data

Marx, Brian D.

197

The 3D Visualization of Brain Anatomy from Diffusion-Weighted Magnetic Resonance Imaging Data  

E-print Network

The 3D Visualization of Brain Anatomy from Diffusion-Weighted Magnetic Resonance Imaging Data. ... Convolution creates images segmented by tissue type and incorporating a texture representing the 3D orientation of nerve fibers. Finally, streamtubes and hyperstreamlines represent the full 3D structure of nerve

Goodman, James R.

198

A Level Set Method for Anisotropic Geometric Diffusion in 3D Image Processing  

E-print Network

A Level Set Method for Anisotropic Geometric Diffusion in 3D Image Processing. Tobias Preusser and Martin Rumpf. (Figure: a noisy 3D echocardiographical dataset evolved by isotropic Perona-Malik diffusion.) Abstract: A new morphological multiscale method in 3D image processing is presented which

Preusser, Tobias

199

2D/3D Mixed Service in T-DMB System Using Depth Image Based Rendering  

Microsoft Academic Search

In this paper, we introduce a 2D/3D mixed service in the terrestrial digital multimedia broadcasting (T-DMB) system using depth-image-based rendering (DIBR). The 2D/3D mixed service is a 3D service type in which 3D contents are shown partially while a 2D video sequence is displayed on the entire screen, or vice versa. This service is very attractive because partial display of 3D contents

KwangHee Jung; Young Kyung Park; Joong Kyu Kim; Hyun Lee; KugJin Yun; NamHo Hur; JinWoong Kim

2008-01-01

200

3D-3D registration of partial capitate bones using spin-images  

NASA Astrophysics Data System (ADS)

It is often necessary to register partial objects in medical imaging. Due to limited field of view (FOV), the entirety of an object cannot always be imaged. This study presents a novel application of an existing registration algorithm to this problem. The spin-image algorithm [1] creates pose-invariant representations of global shape with respect to individual mesh vertices. These `spin-images,' are then compared for two different poses of the same object to establish correspondences and subsequently determine relative orientation of the poses. In this study, the spin-image algorithm is applied to 4DCT-derived capitate bone surfaces to assess the relative accuracy of registration with various amounts of geometry excluded. The limited longitudinal coverage under the 4DCT technique (38.4mm, [2]), results in partial views of the capitate when imaging wrist motions. This study assesses the ability of the spin-image algorithm to register partial bone surfaces by artificially restricting the capitate geometry available for registration. Under IRB approval, standard static CT and 4DCT scans were obtained on a patient. The capitate was segmented from the static CT and one phase of 4DCT in which the whole bone was available. Spin-image registration was performed between the static and 4DCT. Distal portions of the 4DCT capitate (10-70%) were then progressively removed and registration was repeated. Registration accuracy was evaluated by angular errors and the percentage of sub-resolution fitting. It was determined that 60% of the distal capitate could be omitted without appreciable effect on registration accuracy using the spin-image algorithm (angular error < 1.5 degree, sub-resolution fitting < 98.4%).
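
For context, a spin-image for one oriented point (vertex p with surface normal n) maps every other surface point to two pose-invariant coordinates, the radial distance from the normal line and the signed height along the normal, and accumulates them in a 2-D histogram. The sketch below does this for synthetic points on a sphere; bin sizes and ranges are assumptions, and this is not the registration pipeline itself.

```python
"""Minimal spin-image construction for one oriented vertex (p, n):
alpha = radial distance from the normal line, beta = signed height along
the normal, accumulated in a 2-D histogram (synthetic surface points)."""
import numpy as np

rng = np.random.default_rng(8)

# Synthetic "bone surface": points on a noisy sphere of ~20 mm radius
pts = rng.standard_normal((5000, 3))
pts /= np.linalg.norm(pts, axis=1, keepdims=True)
pts *= 20.0 + 0.3 * rng.standard_normal((5000, 1))

p = pts[0]                                  # oriented point: a vertex ...
n = p / np.linalg.norm(p)                   # ... with its outward normal

d = pts - p
beta = d @ n                                # height along the normal
alpha = np.sqrt(np.maximum(np.sum(d * d, axis=1) - beta ** 2, 0.0))

# Spin-image: 2-D histogram over (alpha, beta); bin count is an assumption
spin_image, _, _ = np.histogram2d(alpha, beta, bins=32,
                                  range=[[0, 40], [-40, 40]])
print("spin-image shape:", spin_image.shape,
      " points binned:", int(spin_image.sum()))
```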

Breighner, Ryan; Holmes, David R.; Leng, Shuai; An, Kai-Nan; McCollough, Cynthia; Zhao, Kristin

2013-03-01

201

Criterion independent hierarchical segmentation for unstructured 3D datasets - Application to range images  

E-print Network

... an unstructured input data, recognizing physical anomalies from medical 3D images and 3D scene modelling by robotics applications. Recognizing parts on assembly lines, reconstructing a CAD model from

Paris-Sud XI, Université de

202

Statistical methods for 2D-3D registration of optical and LIDAR images  

E-print Network

Fusion of 3D laser radar (LIDAR) imagery and aerial optical imagery is an efficient method for constructing 3D virtual reality models. One difficult aspect of creating such models is registering the optical image with the ...

Mastin, Dana Andrew

2009-01-01

203

Direct writing of digital images onto 3D surfaces  

Microsoft Academic Search

Purpose: Aims to develop a greyscale "painting" system by enabling the physical reproduction of digital texture maps on arbitrary 3D objects, selectively exposing "pixels" of photographic emulsion with a robot-mounted light source. Design/methodology/approach: After reviewing existing methods of "decorating" 3D components, the properties of photographic emulsion are introduced and the nature of the rendering process' pixels discussed.

Raymond C. W. Sung; Jonathan R. Corney; David P. Towers; Ian Black; Duncan P. Hand; Finlay McPherson; Doug E. R. Clark; Markus S. Gross

2006-01-01

204

A Robust Method for Filling Holes in 3D Meshes Based on Image Restoration  

Microsoft Academic Search

In this work a method for filling holes in 3D meshes based on a 2D image restoration algorithm is expounded. Since 3D data must be converted to a suitable input format, a 3D to 2D transformation is executed by projecting the 3D surface onto a grid. The storage of the depth information in every grid provides the 2D image which

Emiliano Pérez; Santiago Salamanca; Pilar Merchán; Antonio Adán; Carlos Cerrada; Inocente Cambero

2008-01-01

205

3D shape reconstructing system from multiple-view images using octree and silhouette  

Microsoft Academic Search

In this paper, we describe a 3D shape reconstruction system from multiple-view images using octree and silhouette. Our system consists of four calibrated cameras. Each camera is connected to a PC that locally extracts the silhouettes from the image captured by the camera. The four silhouette images and camera images are then sent to the host computer to perform 3D

Daisuke Iso; Hideo Saito; Shinji Ozawa

2001-01-01

206

Pattern Recognition Project : Vessel Detection in Retinal Images  

E-print Network

Pattern Recognition Project: Vessel Detection in Retinal Images. Instructor: Wei-Yang Lin. The goal of this project is to provide hands-on experience in building a vessel detection method for detecting retinal blood vessels as reported in the literature, e.g., [1, 2, 3]. Each group

Lin, Wei-Yang

207

Dual-view integral imaging 3D display using polarizer parallax barriers.  

PubMed

We propose a dual-view integral imaging (DVII) 3D display using polarizer parallax barriers (PPBs). The DVII 3D display consists of a display panel, a microlens array, and two PPBs. The elemental images (EIs) displayed on the left and right half of the display panel are captured from two different 3D scenes, respectively. The lights emitted from two kinds of EIs are modulated by the left and right half of the microlens array to present two different 3D images, respectively. A prototype of the DVII 3D display is developed, and the experimental results agree well with the theory. PMID:24787159

Wu, Fei; Wang, Qiong-Hua; Luo, Cheng-Gao; Li, Da-Hai; Deng, Huan

2014-04-01

208

Assessing 3D tunnel position in ACL reconstruction using a novel single image 3D-2D registration  

NASA Astrophysics Data System (ADS)

The routinely used procedure for evaluating tunnel positions following anterior cruciate ligament (ACL) reconstructions based on standard X-ray images is known to pose difficulties in terms of obtaining accurate measures, especially in providing three-dimensional tunnel positions. This is largely due to the variability in individual knee joint pose relative to X-ray plates. Accurate results were reported using postoperative CT. However, its extensive usage in clinical routine is hampered by its major requirement of having CT scans of individual patients, which is not available for most ACL reconstructions. These difficulties are addressed through the proposed method, which aligns a knee model to X-ray images using our novel single-image 3D-2D registration method and then estimates the 3D tunnel position. In the proposed method, the alignment is achieved by using a novel contour-based 3D-2D registration method wherein image contours are treated as a set of oriented points. However, instead of using some form of orientation weighting function and multiplying it with a distance function, we formulate the 3D-2D registration as a probability density estimation using a mixture of von Mises-Fisher-Gaussian (vMFG) distributions and solve it through an expectation maximization (EM) algorithm. Compared with the ground-truth established from postoperative CT, our registration method in an experiment using a plastic phantom showed accurate results with errors of (-0.43+/-1.19, 0.45+/-2.17, 0.23+/-1.05) and (0.03+/-0.55, -0.03+/-0.54, -2.73+/-1.64) mm. As for the entry point of the ACL tunnel, one of the key measurements, it was obtained with high accuracy of 0.53+/-0.30 mm distance errors.

Kang, X.; Yau, W. P.; Otake, Y.; Cheung, P. Y. S.; Hu, Y.; Taylor, R. H.

2012-02-01

209

3D holographic display with enlarged image using a concave reflecting mirror  

NASA Astrophysics Data System (ADS)

We propose a method to enlarge the 3D image in holographic display using a concave reflecting mirror based on the optical reversibility theorem. The holograms are computed using the look-up table (LUT) method, and the common data of the 3D objects are compressed to reduce the memory usage of LUT. Optical experiments are performed and the results show that 3D image can be magnified without any distortion in a shortened image distance, and the memory usage of LUT is reduced. Keywords: computer holography; holographic display; magnification of 3D image size; distortion of the image; compensation of the distortion.

Jia, Jia; Wang, Yongtian; Liu, Juan; Li, Xin; Pan, Yijie

2012-11-01

210

3-D Seismic Methods for Shallow Imaging Beneath Pavement  

E-print Network

The research presented in this dissertation focuses on survey design and acquisition of near-surface 3D seismic reflection and surface wave data on pavement. Increased efficiency for mapping simple subsurface interfaces ...

Miller, Brian

2013-05-31

211

Geometric Smoothing of 3D Surfaces and Non-linear Diffusion of 3D Images  

Microsoft Academic Search

In this paper we present a geometric smoothing technique for three-dimensional surfaces and images. The technique relies on curvature-dependent deformations of surfaces and the intuition that highly curved regions should move into their convexity by an amount proportional to their curvature. While this intuition in 2D has led to a well-defined process and formal theorems about its smoothing properties, the development of a similar

Predrag Neskovic; Benjamin B. Kimia

1995-01-01

212

Optimization of the open-loop liquid crystal adaptive optics retinal imaging system.  

PubMed

An open-loop adaptive optics (AO) system for retinal imaging was constructed using a liquid crystal spatial light modulator (LC-SLM) as the wavefront compensator. Due to the dispersion of the LC-SLM, there was only one illumination source for both aberration detection and retinal imaging in this system. To increase the field of view (FOV) for retinal imaging, a modified mechanical shutter was integrated into the illumination channel to control the size of the illumination spot on the fundus. The AO loop was operated in a pulsing mode, and the fundus was illuminated twice by two laser impulses in a single AO correction loop. As a result, the FOV for retinal imaging was increased to 1.7 deg without compromising the aberration detection accuracy. The correction precision of the open-loop AO system was evaluated in a closed-loop configuration; the residual error is approximately 0.0909λ (root-mean-square, RMS), and the Strehl ratio reaches 0.7217. Two subjects with different degrees of myopia (-3 D and -5 D) were tested. High-resolution images of capillaries and photoreceptors were obtained. PMID:22463033
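
The residual wavefront error and Strehl ratio quoted in this abstract are consistent with the extended Maréchal approximation S ≈ exp(-(2πσ)²), with σ the RMS error in wavelengths; the quick check below assumes this relation, although the authors may have computed the Strehl ratio differently.

```python
"""Quick consistency check: Strehl ratio from RMS wavefront error via the
extended Marechal approximation (an assumed relation, not the paper's code)."""
import math

sigma_rms_waves = 0.0909                 # residual RMS error from the abstract
strehl = math.exp(-(2 * math.pi * sigma_rms_waves) ** 2)
print(f"Strehl ratio ~ {strehl:.4f}")    # ~0.72, consistent with the abstract
```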

Kong, Ningning; Li, Chao; Xia, Mingliang; Li, Dayu; Qi, Yue; Xuan, Li

2012-02-01

213

Image-to-physical registration for image-guided interventions using 3-D ultrasound and an ultrasound imaging model.  

PubMed

We present a technique for automatic intensity-based image-to-physical registration of a 3-D segmentation for image-guided interventions. The registration aligns the segmentation with tracked and calibrated 3-D ultrasound (US) images of the target region. The technique uses a probabilistic framework and explicitly incorporates a model of the US image acquisition process. The rigid body registration parameters are varied to maximise the likelihood that the real US image(s) were formed using the US imaging model from the probe transducer position. The proposed technique is validated on images segmented from cardiac magnetic resonance imaging (MRI) data and 3-D US images acquired from 3 volunteers and 1 patient. We show that the accuracy of the algorithm is 2.6-4.2mm and the capture range is 9-18mm. The proposed technique has the potential to provide accurate image-to-physical registrations for a range of image guidance applications. PMID:19694263

King, Andrew P; Ma, Ying-Liang; Yao, Cheng; Jansen, Christian; Razavi, Reza; Rhode, Kawal S; Penney, Graeme P

2009-01-01

214

Hydraulic conductivity imaging from 3-D transient hydraulic tomography at several pumping/observation densities  

E-print Network

Hydraulic conductivity imaging from 3-D transient hydraulic tomography at several pumping/observation densities. Accepted 7 September 2013; published 13 November 2013. 3-D hydraulic tomography (3-D HT), in which ... (primarily hydraulic conductivity, K) is estimated by joint inversion of head change data from multiple

Barrash, Warren

215

Ultrasound Current Source Density Imaging (UCSDI) potentially improves 3-D mapping of bioelectric sources  

E-print Network

... on a 3D printer, each electrode can be placed anywhere on an XY grid (5 mm spacing) and individually ... as the acoustic wave propagates through a conducting material. This technique potentially improves 3-D mapping of bioelectric sources.

Witte, Russell S.

216

Medical Image Registration and Fusion with 3D CT and MR Data of Head  

Microsoft Academic Search

The purpose of this study is to register 3D images from computed tomography (CT) and magnetic resonance (MR) imaging and thereby integrate the information on hard and soft tissue. The slices with maximum areas detected in both 3D data sets were used as the corresponding slices to calculate the scale, rotation and translation parameters for 3D data registration

Chih-hua Huang; Ching-fen Jiang; Wen-hsu Sung

2006-01-01

217

A novel 3D terrain matching algorithm based on image laser radar  

Microsoft Academic Search

Imaging laser radar is an ideal imaging sensor for obtaining high-precision 3D terrain for terrain-aided navigation (TAN). For this application, decreasing the mismatching rate is very important. In this paper, the measurement model of an imaging laser radar is introduced and a novel 3D terrain matching algorithm with a low mismatching rate is presented. According to the theory

Junbin Gong; Hua Cheng; Jie Ma; Jinwen Tian

2008-01-01

218

DXSoil, a library for 3D image analysis in soil science  

Microsoft Academic Search

A comprehensive series of routines has been developed to extract structural and topological information from 3D images of porous media. The main application aims at feeding a pore network approach to simulate unsaturated hydraulic properties from soil core images. Beyond the application example, the successive algorithms presented in the paper allow, from any 3D object image, the extraction of the

Jean-fran-cois Delerue; Edith Perrier

2002-01-01

219

Using a genetic algorithm to register an uncalibrated image pair to a 3D surface model  

E-print Network

Using a genetic algorithm to register an uncalibrated image pair to a 3D surface model. Zsolt Jankó et al. ... a successful application of genetic algorithms to the registration of uncalibrated optical images to a 3D surface model. The problem is to find the projection matrices corresponding to the images in order

Chetverikov, Dmitry

220

Investigating 3D Geometry of Porous Media from High Resolution Images  

E-print Network

Investigating 3D Geometry of Porous Media from High Resolution Images. W. B. Lindquist et al. ... CAT and LSCM images of porous media are grey-scale images, usually ...

New York at Stoney Brook, State University of

221

A Level Set Method for Anisotropic Geometric Diffusion in 3D Image Processing  

Microsoft Academic Search

A new morphological multiscale method in 3D image processing is presented which combines the image processing methodology based on nonlinear diffusion equations and the theory of geometric evolution prob- lems. Its aim is to smooth level sets of a 3D image while simultaneously preserving geometric features such as edges and corners on the level sets. This is obtained by an

Martin Rumpf

2000-01-01

222

Intra-retinal layer segmentation in optical coherence tomography images.  

PubMed

Retinal layer thickness, evaluated as a function of spatial position from optical coherence tomography (OCT) images is an important diagnostics marker for many retinal diseases. However, due to factors such as speckle noise, low image contrast, irregularly shaped morphological features such as retinal detachments, macular holes, and drusen, accurate segmentation of individual retinal layers is difficult. To address this issue, a computer method for retinal layer segmentation from OCT images is presented. An efficient two-step kernel-based optimization scheme is employed to first identify the approximate locations of the individual layers, which are then refined to obtain accurate segmentation results for the individual layers. The performance of the algorithm was tested on a set of retinal images acquired in-vivo from healthy and diseased rodent models with a high speed, high resolution OCT system. Experimental results show that the proposed approach provides accurate segmentation for OCT images affected by speckle noise, even in sub-optimal conditions of low image contrast and presence of irregularly shaped structural features in the OCT images. PMID:20052083

Mishra, Akshaya; Wong, Alexander; Bizheva, Kostadinka; Clausi, David A

2009-12-21

223

Full 3D Rigid Body Automatic Motion Correction of MRI Images  

Microsoft Academic Search

We demonstrate the first successful automatic motion correction of 3D MRI data with rigid body motion in all six degrees of freedom. An existing 2D retrospective technique is extended to 3D with a shear-based factorization of 3D rotations and simultaneous optimization of motion corrections in all six degrees of freedom. Tests on motion corrupted 3D brain images from elderly research

Yi Su; Armando Manduca; Clifford R. Jack Jr.; E. Brian Welch; Richard L. Ehman

2001-01-01

224

Towards 3D ultrasound image based soft tissue tracking: a transrectal ultrasound prostate image alignment system.  

PubMed

The emergence of real-time 3D ultrasound (US) makes it possible to consider image-based tracking of subcutaneous soft tissue targets for computer-guided diagnosis and therapy. We propose a 3D transrectal US-based tracking system for precise prostate biopsy sample localisation. The aim is to improve sample distribution, to enable targeting of unsampled regions for repeated biopsies, and to make post-interventional quality controls possible. Because the patient is not immobilized, the prostate is mobile, and probe movements are constrained only by the rectum during biopsy acquisition, the tracking system must be able to estimate rigid transformations that are beyond the capture range of common image similarity measures. We propose a fast and robust multi-resolution attribute-vector registration approach that combines global and local optimization methods to solve this problem. Global optimization is performed on a probe movement model that reduces the dimensionality of the search space and thus renders optimization efficient. The method was tested on 237 prostate volumes acquired from 14 different patients for 3D to 3D and 3D to orthogonal 2D slices registration. The 3D-3D version of the algorithm converged correctly in 96.7% of all cases in 6.5 s with an accuracy of 1.41 mm (r.m.s.) and 3.84 mm (max). The 3D to slices method yielded a success rate of 88.9% in 2.3 s with an accuracy of 1.37 mm (r.m.s.) and 4.3 mm (max). PMID:18044549

Baumann, Michael; Mozer, Pierre; Daanen, Vincent; Troccaz, Jocelyne

2007-01-01

225

Imaging system for creating 3D block-face cryo-images of whole mice  

NASA Astrophysics Data System (ADS)

We developed a cryomicrotome/imaging system that provides high resolution, high sensitivity block-face images of whole mice or excised organs, and applied it to a variety of biological applications. With this cryo-imaging system, we sectioned cryo-preserved tissues at 2–40 μm thickness and acquired high resolution brightfield and fluorescence images with microscopic in-plane resolution (as good as 1.2 μm). Brightfield images of normal and pathological anatomy show exquisite detail, especially in the abdominal cavity. Multi-planar reformatting and 3D renderings allow one to interrogate 3D structures. In this report, we present brightfield images of mouse anatomy, as well as 3D renderings of organs. For the BPK mouse model of polycystic kidney disease, we compared brightfield cryo-images and kidney volumes to MRI. The color images provided greater contrast and resolution of cysts as compared to in vivo MRI. We note that color cryo-images are closer to what a researcher sees in dissection, making it easier for them to interpret image data. The combination of field of view, depth of field, ultra-high resolution and color/fluorescence contrast enables cryo-image volumes to provide details that cannot be found through in vivo imaging or other ex vivo optical imaging approaches. We believe that this novel imaging system will have applications that include identification of mouse phenotypes, characterization of diseases like blood vessel disease, kidney disease, and cancer, assessment of drug and gene therapy delivery and efficacy, and validation of other imaging modalities.

Roy, Debashish; Breen, Michael; Salvado, Olivier; Heinzel, Meredith; McKinley, Eliot; Wilson, David

2006-03-01

226

Estimating Density Gradients and Drivers from 3D Ionospheric Imaging  

NASA Astrophysics Data System (ADS)

The transition regions at the edges of the ionospheric storm-enhanced density (SED) are important for a detailed understanding of the mid-latitude physical processes occurring during major magnetic storms. At the boundary, the density gradients are evidence of the drivers that link the larger processes of the SED, with its connection to the plasmasphere and prompt-penetration electric fields, to the smaller irregularities that result in scintillations. For this reason, we present our estimates of both the plasma variation with horizontal and vertical spatial scale of 10 - 100 km and the plasma motion within and along the edges of the SED. To estimate the density gradients, we use Ionospheric Data Assimilation Four-Dimensional (IDA4D), a mature data assimilation algorithm that has been developed over several years and applied to investigations of polar cap patches and space weather storms [Bust and Crowley, 2007; Bust et al., 2007]. We use the density specification produced by IDA4D with a new tool for deducing ionospheric drivers from 3D time-evolving electron density maps, called Estimating Model Parameters from Ionospheric Reverse Engineering (EMPIRE). The EMPIRE technique has been tested on simulated data from TIMEGCM-ASPEN and on IDA4D-based density estimates with ongoing validation from Arecibo ISR measurements [Datta-Barua et al., 2009a; 2009b]. We investigate the SED that formed during the geomagnetic super storm of November 20, 2003. We run IDA4D at low-resolution continent-wide, and then re-run it at high (~10 km horizontal and ~5-20 km vertical) resolution locally along the boundary of the SED, where density gradients are expected to be highest. We input the high-resolution estimates of electron density to EMPIRE to estimate the ExB drifts and field-aligned plasma velocities along the boundaries of the SED. We expect that these drivers contribute to the density structuring observed along the SED during the storm. Bust, G. S. and G. Crowley (2007), Tracking of polar cap patches using data assimilation, J. Geophys. Res., 112, A05307, doi:10.1029/2005JA011597. Bust, G. S., G. Crowley, T. W. Garner, T. L. Gaussiran II, R. W. Meggs, C. N. Mitchell, P. S. J. Spencer, P. Yin, and B. Zapfe (2007) ,Four Dimensional GPS Imaging of Space-Weather Storms, Space Weather, 5, S02003, doi:10.1029/2006SW000237. Datta-Barua, S., G. S. Bust, G. Crowley, and N. Curtis (2009a), Neutral wind estimation from 4-D ionospheric electron density images, J. Geophys. Res., 114, A06317, doi:10.1029/2008JA014004. Datta-Barua, S., G. Bust, and G. Crowley (2009b), "Estimating Model Parameters from Ionospheric Reverse Engineering (EMPIRE)," presented at CEDAR, Santa Fe, New Mexico, July 1.

Datta-Barua, S.; Bust, G. S.; Curtis, N.; Reynolds, A.; Crowley, G.

2009-12-01

227

3D imaging and wavefront sensing with a plenoptic objective  

Microsoft Academic Search

Plenoptic cameras have been developed over recent years as a passive method for 3D scanning. Several superresolution algorithms have been proposed to compensate for the resolution decrease associated with lightfield acquisition through a microlens array. A number of multiview stereo algorithms have also been applied to extract depth information from plenoptic frames. Real-time systems have

J. M. Rodríguez-Ramos; J. P. Lüke; R. López; J. G. Marichal-Hernández; I. Montilla; J. Trujillo-Sevilla; B. Femenía; M. Puga; M. López; J. J. Fernández-Valdivia; F. Rosa; C. Dominguez-Conde; J. C. Sanluis; L. F. Rodríguez-Ramos

2011-01-01

228

Enhancement of retinal images: a critical evaluation of the technology  

NASA Astrophysics Data System (ADS)

Evaluation of retinal images is essential to modern ophthalmic care. With the advent of image processing equipment, digital recording and processing of retinal images is starting to replace the standard film based fundus photography. The ability to enhance images is cited as one of the major benefits of this expensive technology. This paper critically reviews the practices employed in the image enhancement literature. It is argued that the papers published to date have not presented convincing evidence regarding the diagnostic value of retinal image enhancement. The more elaborate studies in radiology suggest, at best, modest diagnostic improvement with enhancement. The special difficulties associated with the demonstration of an improved diagnosis in ophthalmic imaging are discussed in terms of the diagnostic task and the selection of study populations.

Peli, Eli

1993-10-01

229

Imaging Retinal Blood Flow with Laser Speckle Flowmetry  

PubMed Central

Laser speckle flowmetry (LSF) was initially developed to measure blood flow in the retina. More recently, its primary application has been to image baseline blood flow and activity-dependent changes in blood flow in the brain. We now describe experiments in the rat retina in which LSF was used in conjunction with confocal microscopy to monitor light-evoked changes in blood flow in retinal vessels. This dual imaging technique permitted us to stimulate retinal photoreceptors and measure vessel diameter with confocal microscopy while simultaneously monitoring blood flow with LSF. We found that a flickering light dilated retinal arterioles and evoked increases in retinal blood velocity with similar time courses. In addition, focal light stimulation evoked local increases in blood velocity. The spatial distribution of these increases depended on the location of the stimulus relative to retinal arterioles and venules. The results suggest that capillaries are largely unresponsive to local neuronal activity and that hemodynamic responses are mediated primarily by arterioles. The use of LSF to image retinal blood flow holds promise in elucidating the mechanisms mediating functional hyperemia in the retina and in characterizing changes in blood flow that occur during retinal pathology. PMID:20941368

Srienc, Anja I.; Kurth-Nelson, Zeb L.; Newman, Eric A.

2010-01-01
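
The abstract does not give the LSF processing details; the snippet below computes the standard spatial speckle contrast statistic K = sigma/mean in a sliding window, which is the usual starting point for laser speckle flow maps. The window size and the synthetic test image are assumptions, not values from the study.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def speckle_contrast(raw, window=7):
    """Spatial laser speckle contrast K = sigma/mean in a sliding window.

    Lower K corresponds to faster flow (more blurring of the speckle
    pattern during the exposure). Generic statistic, not the exact
    processing used in the cited study.
    """
    img = raw.astype(float)
    mean = uniform_filter(img, window)
    mean_sq = uniform_filter(img * img, window)
    var = np.clip(mean_sq - mean * mean, 0.0, None)
    return np.sqrt(var) / (mean + 1e-9)

# Example on a synthetic speckle-like (exponentially distributed) image.
rng = np.random.default_rng(1)
frame = rng.exponential(scale=100.0, size=(256, 256))
K = speckle_contrast(frame)
print(K.mean())   # fully developed static speckle gives K close to 1
```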

230

Motion detection and compensation in infrared retinal image sequences.  

PubMed

Infrared image data captured by non-mydriatic digital retinography systems are often used in the diagnosis and treatment of diabetic macular edema (DME). Infrared illumination is less aggressive to the patient's retina, and retinal studies can be carried out without pupil dilation. However, sequences of infrared eye fundus images of static scenes tend to present pixel intensity fluctuations over time, and noise and background illumination changes pose a challenge to most motion detection methods proposed in the literature. In this paper, we present a retinal motion detection method that is adaptive to background noise and illumination changes. Our experimental results indicate that this method is suitable for detecting retinal motion in infrared image sequences and for compensating the detected motion, which is relevant in retinal laser treatment systems for DME. PMID:23870497

Scharcanski, J; Schardosim, L R; Santos, D; Stuchi, A

2013-01-01
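
As a hedged sketch of the kind of background-adaptive motion detection the abstract describes (not the published method), the code below keeps a running background and deviation estimate per pixel and subtracts each frame's median to absorb slow illumination drift; the thresholds and the synthetic sequence are illustrative.

```python
import numpy as np

def detect_motion(frames, alpha=0.05, k=3.0):
    """Flag moving pixels in a sequence of infrared fundus frames.

    Sketch only: a running-average background model plus a running
    per-pixel deviation estimate; a pixel is flagged when it deviates
    from the background by more than k deviations. Subtracting each
    frame's median absorbs slow global illumination drift.
    """
    frames = [f.astype(float) for f in frames]
    bg = frames[0] - np.median(frames[0])
    dev = np.ones_like(bg)
    masks = []
    for f in frames[1:]:
        f0 = f - np.median(f)                   # illumination compensation
        diff = np.abs(f0 - bg)
        masks.append(diff > k * dev)            # motion mask for this frame
        bg = (1 - alpha) * bg + alpha * f0      # update background model
        dev = (1 - alpha) * dev + alpha * diff  # update deviation estimate
    return masks

# Synthetic example: static noisy frames with a small moving bright blob.
rng = np.random.default_rng(1)
seq = [rng.normal(100.0, 2.0, (64, 64)) for _ in range(20)]
for i, f in enumerate(seq[10:], start=10):
    f[i:i + 5, 30:35] += 40.0
print(detect_motion(seq)[-1].sum())             # roughly the blob area
```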

231

Retinal image restoration by means of blind deconvolution  

NASA Astrophysics Data System (ADS)

Retinal imaging plays a key role in the diagnosis and management of ophthalmologic disorders, such as diabetic retinopathy, glaucoma, and age-related macular degeneration. Because of the acquisition process, retinal images often suffer from blurring and uneven illumination. This problem may seriously affect disease diagnosis and progression assessment. Here we present a method for color retinal image restoration by means of multichannel blind deconvolution. The method is applied to a pair of retinal images acquired within a lapse of time, ranging from several minutes to months. It consists of a series of preprocessing steps to adjust the images so they comply with the considered degradation model, followed by the estimation of the point-spread function and, ultimately, image deconvolution. The preprocessing is mainly composed of image registration, uneven illumination compensation, and segmentation of areas with structural changes. In addition, we have developed a procedure for the detection and visualization of structural changes. This enables the identification of subtle developments in the retina not caused by variation in illumination or blur. The method was tested on synthetic and real images. Encouraging experimental results show that the method is capable of significant restoration of degraded retinal images.

Marrugo, Andrés G.; Šorel, Michal; Šroubek, Filip; Millán, María S.

2011-11-01
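
The paper's pipeline is multichannel blind deconvolution preceded by registration and illumination compensation; the sketch below illustrates only the final non-blind step, a plain Wiener deconvolution with an assumed noise-to-signal ratio and a PSF that is taken as already estimated.

```python
import numpy as np

def pad_psf(psf, shape):
    """Embed a small centred PSF in an image-sized array with its centre
    at pixel (0, 0), which is what FFT-based filtering expects."""
    big = np.zeros(shape)
    r, c = psf.shape
    big[:r, :c] = psf
    return np.roll(big, (-(r // 2), -(c // 2)), axis=(0, 1))

def wiener_deconvolve(blurred, psf, nsr=1e-3):
    """Non-blind Wiener deconvolution; `nsr` is an assumed noise-to-signal
    power ratio and the PSF is assumed to be known (the blind estimation
    described in the abstract is not reproduced here)."""
    H = np.fft.fft2(pad_psf(psf, blurred.shape))
    G = np.fft.fft2(blurred.astype(float))
    W = np.conj(H) / (np.abs(H) ** 2 + nsr)     # Wiener filter
    return np.real(np.fft.ifft2(W * G))

# Example: blur a synthetic image with a Gaussian PSF, then restore it.
y, x = np.mgrid[-15:16, -15:16]
psf = np.exp(-(x**2 + y**2) / (2 * 3.0**2)); psf /= psf.sum()
img = np.zeros((128, 128)); img[40:90, 40:90] = 1.0
blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(pad_psf(psf, img.shape))))
print(np.abs(wiener_deconvolve(blurred, psf) - img).mean())
```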

232

Simulation of a new 3D imaging sensor for identifying difficult military targets  

Microsoft Academic Search

This paper reports the successful application of automatic target recognition and identification (ATR/I) algorithms to simulated 3D imagery of 'difficult' military targets. QinetiQ and Selex S&AS are engaged in a joint programme to build a new 3D laser imaging sensor for UK MOD. The sensor is a 3D flash system giving an image containing range and intensity information suitable for

Christophe Harvey; Jonathan Wood; Peter Randall; Graham Watson; Gordon Smith

2008-01-01

233

3D image display of fetal ultrasonic images by thin shell  

NASA Astrophysics Data System (ADS)

Because it is convenient and non-invasive, ultrasound has become an essential tool for the diagnosis of fetal abnormality during pregnancy in obstetrics. However, the 'noisy and blurry' nature of ultrasound data makes rendering the data a challenge in comparison with MRI and CT images. In addition to the speckle noise, unwanted objects often occlude the target to be observed. In this paper, we propose a new system that can effectively suppress the speckle noise, extract the target object, and clearly render the 3D fetal image in almost real time from 3D ultrasound image data. The system is based on a deformable model that detects contours of the object according to the local image features of the ultrasound data. In addition, to accelerate rendering, a thin shell is defined to separate the observed organ from unrelated structures based on the detected contours. In this way, the system supports quick 3D display of ultrasound data, and efficient visualization of 3D fetal ultrasound thus becomes possible.

Wang, Shyh-Roei; Sun, Yung-Nien; Chang, Fong-Ming; Jiang, Ching-Fen

1999-05-01

234

Wavefront-coding technique for inexpensive and robust retinal imaging.  

PubMed

We propose a hybrid optical-digital imaging system that can provide high-resolution retinal images without wavefront sensing or correction of the spatial and dynamic variations of eye aberrations. A methodology based on wavefront coding is implemented in a fundus camera in order to obtain a high-quality image of retinal detail. Wavefront-coded systems rely simply on the use of a cubic-phase plate in the pupil of the optical system. The phase element is intended to blur images in such a way that invariance to optical aberrations is achieved. The blur is then removed by image postprocessing. Thus, the system can provide high-resolution retinal images, avoiding all the optics needed to sense and correct ocular aberration, i.e., wavefront sensors and deformable mirrors. PMID:24978788

Arines, Justo; Hernandez, Rene O; Sinzinger, Stefan; Grewe, A; Acosta, Eva

2014-07-01
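
To make the cubic-phase idea concrete, the sketch below simulates the PSF of a unit circular pupil carrying a cubic phase mask plus a defocus term; the phase amplitudes and grid size are arbitrary illustrative values, not the parameters of the fundus camera described above.

```python
import numpy as np

def coded_psf(alpha=30.0, defocus=0.0, n=256):
    """PSF of a unit circular pupil carrying a cubic phase mask plus defocus.

    phase = alpha*(x**3 + y**3) + defocus*(x**2 + y**2), in radians, over
    pupil coordinates in [-1, 1]. Values are illustrative only.
    """
    x = np.linspace(-1, 1, n)
    X, Y = np.meshgrid(x, x)
    aperture = (X**2 + Y**2) <= 1.0
    phase = alpha * (X**3 + Y**3) + defocus * (X**2 + Y**2)
    pupil = aperture * np.exp(1j * phase)
    psf = np.abs(np.fft.fftshift(np.fft.fft2(pupil)))**2
    return psf / psf.sum()

# The cubic-phase PSF changes far less with defocus than a plain pupil's
# PSF does, which is why a single fixed deconvolution filter can suffice.
coded_diff = np.abs(coded_psf(30.0, 0.0) - coded_psf(30.0, 8.0)).sum()
plain_diff = np.abs(coded_psf(0.0, 0.0) - coded_psf(0.0, 8.0)).sum()
print(coded_diff, plain_diff)   # coded_diff is expected to be the smaller
```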

235

Multimodal rigid-body registration of 3D brain images using bilateral symmetry  

E-print Network

Multimodal rigid-body registration of 3D brain images using bilateral symmetry. The most widely used methods for rigid-body registration of 3D images are probably those based on the maximisation of a similarity measure. The approach presented here exploits the bilateral symmetry of the brain with respect to its interhemispheric fissure for intra-subject (rigid-body) mono

Université de Paris-Sud XI

236

Automatic Detection and Segmentation of Kidneys in 3D CT Images Using Random Forests  

E-print Network

Automatic Detection and Segmentation of Kidneys in 3D CT Images Using Random Forests. Kidney segmentation in 3D CT images allows extracting useful information for nephrologists. Kidneys are localized with random forests following a coarse-to-fine strategy. Their initial

Boyer, Edmond

237

From Alternate Exposure Imaging to 3D Reconstruction of Astronomical Phenomena  

E-print Network

From Alternate Exposure Imaging to 3D Reconstruction of Astronomical Phenomena. In this talk we will motivate the usage of alternate exposure imaging, in which both long- and short-exposure images are required for faithful 3D reconstruction. The approach relies on paths in the short-exposure images I1 and I2 and a specialized TV-L1 optimization scheme.

Magnor, Marcus

238

Color Image Segmentation by Fuzzy Morphological Transformation of the 3D Color Histogram  

Microsoft Academic Search

Summary form only given. We present a color image segmentation method using fuzzy mathematical morphology operators on the 3D color histogram. Segmentation consists of detecting the different modes that are present in the 3D color histogram and associated with homogeneous regions. In order to detect these modes, we show how a color image can be considered as a fuzzy set

Aymeric Gillet; Ludovic Macaire; Claudine Botte-Lecocq; Jack-Gérard Postaire

2001-01-01

239

Multichannel ultrasound current source density imaging of a 3-D dipole field  

Microsoft Academic Search

Ultrasound Current Source Density Imaging (UCSDI) potentially improves 3-D mapping of bioelectric sources in the body at high spatial resolution, which is especially important for diagnosing and guiding treatment for cardiac and neurologic disorders, including arrhythmia and epilepsy. In this study, we report 4-D imaging of a time varying electric dipole in saline. A 3-D dipole field was produced in

Zhaohui Wang; Ragnar Olafsson; Pier Ingram; Qian Li; Russell S. Witte

2010-01-01

240

PRODUCTION OF VEGETATION INFORMATION TO 3D CITY MODELS FROM SPOT SATELLITE IMAGES  

Microsoft Academic Search

The paper presents a methodology for producing forest parameters and tree instances for large-area 3D maps or city models from satellite images. The process flow includes the calibration of the satellite image, the forest parameter estimation, the generation of tree instances from the forest parameters, and the integration of the tree instances into the 3D city model and visualisation. Due

E. Parmes; K. Rainio

241

Voxel Similarity Measures for 3D Serial MR Brain Image Registration  

Microsoft Academic Search

We have evaluated eight different similarity measures used for rigid body registration of serial magnetic resonance (MR) brain scans. To assess their accuracy we used 33 clinical three-dimensional (3-D) serial MR images, with deformable extradural tissue excluded by manual segmentation, and simulated 3-D MR images with added intensity distortion. For each measure we determined the consistency of registration

Mark Holden; Derek L. G. Hill; Erika R. E. Denton; Jo M. Jarosz; Tim C. S. Cox; Torsten Rohlfing; Joanne Goodey; David J. Hawkes

2000-01-01
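
Mutual information is one of the standard voxel similarity measures evaluated in studies of this kind; a minimal histogram-based implementation (with an arbitrarily chosen bin count, not a value from the paper) looks like this:

```python
import numpy as np

def mutual_information(img_a, img_b, bins=64):
    """Mutual information between two images from their joint histogram.

    Higher MI indicates better alignment of the two scans; this is the
    generic definition, not the exact implementation compared in the study.
    """
    hist, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = hist / hist.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

# MI drops when one image is shifted relative to the other.
rng = np.random.default_rng(2)
a = rng.normal(size=(128, 128))
b = a + 0.1 * rng.normal(size=a.shape)
print(mutual_information(a, b), mutual_information(a, np.roll(b, 10, axis=0)))
```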

242

Curve skeletonization of surface-like objects in 3D images guided by voxel classification  

E-print Network

Curve skeletonization of surface-like objects in 3D images guided by voxel classification. Skeletonization is a way to reduce the dimensionality of digital objects. Here, we present an algorithm that computes the curve skeleton of a surface-like object in a 3D image

Nyström, Ingela

243

3D pulmonary CT image registration with a standard lung atlas  

Microsoft Academic Search

A 3D anatomic atlas can be used to analyze pulmonary structures in CT images. To use an atlas to guide segmentation processing, the image being analyzed must be aligned and registered with the atlas. We have developed a 3D surface-based registration technique to register pulmonary CT volumes. To demonstrate the method, we have constructed an atlas from a CT

Li Zhang; Joseph M. Reinhardt

2000-01-01

244

Comstat2 - a modern 3D image analysis environment for biofilms  

E-print Network

Comstat2 is a modern environment for the analysis and treatment of biofilm images in 3D. It offers various algorithms for gathering knowledge on biofilms and maintains compatibility with an earlier version of the program. A new method in this area for the evaluation of biofilm

245

Radiology Lab 0: Introduction to 2D and 3D Imaging  

NSDL National Science Digital Library

This is a self-directed learning module to introduce students to basic concepts of imaging technology as well as to give students practice going between 2D and 3D imaging using everyday objects.

Shaffer, Kitt

2008-10-02

246

Spatio-Temporal Data Fusion for 3D+T Image Reconstruction in Cerebral Angiography  

E-print Network

This paper provides a framework for generating high resolution time sequences of 3D images that show the dynamics of cerebral blood flow. These sequences have the potential to allow image feedback during medical procedures ...

Copeland, Andrew D.

247

An automated vessel segmentation of retinal images using multiscale vesselness  

Microsoft Academic Search

The ocular fundus image can provide information on pathological changes caused by local ocular diseases and early signs of certain systemic diseases, such as diabetes and hypertension. Automated analysis and interpretation of fundus images has become a necessary and important diagnostic procedure in ophthalmology. The extraction of blood vessels from retinal images is an important and challenging task in medical

Mariem Ben Abdallah; Jihene Malek; Karl Krissian; Rached Tourki

2011-01-01

248

Detection of retinal nerve fiber layer defects in retinal fundus images using Gabor filtering  

NASA Astrophysics Data System (ADS)

Retinal nerve fiber layer defect (NFLD) is one of the most important findings for the diagnosis of glaucoma reported by ophthalmologists. However, such changes could be overlooked, especially in mass screenings, because ophthalmologists have limited time to search for a number of different changes for the diagnosis of various diseases such as diabetes, hypertension and glaucoma. Therefore, the use of a computer-aided detection (CAD) system can improve the results of diagnosis. In this work, a technique for the detection of NFLDs in retinal fundus images is proposed. In the preprocessing step, blood vessels are "erased" from the original retinal fundus image by using morphological filtering. The preprocessed image is then transformed into a rectangular array. NFLD regions are observed as vertical dark bands in the transformed image. Gabor filtering is then applied to enhance the vertical dark bands. False positives (FPs) are reduced by a rule-based method which uses the information of the location and the width of each candidate region. The detected regions are back-transformed into the original configuration. In this preliminary study, 71% of NFLD regions are detected with average number of FPs of 3.2 per image. In conclusion, we have developed a technique for the detection of NFLDs in retinal fundus images. Promising results have been obtained in this initial study.

Hayashi, Yoshinori; Nakagawa, Toshiaki; Hatanaka, Yuji; Aoyama, Akira; Kakogawa, Masakatsu; Hara, Takeshi; Fujita, Hiroshi; Yamamoto, Tetsuya

2007-03-01
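
As a rough illustration of the Gabor-enhancement step described above (not the authors' full CAD pipeline, and with arbitrary kernel parameters), a vertically oriented real Gabor kernel can be convolved with the unwrapped image so that dark vertical bands produce strong negative responses:

```python
import numpy as np
from scipy.ndimage import convolve

def gabor_kernel(sigma=4.0, wavelength=16.0, theta=0.0, size=31):
    """Real (even) Gabor kernel oriented at angle `theta` in radians."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    g = np.exp(-(xr**2 + yr**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / wavelength)
    return g - g.mean()                 # zero mean: flat regions give no response

def enhance_vertical_dark_bands(img):
    """Enhance vertical dark bands (candidate NFLD regions) in the
    unwrapped image; with theta=0 the kernel varies along x, so it
    responds to vertical stripes, and dark bands give negative responses."""
    response = convolve(img.astype(float), gabor_kernel(theta=0.0))
    return np.maximum(-response, 0.0)

# Synthetic example: one dark vertical band on a bright background.
img = np.full((128, 128), 200.0)
img[:, 60:68] -= 60.0
print(enhance_vertical_dark_bands(img).max())
```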

249

Adaptive optics scanning laser ophthalmoscope for stabilized retinal imaging  

PubMed Central

A retinal imaging instrument that integrates adaptive optics (AO), scanning laser ophthalmoscopy (SLO), and retinal tracking components was built and tested. The system uses a Hartmann-Shack wave-front sensor (HS-WS) and MEMS-based deformable mirror (DM) for AO-correction of high-resolution, confocal SLO images. The system includes a wide-field line-scanning laser ophthalmoscope for easy orientation of the high-magnification SLO raster. The AO system corrected ocular aberrations to <0.1 μm RMS wave-front error. An active retinal tracking system with a custom processing board sensed and corrected eye motion with a bandwidth exceeding 1 kHz. We demonstrate tracking accuracy down to 6 μm RMS for some subjects (typical performance: 10–15 μm RMS). The system has the potential to become an important tool for clinicians and researchers in vision studies and the early detection and treatment of retinal diseases. PMID:19516480

Hammer, Daniel X.; Ferguson, R. Daniel; Bigelow, Chad E.; Iftimia, Nicusor V.; Ustun, Teoman E.; Burns, Stephen A.

2010-01-01

250

Ridge-based retinal image registration algorithm involving OCT fundus images  

NASA Astrophysics Data System (ADS)

This paper proposes an algorithm for retinal image registration involving OCT fundus images (OFIs). The first application of the algorithm is to register OFIs with color fundus photographs; such registration between multimodal retinal images can help correlate features across imaging modalities, which is important for both clinical and research purposes. The second application is to perform the montage of several OFIs, which allows us to construct 3D OCT images over a large field of view out of separate OCT datasets. We use blood vessel ridges as registration features. A brute-force search and an Iterative Closest Point (ICP) algorithm are employed for image pair registration. Global alignment to minimize the distance between matching pixel pairs is used to obtain the montage of OFIs. The quality of the OFIs is the main limiting factor for the registration algorithm. In the first experiment, the effect of manual OFI enhancement on registration was evaluated for the affine model on 11 image pairs from diseased eyes. The average root mean square error (RMSE) decreases from 58 μm to 40 μm. This indicates that the registration algorithm is robust to manual enhancement. In the second experiment for the montage of OFIs, the algorithm was tested on 6 sets from healthy eyes and 6 sets from diseased eyes, each set having 8 partially overlapping SD-OCT images. Visual evaluation showed that the montage performance was acceptable for normal cases, and poor for abnormal cases due to low visibility of blood vessels. The average RMSE for a typical montage case from a healthy eye is 2.3 pixels (69 μm).

Li, Ying; Gregori, Giovanni; Knighton, Robert W.; Lujan, Brandon J.; Rosenfeld, Philip J.; Lam, Byron L.

2011-03-01
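
The abstract mentions an ICP stage operating on vessel-ridge points; the sketch below is a generic rigid 2D ICP with an SVD-based pose update, assuming a reasonable initial alignment (as the brute-force coarse search would provide). It is illustrative, not the authors' implementation, and the test data are synthetic.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp_2d(src, dst, iters=30):
    """Rigid 2D iterative-closest-point alignment of two point sets.

    Sketch of the ICP refinement stage on vessel-ridge points; it assumes
    the initial misalignment is already small. Returns the accumulated
    rotation and translation mapping `src` onto `dst`.
    """
    src = src.astype(float).copy()
    tree = cKDTree(dst)
    R_total, t_total = np.eye(2), np.zeros(2)
    for _ in range(iters):
        _, idx = tree.query(src)                 # closest dst point per src point
        matched = dst[idx]
        mu_s, mu_d = src.mean(0), matched.mean(0)
        H = (src - mu_s).T @ (matched - mu_d)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:                 # guard against a reflection
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = mu_d - R @ mu_s
        src = src @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total

# Example: recover a small known rotation and translation.
rng = np.random.default_rng(2)
pts = rng.uniform(0, 100, (200, 2))
a = np.deg2rad(1.0)
R_true = np.array([[np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]])
moved = pts @ R_true.T + np.array([0.8, -0.5])
R_est, t_est = icp_2d(pts, moved)
print(np.round(R_est, 3), np.round(t_est, 2))    # close to R_true and (0.8, -0.5)
```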

251

Flash trajectory imaging of target 3D motion  

NASA Astrophysics Data System (ADS)

We present a flash trajectory imaging technique which can directly obtain target trajectory and realize non-contact measurement of motion parameters by range-gated imaging and time delay integration. Range-gated imaging gives the range of targets and realizes silhouette detection which can directly extract targets from complex background and decrease the complexity of moving target image processing. Time delay integration increases information of one single frame of image so that one can directly gain the moving trajectory. In this paper, we have studied the algorithm about flash trajectory imaging and performed initial experiments which successfully obtained the trajectory of a falling badminton. Our research demonstrates that flash trajectory imaging is an effective approach to imaging target trajectory and can give motion parameters of moving targets.

Wang, Xinwei; Zhou, Yan; Fan, Songtao; He, Jun; Liu, Yuliang

2011-03-01

252

Quality assessment of stereoscopic 3D image compression by binocular integration behaviors.  

PubMed

The objective approaches of 3D image quality assessment play a key role in the development of compression standards and various 3D multimedia applications. The quality assessment of 3D images faces more new challenges, such as asymmetric stereo compression, depth perception, and virtual view synthesis, than its 2D counterparts. In addition, the widely used 2D image quality metrics (e.g., PSNR and SSIM) cannot be directly applied to deal with these newly introduced challenges. This statement can be verified by the low correlation between the computed objective measures and the subjectively measured mean opinion scores (MOSs), when 3D images are the tested targets. In order to meet these newly introduced challenges, in this paper, besides traditional 2D image metrics, the binocular integration behaviors, namely binocular combination and binocular frequency integration, are utilized as the bases for measuring the quality of stereoscopic 3D images. The effectiveness of the proposed metrics is verified by conducting subjective evaluations on publicly available stereoscopic image databases. Experimental results show that significant consistency could be reached between the measured MOS and the proposed metrics, in which the correlation coefficient between them can go up to 0.88. Furthermore, we found that the proposed metrics can also address the quality assessment of the synthesized color-plus-depth 3D images well. Therefore, it is our belief that the binocular integration behaviors are important factors in the development of objective quality assessment for 3D images. PMID:24569441

Lin, Yu-Hsun; Wu, Ja-Ling

2014-04-01

253

Textureless Macula Swelling Detection with Multiple Retinal Fundus Images  

Microsoft Academic Search

Retinal fundus images acquired with non-mydriatic digital fundus cameras are a versatile tool for the diagnosis of various retinal diseases. Because of the ease of use of newer camera models and their relatively low cost, these cameras can be employed by operators with limited training for telemedicine or Point-of-Care applications. We propose a novel technique that uses uncalibrated multiple-view fundus

Luca Giancardo; Fabrice Meriaudeau; Thomas Paul Karnowski; Kenneth William Tobin Jr; Enrico Grisan; Paolo Favaro; Alfredo Ruggeri; Edward Chaum

2010-01-01

254

Experimental research on 3D reconstruction through range gated laser imaging  

NASA Astrophysics Data System (ADS)

A range-gated laser imaging system has been designed and developed for high-precision three-dimensional imaging. The system uses a Nd:YAG electro-optical Q-switched 532 nm laser as the transmitter and a double microchannel plate as the gated sensor, and all components are controlled by a trigger control unit with sub-nanosecond accuracy. An experimental scheme is also designed to achieve high-precision imaging; a sequence of 2D "slice" images is acquired in the experiment, and these images provide the basic data for 3D reconstruction. Based on the centroid algorithm, we have developed a 3D reconstruction algorithm and used it to reconstruct a 3D image of the target from the experimental data. We compared the 3D image with the system performance model, and the results are consistent.

Li, Sining; Lu, Wei; Zhang, Dayong; Li, Chao; Tu, Zhipeng

2014-09-01
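
A minimal version of the centroid idea, under the assumption that each gate's range is known and the target response is spread over a few neighbouring gates, can be written as an intensity-weighted average over the gate axis; the gate spacing and synthetic target below are illustrative, not the experimental values.

```python
import numpy as np

def centroid_range(slices, gate_ranges):
    """Per-pixel range map from a stack of range-gated slice images.

    slices: (n_gates, H, W) intensities; gate_ranges: the range assigned
    to each gate. Each pixel's range is the intensity-weighted centroid
    along the gate axis, in the spirit of the centroid algorithm named in
    the abstract.
    """
    slices = slices.astype(float)
    weights = slices.sum(axis=0) + 1e-9
    return np.tensordot(gate_ranges, slices, axes=(0, 0)) / weights

# Example: a tilted target that drifts across the gates from left to right.
n, H, W = 8, 64, 64
ranges = np.linspace(10.0, 10.7, n)          # 0.1 m gate spacing (assumed)
stack = np.zeros((n, H, W))
for col in range(W):
    stack[col * (n - 1) // (W - 1), :, col] = 1.0
depth = centroid_range(stack, ranges)
print(depth[0, 0], depth[0, -1])             # approximately 10.0 and 10.7
```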

255

Single-shot retinal imaging with AO spectral OCT  

Microsoft Academic Search

We demonstrate for the first time an adaptive optics (AO) spectral OCT retina camera that acquires single-shot B-scans of the living human retina with unprecedented 3D resolution (2.9 μm lateral; 5.5 μm axial). The camera centers on a Michelson interferometer that consists of a superluminescent diode for line illumination of the subject's retina; a voice coil translator for controlling the optical

Yan Zhang; Jungtae Rha; Ravi S. Jonnal; Donald T. Miller

2005-01-01

256

ICER-3D: A Progressive Wavelet-Based Compressor for Hyperspectral Images  

NASA Technical Reports Server (NTRS)

ICER-3D is a progressive, wavelet-based compressor for hyperspectral images. ICER-3D is derived from the ICER image compressor. ICER-3D can provide lossless and lossy compression, and incorporates an error-containment scheme to limit the effects of data loss during transmission. The three-dimensional wavelet decomposition structure used by ICER-3D exploits correlations in all three dimensions of hyperspectral data sets, while facilitating elimination of spectral ringing artifacts. Correlation is further exploited by a context modeler that effectively exploits spectral dependencies in the wavelet-transformed hyperspectral data. Performance results illustrating the benefits of these features are presented.

Kiely, A.; Klimesh, M.; Xie, H.; Aranki, N.

2005-01-01
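
ICER-3D's transform and entropy coder are not reproduced here; the sketch below only illustrates, with generic PyWavelets calls on a synthetic cube, why a 3D wavelet decomposition across the two spatial axes and the spectral axis is attractive for compression: most detail coefficients end up close to zero.

```python
import numpy as np
import pywt  # PyWavelets

def detail_sparsity(cube, wavelet="db2", level=2):
    """Fraction of 3D wavelet detail coefficients that are nearly zero.

    Generic PyWavelets code (not the ICER-3D transform or entropy coder)
    illustrating the energy compaction that makes 3D wavelet coding of
    hyperspectral cubes effective.
    """
    coeffs = pywt.wavedecn(cube.astype(float), wavelet, level=level)
    details = np.concatenate([np.ravel(a) for d in coeffs[1:] for a in d.values()])
    thresh = 0.01 * np.abs(details).max()
    return float(np.mean(np.abs(details) < thresh))

# Example: a smooth synthetic cube (bands x rows x cols).
b, y, x = np.mgrid[0:32, 0:64, 0:64]
cube = np.sin(b / 6.0) * np.exp(-((y - 32)**2 + (x - 32)**2) / 400.0)
print(detail_sparsity(cube))   # most detail coefficients are negligible
```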

257

[Breast volume assessment based on 3D surface geometry: verification of the method using MR imaging].  

PubMed

Differences in breast volume and contour are subjectively estimated by surgeons. 3D surface imaging using 3D scanners provides objective breast volume quantification, but precision and accuracy of the method requires verification. Breast volumes of five test individuals were assessed using a 3D surface scanner. Magnetic resonance imaging (MRI) reference volumes were obtained to verify and compare the 3D scan measurements. The anatomical thorax wall curvature was segmented using MRI data and compared to the interpolated curvature of the posterior breast volume delimitation of 3D scan data. MRI showed higher measurement precision, mean deviation (expressed as percentage of volume) of 1.10+/-0.34% compared to 1.63+/-0.53% for the 3D scanner. Mean MRI [right (left) breasts: 638 (629)+/-143 (138) cc] and 3D scan [right (left) breasts: 493 (497)+/-112 (116) cc] breast volumes significantly correlated [right (left) breasts: r=0.982 (0.977), p=0.003 (0.004)]. The posterior thorax wall of the 3D scan model showed high agreement with the MRI thorax wall curvature [mean positive (negative) deviation: 0.33 (-0.17)+/-0.37 cm]. High correspondence and correlation of 3D scan data with MRI-based verifications support 3D surface imaging as sufficiently precise and accurate for breast volume measurements. PMID:18601619

Eder, Maximilian; Schneider, Armin; Feussner, Hubertus; Zimmermann, Alexander; Höhnke, Christoph; Papadopulos, Nikolaos A; Kovacs, Laszlo

2008-06-01

258

3D breast image registration--a review.  

PubMed

Image registration is an important problem in breast imaging. It is used in a wide variety of applications that include better visualization of lesions on pre- and post-contrast breast MRI images, speckle tracking and image compounding in breast ultrasound images, alignment of positron emission, and standard mammography images on hybrid machines et cetera. It is a prerequisite to align images taken at different times to isolate small interval lesions. Image registration also has useful applications in monitoring cancer therapy. The field of breast image registration has gained considerable interest in recent years. While the primary focus of interest continues to be the registration of pre- and post-contrast breast MRI images, other areas like breast ultrasound registration have gained more attention in recent years. The focus of registration algorithms has also shifted from control point based semi-automated techniques, to more sophisticated voxel based automated techniques that use mutual information as a similarity measure. This paper visits the problem of breast image registration and provides an overview of the current state-of-the-art in this area. PMID:15649086

Sivaramakrishna, Radhika

2005-02-01

259

3D pulmonary airway color image reconstruction via shape from shading and virtual bronchoscopy imaging techniques  

NASA Astrophysics Data System (ADS)

The dependence on macro-optical imaging of the human body in the assessment of possible disease is rapidly increasing concurrent with, and as a direct result of, advancements made in medical imaging technologies. Assessment of the pulmonary airways through bronchoscopy is performed extensively in clinical practice; however, it remains highly subjective due to limited visualization techniques and the lack of quantitative analyses. The representation of 3D structures in 2D visualization modes, although providing an insight into the structural content of the scene, may in fact skew the perception of the structural form. We have developed two methods for visualizing the optically derived airway mucosal features whilst preserving the structural scene integrity. Shape from shading (SFS) techniques can be used to extract 3D structural information from 2D optical images. The SFS technique presented addresses many limitations previously encountered in conventional techniques, resulting in high-resolution 3D color images. The second method presented, which combines both color and structural information, relies on combined CT and bronchoscopy imaging modalities. External imaging techniques such as CT provide a means of determining the gross structural anatomy of the pulmonary airways, but lack the important optically derived mucosal color. Virtual bronchoscopy is used to provide a direct link between the CT-derived structural anatomy and the macro-optically derived mucosal color. Through the use of a virtual and true bronchoscopy matching technique, we are able to directly extract combined structurally sound 3D color segments of the pulmonary airways. Various pulmonary airway diseases are assessed and the resulting combined color and texture results are presented, demonstrating the effectiveness of the proposed techniques.

Suter, Melissa; Reinhardt, Joseph M.; Hoffman, Eric A.; McLennan, Geoffrey

2005-04-01

260

First images and orientation of fine structure from a 3-D seismic oceanography data set  

NASA Astrophysics Data System (ADS)

We present 3-D images of ocean fine structure from a unique industry-collected 3-D multichannel seismic dataset from the Gulf of Mexico that includes expendable bathythermograph casts for both swaths. 2-D processing reveals strong laterally continuous reflections throughout the upper ~800 m as well as a few weaker but still distinct reflections as deep as ~1100 m. We interpret the reflections to be caused by reversible fine structure from internal wave strains. Two bright reflections are traced across the 225-m-wide swath to produce reflection surface images that illustrate the 3-D nature of ocean fine structure. We show that the orientation of linear features in a reflection can be obtained by calculating the orientations of contours of reflection relief, or more robustly, by fitting a sinusoidal surface to the reflection. Preliminary 3-D processing further illustrates the potential of 3-D seismic data in interpreting images of oceanic features such as internal wave strains. This work demonstrates the viability of imaging oceanic fine structure in 3-D and shows that, beyond simply providing a way to visualize oceanic fine structure, quantitative information such as the spatial orientation of features like fronts and solitons can be obtained from 3-D seismic images. We expect complete, optimized 3-D processing to improve both the signal-to-noise ratio and spatial resolution of our images, resulting in increased options for analysis and interpretation.

Blacic, T. M.; Holbrook, W. S.

2010-04-01

261

First images and orientation of internal waves from a 3-D seismic oceanography data set  

NASA Astrophysics Data System (ADS)

We present 3-D images of ocean finestructure from a unique industry-collected 3-D multichannel seismic dataset from the Gulf of Mexico that includes expendable bathythermograph casts for both swaths. 2-D processing reveals strong laterally continuous reflectors throughout the upper ~800 m as well as a few weaker but still distinct reflectors as deep as ~1100 m. Two bright reflections are traced across the 225-m-wide swath to produce reflector surface images that show the 3-D structure of internal waves. We show that the orientation of internal wave crests can be obtained by calculating the orientations of contours of reflector relief. Preliminary 3-D processing further illustrates the potential of 3-D seismic data in interpreting images of oceanic features such as internal wave strains. This work demonstrates the viability of imaging oceanic finestructure in 3-D and shows that, beyond simply providing a way to see what oceanic finestructure looks like, quantitative information such as the spatial orientation of features like internal waves and solitons can be obtained from 3-D seismic images. We expect complete, optimized 3-D processing to improve both the signal-to-noise ratio and spatial resolution of our images, resulting in increased options for analysis and interpretation.

Blacic, T. M.; Holbrook, W. S.

2009-10-01

262

Image quality of a cone beam O-arm 3D imaging system  

NASA Astrophysics Data System (ADS)

The O-arm is a cone beam imaging system designed primarily to support orthopedic surgery and is also used for image-guided and vascular surgery. Using a gantry that can be opened or closed, the O-arm can function as a 2-dimensional (2D) fluoroscopy device or collect 3-dimensional (3D) volumetric imaging data like a CT system. Clinical applications of the O-arm in spine surgical procedures, assessment of pedicle screw position, and kyphoplasty procedures show that the O-arm 3D mode provides enhanced imaging information compared to radiographs or fluoroscopy alone. In this study, the image quality of an O-arm system was quantitatively evaluated. A 20 cm diameter CATPHAN 424 phantom was scanned using the pre-programmed head protocols: small/medium (120 kVp, 100 mAs), large (120 kVp, 128 mAs), and extra-large (120 kVp, 160 mAs) in 3D mode. The high-resolution reconstruction mode (512 × 512 × 0.83 mm) was used to reconstruct images for the analysis of low- and high-contrast resolution and the noise power spectrum. MTF was measured using the point spread function. The results show that the O-arm image is uniform but with a noise pattern which cannot be removed by simply increasing the mAs. The high-contrast resolution of the O-arm system was approximately 9 lp/cm. The system has a 10% MTF at 0.45 mm. The low-contrast resolution could not be determined due to the noise pattern. For surgery where the location of a structure is emphasized over a survey of all image details, the image quality of the O-arm is well accepted clinically.

Zhang, Jie; Weir, Victor; Lin, Jingying; Hsiung, Hsiang; Ritenour, E. Russell

2009-02-01
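
The 10% MTF figure quoted above can, in principle, be derived from a measured PSF; the sketch below shows one simple way to do it (a single radial line of the OTF, with an assumed pixel size and a synthetic Gaussian PSF), not the exact analysis used in the study.

```python
import numpy as np

def mtf_from_psf(psf, pixel_mm):
    """1D MTF (frequencies in cycles/mm) from a measured 2D PSF.

    The MTF is the normalised magnitude of the Fourier transform of the
    PSF; for simplicity it is sampled along one radial line only.
    """
    otf = np.fft.fftshift(np.fft.fft2(psf / psf.sum()))
    centre = np.array(otf.shape) // 2
    mtf = np.abs(otf[centre[0], centre[1]:])            # DC to near Nyquist
    freqs = np.fft.rfftfreq(psf.shape[1], d=pixel_mm)[:mtf.size]
    return freqs, mtf / mtf[0]

def resolution_at_10pct(freqs, mtf):
    """Frequency where the MTF drops to 10%, and the matching half-period in mm."""
    idx = np.argmax(mtf < 0.10)
    f10 = freqs[idx]
    return f10, 0.5 / f10

# Example: a Gaussian PSF with sigma = 0.2 mm sampled at 0.1 mm pixels.
y, x = np.mgrid[-32:32, -32:32] * 0.1
psf = np.exp(-(x**2 + y**2) / (2 * 0.2**2))
freqs, mtf = mtf_from_psf(psf, pixel_mm=0.1)
print(resolution_at_10pct(freqs, mtf))
```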

263

Remote laboratory for phase-aided 3D microscopic imaging and metrology  

NASA Astrophysics Data System (ADS)

In this paper, the establishment of a remote laboratory for phase-aided 3D microscopic imaging and metrology is presented. The proposed remote laboratory consists of three major components, including the network-based infrastructure for remote control and data management, the identity verification scheme for user authentication and management, and the local experimental system for phase-aided 3D microscopic imaging and metrology. Virtual network computing (VNC) is introduced to remotely control the 3D microscopic imaging system. Data storage and management are handled through the open source project eSciDoc. To address the security of the remote laboratory, fingerprints are used for authentication with an optical joint transform correlation (JTC) system. The phase-aided fringe projection 3D microscope (FP-3DM), which can be remotely controlled, is employed to achieve the 3D imaging and metrology of micro objects.

Wang, Meng; Yin, Yongkai; Liu, Zeyi; He, Wenqi; Li, Boqun; Peng, Xiang

2014-05-01

264

Realization of an aerial 3D image that occludes the background scenery.  

PubMed

In this paper we describe an aerial 3D image that occludes far background scenery based on coarse integral volumetric imaging (CIVI) technology. There have been many volumetric display devices that present floating 3D images, most of which have not reproduced the visual occlusion. CIVI is a kind of multilayered integral imaging and realizes an aerial volumetric image with visual occlusion by combining multiview and volumetric display technologies. The conventional CIVI, however, cannot show a deep space, because the number of layered panels is limited by the low transmittance of each panel. To overcome this problem, we propose a novel optical design to attain an aerial 3D image that occludes far background scenery. In the proposed system, a translucent display panel with a 120 Hz refresh rate is located between the CIVI system and the aerial 3D image. The system alternates between the aerial image mode and the background image mode. In the aerial image mode, the elemental images are shown on the CIVI display and the inserted translucent display is uniformly translucent. In the background image mode, the black shadows of the elemental images on a white background are shown on the CIVI display and the background scenery is displayed on the inserted translucent panel. By alternation of these two modes at 120 Hz, an aerial 3D image that visually occludes the far background scenery is perceived by the viewer. PMID:25322024

Kakeya, Hideki; Ishizuka, Shuta; Sato, Yuya

2014-10-01

265

Mammography Tomosynthesis System for High Performance 3D Imaging  

Microsoft Academic Search

Tomosynthesis provides a major advance in image quality compared to conventional projection mammography by effectively eliminating the effects of superimposed tissue on anatomical structures of interest. Early tomosynthesis systems focused primarily on feasibility assessment by providing 3-dimensional images to determine performance advantages. However, tomosynthesis image quality depends strongly on three key parameters: 1) detector performance at low dose, 2) angular

Jeffrey W. Eberhard; Douglas Albagli; Andrea Schmitz; Bernhard E. H. Claus; Paul Carson; Mitchell M. Goodsitt; Heang-ping Chan; Marilyn A. Roubidoux; Jerry A. Thomas; Jacqueline Osland

2006-01-01

266

3D current source density imaging based on acoustoelectric effect: a simulation study using unipolar pulses  

PubMed Central

It is important to image the electrical activity and properties of biological tissues. Recently, a hybrid imaging modality combining ultrasound scanning and source imaging through the acousto-electric (AE) effect has generated considerable interest. Such a modality has the potential to provide high spatial resolution current density imaging by utilizing the pressure-induced AE resistivity change confined at the ultrasound focus. In this study, we investigate a novel 3-dimensional (3D) ultrasound current source density imaging (UCSDI) approach using unipolar ultrasound pulses. Utilizing specially designed unipolar ultrasound pulses and by combining AE signals associated with the local resistivity changes at the focal point, we are able to reconstruct the 3D current density distribution with the boundary voltage measurements obtained while performing a 3D ultrasound scan. We have shown in computer simulation that using the present method, it is feasible to image with high spatial resolution an arbitrary 3D current density distribution in an inhomogeneous conductive medium. PMID:21628774

Yang, Renhuan; Li, Xu; Liu, Jun; He, Bin

2011-01-01

267

A Featureless Approach to 3D Polyhedral Building Modeling from Aerial Images  

PubMed Central

This paper presents a model-based approach for reconstructing 3D polyhedral building models from aerial images. The proposed approach exploits some geometric and photometric properties resulting from the perspective projection of planar structures. Data are provided by calibrated aerial images. The novelty of the approach lies in its featurelessness and in its use of direct optimization based on raw image brightness. The proposed framework avoids feature extraction and matching. The 3D polyhedral model is directly estimated by optimizing an objective function that combines an image-based dissimilarity measure and a gradient score over several aerial images. The optimization process is carried out by the Differential Evolution algorithm. The proposed approach is intended to provide more accurate 3D reconstruction than feature-based approaches. Fast 3D model rectification and updating can take advantage of the proposed method. Several results and evaluations of performance from real and synthetic images show the feasibility and robustness of the proposed approach. PMID:22346575

Hammoudi, Karim; Dornaika, Fadi

2011-01-01

268

One-dimensional integral imaging 3D display systems  

Microsoft Academic Search

We have developed several kinds of autostereoscopic display systems using a one-dimensional integral imaging method. The integral imaging system reproduces light beams similar to those produced by a real object. Therefore our displays have continuous motion parallax. The design, fabrication, and optical evaluation of the displays have been carried out. By using our proprietary software, the fast playback of the CG movie

Yuzo Hirayama

2009-01-01

269

3D ACQUISITION OF ARCHAEOLOGICAL HERITAGE FROM IMAGES  

E-print Network

KEY WORDS: photogrammetry, archaeology, heritage conservation, image-based calibration and bundle adjustment. To allow a full surface reconstruction of the observed scene, the images are rectified. Archaeology is one of the sciences where annotations and precise documentation are most important, because evidence

Pollefeys, Marc

270

Biologically inspired 3D scene depth recovery from stereo images  

Microsoft Academic Search

This paper proposes a biologically inspired method for depth recovery from rectified stereo images. Two principles are used to speed up the image matching phase: the photoreceptor information transmission (spike generation) principle and convenient coding of neuron (pixel) neighbourhood data. The latter provides robustness to the matching and reduces its computation time. The proposed method has been validated via

Flavien Maingreaud; E. Pissaloux; C. Leroux; A. Micaelli

2004-01-01

271

Synthesis of image sequences for Korean sign language using 3D shape model  

NASA Astrophysics Data System (ADS)

This paper proposes a method for offering information to, and realizing communication with, the deaf-mute. The deaf-mute communicate with other people by means of sign language, but most people are unfamiliar with it. The method converts text data into the corresponding image sequences for Korean Sign Language (KSL). A general 3D shape model of the upper body is used to generate the 3D motions of KSL; this general model must be constructed with the anatomical structure of the human body in mind. To obtain a personal 3D shape model, the general model is adjusted to the personal base images. Image synthesis for KSL consists of deforming a personal 3D shape model and texture-mapping the personal images onto the deformed model. The 3D motions for KSL comprise facial expressions and 3D movements of the head, trunk, arms, and hands, and are parameterized for easy deformation of the model. These motion parameters of the upper body are extracted from a skilled signer's motion for each KSL sign and are stored in a database. Editing the parameters according to the input text yields the image sequences of 3D motions.

Hong, Mun-Ho; Choi, Chang-Seok; Kim, Chang-Seok; Jeon, Joon-Hyeon

1995-05-01

272

3D lidar imaging for detecting and understanding plant responses and canopy structure.  

PubMed

Understanding and diagnosing plant responses to stress will benefit greatly from three-dimensional (3D) measurement and analysis of plant properties because plant responses are strongly related to their 3D structures. Light detection and ranging (lidar) has recently emerged as a powerful tool for direct 3D measurement of plant structure. Here the use of 3D lidar imaging to estimate plant properties such as canopy height, canopy structure, carbon stock, and species is demonstrated, and plant growth and shape responses are assessed by reviewing the development of lidar systems and their applications from the leaf level to canopy remote sensing. In addition, the recent creation of accurate 3D lidar images combined with natural colour, chlorophyll fluorescence, photochemical reflectance index, and leaf temperature images is demonstrated, thereby providing information on responses of pigments, photosynthesis, transpiration, stomatal opening, and shape to environmental stresses; these data can be integrated with 3D images of the plants using computer graphics techniques. Future lidar applications that provide more accurate dynamic estimation of various plant properties should improve our understanding of plant responses to stress and of interactions between plants and their environment. Moreover, combining 3D lidar with other passive and active imaging techniques will potentially improve the accuracy of airborne and satellite remote sensing, and make it possible to analyse 3D information on ecophysiological responses and levels of various substances in agricultural and ecological applications and in observations of the global biosphere. PMID:17030540

Omasa, Kenji; Hosoi, Fumiki; Konishi, Atsumi

2007-01-01

273

Ridge-branch-based blood vessel detection algorithm for multimodal retinal images  

Microsoft Academic Search

Automatic detection of retinal blood vessels is important to medical diagnosis and imaging. With the development of imaging technologies, various modalities of retinal images are available. Few currently published algorithms can be applied to multimodal retinal images. In addition, the performance of existing algorithms on images with pathologies needs to be improved. The purpose of this paper is to propose an automatic Ridge-Branch-Based

Y. Li; N. Hutchings; R. W. Knighton; G. Gregori; B. J. Lujan; J. G. Flanagan

2009-01-01

274

The Mathematical Foundations of 3D Compton Scatter Emission Imaging  

PubMed Central

The mathematical principles of tomographic imaging using detected (unscattered) X- or gamma-rays are based on the two-dimensional Radon transform and many of its variants. In this paper, we show that two new generalizations, called conical Radon transforms, are related to three-dimensional imaging processes based on detected Compton scattered radiation. The first class of conical Radon transform has been introduced recently to support imaging principles of collimated detector systems. The second class is new and is closely related to the Compton camera imaging principles and invertible under special conditions. As they are poised to play a major role in future designs of biomedical imaging systems, we present an account of their most important properties which may be relevant for active researchers in the field. PMID:18382608

Truong, T. T.; Nguyen, M. K.; Zaidi, H.

2007-01-01

275

3D-Holoscopic Imaging: A New Dimension to Enhance Imaging in Minimally Invasive Therapy in Urologic Oncology  

PubMed Central

Abstract Background and Purpose Existing imaging modalities of urologic pathology are limited by three-dimensional (3D) representation on a two-dimensional screen. We present 3D-holoscopic imaging as a novel method of representing Digital Imaging and Communications in Medicine data images taken from CT and MRI to produce 3D-holographic representations of anatomy without special eyewear in natural light. 3D-holoscopic technology produces images that are true optical models. This technology is based on physical principles with duplication of light fields. The 3D content is captured in real time with the content viewed by multiple viewers independently of their position, without 3D eyewear. Methods We display 3D-holoscopic anatomy relevant to minimally invasive urologic surgery without the need for 3D eyewear. Results The results have demonstrated that medical 3D-holoscopic content can be displayed on commercially available multiview auto-stereoscopic display. Conclusion The next step is validation studies comparing 3D-Holoscopic imaging with conventional imaging. PMID:23216303

Aggoun, Amar; Swash, Mohammad; Grange, Philippe C.R.; Challacombe, Benjamin; Dasgupta, Prokar

2013-01-01

276

Non-contrast Enhanced MR Venography Using 3D Fresh Blood Imaging (FBI): Initial Experience  

Microsoft Academic Search

Objective: This study examined the efficacy of 3D fresh blood imaging (FBI) in patients with venous disease from the iliac region to the lower extremity. Materials and Methods: Fourteen patients with venous disease (8 with deep venous thrombosis (DVT) and 6 with varix) were examined by 3D-FBI and 2D-TOF MRA. All FBI images and 2D-TOF images were evaluated in terms of visualization of the

Kenichi Yokoyama; Toshiaki Nitatori; Sayuki Inaoka; Taro Takahara; Junichi Hachiya

277

Automatic 3D segmentation of ultrasound images using atlas registration and statistical texture prior  

Microsoft Academic Search

We are developing a molecular image-directed, 3D ultrasound-guided, targeted biopsy system for improved detection of prostate cancer. In this paper, we propose an automatic 3D segmentation method for transrectal ultrasound (TRUS) images, which is based on multi-atlas registration and statistical texture prior. The atlas database includes registered TRUS images from previous patients and their segmented prostate surfaces. Three orthogonal Gabor

Xiaofeng Yang; David Schuster; Viraj Master; Peter Nieh; Aaron Fenster; Baowei Fei

2011-01-01

278

Using a wireless motion controller for 3D medical image catheter interactions  

Microsoft Academic Search

State-of-the-art morphological imaging techniques usually provide high resolution 3D images with a huge number of slices. In clinical practice, however, 2D slice-based examinations are still the method of choice even for these large amounts of data. Providing intuitive interaction methods for specific 3D medical visualization applications is therefore a critical feature for clinical imaging applications. For the domain of catheter

Dime Vitanovski; Dieter Hahn; Volker Daum; Joachim Hornegger

2009-01-01

279

Computer-generated holograms for reconstructing multi 3D images by space-division recording method  

Microsoft Academic Search

In this report, computer-generated holograms (CGHs) that are able to reconstruct different 3D images in accordance with moving viewpoints are discussed as an application of electron beam printing CGHs. In previous Practical Holography conferences, image-type CGHs which are able to reconstruct 3D images under white light were reported. This time, utilizing the method of this fabrication, trial making of a

Tomohisa Hamano; Mitsuru Kitamura

2000-01-01

280

Determining 3D Flow Fields via Multi-camera Light Field Imaging  

PubMed Central

In the field of fluid mechanics, the resolution of computational schemes has outpaced experimental methods and widened the gap between predicted and observed phenomena in fluid flows. Thus, a need exists for an accessible method capable of resolving three-dimensional (3D) data sets for a range of problems. We present a novel technique for performing quantitative 3D imaging of many types of flow fields. The 3D technique enables investigation of complicated velocity fields and bubbly flows. Measurements of these types present a variety of challenges to the instrument. For instance, optically dense bubbly multiphase flows cannot be readily imaged by traditional, non-invasive flow measurement techniques due to the bubbles occluding optical access to the interior regions of the volume of interest. By using Light Field Imaging we are able to reparameterize images captured by an array of cameras to reconstruct a 3D volumetric map for every time instance, despite partial occlusions in the volume. The technique makes use of an algorithm known as synthetic aperture (SA) refocusing, whereby a 3D focal stack is generated by combining images from several cameras post-capture [1]. Light Field Imaging allows for the capture of angular as well as spatial information about the light rays, and hence enables 3D scene reconstruction. Quantitative information can then be extracted from the 3D reconstructions using a variety of processing algorithms. In particular, we have developed measurement methods based on Light Field Imaging for performing 3D particle image velocimetry (PIV), extracting bubbles in a 3D field and tracking the boundary of a flickering flame. We present the fundamentals of the Light Field Imaging methodology in the context of our setup for performing 3D PIV of the airflow passing over a set of synthetic vocal folds, and show representative results from application of the technique to a bubble-entraining plunging jet. PMID:23486112
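As a rough illustration of the synthetic-aperture refocusing idea mentioned above (a shift-and-average over the camera array), here is a minimal sketch with made-up camera offsets and scaling; it is not the authors' implementation:

import numpy as np

def sa_refocus(images, cam_offsets, depth, scale=1.0):
    """Synthetic-aperture refocus: shift each camera image by a disparity
    proportional to its offset divided by the focal depth, then average.
    images      : list of 2D arrays (grayscale views from the camera array)
    cam_offsets : list of (dx, dy) camera positions relative to the array center
    depth       : focal depth at which to synthesize the focal plane
    scale       : converts metric disparity to pixels (depends on the setup)
    """
    stack = []
    for img, (dx, dy) in zip(images, cam_offsets):
        # integer pixel shift; sub-pixel interpolation would be used in practice
        sx = int(round(scale * dx / depth))
        sy = int(round(scale * dy / depth))
        stack.append(np.roll(np.roll(img, sy, axis=0), sx, axis=1))
    return np.mean(stack, axis=0)  # objects near 'depth' align and stay sharp

# toy usage: four random "views" standing in for the camera array
views = [np.random.rand(64, 64) for _ in range(4)]
offsets = [(-1, -1), (-1, 1), (1, -1), (1, 1)]
refocused = sa_refocus(views, offsets, depth=2.0, scale=10.0)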

Truscott, Tadd T.; Belden, Jesse; Nielson, Joseph R.; Daily, David J.; Thomson, Scott L.

2013-01-01

281

Blood vessels and feature points detection on retinal images  

Microsoft Academic Search

In this paper we present a method for the automatic extraction of blood vessels from retinal images, while capturing points of intersection/overlap and endpoints of the vascular tree. The algorithm performance is evaluated through a comparison with handmade segmented images available on the STARE project database (STructured Analysis of the REtina). The algorithm is performed on the green channel of

Edoardo Ardizzone; Roberto Pirrone; Orazio Gambino; Salvatore Radosta

2008-01-01

282

Automatic location of optic disk in retinal images  

Microsoft Academic Search

In research work leading toward automatic analysis of retinal fundus images, knowledge of the optic disk location is essential, and a new method to locate the optic disk automatically is proposed. The candidate regions are first determined by clustering the brightest pixels in the intensity image. Principal component analysis (PCA) is then applied to these candidate regions. The
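The abstract is truncated here; as a hedged sketch of only the first step it describes (clustering the brightest pixels to obtain optic-disk candidate regions), the fragment below uses an arbitrary brightness quantile and cluster count, and k-means as one possible clustering choice:

import numpy as np
from sklearn.cluster import KMeans

def optic_disk_candidates(intensity, top_fraction=0.02, n_clusters=3):
    """Cluster the brightest pixels of a fundus intensity image into
    candidate optic-disk regions; returns one (row, col) center per cluster."""
    thresh = np.quantile(intensity, 1.0 - top_fraction)
    rows, cols = np.nonzero(intensity >= thresh)           # brightest pixels
    coords = np.column_stack([rows, cols]).astype(float)
    km = KMeans(n_clusters=n_clusters, n_init=10).fit(coords)
    return km.cluster_centers_                              # candidate centers

# toy usage on a synthetic image with one bright blob
img = np.zeros((128, 128)); img[40:60, 70:90] = 1.0
print(optic_disk_candidates(img, n_clusters=1))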

Huiqi Li; Opas Chutatape

2001-01-01

283

Deconvolution of adaptive optics retinal images Julian C. Christou  

E-print Network

Deconvolution of adaptive optics retinal images. Julian C. Christou, Center for Adaptive Optics. … the contrast of the adaptive optics images. In this work we demonstrate that quantitative information is also … by using adaptive optics (AO) [1]. The wave-front correction is not perfect, however. Although a diffraction …

284

3D/2D convertible projection-type integral imaging using concave half mirror array.  

PubMed

We propose a new method for implementing a 3D/2D convertible feature in projection-type integral imaging by using a concave half mirror array. The concave half mirror array is partially reflective to incident light: the reflected component is modulated by the concave mirror array structure, while the transmitted component is unaffected. With this unique characteristic, 3D/2D conversion, or even the simultaneous display of 3D and 2D images, is possible. The prototype was fabricated by aluminum coating and a polydimethylsiloxane molding process. We experimentally verified the 3D/2D conversion and the display of a 3D image on a 2D background with the fabricated prototype. PMID:20940957

Hong, Jisoo; Kim, Youngmin; Park, Soon-gi; Hong, Jong-Ho; Min, Sung-Wook; Lee, Sin-Doo; Lee, Byoungho

2010-09-27

285

An Optimized Blockwise Non Local Means Denoising Filter for 3D Magnetic Resonance Images  

Microsoft Academic Search

A critical issue in image restoration is the problem of noise removal while keeping the integrity of relevant image information. Denoising is a crucial step to increase image quality and to improve the performance of all the tasks needed for quantitative imaging analysis. The method proposed in this paper is based on a 3D optimized blockwise version of the
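The abstract is truncated here; as a hedged illustration of the general (non-blockwise) non-local means idea it builds on, scikit-image ships a 2D/3D implementation. The snippet below is a generic sketch on synthetic data, not the authors' optimized blockwise method or parameters:

import numpy as np
from skimage.restoration import denoise_nl_means, estimate_sigma

# synthetic noisy 3D volume standing in for an MR image
rng = np.random.default_rng(0)
vol = np.zeros((32, 64, 64), dtype=float)
vol[:, 16:48, 16:48] = 1.0
noisy = vol + 0.2 * rng.standard_normal(vol.shape)

sigma = float(np.mean(estimate_sigma(noisy)))   # rough noise estimate
denoised = denoise_nl_means(noisy,
                            patch_size=3,       # 3x3x3 patches
                            patch_distance=5,   # search window radius
                            h=0.8 * sigma,      # filtering strength
                            fast_mode=True)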

Pierrick Coupe; Pierre Yger; Sylvain Prima; Pierre Hellier; Charles Kervrann; Christian Barillot

2008-01-01

286

Multiresolution 3-D reconstruction from side-scan sonar images.  

PubMed

In this paper, a new method for the estimation of seabed elevation maps from side-scan sonar images is presented. The side-scan image formation process is represented by a Lambertian diffuse model, which is then inverted by a multiresolution optimization procedure inspired by expectation-maximization to account for the characteristics of the imaged seafloor region. On convergence of the model, approximations for seabed reflectivity, side-scan beam pattern, and seabed altitude are obtained. The performance of the system is evaluated against a real structure of known dimensions. Reconstruction results for images acquired by different sonar sensors are presented. Applications to augmented reality for the simulation of targets in sonar imagery are also discussed. PMID:17269632

Coiras, Enrique; Petillot, Yvan; Lane, David M

2007-02-01

287

Image processing and 3D visualization in forensic pathologic examination  

NASA Astrophysics Data System (ADS)

The use of image processing is becoming increasingly important in the evaluation of violent crime. While much work has been done in the use of these techniques for forensic purposes outside of forensic pathology, its use in the pathologic examination of wounding has been limited. We are investigating the use of image processing and three-dimensional visualization in the analysis of patterned injuries and tissue damage. While image processing will never replace classical understanding and interpretation of how injuries develop and evolve, it can be a useful tool in helping an observer notice features in an image, may help provide correlation of surface to deep tissue injury, and provide a mechanism for the development of a metric for analyzing how likely it may be that a given object may have caused a given wound. We are also exploring methods of acquiring three-dimensional data for such measurements, which is the subject of a second paper.

Oliver, William R.; Altschuler, Bruce R.

1996-02-01

288

Automated 3D whole-breast ultrasound imaging: results of a clinical pilot study  

NASA Astrophysics Data System (ADS)

We present the first clinical results of a novel fully automated 3D breast ultrasound system. This system was designed to match a Philips diffuse optical mammography system to enable straightforward coregistration of optical and ultrasound images. During a measurement, three 3D transducers scan the breast from 4 different views. The resulting 12 datasets are registered together into a single volume using spatial compounding. In a pilot study, benign and malignant masses could be identified in the 3D images; however, lesion visibility was lower than with conventional breast ultrasound. Clear breast shape visualization suggests that ultrasound could support the reconstruction and interpretation of diffuse optical tomography images.

Leproux, Anaïs; van Beek, Michiel; de Vries, Ute; Wasser, Martin; Bakker, Leon; Cuisenaire, Olivier; van der Mark, Martin; Entrekin, Rob

2010-03-01

289

Mutual information as a measure of image quality for 3D dynamic lung imaging with EIT.  

PubMed

We report on a pilot study of dynamic lung electrical impedance tomography (EIT) at the University of Manchester. Low-noise EIT data at 100 frames per second were obtained from healthy male subjects during controlled breathing, followed by magnetic resonance imaging (MRI) subsequently used for spatial validation of the EIT reconstruction. The torso surface in the MR image and electrode positions obtained using MRI fiducial markers informed the construction of a 3D finite element model extruded along the caudal-distal axis of the subject. Small changes in the boundary that occur during respiration were accounted for by incorporating the sensitivity with respect to boundary shape into a robust temporal difference reconstruction algorithm. EIT and MRI images were co-registered using the open source medical imaging software, 3D Slicer. A quantitative comparison of quality of different EIT reconstructions was achieved through calculation of the mutual information with a lung-segmented MR image. EIT reconstructions using a linear shape correction algorithm reduced boundary image artefacts, yielding better contrast of the lungs, and had 10% greater mutual information compared with a standard linear EIT reconstruction. PMID:24710978
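A minimal sketch of the image-quality measure used above, mutual information between two co-registered images computed from their joint histogram, is given below; the binning and preprocessing here are illustrative choices, not those of the study:

import numpy as np

def mutual_information(img_a, img_b, bins=32):
    """Histogram-based mutual information between two images of equal shape."""
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = joint / joint.sum()                 # joint probability
    px = pxy.sum(axis=1, keepdims=True)       # marginal of image A
    py = pxy.sum(axis=0, keepdims=True)       # marginal of image B
    nz = pxy > 0                              # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

# toy check: MI of an image with itself exceeds MI with unrelated noise
rng = np.random.default_rng(1)
a = rng.random((64, 64))
print(mutual_information(a, a) > mutual_information(a, rng.random((64, 64))))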

Crabb, M G; Davidson, J L; Little, R; Wright, P; Morgan, A R; Miller, C A; Naish, J H; Parker, G J M; Kikinis, R; McCann, H; Lionheart, W R B

2014-05-01

290

Minimizing user intervention in registering 2D images to 3D models  

Microsoft Academic Search

This paper proposes a novel technique to speed up the registration of 2D images to 3D models. This problem often arises in the process of digitalization of real objects, because pictures are often taken independently from the 3D geometry. Although there are a number of methods to solve the prob- lem of registration automatically, they all need some further assumptions,

Thomas Franken; Matteo Dellepiane; Fabio Ganovelli; Paolo Cignoni; Claudio Montani; Roberto Scopigno

2005-01-01

291

Developing New Image Registration Techniques and 3D Displays for Neuroimaging and Neurosurgery Yuese Zheng

E-print Network

… the informatics students are developing a 3D movie that shows the surgical and preoperative data overlay, which … the surgery should be aligned with the patient during surgery. For this surgical application a fast, effective …

Zhou, Yaoqi

292

3D object retrieval using silhouette feature vector on shady images  

Microsoft Academic Search

As the amount of new information generated in the world rapidly increases, so does the need for efficient search in collections of structured data, texts and multimedia objects. 3D objects are an important type of multimedia data with many applications in fields such as medicine, chemistry and CAD. In this paper, a method for 3D object retrieval using a silhouette feature vector on shady images is presented. The method

Raju Barskar; G. F. Ahmed

2010-01-01

293

B-Spline Registration of 3D Images with Levenberg-Marquardt Optimization  

E-print Network

… -dimensional (3D) medical image volumes is to find a vector field of 3D displacements such that each point … to various medical applications they exhibit a high computational complexity, mainly because of the lack … applications include atlas construction, atlas-based segmentation or motion estimation. In the medical area …

Modersitzki, Jan

294

Adaptive Clutter Rejection for 3D Color Doppler Imaging: Preliminary Clinical Study  

Microsoft Academic Search

In three-dimensional (3D) ultrasound color Doppler imaging (CDI), effective rejection of flash artifacts caused by tissue motion (clutter) is important for improving sensitivity in visualizing blood flow in vessels. Since clutter characteristics can vary significantly during volume acquisition, a clutter rejection technique that can adapt to the underlying clutter conditions is desirable for 3D CDI. We have previously developed an

Yang Mo Yoo; Siddhartha Sikdar; Kerem Karadayi; Orpheus Kolokythas; Yongmin Kim

2008-01-01

295

The MURALE project: Image-based 3D modeling for archaeology  

E-print Network

The MURALE project: Image-based 3D modeling for archaeology. Luc Van Gool, Marc Pollefeys. … and visualisation technology for archaeology. The project will put special emphasis on the usability on the site … partners in particular. These comprise two methods to generate 3D models of objects, and approaches to deal …

Pollefeys, Marc

296

Simulation of chemical vapor infiltration and deposition based on 3D images: a local scale approach  

E-print Network

… /reaction problems; random walks; 3D image-based modeling. 1. Introduction: Ceramic Matrix Composites and Carbon-Fiber … infiltration of ceramic matrix composites is presented. This computational model requires a 3D representation … pore. Results of infiltration of an actual fiber …

Boyer, Edmond

297

Finite Element Methods for Active Contour Models and Balloons for 2D and 3D Images  

Microsoft Academic Search

The use of energy-minimizing curves, known as "snakes" to extract features of interest in images has been introduced by Kass, Witkin and Terzopoulos [23]. A balloon model was introduced in [12] as a way to generalize and solve some of the problems encountered with the original method. We present a 3D generalization of the balloon model as a 3D deformable
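For readers unfamiliar with the energy-minimizing curves ("snakes") referenced above, here is a small 2D example using scikit-image's generic active-contour implementation; it is a basic snake, not the balloon model of the paper, and the image, initial contour and parameters are arbitrary illustrative choices:

import numpy as np
from skimage import data, filters, segmentation

img = data.coins()                                   # sample grayscale image
smooth = filters.gaussian(img, sigma=2)

# initial circular snake placed by hand around one coin (row, col) coordinates
t = np.linspace(0, 2 * np.pi, 200)
init = np.column_stack([55 + 25 * np.sin(t), 80 + 25 * np.cos(t)])

snake = segmentation.active_contour(smooth, init,
                                    alpha=0.015,     # elasticity
                                    beta=10.0,       # rigidity
                                    gamma=0.001)     # step size
print(snake.shape)   # (200, 2) contour points after energy minimization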

Laurent D. Cohen; Isaac Cohen

1991-01-01

298

Linear 3D reconstruction of time-domain diffuse optical imaging differential data: improved  

E-print Network

Linear 3D reconstruction of time-domain diffuse optical imaging differential data: improved depth … Abstract: We present 3D linear reconstructions of time-domain (TD) diffuse … the diffusion approximation for a homogeneous semi-infinite medium. The matrix is then inverted using spatially …

Boas, David

299

Space Radar Image Isla Isabela in 3-D  

NASA Technical Reports Server (NTRS)

This is a three-dimensional view of Isabela, one of the Galapagos Islands located off the western coast of Ecuador, South America. This view was constructed by overlaying a Spaceborne Imaging Radar-C/X-band Synthetic Aperture Radar (SIR-C/X-SAR) image on a digital elevation map produced by TOPSAR, a prototype airborne interferometric radar which produces simultaneous image and elevation data. The vertical scale in this image is exaggerated by a factor of 1.87. The SIR-C/X-SAR image was taken on the 40th orbit of space shuttle Endeavour. The image is centered at about 0.5 degree south latitude and 91 degrees west longitude and covers an area of 75 by 60 kilometers (47 by 37 miles). The radar incidence angle at the center of the image is about 20 degrees. The western Galapagos Islands, which lie about 1,200 kilometers (750 miles)west of Ecuador in the eastern Pacific, have six active volcanoes similar to the volcanoes found in Hawaii and reflect the volcanic processes that occur where the ocean floor is created. Since the time of Charles Darwin's visit to the area in 1835, there have been more than 60 recorded eruptions on these volcanoes. This SIR-C/X-SAR image of Alcedo and Sierra Negra volcanoes shows the rougher lava flows as bright features, while ash deposits and smooth pahoehoe lava flows appear dark. Vertical exaggeration of relief is a common tool scientists use to detect relationships between structure (for example, faults, and fractures) and topography. Spaceborne Imaging Radar-C and X-Synthetic Aperture Radar (SIR-C/X-SAR) is part of NASA's Mission to Planet Earth. The radars illuminate Earth with microwaves allowing detailed observations at any time, regardless of weather or sunlight conditions. SIR-C/X-SAR uses three microwave wavelengths: L-band (24 cm), C-band (6 cm) and X-band (3 cm). The multi-frequency data will be used by the international scientific community to better understand the global environment and how it is changing. The SIR-C/X-SAR data, complemented by aircraft and ground studies, will give scientists clearer insights into those environmental changes which are caused by nature and those changes which are induced by human activity. SIR-C was developed by NASA's Jet Propulsion Laboratory. X-SAR was developed by the Dornier and Alenia Spazio companies for the German space agency, Deutsche Agentur fuer Raumfahrtangelegenheiten (DARA), and the Italian space agency, Agenzia Spaziale Italiana (ASI).

1999-01-01

300

Radar Imaging of Spheres in 3D using MUSIC  

SciTech Connect

We have shown that multiple spheres can be imaged by linear and planar EM arrays using only one component of polarization. The imaging approach involves calculating the SVD of the scattering response matrix, selecting a subset of singular values that represents noise, and evaluating the MUSIC functional. The noise threshold applied to the spectrum of singular values for optimal performance is typically around 1%. The resulting signal subspace includes more than one singular value per sphere. The presence of reflections from the ground improves height localization, even for a linear array parallel to the ground. However, the interference between direct and reflected energy modulates the field, creating periodic nulls that can obscure targets in typical images. These nulls are largely eliminated by normalizing the MUSIC functional with the broadside beam pattern of the array. The resulting images show excellent localization for 1 and 2 spheres. The performance for the 3 sphere configurations is complicated by shadowing effects and the greater range of the 3rd sphere in case 2. Two of the three spheres are easily located by MUSIC but the third is difficult to distinguish from other local maxima of the complex imaging functional. Improvement is seen when the linear array is replaced with a planar array, which increases the effective aperture height. Further analysis of the singular values and their relationship to modes of scattering from the spheres, as well as better ways to exploit polarization, should improve performance. Work along these lines is currently being pursued by the authors.
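A toy numerical sketch of the MUSIC step described above (SVD of a multistatic response matrix, noise-subspace selection with a ~1% threshold, and evaluation of the MUSIC functional on a grid) follows; the array geometry, target positions and free-space Green's function are illustrative assumptions, not the authors' setup:

import numpy as np

# linear array of sensors along x, two point scatterers, scalar waves
k = 2 * np.pi / 0.1                                    # wavenumber (wavelength 0.1 m)
array_x = np.linspace(-0.5, 0.5, 16)
sensors = np.column_stack([array_x, np.zeros(16), np.zeros(16)])
targets = np.array([[0.1, 0.0, 1.0], [-0.2, 0.0, 1.5]])

def green(points):
    """Free-space Green's function vectors from each sensor to the given points."""
    r = np.linalg.norm(sensors[:, None, :] - points[None, :, :], axis=2)
    return np.exp(1j * k * r) / r                      # shape (n_sensors, n_points)

G = green(targets)
K = G @ G.T                                            # multistatic response matrix (Born)
U, s, _ = np.linalg.svd(K)
U_noise = U[:, s < 0.01 * s[0]]                        # ~1% noise threshold, as in the text

# evaluate the MUSIC pseudo-spectrum on a grid in the y = 0 plane
xs, zs = np.meshgrid(np.linspace(-0.5, 0.5, 101), np.linspace(0.5, 2.0, 101))
grid = np.column_stack([xs.ravel(), np.zeros(xs.size), zs.ravel()])
g = green(grid)
g /= np.linalg.norm(g, axis=0)
music = 1.0 / np.linalg.norm(U_noise.conj().T @ g, axis=0) ** 2
print("strongest MUSIC peak near:", grid[np.argmax(music)])   # should lie near a target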

Chambers, D H; Berryman, J G

2003-01-21

301

Tracking on treelike structures in 3D confocal images  

NASA Astrophysics Data System (ADS)

Confocal microscopy is well suited to performing volume scans of nerve cells with dendrites and spines. The length and diameter of the dendrite branches and the spines should be determined to analyze the influence of learning processes. A prerequisite for that is the recognition of the dendritic structure with its branchings and spines. Because the microscope operates at the resolution limit, the images are blurry, noisy and only poorly sampled. In contrast to other methods which are based on binary images and thinning algorithms, our method tracks the dendritic tree faster and in the gray-level domain using simple geometric models. An explicit segmentation is unnecessary, and knowledge about the shape and structure of the dendrite is included as a priori information. For large trees, a low-resolution scan is first captured to create a rough model. The algorithm allows refining this model using higher-resolution scans of interesting regions along the dendrite. The large unimportant areas between the dendrite branches are not scanned at high resolution, to save time and disc space. In a second step, the parameters of the model are adapted to the microscope image by minimizing the deviation of the microscope image from the model image convolved with the microscope point spread function. Features like the number, diameter, length and position of the dendrite branches and spines can be easily calculated from the model. Interactive user intervention is possible in the model domain.

Herzog, Andreas; Krell, Gerald; Michaelis, Bernd; Zuschratter, Werner

1998-06-01

302

Computer-assisted 3D design software for teaching neuro-ophthalmology of the oculomotor system and training new retinal surgery techniques  

NASA Astrophysics Data System (ADS)

Purpose: To create a more effective method of demonstrating complex subject matter in ophthalmology with the use of high end, 3-D, computer aided animation and interactive multimedia technologies. Specifically, to explore the possibilities of demonstrating the complex nature of the neuroophthalmological basics of the human oculomotor system in a clear and non confusing way, and to demonstrate new forms of retinal surgery in a manner that makes the procedures easier to understand for other retinal surgeons. Methods and Materials: Using Reflektions 4.3, Monzoom Pro 4.5, Cinema 4D XL 5.03, Cinema 4D XL 8 Studio Bundle, Mediator 4.0, Mediator Pro 5.03, Fujitsu-Siemens Pentium III and IV, Gericom Webgine laptop, M.G.I. Video Wave 1.0 and 5, Micrografix Picture Publisher 6.0 and 8, Amorphium 1.0, and Blobs for Windows, we created 3-D animations showing the origin, insertion, course, main direction of pull, and auxiliary direction of pull of the six extra-ocular eye muscles. We created 3-D animations that (a) show the intra-cranial path of the relevant oculomotor cranial nerves and which muscles are supplied by them, (b) show which muscles are active in each of the ten lines of sight, (c) demonstrate the various malfunctions of oculomotor systems, as well as (d) show the surgical techniques and the challenges in radial optic neurotomies and subretinal surgeries. Most of the 3-D animations were integrated in interactive multimedia teaching programs. Their effectiveness was compared to conventional teaching methods in a comparative study performed at the University of Vienna. We also performed a survey to examine the response of students being taught with the interactive programs. We are currently in the process of placing most of the animations in an interactive web site in order to make them freely available to everyone who is interested. Results: Although learning how to use complex 3-D computer animation and multimedia authoring software can be very time consuming and frustrating, we found that once the programs are mastered they can be used to create 3-D animations that drastically improve the quality of medical demonstrations. The comparative study showed a significant advantage of using these technologies over conventional teaching methods. The feedback from medical students, doctors, and retinal surgeons was overwhelmingly positive. A strong interest was expressed to have more subjects and techniques demonstrated in this fashion. Conclusion: 3-D computer technologies should be used in the demonstration of all complex medical subjects. More effort and resources need to be given to the development of these technologies that can improve the understanding of medicine for students, doctors, and patients alike.

Glittenberg, Carl; Binder, Susanne

2004-07-01

303

3-D Scene Representation as a Collection of Images and Fundamental Matrices  

Microsoft Academic Search

: In this report, we address the problem of the prediction of new views of a given scene from existing weakly or fully calibrated views called reference views. Our method does not make use of a three-dimensional model of the scene, but of the existing relations between the images. The new views are represented in the reference views by a viewpoint and a retinal

Stéphane Laveau; Olivier Faugeras

1994-01-01

304

3D printing based on imaging data: review of medical applications  

Microsoft Academic Search

Purpose: Generation of graspable three-dimensional objects applied for surgical planning, prosthetics and related applications using 3D printing or rapid prototyping is summarized and evaluated. Materials and methods: Graspable 3D objects overcome the limitations of 3D visualizations which can only be displayed on flat screens. 3D objects can be produced based on CT or MRI volumetric medical images. Using dedicated post-processing algorithms, a

F. Rengier; A. Mehndiratta; H. von Tengg-Kobligk; C. M. Zechmann; R. Unterhinninghofen; H.-U. Kauczor; F. L. Giesel

2010-01-01

305

True 3D imaging with monocular cues using holographic stereography  

E-print Network

In this Letter, we derive a quantitative condition to evaluate monocular accommodation in holographic stereograms. We find that the holographically reconstructed scene can be regarded as true-stereo imaging when the whole scene lies in the monocular-cues area, but not when it lies in the ghosting area or the lacking-information area. To demonstrate this, we develop a pupil-function-based integral imaging algorithm to simulate mono-eye observation, and set up a holographic printing system to fabricate a full-parallax holographic stereogram. Both simulation and experimental results confirm our theoretical predictions.

Pu, Yi-Ying; Liu, Yuan-Zhi; Dong, Jian-Wen; Wang, He-Zhou

2010-01-01

306

Divisional 3D shape reconstruction of object image  

NASA Astrophysics Data System (ADS)

A novel approach to reconstructing shape from shading information is presented, which considers both global and local shading. Firstly, a 2D gray-level image is divided into several simple patches with a snake model. Secondly, the shape of each patch is reconstructed. Thirdly, the patches are joined into a whole, and finally the mosaic surface is smoothed. With this method, both the distortion caused by image discontinuity and the effects of local noise are eliminated. Experimental results are given to illustrate the usefulness of the approach.

Xu, Dong; Xia, LiangZheng; Yang, Shizhou

2001-09-01

307

3-D capacitance density imaging of fluidized bed  

DOEpatents

A three-dimensional capacitance density imaging of a gasified bed or the like in a containment vessel is achieved using a plurality of electrodes provided circumferentially about the bed in levels and along the bed in channels. The electrodes are individually and selectively excited electrically at each level to produce a plurality of current flux field patterns generated in the bed at each level. The current flux field patterns are suitably sensed and a density pattern of the bed at each level determined. By combining the determined density patterns at each level, a three-dimensional density image of the bed is achieved.

Fasching, George E. (653 Vista Pl., Morgantown, WV 26505)

1990-01-01

308

Thermal Plasma Imager (TPI): An Imaging Thermal Ion Mass and 3-D Velocity Analyzer  

NASA Astrophysics Data System (ADS)

The Thermal Plasma Imager (TPI) is an imaging thermal ion mass and 3-dimensional (3-D) velocity analyzer. It is designed to measure the instantaneous mass composition and detailed, mass-resolved, 3-dimensional, velocity distributions of thermal-energy (0.5-50 eV/q) ions on a 3-axis stabilized spacecraft. It consists of a pair of semi-toroidal deflection and fast-switching time-of-flight (TOF) electrodes, a hemispherical electrostatic analyzer (HEA), and a micro-channel plate (MCP) detector. It uses the TOF electrodes to clock the flight times of individual incident ions, and the HEA to focus ions of a given energy-per-charge and incident angle (elevation and azimuth) onto a single point on the MCP. The TOF/HEA combination produces an instantaneous and mass-resolved "image" of a 2-D cone of the 3-D velocity distribution for each ion species, and combines a sequence of concentric 2-D conical samples into a 3-D distribution covering 360° in azimuth and 120° in elevation. It is currently under development for the Enhanced Polar Outflow Probe (e-POP) and Planet-C Venus missions. It is an improved, "3-dimensional" version of the SS520-2 Thermal Suprathermal Analyzer (TSA), which samples ions in its entrance aperture plane and uses the spacecraft spin to achieve 3-D ion sampling. In this paper, we present its detailed design characteristics and prototype instrument performance, and compare these with the ion velocity measurement performances from its 2-D TSA predecessor on SS520-2.

Yau, A. W.; Amerl, P. V.; King, E. P.; Miyake, W.; Abe, T.

2003-04-01

309

Space Radar Image of Kilauea, Hawaii in 3-D  

NASA Technical Reports Server (NTRS)

This is a three-dimensional perspective view of a false-color image of the eastern part of the Big Island of Hawaii. It was produced using all three radar frequencies -- X-band, C-band and L-band -- from the Spaceborne Imaging Radar-C/X-Band Synthetic Aperture Radar (SIR-C/X-SAR) flying on the space shuttle Endeavour, overlaid on a U.S. Geological Survey digital elevation map. Visible in the center of the image in blue are the summit crater (Kilauea Caldera) which contains the smaller Halemaumau Crater, and the line of collapse craters below them that form the Chain of Craters Road. The image was acquired on April 12, 1994 during orbit 52 of the space shuttle. The area shown is approximately 34 by 57 kilometers (21 by 35 miles) with the top of the image pointing toward northwest. The image is centered at about 155.25 degrees west longitude and 19.5 degrees north latitude. The false colors are created by displaying three radar channels of different frequency. Red areas correspond to high backscatter at L-HV polarization, while green areas exhibit high backscatter at C-HV polarization. Finally, blue shows high return at X-VV polarization. Using this color scheme, the rain forest appears bright on the image, while the green areas correspond to lower vegetation. The lava flows have different colors depending on their types and are easily recognizable due to their shapes. The flows at the top of the image originated from the Mauna Loa volcano. Kilauea volcano has been almost continuously active for more than the last 11 years. Field teams that were on the ground specifically to support these radar observations report that there was vigorous surface activity about 400 meters (one-quartermile) inland from the coast. A moving lava flow about 200 meters (650 feet) in length was observed at the time of the shuttle overflight, raising the possibility that subsequent images taken during this mission will show changes in the landscape. Currently, most of the lava that is erupted travels the 8 kilometers (5 miles) from the Pu'u O'o crater (the active vent) just outside this image to the coast through a series of lava tubes, but in the past there have been many large lava flows that have traveled this distance, destroying houses and parts of the Hawaii Volcanoes National Park. This SIR-C/X-SAR image shows two types of lava flows that are common to Hawaiian volcanoes. Pahoehoe lava flows are relatively smooth, and appear very dark blue because much of the radar energy is reflected away from the radar. In contrast other lava flows are relatively rough and bounce much of the radar energy back to the radar, making that part of the image bright blue. This radar image is valuable because it allows scientists to study an evolving lava flow field from the Pu'u O'o vent. Much of the area on the northeast side (right) of the volcano is covered with tropical rain forest, and because trees reflect a lot of the radar energy, the forest appears bright in this radar scene. The linear feature running from Kilauea Crater to the right of the image is Highway 11leading to the city of Hilo which is located just beyond the right edge of this image. Spaceborne Imaging Radar-C and X-Synthetic Aperture Radar (SIR-C/X-SAR) is part of NASA's Mission to Planet Earth. The radars illuminate Earth with microwaves allowing detailed observations at any time, regardless of weather or sunlight conditions. SIR-C/X-SAR uses three microwave wavelengths: L-band (24 cm), C-band (6 cm) and X-band (3 cm). 
The multi-frequency data will be used by the international scientific community to better understand the global environment and how it is changing. The SIR-C/X-SAR data, complemented by aircraft and ground studies, will give scientists clearer insights into those environmental changes which are caused by nature and those changes which are induced by human activity. SIR-C was developed by NASA's Jet Propulsion Laboratory. X-SAR was developed by the Dornier and Alenia Spazio companies for the German space agency, Deutsche Agentur fuer Raumfahrtangelegenheiten (DARA)

1999-01-01

310

Registration and 3D visualization of large microscopy images  

NASA Astrophysics Data System (ADS)

Inactivation of the retinoblastoma gene in mouse embryos causes tissue infiltrations into critical sections of the placenta, which has been shown to affect fetal survivability. Our collaborators in cancer genetics are extremely interested in examining the three dimensional nature of these infiltrations given a stack of two dimensional light microscopy images. Three sets of wildtype and mutant placentas were sectioned serially and digitized using a commercial light microscopy scanner. Each individual placenta dataset consisted of approximately 1000 images totaling 700 GB in size, which were registered into a volumetric dataset using the National Library of Medicine's (NIH/NLM) Insight Segmentation and Registration Toolkit (ITK). This paper describes our method for image registration to aid in volume visualization of tissue-level intermixing for both wildtype and Rb - specimens. The registration process faces many challenges arising from the large image sizes, damage during sectioning, staining gradients both within and across sections, and background noise. These issues limit the direct application of standard registration techniques due to frequent convergence to local solutions. In this work, we develop a mixture of automated and semi-automated enhancements with ground-truth validation for the mutual information-based registration algorithm. Our final volume renderings clearly show tissue intermixing differences between wildtype and Rb - specimens which are not obvious prior to registration.
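The registration described above relies on ITK's mutual-information machinery; a minimal, generic 2D example in that style, using the SimpleITK wrapper with illustrative defaults (not the authors' pipeline, parameters, or file names), is sketched below:

import SimpleITK as sitk

def register_mi(fixed, moving):
    """Rigid 2D registration driven by Mattes mutual information."""
    reg = sitk.ImageRegistrationMethod()
    reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
    reg.SetMetricSamplingStrategy(reg.RANDOM)
    reg.SetMetricSamplingPercentage(0.1)
    reg.SetInterpolator(sitk.sitkLinear)
    reg.SetOptimizerAsRegularStepGradientDescent(learningRate=1.0,
                                                 minStep=1e-4,
                                                 numberOfIterations=200)
    init = sitk.CenteredTransformInitializer(fixed, moving,
                                             sitk.Euler2DTransform(),
                                             sitk.CenteredTransformInitializerFilter.GEOMETRY)
    reg.SetInitialTransform(init, inPlace=False)
    tx = reg.Execute(sitk.Cast(fixed, sitk.sitkFloat32),
                     sitk.Cast(moving, sitk.sitkFloat32))
    return sitk.Resample(moving, fixed, tx, sitk.sitkLinear, 0.0)

# usage with hypothetical section file names:
# aligned = register_mi(sitk.ReadImage("section_010.png", sitk.sitkFloat32),
#                       sitk.ReadImage("section_011.png", sitk.sitkFloat32))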

Mosaliganti, Kishore; Pan, Tony; Sharp, Richard; Ridgway, Randall; Iyengar, Srivathsan; Gulacy, Alexandra; Wenzel, Pamela; de Bruin, Alain; Machiraju, Raghu; Huang, Kun; Leone, Gustavo; Saltz, Joel

2006-03-01

311

Model Based Detection of tubular structures in 3D images  

E-print Network

with GEMS (General Electric). Keywords: filtering, vessel detection, multiscale analysis, segmentation. … exist in the domain of vessel detection applied to 2D images. Yet, the extension of a 2D method to three … of it. On one hand, if we give a simple definition of a vessel, the algorithm of detection …

Paris-Sud XI, Universit茅 de

312

Depth Based Image Registration via 3D Geometric Segmentation  

E-print Network

… and Dapeng Wu, Department of Electrical and Computer Engineering, University of Florida, Gainesville, FL 32611. … is a fundamental task in computer vision and it significantly contributes to high-level computer vision … depth variations exist in the images with high-rise objects. To address the parallax problem, we present …

Wu, Dapeng Oliver

313

Developing 3-D Imaging Mass Spectrometry Anna C. Crecelius  

E-print Network

… slices [1]. The goal of the present study is to expand this technique by adding a third dimension … is developed in a first approach with printed images of mouse brain slices on paper to establish stacking … are downloaded from a brain atlas (http://www.mbl.org/atlas170/atlas170_frame.html). Using PhotoShop …

Bodenheimer, Bobby

314

Detection of retinal nerve fiber layer defects on retinal fundus images for early diagnosis of glaucoma  

NASA Astrophysics Data System (ADS)

Retinal nerve fiber layer defect (NFLD) is a major sign of glaucoma, which is the second leading cause of blindness in the world. Early detection of NFLDs is critical for improved prognosis of this progressive, blinding disease. We have investigated a computerized scheme for detection of NFLDs on retinal fundus images. In this study, 162 images, including 81 images with 99 NFLDs, were used. After major blood vessels were removed, the images were transformed so that the curved paths of retinal nerves become approximately straight on the basis of ellipses, and the Gabor filters were applied for enhancement of NFLDs. Bandlike regions darker than the surrounding pixels were detected as candidates of NFLDs. For each candidate, image features were determined and the likelihood of a true NFLD was determined by using the linear discriminant analysis and an artificial neural network (ANN). The sensitivity for detecting the NFLDs was 91% at 1.0 false positive per image by using the ANN. The proposed computerized system for the detection of NFLDs can be useful to physicians in the diagnosis of glaucoma in a mass screening.
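As a rough sketch of the Gabor-filter enhancement step mentioned above, the fragment below applies generic scikit-image Gabor filtering over a few orientations and keeps the maximum magnitude response; the test image, frequency and angles are illustrative, not the study's parameters:

import numpy as np
from skimage import data, filters

img = filters.gaussian(data.camera() / 255.0, sigma=1)    # stand-in grayscale image

# maximum response over several orientations emphasizes band-like (fiber-like) structures
responses = []
for theta in np.linspace(0, np.pi, 8, endpoint=False):
    real, imag = filters.gabor(img, frequency=0.15, theta=theta)
    responses.append(np.sqrt(real**2 + imag**2))           # magnitude response
enhanced = np.max(responses, axis=0)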

Muramatsu, Chisako; Hayashi, Yoshinori; Sawada, Akira; Hatanaka, Yuji; Hara, Takeshi; Yamamoto, Tetsuya; Fujita, Hiroshi

2010-01-01

315

3D printing of intracranial artery stenosis based on the source images of magnetic resonance angiograph  

PubMed Central

Background and purpose Three dimensional (3D) printing techniques for brain diseases have not been widely studied. We attempted to "print" the segments of intracranial arteries based on magnetic resonance imaging. Methods Three dimensional magnetic resonance angiography (MRA) was performed on two patients with middle cerebral artery (MCA) stenosis. Using scale-adaptive vascular modeling, 3D vascular models were constructed from the MRA source images. The magnified (ten times) regions of interest (ROI) of the stenotic segments were selected and fabricated by a 3D printer with a resolution of 30 µm. A survey of 8 clinicians was performed to evaluate the accuracy of 3D printing results as compared with MRA findings (4 grades, grade 1: consistent with MRA and provides additional visual information; grade 2: consistent with MRA; grade 3: not consistent with MRA; grade 4: not consistent with MRA and provides probably misleading information). If a 3D printed vessel segment was ideally matched to the MRA findings (grade 2 or 1), the 3D printing was defined as successful. Results Seven responders marked "grade 1" for the 3D printing results, while one marked "grade 4". Therefore, 87.5% of the clinicians considered the 3D printing successful. Conclusions Our pilot study confirms the feasibility of using the 3D printing technique in the research field of intracranial artery diseases. Further investigations are warranted to optimize this technique and translate it into clinical practice. PMID:25333049

Liu, Jia; Li, Ming-Li; Sun, Zhao-Yong; Chen, Jie

2014-01-01

316

Automatic 3D ultrasound calibration for image guided therapy using intramodality image registration  

NASA Astrophysics Data System (ADS)

Many real time ultrasound (US) guided therapies can benefit from management of motion-induced anatomical changes with respect to a previously acquired computerized anatomy model. Spatial calibration is a prerequisite to transforming US image information to the reference frame of the anatomy model. We present a new method for calibrating 3D US volumes using intramodality image registration, derived from the 'hand-eye' calibration technique. The method is fully automated by implementing data rejection based on sensor displacements, automatic registration over overlapping image regions, and a self-consistency error metric evaluated continuously during calibration. We also present a novel method for validating US calibrations based on measurement of physical phantom displacements within US images. Both calibration and validation can be performed on arbitrary phantoms. Results indicate that normalized mutual information and localized cross correlation produce the most accurate 3D US registrations for calibration. Volumetric image alignment is more accurate and reproducible than point selection for validating the calibrations, yielding <1.5 mm root mean square error, a significant improvement relative to previously reported hand-eye US calibration results. Comparison of two different phantoms for calibration and for validation revealed significant differences for validation (p = 0.003) but not for calibration (p = 0.795).

Schlosser, Jeffrey; Kirmizibayrak, Can; Shamdasani, Vijay; Metz, Steve; Hristov, Dimitre

2013-11-01

317

Pincushion correction techniques and their effects on calculated 3D positions and imaging geometries  

NASA Astrophysics Data System (ADS)

Two techniques for pincushion correction are evaluated based on their effect on calculation of the image geometry and 3D positions of object points. Images of a uniform wire mesh and a calibration phantom containing lead beads in its surface were acquired on the image intensifier TV systems in our catheterization labs. The radial mapping functions relating points in the original images and in the corrected images were determined using the mesh image. The undistorted mesh model was also used to determine and correct the distortions locally, i.e., for each square region between the mesh points. Thus, two corrected images were obtained. Images of the calibration phantom before and after correction were analyzed to determine the 3D position of the lead beads and the imaging geometry, using a calibration algorithm and the enhanced Metz-Fencil technique. In comparing the 3D positions calculated from the radially corrected and locally corrected images, the calculated 3D positions using the calibration technique vary by less than 0.6 mm in the x and y direction and less than 5.0 mm in the z direction. The uncorrected data yields differences of over 1 cm in the z direction. The 3D positions calculated using the enhanced Metz-Fencil technique appear to be more accurate when pincushion correction is applied.
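A minimal sketch of applying a radial mapping function of the kind described above to resample a distorted image is given below; the polynomial model and its coefficient are made-up placeholders, since a real system would fit them to the wire-mesh calibration image:

import numpy as np
from scipy.ndimage import map_coordinates

def radial_correct(img, k=1e-6, center=None):
    """Resample 'img' so that corrected radius r_c samples the distorted image
    at r_d = r_c * (1 + k * r_c**2) (simple odd-order radial model)."""
    h, w = img.shape
    cy, cx = center if center is not None else ((h - 1) / 2.0, (w - 1) / 2.0)
    yy, xx = np.mgrid[0:h, 0:w].astype(float)
    dy, dx = yy - cy, xx - cx
    r = np.hypot(dx, dy)
    scale = 1.0 + k * r**2                        # where to sample in the distorted image
    coords = np.array([cy + dy * scale, cx + dx * scale])
    return map_coordinates(img, coords, order=1, mode='nearest')

corrected = radial_correct(np.random.rand(480, 640), k=2e-7)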

Hoffmann, Kenneth R.; Chen, Yang; Esthappan, Jacqueline; Chen, Shiuh-Yung J.; Carroll, John D.

1996-04-01

318

Hands-on guide for 3D image creation for geological purposes  

NASA Astrophysics Data System (ADS)

Geological structures in outcrops or hand specimens are inherently three dimensional (3D), and therefore better understandable if viewed in 3D. While 3D models can easily be created, manipulated, and looked at from all sides on the computer screen (e.g., using photogrammetry or laser scanning data), 3D visualizations for publications or conference posters are much more challenging as they have to live in a 2D-world (i.e., on a sheet of paper). Perspective 2D visualizations of 3D models do not fully transmit the "feeling and depth of the third dimension" to the audience; but this feeling is desirable for a better examination and understanding in 3D of the structure under consideration. One of the very few possibilities to generate real 3D images, which work on a 2D display, is by using so-called stereoscopic images. Stereoscopic images are two images of the same object recorded from two slightly offset viewpoints. Special glasses and techniques have to be used to make sure that one image is seen only by one eye, and the other image is seen by the other eye, which together lead to the "3D effect". Geoscientists are often familiar with such 3D images. For example, geomorphologists traditionally view stereographic orthophotos by employing a mirror-steroscope. Nowadays, petroleum-geoscientists examine high-resolution 3D seismic data sets in special 3D visualization rooms. One of the methods for generating and viewing a stereoscopic image, which does not require a high-tech viewing device, is to create a so-called anaglyph. The principle is to overlay two images saturated in red and cyan, respectively. The two images are then viewed through red-cyan-stereoscopic glasses. This method is simple and cost-effective, but has some drawbacks in preserving colors accurately. A similar method is used in 3D movies, where polarized light or shuttering techniques are used to separate the left from the right image, which allows preserving the original colors. The advantage of red-cyan anaglyphs is their simplicity and the possibility to print them on normal paper or project them using a conventional projector. Producing 3D stereoscopic images is much easier than commonly thought. Our hands-on poster provides an easy-to-use guide for producing 3D stereoscopic images. Few simple rules-of-thumb are presented that define how photographs of any scene or object have to be shot to produce good-looking 3D images. We use the free software Stereophotomaker (http://stereo.jpn.org/eng/stphmkr) to produce anaglyphs and provide red-cyan 3D glasses for viewing them. Our hands-on poster is easy to adapt and helps any geologist to present his/her field or hand specimen photographs in a much more fashionable 3D way for future publications or conference posters.
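The red-cyan anaglyph construction described above amounts to taking the red channel from the left-eye image and the green and blue channels from the right-eye image; a small sketch with hypothetical file names follows (a minimal illustration, not a replacement for the StereoPhoto Maker workflow mentioned in the abstract):

import numpy as np
from PIL import Image

def make_anaglyph(left_path, right_path, out_path):
    """Combine a left/right stereo pair into a red-cyan anaglyph."""
    left = np.asarray(Image.open(left_path).convert("RGB"))
    right = np.asarray(Image.open(right_path).convert("RGB"))
    ana = right.copy()
    ana[..., 0] = left[..., 0]          # red channel from the left-eye image
    # green and blue (the cyan part) stay from the right-eye image
    Image.fromarray(ana).save(out_path)

# make_anaglyph("outcrop_left.jpg", "outcrop_right.jpg", "outcrop_anaglyph.jpg")  # hypothetical files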

Frehner, Marcel; Tisato, Nicola

2013-04-01

319

Space Radar Image of Mammoth, California in 3-D  

NASA Technical Reports Server (NTRS)

This is a three-dimensional perspective of Mammoth Mountain, California. This view was constructed by overlaying a Spaceborne Imaging Radar-C (SIR-C) radar image on a U.S. Geological Survey digital elevation map. Vertical exaggeration is 1.87 times. The image is centered at 37.6 degrees north, 119.0 degrees west. It was acquired from the Spaceborne Imaging Radar C/X-Band Synthetic Aperture Radar (SIR-C/X-SAR) onboard space shuttle Endeavour on its 67th orbit on April 13, 1994. In this color representation, red is C-band HV-polarization, green is C-band VV-polarization and blue is the ratio of C-band VV to C-band HV. Blue areas are smooth, and yellow areas are rock out-crops with varying amounts of snow and vegetation. Crowley Lake is in the foreground, and Highway 395 crosses in the middle of the image. Mammoth Mountain is shown in the upper right. Spaceborne Imaging Radar-C and X-Synthetic Aperture Radar (SIR-C/X-SAR) is part of NASA's Mission to Planet Earth. The radars illuminate Earth with microwaves allowing detailed observations at any time, regardless of weather or sunlight conditions. SIR-C/X-SAR uses three microwave wavelengths: L-band (24 cm), C-band (6 cm) and X-band (3 cm). The multi-frequency data will be used by the international scientific community to better understand the global environment and how it is changing. The SIR-C/X-SAR data, complemented by aircraft and ground studies, will give scientists clearer insights into those environmental changes which are caused by nature and those changes which are induced by human activity. SIR-C was developed by NASA's Jet Propulsion Laboratory. X-SAR was developed by the Dornier and Alenia Spazio companies for the German space agency, Deutsche Agentur fuer Raumfahrtangelegenheiten (DARA), and the Italian space agency, Agenzia Spaziale Italiana (ASI).

1999-01-01

320

Space Radar Image of Long Valley, California - 3D view  

NASA Technical Reports Server (NTRS)

This is a three-dimensional perspective view of Long Valley, California by the Spaceborne Imaging Radar-C/X-band Synthetic Aperture Radar on board the space shuttle Endeavour. This view was constructed by overlaying a color composite SIR-C image on a digital elevation map. The digital elevation map was produced using radar interferometry, a process by which radar data are acquired on different passes of the space shuttle and, which then, are compared to obtain elevation information. The data were acquired on April 13, 1994 and on October 3, 1994, during the first and second flights of the SIR-C/X-SAR radar instrument. The color composite radar image was produced by assigning red to the C-band (horizontally transmitted and vertically received) polarization; green to the C-band (vertically transmitted and received) polarization; and blue to the ratio of the two data sets. Blue areas in the image are smooth and yellow areas are rock outcrops with varying amounts of snow and vegetation. The view is looking north along the northeastern edge of the Long Valley caldera, a volcanic collapse feature created 750,000 years ago and the site of continued subsurface activity. Crowley Lake is off the image to the left. Spaceborne Imaging Radar-C and X-band Synthetic Aperture Radar (SIR-C/X-SAR) is part of NASA's Mission to Planet Earth. The radars illuminate Earth with microwaves, allowing detailed observations at any time, regardless of weather or sunlight conditions. SIR-C/X-SAR uses three microwave wavelengths: L-band (24 cm), C-band (6 cm) and X-band (3 cm). The multi-frequency data will be used by the international scientific community to better understand the global environment and how it is changing. The SIR-C/X-SAR data, complemented by aircraft and ground studies, will give scientists clearer insights into those environmental changes which are caused by nature and those changes which are induced by human activity. SIR-C was developed by NASA's Jet Propulsion Laboratory. X-SAR was developed by the Dornier and Alenia Spazio companies for the German space agency, Deutsche Agentur fuer Raumfahrtangelegenheiten (DARA), and the Italian space agency, Agenzia Spaziale Italiana (ASI), with the Deutsche Forschungsanstalt fuer Luft und Raumfahrt e.V.(DLR), the major partner in science, operations and data processing of X-SAR.

1994-01-01

321

Space Radar Image of Long Valley, California in 3-D  

NASA Technical Reports Server (NTRS)

This three-dimensional perspective view of Long Valley, California was created from data taken by the Spaceborne Imaging Radar-C/X-band Synthetic Aperture Radar on board the space shuttle Endeavour. This image was constructed by overlaying a color composite SIR-C radar image on a digital elevation map. The digital elevation map was produced using radar interferometry, a process by which radar data are acquired on different passes of the space shuttle. The two data passes are compared to obtain elevation information. The interferometry data were acquired on April 13,1994 and on October 3, 1994, during the first and second flights of the SIR-C/X-SAR instrument. The color composite radar image was taken in October and was produced by assigning red to the C-band (horizontally transmitted and vertically received) polarization; green to the C-band (vertically transmitted and received) polarization; and blue to the ratio of the two data sets. Blue areas in the image are smooth and yellow areas are rock outcrops with varying amounts of snow and vegetation. The view is looking north along the northeastern edge of the Long Valley caldera, a volcanic collapse feature created 750,000 years ago and the site of continued subsurface activity. Crowley Lake is the large dark feature in the foreground. Spaceborne Imaging Radar-C and X-band Synthetic Aperture Radar (SIR-C/X-SAR) is part of NASA's Mission to Planet Earth. The radars illuminate Earth with microwaves, allowing detailed observations at any time, regardless of weather or sunlight conditions. SIR-C/X-SAR uses three microwave wavelengths: L-band (24 cm), C-band (6 cm) and X-band (3 cm). The multi-frequency data will be used by the international scientific community to better understand the global environment and how it is changing. The SIR-C/X-SAR data, complemented by aircraft and ground studies, will give scientists clearer insights into those environmental changes which are caused by nature and those changes which are induced by human activity. SIR-C was developed by NASA's Jet Propulsion Laboratory. X-SAR was developed by the Dornier and Alenia Spazio companies for the German space agency, Deutsche Agentur fuer Raumfahrtangelegenheiten (DARA), and the Italian space agency, Agenzia Spaziale Italiana (ASI), with the Deutsche Forschungsanstalt fuer Luft und Raumfahrt e.v. (DLR), the major partner in science, operations and data processing of X-SAR.

1994-01-01

322

Space Radar Image of Karakax Valley, China 3-D  

NASA Technical Reports Server (NTRS)

This three-dimensional perspective of the remote Karakax Valley in the northern Tibetan Plateau of western China was created by combining two spaceborne radar images using a technique known as interferometry. Visualizations like this are helpful to scientists because they reveal where the slopes of the valley are cut by erosion, as well as the accumulations of gravel deposits at the base of the mountains. These gravel deposits, called alluvial fans, are a common landform in desert regions that scientists are mapping in order to learn more about Earth's past climate changes. Higher up the valley side is a clear break in the slope, running straight, just below the ridge line. This is the trace of the Altyn Tagh fault, which is much longer than California's San Andreas fault. Geophysicists are studying this fault for clues it may be able to give them about large faults. Elevations range from 4000 m (13,100 ft) in the valley to over 6000 m (19,700 ft) at the peaks of the glaciated Kun Lun mountains running from the front right towards the back. Scale varies in this perspective view, but the area is about 20 km (12 miles) wide in the middle of the image, and there is no vertical exaggeration. The two radar images were acquired on separate days during the second flight of the Spaceborne Imaging Radar-C/X-band Synthetic Aperture Radar (SIR-C/X-SAR) aboard the space shuttle Endeavour in October 1994. The interferometry technique provides elevation measurements of all points in the scene. The resulting digital topographic map was used to create this view, looking northwest from high over the valley. Variations in the colors can be related to gravel, sand and rock outcrops. This image is centered at 36.1 degrees north latitude, 79.2 degrees east longitude. Radar image data are draped over the topography to provide the color with the following assignments: Red is L-band vertically transmitted, vertically received; green is the average of L-band vertically transmitted, vertically received and C-band vertically transmitted, vertically received; and blue is C-band vertically transmitted, vertically received. SIR-C/X-SAR, a joint mission of the German, Italian and United States space agencies, is part of NASA's Mission to Planet Earth.

1994-01-01

323

Space Radar Image of Missoula, Montana in 3-D  

NASA Technical Reports Server (NTRS)

This is a three-dimensional perspective view of Missoula, Montana, created by combining two spaceborne radar images using a technique known as interferometry. Visualizations like this are useful because they show scientists the shapes of the topographic features such as mountains and valleys. This technique helps to clarify the relationships of the different types of materials on the surface detected by the radar. The view is looking north-northeast. The blue circular area at the lower left corner is a bend of the Bitterroot River just before it joins the Clark Fork, which runs through the city. Crossing the Bitterroot River is the bridge of U.S. Highway 93. Highest mountains in this image are at elevations of 2,200 meters (7,200 feet). The city is about 975 meters (3,200 feet) above sea level. The bright yellow areas are urban and suburban zones, dark brown and blue-green areas are grasslands, bright green areas are farms, light brown and purple areas are scrub and forest, and bright white and blue areas are steep rocky slopes. The two radar images were taken on successive days by the Spaceborne Imaging Radar-C/X-band Synthetic Aperture Radar (SIR-C/X-SAR) onboard the space shuttle Endeavour in October 1994. The digital elevation map was produced using radar interferometry, a process in which radar data are acquired on different passes of the space shuttle. The two data passes are compared to obtain elevation information. Radar image data are draped over the topography to provide the color with the following assignments: red is L-band vertically transmitted, vertically received; green is C-band vertically transmitted, vertically received; and blue are differences seen in the L-band data between the two days. This image is centered near 46.9 degrees north latitude and 114.1 degrees west longitude. No vertical exaggeration factor has been applied to the data. SIR-C/X-SAR, a joint mission of the German, Italian and United States space agencies, is part of NASA's Mission to Planet Earth program.

1994-01-01

324

Identifying diagonal cutter marks on thin wires using 3D imaging.  

PubMed

We present work on matching 2-mm-thick wires using optical 3D imaging methods. Marks on such small surfaces are difficult to match using a comparison microscope as this 2D imaging method does not provide height data about the sample surface. Moreover, these 2D microscopy images may be affected by illumination. Hence, the reference and investigated sample should be present at the same time. We employed scanning white light interferometry and confocal microscopy to provide quantitative 3D profiles for reliable comparison of samples that are unavailable for simultaneous analysis. We show that 3D profiling offers a solution by allowing illumination-independent sample comparison. We correctly identified 74 of 80 profiles using consecutive matching striae (CMS) criteria, and we were able to match samples based on profiles measured using different 3D imaging devices. The results suggest that the used methods allow matching cutter marks on thin wires, which has been difficult previously. PMID:24400830

Heikkinen, Ville Vili; Kassamakov, Ivan; Barbeau, Claude; Lehto, Sami; Reinikainen, Tapani; Haeggström, Edward

2014-01-01

325

3D Quantitative microwave imaging from sparsely measured data with Huber regularization  

E-print Network

Pizurica, Aleksandra

326

A real-time noise filtering strategy for photon counting 3D imaging lidar.  

PubMed

For a direct-detection 3D imaging lidar, the use of a Geiger-mode avalanche photodiode (Gm-APD) can greatly enhance the detection sensitivity of the system, since each range measurement requires only a single detected photon. Furthermore, the Gm-APD offers significant advantages in reducing the size, mass, power and complexity of the system. However, the inevitable noise, including background noise and dark count noise, remains a significant challenge to obtaining a clear 3D image of the target of interest. This paper presents a smart strategy that filters out false alarms during acquisition of the raw time-of-flight (TOF) data and obtains a clear 3D image in real time. As a result, a clear 3D image was acquired with the experimental system despite the background noise of a sunny day. PMID:23609635

Zhang, Zijing; Zhao, Yuan; Zhang, Yong; Wu, Long; Su, Jianzhong

2013-04-22

327

An adaptive 3-D discrete cosine transform coder for medical image compression.  

PubMed

In this communication, a new three-dimensional (3-D) discrete cosine transform (DCT) coder for medical images is presented. In the proposed method, a segmentation technique based on local energy magnitude is used to segment subblocks of the image into different energy levels. Subblocks with the same energy level are then gathered to form a 3-D cuboid. Finally, the 3-D DCT is applied to compress each cuboid individually. Simulation results show that the reconstructed images achieve bit rates lower than 0.25 bit per pixel even at compression ratios higher than 35. Compared with JPEG and other strategies, the proposed method yields better decoded-image quality. PMID:11026596

Tai, S C; Wu, Y G; Lin, C W

2000-09-01
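
The following minimal Python sketch illustrates the core transform step of such a coder: a 3-D DCT is applied to a cuboid of gathered sub-blocks and only the largest-magnitude coefficients are kept. The block size, the keep fraction and the random test cuboid are illustrative assumptions; the paper's energy-based sub-block grouping and entropy coding are not reproduced here.

import numpy as np
from scipy.fft import dctn, idctn

def compress_cuboid(cuboid, keep_fraction=0.05):
    """Transform an (N,N,N) cuboid with a 3-D DCT, keep only the
    largest-magnitude coefficients, and reconstruct."""
    coeffs = dctn(cuboid, norm="ortho")
    # Keep roughly the top `keep_fraction` of coefficients by magnitude.
    thresh = np.quantile(np.abs(coeffs), 1.0 - keep_fraction)
    sparse = np.where(np.abs(coeffs) >= thresh, coeffs, 0.0)
    return idctn(sparse, norm="ortho"), np.count_nonzero(sparse)

cuboid = np.random.rand(8, 8, 8)          # stand-in for gathered sub-blocks
recon, kept = compress_cuboid(cuboid)
print(kept, float(np.abs(recon - cuboid).max()))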

328

A Comparison of Similarity Measures for use in 2D-3D Medical Image Registration

Microsoft Academic Search

A comparison of six similarity measures for use in intensity-based two-dimensional to three-dimensional (2-D-3-D) image registration is presented. The accuracy of the similarity measures is compared to a …

Graeme P. Penney; Jürgen Weese; John A. Little; Paul Desmedt; Derek L. G. Hill; David J. Hawkes

1998-01-01

329

High-resolution 3D coherent laser radar imaging  

Microsoft Academic Search

High range-resolution active imaging requires high-bandwidth transmitters and receivers. At Lockheed Martin Coherent Technologies (LMCT), we are developing both linear Frequency Modulated Continuous Wave (FMCW) and short pulse laser radar sensors to supply the needed bandwidth. FMCW waveforms are advantageous in many applications, since target returns can be optically demodulated, mitigating the need for high-speed detectors and receiver electronics, enabling

Brian Krause; Philip Gatt; Carl Embry; Joseph Buck

2006-01-01

330

Computer-aided diagnostic detection system of venous beading in retinal images  

E-print Network

Accepted for publication Dec. 9, 1999. Retinal (fundus) images provide information about the blood supply and microaneurysms; the retinal images are acquired digitally or digitized by scanning slide films into digital form.

Chang, Chein-I

331

3D object recognition using kernel construction of phase wrapped images  

NASA Astrophysics Data System (ADS)

Kernel methods are effective machine learning techniques for many image-based pattern recognition problems, and incorporating 3D information is useful in such applications. Optical profilometry and interferometric techniques provide 3D information in an implicit form. Typically, a phase unwrapping process, often hindered by noise, spots of low intensity modulation, and instability of the solutions, is applied to retrieve the proper depth information. In certain applications such as pattern recognition, however, the goal is to classify the 3D objects in the image rather than to simply display or reconstruct them. In this paper we present a technique for constructing kernels on the measured data directly, without explicit phase unwrapping. Such a kernel naturally incorporates the 3D depth information and can be used to improve systems involving 3D object analysis and classification.

Zhang, Hong; Su, Hongjun

2011-06-01
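
As an illustration of building a kernel directly on wrapped-phase data, the sketch below maps each phase value to the unit circle (cosine and sine), which removes the 2*pi ambiguity, and evaluates a Gaussian kernel on that embedding. This is a generic, hedged construction for the idea of avoiding explicit phase unwrapping, not necessarily the kernel proposed in the paper; the bandwidth gamma and image sizes are arbitrary.

import numpy as np

def wrapped_phase_kernel(phi_a, phi_b, gamma=0.5):
    """Gaussian kernel between two wrapped-phase images (values in radians)."""
    ea = np.stack([np.cos(phi_a), np.sin(phi_a)], axis=-1).ravel()
    eb = np.stack([np.cos(phi_b), np.sin(phi_b)], axis=-1).ravel()
    return float(np.exp(-gamma * np.sum((ea - eb) ** 2) / ea.size))

phi1 = np.full((64, 64), np.pi - 0.01)     # phase just below +pi
phi2 = np.full((64, 64), -np.pi + 0.01)    # same physical phase, wrapped to -pi side
print(wrapped_phase_kernel(phi1, phi2))    # close to 1.0 despite the 2*pi jump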

332

[Depiction of the cranial nerves around the cavernous sinus by 3D reversed FISP with diffusion weighted imaging (3D PSIF-DWI)].  

PubMed

To evaluate the anatomy of cranial nerves running in and around the cavernous sinus, we employed three-dimensional reversed fast imaging with steady-state precession (FISP) with diffusion weighted imaging (3D PSIF-DWI) on 3-T magnetic resonance (MR) system. After determining the proper parameters to obtain sufficient resolution of 3D PSIF-DWI, we collected imaging data of 20-side cavernous regions in 10 normal subjects. 3D PSIF-DWI provided high contrast between the cranial nerves and other soft tissues, fluid, and blood in all subjects. We also created volume-rendered images of 3D PSIF-DWI and anatomically evaluated the reliability of visualizing optic, oculomotor, trochlear, trigeminal, and abducens nerves on 3D PSIF-DWI. All 20 sets of cranial nerves were visualized and 12 trochlear nerves and 6 abducens nerves were partially identified. We also presented preliminary clinical experiences in two cases with pituitary adenomas. The anatomical relationship between the tumor and cranial nerves running in and around the cavernous sinus could be three-dimensionally comprehended by 3D PSIF-DWI and the volume-rendered images. In conclusion, 3D PSIF-DWI has great potential to provide high resolution "cranial nerve imaging", which visualizes the whole length of the cranial nerves including the parts in the blood flow as in the cavernous sinus region. PMID:21972184

Ishida, Go; Oishi, Makoto; Jinguji, Shinya; Yoneoka, Yuichiro; Sato, Mitsuya; Fujii, Yukihiko

2011-10-01

333

Real-time 3D ultrasound imaging on a next-generation media processor  

NASA Astrophysics Data System (ADS)

3D ultrasound (US) provides physicians with a better understanding of human anatomy. By manipulating the 3D US data set, physicians can observe the anatomy in 3D from a number of different view directions and obtain 2D US images that would not be possible to directly acquire with the US probe. In order for 3D US to be in widespread clinical use, creation and manipulation of the 3D US data should be done at interactive times. This is a challenging task due to the large amount of data to be processed. Our group previously reported interactive 3D US imaging using a programmable mediaprocessor, Texas Instruments TMS320C80, which has been in clinical use. In this work, we present the algorithms we have developed for real-time 3D US using a newer and more powerful mediaprocessor, called MAP-CA. MAP-CA is a very long instruction word (VLIW) processor developed for multimedia applications. It has multiple execution units, a 32-kbyte data cache and a programmable DMA controller called the data streamer (DS). A forward mapping 6 DOF (for a freehand 3D US system based on magnetic position sensor for tracking the US probe) reconstruction algorithm with zero- order interpolation is achieved in 11.8 msec (84.7 frame/sec) per 512x512 8-bit US image. For 3D visualization of the reconstructed 3D US data sets, we used volume rendering and in particular the shear-warp factorization with the maximum intensity projection (MIP) rendering. 3D visualization is achieved in 53.6 msec (18.6 frames/sec) for a 128x128x128 8-bit volume and in 410.3 msec (2.4 frames/sec) for a 256x256x256 8-bit volume.

Pagoulatos, Niko; Noraz, Frederic; Kim, Yongmin

2001-05-01
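
The maximum intensity projection used for visualization in this work can be sketched in a few lines; the random volume below is a stand-in, and the shear-warp factorization and mediaprocessor-specific optimizations are not shown.

import numpy as np

def mip(volume, axis=0):
    """Project a 3-D volume to a 2-D image by taking the maximum along `axis`."""
    return volume.max(axis=axis)

volume = np.random.randint(0, 256, size=(128, 128, 128), dtype=np.uint8)
image = mip(volume, axis=2)   # one of the three orthogonal MIP views
print(image.shape)            # (128, 128)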

334

Auto-adjusted 3-D optic disk viewing from low-resolution stereo fundus image  

Microsoft Academic Search

Three-dimensional (3-D) visualization of the optic nerve head (optic disk) is very useful for clinical applications. It allows clinicians to measure the disk parameters more accurately and thus make the pathological diagnosis and progression monitoring easier. This paper describes an automatic, precise, 3-D optic nerve head reconstruction method from a pair of stereo images for which efficient steps including sparse-image

Juan Xu; Opas Chutatape

2006-01-01

335

A 3-D visualization method for image-guided brain surgery  

Microsoft Academic Search

This paper deals with a 3D methodology for brain tumor image-guided surgery. The methodology is based on development of a visualization process that mimics the human surgeon behavior and decision-making. In particular, it originally constructs a 3D representation of a tumor by using the segmented version of the 2D MRI images. Then it develops an optimal path for the tumor

Nikolaos G. Bourbakis; Mariette Awad

2003-01-01

336

Application of Medical Imaging Software to 3D Visualization of Astronomical Data  

Microsoft Academic Search

The AstroMed project at Harvard University's Initiative in Innovative Computing (IIC) is working on improved visualization and data-sharing solutions that are applicable to the fields of both astronomy and medicine. The current focus is on the application of medical image visualization and analysis techniques to three-dimensional (3D) astronomical data. The 3D Slicer and OsiriX medical imaging tools have been used

M. Borkin; A. Goodman; M. Halle; D. Alan

2007-01-01

337

Time-efficient computations for topological functions in 3D images  

Microsoft Academic Search

An important issue in 3D image processing is the identification of points (voxels) which could be altered while leaving the topology unchanged-such points are referred to as simple points. We need time-efficient algorithms for identifying such points, since such computations are typically evoked over many iterations on large 3D images. We report new very fast algorithms for computing functions which

Richard W. Hall; Chih-yuan Hu

1995-01-01

338

Portable, low-priced retinal imager for eye disease screening  

NASA Astrophysics Data System (ADS)

The objective of this project was to develop and demonstrate a portable, low-priced, easy-to-use non-mydriatic retinal camera for eye disease screening in underserved urban and rural locations. Existing portable retinal imagers do not meet the requirements of a low-cost camera with sufficient technical capabilities (field of view, image quality, portability, battery power, and ease of use) to be distributed widely to low-volume clinics, such as the offices of single primary care physicians serving rural communities or other economically stressed healthcare facilities. Our approach for Smart i-Rx is based primarily on a significant departure from current generations of desktop and hand-held commercial retinal cameras, as well as those under development. Our techniques include: 1) exclusive use of off-the-shelf components; 2) integration of the retinal imaging device into a low-cost, high-utility camera mount and chin rest; 3) a unique optical and illumination design for a small form factor; 4) exploitation of the autofocus technology built into present consumer digital SLR cameras; and 5) integration of a polarization technique to avoid the corneal reflex. In a prospective study, 41 out of 44 diabetics were imaged successfully. No imaging was attempted on three of the subjects due to noticeably small pupils (less than 2 mm). The images were of sufficient quality to detect abnormalities related to diabetic retinopathy, such as microaneurysms and exudates. These images were compared with ones taken non-mydriatically with a Canon CR-1 Mark II camera. No cases identified as having DR by expert retinal graders were missed in the Smart i-Rx images.

Soliz, Peter; Nemeth, Sheila; VanNess, Richard; Barriga, E. S.; Zamora, Gilberto

2014-02-01

339

Regularized Estimation of Retinal Vascular Oxygen Tension From Phosphorescence Images  

PubMed Central

The level of retinal oxygenation is potentially an important cue to the onset or presence of some common retinal diseases. An improved method for assessing oxygen tension in retinal blood vessels from phosphorescence lifetime imaging data is reported in this paper. The optimum estimate for phosphorescence lifetime and oxygen tension is obtained by regularizing the least-squares (LS) method. The estimation method is implemented with an iterative algorithm to minimize a regularized LS cost function. The effectiveness of the proposed method is demonstrated by applying it to simulated data as well as image data acquired from rat retinas. The method is shown to yield estimates that are robust to noise and whose variance is lower than that obtained with the classical LS method. PMID:19389690

Ansari, Rashid; Wanek, Justin; Yetik, Imam Samil; Shahidi, Mahnaz

2010-01-01
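
A minimal sketch of the general approach follows, assuming a mono-exponential phosphorescence decay, a simple ridge (Tikhonov) regularizer and illustrative Stern-Volmer calibration constants (tau0, kq); the paper's exact cost function and iterative algorithm are not reproduced.

import numpy as np

def fit_lifetime(t, intensity, lam=1e-3):
    """Ridge-regularized linear fit of log(I) = log(A) - t / tau."""
    y = np.log(intensity)
    X = np.column_stack([np.ones_like(t), -t])      # unknowns: [log A, 1/tau]
    A = X.T @ X + lam * np.eye(2)                   # regularized normal equations
    b = X.T @ y
    logA, inv_tau = np.linalg.solve(A, b)
    return 1.0 / inv_tau                            # lifetime tau

def oxygen_tension(tau, tau0=600e-6, kq=300.0):     # hypothetical calibration constants
    """Stern-Volmer relation: 1/tau = 1/tau0 + kq * pO2."""
    return (1.0 / tau - 1.0 / tau0) / kq

t = np.linspace(0, 2e-3, 50)                        # seconds
true_tau = 250e-6
data = np.exp(-t / true_tau)                        # noiseless decay for the demo
print(oxygen_tension(fit_lifetime(t, data)))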

340

Anesthesiology training using 3D imaging and virtual reality  

NASA Astrophysics Data System (ADS)

Current training for regional nerve block procedures by anesthesiology residents requires expert supervision and the use of cadavers; both of which are relatively expensive commodities in today's cost-conscious medical environment. We are developing methods to augment and eventually replace these training procedures with real-time and realistic computer visualizations and manipulations of the anatomical structures involved in anesthesiology procedures, such as nerve plexus injections (e.g., celiac blocks). The initial work is focused on visualizations: both static images and rotational renderings. From the initial results, a coherent paradigm for virtual patient and scene representation will be developed.

Blezek, Daniel J.; Robb, Richard A.; Camp, Jon J.; Nauss, Lee A.

1996-04-01

341

3D segmentation and image annotation for quantitative diagnosis in lung CT images with pulmonary lesions  

NASA Astrophysics Data System (ADS)

Pulmonary nodules and ground glass opacities are highly significant findings in high-resolution computed tomography (HRCT) of patients with pulmonary lesions. The appearance of pulmonary nodules and ground glass opacities is related to different lung diseases. According to the corresponding characteristics of a lesion, appropriate segmentation methods and quantitative analysis are helpful for controlling and treating diseases at an earlier and potentially more curable stage. Currently, most studies have focused on two-dimensional quantitative analysis of these kinds of diseases. Compared to two-dimensional images, three-dimensional quantitative analysis can take full advantage of the isotropic image data acquired by thin-slice HRCT and has better quantitative precision for clinical diagnosis. This presentation describes a computer-aided diagnosis component that segments 3D disease areas of nodules and ground glass opacities in lung CT images and uses AIML (Annotation and Image Markup Language) to annotate the segmented 3D pulmonary lesions with quantitative measurements, which may provide more features and information to radiologists in clinical diagnosis.

Li, Suo; Zhu, Yanjie; Sun, Jianyong; Zhang, Jianguo

2013-03-01

342

A physics-based coordinate transformation for 3-D image matching  

Microsoft Academic Search

Many image matching schemes are based on mapping coordinate locations, such as the locations of landmarks, in one image to corresponding locations in a second image. A new approach to this mapping (coordinate transformation), called the elastic body spline (EBS), is described. The spline is based on a physical model of a homogeneous, isotropic three-dimensional (3-D) elastic body. The model

Malcolm H. Davis; Alireza Khotanzad; Duane P. Flamig; Steven E. Harms

1997-01-01

343

Focal Cortical Dysplasia Segmentation in 3D Magnetic Resonance Images of the Human Brain  

Microsoft Academic Search

In this work we present an image processing pipeline for automatic segmentation of focal cortical dysplasia lesions in 3D magnetic resonance images of the human brain. Dysplasia lesions are a common cause of refractory epilepsy, especially in children, and their treatment often involves surgical intervention. To achieve this pipeline we developed several new image processing techniques, procedures and

Felipe P. G. Bergo; Alexandre X. Falcão

344

Rigid Registration of Freehand 3D Ultrasound and CT-Scan Kidney Images  

E-print Network

A study was carried out [3] in which the kidney surface, segmented from CT and localized US images, was registered; the aim is registration of the kidney onto a high-quality abdominal CT volume. The final algorithm uses image preprocessing in both

Boyer, Edmond

345

Non-rigid 2D-3D Medical Image Registration using Markov Random Fields  

E-print Network

Image-guided surgeries, such as laparoscopic or endoscopic procedures [1] and brain surgeries [2], use such images: pre-operative 3D data and intra-operative 2D images are used to guide surgeons during the procedure. 2D-3D registration plays an important role here, as do the nonrigid registration algorithms used to register histological section images to human brain MRI. A feature

Paris-Sud XI, Université de

346

Adaptive Multiresolution Denoising Filter for 3D MR Images

E-print Network

Pierrick Coupé, José V. Manjón, Montserrat Robles, D. Louis Collins (McConnell Brain Imaging Centre, Montréal Neurological Institute). In MR imaging, denoising is an important issue in some clinical uses. Recently a new filter has

Boyer, Edmond

347

Enhanced 3D Perception using Super-Resolution and Saturation Control Techniques for Solar Images  

Microsoft Academic Search

Anaglyphs are an interesting way of generating stereoscopic images, especially in a cost-efficient and technically simple way. An anaglyph is generated by combining stereo pair of images for left and right scenes with appropriate offset with respect to each other, where each image is shown using a different color in order to reflect the 3D effect for the users who

348

3D Computation of Gray Level Co-occurrence in Hyperspectral Image Cubes  

E-print Network

The gray level co-occurrence matrix (GLCM) is extended to a three-dimensional form and applied to two remote sensing hyperspectral image cubes, comparing its performance with the conventional GLCM. The objective was to treat the hyperspectral image

Tsai, Fuan "Alfonso"

349

3D optical microscopy method based on synthetic aperture integral imaging  

NASA Astrophysics Data System (ADS)

In this paper, we propose a 3D optical microscopy system based on synthetic aperture integral imaging (SAII) and apply it to a surface extraction application. In the proposed system, the micro-object is optically magnified in an optical microscope and the elemental images of the magnified micro-object are recorded using the SAII sensing method. The recorded elemental images are used to reconstruct a set of 3D slice images with a computational reconstruction algorithm based on ray back-projection. Surface extraction of the micro-object is obtained from depth estimates computed by a block matching algorithm between the elemental images and the set of reconstructed 3D slice images. The longitudinal and lateral resolutions are analyzed for the proposed system. To demonstrate the system, we carry out preliminary experiments on a micro-object and present the results.

Lee, Joon-Jae; Shin, Donghak; Lee, Byung-Gook; Yoo, Hoon

2012-12-01

350

A Molecular Image-directed, 3D Ultrasound-guided Biopsy System for the Prostate  

PubMed Central

Systematic transrectal ultrasound (TRUS)-guided biopsy is the standard method for a definitive diagnosis of prostate cancer. However, this biopsy approach uses two-dimensional (2D) ultrasound images to guide biopsy and can miss up to 30% of prostate cancers. We are developing a molecular image-directed, three-dimensional (3D) ultrasound image-guided biopsy system for improved detection of prostate cancer. The system consists of a 3D mechanical localization system and a software workstation for image segmentation, registration, and biopsy planning. In order to plan biopsy in a 3D prostate, we developed an automatic segmentation method based on the wavelet transform. In order to incorporate PET/CT images into ultrasound-guided biopsy, we developed image registration methods to fuse TRUS and PET/CT images. The segmentation method was tested in ten patients with a DICE overlap ratio of 92.4% +/- 1.1%. The registration method has been tested in phantoms. The biopsy system was tested in prostate phantoms and 3D ultrasound images were acquired from two human patients. We are integrating the system for PET/CT directed, 3D ultrasound-guided, targeted biopsy in human patients. PMID:22708023

Fei, Baowei; Schuster, David M.; Master, Viraj; Akbari, Hamed; Fenster, Aaron; Nieh, Peter

2012-01-01
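
The DICE overlap ratio quoted above can be computed from two binary masks as in the short sketch below (the mask shapes and contents are illustrative).

import numpy as np

def dice(seg, ref):
    """DICE = 2*|A and B| / (|A| + |B|) for two boolean masks."""
    seg, ref = seg.astype(bool), ref.astype(bool)
    inter = np.logical_and(seg, ref).sum()
    return 2.0 * inter / (seg.sum() + ref.sum())

a = np.zeros((64, 64, 64), bool); a[10:40, 10:40, 10:40] = True
b = np.zeros_like(a);             b[12:42, 10:40, 10:40] = True
print(round(dice(a, b), 3))        # overlap ratio between the two masks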

351

Digital breast tomosynthesis image reconstruction using 2D and 3D total variation minimization  

PubMed Central

Background Digital breast tomosynthesis (DBT) is an emerging imaging modality which produces three-dimensional radiographic images of the breast. DBT reconstructs tomographic images from a limited view angle, so the data acquired from DBT are not sufficient to reconstruct an exact image. It has been shown that a sparse image can be reconstructed from highly undersampled data via compressed sensing (CS) techniques. This can be done by minimizing the l1 norm of the gradient of the image, which can also be defined as total variation (TV) minimization. In the tomosynthesis imaging problem, this idea has been utilized by minimizing the total variation of images reconstructed by the algebraic reconstruction technique (ART). Previous studies have largely addressed 2-dimensional (2D) TV minimization and only a few have mentioned 3-dimensional (3D) TV minimization; however, a quantitative analysis of 2D and 3D TV minimization with ART in DBT imaging has not been performed. Methods In this paper two DBT image reconstruction algorithms with total variation minimization have been developed and a comprehensive quantitative comparison of these two methods and ART has been carried out. The first method is ART + TV2D, where TV is applied to each slice independently. The other method is ART + TV3D, in which TV is applied by formulating the minimization problem in 3D, considering all slices. Results A 3D phantom which roughly simulates a breast tomosynthesis image was designed to evaluate the performance of the methods both quantitatively and qualitatively in terms of visual assessment, structural similarity (SSIM), root mean square error (RMSE) of a specific layer of interest (LOI), and total error values. Both methods show superior results in reducing out-of-focus slice blur compared to ART. Conclusions Computer simulations show that the ART + TV3D method substantially enhances the reconstructed image with fewer artifacts and smaller error rates than the other two algorithms under the same configuration and parameters, and it provides a faster convergence rate. PMID:24172584

2013-01-01
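
The TV-minimization half of such ART + TV schemes can be sketched as a few steepest-descent steps on a smoothed 3D total variation; the step size, iteration count and random test volume below are illustrative assumptions, and the ART data-consistency updates are not shown.

import numpy as np

def tv_gradient(vol, eps=1e-8):
    """Gradient of the smoothed 3-D total variation sum(sqrt(|grad f|^2 + eps))."""
    grads = np.gradient(vol)                          # finite differences along z, y, x
    norm = np.sqrt(sum(g ** 2 for g in grads) + eps)
    div = sum(np.gradient(g / norm, axis=i) for i, g in enumerate(grads))
    return -div                                       # d(TV)/d(vol) = -div(grad f / |grad f|)

def tv_descent(vol, n_steps=20, step=0.1):
    """A few steepest-descent steps that reduce the total variation of `vol`."""
    for _ in range(n_steps):
        vol = vol - step * tv_gradient(vol)
    return vol

noisy = np.random.rand(32, 32, 16)                    # stand-in for ART-reconstructed slices
smoothed = tv_descent(noisy)
print(np.abs(np.gradient(noisy)).sum(), np.abs(np.gradient(smoothed)).sum())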

352

Digital holography particle image velocimetry for the measurement of 3D t-3c flows  

NASA Astrophysics Data System (ADS)

In this paper a digital in-line holographic recording and reconstruction system was set up and used for particle image velocimetry of 3D t-3c flows (three-component (3c) velocity vector field measurements in a three-dimensional (3D) space with time history (t)), forming a new full-flow-field experimental technique: digital holographic particle image velocimetry (DHPIV). The traditional holographic film was replaced by a CCD chip that records the interference fringes instantaneously and directly, without darkroom processing, and virtual image slices at different positions were reconstructed computationally from the digital hologram using the Fresnel-Kirchhoff integral method. A complex-field signal filter (an analyzing image calculated from the intensity and phase of the real and imaginary parts in the fast Fourier transform (FFT)) was applied in the image reconstruction to achieve a thin focus depth of the image field, which strongly affects the resolution of the vertical velocity component. Using frame-straddling CCD techniques, the 3c velocity vector was computed by 3D cross-correlation through space interrogation block matching across the reconstructed image slices with the digital complex-field signal filter. The 3D-3c velocity field (about 20 000 vectors), the 3D streamline and 3D vorticity fields, and time-evolution movies (30 fields/s) for the 3D t-3c flows were then displayed from the experimental measurements using this DHPIV method and its techniques.

Shen, Gongxin; Wei, Runjie

2005-10-01

353

Reconstructing photorealistic 3D models from image sequence using domain decomposition method  

NASA Astrophysics Data System (ADS)

In the fields of industrial design, artistic design and heritage conservation, physical objects are usually digitalized by reverse engineering through 3D scanning. Structured light and photogrammetry are the two main methods to acquire 3D information, and both are expensive; even when such expensive instruments are used, photorealistic 3D models are seldom available. In this paper, a new method to reconstruct photorealistic 3D models using a single camera is proposed. A square plate glued with coded marks is used to place the objects, and a sequence of about 20 images is taken. From the coded marks, the images are calibrated, and a snake algorithm is used to segment the object from the background. A rough 3D model is obtained using a shape-from-silhouette algorithm. The silhouettes are decomposed into a combination of convex curves, which are used to partition the rough 3D model into convex mesh patches. For each patch, the multi-view photo-consistency constraints and smoothness regularizations are expressed as a finite element formulation, which can be resolved locally, with information exchanged along the patch boundaries. The rough model is deformed into a fine 3D model through such a domain decomposition finite element method. Textures are assigned to each element mesh, and a photorealistic 3D model is finally obtained. A toy pig is used to verify the algorithm, and the result is encouraging.

Xiong, Hanwei; Pan, Ming; Zhang, Xiangwei

2009-11-01

354

3D digital stereophotogrammetry: a practical guide to facial image acquisition  

PubMed Central

The use of 3D surface imaging technology is becoming increasingly common in craniofacial clinics and research centers. Due to fast capture speeds and ease of use, 3D digital stereophotogrammetry is quickly becoming the preferred facial surface imaging modality. These systems can serve as an unparalleled tool for craniofacial surgeons, providing an objective digital archive of the patient's face without exposure to radiation. Acquiring consistent high-quality 3D facial captures requires planning and knowledge of the limitations of these devices. Currently, there are few resources available to help new users of this technology with the challenges they will inevitably confront. To address this deficit, this report will highlight a number of common issues that can interfere with the 3D capture process and offer practical solutions to optimize image quality. PMID:20667081

2010-01-01

355

Evaluation of image compression for computer-aided diagnosis of breast tumors in 3D sonography  

NASA Astrophysics Data System (ADS)

Medical imaging examinations form the basis for physicians diagnosing diseases, as evidenced by the increasing use of digital medical images in picture archiving and communication systems (PACS). However, with enlarged medical image databases and rapid growth of patients' case reports, PACS requires image compression to accelerate the image transmission rate and conserve disk space, diminishing implementation costs. For this purpose, JPEG and JPEG2000 have been accepted as legal formats for digital imaging and communications in medicine (DICOM), and high compression ratios are considered useful for medical imagery. This study therefore evaluates the compression ratios of the JPEG and JPEG2000 standards for computer-aided diagnosis (CAD) of breast tumors in 3-D medical ultrasound (US) images. The 3-D US data sets are compressed at various compression ratios using the two image compression standards. The reconstructed data sets are then diagnosed by a previously proposed CAD system, and the diagnostic accuracy is measured by receiver operating characteristic (ROC) analysis; that is, ROC curves are used to compare the diagnostic performance of two or more sets of reconstructed images. The analysis provides a comparison of the compression ratios achieved by JPEG and JPEG2000 for 3-D US images and indicates the bit rates that can be used for 3-D breast US images.

Chen, We-Min; Huang, Yu-Len; Tao, Chi-Chuan; Chen, Dar-Ren; Moon, Woo-Kyung

2006-03-01

356

High speed detection of retinal blood vessels in fundus image using phase congruency  

Microsoft Academic Search

Detection of blood vessels in a retinal fundus image is the preliminary step to diagnosing several retinal diseases. There exist several methods to automatically detect blood vessels from retinal images with the aid of different computational methods. However, all these methods require lengthy processing time. The method proposed here acquires binary vessels from an RGB retinal fundus image in almost real time.

M. Ashraful Amin; Hong Yan

2011-01-01

357

Deformation analysis of 3D tagged cardiac images using an optical flow method  

PubMed Central

Background This study proposes and validates a method of measuring 3D strain in myocardium using a 3D Cardiovascular Magnetic Resonance (CMR) tissue-tagging sequence and a 3D optical flow method (OFM). Methods Initially, a 3D tag MR sequence was developed and the parameters of the sequence and 3D OFM were optimized using phantom images with simulated deformation. This method then was validated in-vivo and utilized to quantify normal sheep left ventricular functions. Results Optimizing imaging and OFM parameters in the phantom study produced sub-pixel root-mean square error (RMS) between the estimated and known displacements in the x (RMSx = 0.62 pixels (0.43 mm)), y (RMSy = 0.64 pixels (0.45 mm)) and z (RMSz = 0.68 pixels (1 mm)) direction, respectively. In-vivo validation demonstrated excellent correlation between the displacement measured by manually tracking tag intersections and that generated by 3D OFM (R ≥ 0.98). Technique performance was maintained even with 20% Gaussian noise added to the phantom images. Furthermore, 3D tracking of 3D cardiac motions resulted in a 51% decrease in in-plane tracking error as compared to 2D tracking. The in-vivo function studies showed that maximum wall thickening was greatest in the lateral wall, and increased from both apex and base towards the mid-ventricular region. Regional deformation patterns are in agreement with previous studies on LV function. Conclusion A novel method was developed to measure 3D LV wall deformation rapidly with high in-plane and through-plane resolution from one 3D cine acquisition. PMID:20353600

2010-01-01
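
The per-component RMS displacement error used in the phantom validation can be computed as in the sketch below; the array shapes and the simulated "known" field are illustrative.

import numpy as np

def rms_error(estimated, reference):
    """RMS of (estimated - reference) for each displacement component."""
    diff = estimated - reference                     # shape (..., 3): x, y, z components
    return np.sqrt(np.mean(diff ** 2, axis=tuple(range(diff.ndim - 1))))

est = np.random.rand(64, 64, 16, 3)                  # estimated displacement field
ref = est + 0.5 * np.random.randn(*est.shape)        # stand-in for the known field
print(rms_error(est, ref))                           # RMSx, RMSy, RMSz in pixels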

358

The pulsed all fiber laser application in the high-resolution 3D imaging LIDAR system  

NASA Astrophysics Data System (ADS)

An all-fiber laser with a master-oscillator power-amplifier (MOPA) configuration at 1064 nm/1550 nm for a high-resolution three-dimensional (3D) imaging light detection and ranging (LIDAR) system is reported. The pulse width and repetition frequency can be tuned arbitrarily over 1 ns to 10 ns and 10 kHz to 1 MHz, and a peak power exceeding 100 kW can be obtained. Using this all-fiber laser in the high-resolution 3D imaging LIDAR system, an image resolution of 1024x1024 and a range precision of +/-1.5 cm were obtained at an imaging distance of 1 km.

Gao, Cunxiao; Zhu, Shaolan; Niu, Linquan; Feng, Li; He, Haodong; Cao, Zongying

2014-05-01

359

Simultaneous whole-animal 3D-imaging of neuronal activity using light field microscopy  

E-print Network

3D functional imaging of neuronal activity in entire organisms at single cell level and physiologically relevant time scales faces major obstacles due to trade-offs between the size of the imaged volumes, and spatial and temporal resolution. Here, using light-field microscopy in combination with 3D deconvolution, we demonstrate intrinsically simultaneous volumetric functional imaging of neuronal population activity at single neuron resolution for an entire organism, the nematode Caenorhabditis elegans. The simplicity of our technique and possibility of the integration into epi-fluoresence microscopes makes it an attractive tool for high-speed volumetric calcium imaging.

Prevedel, R; Hoffmann, M; Pak, N; Wetzstein, G; Kato, S; Schrödel, T; Raskar, R; Zimmer, M; Boyden, E S; Vaziri, A

2014-01-01

360

Segmentation of vertebral bodies in CT and MR images based on 3D deterministic models  

NASA Astrophysics Data System (ADS)

The evaluation of vertebral deformations is of great importance in clinical diagnostics and therapy of pathological conditions affecting the spine. Although modern clinical practice is oriented towards the computed tomography (CT) and magnetic resonance (MR) imaging techniques, as they can provide a detailed 3D representation of vertebrae, the established methods for the evaluation of vertebral deformations still provide only a two-dimensional (2D) geometrical description. Segmentation of vertebrae in 3D may therefore not only improve their visualization, but also provide reliable and accurate 3D measurements of vertebral deformations. In this paper we propose a method for 3D segmentation of individual vertebral bodies that can be performed in CT and MR images. Initialized with a single point inside the vertebral body, the segmentation is performed by optimizing the parameters of a 3D deterministic model of the vertebral body to achieve the best match of the model to the vertebral body in the image. The performance of the proposed method was evaluated on five CT (40 vertebrae) and five T2-weighted MR (40 vertebrae) spine images, among them five are normal and five are pathological. The results show that the proposed method can be used for 3D segmentation of vertebral bodies in CT and MR images and that the proposed model can describe a variety of vertebral body shapes. The method may be therefore used for initializing whole vertebra segmentation or reliably describing vertebral body deformations.

Štern, Darko; Vrtovec, Tomaž; Pernuš, Franjo; Likar, Boštjan

2011-03-01

361

Simulation of a new 3D imaging sensor for identifying difficult military targets  

NASA Astrophysics Data System (ADS)

This paper reports the successful application of automatic target recognition and identification (ATR/I) algorithms to simulated 3D imagery of 'difficult' military targets. QinetiQ and Selex S&AS are engaged in a joint programme to build a new 3D laser imaging sensor for UK MOD. The sensor is a 3D flash system giving an image containing range and intensity information suitable for targeting operations from fast jet platforms, and is currently being integrated with an ATR/I suite for demonstration and testing. The sensor has been extensively modelled and a set of high fidelity simulated imagery has been generated using the CAMEO-SIM scene generation software tool. These include a variety of different scenarios (varying range, platform altitude, target orientation and environments), and some 'difficult' targets such as concealed military vehicles. The ATR/I algorithms have been tested on this image set and their performance compared to 2D passive imagery from the airborne trials using a Wescam MX-15 infrared sensor and real-time ATR/I suite. This paper outlines the principles behind the sensor model and the methodology of 3D scene simulation. An overview of the 3D ATR/I programme and algorithms is presented, and the relative performance of the ATR/I against the simulated image set is reported. Comparisons are made to the performance of typical 2D sensors, confirming the benefits of 3D imaging for targeting applications.

Harvey, Christophe; Wood, Jonathan; Randall, Peter; Watson, Graham; Smith, Gordon

2008-04-01

362

3D temporal subtraction on multislice CT images using nonlinear warping technique  

NASA Astrophysics Data System (ADS)

The detection of very subtle lesions and/or lesions overlapped with vessels on CT images is a time consuming and difficult task for radiologists. In this study, we have developed a 3D temporal subtraction method to enhance interval changes between previous and current multislice CT images based on a nonlinear image warping technique. Our method provides a subtraction CT image which is obtained by subtraction of a previous CT image from a current CT image. Reduction of misregistration artifacts is important in the temporal subtraction method. Therefore, our computerized method includes global and local image matching techniques for accurate registration of current and previous CT images. For global image matching, we selected the corresponding previous section image for each current section image by using 2D cross-correlation between a blurred low-resolution current CT image and a blurred previous CT image. For local image matching, we applied the 3D template matching technique with translation and rotation of volumes of interests (VOIs) which were selected in the current and the previous CT images. The local shift vector for each VOI pair was determined when the cross-correlation value became the maximum in the 3D template matching. The local shift vectors at all voxels were determined by interpolation of shift vectors of VOIs, and then the previous CT image was nonlinearly warped according to the shift vector for each voxel. Finally, the warped previous CT image was subtracted from the current CT image. The 3D temporal subtraction method was applied to 19 clinical cases. The normal background structures such as vessels, ribs, and heart were removed without large misregistration artifacts. Thus, interval changes due to lung diseases were clearly enhanced as white shadows on subtraction CT images.

Ishida, Takayuki; Katsuragawa, Shigehiko; Kawashita, Ikuo; Kim, Hyounseop; Itai, Yoshinori; Awai, Kazuo; Li, Qiang; Doi, Kunio

2007-03-01
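
The local matching step can be sketched as a brute-force search that maximizes normalized cross-correlation between a VOI in the current volume and shifted VOIs in the previous volume; the VOI size and search range below are illustrative, and the rotation search, shift-vector interpolation and nonlinear warping steps are not shown.

import numpy as np

def ncc(a, b):
    """Normalized cross-correlation between two equally sized blocks."""
    a = a - a.mean(); b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum()) + 1e-12
    return float((a * b).sum() / denom)

def local_shift(current, previous, center, half=8, search=4):
    """Best integer (dz, dy, dx) aligning the previous VOI to the current one."""
    z, y, x = center
    cur = current[z-half:z+half, y-half:y+half, x-half:x+half]
    best, best_shift = -2.0, (0, 0, 0)
    for dz in range(-search, search + 1):
        for dy in range(-search, search + 1):
            for dx in range(-search, search + 1):
                prev = previous[z+dz-half:z+dz+half,
                                y+dy-half:y+dy+half,
                                x+dx-half:x+dx+half]
                score = ncc(cur, prev)
                if score > best:
                    best, best_shift = score, (dz, dy, dx)
    return best_shift, best

cur = np.random.rand(64, 64, 64)
prev = np.roll(cur, (2, -1, 3), axis=(0, 1, 2))      # simulate a rigid local shift
print(local_shift(cur, prev, center=(32, 32, 32)))   # recovers (2, -1, 3)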

363

Compressed Sensing Reconstruction for Whole-Heart Imaging with 3D Radial Trajectories: A GPU Implementation  

PubMed Central

A disadvantage of 3D isotropic acquisition in whole-heart coronary MRI is the prolonged data acquisition time. Isotropic 3D radial trajectories allow undersampling of k-space data in all three spatial dimensions, enabling accelerated acquisition of the volumetric data. Compressed sensing (CS) reconstruction can provide further acceleration in the acquisition by removing the incoherent artifacts due to undersampling and improving the image quality. However, the heavy computational overhead of the CS reconstruction has been a limiting factor for its application. In this paper, a parallelized implementation of an iterative CS reconstruction method for 3D radial acquisitions using a commercial graphics processing unit (GPU) is presented. The execution time of the GPU-implemented CS reconstruction was compared with that of the C++ implementation and the efficacy of the undersampled 3D radial acquisition with CS reconstruction was investigated in both phantom and whole-heart coronary data sets. Subsequently, the efficacy of CS in suppressing streaking artifacts in 3D whole-heart coronary MRI with 3D radial imaging and its convergence properties were studied. The CS reconstruction provides improved image quality (in terms of vessel sharpness and suppression of noise-like artifacts) compared with the conventional 3D gridding algorithm and the GPU implementation greatly reduces the execution time of CS reconstruction yielding 3454 times speed-up compared with C++ implementation. PMID:22392604

Nam, Seunghoon; Akçakaya, Mehmet; Basha, Tamer; Stehning, Christian; Manning, Warren J.; Tarokh, Vahid; Nezafat, Reza

2012-01-01
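
The flavor of iterative CS reconstruction can be conveyed with a generic iterative soft-thresholding (ISTA) solver for an undersampled linear system with a sparse unknown; this is a hedged, generic sketch rather than the paper's 3D radial gridding or GPU implementation, and the sensing matrix and parameters are illustrative.

import numpy as np

def ista(A, y, lam=0.05, n_iter=200):
    """Minimize 0.5*||A x - y||^2 + lam*||x||_1 by proximal gradient steps."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2            # 1 / Lipschitz constant of A^T A
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)
        z = x - step * grad
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft threshold
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((128, 256)) / np.sqrt(128)    # undersampled measurement operator
x_true = np.zeros(256); x_true[rng.choice(256, 10, replace=False)] = 1.0
y = A @ x_true
print(np.linalg.norm(ista(A, y) - x_true))            # small residual: sparse x recovered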

364

Study on Construction of 3d Building Based on Uav Images  

NASA Astrophysics Data System (ADS)

Based on the characteristics of an Unmanned Aerial Vehicle (UAV) system for low-altitude aerial photogrammetry and the needs of three-dimensional (3D) city modeling, a method for fast 3D building modeling using images from a UAV carrying a four-combined camera is studied. Firstly, by contrasting and analyzing the mosaic structures of existing four-combined cameras, a new type of four-combined camera with a special design of image overlap is developed; it improves the self-calibration function to achieve high-precision imaging by automatically eliminating the errors of mechanical deformation and the time lag at every exposure, and it further reduces the weight of the imaging system. Secondly, multi-angle images, including vertical and oblique images obtained by the UAV system, are used for detailed measurement of building surfaces and for texture extraction. Finally, two tests, aerial photography with large-scale mapping of 1:1000 and 3D building construction at Shandong University of Science and Technology, and aerial photography with large-scale mapping of 1:500 and 3D building construction at Henan University of Urban Construction, provide validation of 3D building construction based on combined wide-angle camera images from a UAV system. It is demonstrated that the UAV system for low-altitude aerial photogrammetry can be used for 3D building production, and the technical solution in this paper offers a new, fast plan for 3D expression of the city landscape, fine modeling and visualization.

Xie, F.; Lin, Z.; Gui, D.; Lin, H.

2012-07-01

365

Comparison of retinal image quality with spherical and customized aspheric  

E-print Network

…retinal image quality, despite the misalignments that accompany cataract surgery. To test this hypothesis …

Dainty, Chris

366

Automatic Detection of Anatomical Structures in Digital Fundus Retinal Images  

Microsoft Academic Search

This paper proposes a novel system for the automatic detection of important anatomical structures such as the Optic Disc (OD), Blood Vessels and Macula in digital fundus retinal images. The novelty is in extraction of blood vessels and localization of macula. OD localization is done using Principle Component Analysis (PCA) followed by an active contour based approach for accurate seg-

Anantha Vidya Sagar; S. Balasubramanian; V. Chandrasekaran

2007-01-01

367

Retinal images: Blood vessel segmentation by threshold probing  

Microsoft Academic Search

An automated system for screening and diagnosis of diabetic retinopathy should segment blood vessels from colored retinal image to assist the ophthalmologists. We present a method for blood vessel enhancement and segmentation. This paper proposes a wavelet based method for vessel enhancement, piecewise threshold probing and adaptive thresholding for vessel localization and segmentation respectively. The method is tested on publicly

M. Usman Akram; Aasia Khanum

2010-01-01

368

On the Small Vessel Detection in High Resolution Retinal Images  

Microsoft Academic Search

In this paper, we proposed a new scheme for detection of small blood vessels in retinal images. A novel filter called Gabor variance filter and a modified histogram equalization technique are developed to enhance the contrast between vessels and background. Vessel segmentation is then performed on the enhanced map using thresholding and branch pruning based on the vessel structures. The

Ming Zhang; Di Wu; Jyh-Charn Liu

2005-01-01

369

Directional Local Contrast Based Blood Vessel Detection in Retinal Images  

Microsoft Academic Search

In this paper, we proposed a novel algorithm to detect blood vessels on retinal images. By using directional local contrast as its detection feature, our algorithm is highly sensitive, fast and accurate. The algorithm only needs integral computing with very simple parameter adjustments and highly suitable for parallelization. It is much more robust to illumination conditions than intensity based counterparts

Ming Zhang; Jyh-charn Liu

2007-01-01

370

Judging an unfamiliar object's distance from its retinal image size.  

PubMed

How do we know how far an object is? If an object's size is known, its retinal image size can be used to judge its distance. To some extent, the retinal image size of an unfamiliar object can also be used to judge its distance, because some object sizes are more likely than others. To examine whether assumptions about object size are used to judge distance, we had subjects indicate the distance of virtual cubes in complete darkness. In separate sessions, the simulated cube size either varied slightly or considerably across presentations. Most subjects indicated a further distance when the simulated cube was smaller, showing that they used retinal image size to judge distance. The cube size that was considered to be most likely depended on the simulated cubes on previous trials. Moreover, subjects relied twice as strongly on retinal image size when the range of simulated cube sizes was small. We conclude that the variability in the perceived cube sizes on previous trials influences the range of sizes that are considered to be likely. PMID:21859822

Sousa, Rita; Brenner, Eli; Smeets, Jeroen B J

2011-01-01
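
The size-distance relation underlying the study can be illustrated numerically: for an assumed physical size, the indicated distance grows as the retinal (angular) image shrinks. The 5 cm cube size and the visual angles below are purely illustrative.

import math

def distance_from_angle(assumed_size_m, visual_angle_deg):
    """distance = size / (2 * tan(angle / 2)) for an object subtending the angle."""
    half = math.radians(visual_angle_deg) / 2.0
    return assumed_size_m / (2.0 * math.tan(half))

for angle in (4.0, 2.0, 1.0):                         # shrinking retinal image size
    print(angle, round(distance_from_angle(0.05, angle), 2))
# 4 deg -> 0.72 m, 2 deg -> 1.43 m, 1 deg -> 2.86 m for an assumed 5 cm cube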

371

3D X-ray imaging methods in support of catheter ablations of cardiac arrhythmias.

PubMed

Cardiac arrhythmias are a very frequent illness. Pharmacotherapy is not very effective in persistent arrhythmias and brings along a number of risks. Catheter ablation has become an effective and curative treatment method over the past 20 years. To support complex arrhythmia ablations, 3D X-ray imaging of the cardiac cavities is used, most frequently 3D reconstruction of CT images. 3D cardiac rotational angiography (3DRA) represents a modern method enabling the creation of CT-like 3D images on a standard X-ray machine equipped with special software. Its advantages lie in the possibility of obtaining images during the procedure, a decreased radiation dose and a reduced amount of contrast agent. The left atrium model is the one most frequently used for complex atrial arrhythmia ablations, particularly for atrial fibrillation. CT data allow for creation and segmentation of 3D models of all cardiac cavities. Recently, research has demonstrated the use of 3DRA to create 3D models of other cardiac structures (right ventricle, left ventricle, aorta) and non-cardiac structures (oesophagus). They can be used during catheter ablation of complex arrhythmias to improve orientation during the construction of 3D electroanatomic maps, directly fused with 3D electroanatomic systems and/or fused with fluoroscopy. Intensive development in the creation and use of 3D models has taken place over the past years, and they have become routinely used during catheter ablations of arrhythmias, mainly atrial fibrillation ablation procedures. Further development may be anticipated in both the creation and the use of these models. PMID:24964905

Stárek, Zdeněk; Lehar, František; Jež, Jiří; Wolf, Jiří; Novák, Miroslav

2014-10-01

372

A molecular image-directed, 3D ultrasound-guided biopsy system for the prostate  

NASA Astrophysics Data System (ADS)

Systematic transrectal ultrasound (TRUS)-guided biopsy is the standard method for a definitive diagnosis of prostate cancer. However, this biopsy approach uses two-dimensional (2D) ultrasound images to guide biopsy and can miss up to 30% of prostate cancers. We are developing a molecular image-directed, three-dimensional (3D) ultrasound image-guided biopsy system for improved detection of prostate cancer. The system consists of a 3D mechanical localization system and a software workstation for image segmentation, registration, and biopsy planning. In order to plan biopsy in a 3D prostate, we developed an automatic segmentation method based on the wavelet transform. In order to incorporate PET/CT images into ultrasound-guided biopsy, we developed image registration methods to fuse TRUS and PET/CT images. The segmentation method was tested in ten patients with a DICE overlap ratio of 92.4% +/- 1.1%. The registration method has been tested in phantoms. The biopsy system was tested in prostate phantoms and 3D ultrasound images were acquired from two human patients. We are integrating the system for PET/CT directed, 3D ultrasound-guided, targeted biopsy in human patients.

Fei, Baowei; Schuster, David M.; Master, Viraj; Akbari, Hamed; Fenster, Aaron; Nieh, Peter

2012-02-01

373

Free-Breathing 3D Whole Heart Black Blood Imaging with Motion Sensitized Driven Equilibrium  

PubMed Central

Purpose To assess the efficacy and robustness of motion-sensitized driven equilibrium (MSDE) for blood suppression in volumetric 3D whole-heart cardiac MR. Materials and Methods To investigate the efficacy of MSDE for blood suppression and the associated myocardial SNR loss with different imaging sequences, 7 healthy adult subjects were imaged using 3D ECG-triggered MSDE-prep T1-weighted turbo spin echo (TSE) and spoiled gradient echo (GRE), after optimization of the MSDE parameters in a pilot study of 5 subjects. Imaging artifacts and myocardial and blood SNR were assessed. Subsequently, the feasibility of isotropic-spatial-resolution MSDE-prep black-blood imaging was assessed in 6 subjects. Finally, 15 patients with known or suspected cardiovascular disease were recruited and imaged using a conventional multi-slice 2D DIR TSE imaging sequence and 3D MSDE-prep spoiled GRE. Results The MSDE-prep yields significant blood suppression (75-92%), enabling a volumetric 3D black-blood assessment of the whole heart with significantly improved visualization of the chamber walls. The MSDE-prep also allowed successful acquisition of black-blood images with isotropic spatial resolution. In the patient study, 3D black-blood MSDE-prep and DIR resulted in similar blood suppression in the LV and RV walls, but the MSDE-prep had superior myocardial signal and wall sharpness. Conclusion The MSDE-prep allows volumetric black-blood imaging of the heart. PMID:22517477

Srinivasan, Subashini; Hu, Peng; Kissinger, Kraig V.; Goddu, Beth; Goepfert, Lois; Schmidt, Ehud J.; Kozerke, Sebastian; Nezafat, Reza

2012-01-01

374

An open-source deconvolution software package for 3-D quantitative fluorescence microscopy imaging  

PubMed Central

Summary Deconvolution techniques have been widely used for restoring the 3-D quantitative information of an unknown specimen observed using a wide-field fluorescence microscope. Deconv, an open-source deconvolution software package, was developed for 3-D quantitative fluorescence microscopy imaging and was released under the GNU Public License. Deconv provides numerical routines for simulation of a 3-D point spread function and deconvolution routines implementing three constrained iterative deconvolution algorithms: one based on a Poisson noise model and two others based on a Gaussian noise model. These algorithms are presented and evaluated using synthetic images and experimentally obtained microscope images, and the use of the library is explained. Deconv allows users to assess the utility of these deconvolution algorithms and to determine which are suited to a particular imaging application. The design of Deconv makes it easy for deconvolution capabilities to be incorporated into existing imaging applications. PMID:19941558

SUN, Y.; DAVIS, P.; KOSMACEK, E. A.; IANZINI, F.; MACKEY, M. A.

2010-01-01
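
A constrained iterative deconvolution of the kind the package implements can be sketched with a plain Richardson-Lucy loop (a Poisson-model algorithm); this is an illustrative re-implementation with an assumed Gaussian PSF and toy volume, not the Deconv library's own API.

import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(image, psf, n_iter=25, eps=1e-12):
    """Deconvolve `image` with `psf` by multiplicative Richardson-Lucy updates."""
    estimate = np.full_like(image, image.mean())
    psf_mirror = psf[::-1, ::-1, ::-1]
    for _ in range(n_iter):
        blurred = fftconvolve(estimate, psf, mode="same") + eps
        ratio = image / blurred
        estimate = estimate * fftconvolve(ratio, psf_mirror, mode="same")
    return estimate

# Toy 3-D example: blur a bead-like object with a Gaussian PSF, then restore it.
z, y, x = np.mgrid[-7:8, -7:8, -7:8]
psf = np.exp(-(x**2 + y**2 + z**2) / 4.0); psf /= psf.sum()
obj = np.zeros((32, 32, 32)); obj[16, 16, 16] = 1.0
blurred = fftconvolve(obj, psf, mode="same")
restored = richardson_lucy(blurred, psf)
print(float(restored.max()), float(blurred.max()))   # the restored peak sharpens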

375

Automatic 3D segmentation of ultrasound images using atlas registration and statistical texture prior  

NASA Astrophysics Data System (ADS)

We are developing a molecular image-directed, 3D ultrasound-guided, targeted biopsy system for improved detection of prostate cancer. In this paper, we propose an automatic 3D segmentation method for transrectal ultrasound (TRUS) images, which is based on multi-atlas registration and statistical texture prior. The atlas database includes registered TRUS images from previous patients and their segmented prostate surfaces. Three orthogonal Gabor filter banks are used to extract texture features from each image in the database. Patient-specific Gabor features from the atlas database are used to train kernel support vector machines (KSVMs) and then to segment the prostate image from a new patient. The segmentation method was tested in TRUS data from 5 patients. The average surface distance between our method and manual segmentation is 1.61 +/- 0.35 mm, indicating that the atlas-based automatic segmentation method works well and could be used for 3D ultrasound-guided prostate biopsy.

Yang, Xiaofeng; Schuster, David; Master, Viraj; Nieh, Peter; Fenster, Aaron; Fei, Baowei

2011-03-01
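
The Gabor texture features feeding the kernel SVMs can be sketched as below for a single image plane; the kernel size, frequencies and orientations are illustrative assumptions, and the three orthogonal filter banks, atlas registration and KSVM training steps are not shown.

import numpy as np
from scipy.ndimage import convolve

def gabor_kernel(frequency, theta, sigma=3.0, size=15):
    """Real part of a 2-D Gabor kernel at the given frequency and orientation."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr**2 + yr**2) / (2 * sigma**2))
    return envelope * np.cos(2 * np.pi * frequency * xr)

def gabor_features(image, frequencies=(0.1, 0.2), n_theta=4):
    """Stack of filter responses, one channel per (frequency, orientation) pair."""
    responses = [convolve(image, gabor_kernel(f, t))
                 for f in frequencies
                 for t in np.linspace(0, np.pi, n_theta, endpoint=False)]
    return np.stack(responses, axis=-1)

slice_img = np.random.rand(128, 128)                  # stand-in for one TRUS plane
features = gabor_features(slice_img)
print(features.shape)                                 # (128, 128, 8) per-pixel features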

376

Introducing the depth transfer curve for 3D capture system characterization  

Microsoft Academic Search

3D technology has recently made a transition from movie theaters to consumer electronic devices such as 3D cameras and camcorders. In addition to what 2D imaging conveys, 3D content also contains information regarding the scene depth. Scene depth is simulated through the strongest brain depth cue, namely retinal disparity. This can be achieved by capturing an image by horizontally separated

Sergio R. Goma; Kalin Atanassov; Vikas Ramachandra

2011-01-01

377

Effects of point configuration on the accuracy in 3D reconstruction from biplane images  

SciTech Connect

Two or more angiograms are being used frequently in medical imaging to reconstruct locations in three-dimensional (3D) space, e.g., for reconstruction of 3D vascular trees, implanted electrodes, or patient positioning. A number of techniques have been proposed for this task. In this simulation study, we investigate the effect of the shape of the configuration of the points in 3D (the 'cloud' of points) on reconstruction errors for one of these techniques developed in our laboratory. Five types of configurations (a ball, an elongated ellipsoid (cigar), flattened ball (pancake), flattened cigar, and a flattened ball with a single distant point) are used in the evaluations. For each shape, 100 random configurations were generated, with point coordinates chosen from Gaussian distributions having a covariance matrix corresponding to the desired shape. The 3D data were projected into the image planes using a known imaging geometry. Gaussian distributed errors were introduced in the x and y coordinates of these projected points. Gaussian distributed errors were also introduced into the gantry information used to calculate the initial imaging geometry. The imaging geometries and 3D positions were iteratively refined using the enhanced-Metz-Fencil technique. The image data were also used to evaluate the feasible R-t solution volume. The 3D errors between the calculated and true positions were determined. The effects of the shape of the configuration, the number of points, the initial geometry error, and the input image error were evaluated. The results for the number of points, initial geometry error, and image error are in agreement with previously reported results, i.e., increasing the number of points and reducing initial geometry and/or image error, improves the accuracy of the reconstructed data. The shape of the 3D configuration of points also affects the error of reconstructed 3D configuration; specifically, errors decrease as the 'volume' of the 3D configuration increases, as would be intuitively expected, and shapes with larger spread, such as spherical shapes, yield more accurate reconstructions. These results are in agreement with an analysis of the solution volume of feasible geometries and could be used to guide selection of points for reconstruction of 3D configurations from two views.

Dmochowski, Jacek; Hoffmann, Kenneth R.; Singh, Vikas; Xu Jinhui; Nazareth, Daryl P. [Department of Mathematics and Statistics, UNC Charlotte, 9201 University City Boulevard, Charlotte, North Carolina 28223-0001 (United States); Department of Neurosurgery, Toshiba Stroke Center, University at Buffalo, Buffalo, New York 14214 (United States); Department of Computer Science, University at Buffalo, Buffalo, New York 14260 (United States); Toshiba Stroke Center, University at Buffalo, Buffalo, New York 14214 (United States)

2005-09-15

378

Flatbed-type 3D display systems using integral imaging method  

NASA Astrophysics Data System (ADS)

We have developed prototypes of flatbed-type autostereoscopic display systems using a one-dimensional integral imaging method. The integral imaging system reproduces light beams similar to those produced by a real object. Our display architecture is suitable for flatbed configurations because it has a large margin for viewing distance and angle and provides continuous motion parallax. We have applied our technology to 15.4-inch displays. We achieved a horizontal resolution of 480 with 12 parallaxes by adopting a mosaic pixel arrangement on the display panel, which allows viewers to see high-quality autostereoscopic images. Viewing the display from an angle allows the viewer to experience 3-D images that stand out several centimeters from the surface of the display. Mixed reality of virtual 3-D objects and real objects is also realized on a flatbed display. In seeking reproduction of natural 3-D images on the flatbed display, we developed proprietary software. Fast playback of CG movie content and real-time interaction are realized with the aid of a graphics card. Ensuring that 3-D images are safe for human viewers is very important, so we have measured the effects on visual function and evaluated the biological effects. For example, accommodation and convergence were measured at the same time, and various biological effects were also measured before and after the task of watching 3-D images. We have found that our displays show better results than a conventional stereoscopic display. The new technology opens up new areas of application for 3-D displays, including arcade games, e-learning, simulations of buildings and landscapes, and even 3-D menus in restaurants.

Hirayama, Yuzo; Nagatani, Hiroyuki; Saishu, Tatsuo; Fukushima, Rieko; Taira, Kazuki

2006-10-01

379

Image denoising with multiple layer block matching and 3D filtering  

NASA Astrophysics Data System (ADS)

Block Matching and 3-D Filtering (BM3D) is currently considered one of the most successful denoising algorithms. Despite its excellent results, BM3D still has room for improvement. Image details and sharp edges, such as text in document images, are challenging because they usually do not produce sparse representations under linear transformations, so artifacts such as ringing and blurring can be introduced. This paper proposes a Multiple Layer BM3D (MLBM3D) denoising algorithm. The basic idea is to decompose image patches that contain high-contrast details into multiple layers and then collaboratively filter each layer separately. The algorithm contains a Basic Estimation step and a Final Estimation step. The first (Basic Estimation) step is identical to the one in BM3D. In the second (Final Estimation) step, image groups are determined to be single-layer or multi-layer. A single-layer group is filtered in the same manner as in BM3D. For a multi-layer group, each image patch within the group is decomposed with the three-layer model. All the top layers in the group are stacked and collaboratively filtered, and so are the bottom layers. The filtered top and bottom layers are re-assembled to form the estimates of the blocks. The final estimate of the image is obtained by aggregating the estimates of all blocks, both single-layer and multi-layer. The proposed algorithm shows excellent results, particularly for images containing high-contrast edges.
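
The grouping-and-collaborative-filtering core that MLBM3D builds on can be sketched as follows: similar patches are stacked into a 3D group, a separable 3D DCT is hard-thresholded, and the filtered patches are aggregated by averaging. This is a toy, single-layer illustration under assumed parameters, not the paper's multi-layer decomposition or the full BM3D with Wiener filtering.

# Toy illustration of BM3D-style grouping and collaborative filtering:
# similar patches are stacked into a 3D group, a separable 3D DCT is hard-
# thresholded, and the filtered patches are aggregated back by averaging.
import numpy as np
from scipy.fft import dctn, idctn

def block_match(img, ref_xy, patch=8, search=16, n_best=16):
    """Return top-left corners of the patches most similar to the reference."""
    ry, rx = ref_xy
    ref = img[ry:ry+patch, rx:rx+patch]
    cands = []
    for y in range(max(0, ry-search), min(img.shape[0]-patch, ry+search) + 1):
        for x in range(max(0, rx-search), min(img.shape[1]-patch, rx+search) + 1):
            d = np.sum((img[y:y+patch, x:x+patch] - ref) ** 2)
            cands.append((d, y, x))
    cands.sort(key=lambda c: c[0])
    return [(y, x) for _, y, x in cands[:n_best]]

def collaborative_filter(img, sigma, patch=8, step=8, thr=2.7):
    out = np.zeros_like(img, dtype=float)
    weight = np.zeros_like(img, dtype=float)
    for ry in range(0, img.shape[0]-patch+1, step):
        for rx in range(0, img.shape[1]-patch+1, step):
            corners = block_match(img, (ry, rx), patch)
            group = np.stack([img[y:y+patch, x:x+patch] for y, x in corners])
            coef = dctn(group, norm="ortho")              # 3D transform
            coef[np.abs(coef) < thr * sigma] = 0.0        # hard thresholding
            filt = idctn(coef, norm="ortho")
            for g, (y, x) in zip(filt, corners):          # aggregate estimates
                out[y:y+patch, x:x+patch] += g
                weight[y:y+patch, x:x+patch] += 1.0
    return out / np.maximum(weight, 1e-12)

rng = np.random.default_rng(0)
clean = np.kron(rng.integers(0, 2, (8, 8)) * 255.0, np.ones((8, 8)))  # blocky test image
noisy = clean + rng.normal(0, 25, clean.shape)
denoised = collaborative_filter(noisy, sigma=25)
print("noisy RMSE", np.sqrt(np.mean((noisy - clean) ** 2)),
      "denoised RMSE", np.sqrt(np.mean((denoised - clean) ** 2)))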

Fan, Zhigang

2014-03-01

380

3D Imaging for hand gesture recognition: Exploring the software-hardware interaction of current technologies  

NASA Astrophysics Data System (ADS)

Interaction with 3D information is one of the fundamental and most familiar tasks in virtually all areas of engineering and science. Several recent technological advances pave the way for developing hand gesture recognition capabilities available to all, which will lead to more intuitive and efficient 3D user interfaces (3DUI). These developments can unlock new levels of expression and productivity in all activities concerned with the creation and manipulation of virtual 3D shapes and, specifically, in engineering design. Building fully automated systems for tracking and interpreting hand gestures requires robust and efficient 3D imaging techniques as well as potent shape classifiers. We survey and explore current and emerging 3D imaging technologies, and focus, in particular, on those that can be used to build interfaces between the users' hands and the machine. The purpose of this paper is to categorize and highlight the relevant differences between these existing 3D imaging approaches in terms of the nature of the information provided, output data format, as well as the specific conditions under which these approaches yield reliable data. Furthermore we explore the impact of each of these approaches on the computational cost and reliability of the required image processing algorithms. Finally we highlight the main challenges and opportunities in developing natural user interfaces based on hand gestures, and conclude with some promising directions for future research.

Periverzov, Frol; Ilieş, Horea T.

2012-09-01

381

Dynamic visual image modeling for 3D synthetic scenes in agricultural engineering  

NASA Astrophysics Data System (ADS)

This paper addresses dynamic visual image modeling for 3D synthetic scenes using dynamic multichannel binocular visual images transmitted over a mobile self-organizing network. Technologies for 3D modeling of synthetic scenes have been widely used in many industries. The main purpose of this paper is to use multiple networks of dynamic visual monitors and sensors to observe an unattended area, and to use the advantages of mobile networks in rural areas to improve existing mobile information services and provide personalized information services. The goal of the display is to provide a faithful representation of the synthetic scene. Using low-power dynamic visual monitors and temperature/humidity sensors or GPS installed in the node equipment, monitoring data are sent at scheduled times. Then, through the mobile self-organizing network, the 3D model is rebuilt by synthesizing the returned images. On this basis, we formalize a novel algorithm for multichannel binocular visual 3D imaging based on fast 3D modeling. Taking advantage of low-priced mobile devices, mobile self-organizing networks can collect large amounts of video from places that are unsuitable for human observation or unreachable, and accurately synthesize the 3D scene. This application will play a great role in promoting 3D modeling in agriculture.

Gao, Li; Yan, Juntao; Li, Xiaobo; Ji, Yatai; Li, Xin

382

A Novel 3D Building Damage Detection Method Using Multiple Overlapping UAV Images  

NASA Astrophysics Data System (ADS)

In this paper, a novel approach is presented that applies multiple overlapping UAV images to building damage detection. Traditional building damage detection methods focus on 2D change detection (i.e., changes in image appearance only), but the 2D information delivered by the images is often neither sufficient nor accurate enough for building damage detection. Therefore, detecting building damage from the 3D features of a scene is desirable. The key idea of 3D building damage detection is 3D change detection using a 3D point cloud obtained from aerial images through structure-from-motion (SFM) techniques. The approach discussed in this paper not only uses the height changes of the 3D scene but also utilizes the image's shape and texture features. Therefore, this method fully combines the 2D and 3D information of the real world to detect building damage. The results, tested through a field study, demonstrate that this method is feasible and effective for building damage detection. It has also been shown that the proposed method is easily applicable and well suited for rapid damage assessment after natural disasters.
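
The height-change component can be sketched as below: pre- and post-event point clouds (for example from an SFM pipeline) are rasterized to digital surface models on a common grid, and cells whose height drops beyond a threshold are flagged. The grid size, threshold and synthetic clouds are assumptions, and the paper's image shape and texture cues are omitted.

# Sketch of the height-change step: rasterize pre- and post-event point clouds
# into digital surface models on a common grid and flag cells whose height
# dropped by more than a threshold. The image shape/texture cues are omitted.
import numpy as np

def rasterize_dsm(points, cell=0.5, bounds=None):
    """Max-height DSM from an Nx3 array of (x, y, z) points."""
    if bounds is None:
        bounds = (points[:, 0].min(), points[:, 1].min(),
                  points[:, 0].max(), points[:, 1].max())
    x0, y0, x1, y1 = bounds
    nx = int(np.ceil((x1 - x0) / cell)) + 1
    ny = int(np.ceil((y1 - y0) / cell)) + 1
    dsm = np.full((ny, nx), np.nan)
    ix = ((points[:, 0] - x0) / cell).astype(int)
    iy = ((points[:, 1] - y0) / cell).astype(int)
    for x, y, z in zip(ix, iy, points[:, 2]):
        if np.isnan(dsm[y, x]) or z > dsm[y, x]:
            dsm[y, x] = z
    return dsm, bounds

def damage_mask(pre_pts, post_pts, cell=0.5, drop_thresh=2.0):
    """Cells where the surface height decreased by more than drop_thresh metres."""
    pre, bounds = rasterize_dsm(pre_pts, cell)
    post, _ = rasterize_dsm(post_pts, cell, bounds)
    diff = pre - post
    return np.nan_to_num(diff, nan=0.0) > drop_thresh

# Hypothetical usage with synthetic clouds: a 10 m x 10 m block loses 3 m of height.
rng = np.random.default_rng(0)
xy = rng.uniform(0, 50, (20000, 2))
pre_z = np.where((xy[:, 0] < 10) & (xy[:, 1] < 10), 8.0, 0.0)
post_z = np.where((xy[:, 0] < 10) & (xy[:, 1] < 10), 5.0, 0.0)
mask = damage_mask(np.column_stack([xy, pre_z]), np.column_stack([xy, post_z]))
print("flagged cells:", int(mask.sum()))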

Sui, H.; Tu, J.; Song, Z.; Chen, G.; Li, Q.

2014-09-01

383

Improvement of BM3D Algorithm and Employment to Satellite and CFA Images Denoising  

E-print Network

This paper proposes a new procedure to improve the performance of the block matching and 3-D filtering (BM3D) image denoising algorithm. It is demonstrated that it is possible to achieve better performance than BM3D across a variety of noise levels. The method changes BM3D parameter values according to the noise level and removes the prefiltering that is normally used at high noise levels; as a result, Peak Signal-to-Noise Ratio (PSNR) and visual quality are improved while BM3D complexity and processing time are reduced. This improved BM3D algorithm is extended and used to denoise satellite and color filter array (CFA) images. The results show improved performance compared with current methods for denoising satellite and CFA images. In this regard, the algorithm is compared, in terms of PSNR and visual quality, with the Adaptive PCA algorithm, which has previously led to superior performance for denoising CFA images. The processing time is also decreased significantly.

Pakdelazar, Omid

2011-01-01

384

Computer-generated holograms for reconstructing multi 3D images by space-division recording method  

NASA Astrophysics Data System (ADS)

In this report, computer-generated holograms (CGHs) that can reconstruct different 3D images depending on the viewpoint are discussed as an application of electron-beam-printed CGHs. In previous Practical Holography conferences, image-type CGHs able to reconstruct 3D images under white light were reported. Building on that fabrication method, a trial CGH capable of reconstructing three different 3D images was fabricated. To achieve this, the angle selectivity of the incident reconstruction light was exploited: to reconstruct three different images, three reference waves with different incident angles were selected and the three resulting interferogram data sets were merged into a single multi-recorded CGH. The recorded images are horizontal-parallax-only, so the CGH can be composed of horizontally sliced elemental CGHs, and the three CGHs are synthesized by interleaving their elemental CGHs. In this way, a multi-recorded CGH by space division was fabricated experimentally. A 10 mm × 18 mm CGH was produced, and it was confirmed that three different 3D images can be observed separately as the viewpoint moves.
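
A rough numerical sketch of the space-division idea follows: interference patterns are computed for three reference plane waves at different incidence angles, and their rows are interleaved into one multi-recorded pattern. The wavelength, pitch and placeholder object fields are assumed values; electron-beam printing and horizontal-parallax-only encoding are outside the sketch.

# Sketch of space-division recording: compute interference fringes for three
# reference plane waves at different incidence angles, then interleave
# horizontal slices of the three fringe patterns into one hologram.
import numpy as np

WAVELEN = 0.633e-6          # metres (assumed red laser wavelength)
PIXEL = 1.0e-6              # hologram sampling pitch in metres (assumed)
NY, NX = 512, 512

def reference_wave(angle_deg):
    """Tilted plane wave sampled on the hologram plane (tilt along rows)."""
    y = np.arange(NY)[:, None] * PIXEL
    ky = 2 * np.pi / WAVELEN * np.sin(np.radians(angle_deg))
    return np.exp(1j * ky * y)

def object_wave(seed):
    """Placeholder object field: unit amplitude with random phase."""
    rng = np.random.default_rng(seed)
    return np.exp(1j * 2 * np.pi * rng.random((NY, NX)))

def interferogram(obj, ref):
    return np.abs(obj + ref) ** 2   # intensity pattern that would be printed

# Three interferograms, one per reconstruction angle / image.
holos = [interferogram(object_wave(s), reference_wave(a))
         for s, a in zip((0, 1, 2), (-10.0, 0.0, 10.0))]

# Space-division multiplexing: rows are assigned cyclically to the three CGHs.
multi = np.empty_like(holos[0])
for i, h in enumerate(holos):
    multi[i::3, :] = h[i::3, :]
print("multi-recorded hologram shape:", multi.shape)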

Hamano, Tomohisa; Kitamura, Mitsuru

2000-03-01

385

Fractal analysis of the retinal vascular network in fundus images.  

PubMed

Complexity of the retinal vascular network is quantified through the measurement of fractal dimension. A computerized approach enhances and segments the retinal vasculature in digital fundus images with an accuracy of 94% in comparison to the gold standard of manual tracing. Fractal analysis was performed on skeletonized versions of the network in 40 images from a study of stroke. The mean fractal dimension was found to be 1.398 (standard deviation 0.024) from 20 images of the hypertensive subgroup and 1.408 (standard deviation 0.025) from 18 images of the non-hypertensive subgroup. No evidence of a significant difference in the results was found for this sample size. However, statistical analysis showed that detecting a significant difference at the level seen in the data would require a larger sample size of 88 per group. PMID:18003503
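
Box counting is the usual way such a fractal dimension is estimated; a minimal sketch on a binary skeleton follows. The box sizes and the synthetic random-walk "vessel" pattern are illustrative assumptions, not the study's data or exact estimator.

# Minimal box-counting estimate of fractal dimension for a binary skeleton.
# The dimension is the slope of log(box count) versus log(1 / box size).
import numpy as np

def box_count(skeleton, size):
    """Number of size x size boxes containing at least one skeleton pixel."""
    h, w = skeleton.shape
    hh, ww = h - h % size, w - w % size           # crop to a multiple of size
    blocks = skeleton[:hh, :ww].reshape(hh // size, size, ww // size, size)
    return np.count_nonzero(blocks.any(axis=(1, 3)))

def fractal_dimension(skeleton, sizes=(2, 4, 8, 16, 32, 64)):
    counts = np.array([box_count(skeleton, s) for s in sizes], dtype=float)
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope

# Hypothetical usage with a synthetic branching pattern standing in for a
# skeletonized retinal vessel network.
rng = np.random.default_rng(0)
img = np.zeros((512, 512), dtype=bool)
y, x = 256.0, 256.0
for _ in range(20000):                            # random-walk "vessel" trace
    y = np.clip(y + rng.normal(0, 1), 0, 511)
    x = np.clip(x + rng.normal(0, 1), 0, 511)
    img[int(y), int(x)] = True
print("estimated fractal dimension:", round(fractal_dimension(img), 3))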

Macgillivray, T J; Patton, N; Doubal, F N; Graham, C; Wardlaw, J M

2007-01-01

386

Hyperspectral retinal imaging with a spectrally tunable light source  

NASA Astrophysics Data System (ADS)

Hyperspectral retinal imaging can measure oxygenation and identify areas of ischemia in human patients, but the devices used by current researchers are inflexible in spatial and spectral resolution. We have developed a flexible research prototype consisting of a DLP-based spectrally tunable light source coupled to a fundus camera to quickly explore the effects of spatial resolution, spectral resolution, and spectral range on hyperspectral imaging of the retina. The goal of this prototype is to (1) identify spectral and spatial regions of interest for early diagnosis of diseases such as glaucoma, age-related macular degeneration (AMD), and diabetic retinopathy (DR); and (2) define required specifications for commercial products. In this paper, we describe the challenges and advantages of using a spectrally tunable light source for hyperspectral retinal imaging, present clinical results of initial imaging sessions, and describe how this research can be leveraged into specifying a commercial product.

Francis, Robert P.; Zuzak, Karel J.; Ufret-Vincenty, Rafael

2011-03-01

387

Real-Depth imaging: a new (no glasses) 3D imaging technology with video\\/data projection applications  

Microsoft Academic Search

Floating Images, Inc. has developed the software and hardware for a new, patent-pending, 'floating 3D, off-the-screen-experience' display technology. This technology has the potential to become the next standard for home and arcade video games, computers, corporate presentations, Internet/Intranet viewing, and television. Current '3D graphics' technologies are actually flat on screen. Floating Images technology actually produces images at different depths

Eugene Dolgoff

1997-01-01

388

Detection, identification and tracking of biological micro/nano organisms by computational 3D optical imaging  

NASA Astrophysics Data System (ADS)

In this paper we present an overview of our work on a method for three-dimensional (3D) identification and tracking of biological micro/nano-organisms. The approach combines digital holographic microscopy with statistical methods for cell identification. For 3D data acquisition of living biological microorganisms, a filtered white-light source, LED, or laser diode beam propagates through the microorganism, and the Gabor hologram pattern, magnified transversely and longitudinally by a microscope objective, is optically recorded with a CCD camera interfaced to a computer. 3D imaging of the microorganism from the magnified Gabor hologram pattern is obtained by applying a computational Fresnel propagation algorithm. For identification and tracking, statistical estimation and inference algorithms are applied to the segmented holographic 3D image. Overviews of the analytical frameworks are discussed and experimental results are presented.
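
The computational propagation step can be illustrated with an angular-spectrum implementation, one common way to realize numerical Fresnel propagation; the sketch below synthesizes an in-line hologram of a small disc and refocuses it at several depths. The wavelength, pixel pitch and distances are assumed values, not the authors' settings.

# Sketch of numerical reconstruction of an in-line (Gabor) hologram by
# angular-spectrum propagation of the square root of the recorded intensity.
import numpy as np

def angular_spectrum(field, wavelen, pitch, z):
    """Propagate a sampled complex field by a distance z (metres)."""
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=pitch)
    fy = np.fft.fftfreq(ny, d=pitch)
    FX, FY = np.meshgrid(fx, fy)
    arg = 1.0 - (wavelen * FX) ** 2 - (wavelen * FY) ** 2
    prop = np.exp(1j * 2 * np.pi / wavelen * np.sqrt(np.maximum(arg, 0.0)) * z)
    prop[arg <= 0] = 0.0                       # drop evanescent components
    return np.fft.ifft2(np.fft.fft2(field) * prop)

def reconstruct(hologram, wavelen, pitch, z):
    """Back-propagate the square root of the recorded intensity by -z."""
    field = np.sqrt(np.clip(hologram, 0, None)).astype(complex)
    return np.abs(angular_spectrum(field, wavelen, pitch, -z))

# Hypothetical usage: synthesize an in-line hologram of an opaque disc of
# 30-micron radius at 2 mm from the sensor, then refocus at candidate depths.
wavelen, pitch, z_true = 0.532e-6, 2.0e-6, 2.0e-3
yy, xx = np.mgrid[-256:256, -256:256] * pitch
obj = (np.hypot(yy, xx) >= 30e-6).astype(complex)     # opaque disc in a clear field
holo = np.abs(angular_spectrum(obj, wavelen, pitch, z_true)) ** 2
for z in (1.0e-3, 2.0e-3, 3.0e-3):
    img = reconstruct(holo, wavelen, pitch, z)
    print(f"refocused at {z*1e3:.1f} mm: intensity std = {img.std():.4f}")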

Javidi, Bahram; Moon, Inkyu; Daneshpanah, Mehdi

2010-08-01

389

Development of an Integrated Hyperspectral Imager and 3D-Flash LADAR for Terrestrial Characterization  

Microsoft Academic Search

The characterization of terrestrial ecosystems using remote sensing technology has a long history with using multi-spectral imagers for vegetation classification indices, ecosystem health, and change detection. Traditional multi-band imagers are now being replaced with more advanced hyperspectral imagers, which offer finer spectral resolution and more specific characterization of terrestrial reflectances. Recently, 3- dimensional (3D) imaging technologies, such as radar interferometry

A. L. Swanson; S. Sandor-Leahy; J. Shepanski; C. Wong; C. Bracikowski; L. Abelson; M. Helmlinger; D. Bauer; M. Folkman

2009-01-01

390

Development of an Integrated Hyperspectral Imager and 3D-Flash Lidar for Terrestrial Characterization  

Microsoft Academic Search

The characterization of terrestrial ecosystems using remote sensing technology has a long history with using multi-spectral imagers for vegetation classification indices, ecosystem health, and change detection. Traditional multi-band imagers are now being replaced with more advanced hyperspectral imagers, which offer finer spectral resolution and more specific characterization of terrestrial reflectances. Recently, 3-dimensional (3D) imaging technologies, such as radar interferometry and

L. Abelson; A. L. Swanson; S. Sandor-Leahy; J. Shepanski; C. Wong; M. Helmlinger; M. Folkman

2009-01-01

391

Midsagittal plane extraction from brain images based on 3D SIFT  

NASA Astrophysics Data System (ADS)

Midsagittal plane (MSP) extraction from 3D brain images is considered a promising technique for human brain symmetry analysis. In this paper, we present a fast and robust MSP extraction method based on 3D scale-invariant feature transform (SIFT). Unlike the existing brain MSP extraction methods, which mainly rely on the gray similarity, 3D edge registration or parameterized surface matching to determine the fissure plane, our proposed method is based on distinctive 3D SIFT features, in which the fissure plane is determined by parallel 3D SIFT matching and iterative least-median of squares plane regression. By considering the relative scales, orientations and flipped descriptors between two 3D SIFT features, we propose a novel metric to measure the symmetry magnitude for 3D SIFT features. By clustering and indexing the extracted SIFT features using a k-dimensional tree (KD-tree) implemented on graphics processing units, we can match multiple pairs of 3D SIFT features in parallel and solve the optimal MSP on-the-fly. The proposed method is evaluated by synthetic and in vivo datasets, of normal and pathological cases, and validated by comparisons with the state-of-the-art methods. Experimental results demonstrated that our method has achieved a real-time performance with better accuracy yielding an average yaw angle error below 0.91° and an average roll angle error no more than 0.89°.
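
Once mirror-symmetric keypoint pairs are matched, the plane itself can be estimated as sketched below: pair midpoints should lie on the MSP and pair directions approximate its normal, and a random-sampling, least-median-of-squares style loop gives robustness to bad matches. The 3D SIFT detection, matching and GPU indexing are not reproduced, and the synthetic points are illustrative.

# Sketch of the plane-estimation step from matched mirror-symmetric keypoints.
import numpy as np

def fit_plane_from_pairs(left_pts, right_pts, n_trials=200, seed=0):
    """Return (normal, d) with normal . x = d, estimated robustly."""
    rng = np.random.default_rng(seed)
    mids = 0.5 * (left_pts + right_pts)            # midpoints lie on the plane
    dirs = left_pts - right_pts                    # pair direction ~ plane normal
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    best = None
    for _ in range(n_trials):
        i = rng.integers(len(mids))                # one pair hypothesizes a plane
        n, d = dirs[i], dirs[i] @ mids[i]
        residuals = np.abs(mids @ n - d)
        med = np.median(residuals)                 # least-median-of-squares score
        if best is None or med < best[0]:
            best = (med, n, d)
    return best[1], best[2]

# Hypothetical usage: symmetric points about the plane x = 90, plus outliers.
rng = np.random.default_rng(1)
right = rng.uniform(0, 80, (100, 3))               # points on one side
left = right.copy()
left[:, 0] = 180.0 - right[:, 0]                   # mirror about x = 90
left[:10] += rng.normal(0, 25, (10, 3))            # corrupt a few matches
normal, d = fit_plane_from_pairs(left, right)
print("estimated normal:", np.round(normal, 3), " offset:", round(float(d), 2))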

Wu, Huisi; Wang, Defeng; Shi, Lin; Wen, Zhenkun; Ming, Zhong

2014-03-01

392

Midsagittal plane extraction from brain images based on 3D SIFT.  

PubMed

Midsagittal plane (MSP) extraction from 3D brain images is considered a promising technique for human brain symmetry analysis. In this paper, we present a fast and robust MSP extraction method based on 3D scale-invariant feature transform (SIFT). Unlike the existing brain MSP extraction methods, which mainly rely on the gray similarity, 3D edge registration or parameterized surface matching to determine the fissure plane, our proposed method is based on distinctive 3D SIFT features, in which the fissure plane is determined by parallel 3D SIFT matching and iterative least-median of squares plane regression. By considering the relative scales, orientations and flipped descriptors between two 3D SIFT features, we propose a novel metric to measure the symmetry magnitude for 3D SIFT features. By clustering and indexing the extracted SIFT features using a k-dimensional tree (KD-tree) implemented on graphics processing units, we can match multiple pairs of 3D SIFT features in parallel and solve the optimal MSP on-the-fly. The proposed method is evaluated by synthetic and in vivo datasets, of normal and pathological cases, and validated by comparisons with the state-of-the-art methods. Experimental results demonstrated that our method has achieved a real-time performance with better accuracy yielding an average yaw angle error below 0.91° and an average roll angle error no more than 0.89°. PMID:24583964

Wu, Huisi; Wang, Defeng; Shi, Lin; Wen, Zhenkun; Ming, Zhong

2014-03-21

393

Hybrid atlas-based and image-based approach for segmenting 3D brain MRIs  

NASA Astrophysics Data System (ADS)

This work is a contribution to the problem of localizing key cerebral structures in 3D MRIs and its quantitative evaluation. It considers the cooperation between an image-based segmentation method and a hierarchical deformable registration approach. The segmentation relies on two main processes: homotopy modification and contour decision. The first is achieved by a marker extraction stage in which homogeneous 3D regions of an image I(s) from the data set are identified. These regions, M(I), are obtained by combining information from a deformable atlas, produced by warping eight previously labeled maps onto I(s). The goal of the decision stage is then to precisely locate the contours of the 3D regions defined by the markers. This contour decision is performed by a 3D extension of the watershed transform. The anatomical structures considered and embedded in the atlas are the brain, ventricles, corpus callosum, cerebellum, right and left hippocampus, medulla and midbrain. The hybrid method operates fully automatically and in 3D, successfully providing segmented brain structures. The quality of the segmentation has been studied in terms of the detected volume ratio using the kappa statistic and ROC analysis. Results of the method are shown and validated on a 3D MRI phantom. This study forms part of on-going long-term research aiming at the creation of a 3D probabilistic multi-purpose anatomical brain atlas.
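
The marker-plus-watershed decision stage can be sketched with standard tools, as below: synthetic seed labels stand in for the atlas-derived markers, and scikit-image's watershed of a gradient volume decides the contours. The volume, seeds and smoothing parameters are assumptions for illustration only.

# Sketch of marker-driven watershed in 3D: markers (here synthetic, in the
# paper derived from a warped atlas) impose the homotopy, and the watershed of
# a gradient volume decides the contours. Uses scikit-image and SciPy only.
import numpy as np
from scipy import ndimage as ndi
from skimage.segmentation import watershed

# Synthetic "MRI": two bright blobs standing in for anatomical structures.
vol = np.zeros((64, 64, 64))
vol[10:30, 10:30, 10:30] = 1.0
vol[35:55, 35:55, 35:55] = 1.0
vol = ndi.gaussian_filter(vol, 2) + np.random.default_rng(0).normal(0, 0.02, vol.shape)

# Gradient-magnitude volume: watershed lines tend to follow strong gradients.
gradient = ndi.generic_gradient_magnitude(vol, ndi.sobel)

# Markers impose the homotopy: one label per structure plus a background label.
markers = np.zeros(vol.shape, dtype=np.int32)
markers[18:22, 18:22, 18:22] = 1        # seed inside structure 1
markers[43:47, 43:47, 43:47] = 2        # seed inside structure 2
markers[0:3, 0:3, 0:3] = 3              # background seed

labels = watershed(gradient, markers)   # 3D watershed of the gradient volume
for lab in (1, 2, 3):
    print(f"label {lab}: {np.count_nonzero(labels == lab)} voxels")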

Bueno, Gloria; Musse, Olivier; Heitz, Fabrice; Armspach, Jean-Paul

2001-07-01

394

Computer-aided 3D-shape construction of hearts from CT images for rapid prototyping  

NASA Astrophysics Data System (ADS)

By developing a computer-aided modeling system, 3D shapes of an infant's heart have been constructed interactively from quality-limited CT images for rapid prototyping of biomodels. The 3D model was obtained through the following interactive steps: (1) rough region cropping, (2) outline extraction in each slice with a locally optimized threshold, (3) verification and correction of outline overlap, (4) 3D surface generation of the inside wall, (5) connection of inside walls, (6) 3D surface generation of the outside wall, and (7) synthesis of a self-consistent 3D surface. The manufactured biomodels revealed characteristic 3D shapes of the heart such as the left atrium and ventricle, aortic arch and right auricle. Their realistic representation of cavities and vessels is suitable for surgery planning and simulation, which is a clear advantage over the so-called "blood-pool" model, which is massive and often produced in 3D visualization of CT images as a volume-rendering perspective. The developed system contributed both to quality improvement and to modeling-time reduction, suggesting a practical approach to establishing a routine process for manufacturing heart biomodels. Further study of the system performance is in progress.
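
The automatable parts of such a pipeline can be sketched as below: per-slice outline extraction with a locally optimized (here Otsu) threshold and surface generation with marching cubes, using scikit-image. The synthetic shell volume and spacing are assumptions; the interactive verification/correction steps and the separate inside/outside wall handling are omitted.

# Sketch: crop-free per-slice thresholding plus marching-cubes surface
# generation, the automated counterparts of steps (2) and (4)-(7).
import numpy as np
from skimage.filters import threshold_otsu
from skimage.measure import marching_cubes

def segment_slices(volume):
    """Binary mask computed slice by slice with a per-slice Otsu threshold."""
    mask = np.zeros(volume.shape, dtype=bool)
    for k, sl in enumerate(volume):
        mask[k] = sl > threshold_otsu(sl)
    return mask

def surface_from_mask(mask, spacing=(1.0, 0.5, 0.5)):
    """Triangle mesh (vertices, faces) suitable for export to STL for printing."""
    verts, faces, _normals, _values = marching_cubes(mask.astype(np.float32),
                                                     level=0.5, spacing=spacing)
    return verts, faces

# Hypothetical usage on a synthetic low-contrast "CT" volume with a hollow shell.
rng = np.random.default_rng(0)
zz, yy, xx = np.mgrid[-32:32, -32:32, -32:32]
r = np.sqrt(zz**2 + yy**2 + xx**2)
vol = ((r > 18) & (r < 24)).astype(float) + rng.normal(0, 0.1, r.shape)
verts, faces = surface_from_mask(segment_slices(vol))
print(f"{len(verts)} vertices, {len(faces)} triangles")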

Fukuzawa, Masayuki; Kato, Yutaro; Nakamori, Nobuyuki; Ozawa, Seiichiro; Shiraishi, Isao

2012-03-01

395

Analysis and Processing the 3D-Range-Image-Data for Robot Monitoring  

NASA Astrophysics Data System (ADS)

Industrial robots are commonly used for physically stressful jobs in complex environments. Collisions with these heavy, highly dynamic machines must be prevented, so the operational range has to be monitored precisely, reliably and meticulously. The advantage of the SwissRanger SR-3000 is that it delivers intensity images and 3D information of the same scene simultaneously, which conveniently allows 3D monitoring. As a result, automatic real-time collision prevention within the robot's working space is possible by working with 3D coordinates.
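
Working directly with the delivered 3D coordinates can be illustrated with a simple protective-zone check, sketched below: any frame in which enough valid points fall inside an axis-aligned safety volume triggers a stop. The zone geometry, point counts and simulated frame are assumptions, not vendor specifications.

# Sketch of a protective-zone check on the 3D point data a range camera
# delivers: points falling inside the robot's safety volume trigger a stop.
import numpy as np

# Axis-aligned safety volume around the robot, in the camera frame (metres).
ZONE_MIN = np.array([-0.5, -0.5, 1.0])
ZONE_MAX = np.array([0.5, 0.5, 2.0])

def intrusion_detected(points_xyz, min_points=20):
    """True if at least min_points valid 3D points lie inside the safety zone."""
    valid = np.isfinite(points_xyz).all(axis=1)
    p = points_xyz[valid]
    inside = np.all((p >= ZONE_MIN) & (p <= ZONE_MAX), axis=1)
    return np.count_nonzero(inside) >= min_points

# Hypothetical usage with a simulated frame: mostly distant points plus an
# "arm" of 200 points reaching into the zone.
rng = np.random.default_rng(0)
frame = rng.uniform([-2, -2, 3], [2, 2, 6], size=(176 * 144, 3))
frame[:200] = rng.uniform([0.0, 0.0, 1.2], [0.2, 0.2, 1.8], size=(200, 3))
print("stop robot:", intrusion_detected(frame))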

Kohoutek, Tobias

2008-09-01

396

A correction method for range walk error in photon counting 3D imaging LIDAR  

NASA Astrophysics Data System (ADS)

A correction method for the range walk error is presented in this paper, which is based on a priori modeling and is suitable for GmAPD photon-counting three-dimensional (3D) imaging LIDAR. The range walk error is mainly caused by fluctuations in the number of photons in the laser echo pulse. In this paper, an a priori model of the range walk error was established, and the functional relationship between the range walk error and the laser pulse response rate was determined by numerical fitting. With this function, the range walk error of the original 3D range image was predicted and a corresponding compensation image was obtained to correct the original 3D range image. The experimental results showed that the correction method can reduce the range walk error effectively, and that it is particularly suitable for scenes with significant differences in material properties or reflection characteristics.
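
The correction idea can be sketched as below: a smooth function is fitted to calibration pairs of pulse response rate versus range bias, and the predicted bias is subtracted pixel by pixel from the range image. The polynomial form, the synthetic exponential bias and all numbers are illustrative assumptions, not the paper's fitted model.

# Sketch: calibrate range bias versus pulse response rate, fit a smooth
# function, then subtract the predicted bias from each pixel of a range image.
import numpy as np

def fit_walk_model(response_rate, range_bias, degree=3):
    """Polynomial fit of range walk error (metres) versus pulse response rate."""
    return np.polynomial.Polynomial.fit(response_rate, range_bias, degree)

def correct_range_image(range_img, rate_img, model):
    """Subtract the predicted range walk error pixel by pixel."""
    return range_img - model(rate_img)

# Hypothetical calibration data: bias shrinks as the response rate increases.
rng = np.random.default_rng(0)
rate = np.linspace(0.05, 0.95, 60)
bias = 0.12 * np.exp(-3.0 * rate) + rng.normal(0, 0.002, rate.size)  # metres
model = fit_walk_model(rate, bias)

# Apply to a synthetic 3D range image of a flat wall at 50 m whose pixels have
# different response rates (different reflectivities).
rate_img = rng.uniform(0.05, 0.95, (64, 64))
true_range = np.full((64, 64), 50.0)
measured = true_range + 0.12 * np.exp(-3.0 * rate_img)
corrected = correct_range_image(measured, rate_img, model)
print("RMS error before:", np.sqrt(np.mean((measured - true_range) ** 2)),
      "after:", np.sqrt(np.mean((corrected - true_range) ** 2)))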

He, Weiji; Sima, Boyu; Chen, Yunfei; Dai, Huidong; Chen, Qian; Gu, Guohua

2013-11-01

397

Synthesis of 3D Model of a Magnetic Field-Influenced Body from a Single Image  

NASA Technical Reports Server (NTRS)

A method for recovery of a 3D model of a cloud-like structure that is in motion and deforming but approximately governed by magnetic field properties is described. The method allows recovery of the model from a single intensity image in which the structure's silhouette can be observed. The method exploits envelope theory and a magnetic field model. Given one intensity image and the segmented silhouette in the image, the method proceeds without human intervention to produce the 3D model. In addition to allowing 3D model synthesis, the method's capability to yield a very compact description offers further utility. Application of the method to several real-world images is demonstrated.

Wang, Cuilan; Newman, Timothy; Gallagher, Dennis

2006-01-01

398

3D topography of biologic tissue by multiview imaging and structured light illumination  

NASA Astrophysics Data System (ADS)

Obtaining three-dimensional (3D) information of biologic tissue is important in many medical applications. This paper presents two methods for reconstructing 3D topography of biologic tissue: multiview imaging and structured light illumination. For each method, the working principle is introduced, followed by experimental validation on a diabetic foot model. To compare the performance characteristics of these two imaging methods, a coordinate measuring machine (CMM) is used as a standard control. The wound surface topography of the diabetic foot model is measured by multiview imaging and structured light illumination methods respectively and compared with the CMM measurements. The comparison results show that the structured light illumination method is a promising technique for 3D topographic imaging of biologic tissue.

Liu, Peng; Zhang, Shiwu; Xu, Ronald

2014-02-01

399

3D image copyright protection based on cellular automata transform and direct smart pixel mapping  

NASA Astrophysics Data System (ADS)

We propose a three-dimensional (3D) watermarking system with the direct smart pixel mapping algorithm to improve the resolution of the reconstructed 3D watermark plane images. The depth-converted elemental image array (EIA) is obtained through the computational pixel mapping method. In the watermark embedding process, the depth-converted EIA is first scrambled by using the Arnold transform, which is then embedded in the middle frequency of the cellular automata (CA) transform. Compared with conventional computational integral imaging reconstruction (CIIR) methods, this proposed scheme gives us a higher resolution of the reconstructed 3D plane images by using the quality-enhanced depth-converted EIA. The proposed method, which can obtain many transform planes for embedding watermark data, uses CA transforms with various gateway values. To prove the effectiveness of the proposed method, we present the results of our preliminary experiments.
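
The Arnold-transform scrambling step is standard and easy to sketch, as below: pixel coordinates of a square image are permuted by the cat map (x, y) -> (x + y, x + 2y) mod N for a keyed number of iterations, and the same count inverts it. The CA-transform embedding and the pixel-mapping stage are not reproduced here.

# Sketch of Arnold (cat map) scrambling of a square image, as used before
# watermark embedding; the iteration count acts as a key.
import numpy as np

def arnold_scramble(img, iterations):
    n = img.shape[0]
    assert img.shape[0] == img.shape[1], "Arnold map needs a square image"
    out = img.copy()
    for _ in range(iterations):
        nxt = np.empty_like(out)
        x, y = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
        nxt[(x + y) % n, (x + 2 * y) % n] = out[x, y]   # scatter by the cat map
        out = nxt
    return out

def arnold_unscramble(img, iterations):
    n = img.shape[0]
    out = img.copy()
    for _ in range(iterations):
        nxt = np.empty_like(out)
        x, y = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
        nxt[x, y] = out[(x + y) % n, (x + 2 * y) % n]   # gather = inverse map
        out = nxt
    return out

# Round-trip check on a random patch standing in for an elemental image array.
rng = np.random.default_rng(0)
eia = rng.integers(0, 256, (128, 128), dtype=np.uint8)
scrambled = arnold_scramble(eia, 7)
restored = arnold_unscramble(scrambled, 7)
print("round trip exact:", bool(np.array_equal(eia, restored)))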

Li, Xiao-Wei; Kim, Seok-Tae; Lee, In-Kwon

2014-10-01

400

Tracing of Central Serous Retinopathy from Retinal Fundus Images  

Microsoft Academic Search

Fundus images of the human retina can provide valuable information about human health. In this respect, one can systematically assess digital retinal photographs to predict various diseases, eliminating the need for manual assessment of ophthalmic images in diagnostic practice. This work studies how the changes in the retina caused by Central Serous Retinopathy (CSR) can be detected

J. David; A. Sukesh Kumar; V. Viji

401

Accurate Measurement of Satellite Antenna Surface Using 3D Digital Image Correlation Technique  

Microsoft Academic Search

Application of the three-dimensional digital image correlation technique (3D DIC) to the accurate measurement of full-field surface profile of a 730 mm-diameter carbon fibre composite satellite antenna is investigated in this article. The basic principles of the 3D DIC technique are described. The measured profile was compared with the one measured with a three-dimensional coordinate measuring machine. The results clearly

B. Pan; H. Xie; L. Yang; Z. Wang

2009-01-01

402

Connectivity-preserving parallel operators in 2D and 3D images  

Microsoft Academic Search

Connectivity preservation is a concern in the design of parallel reduction processes for 2D and 3D image processing algorithms. Algorithm designers need efficient and available connectivity preservation tasks to prove algorithm correctness. Although efficient 2D tests are known, efficient 3D tests still need to be developed. We review earlier results for 2D connectivity preservation tests and demonstrate several 'design spaces'

Richard W. Hall

1993-01-01

403

Toward 3D Vision from Range Images: An Optimization Framework and Parallel Networks  

Microsoft Academic Search

We propose a unified approach to solve low, intermediate and high level computer vision problems for 3D object recognition from range images. All three levels of computation are cast in an optimization framework and can be implemented on a neural network style architecture. In the low level computation, the tasks are to estimate curvature images from the input range data. Subsequent processing at the intermediate

Stan Z. Li

1992-01-01

404

Real-time Upper Body Detection and 3D Pose Estimation in Monoscopic Images  

E-print Network

The approach consists of two parts. Firstly, the location of a human in a monoscopic image is identified by a probabilistic assembly of detected body parts. Detectors for the face, torso and hands are learnt using ada

Bowden, Richard

405

AN APPROACH FOR INTERSUBJECT ANALYSIS OF 3D BRAIN IMAGES BASED ON CONFORMAL GEOMETRY  

E-print Network

Positron Emission Tomography (PET) and Diffusion Tensor Imaging (DTI) have accelerated brain research in many aspects. In order to better understand the synergy of the many processes involved in normal brain function

Hua, Jing

406

3-D Reconstruction of Medical Image Using Wavelet Transform and Snake Model  

E-print Network

Applications include diagnosis, virtual surgery systems, plastic and artificial limb surgery, radiotherapy planning, and teaching. Image segmentation is an important technology in the digital human body. Many

Aickelin, Uwe

407

Head Modeling from Pictures and Morphing in 3D with Image Metamorphosis based on triangulation  

E-print Network

There are various approaches to reconstruct a realistic person. Other techniques for metamorphosis, or "morphing", involve the transformation between 2D images

Lee, WonSook

408

3D imaging of diatoms with ion-abrasion scanning electron microscopy  

Microsoft Academic Search

Ion-abrasion scanning electron microscopy (IASEM) takes advantage of focused ion beams to abrade thin sections from the surface of bulk specimens, coupled with SEM to image the surface of each section, enabling 3D reconstructions of subcellular architecture at ~30 nm resolution. Here, we report the first application of IASEM for imaging a biomineralizing organism, the marine diatom Thalassiosira pseudonana. Diatoms have

Mark Hildebrand; Sang Kim; Dan Shi; Keana Scott; Sriram Subramaniam

2009-01-01

409

AUTOMATED MODELING OF 3D BUILDING ROOFS USING IMAGE AND LIDAR DATA  

E-print Network

KEY WORDS: Buildings, Multispectral classification, LiDAR data, DSM/DTM, Edge Matching. The method combines classification of multispectral images, elevation data and vertical LiDAR point density

Schindler, Konrad

410

Extraction and alignment of facial regions for 3D facial image-based recognition  

Microsoft Academic Search

Face recognition is the process of identifying a person using his image as biometric data. 3D image-based face recognition is expected to overcome many problems that are faced in traditional 2D face recognition, such as the lack of explicit shape information, pose and lighting variations. However, as a relatively new technology, it does face a number of challenges among them

Naoufel Werghi; Harish Bhaskar; Ali Hasan Ali Mohammed Malek

2011-01-01

411

Adaptive Multiresolution Non-Local Means Filter for 3D MR Image Denoising  

E-print Network

The filter is designed for denoising 3D Magnetic Resonance (MR) images. Based on an adaptive soft wavelet coefficient mixing, the proposed filter implicitly

Paris-Sud XI, Université de

412

Focal Cortical Dysplasia Segmentation in 3D Magnetic Resonance Images of the Human Brain  

E-print Network

Focal cortical dysplasia (FCD) is a malformation of cortical development, and a typical 3D MR (magnetic resonance) image contains a large amount of data. The proposed approach is applicable to MR images of children. In this work we present a method

Lewiner, Thomas

413

Pyramidal flux in an anisotropic diffusion scheme for enhancing structures in 3D images  

Microsoft Academic Search

Pyramid based methods in image processing provide a helpful framework for accelerating the propagation of information over large spatial domains, increasing the efficiency for large scale applications. Combined with an anisotropic diffusion scheme tailored to preserve the boundaries at a given level, an efficient way for enhancing large structures in 3D images is presented. In our approach, the partial differential

Oscar Acosta; Hans Frimmel; Aaron Fenster; Olivier Salvado; Sébastien Ourselin

2008-01-01

414

DATABASE GUIDED DETECTION OF ANATOMICAL LANDMARK POINTS IN 3D IMAGES OF THE HEART  

E-print Network

Related work addresses the detection and measurement of fetal structures in ultrasound images; Lu et al. [5] presented a classification-based method using several feature types and steerable features [6]. In our study, a classification based method was developed

415

Characterizing and reducing crosstalk in printed anaglyph stereoscopic 3D images  

NASA Astrophysics Data System (ADS)

The anaglyph three-dimensional (3D) method is a widely used technique for presenting stereoscopic 3D images. Its primary advantages are that it will work on any full-color display and only requires that the user view the anaglyph image using a pair of anaglyph 3D glasses with usually one lens tinted red and the other lens tinted cyan. A common image quality problem of anaglyph 3D images is high levels of crosstalk-the incomplete isolation of the left and right image channels such that each eye sees a "ghost" of the opposite perspective view. In printed anaglyph images, the crosstalk levels are often very high-much higher than when anaglyph images are presented on emissive displays. The sources of crosstalk in printed anaglyph images are described and a simulation model is developed that allows the amount of printed anaglyph crosstalk to be estimated based on the spectral characteristics of the light source, paper, ink set, and anaglyph glasses. The model is validated using a visual crosstalk ranking test, which indicates good agreement. The model is then used to consider scenarios for the reduction of crosstalk in printed anaglyph systems and finds a number of options that are likely to reduce crosstalk considerably.
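
A simplified version of such a spectral model is sketched below: the flux reaching an eye is the wavelength-wise product of illuminant, printed-ink reflectance and lens transmission, and crosstalk is the ratio of unintended to intended integrated flux. All spectra in the sketch are synthetic placeholders, not measured source, paper, ink or filter data.

# Simplified spectral estimate in the spirit of such a model: integrate
# illuminant x ink reflectance x lens transmission over wavelength and report
# the leakage ratio seen through each lens.
import numpy as np

wl = np.arange(400, 701, 5, dtype=float)             # wavelength grid in nm

def band(center, width):
    """Smooth synthetic spectral band (Gaussian-shaped, peak 1)."""
    return np.exp(-0.5 * ((wl - center) / width) ** 2)

illuminant = np.ones_like(wl)                         # flat "white" light source
cyan_ink = 0.05 + 0.9 * band(480, 60)                 # synthetic ink reflectance
red_ink = 0.05 + 0.9 * band(630, 50)                  # synthetic ink reflectance
red_lens = 0.02 + 0.95 * band(640, 40)                # synthetic filter transmission
cyan_lens = 0.02 + 0.95 * band(490, 50)               # synthetic filter transmission

def flux(reflectance, lens):
    """Integrated flux seen through a lens from paper printed with one ink."""
    return np.trapz(illuminant * reflectance * lens, wl)

for lens_name, lens in (("red lens", red_lens), ("cyan lens", cyan_lens)):
    through = {name: flux(refl, lens) for name, refl in
               (("cyan ink", cyan_ink), ("red ink", red_ink))}
    blocked, passed = sorted(through, key=through.get)
    print(f"{lens_name}: leakage {100 * through[blocked] / through[passed]:.1f}% "
          f"({blocked} relative to {passed})")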

Woods, Andrew J.; Harris, Chris R.; Leggo, Dean B.; Rourke, Tegan M.

2013-04-01

416

PARAMETRIC REGRESSION OF 3D MEDICAL IMAGES THROUGH THE EXPLORATION OF NON-PARAMETRIC REGRESSION MODELS  

E-print Network

As predictors, the regression model uses patient-specific metadata (e.g. age, weight, body mass index, etc.). [Figure panels: (a) at the greater trochanter; (b) end point between condyles; (c) CCD angle.] Currently in medical image regression

Paris-Sud XI, Université de

417