Sample records for 3-d retinal imaging

  1. Exact surface registration of retinal surfaces from 3-D optical coherence tomography images.

    PubMed

    Lee, Sieun; Lebed, Evgeniy; Sarunic, Marinko V; Beg, Mirza Faisal

    2015-02-01

    Nonrigid registration of optical coherence tomography (OCT) images is an important problem in studying eye diseases, evaluating the effect of pharmaceuticals in treating vision loss, and performing group-wise cross-sectional analysis. High dimensional nonrigid registration algorithms required for cross-sectional and longitudinal analysis are still being developed for accurate registration of OCT image volumes, with the speckle noise in images presenting a challenge for registration. Development of algorithms for segmentation of OCT images to generate surface models of retinal layers has advanced considerably and several algorithms are now available that can segment retinal OCT images into constituent retinal surfaces. Important morphometric measurements can be extracted if accurate surface registration algorithm for registering retinal surfaces onto corresponding template surfaces were available. In this paper, we present a novel method to perform multiple and simultaneous retinal surface registration, targeted to registering surfaces extracted from ocular volumetric OCT images. This enables a point-to-point correspondence (homology) between template and subject surfaces, allowing for a direct, vertex-wise comparison of morphometric measurements across subject groups. We demonstrate that this approach can be used to localize and analyze regional changes in choroidal and nerve fiber layer thickness among healthy and glaucomatous subjects, allowing for cross-sectional population wise analysis. We also demonstrate the method's ability to track longitudinal changes in optic nerve head morphometry, allowing for within-individual tracking of morphometric changes. This method can also, in the future, be used as a precursor to 3-D OCT image registration to better initialize nonrigid image registration algorithms closer to the desired solution. PMID:25312906
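
    The vertex-wise homology described above makes group comparison a per-vertex operation. A minimal sketch of that kind of comparison, assuming the surfaces have already been registered to a common template and reduced to per-vertex thickness arrays; the data here are synthetic and the function name is hypothetical:

```python
import numpy as np
from scipy import stats

def vertexwise_group_comparison(group_a, group_b):
    """Per-vertex comparison of a morphometric measure between two groups.

    group_a, group_b: arrays of shape (n_subjects, n_vertices) holding, e.g.,
    nerve fiber layer or choroidal thickness sampled at homologous vertices
    of the registered template surface.
    Returns per-vertex Welch t statistics and p-values.
    """
    t, p = stats.ttest_ind(group_a, group_b, axis=0, equal_var=False)
    return t, p

# Synthetic example: 10 healthy vs 8 glaucomatous subjects, 5000 vertices.
rng = np.random.default_rng(0)
healthy = 100 + 5 * rng.standard_normal((10, 5000))
glaucoma = 95 + 5 * rng.standard_normal((8, 5000))
t, p = vertexwise_group_comparison(healthy, glaucoma)
print("vertices with p < 0.01:", int(np.sum(p < 0.01)))
```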

  2. Probabilistic intra-retinal layer segmentation in 3-D OCT images using global shape regularization.

    PubMed

    Rathke, Fabian; Schmidt, Stefan; Schnörr, Christoph

    2014-07-01

    With the introduction of spectral-domain optical coherence tomography (OCT), resulting in a significant increase in acquisition speed, the fast and accurate segmentation of 3-D OCT scans has become ever more important. This paper presents a novel probabilistic approach that models the appearance of retinal layers as well as the global shape variations of layer boundaries. Given an OCT scan, the full posterior distribution over segmentations is approximately inferred using a variational method enabling efficient probabilistic inference in terms of computationally tractable model components: Segmenting a full 3-D volume takes around a minute. Accurate segmentations demonstrate the benefit of using global shape regularization: We segmented 35 fovea-centered 3-D volumes with an average unsigned error of 2.46 ± 0.22 µm as well as 80 normal and 66 glaucomatous 2-D circular scans with errors of 2.92 ± 0.5 µm and 4.09 ± 0.98 µm respectively. Furthermore, we utilized the inferred posterior distribution to rate the quality of the segmentation, point out potentially erroneous regions and discriminate normal from pathological scans. No pre- or postprocessing was required and we used the same set of parameters for all data sets, underlining the robustness and out-of-the-box nature of our approach. PMID:24835184

  3. Adaptive-optics optical coherence tomography for high-resolution and high-speed 3D retinal in vivo imaging

    Microsoft Academic Search

    Robert J. Zawadzki; Steven M. Jones; Scot S. Olivier; Mingtao Zhao; Bradley A. Bower; Joseph A. Izatt; Stacey Choi; Sophie Laut; John S. Werner

    2005-01-01

    We have combined Fourier-domain optical coherence tomography (FD-OCT) with a closed-loop adaptive optics (AO) system using a Hartmann-Shack wavefront sensor and a bimorph deformable mirror. The adaptive optics system measures and corrects the wavefront aberration of the human eye for improved lateral resolution (~4 µm) of retinal images, while maintaining the high axial resolution (~6 µm) of stand-alone OCT.

  4. 3D Imaging.

    ERIC Educational Resources Information Center

    Hastings, S. K.

    2002-01-01

    Discusses 3-D imaging as it relates to digital representations in virtual library collections. Highlights include X-ray computed tomography (X-ray CT); the National Science Foundation (NSF) Digital Library Initiatives; output peripherals; image retrieval systems, including metadata; and applications of 3-D imaging for libraries and museums. (LRW)

  5. Velocity-resolved 3D retinal microvessel imaging using single-pass flow imaging spectral domain optical coherence tomography

    PubMed Central

    Tao, Yuankai K.; Kennedy, Kristen M.; Izatt, Joseph A.

    2009-01-01

    We demonstrate in vivo velocity-resolved, volumetric bidirectional blood flow imaging in human retina using single-pass flow imaging spectral domain optical coherence tomography (SPFI-SDOCT). This technique uses previously described methods for separating moving and non-moving scatterers within a depth by using a modified Hilbert transform. Additionally, a moving spatial frequency window is applied, creating a stack of depth-resolved images of moving scatterers, each representing a finite velocity range. The resulting velocity reconstruction is validated with and strongly correlated to velocities measured with conventional Doppler OCT in flow phantoms. In vivo velocity-resolved flow mapping is acquired in healthy human retina and demonstrates the measurement of vessel size, peak velocity, and total foveal blood flow with OCT. PMID:19259254
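
    The abstract describes separating moving from non-moving scatterers with a modified Hilbert transform and a moving spatial-frequency window. The sketch below only illustrates the underlying idea of windowing transverse spatial frequencies in a complex B-scan; it is not the authors' SPFI-SDOCT processing chain, and all names and data are illustrative:

```python
import numpy as np

def transverse_frequency_window(bscan, f_lo, f_hi):
    """Keep one band of transverse spatial frequencies in a complex B-scan.

    bscan: complex array (depth, n_ascans).
    f_lo, f_hi: frequency band (cycles per A-scan) associated with one finite
    velocity range; returns the complex image of scatterers in that band.
    """
    spectrum = np.fft.fft(bscan, axis=1)
    freqs = np.fft.fftfreq(bscan.shape[1])
    band = (freqs >= f_lo) & (freqs < f_hi)
    return np.fft.ifft(spectrum * band[np.newaxis, :], axis=1)

# Synthetic test: static speckle plus a component carrying a transverse
# phase ramp, a crude stand-in for a Doppler (flow) signature.
rng = np.random.default_rng(1)
depth, width = 256, 512
static = rng.standard_normal((depth, width)).astype(complex)
flow = 5.0 * np.exp(2j * np.pi * 0.2 * np.arange(width))[np.newaxis, :]
flow_only = transverse_frequency_window(static + flow, 0.15, 0.25)
print("mean |flow image|:", float(np.abs(flow_only).mean()))
```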

  6. Segmentation for visualizing and measuring specific retinal layers in 3D spectral domain optical coherence tomography (SD-OCT) images

    E-print Network

    Miller, Gary L.

    A method to segment retinal layers in three-dimensional spectral domain optical coherence tomography (3D SDOCT) images and to evaluate its performance, in contrast to approaches such as adaptive contours, which tend to fail in the presence of retinal pathology.

  7. 3-D threat image projection

    NASA Astrophysics Data System (ADS)

    Yildiz, Yesna O.; Abraham, Douglas Q.; Agaian, Sos; Panetta, Karen

    2008-02-01

    Automated Explosive Detection Systems utilizing Computed Tomography perform a series of X-ray scans of passenger bags being checked in at the airport, and produce various 2-D projection images and 3-D volumetric images of each bag. The determination as to whether the passenger bag contains an explosive and needs to be searched manually is performed by trained Transportation Security Administration screeners following an approved protocol. In order to keep the screeners vigilant with regard to screening quality, the Transportation Security Administration has mandated the use of Threat Image Projection on the 2-D projection X-ray screening equipment used at all US airports. These algorithms insert artificial visual threats into images of normal passenger bags in order to test the screeners' efficiency and quality in determining threats. This technology for 2-D X-ray systems is proven and is widespread among multiple manufacturers of X-ray projection systems. Until now, Threat Image Projection has been unsuccessful at being introduced into 3-D Automated Explosive Detection Systems for numerous reasons. The failure of these prior attempts is mainly due to imaging cues that the screeners pick up on, making it easy for them to discern the presence of the threat image and thus defeating the intended purpose. This paper presents a novel approach for 3-D Threat Image Projection for 3-D Automated Explosive Detection Systems. The method presented here is a projection-based approach where both the threat object and the bag remain in projection sinogram space. Novel approaches have been developed for projection-based object segmentation, projection-based streak reduction used for threat object isolation along with scan orientation independence, and projection-based streak generation for an overall realistic 3-D image. The algorithms are prototyped in MATLAB and C++ and demonstrate non-discernible 3-D threat image insertion into various luggage, and non-discernible streak patterns for 3-D images when compared to actual scanned images.
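
    The key point of the record above is that the threat and the bag are combined in projection (sinogram) space rather than in the reconstructed image. A hedged 2-D analogue using the Radon transform from skimage, with toy phantoms standing in for real CT data; this is not the paper's algorithm, only the general idea:

```python
import numpy as np
from skimage.transform import radon, iradon

def insert_threat_in_sinogram(bag, threat, angles):
    """2-D analogue of projection-space threat image insertion.

    Both images are forward-projected; the threat sinogram is added to the
    bag sinogram; the combined sinogram is reconstructed.  The insertion
    therefore happens entirely in projection space, not in image space.
    """
    bag_sino = radon(bag, theta=angles, circle=False)
    threat_sino = radon(threat, theta=angles, circle=False)
    return iradon(bag_sino + threat_sino, theta=angles, circle=False)

# Toy phantoms: a "suitcase" and a small dense block as the threat object.
angles = np.linspace(0.0, 180.0, 180, endpoint=False)
bag = np.zeros((128, 128)); bag[30:100, 20:110] = 1.0
threat = np.zeros((128, 128)); threat[60:75, 60:75] = 2.0
fused = insert_threat_in_sinogram(bag, threat, angles)
print(fused.shape)
```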

  8. 3-D threat image projection

    Microsoft Academic Search

    Yesna O. Yildiz; Douglas Q. Abraham; Sos Agaian; Karen Panetta

    2008-01-01

    Automated Explosive Detection Systems utilizing Computed Tomography perform a series of X-ray scans of passenger bags being checked in at the airport, and produce various 2-D projection images and 3-D volumetric images of the bag. The determination as to whether the passenger bag contains an explosive and needs to be searched manually is performed by trained Transportation Security Administration screeners following

  9. Static 3D image space

    NASA Astrophysics Data System (ADS)

    Koudsi, Badia; Sluss, Jim J., Jr.

    2010-02-01

    As three-dimensional (3D) techniques continue to evolve from their humble beginnings in nineteenth-century stereo photographs and twentieth-century movies and holographs, the urgency for advancement in 3D display is escalating, as the need for widespread application in medical imaging, baggage scanning, gaming, television and movie display, and military strategizing increases. The most recent 3D developments center upon volumetric displays, which generate 3D images within actual 3D space. More specifically, the CSpace volumetric display generates a truly natural 3D image consisting of perceived width, height, and depth within the confines of physical space. Wireframe graphics give viewers a 360-degree view without the use of additional visual aids. In this paper, research detailing the selection and testing of several rare-earth, single-doped fluoride crystals, namely 1%Er:NYF4, 2%Er:NYF4, 3%Er:NYF4, 2%Er:KY3F10, and 2%Er:YLF, is introduced. These materials are the basis for CSpace display in a two-step, two-frequency up-conversion process. Significant determinants were tested and identified to aid in the selection of a suitable medium. Results show that 2%Er:NYF4 demonstrates good emitted optical power. Its superior level of brightness makes it the most suitable candidate for CSpace display. Testing also showed that the 2%Er:KY3F10 crystal might be a viable medium.

  10. Retinal imaging in uveitis

    PubMed Central

    Gupta, Vishali; Al-Dhibi, Hassan A.; Arevalo, J. Fernando

    2014-01-01

    Ancillary investigations are the backbone of the uveitis workup for posterior segment inflammations. They help in establishing the differential diagnosis, confirming a diagnosis by ruling out certain pathologies, and serve as a useful aid in monitoring response to therapy during follow-up. These investigations include fundus photography, ultra-wide-field angiography, fundus autofluorescence imaging, fluorescein angiography, optical coherence tomography, and multimodal imaging. This review provides an overview of the role of these retinal investigations in posterior uveitis. PMID:24843301

  11. 3D Computational Ghost Imaging

    E-print Network

    Sun, Baoqing; Bowman, Richard; Vittert, Liberty E; Welsh, Stephen S; Bowman, Ardrian; Padgett, Miles J

    2013-01-01

    Computational ghost imaging retrieves the spatial information of a scene using a single pixel detector. By projecting a series of known random patterns and measuring the back reflected intensity for each one, it is possible to reconstruct a 2D image of the scene. In this work we overcome previous limitations of computational ghost imaging and capture the 3D spatial form of an object by using several single pixel detectors in different locations. From each detector we derive a 2D image of the object that appears to be illuminated from a different direction, using only a single digital projector as illumination. Comparing the shading of the images allows the surface gradient and hence the 3D form of the object to be reconstructed. We compare our result to that obtained from a stereo-photogrammetric system utilizing multiple high resolution cameras. Our low cost approach is compatible with consumer applications and can readily be extended to non-visible wavebands.

  12. 3D Computational Ghost Imaging

    E-print Network

    Baoqing Sun; Matthew P. Edgar; Richard Bowman; Liberty E. Vittert; Stephen S. Welsh; Ardrian Bowman; Miles J. Padgett

    2013-05-15

    Computational ghost imaging retrieves the spatial information of a scene using a single pixel detector. By projecting a series of known random patterns and measuring the back reflected intensity for each one, it is possible to reconstruct a 2D image of the scene. In this work we overcome previous limitations of computational ghost imaging and capture the 3D spatial form of an object by using several single pixel detectors in different locations. From each detector we derive a 2D image of the object that appears to be illuminated from a different direction, using only a single digital projector as illumination. Comparing the shading of the images allows the surface gradient and hence the 3D form of the object to be reconstructed. We compare our result to that obtained from a stereo-photogrammetric system utilizing multiple high resolution cameras. Our low cost approach is compatible with consumer applications and can readily be extended to non-visible wavebands.
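
    Both records describe the same correlation-based reconstruction: each random pattern is weighted by its single-pixel (bucket) reading. A minimal sketch of that reconstruction step on synthetic data (the photometric-stereo step that recovers the 3D form from several detectors is not shown):

```python
import numpy as np

def ghost_image(patterns, signals):
    """Correlation-based computational ghost imaging reconstruction.

    patterns: (n, h, w) projected random patterns.
    signals:  (n,) single-pixel (bucket) detector readings, one per pattern.
    Returns the covariance between the detector signal and each pattern
    pixel, which is an estimate of the scene reflectivity.
    """
    centred = signals - signals.mean()
    return np.tensordot(centred, patterns - patterns.mean(axis=0), axes=1) / len(signals)

# Synthetic scene and ideal bucket detector.
rng = np.random.default_rng(2)
scene = np.zeros((32, 32)); scene[8:24, 8:24] = 1.0
patterns = rng.integers(0, 2, size=(4000, 32, 32)).astype(float)
signals = (patterns * scene).sum(axis=(1, 2))
recon = ghost_image(patterns, signals)
print("correlation with scene:", round(float(np.corrcoef(recon.ravel(), scene.ravel())[0, 1]), 3))
```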

  13. Consistent stylization of stereoscopic 3D images

    Microsoft Academic Search

    Lesley Northam; Paul Asente; Craig S. Kaplan

    2012-01-01

    The application of stylization filters to photographs is common, Instagram being a popular recent example. These image manipulation applications work well for 2D images. However, stereoscopic 3D cameras are increasingly available to consumers (Nintendo 3DS, Fuji W3 3D, HTC Evo 3D). How will users apply these same stylizations to stereoscopic images?

  14. 3D Imaging Technology Conference & Applications Workshop

    E-print Network

    Aristomenis, Antoniadis

    2nd London 3D Imaging Technology Conference & Applications Workshop (contact: bilalis@dpem.tuc.gr, Greece). Abstract: the new 3D scanning technology has changed working practice and opened new possibilities, illustrated by 3D scanning approaches applied for the first time in the southern part of Europe.

  15. Two-photon in vivo imaging of retinal microstructures

    NASA Astrophysics Data System (ADS)

    Schejter, Adi; Farah, Nairouz; Shoham, Shy

    2014-02-01

    Non-invasive fluorescence retinal imaging in small animals is an important requirement in an array of translational vision applications. Two-photon imaging has the potential for long-term investigation of healthy and diseased retinal function and structure in vivo. Here, we demonstrate that two-photon microscopy through a mouse's pupil can yield high-quality optically sectioned fundus images. By remotely scanning using an electronically tunable lens we acquire highly-resolved 3D fluorescein angiograms. These results provide an important step towards various applications that will benefit from the use of infrared light, including functional imaging of retinal responses to light stimulation.

  16. 3D Modeling From 2D Images

    Microsoft Academic Search

    Lana Madracevic; Stjepan Sogoric

    2010-01-01

    This article will give an overview of the methods of transition from the set of images into 3D model. Direct method of creating 3D model using 3D software will be described. Creating photorealistic 3D models from a set of photographs is challenging problem in computer vision because the technology is still in its development stage while the demands for 3D

  17. Analysis of posterior retinal layers in spectral optical coherence tomography images of the normal retina and retinal pathologies

    E-print Network

    Szkulmowski, Maciej

    We present a computationally efficient, semiautomated method for analysis of posterior retinal layers in three-dimensional (3-D) images obtained by spectral optical coherence tomography (SOCT). The method consists of two ...

  18. 3D Remote sensing images online refining

    Microsoft Academic Search

    Hengjian Tong; Yun Zhang; Zhenfeng Shao

    2009-01-01

    Depth perception in binocular vision is based on horizontal parallax. However, if horizontal parallax is outside a certain range, for example, if parallax is larger than the eye separation, then diplopia occurs and the 3D depth illusion collapses. Errors may be produced during automatic image matching, epipolar image resampling, and color 3D anaglyph image generation. Moreover, because

  19. Retinal Image Quality During Accommodation

    PubMed Central

    López-Gil, N.; Martin, J.; Liu, T.; Bradley, A.; Díaz-Muñoz, D.; Thibos, L.

    2013-01-01

    Purpose We asked if retinal image quality is maximum during accommodation, or sub-optimal due to accommodative error, when subjects perform an acuity task. Methods Subjects viewed a monochromatic (552 nm), high-contrast letter target placed at various viewing distances. Wavefront aberrations of the accommodating eye were measured near the endpoint of an acuity staircase paradigm. Refractive state, defined as the optimum target vergence for maximising retinal image quality, was computed by through-focus wavefront analysis to find the power of the virtual correcting lens that maximizes the visual Strehl ratio. Results Despite changes in ocular aberrations and pupil size during binocular viewing, retinal image quality and visual acuity typically remain high for all target vergences. When accommodative errors lead to sub-optimal retinal image quality, acuity and measured image quality both decline. However, the effect of accommodative errors on visual acuity is mitigated by pupillary constriction associated with accommodation and binocular convergence, and also by binocular summation of dissimilar retinal image blur. Under monocular viewing conditions some subjects displayed significant accommodative lag that reduced visual performance, an effect that was exacerbated by pharmacological dilation of the pupil. Conclusions Spurious measurement of accommodative error can be avoided when the image quality metric used to determine refractive state is compatible with the focusing criteria used by the visual system to control accommodation. Real focusing errors of the accommodating eye do not necessarily produce a reliably measurable loss of image quality or clinically significant loss of visual performance, probably because of increased depth-of-focus due to pupil constriction. When retinal image quality is close to the maximum achievable (given the eye's higher-order aberrations), acuity is also near maximum. A combination of accommodative lag, reduced image quality, and reduced visual function may be a useful sign for diagnosing functionally significant accommodative errors indicating the need for therapeutic intervention. PMID:23786386
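
    The refractive state in this study is found by through-focus analysis of a retinal image-quality metric. The sketch below uses a much simpler Strehl-like metric (peak PSF ratio from a defocused circular pupil) rather than the visual Strehl ratio used in the paper; pupil size, wavelength and the defocus model are illustrative assumptions:

```python
import numpy as np

def strehl_vs_defocus(defocus_diopters, pupil_diam_mm=4.0, wavelength_um=0.552, n=256):
    """Peak-PSF ("Strehl-like") image quality versus trial defocus.

    Builds a circular pupil, adds a defocus phase for each trial lens power
    (in diopters) and returns peak(PSF) / peak(diffraction-limited PSF).
    """
    half = pupil_diam_mm / 2.0
    x = np.linspace(-half, half, n)
    xx, yy = np.meshgrid(x, x)
    rho2_mm2 = xx**2 + yy**2
    aperture = rho2_mm2 <= half**2
    wavelength_mm = wavelength_um * 1e-3

    def peak_psf(defocus_d):
        # Defocus wavefront error W = 0.5 * D * r^2, with D converted to 1/mm.
        w_mm = 0.5 * (defocus_d * 1e-3) * rho2_mm2
        pupil = aperture * np.exp(2j * np.pi * w_mm / wavelength_mm)
        return (np.abs(np.fft.fft2(pupil)) ** 2).max()

    ideal = peak_psf(0.0)
    return np.array([peak_psf(d) / ideal for d in defocus_diopters])

defocus = np.linspace(-1.0, 1.0, 21)           # trial corrections in diopters
curve = strehl_vs_defocus(defocus)
print("metric is maximized at %.2f D" % defocus[curve.argmax()])
```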

  20. Topological Repairing of 3D Digital Images

    Microsoft Academic Search

    Marcelo Siqueira; Longin Jan Latecki; Nicholas J. Tustison; Jean H. Gallier; James C. Gee

    2008-01-01

    We present here a new randomized algorithm for repairing the topology of objects represented by 3D binary digital images. By "repairing the topology", we mean a systematic way of modifying a given binary image in order to produce a similar binary image which is guaranteed to be well-composed. A 3D binary digital image is said to be well-composed if, and only

  1. Automated three-dimensional choroidal vessel segmentation of 3D 1060 nm OCT retinal data.

    PubMed

    Kajić, Vedran; Esmaeelpour, Marieh; Glittenberg, Carl; Kraus, Martin F; Honegger, Joachim; Othara, Richu; Binder, Susanne; Fujimoto, James G; Drexler, Wolfgang

    2013-01-01

    A fully automated, robust vessel segmentation algorithm has been developed for choroidal OCT, employing multiscale 3D edge filtering and projection of "probability cones" to determine the vessel "core", even in tomograms with low signal-to-noise ratio (SNR). Based on the ideal vessel response after registration and multiscale filtering, with computed depth-related SNR, the vessel core estimate is dilated to quantify the full vessel diameter. As a consequence, various statistics can be computed using the 3D choroidal vessel information, such as ratios of inner (smaller) to outer (larger) choroidal vessels or the absolute/relative volume of choroid vessels. Choroidal vessel quantification can be displayed in various forms, focused and averaged within a special region of interest, or analyzed as a function of image depth. In this way, the proposed algorithm enables unique visualization of choroidal watershed zones, as well as the vessel size reduction when investigating the choroid from the sclera towards the retinal pigment epithelium (RPE). To the best of our knowledge, this is the first time that an automatic choroidal vessel segmentation algorithm has been successfully applied to 1060 nm 3D OCT of healthy and diseased eyes. PMID:23304653
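
    As a rough stand-in for the multiscale 3-D edge filtering and vessel-core estimation described above, the sketch below applies multiscale Frangi vesselness to a small synthetic volume and reports a relative vessel volume; the filter choice, scales and threshold are assumptions, not the authors' algorithm:

```python
import numpy as np
from skimage.filters import frangi

def choroidal_vessel_mask(volume, sigmas=(1, 2, 4), threshold=0.05):
    """Segment dark tubular (vessel-like) structures in a 3-D sub-volume.

    Multiscale Frangi vesselness is used here as a generic stand-in for the
    multiscale 3-D edge filtering of the paper; the response is thresholded
    to give a binary vessel mask.  The threshold is purely illustrative.
    """
    response = frangi(volume.astype(float), sigmas=sigmas, black_ridges=True)
    mask = response > threshold
    return mask, float(mask.mean())          # mask and relative vessel volume

# Synthetic volume with one dark tube (choroidal vessels are hyporeflective).
vol = np.ones((32, 64, 64))
vol[14:18, :, 30:34] = 0.2
mask, vessel_fraction = choroidal_vessel_mask(vol)
print("relative vessel volume: %.3f" % vessel_fraction)
```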

  2. Intra-retinal layer segmentation of 3D optical coherence tomography using coarse grained diffusion map

    PubMed Central

    Kafieh, Raheleh; Rabbani, Hossein; Abramoff, Michael D.; Sonka, Milan

    2013-01-01

    Optical coherence tomography (OCT) is a powerful and noninvasive method for retinal imaging. In this paper, we introduce a fast segmentation method based on a new variant of spectral graph theory named diffusion maps. The research is performed on spectral domain (SD) OCT images depicting macular and optic nerve head appearance. The presented approach does not require edge-based image information in localizing most of the boundaries and relies on regional image texture. Consequently, the proposed method demonstrates robustness in situations of low image contrast or poor layer-to-layer image gradients. Diffusion mapping applied to 2D and 3D OCT datasets is composed of two steps, one for partitioning the data into important and less important sections, and another one for localization of internal layers. In the first step, the pixels/voxels are grouped in rectangular/cubic sets to form a graph node. The weights of the graph are calculated based on geometric distances between pixels/voxels and differences of their mean intensity. The first diffusion map clusters the data into three parts, the second of which is the area of interest. The other two sections are eliminated from the remaining calculations. In the second step, the remaining area is subjected to another diffusion map assessment and the internal layers are localized based on their textural similarities. The proposed method was tested on 23 datasets from two patient groups (glaucoma and normal). The mean unsigned border positioning errors (mean ± SD) were 8.52 ± 3.13 and 7.56 ± 2.95 µm for the 2D and 3D methods, respectively. PMID:23837966
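
    The core of the method is a graph built from pixel/voxel groups, with weights combining geometric distance and mean-intensity difference, followed by a diffusion-map embedding and clustering into three parts. A toy sketch of that graph-and-embedding step; node layout, kernel widths and the k-means step are illustrative assumptions, not the paper's implementation:

```python
import numpy as np
from scipy.linalg import eigh
from scipy.spatial.distance import cdist
from sklearn.cluster import KMeans

def diffusion_map_clusters(coords, intensities, n_clusters=3,
                           sigma_geom=5.0, sigma_int=0.1, n_components=2):
    """Toy diffusion-map clustering of pixel-group nodes.

    coords:      (n_nodes, 2) centers of the rectangular pixel groups.
    intensities: (n_nodes,) mean intensity of each group.
    The affinity combines geometric distance and intensity difference; the
    leading non-trivial eigenvectors of the normalized affinity give the
    embedding, which is clustered with k-means.
    """
    d_geom = cdist(coords, coords)
    d_int = np.abs(intensities[:, None] - intensities[None, :])
    w = np.exp(-(d_geom / sigma_geom) ** 2) * np.exp(-(d_int / sigma_int) ** 2)
    deg = w.sum(axis=1)
    a = w / np.sqrt(np.outer(deg, deg))            # symmetric normalization
    _, vecs = eigh(a)                              # eigenvalues in ascending order
    embedding = vecs[:, -(n_components + 1):-1]    # skip the trivial top eigenvector
    return KMeans(n_clusters=n_clusters, n_init=10).fit_predict(embedding)

# Nodes along a line with three intensity "layers"; expect three clusters.
n = 90
coords = np.column_stack([np.arange(n, dtype=float), np.zeros(n)])
intensities = np.repeat([0.2, 0.8, 0.4], n // 3)
print(diffusion_map_clusters(coords, intensities))
```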

  3. Nonlaser-based 3D surface imaging

    SciTech Connect

    Lu, Shin-yee; Johnson, R.K.; Sherwood, R.J. [Lawrence Livermore National Lab., CA (United States)]

    1994-11-15

    3D surface imaging refers to methods that generate a 3D surface representation of objects of a scene under viewing. Laser-based 3D surface imaging systems are commonly used in manufacturing, robotics and biomedical research. Although laser-based systems provide satisfactory solutions for most applications, there are situations where non-laser-based approaches are preferred. The issues that make alternative methods sometimes more attractive are: (1) real-time data capturing, (2) eye-safety, (3) portability, and (4) work distance. The focus of this presentation is on generating a 3D surface from multiple 2D projected images using CCD cameras, without a laser light source. Two methods are presented: stereo vision and depth-from-focus. Their applications are described.
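
    Of the two non-laser approaches mentioned, depth-from-focus is easy to sketch: compute a per-pixel focus measure across a focal stack and take the focus index that maximizes it. A minimal version with a squared-Laplacian sharpness measure; the focus measure, window size and synthetic test are assumptions:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, laplace, uniform_filter

def depth_from_focus(stack, window=9):
    """Per-pixel depth index from a focal stack.

    stack: (n_focus, h, w) images taken at increasing focus distances.
    The focus measure is a locally averaged squared Laplacian; the depth map
    is the focus index that maximizes it at each pixel.
    """
    sharpness = np.stack([uniform_filter(laplace(img.astype(float)) ** 2, size=window)
                          for img in stack])
    return sharpness.argmax(axis=0)

# Sanity check: a texture blurred progressively; slice 0 should win everywhere.
rng = np.random.default_rng(3)
texture = rng.standard_normal((64, 64))
stack = np.stack([gaussian_filter(texture, s) for s in (0.5, 1.5, 3.0)])
print("most pixels pick slice", int(np.bincount(depth_from_focus(stack).ravel()).argmax()))
```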

  4. Image Analysis for Automatic Phenotyping: 3D Imaging

    E-print Network

    Glasbey, Chris

    Slide fragments: recovering the 3D geometry of a bottle for comparisons in size and shape; a high-resolution colour laser scanner with a polychromatic RGB laser source.

  5. Interactive 3-D Patient-Image Registration

    Microsoft Academic Search

    Charles A. Pelizzari; K. K. Tan; David N. Levin; George T. Y. Chen; J. Balter

    1991-01-01

    A method has been developed which allows accurate registration of 3D image data sets of the head, such as CT or MRI, with the anatomy of the actual patient. Once registration is accomplished, the patient and image spaces may be interactively explored, and any point or volume of interest in either space instantly transformed to the other. This paper

  6. Acquisition and applications of 3D images

    NASA Astrophysics Data System (ADS)

    Sterian, Paul; Mocanu, Elena

    2007-08-01

    The moiré fringes method and its analysis, up to medical and entertainment applications, are discussed in this paper. We describe the procedure of capturing 3D images with an Inspeck Camera, a real-time 3D shape acquisition system based on structured light techniques. The method is a high-resolution one. After processing the images using a computer, we can use the data to create fashionable objects by engraving them with a Q-switched Nd:YAG laser. In the medical field we mention plastic surgery and the replacement of X-ray imaging, especially in pediatric use.

  7. Locating the Optic Disc in Retinal Images

    Microsoft Academic Search

    Mira Park; Jesse S. Jin; Suhuai Luo

    2006-01-01

    We present a method to automatically outline the optic disc in a retinal image. Our method for finding the optic disc is based on the properties of the optic disc using simple image processing algorithms which include thresholding, detection of object roundness and circle detection by Hough transformation. Our method is able to recognize the retinal images with general properties

  8. Model Based Segmentation for Retinal Fundus Images

    E-print Network

    Bhalerao, Abhir

    Measures the size of the optic disc and fovea and estimates the leakage of blood into the retina (exudates). Presents a method for detecting and measuring the vascular structures of retinal images. Features

  9. 3D seismic imaging, example of 3D area in the middle of Banat

    Microsoft Academic Search

    S. Antic

    2009-01-01

    3D seismic imaging was carried out in the 3D seismic volume situated in the middle of the Banat region in Serbia. The 3D area is about 300 square kilometers. The aim of the 3D investigation was defining geological structures and tectonics, especially in the Mesozoic complex. The investigation objects are located at depths from 2000 to 3000 m. There are a number of wells

  10. 3D approaches in paleoanthropology using geometric morphometrics (3D Imaging Symposium)

    E-print Network

    Delson, Eric

    3D Imaging Symposium: 3D approaches in paleoanthropology using geometric morphometrics. ROSENBERGER, Alfred, Brooklyn College/CUNY, Brooklyn, NY. The emergence of 3D GM (geometric morphometrics) has made it possible to easily collect data in a true 3D sense, such as sets of homologous landmarks or complete

  11. 3D MR imaging in real time

    NASA Astrophysics Data System (ADS)

    Guttman, Michael A.; McVeigh, Elliot R.

    2001-05-01

    A system has been developed to produce live 3D volume renderings from an MR scanner. Whereas real-time 2D MR imaging has been demonstrated by several groups, 3D volumes are currently rendered off-line to gain greater understanding of anatomical structures. For example, surgical planning is sometimes performed by viewing 2D images or 3D renderings from previously acquired image data. A disadvantage of this approach is misregistration which could occur if the anatomy changes due to normal muscle contractions or surgical manipulation. The ability to produce volume renderings in real-time and present them in the magnet room could eliminate this problem, and enable or benefit other types of interventional procedures. The system uses the data stream generated by a fast 2D multi-slice pulse sequence to update a volume rendering immediately after a new slice is available. We demonstrate some basic types of user interaction with the rendering during imaging at a rate of up to 20 frames per second.

  12. Texture anisotropy in 3-D images

    Microsoft Academic Search

    Vassili A. Kovalev; Maria Petrou; Yaroslav S. Bondar

    1999-01-01

    Two approaches to the characterization of three-dimensional (3-D) textures are presented: one based on gradient vectors and one on generalized co-occurrence matrices. They are investigated with the help of simulated data for their behavior in the presence of noise and for various values of the parameters they depend on. They are also applied to several medical volume images characterized by

  13. Adaptive optics-optical coherence tomography for in vivo retinal imaging: effects of spectral bandwidth on image quality

    Microsoft Academic Search

    Robert J. Zawadzki; Steven M. Jones; Mingtao Zhao; Stacey S. Choi; Sophie S. Laut; Scot S. Olivier; Joseph A. Izatt; John S. Werner

    2006-01-01

    Adaptive Optics - Optical Coherence Tomography (AO-OCT) has demonstrated a promising improvement in lateral resolution for retinal imaging compared to standard OCT. Recent developments in Fourier-domain OCT technology allow AO-OCT instruments to acquire three-dimensional (3D) retinal structures with high speed and high

  14. 3D goes digital: from stereoscopy to modern 3D imaging techniques

    NASA Astrophysics Data System (ADS)

    Kerwien, N.

    2014-11-01

    In the 19th century, English physicist Charles Wheatstone discovered stereopsis, the basis for 3D perception. His construction of the first stereoscope established the foundation for stereoscopic 3D imaging. Since then, many optical instruments were influenced by these basic ideas. In recent decades, the advent of digital technologies revolutionized 3D imaging. Powerful readily available sensors and displays combined with efficient pre- or post-processing enable new methods for 3D imaging and applications. This paper draws an arc from basic concepts of 3D imaging to modern digital implementations, highlighting instructive examples from its 175 years of history.

  15. Pattern based 3D image Steganography

    NASA Astrophysics Data System (ADS)

    Thiyagarajan, P.; Natarajan, V.; Aghila, G.; Prasanna Venkatesan, V.; Anitha, R.

    2013-03-01

    This paper proposes a new high capacity Steganographic scheme using 3D geometric models. The novel algorithm re-triangulates a part of a triangle mesh and embeds the secret information into newly added position of triangle meshes. Up to nine bits of secret data can be embedded into vertices of a triangle without causing any changes in the visual quality and the geometric properties of the cover image. Experimental results show that the proposed algorithm is secure, with high capacity and low distortion rate. Our algorithm also resists against uniform affine transformations such as cropping, rotation and scaling. Also, the performance of the method is compared with other existing 3D Steganography algorithms.

  16. Teat Morphology Characterization With 3D Imaging.

    PubMed

    Vesterinen, Heidi M; Corfe, Ian J; Sinkkonen, Ville; Iivanainen, Antti; Jernvall, Jukka; Laakkonen, Juha

    2014-11-01

    The objective of this study was to visualize, in a novel way, the morphological characteristics of bovine teats to gain a better understanding of the detailed teat morphology. We applied silicone casting and 3D digital imaging in order to obtain a more detailed image of the teat structures than that seen in previous studies. Teat samples from 65 dairy cows over 12 months of age were obtained from cows slaughtered at an abattoir. The teats were classified according to the teat condition scoring used in Finland and the lengths of the teat canals were measured. Silicone molds were made from the external teat surface surrounding the teat orifice and from the internal surface of the teat consisting of the papillary duct, Fürstenberg's rosette, and distal part of the teat cistern. The external and internal surface molds of 35 cows were scanned with a 3D laser scanner. The molds and the digital 3D models were used to evaluate internal and external teat surface morphology. A number of measurements were taken from the silicone molds. The 3D models reproduced the morphology of the teats accurately with high repeatability. Breed did not correlate with the teat classification score. The rosette was found to have significant variation in its size and number of mucosal folds. The internal surface morphology of the rosette did not correlate with the external surface morphology of the teat, implying that it is relatively independent of milking parameters that may impact the teat canal and the external surface of the teat. PMID:25382725

  17. Ames Lab 101: Real-Time 3D Imaging

    ScienceCinema

    Zhang, Song

    2012-08-29

    Ames Laboratory scientist Song Zhang explains his real-time 3-D imaging technology. The technique can be used to create high-resolution, real-time, precise, 3-D images for use in healthcare, security, and entertainment applications.

  18. Ames Lab 101: Real-Time 3D Imaging

    SciTech Connect

    Zhang, Song

    2010-01-01

    Ames Laboratory scientist Song Zhang explains his real-time 3-D imaging technology. The technique can be used to create high-resolution, real-time, precise, 3-D images for use in healthcare, security, and entertainment applications.

  19. Computational 3D and reflectivity imaging with high photon efficiency

    E-print Network

    Shin, Dongeek

    2014-01-01

    Imaging the 3D structure and reflectivity of a scene can be done using photon-counting detectors. Traditional imagers of this type typically require hundreds of detected photons per pixel for accurate 3D and reflectivity ...

  20. Correction of motion artifacts and scanning beam distortions in 3D ophthalmic optical coherence tomography imaging

    NASA Astrophysics Data System (ADS)

    Zawadzki, Robert J.; Fuller, Alfred R.; Choi, Stacey S.; Wiley, David F.; Hamann, Bernd; Werner, John S.

    2007-02-01

    The ability to obtain true three-dimensional (3D) morphology of the retinal structures is essential for future clinical and experimental studies. It becomes especially critical if the measurements acquired with different instruments need to be compared, or precise volumetric data are needed for monitoring and treatment of retinal disease. On the other hand, it is well understood that optical coherence tomography (OCT) images are distorted by several factors. Only limited work has been performed to eliminate these problems in ophthalmic retinal imaging, perhaps because they are less evident in the more common 2D representation mode of time-domain OCT. With recent progress in imaging speed of Fourier-domain OCT (Fd-OCT) techniques, however, 3D OCT imaging is more frequently being used, thereby exposing problems that have been ignored previously. In this paper we propose possible solutions to minimize and compensate for artifacts caused by subject eye and head motion, and distortions caused by the geometry of the scanning optics. The first is corrected by cross-correlation based B-scan registration techniques; the second is corrected by incorporating the geometry of the scanning beam into custom volume rendering software. Retinal volumes of the optic nerve head (ONH) and foveal regions of a healthy volunteer, with and without corrections, are presented. Finally, some common factors that may lead to increased distortions of the ophthalmic OCT image, such as refractive error or position of the subject's head, are discussed.
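
    The motion-correction step described above registers consecutive B-scans by cross-correlation. A minimal sketch using phase correlation as the similarity measure; the specific registration routine and the synthetic test volume are assumptions, not the authors' implementation:

```python
import numpy as np
from scipy.ndimage import shift as nd_shift
from skimage.registration import phase_cross_correlation

def register_bscans(volume):
    """Align consecutive B-scans of an OCT volume by phase correlation.

    volume: (n_bscans, depth, width).  Each B-scan is registered to its
    predecessor and shifted accordingly, which removes most of the axial
    eye/head motion between frames.
    """
    out = volume.astype(float).copy()
    for i in range(1, out.shape[0]):
        offset, _, _ = phase_cross_correlation(out[i - 1], out[i])
        out[i] = nd_shift(out[i], offset, order=1, mode='nearest')
    return out

# Synthetic volume: a bright retinal "layer" drifting axially frame to frame.
rng = np.random.default_rng(4)
base = np.zeros((100, 200)); base[40:45, :] = 1.0
vol = np.stack([np.roll(base, k, axis=0) + 0.05 * rng.standard_normal(base.shape)
                for k in (0, 3, 6, 9)])
aligned = register_bscans(vol)
print("residual difference:", round(float(np.abs(aligned - aligned[0]).mean()), 4))
```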

  1. Post-Rendering 3D Image Warping: Visibility, Reconstruction,

    E-print Network

    North Carolina at Chapel Hill, University of

    Front matter of the dissertation by William R. Mark, "Post-Rendering 3D Image Warping: Visibility, Reconstruction, and Performance" (University of North Carolina at Chapel Hill).

  2. 3D Reconstruction from a Single Image (Diego Rother)

    E-print Network

    IMA Preprint Series, by Diego Rother and Guillermo Sapiro. Abstract: A probabilistic framework for 3D object reconstruction from a single image is introduced in this work. First

  3. 3D imaging and ranging by time-correlated single photon counting

    E-print Network

    Buller, Gerald S.

    3D imaging and ranging by time-correlated single photon counting, by A. M. Wallace, G. S. Buller and A. C. Walker. 3D imaging is an important tool for metrology, reverse engineering of components and architectural surveying. In this article, we review briefly the principal methods in current use for 3D imaging

  4. 3D snakes for the segmentation of buried mines in 3D acoustic images

    Microsoft Academic Search

    Dominique Attali; J. Chanussot; R. Areste; S. Guyonic

    2005-01-01

    In this paper, we describe some image processing techniques for the analysis of 3D acoustical data. More specifically, the 3D images are segmented using a deformable template (3D snake). This iterative algorithm provides a triangulated surface of the echo generated by buried underwater mines. The segmentation result can then be used for recognition/classification of the detected object. The proposed

  5. Make3D: Learning 3D Scene Structure from a Single Still Image

    Microsoft Academic Search

    Ashutosh Saxena; Min Sun; Andrew Y. Ng

    2009-01-01

    We consider the problem of estimating detailed 3D structure from a single still image of an unstructured environment. Our goal is to create 3D models that are both quantitatively accurate as well as visually pleasing. For each small homogeneous patch in the image, we use a Markov random field (MRF) to infer a set of

  6. Retinal imaging using adaptive optics technology

    PubMed Central

    Kozak, Igor

    2014-01-01

    Adaptive optics (AO) is a technology used to improve the performance of optical systems by reducing the effect of wavefront distortions. Retinal imaging using AO aims to compensate for higher-order aberrations originating from the cornea and the lens by using a deformable mirror. The main application of AO retinal imaging has been to assess photoreceptor cell density, spacing, and mosaic regularity in normal and diseased eyes. Apart from photoreceptors, the retinal pigment epithelium, retinal nerve fiber layer, retinal vessel wall and lamina cribrosa can also be visualized with AO technology. Recent interest in AO technology in eye research has resulted in a growing number of reports and publications utilizing this technology in both animals and humans. With the availability of the first commercially available instruments, AO technology is being transformed from a research tool into a diagnostic instrument. The current challenges include imaging eyes with less than perfect optical media, formation of normative databases for acquired images such as cone mosaics, and the cost of the technology. The opportunities for AO will include more detailed diagnosis with description of some new findings in retinal diseases and glaucoma, as well as the expansion of AO into clinical trials, which has already started. PMID:24843304

  7. Automatic Detection, Segmentation and Classification of Retinal Horizontal Neurons in Large-scale 3D Confocal Imagery

    SciTech Connect

    Karakaya, Mahmut [ORNL]; Kerekes, Ryan A [ORNL]; Gleason, Shaun Scott [ORNL]; Martins, Rodrigo [St. Jude Children's Research Hospital]; Dyer, Michael [St. Jude Children's Research Hospital]

    2011-01-01

    Automatic analysis of neuronal structure from wide-field-of-view 3D image stacks of retinal neurons is essential for statistically characterizing neuronal abnormalities that may be causally related to neural malfunctions or may be early indicators for a variety of neuropathies. In this paper, we study classification of neuron fields in large-scale 3D confocal image stacks, a challenging neurobiological problem because of the low spatial resolution imagery and presence of intertwined dendrites from different neurons. We present a fully automated, four-step processing approach for neuron classification with respect to the morphological structure of their dendrites. In our approach, we first localize each individual soma in the image by using morphological operators and active contours. By using each soma position as a seed point, we automatically determine an appropriate threshold to segment dendrites of each neuron. We then use skeletonization and network analysis to generate the morphological structures of segmented dendrites, and shape-based features are extracted from network representations of each neuron to characterize the neuron. Based on qualitative results and quantitative comparisons, we show that we are able to automatically compute relevant features that clearly distinguish between normal and abnormal cases for postnatal day 6 (P6) horizontal neurons.
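
    The pipeline above goes from thresholded dendrites to skeletons to shape-based features. A very small 2-D sketch of the threshold, skeletonize and branch-point counting steps; the feature set and the toy input are illustrative, and the paper itself works on 3-D confocal stacks with soma-seeded thresholds:

```python
import numpy as np
from scipy.ndimage import convolve
from skimage.filters import threshold_otsu
from skimage.morphology import skeletonize

def dendrite_shape_features(image):
    """Threshold -> skeletonize -> crude shape features for one neuron image.

    image: 2-D grayscale projection.  Returns total skeleton length (pixels)
    and a crude branch-point count (skeleton pixels with >= 3 neighbours),
    two of the kinds of features that could feed a normal/abnormal classifier.
    """
    mask = image > threshold_otsu(image)
    skel = skeletonize(mask)
    neighbours = convolve(skel.astype(int), np.ones((3, 3)), mode='constant') - 1
    branch_points = int(np.logical_and(skel, neighbours >= 3).sum())
    return int(skel.sum()), branch_points

# Toy input: a cross-shaped "dendrite" with one junction.
img = np.zeros((64, 64))
img[31:34, 10:54] = 1.0      # horizontal process
img[10:54, 31:34] = 1.0      # vertical process crossing it
print(dendrite_shape_features(img + 0.01))
```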

  8. Multi-View Image Coding in 3-D Space Based on 3-D Reconstruction Yongying Gao and Hayder Radha

    E-print Network

    Radha, Hayder

    A multi-view image coding approach that operates directly in 3-D space and is based on 3-D scene reconstruction. Unlike existing multi-view image coding schemes, in which the 3-D scene information of the images to be encoded is represented

  9. Efficient 3-D Scene Visualization by Image Extrapolation

    Microsoft Academic Search

    Tomás Werner; Tomás Pajdla; Václav Hlavác

    1998-01-01

    Image-based scene representation is believed to be an alternative to 3-D model reconstruction and rendering. In an attempt to compare the generality of image-based and model-based approaches we argue that it is plausible to distinguish three approaches to 3-D scene visualization: image interpolation, image extrapolation, and 3-D model reconstruction and rendering. We advocate that image extrapolation is a useful trade-off between simple but limited interpolation and

  10. Enhancing retinal images by nonlinear registration

    E-print Network

    Molodij, Guillaume; Glanc, Marie; Chenegros, Guillaume

    2014-01-01

    Being able to image the human retina in high resolution opens a new era in many important fields, such as pharmacological research for retinal diseases, and research on human cognition, the nervous system, metabolism and blood stream, to name a few. In this paper, we propose to share the knowledge acquired in the fields of optics and imaging in solar astrophysics in order to improve retinal imaging at very high spatial resolution with the perspective of performing medical diagnosis. The main purpose would be to assist health care practitioners by enhancing retinal images and detecting abnormal features. We apply a nonlinear registration method using local correlation tracking to increase the field of view and follow structure evolutions using correlation techniques borrowed from solar astronomy expertise. Another purpose is to define the tracer of movements after analyzing local correlations to follow the proper motions of an image from one moment to another, such as changes in optical flows that would be o...

  11. Imaging a Sustainable Future in 3D

    NASA Astrophysics Data System (ADS)

    Schuhr, W.; Lee, J. D.; Kanngieser, E.

    2012-07-01

    It is the intention of this paper to contribute to a sustainable future by providing objective object information based on 3D photography, as well as promoting 3D photography not only for scientists but also for amateurs. Due to the presentation of this article by CIPA Task Group 3 on "3D Photographs in Cultural Heritage", the presented samples are masterpieces of historic as well as of current 3D photography, concentrating on cultural heritage. In addition to a report on exemplary access to international archives of 3D photographs, samples of new 3D photographs taken with modern 3D cameras, as well as by means of a ground-based high-resolution XLITE staff camera, and also 3D photographs taken from a captive balloon and with civil drone platforms, are dealt with. To advise on optimally suited 3D methodology, as well as to catch new trends in 3D, an updated synoptic overview of 3D visualization technology, without claiming completeness, has been carried out as a result of a systematic survey. In this respect, e.g., today's lasered crystals might be "early bird" products in 3D which, due to their lack in resolution, contrast and color, recall the stage of the invention of photography.

  12. Parts, Image, and Sketch based 3D Modeling Method

    Microsoft Academic Search

    Thomas Stahovich; Mario Costa Sousa; Jun Murakawa; Ilmi Yoon; Tracie Hong; Edward Lank

    2006-01-01

    Despite their many benefits, challenges exist in the creation of 3D models, particularly for individuals not currently skilled with 3D modeling software. To address this, we explore the creation of 3D modeling software for non-domain experts that uses a hierarchical parts database of generic 3D models, and deforms models into specific related target objects using image-guided 3D model

  13. Abstract Title: Image Informatics Tools for the Analysis of Retinal Images

    E-print Network

    California at Santa Barbara, University of

    Keywords: retinal detachment; image processing; imaging/image analysis. The goal includes quantitative analysis of retinal images and testing these methods on a large retinal image database. Methods

  14. Toward a compact underwater structured light 3-D imaging system

    E-print Network

    Dawson, Geoffrey E

    2013-01-01

    A compact underwater 3-D imaging system based on the principles of structured light was created for classroom demonstration and laboratory research purposes. The 3-D scanner design was based on research by the Hackengineer ...

  15. Image Selection for 3D Measurement Based on Network Design

    NASA Astrophysics Data System (ADS)

    Fuse, T.; Harada, R.

    2015-05-01

    3D models have come into wide use with the spread of freely available software. At the same time, very large numbers of images can be acquired easily, and such images are increasingly used for creating 3D models. However, the creation of 3D models from a huge number of images takes a lot of time and effort, so efficiency in 3D measurement is required, and the accuracy of the measurement must be maintained as well. This paper develops an image selection method based on network design, in the sense of surveying network construction. The proposed method uses an image connectivity graph. By this, the image selection problem is regarded as a combinatorial optimization problem and the graph cuts technique can be applied. Additionally, in the process of 3D reconstruction, low-quality images and near-duplicate images are detected and removed. Through the experiments, the significance of the proposed method is confirmed, and its potential for efficient and accurate 3D measurement is implied.

  16. Interactive display and analysis of 3-D medical images

    Microsoft Academic Search

    R. A. Robb; C. Barillot

    1989-01-01

    The ANALYZE software system, which permits detailed investigation and evaluation of 3-D biomedical images, is discussed. ANALYZE can be used with 3-D imaging modalities based on X-ray computed tomography, radionuclide emission tomography, ultrasound tomography, and magnetic resonance imaging. The package is unique in its synergistic integration of fully interactive modules for direct display, manipulation, and measurement of multidimensional image data.

  17. Integrated optical 3D digital imaging based on DSP scheme

    Microsoft Academic Search

    Xiaodong Wang; Xiang Peng; Bruce Z. Gao

    2008-01-01

    We present a scheme for integrated optical 3-D digital imaging (IO3DI) based on a digital signal processor (DSP), which can acquire range images independently, without PC support. The scheme is based on a parallel hardware structure with the aid of a DSP and a field-programmable gate array (FPGA) to realize 3-D imaging. In this integrated scheme of 3-D imaging, the phase measurement profilometry

  18. Digital imaging-based retinal photocoagulation system

    NASA Astrophysics Data System (ADS)

    Barrett, Steven F.; Wright, Cameron H. G.; Oberg, Erik D.; Rockwell, Benjamin A.; Cain, Clarence P.; Rylander, Henry G., III; Welch, Ashley J.

    1997-05-01

    Researchers at the USAF Academy and the University of Texas are developing a computer-assisted retinal photocoagulation system for the treatment of retinal disorders (e.g., diabetic retinopathy, retinal tears). Currently, ophthalmologists manually place therapeutic retinal lesions, an acquired technique that is tiring for both the patient and physician. The computer-assisted system under development can rapidly and safely place multiple therapeutic lesions at desired locations on the retina in a matter of seconds. Separate prototype subsystems have been developed to control lesion depth during irradiation and lesion placement to compensate for retinal movement. Both subsystems have been successfully demonstrated in vivo on pigmented rabbits using an argon continuous-wave laser. Two different design approaches are being pursued to combine the capabilities of both subsystems: a digital imaging-based system and a hybrid analog-digital system. This paper will focus on progress with the digital imaging-based prototype system. A separate paper on the hybrid analog-digital system, 'Hybrid Retinal Photocoagulation System', is also presented in this session.

  19. Location of Optical Disc in Retinal Image

    Microsoft Academic Search

    D. Santhi; D. Manimegalai

    2007-01-01

    This paper proposes a method to automatically locate the optic disc in a retinal image. Our method of finding the optic disc is based on the properties of the optic disc, using simple image processing algorithms which include multilevel thresholding, morphological processing, detection of object roundness, and circle detection by a circle-fitting method. The proposed method is able to recognize

  20. 3D widefield light microscope image reconstruction without dyes

    NASA Astrophysics Data System (ADS)

    Larkin, S.; Larson, J.; Holmes, C.; Vaicik, M.; Turturro, M.; Jurkevich, A.; Sinha, S.; Ezashi, T.; Papavasiliou, G.; Brey, E.; Holmes, T.

    2015-03-01

    3D image reconstruction using light microscope modalities without exogenous contrast agents is proposed and investigated as an approach to produce 3D images of biological samples for live imaging applications. Multimodality and multispectral imaging, used in concert with this 3D optical sectioning approach is also proposed as a way to further produce contrast that could be specific to components in the sample. The methods avoid usage of contrast agents. Contrast agents, such as fluorescent or absorbing dyes, can be toxic to cells or alter cell behavior. Current modes of producing 3D image sets from a light microscope, such as 3D deconvolution algorithms and confocal microscopy generally require contrast agents. Zernike phase contrast (ZPC), transmitted light brightfield (TLB), darkfield microscopy and others can produce contrast without dyes. Some of these modalities have not previously benefitted from 3D image reconstruction algorithms, however. The 3D image reconstruction algorithm is based on an underlying physical model of scattering potential, expressed as the sample's 3D absorption and phase quantities. The algorithm is based upon optimizing an objective function - the I-divergence - while solving for the 3D absorption and phase quantities. Unlike typical deconvolution algorithms, each microscope modality, such as ZPC or TLB, produces two output image sets instead of one. Contrast in the displayed image and 3D renderings is further enabled by treating the multispectral/multimodal data as a feature set in a mathematical formulation that uses the principal component method of statistics.

  1. 3D ultrasound imaging for prosthesis fabrication and diagnostic imaging

    SciTech Connect

    Morimoto, A.K.; Bow, W.J.; Strong, D.S. [and others

    1995-06-01

    The fabrication of a prosthetic socket for a below-the-knee amputee requires knowledge of the underlying bone structure in order to provide pressure relief for sensitive areas and support for load bearing areas. The goal is to enable the residual limb to bear pressure with greater ease and utility. Conventional methods of prosthesis fabrication are based on limited knowledge about the patient's underlying bone structure. A 3D ultrasound imaging system was developed at Sandia National Laboratories. The imaging system provides information about the location of the bones in the residual limb along with the shape of the skin surface. Computer assisted design (CAD) software can use this data to design prosthetic sockets for amputees. Ultrasound was selected as the imaging modality. A computer model was developed to analyze the effect of the various scanning parameters and to assist in the design of the overall system. The 3D ultrasound imaging system combines off-the-shelf technology for image capturing, custom hardware, and control and image processing software to generate two types of image data -- volumetric and planar. Both volumetric and planar images reveal definition of skin and bone geometry with planar images providing details on muscle fascial planes, muscle/fat interfaces, and blood vessel definition. The 3D ultrasound imaging system was tested on 9 unilateral below-the-knee amputees. Image data was acquired from both the sound limb and the residual limb. The imaging system was operated in both volumetric and planar formats. An x-ray CT (Computed Tomography) scan was performed on each amputee for comparison. Results of the test indicate beneficial use of ultrasound to generate databases for fabrication of prostheses at a lower cost and with better initial fit as compared to manually fabricated prostheses.

  2. 3D image quality of 200-inch glasses-free 3D display system

    NASA Astrophysics Data System (ADS)

    Kawakita, M.; Iwasawa, S.; Sakai, M.; Haino, Y.; Sato, M.; Inoue, N.

    2012-03-01

    We have proposed a glasses-free three-dimensional (3D) display for displaying 3D images on a large screen using multiple projectors and an optical screen consisting of a special diffuser film with a large condenser lens. To achieve high-presence communication with natural large-screen 3D images, we numerically analyze the factors responsible for degrading image quality when the image size is increased. A major factor that determines the 3D image quality is the arrangement of component units, such as the projector array and condenser lens, as well as the diffuser film characteristics. We design and fabricate a prototype 200-inch glasses-free 3D display system on the basis of the numerical results. We select a suitable diffuser film, and we combine it with an optimally designed condenser lens. We use 57 high-definition projector units to obtain a viewing angle of 13.5°. The prototype system can display glasses-free 3D images of a life-size car using natural parallax images.

  3. Progress in 3-D Multiperspective Display by Integral Imaging

    Microsoft Academic Search

    Raúl Martínez-Cuenca; Genaro Saavedra; Manuel Martínez-Corral; Bahram Javidi

    2009-01-01

    Three-dimensional (3-D) imaging techniques have the potential to establish a future mass-market in the fields of entertainment and communications. Integral imaging (InI), which can capture and display true 3-D color images, has been seen as the right technology for 3-D viewing for audiences of more than one person. Due to the advanced degree of its development, InI technology could be

  4. Color image segmentation based on 3-D clustering: morphological approach

    Microsoft Academic Search

    Sang Ho Park; Il Dong Yun; Sang Uk Lee

    1998-01-01

    In this paper, a new segmentation algorithm for color images based on mathematical morphology is presented. Color image segmentation is essentially a clustering process in 3-D color space, but the characteristics of clusters vary severely, according to the type of images and color coordinates. Hence, the methodology employs the scheme of thresholding the difference of Gaussian smoothed 3-D histogram to

  5. A 3D image processing method for manufacturing process automation

    Microsoft Academic Search

    Dongming Zhao; Songtao Li

    2005-01-01

    Three-dimensional (3D) image processing provides a useful tool for machine vision applications. Typically a 3D vision system is divided into data acquisition, low-level processing, object representation and matching. In this paper, a 3D object pose estimation method is developed for an automated manufacturing assembly process. The experimental results show that the 3D pose estimation method produces accurate geometrical information for

  6. 3D seismic imaging, example of 3D area in the middle of Banat

    NASA Astrophysics Data System (ADS)

    Antic, S.

    2009-04-01

    3D seismic imaging was carried out on a 3D seismic volume situated in the middle of the Banat region in Serbia. The 3D area is about 300 square kilometers. The aim of the 3D investigation was to define geological structures and tectonics, especially in the Mesozoic complex. The investigation objects are located at depths from 2000 to 3000 m. There are a number of wells in this area, but they are not deep enough to help in the interpretation, so it was necessary to obtain a better seismic image of the deeper area. Acquisition parameters were satisfactory (good quality of input parameters, record length of 5 s, fold of up to 4000%) and the preprocessed data were of adequate quality. GeoDepth is an integrated system for 3D velocity model building and for 3D seismic imaging. Input data for 3D seismic imaging consist of preprocessed data sorted into CMP gathers and RMS stacking velocity functions. Other types of input data are geological information derived from well data, time-migrated images and time-migrated maps. The workflow for this job was: loading and quality control of the input data (CMP gathers and velocities), creating an initial RMS velocity volume, PSTM, updating the RMS velocity volume, PSTM, building the initial interval velocity model, PSDM, and updating the interval velocity model, PSDM. In the first stage the aim is to derive an initial velocity model that is as simple as possible; the higher-frequency velocity changes are obtained in the updating stage. The next step, after running PSTM, is time-to-depth conversion. After the model is built, we generate a 3D interval velocity volume and run 3D pre-stack depth migration. The main method for updating velocities is 3D tomography. The criteria used in velocity model determination are based on the flatness of pre-stack migrated gathers or the quality of the stacked image. The standard processing ended with poststack 3D time migration. Prestack depth migration is one of the most powerful tools available to the interpreter for developing an accurate velocity model and obtaining a good seismic image. A comparison of time- and depth-migrated sections highlights the improvements in imaging quality: on the depth-migrated section, imaging and fault resolution are improved and it is easier to derive a more complex and realistic geological model.

  7. Serial 3D Imaging Mass Spectrometry at Its Tipping Point.

    PubMed

    Palmer, Andrew D; Alexandrov, Theodore

    2015-04-21

    Since biology is by and large a 3-dimensional phenomenon, it is hardly surprising that 3D imaging has had a significant impact on many challenges in the life sciences. Imaging mass spectrometry (MS) is a spatially resolved, label-free analytical technique that has recently matured into a powerful tool for in situ localization of hundreds of molecular species. Serial 3D imaging MS reconstructs 3D molecular images from serial sections imaged with mass spectrometry. As such, it provides a novel 3D imaging modality inheriting the advantages of imaging MS. Serial 3D imaging MS has been steadily developing over the past decade, and many of the technical challenges have been met. Essential tools and protocols were developed, in particular to improve the reproducibility of sample preparation, speed up data acquisition, and enable computationally intensive analysis of the big data generated. As a result, experimental data is starting to emerge that takes advantage of the extra spatial dimension that 3D imaging MS offers. Most studies still focus on method development rather than on exploring specific biological problems. The future success of 3D imaging MS requires it to find its own niche alongside existing 3D imaging modalities by finding applications that benefit from 3D imaging and at the same time utilize the unique chemical sensitivity of imaging mass spectrometry. This perspective critically reviews the challenges encountered during the development of serial-sectioning 3D imaging MS and discusses the steps needed to tip it from being an academic curiosity into a tool of choice for answering biological and medical questions. PMID:25817912

  8. Retinal Location of the Preferred Retinal Locus Relative to the Fovea in Scanning Laser Ophthalmoscope Images

    Microsoft Academic Search

    GEORGE T. TIMBERLAKE; MANOJ K. SHARMA; SUSAN A. GROSE; DENISE V. GOBERT; JOHN M. GAUCH; JOSEPH H. MAINO

    2005-01-01

    Purpose: It is difficult to determine the position of a preferred retinal locus (PRL) relative to the fovea in scanning laser ophthalmoscope (SLO) images as a result of disease-related retinal morphologic changes that obscure the fovea. To overcome this problem, we developed a method for determining retinal foveal position based on normal fixation position relative to the optic disk. The

  9. Recovering 3D Human Pose from Monocular Images

    E-print Network

    Boyer, Edmond

    We describe a learning-based method for recovering 3D human body pose from single images and monocular image sequences, using a model that directly recovers pose estimates from observable image quantities. In particular, example-based methods search the training set for image(s) similar to the given input image and interpolate from their poses [5], [18], [22], [26]

  10. Optic Disc Segmentation in Retinal Images

    Microsoft Academic Search

    Radim Chrástek; Matthias Wolf; Klaus Donath; Georg Michelson; Heinrich Niemann

    2002-01-01

    Abstract: Retinal images give unique diagnostic information not only about eye disease but about other organs as well [1]. To give the physicians a tool for objective quantitative assessment of the retina, automated methods have been developed. In this paper an automated method for the optic disc segmentation is presented. The method consists of 4 steps: localization of the optic disc, nonlinear filtering, Canny edge detector and

  11. The Interpretation of a Moving Retinal Image

    Microsoft Academic Search

    H. C. Longuet-Higgins; K. Prazdny

    1980-01-01

    It is shown that from a monocular view of a rigid, textured, curved surface it is possible, in principle, to determine the gradient of the surface at any point, and the motion of the eye relative to it, from the velocity field of the changing retinal image, and its first and second spatial derivatives. The relevant equations are redundant, thus

  12. Comparison of 3D Deformable Models For in vivo Measurements of Mouse Embryo from 3D Ultrasound Images

    E-print Network

    Paris-Sud XI, Université de

    This paper compares 3D deformable models for in vivo measurement of the 3D shape of the mouse embryo from 3D ultrasound (US) images acquired using an experimental ultrasonic system. Two approaches for the 3D segmentation of the mouse embryo are evaluated. The first one

  13. Optical processing for 3D digital imaging

    Microsoft Academic Search

    D. J. Brady

    2000-01-01

    Conventional optical imaging systems perform both information sensing and image formation functions. The optical system is generally designed to implement processing for image formation with a goal of optimizing analog image quality measures. Digital imaging involves a fundamental paradigm shift in which the “image” is no longer synonymous with the focal plane field distribution. A digital system may be designed

  14. Segmentation of Retinal Arteries in Adaptive Optics Images

    E-print Network

    Paris-Sud XI, Université de

    This paper presents a method for automatically segmenting the walls of retinal arteries in adaptive optics images. Diseases affecting the retinal blood vessels of small diameter (around 150 µm), such as arterial

  15. AOTF-based 3D spectral imaging system

    NASA Astrophysics Data System (ADS)

    Pozhar, Vitold; Machihin, Alexander

    2012-05-01

    The problem of 3D spectral imaging with random spectral access is discussed. The proposed solution is based on a dual-channel double acousto-optical (AO) monochromator. Each of the two AO cells in it has two spatially separated entrance pupils for transmission of stereoscopic images. In such a scheme, spectral drift of the image does not appear, while spectral and spatial distortion is minimal. 3D spectral imaging based on this monochromator and an Abbe stereomicroscope is described. Possible applications of the proposed AOTF-based 3D imaging spectrometer are discussed.

  16. 3D Thermography Imaging Standardization Technique for Inflammation Diagnosis

    E-print Network

    Nebel, Jean-Christophe

    Thermography is discussed with respect to its adoption by the medical and veterinary communities as a necessary or effective tool in inflammation and tumor detection. Here, a 3D thermography imaging standardization technique is presented so that inflammation can be quantified. Keywords: Thermography, Thermogram Standardization, 3D imaging, Inflammation

  17. HIGH PERFORMANCE 3-D IMAGE RECONSTRUCTION FOR MOLECULAR STRUCTURE DETERMINATION

    E-print Network

    Geddes, Cameron Guy Robinson

    This work addresses high-performance 3-D image reconstruction for molecular structure determination on parallel machines with more than 15,000 CPUs. Key words: Cryo-EM, 3-D image reconstruction, parallel computing. In the post-genomic era, high-resolution determination of protein structures has become extremely important

  18. Model-Based Interpretation of 3D Medical Images

    Microsoft Academic Search

    A. Hill; A. Thornham; C. J. Taylor

    1993-01-01

    The automatic segmentation and labelling of anatomical structures in 3D medical images is a challenging task of practical importance. We describe a model-based approach which allows robust and accurate interpretation using explicit anatomical knowledge. Our method is based on the extension to 3D of Point Distribution Models (PDMs) and associated image search algorithms. A combination of global, Genetic Algorithm

  19. 3D Finite Element Meshing from Imaging Data

    Microsoft Academic Search

    Yongjie Zhang; Chandrajit Bajaj; Bong-Soo Sohn

    2004-01-01

    This paper describes an algorithm to extract adaptive and quality 3D meshes directly from volumetric imaging data. The extracted tetrahedral and hexahedral meshes are extensively used in the Finite Element Method (FEM). A top-down octree subdivision coupled with the dual contouring method is used to rapidly extract adaptive 3D finite element meshes with correct topology from volumetric imaging data.

  20. Chemical specificity in 3D imaging with multiplex CARS microscopy

    Microsoft Academic Search

    J. M. Schins; M. Muller

    2002-01-01

    We demonstrate the three-dimensional (3D) imaging capabilities and chemical specificity of multiplex coherent anti-Stokes Raman scattering microscopy. The simultaneous acquisition of a significant part of the vibrational spectrum at each specimen position permits straightforward differentiation among chemical species. 3D imaging is illustrated with a lipid multilamellar vesicle, and lateral and axial resolutions are determined.

  1. 3D photoacoustic imaging system with 4F acoustic lens

    Microsoft Academic Search

    En Jen; Hsintien Lin; Huihua Kenny Chiang

    2011-01-01

    Photoacoustic imaging (PAI) has several advantages over conventional ultrasound imaging in achieving high-contrast imaging of blood vessels or tumors. However, most PAI systems use conventional linear array transducers and need complex algorithms to reconstruct photoacoustic (PA) tomography or three-dimensional images. In this research, we successfully demonstrated the use of a 4F acoustic lens to realize a 3D PAI system. The 3D

  2. Octahedral transforms for 3-D image processing.

    PubMed

    Lenz, Reiner; Latorre Carmona, Pedro

    2009-12-01

    The octahedral group is one of the finite subgroups of the rotation group in 3-D Euclidean space and a symmetry group of the cubic grid. Compression and filtering of 3-D volumes are given as application examples of its representation theory. We give an overview over the finite subgroups of the 3-D rotation group and their classification. We summarize properties of the octahedral group and basic results from its representation theory. Wide-sense stationary processes are processes with group theoretical symmetries whose principal components are closely related to the representation theory of their symmetry group. Linear filter systems are defined as projection operators and symmetry-based filter systems are generalizations of the Fourier transforms. The algorithms are implemented in Maple/Matlab functions and worksheets. In the experimental part, we use two publicly available MRI volumes. It is shown that the assumption of wide-sense stationarity is realistic and the true principal components of the correlation matrix are very well approximated by the group theoretically predicted structure. We illustrate the nature of the different types of filter systems, their invariance and transformation properties. Finally, we show how thresholding in the transform domain can be used in 3-D signal processing. PMID:19674954

  3. Multi-Aperture 3D Imaging Systems

    Microsoft Academic Search

    Joseph C. Marron; R. L. Kendrick

    2008-01-01

    With a multi-aperture imaging system, one creates a large imaging aperture by combining the light from a series of distributed telescopes. In doing this, one can construct a fine-resolution imaging system with reduced volume. We present work on multi-aperture, active imaging systems that use coherent detection and digital image formation. In such a system, the image formation process incorporates digital

  4. Extra-retinal signals support the estimation of 3D motion.

    PubMed

    Welchman, Andrew E; Harris, Julie M; Brenner, Eli

    2009-03-01

    In natural settings, our eyes tend to track approaching objects. To estimate motion, the brain should thus take account of eye movements, perhaps using retinal cues (retinal slip of static objects) or extra-retinal signals (motor commands). Previous work suggests that extra-retinal ocular vergence signals do not support the perceptual judgments. Here, we re-evaluate this conclusion, studying motion judgments based on retinal slip and extra-retinal signals. We find that (1) each cue can be sufficient, and, (2) retinal and extra-retinal signals are combined, when estimating motion-in-depth. This challenges the accepted view that observers are essentially blind to eye vergence changes. PMID:19264090

  5. Developing 3-D Imaging Mass Spectrometry

    Microsoft Academic Search

    Anna C. Crecelius; D. Shannon Cornett; Betsy Williams; Bobby Bodenheimer; Benoit Dawant; Richard M. Caprioli

    2003-01-01

    Using PhotoShop, the downloaded images are converted to a series of model sections by color coding the section periphery and the corpus callosum of each image blue and red, respectively. The colored regions are extracted from the original image and printed at a 1:1 scale on paper. A digital camera is used to record an optical image from each of the

  6. Colour Retinal Image Enhancement Based on Domain Knowledge

    Microsoft Academic Search

    Gopal Datt Joshi; Jayanthi Sivaswamy

    2008-01-01

    Retinal images are widely used to manually or automatically detect and diagnose many diseases. Due to the complex imaging setup, there is a large luminosity and contrast variability within and across images. Here, we use the knowledge of the imaging geometry and propose an enhancement method for colour retinal images, with a focus on contrast improvement

  7. 3D Cell Culture Imaging with Digital Holographic Microscopy

    NASA Astrophysics Data System (ADS)

    Dimiduk, Thomas; Nyberg, Kendra; Almeda, Dariela; Koshelva, Ekaterina; McGorty, Ryan; Kaz, David; Gardel, Emily; Auguste, Debra; Manoharan, Vinothan

    2011-03-01

    Cells in higher organisms naturally exist in a three dimensional (3D) structure, a fact sometimes ignored by in vitro biological research. Confinement to a two dimensional culture imposes significant deviations from the native 3D state. One of the biggest obstacles to wider use of 3D cultures is the difficulty of 3D imaging. The confocal microscope, the dominant 3D imaging instrument, is expensive, bulky, and light-intensive; live cells can be observed for only a short time before they suffer photodamage. We present an alternative 3D imaging technique, digital holographic microscopy, which can capture 3D information with axial resolution better than 2 µm in a 100 µm deep volume. Capturing a 3D image requires only a single camera exposure with a sub-millisecond laser pulse, allowing us to image cell cultures using five orders of magnitude less light energy than with confocal microscopy. This can be done with hardware costing ~$1000. We use the instrument to image growth of MCF7 breast cancer cells and P. pastoris yeast. We acknowledge support from NSF GRFP.

  8. Image performance evaluation of a 3D surgical imaging platform

    NASA Astrophysics Data System (ADS)

    Petrov, Ivailo E.; Nikolov, Hristo N.; Holdsworth, David W.; Drangova, Maria

    2011-03-01

    The O-arm (Medtronic Inc.) is a multi-dimensional surgical imaging platform. The purpose of this study was to perform a quantitative evaluation of the imaging performance of the O-arm in an effort to understand its potential for future nonorthopedic applications. Performance of the reconstructed 3D images was evaluated, using a custom-built phantom, in terms of resolution, linearity, uniformity and geometrical accuracy. Both the standard (SD, 13 s) and high definition (HD, 26 s) modes were evaluated, with the imaging parameters set to image the head (120 kVp, 100 mAs and 150 mAs, respectively). For quantitative noise characterization, the images were converted to Hounsfield units (HU) off-line. Measurement of the modulation transfer function revealed a limiting resolution (at the 10% level) of 1.0 mm^-1 in the axial dimension. Image noise varied between 15 and 19 HU for the HD and SD modes, respectively. Image intensities varied linearly over the measured range, up to 1300 HU. Geometric accuracy was maintained in all three dimensions over the field of view. The present study has evaluated the performance characteristics of the O-arm, and demonstrates feasibility for use in interventional applications and quantitative imaging tasks outside those currently targeted by the manufacturer. Further improvements to the reconstruction algorithms may further enhance performance for lower-contrast applications.
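
    The limiting resolution quoted above comes from a modulation transfer function (MTF) measurement. As a hedged illustration of that kind of analysis (not the study's actual code), the sketch below estimates the spatial frequency at which the MTF of a measured line-spread function falls to 10%; the pixel spacing and the synthetic Gaussian profile are assumptions.

```python
import numpy as np

def limiting_resolution(lsf, pixel_mm, level=0.1):
    """Estimate the spatial frequency (mm^-1) where the MTF drops to `level`.

    lsf      : 1-D line-spread function sampled across a thin wire or edge
    pixel_mm : sampling interval of the LSF in millimetres
    """
    lsf = np.asarray(lsf, dtype=float)
    lsf = lsf - lsf.min()
    lsf = lsf / lsf.sum()                      # normalise area to 1
    mtf = np.abs(np.fft.rfft(lsf))             # MTF = |Fourier transform of LSF|
    mtf = mtf / mtf[0]                         # unity at zero frequency
    freqs = np.fft.rfftfreq(lsf.size, d=pixel_mm)
    below = np.where(mtf < level)[0]
    return freqs[below[0]] if below.size else freqs[-1]

# Synthetic example: a Gaussian LSF sampled at 0.1 mm spacing
x = np.arange(-64, 64) * 0.1
lsf = np.exp(-0.5 * (x / 0.35) ** 2)
print(f"10% MTF limiting resolution: {limiting_resolution(lsf, 0.1):.2f} mm^-1")
```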

  9. 3D laser imaging for concealed object identification

    NASA Astrophysics Data System (ADS)

    Berechet, Ion; Berginc, Gérard; Berechet, Stefan

    2014-09-01

    This paper deals with new optical non-conventional 3D laser imaging. Optical non-conventional imaging explores the advantages of laser imaging to form a three-dimensional image of the scene. 3D laser imaging can be used for three-dimensional medical imaging, topography, surveillance, and robotic vision because of its ability to detect and recognize objects. In this paper, we present 3D laser imaging for concealed object identification. The objective of this new 3D laser imaging is to provide the user a complete 3D reconstruction of the concealed object from available 2D data limited in number and with low representativeness. The 2D laser data used in this paper come from simulations based on the calculation of the laser interactions with the different interfaces of the scene of interest, and from experimental results. We show the global 3D reconstruction procedures capable of separating objects from foliage and reconstructing a three-dimensional image of the considered object. In this paper, we present examples of reconstruction and completion of three-dimensional images, and we analyse the different parameters of the identification process such as resolution, the camouflage scenario, noise impact and lacunarity degree.

  10. On Alternative Approaches to 3D Image Perception: Monoscopic 3D Techniques

    NASA Astrophysics Data System (ADS)

    Blundell, Barry G.

    2015-06-01

    In the eighteenth century, techniques that enabled a strong sense of 3D perception to be experienced without recourse to binocular disparities (arising from the spatial separation of the eyes) underpinned the first significant commercial sales of 3D viewing devices and associated content. However following the advent of stereoscopic techniques in the nineteenth century, 3D image depiction has become inextricably linked to binocular parallax and outside the vision science and arts communities relatively little attention has been directed towards earlier approaches. Here we introduce relevant concepts and terminology and consider a number of techniques and optical devices that enable 3D perception to be experienced on the basis of planar images rendered from a single vantage point. Subsequently we allude to possible mechanisms for non-binocular parallax based 3D perception. Particular attention is given to reviewing areas likely to be thought-provoking to those involved in 3D display development, spatial visualization, HCI, and other related areas of interdisciplinary research.

  11. Analysis Of Selected Volumetric 3-D Imaging Systems

    NASA Astrophysics Data System (ADS)

    Balasubramonian, K.; Gunasekaran, S.; Rajappan, K. P.; Nithiyanandam, N.

    1983-12-01

    A simple tri-orthogonal image pickup system for generating a 3-D image of an object is analyzed for its performance and characteristics. Some standard solid objects such as a sphere, cone and tetrahedron are used for system evaluation. Also, an optimal design procedure for a tri-depth sectional image pickup system for compatible volumetric 3-D TV is presented. Further, as an alternative to the varifocal mirror technique for volumetric TV, a simple varifocal electrooptic lens system is proposed.

  12. Endoscopic exploration and measurement in 3D radiological images

    Microsoft Academic Search

    Krishnan Ramaswamy; William E. Higgins

    1996-01-01

    A high-resolution 3D radiological image provides a virtual copy of the anatomy that can be used as an input to a computer-based image-analysis system. In particular, useful information can be obtained by interactively navigating through a 3D radiological image in a manner similar to an endoscopic examination. Traditional endoscopy, though, only provides views in restricted regions (e.g., inside hollow passages)

  13. 3D imaging: how to achieve highest accuracy

    Microsoft Academic Search

    Thomas Luhmann

    2011-01-01

    The generation of 3D information from images is a key technology in many different areas, e.g. in 3D modeling and representation of architectural or heritage objects, in human body motion tracking and scanning, in 3D scene analysis of traffic scenes, in industrial applications and many more. The basic concepts rely on mathematical representations of central perspective viewing as they are

  14. Freehand 3d optoacoustic imaging of vasculature

    Microsoft Academic Search

    Marc Fournelle; Holger Hewener; C. Gunther; H. Fonfara; H.-J. Welsch; R. Lemor

    2009-01-01

    Optoacoustic techniques allow imaging of tissue structures with optical contrast and acoustical resolution. This modality is ideal for visualization of blood vessels since haemoglobin is one of the best-absorbing tissue chromophores. It therefore can provide vasculature images with much higher contrast than pure ultrasound. If compared with standard techniques for blood imaging such as Doppler ultrasound, the major advantage of

  15. Hyperspectral image compression with modified 3D SPECK

    Microsoft Academic Search

    Ruzelita Ngadiran; Said Boussakta; Ahmed Bouridane; Bayan Syarif

    2010-01-01

    A hyperspectral image consists of a set of contiguous image bands collected by a hyperspectral sensor. The large amount of data in hyperspectral images emphasizes the importance of efficient compression for storage and transmission. This paper proposes a simplified version of the three-dimensional Set Partitioning Embedded bloCK (3D SPECK) algorithm for lossy compression of hyperspectral images. A three dimensional discrete
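
    The abstract describes a wavelet-based (3D SPECK) coder. The sketch below is not SPECK; it only illustrates the underlying idea of transforming the hyperspectral cube with a 3-D discrete wavelet transform and discarding small coefficients, assuming the PyWavelets package is available; the synthetic cube and the keep fraction are made up.

```python
import numpy as np
import pywt

def compress_cube(cube, wavelet="db2", level=2, keep=0.05):
    """Toy lossy compression: 3-D DWT, keep only the largest `keep` fraction
    of coefficients by magnitude, then reconstruct. `cube` is (bands, rows, cols)."""
    coeffs = pywt.wavedecn(cube, wavelet=wavelet, level=level)
    arr, slices = pywt.coeffs_to_array(coeffs)
    thresh = np.quantile(np.abs(arr), 1.0 - keep)      # magnitude cutoff
    arr[np.abs(arr) < thresh] = 0.0                    # discard small coefficients
    rec = pywt.waverecn(
        pywt.array_to_coeffs(arr, slices, output_format="wavedecn"),
        wavelet=wavelet)
    return rec[: cube.shape[0], : cube.shape[1], : cube.shape[2]]

rng = np.random.default_rng(0)
cube = rng.normal(size=(32, 64, 64)).cumsum(axis=0)    # smooth-ish synthetic cube
rec = compress_cube(cube)
print(f"MSE after keeping 5% of coefficients: {np.mean((cube - rec) ** 2):.4f}")
```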

  16. Mapping textures on 3D geometric model using reflectance image

    E-print Network

    Ikeuchi, Katsushi

    This paper describes a calibration method for mapping textures onto a 3D geometric model; the method utilizes reflectance images and iterative pose estimation. Reflectance images are given as side products of range images for most of the range sensors

  17. MR image denoising method for brain surface 3D modeling

    NASA Astrophysics Data System (ADS)

    Zhao, De-xin; Liu, Peng-jie; Zhang, De-gan

    2014-11-01

    Three-dimensional (3D) modeling of medical images is a critical part of surgical simulation. In this paper, we focus on denoising of magnetic resonance (MR) images for brain model reconstruction and propose a practical solution. We attempt to remove the noise present in the MR imaging signal while preserving the image characteristics. A wavelet-based adaptive curve shrinkage function is presented in a spherical coordinate system. Comparative experiments show that the denoising method preserves image details better and enhances the coefficients of contours. Using these denoised images, the brain 3D visualization is obtained through a surface triangle mesh model, which demonstrates the effectiveness of the proposed method.

  18. Plague and anthrax bacteria cell ultra structure 3D images

    NASA Astrophysics Data System (ADS)

    Volkov, Uryi P.; Konnov, Nikolai P.; Novikova, Olga V.; Yakimenko, Roman A.

    2002-07-01

    The vast majority of information about cell and cell-organelle structure was obtained by means of transmission electron microscopy of serial thin sections of cells. However, it is often very difficult to derive information about the 3D structure of specimens from such electron micrographs. A new program that reconstructs 3D images of cells from serial thin-section micrographs has been developed in our lab. The program makes it possible to visualize a 3D image of a cell and to obtain images of the inner cell structure in an arbitrary plane. Plague bacteria and anthrax cells with spores were visualized with a resolution of about 70 nm by means of the program.

  19. 3-D building extraction using IKONOS multispectral images

    Microsoft Academic Search

    Hong-Gyoo Sohn; Choung-Hwan Park; Ho-Sung Kim; Joon Heo

    2005-01-01

    This paper presents an effective strategy to extract buildings and to reconstruct 3-D buildings using high-resolution multispectral stereo satellite images. The proposed scheme contains three major steps: building enhancement and segmentation using both BDT (Background Discriminant Transformation) and the ISODATA algorithm, conjugate building identification using object matching with Hausdorff distance and color indexing, and 3-D building reconstruction using photogrammetric techniques.

  20. RECONSTRUCTION OF 3D TOOTH IMAGES S. Buchaillard1

    E-print Network

    Paris-Sud XI, Université de

    3D tooth models can be used for diagnosis and treatment simulations. For example, a dental implant can be inserted into the jawbone when a tooth is missing; using the corresponding tooth on the other side of the jaw to define a 3D representation of the missing tooth could result in a better-fitting implant. Computed tomography (CT) is the most efficient way of generating 3D objects; however, CT imaging of dental patients

  1. BASED ON STEREO SEQUENCE IMAGE 3-D MOTION PARAMETERS DETERMINATION

    Microsoft Academic Search

    Chunsen ZHANG; Jianqing ZHANG; Shaojun He

    Deriving accurate 3-D motion information of a scene is an essential and important task in computer vision, and is also one of the most difficult problems. In this paper, using photogrammetric methods and computer vision techniques, the authors investigate the determination of 3-D motion parameters from binocular stereo image sequences, together with the method and its steps. The in-situ calibration for binocular stereo

  2. 3D quantitative phase imaging of neural networks using WDT

    NASA Astrophysics Data System (ADS)

    Kim, Taewoo; Liu, S. C.; Iyer, Raj; Gillette, Martha U.; Popescu, Gabriel

    2015-03-01

    White-light diffraction tomography (WDT) is a recently developed 3D imaging technique based on a quantitative phase imaging system called spatial light interference microscopy (SLIM). The technique has achieved a sub-micron resolution in all three directions with high sensitivity granted by the low-coherence of a white-light source. Demonstrations of the technique on single cell imaging have been presented previously; however, imaging on any larger sample, including a cluster of cells, has not been demonstrated using the technique. Neurons in an animal body form a highly complex and spatially organized 3D structure, which can be characterized by neuronal networks or circuits. Currently, the most common method of studying the 3D structure of neuron networks is by using a confocal fluorescence microscope, which requires fluorescence tagging with either transient membrane dyes or after fixation of the cells. Therefore, studies on neurons are often limited to samples that are chemically treated and/or dead. WDT presents a solution for imaging live neuron networks with a high spatial and temporal resolution, because it is a 3D imaging method that is label-free and non-invasive. Using this method, a mouse or rat hippocampal neuron culture and a mouse dorsal root ganglion (DRG) neuron culture have been imaged in order to see the extension of processes between the cells in 3D. Furthermore, the tomogram is compared with a confocal fluorescence image in order to investigate the 3D structure at synapses.

  3. Vessel segmentation in retinal images

    Microsoft Academic Search

    Dietrich Paulus; Serge Chastel; Tobias Feldmann

    2005-01-01

    Detection of the papilla region and vessel detection on images of the retina are problems that can be solved with pattern recognition techniques. Topographic images, as provided e.g. by the HRT device, as well as fundus images can be used as source for the detection. It is of diagnostic importance to separate vessels inside the papilla area from those outside

  4. Accommodation response measurements for integral 3D image

    NASA Astrophysics Data System (ADS)

    Hiura, H.; Mishina, T.; Arai, J.; Iwadate, Y.

    2014-03-01

    We measured accommodation responses under integral photography (IP), binocular stereoscopic, and real object display conditions, under both binocular and monocular viewing. The equipment we used was an optometric device and a 3D display. We developed the 3D display for IP and binocular stereoscopic images; it comprises a high-resolution liquid crystal display (LCD) and a high-density lens array. The LCD has a resolution of 468 dpi and a diagonal size of 4.8 inches. The high-density lens array comprises 106 x 69 micro lenses that have a focal length of 3 mm and a diameter of 1 mm. The lenses are arranged in a honeycomb pattern. The 3D display was positioned 60 cm from the observer under the IP and binocular stereoscopic display conditions. The target was presented at eight depth positions relative to the 3D display: 15, 10, and 5 cm in front of the 3D display, on the 3D display panel, and 5, 10, 15 and 30 cm behind the 3D display under the IP and binocular stereoscopic display conditions. Under the real object display condition, the target was displayed on the 3D display panel, and the 3D display was placed at the eight positions. The results suggest that the IP image induced more natural accommodation responses compared to the binocular stereoscopic image. The accommodation responses to the IP image were weaker than those to a real object; however, they showed a similar tendency to those of the real object under the two viewing conditions. Therefore, IP can induce accommodation to the depth positions of 3D images.

  5. Manifold of Multi-view Image Features for Sketch-based 3D Model Retrieval

    E-print Network

    Ohbuchi, Ryutarou

  6. Algorithms for 3D time-of-flight imaging

    E-print Network

    Mei, Jonathan (Jonathan B.)

    2013-01-01

    This thesis describes the design and implementation of two novel frameworks and processing schemes for 3D imaging based on time-of- flight (TOF) principles. The first is a low power, low hardware complexity technique based ...

  7. Image processing techniques in 3-D foot shape measurement system

    NASA Astrophysics Data System (ADS)

    Liu, Guozhong; Li, Ping; Wang, Boxiong; Shi, Hui; Luo, Xiuzhi

    2008-10-01

    The 3-D foot-shape measurement system based on the laser-line-scanning principle was designed, achieving 3-D foot-shape measurements without blind areas and automatic extraction of foot parameters. The paper focuses on the system structure and principle and on the image processing techniques. The key image processing techniques for the 3-D foot shape measurement system include laser stripe extraction, transformation of laser stripe coordinates from the CCD camera image coordinate system to the laser plane coordinate system, assembly of the laser stripes from the eight CCD cameras, and elimination of image noise and disturbance. 3-D foot shape measurement makes it possible to realize custom shoe-making and shows great promise in shoe design, foot orthopaedic treatment, shoe size standardization and the establishment of a foot database for consumers.
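
    One of the steps listed, transforming laser-stripe coordinates from the camera image coordinate system to the laser-plane coordinate system, can be illustrated with a planar homography fitted from calibration correspondences. The sketch below is a generic direct-linear-transform fit, not the system's actual calibration; the point values are hypothetical.

```python
import numpy as np

def fit_homography(img_pts, plane_pts):
    """Estimate a 3x3 homography H such that plane ~ H @ [u, v, 1]^T (DLT)."""
    A = []
    for (u, v), (x, y) in zip(img_pts, plane_pts):
        A.append([-u, -v, -1, 0, 0, 0, u * x, v * x, x])
        A.append([0, 0, 0, -u, -v, -1, u * y, v * y, y])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    return Vt[-1].reshape(3, 3)            # null-space vector = homography entries

def to_laser_plane(H, u, v):
    p = H @ np.array([u, v, 1.0])
    return p[:2] / p[2]                    # dehomogenise

# Hypothetical calibration correspondences: pixels -> millimetres on the laser plane
img_pts = [(100, 120), (620, 118), (615, 440), (105, 445)]
plane_pts = [(0, 0), (200, 0), (200, 120), (0, 120)]
H = fit_homography(img_pts, plane_pts)
print(to_laser_plane(H, 360, 280))         # stripe pixel mapped to plane coordinates
```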

  8. Tomographic molecular imaging and 3D

    E-print Network

    Cai, Long

    Existing 3D imaging modalities (e.g., magnetic resonance imaging, X-ray computed tomography) suffer from a paucity of targetable contrast agents or require different fixatives. It has been shown that optical projection tomography (OPT) has advantages over confocal

  9. Vessel segmentation in retinal images

    NASA Astrophysics Data System (ADS)

    Paulus, Dietrich; Chastel, Serge; Feldmann, Tobias

    2005-04-01

    Detection of the papilla region and vessel detection on images of the retina are problems that can be solved with pattern recognition techniques. Topographic images, as provided e.g. by the HRT device, as well as fundus images can be used as source for the detection. It is of diagnostic importance to separate vessels inside the papilla area from those outside this area. Therefore, detection of the papilla is important also for vessel segmentation. In this contribution we present state of the art methods for automatic disk segmentation and compare their results. Vessels detected with matched filters (wavelets, derivatives of the Gaussian, etc.) are shown as well as vessel segmentation using image morphology. We present our own method for vessel segmentation based on a special matched filter followed by image morphology. In this contribution we argue for a new matched filter that is suited for large vessels in HRT images.
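
    As a rough, generic illustration of the matched-filter idea mentioned here (not the authors' specific filter for HRT images), the sketch below convolves the image with an elongated, zero-mean Gaussian-profile kernel at several orientations, keeps the maximum response, thresholds it, and applies a morphological opening; the kernel size, angles, and threshold are assumptions.

```python
import numpy as np
from scipy import ndimage

def matched_filter_kernel(sigma=2.0, length=9, angle_deg=0.0):
    """Zero-mean Gaussian cross-section profile, elongated along the vessel axis."""
    half = length // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    t = np.deg2rad(angle_deg)
    u = x * np.cos(t) + y * np.sin(t)      # along the vessel
    v = -x * np.sin(t) + y * np.cos(t)     # across the vessel
    k = np.exp(-v ** 2 / (2 * sigma ** 2)) * (np.abs(u) <= half)
    return k - k.mean()                    # zero mean suppresses flat background

def segment_vessels(image, n_angles=12, sigma=2.0, thresh=2.0):
    # Assumes vessels brighter than background; pass the inverted image otherwise.
    image = image.astype(float)
    response = np.full(image.shape, -np.inf)
    for a in np.linspace(0, 180, n_angles, endpoint=False):
        r = ndimage.convolve(image, matched_filter_kernel(sigma, angle_deg=a))
        response = np.maximum(response, r)
    mask = response > thresh * response.std()
    return ndimage.binary_opening(mask, iterations=1)   # morphological clean-up
```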

  10. User-guided segmentation for volumetric retinal optical coherence tomography images.

    PubMed

    Yin, Xin; Chao, Jennifer R; Wang, Ruikang K

    2014-08-01

    Despite the existence of automatic segmentation techniques, trained graders still rely on manual segmentation to provide retinal layers and features from clinical optical coherence tomography (OCT) images for accurate measurements. To bridge the gap between this time-consuming need of manual segmentation and currently available automatic segmentation techniques, this paper proposes a user-guided segmentation method to perform the segmentation of retinal layers and features in OCT images. With this method, by interactively navigating three-dimensional (3-D) OCT images, the user first manually defines user-defined (or sketched) lines at regions where the retinal layers appear very irregular for which the automatic segmentation method often fails to provide satisfactory results. The algorithm is then guided by these sketched lines to trace the entire 3-D retinal layer and anatomical features by the use of novel layer and edge detectors that are based on robust likelihood estimation. The layer and edge boundaries are finally obtained to achieve segmentation. Segmentation of retinal layers in mouse and human OCT images demonstrates the reliability and efficiency of the proposed user-guided segmentation method. PMID:25147962
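
    The robust-likelihood layer and edge detectors themselves are not described in this abstract, so the sketch below is only an illustrative stand-in: it traces a single layer boundary across a B-scan as a minimum-cost path through a column-wise cost image using dynamic programming. A user-sketched guide line could be incorporated by penalizing rows far from it, but that is not shown; the cost definition and smoothness limit are assumptions.

```python
import numpy as np

def trace_boundary(cost, max_step=2):
    """Minimum-cost left-to-right path through `cost` (rows x cols),
    moving at most `max_step` rows between neighbouring columns."""
    rows, cols = cost.shape
    acc = cost.astype(float).copy()
    back = np.zeros((rows, cols), dtype=int)
    for c in range(1, cols):
        for r in range(rows):
            lo, hi = max(0, r - max_step), min(rows, r + max_step + 1)
            prev = acc[lo:hi, c - 1]
            j = int(np.argmin(prev))
            acc[r, c] += prev[j]
            back[r, c] = lo + j
    path = np.empty(cols, dtype=int)
    path[-1] = int(np.argmin(acc[:, -1]))
    for c in range(cols - 1, 0, -1):
        path[c - 1] = back[path[c], c]
    return path  # one row index (boundary position) per column

# Example cost: negative vertical gradient so bright-to-dark edges are cheap
# bscan = ...  (2-D OCT B-scan as a float array)
# cost = -np.diff(bscan, axis=0, prepend=bscan[:1])
# boundary = trace_boundary(cost)
```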

  11. Incremental volume reconstruction and rendering for 3D ultrasound imaging

    E-print Network

    North Carolina at Chapel Hill, University of

    We describe an environment for 3D ultrasound imaging. The system uses a video see-through head-mounted display (HMD) mounted on a helmet worn by the user. Our video see-through HMD system displays ultrasound echography image data registered with the user's view of the patient (e.g., an image of a pregnant woman). This is part of our continuing see-through HMD research, which includes

  12. Inverse synthetic aperture 3-D imaging laser radar

    Microsoft Academic Search

    Jin He; Xiao-you Yang; Jian-feng Wang; Qun Zhang

    2010-01-01

    A three-dimensional (3-D) image can represent a target's physical characteristics well and improve target recognition capability; however, conventional optical imaging radar, which is limited by its array units or scanning system, cannot achieve high-resolution imaging of moving targets. This paper combines inverse synthetic aperture technology, laser signals and interferometric techniques to suggest a new radar system which is called

  13. Imaging and 3D morphological analysis of collagen fibrils.

    PubMed

    Altendorf, H; Decencière, E; Jeulin, D; De sa Peixoto, P; Deniset-Besseau, A; Angelini, E; Mosser, G; Schanne-Klein, M-C

    2012-08-01

    The recent boom in multiphoton imaging of collagen fibrils by means of second harmonic generation microscopy generates the need for the development and automation of quantitative methods for image analysis. Standard approaches sequentially analyse two-dimensional (2D) slices to gain knowledge on the spatial arrangement and dimension of the fibrils, whereas the reconstructed three-dimensional (3D) image yields better information about these characteristics. In this work, a 3D analysis method is proposed for second harmonic generation images of collagen fibrils, based on a recently developed 3D fibre quantification method. This analysis uses operators from mathematical morphology. The fibril structure is scanned with a directional distance transform. Inertia moments of the directional distances yield the main fibre orientation, corresponding to the main inertia axis. Combining the directional distances with the fibre orientation yields a geometrical estimate of the fibre radius. The results include local maps as well as global distributions of orientation and radius of the fibrils over the 3D image. They also bring a segmentation of the image into foreground and background, as well as a classification of the foreground pixels into the preferred orientations. This accurate determination of the spatial arrangement of the fibrils within a 3D data set will be most relevant in biomedical applications. It brings the possibility to monitor remodelling of collagen tissues upon a variety of injuries and to guide tissue engineering, because biomimetic 3D organizations and density are required for better integration of implants. PMID:22670759
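
    The paper's directional-distance approach is not reproduced here; as a simpler stand-in for per-voxel fibre orientation, the sketch below uses a 3-D structure tensor, whose eigenvector with the smallest eigenvalue points along the local fibre axis. The smoothing scales are assumptions.

```python
import numpy as np
from scipy import ndimage

def fibre_orientation(vol, grad_sigma=1.0, tensor_sigma=2.0):
    """Per-voxel dominant orientation from the smoothed 3-D structure tensor.
    Returns a (z, y, x, 3) array of unit vectors along the local fibre axis."""
    g = [ndimage.gaussian_filter(vol.astype(float), grad_sigma, order=o)
         for o in ((1, 0, 0), (0, 1, 0), (0, 0, 1))]   # dI/dz, dI/dy, dI/dx
    J = np.empty(vol.shape + (3, 3))
    for i in range(3):
        for j in range(3):
            # Smooth the outer-product components of the gradient
            J[..., i, j] = ndimage.gaussian_filter(g[i] * g[j], tensor_sigma)
    w, v = np.linalg.eigh(J)          # ascending eigenvalues per voxel
    return v[..., 0]                  # eigenvector of the smallest eigenvalue

# vol = second-harmonic-generation image stack as a 3-D numpy array
# axes = fibre_orientation(vol); axes[z, y, x] is the local fibre direction
```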

  14. Objective quality measurement of integral 3D images

    NASA Astrophysics Data System (ADS)

    Forman, Matthew C.; Davies, Neil A.; McCormick, Malcolm

    2002-05-01

    At De Montfort University the imaging technologies group have developed an integral imaging system capable of real time capture and replay. The system has many advantages compared with other 3D capture and display techniques, however one issue that has not been adequately addressed is the measurement of the fidelity of replayed 3D images where some distortion has occurred. This paper presents a method for producing a viewing angle-dependent PSNR metric based on extraction of optical model data as conventional images. The technique produces image quality measurements which are more relevant to the volume spatial content of an integral image than a conventional fidelity metric applied to the raw, optically encoded spatial distribution. Comparisons of the previous, single metric with the new angle-dependent metric are made when used in assessing the performance of a 3D-DCT based compression scheme, and the utility of the extra information provided by the angle dependent PSNR is considered.
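
    The optical-model extraction step is specific to the integral-imaging system, but the angle-dependent fidelity measure itself reduces to computing a PSNR per reconstructed view. A minimal sketch, with an assumed 8-bit intensity range and hypothetical view lists, is shown below.

```python
import numpy as np

def psnr(reference, test, peak=255.0):
    mse = np.mean((reference.astype(float) - test.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

def angle_dependent_psnr(ref_views, test_views, angles_deg):
    """ref_views/test_views: lists of 2-D images rendered at the same angles."""
    return {a: psnr(r, t) for a, r, t in zip(angles_deg, ref_views, test_views)}

# Hypothetical usage: views extracted from the original and the compressed
# integral image at -6..+6 degrees, compared view by view.
# curve = angle_dependent_psnr(ref_views, dec_views, range(-6, 7))
```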

  15. Imaging hypoxia using 3D photoacoustic spectroscopy

    NASA Astrophysics Data System (ADS)

    Stantz, Keith M.

    2010-02-01

    Purpose: The objective is to develop a multivariate in vivo hemodynamic model of tissue oxygenation (MiHMO2) based on 3D photoacoustic spectroscopy. Introduction: Low oxygen levels, or hypoxia, deprive cancer cells of oxygen and confer resistance to irradiation, some chemotherapeutic drugs, and oxygen-dependent therapies (phototherapy), leading to treatment failure and poor disease-free and overall survival. For example, clinical studies show that patients with breast carcinomas, cervical cancer, and head and neck carcinomas (HNC) are more likely to suffer local recurrence and metastasis if their tumors are hypoxic. A novel method to noninvasively measure tumor hypoxia, identify its type, and monitor its heterogeneity is devised by measuring tumor hemodynamics, MiHMO2. Material and Methods: Simulations are performed to compare tumor pO2 levels and hypoxia based on physiology (perfusion, fractional plasma volume, fractional cellular volume) and on hemoglobin status (oxygen saturation and hemoglobin concentration), using in vivo measurements of breast, prostate, and ovarian tumors. Simulations of MiHMO2 are performed to assess the influence of scanner resolution and different mathematical models of oxygen delivery. Results: The sensitivity of pO2 and hypoxic fraction to photoacoustic scanner resolution and their dependence on model complexity will be presented using hemodynamic parameters for different tumors. Conclusions: Photoacoustic CT spectroscopy provides a unique ability to monitor hemodynamic and cellular physiology in tissue, which can be used to longitudinally monitor tumor oxygenation and its response to anti-angiogenic therapies.
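
    As a toy illustration of how hemoglobin oxygen saturation can be obtained from photoacoustic estimates of absorption at two wavelengths (a common approach, not necessarily the MiHMO2 model itself), the sketch below linearly unmixes oxy- and deoxy-hemoglobin; the extinction coefficients and absorption values are placeholder numbers only.

```python
import numpy as np

# Placeholder molar extinction coefficients at two wavelengths
# (illustrative numbers only -- use tabulated values in practice).
EPS = np.array([[290.0, 1798.0],    # lambda1: [HbO2, Hb]
                [1204.0, 761.0]])   # lambda2: [HbO2, Hb]

def oxygen_saturation(mu_a_lambda1, mu_a_lambda2):
    """Solve EPS @ [C_HbO2, C_Hb] = mu_a for the two concentrations,
    then sO2 = C_HbO2 / (C_HbO2 + C_Hb)."""
    c = np.linalg.solve(EPS, np.array([mu_a_lambda1, mu_a_lambda2]))
    c = np.clip(c, 0, None)                      # concentrations cannot be negative
    return c[0] / (c[0] + c[1] + 1e-12)

print(f"sO2 = {oxygen_saturation(0.9, 1.1):.2f}")   # made-up absorption values
```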

  16. Dedicated 3D photoacoustic breast imaging

    PubMed Central

    Kruger, Robert A.; Kuzmiak, Cherie M.; Lam, Richard B.; Reinecke, Daniel R.; Del Rio, Stephen P.; Steed, Doreen

    2013-01-01

    Purpose: To report the design and imaging methodology of a photoacoustic scanner dedicated to imaging hemoglobin distribution throughout a human breast. Methods: The authors developed a dedicated breast photoacoustic mammography (PAM) system using a spherical detector aperture based on our previous photoacoustic tomography scanner. The system uses 512 detectors with rectilinear scanning. The scan shape is a spiral pattern whose radius varies from 24 to 96 mm, thereby allowing a field of view that accommodates a wide range of breast sizes. The authors measured the contrast-to-noise ratio (CNR) using a target comprised of 1-mm dots printed on clear plastic. Each dot absorption coefficient was approximately the same as a 1-mm thickness of whole blood at 756 nm, the output wavelength of the Alexandrite laser used by this imaging system. The target was immersed in varying depths of an 8% solution of stock Liposyn II-20%, which mimics the attenuation of breast tissue (1.1 cm^-1). The spatial resolution was measured using a 6 µm-diameter carbon fiber embedded in agar. The breasts of four healthy female volunteers, spanning a range of breast size from a brassiere C cup to a DD cup, were imaged using a 96-mm spiral protocol. Results: The CNR target was clearly visualized to a depth of 53 mm. Spatial resolution, which was estimated from the full width at half-maximum of a profile across the PAM image of a carbon fiber, was 0.42 mm. In the four human volunteers, the vasculature was well visualized throughout the breast tissue, including to the chest wall. Conclusions: CNR, lateral field-of-view and penetration depth of our dedicated PAM scanning system is sufficient to image breasts as large as 1335 mL, which should accommodate up to 90% of the women in the United States. PMID:24320471

  17. 3-D capacitance density imaging system

    DOEpatents

    Fasching, G.E.

    1988-03-18

    A three-dimensional capacitance density imaging of a gasified bed or the like in a containment vessel is achieved using a plurality of electrodes provided circumferentially about the bed in levels and along the bed in channels. The electrodes are individually and selectively excited electrically at each level to produce a plurality of current flux field patterns generated in the bed at each level. The current flux field patterns are suitably sensed and a density pattern of the bed at each level determined. By combining the determined density patterns at each level, a three-dimensional density image of the bed is achieved. 7 figs.

  18. 3-D seismic imaging of complex geologies

    SciTech Connect

    Womble, D.E.; Dosanjh, S.S.; VanDyke, J.P.; Oldfield, R.A.; Greenberg, D.S.

    1995-02-01

    We present three codes for the Intel Paragon that address the problem of three-dimensional seismic imaging of complex geologies. The first code models acoustic wave propagation and can be used to generate data sets to calibrate and validate seismic imaging codes. This code reported the fastest timings for acoustic wave propagation codes at a recent SEG (Society of Exploration Geophysicists) meeting. The second code implements a Kirchhoff method for pre-stack depth migration. Development of this code is almost complete, and preliminary results are presented. The third code implements a wave equation approach to seismic migration and is a Paragon implementation of a code from the ARCO Seismic Benchmark Suite.

  19. 3D integration technologies for imaging applications

    NASA Astrophysics Data System (ADS)

    De Moor, Piet

    2008-06-01

    The aim of this paper is to give an overview of micro-electronic technologies under development today, and how they are impacting on the radiation detection and imaging of tomorrow. After a short introduction, the different enabling technologies will be discussed. Finally, a few examples of ongoing developments at IMEC on advanced detector systems will be given.

  20. Laser beacon adaptive optics ophthalmoscope for retinal multilayer imaging

    Microsoft Academic Search

    Liu Ruixue; Li Dayu; Xia Mingliang; Kong Ningning; Qi Yue; Zheng Xianliang; Xuan Li

    2011-01-01

    A flood-illuminated adaptive optics ophthalmoscope for retinal multilayer imaging is introduced in this paper. By arranging an alterable stop in the illumination path, the illuminated area of the retinal layer can be changed. A laser beacon for wavefront sensing is formed when the stop is a narrow aperture. The large aperture allows flood illumination and expands the imaging field. A moveable imaging

  1. The retinal Image registration based on Scale Invariant Feature

    Microsoft Academic Search

    LiFang Wei; Lin Pan; Lin Lin; Lun Yu

    2010-01-01

    Accurate retinal image registration is essential to monitor and track the progression of various diseases. Images acquired with non-mydriatic cameras, or from diseased eyes, are often of low quality, and their vascular structure becomes less clear, which makes general registration methods more difficult to apply. In this paper, a novel feature-based retinal image registration method is proposed to solve this problem.
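
    A generic feature-based registration pipeline in the spirit of this abstract (not the authors' specific method) can be sketched with OpenCV: SIFT keypoints, ratio-test matching, and a RANSAC homography. The filenames and thresholds below are assumptions.

```python
import cv2
import numpy as np

def register_retinal_pair(fixed, moving):
    """Warp `moving` onto `fixed` using SIFT features + RANSAC homography."""
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(fixed, None)
    kp2, des2 = sift.detectAndCompute(moving, None)
    matches = cv2.BFMatcher().knnMatch(des2, des1, k=2)
    good = [m for m, n in matches if m.distance < 0.75 * n.distance]  # ratio test
    src = np.float32([kp2[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp1[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    h, w = fixed.shape[:2]
    return cv2.warpPerspective(moving, H, (w, h)), H

# Hypothetical usage with two fundus images of the same eye:
# fixed = cv2.imread("baseline.png", cv2.IMREAD_GRAYSCALE)
# moving = cv2.imread("followup.png", cv2.IMREAD_GRAYSCALE)
# registered, H = register_retinal_pair(fixed, moving)
```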

  2. Second harmonic imaging of membrane potential of neurons with retinal

    E-print Network

    Columbia University

    We present a method to optically measure and image the membrane potential of neurons using retinal as the chromophore [second harmonic retinal imaging of membrane potential (SHRIMP)]. We

  3. On Anisotropic Diffusion in 3D image processing and image sequence analysis

    Microsoft Academic Search

    Karol Mikula; Martin Rumpf; Fiorella Sgallari

    A morphological multiscale method for 3D image and 3D image sequence processing is discussed which identifies edges on level sets and the motion of features in time. Based on these indicator evaluations, the image data are processed by applying nonlinear diffusion and the theory of geometric evolution problems. The aim is to smooth level sets of a 3D image while simultaneously
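
    The morphological, level-set-based scheme of the paper is not reproduced here; the sketch below shows plain Perona-Malik nonlinear diffusion, the basic edge-preserving smoothing idea such methods build on, in 2-D for brevity (a 3-D version adds one more neighbour pair). Iteration count, conductance scale, and time step are assumptions.

```python
import numpy as np

def perona_malik(img, n_iter=20, kappa=0.1, dt=0.15):
    """Edge-preserving smoothing: diffusion is suppressed where gradients are large.
    `kappa` should be chosen relative to the image's gradient magnitudes."""
    u = img.astype(float).copy()
    g = lambda d: np.exp(-(d / kappa) ** 2)         # conductance, small at edges
    for _ in range(n_iter):
        # Differences to the four neighbours (periodic boundaries via np.roll)
        dn = np.roll(u, -1, axis=0) - u
        ds = np.roll(u, 1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        u += dt * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u
```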

  4. 3-D Imaging Based, Radiobiological Dosimetry

    PubMed Central

    Sgouros, George; Frey, Eric; Wahl, Richard; He, Bin; Prideaux, Andrew; Hobbs, Robert

    2008-01-01

    Targeted radionuclide therapy holds promise as a new treatment against cancer. Advances in imaging are making it possible to evaluate the spatial distribution of radioactivity in tumors and normal organs over time. Matched anatomical imaging such as combined SPECT/CT and PET/CT have also made it possible to obtain tissue density information in conjunction with the radioactivity distribution. Coupled with sophisticated iterative reconstruction algorithms, these advances have made it possible to perform highly patient-specific dosimetry that also incorporates radiobiological modeling. Such sophisticated dosimetry techniques are still in the research investigation phase. Given the attendant logistical and financial costs, a demonstrated improvement in patient care will be a prerequisite for the adoption of such highly patient-specific internal dosimetry methods. PMID:18662554

  5. Depth dimension compression of 3-D images

    Microsoft Academic Search

    N. K. Ignatyev

    1984-01-01

    As holography develops, there is renewed interest in noncoherent light photography, including possibilities for three-dimensional views utilizing the parallax-panoramogram method, in which the lenticular grating carrier of the fixed image has a limited depth of resolution of approximately 10% of the distance of the scene. This principle was used to develop a method of noncoherent parallax-panoramogram photography with depths extending

  6. Acoustic 3D imaging of dental structures

    SciTech Connect

    Lewis, D.K. [Lawrence Livermore National Lab., CA (United States); Hume, W.R. [California Univ., Los Angeles, CA (United States); Douglass, G.D. [California Univ., San Francisco, CA (United States)

    1997-02-01

    Our goals for the first year of this three-dimensional electodynamic imaging project were to determine how to combine flexible, individually addressable arrays; preprocessing of array source signals; spectral extrapolation of received signals; acoustic tomography codes; and acoustic propagation modeling codes. We investigated flexible, individually addressable acoustic array materials to find the best match in power, sensitivity and cost, and settled on PVDF sheet arrays and 3-1 composite material.

  7. Adaptive Optics Retinal Imaging: Emerging Clinical Applications

    PubMed Central

    Godara, Pooja; Dubis, Adam M.; Roorda, Austin; Duncan, Jacque L.; Carroll, Joseph

    2010-01-01

    The human retina is a uniquely accessible tissue. Tools like scanning laser ophthalmoscopy (SLO) and spectral domain optical coherence tomography (SD-OCT) provide clinicians with remarkably clear pictures of the living retina. While the anterior optics of the eye permit such non-invasive visualization of the retina and associated pathology, these same optics induce significant aberrations that in most cases obviate cellular-resolution imaging. Adaptive optics (AO) imaging systems use active optical elements to compensate for aberrations in the optical path between the object and the camera. Applied to the human eye, AO allows direct visualization of individual rod and cone photoreceptor cells, RPE cells, and white blood cells. AO imaging has changed the way vision scientists and ophthalmologists see the retina, helping to clarify our understanding of retinal structure, function, and the etiology of various retinal pathologies. Here we review some of the advances made possible with AO imaging of the human retina, and discuss applications and future prospects for clinical imaging. PMID:21057346

  8. Molecular Imaging of Retinal Disease

    PubMed Central

    Capozzi, Megan E.; Gordon, Andrew Y.; Penn, John S.

    2013-01-01

    Abstract Imaging of the eye plays an important role in ocular therapeutic discovery and evaluation in preclinical models and patients. Advances in ophthalmic imaging instrumentation have enabled visualization of the retina at an unprecedented resolution. These developments have contributed toward early detection of the disease, monitoring of disease progression, and assessment of the therapeutic response. These powerful technologies are being further harnessed for clinical applications by configuring instrumentation to detect disease biomarkers in the retina. These biomarkers can be detected either by measuring the intrinsic imaging contrast in tissue, or by the engineering of targeted injectable contrast agents for imaging of the retina at the cellular and molecular level. Such approaches have promise in providing a window on dynamic disease processes in the retina such as inflammation and apoptosis, enabling translation of biomarkers identified in preclinical and clinical studies into useful diagnostic targets. We discuss recently reported and emerging imaging strategies for visualizing diverse cell types and molecular mediators of the retina in vivo during health and disease, and the potential for clinical translation of these approaches. PMID:23421501

  9. Exposing digital image forgeries by 3D reconstruction technology

    NASA Astrophysics Data System (ADS)

    Wang, Yongqiang; Xu, Xiaojing; Li, Zhihui; Liu, Haizhen; Li, Zhigang; Huang, Wei

    2009-11-01

    Digital images are easy to tamper with and edit due to the availability of powerful image processing and editing software. In particular, for images forged by photographing a picture of a scene, no manipulation is made after the photograph is taken, so the usual methods, such as digital watermarking and statistical correlation techniques, can hardly detect the traces of image tampering. According to the characteristics of such image forgeries, a method based on 3D reconstruction technology, which detects forgeries by examining the dimensional relationships of the objects appearing in the image, is presented in this paper. This detection method includes three steps. In the first step, all the image parameters are calibrated and each crucial object in the image is chosen and matched. In the second step, the 3D coordinates of each object are calculated by bundle adjustment. In the final step, the dimensional relationships of the objects are analyzed. Experiments were designed to test this detection method; the 3D reconstruction and the forged-image 3D reconstruction were computed independently. Test results show that the fabricated character of digital forgeries can be identified intuitively by this method.

  10. SNR analysis of 3D magnetic resonance tomosynthesis (MRT) imaging

    NASA Astrophysics Data System (ADS)

    Kim, Min-Oh; Kim, Dong-Hyun

    2012-03-01

    In conventional 3D Fourier transform (3DFT) MR imaging, the signal-to-noise ratio (SNR) is governed by the well-known relationship of being proportional to the voxel size and to the square root of the imaging time. Here, we introduce an alternative 3D imaging approach, termed MRT (Magnetic Resonance Tomosynthesis), which can generate a set of tomographic MR images similar to the multiple 2D projection images used in x-ray tomosynthesis. A multiple-oblique-view (MOV) pulse sequence is designed to acquire the tomography-like images used in the tomosynthesis process, and an iterative back-projection (IBP) reconstruction method is used to reconstruct 3D images. SNR analysis shows that the resolution-SNR tradeoff is not governed by the same relationship as in the typical 3DFT MR imaging case. The proposed method provides a higher SNR than the conventional 3D imaging method, at the cost of a partial loss of slice-direction resolution. It is expected that this method can be useful in extremely low SNR cases.
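
    The conventional 3DFT relationship quoted above can be made concrete with a minimal sketch. The function below is illustrative only (it encodes the standard proportionality, not the MRT analysis in the paper), and all numbers are hypothetical.

      # Minimal sketch of the conventional 3DFT SNR relation: SNR is proportional
      # to voxel volume and to the square root of total acquisition time.
      # Reference values and examples are illustrative, not from the paper.
      import math

      def relative_snr(voxel_volume_mm3: float, scan_time_s: float,
                       ref_volume_mm3: float = 1.0, ref_time_s: float = 300.0) -> float:
          """Return SNR relative to a reference voxel volume and scan time."""
          return (voxel_volume_mm3 / ref_volume_mm3) * math.sqrt(scan_time_s / ref_time_s)

      # Halving the voxel volume at fixed scan time halves SNR; recovering it by
      # averaging alone would require four times the acquisition time.
      print(relative_snr(0.5, 300.0))   # 0.5
      print(relative_snr(0.5, 1200.0))  # 1.0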

  11. Building 3D scenes from 2D image sequences

    NASA Astrophysics Data System (ADS)

    Cristea, Paul D.

    2006-05-01

    Sequences of 2D images, taken by a single moving video receptor, can be fused to generate a 3D representation. This dynamic stereopsis exists in birds and reptiles, whereas the static binocular stereopsis is common in mammals, including humans. Most multimedia computer vision systems for stereo image capture, transmission, processing, storage and retrieval are based on the concept of binocularity. As a consequence, their main goal is to acquire, conserve and enhance pairs of 2D images able to generate a 3D visual perception in a human observer. Stereo vision in birds is based on the fusion of images captured by each eye, with previously acquired and memorized images from the same eye. The process goes on simultaneously and conjointly for both eyes and generates an almost complete all-around visual field. As a consequence, the baseline distance is no longer fixed, as in the case of binocular 3D view, but adjustable in accordance with the distance to the object of main interest, allowing a controllable depth effect. Moreover, the synthesized 3D scene can have a better resolution than each individual 2D image in the sequence. Compression of 3D scenes can be achieved, and stereo transmissions with lower bandwidth requirements can be developed.

  12. Making 3D binary digital images well-composed

    Microsoft Academic Search

    Marcelo Siqueira; Longin Jan Latecki; Jean Gallier

    2004-01-01

    A 3D binary digital image is said to be well-composed if and only if the set of points in the faces shared by the voxels of foreground and background points of the image is a 2D manifold. Well-composed images enjoy important topological and geometric properties; in particular, there is only one type of connected component in any well-composed image, as

  13. Prostate Mechanical Imaging: 3-D Image Composition and Feature Calculations

    PubMed Central

    Egorov, Vladimir; Ayrapetyan, Suren; Sarvazyan, Armen P.

    2008-01-01

    We have developed a method and a device entitled prostate mechanical imager (PMI) for the real-time imaging of prostate using a transrectal probe equipped with a pressure sensor array and position tracking sensor. PMI operation is based on measurement of the stress pattern on the rectal wall when the probe is pressed against the prostate. Temporal and spatial changes in the stress pattern provide information on the elastic structure of the gland and allow two-dimensional (2-D) and three-dimensional (3-D) reconstruction of prostate anatomy and assessment of prostate mechanical properties. The data acquired allow the calculation of prostate features such as size, shape, nodularity, consistency/hardness, and mobility. The PMI prototype has been validated in laboratory experiments on prostate phantoms and in a clinical study. The results obtained on model systems and in vivo images from patients prove that PMI has potential to become a diagnostic tool that could largely supplant DRE through its higher sensitivity, quantitative record storage, ease-of-use and inherent low cost. PMID:17024836

  14. Retinal segmentation using multicolor laser imaging.

    PubMed

    Sergott, Robert C

    2014-09-01

    Spectral-domain optical coherence tomography (SD-OCT) changed 3 worlds: clinical care, clinical research, and the regulatory environment of phases 2, 3, and 4 pharmaceutical and surgical trials. OCT is now undergoing another transformation with multicolor technology, which acquires images using data from 3 simultaneous lasers: red, green, and blue, taking advantage of the different wavelengths of each of these colors to most precisely image 3 different zones of the retina. Rather than seeing only the surface of the retina and optic disc and any large lesions in the deeper retina, this technology provides a topographic map of the outer (red), mid (green), and inner (blue) retina somewhat similar to what is observed with fundus autofluorescence of deep retina, retinal pigment epithelium, and choroid. Multicolor imaging will supplement and help to define what is observed with traditional fundus photography and SD-OCT. In addition, it may demonstrate abnormalities when routine photography is normal and when SD-OCT findings are equivocal. This review will illustrate the basic principles of multicolor imaging and will show clinical examples of how this technique can further define retinal and optic nerve pathology. PMID:25133967

  15. A 3D surface imaging system for assessing human obesity

    NASA Astrophysics Data System (ADS)

    Xu, B.; Yu, W.; Yao, M.; Yao, X.; Li, Q.; Pepper, M. R.; Freeland-Graves, J. H.

    2009-08-01

    The increasing prevalence of obesity suggests a need to develop a convenient, reliable and economical tool for assessment of this condition. Three-dimensional (3D) body surface imaging has emerged as an exciting technology for estimation of body composition. This paper presents a new 3D body imaging system, which was designed for enhanced portability, affordability, and functionality. In this system, stereo vision technology was used to satisfy the requirements for a simple hardware setup and fast image acquisitions. The portability of the system was created via a two-stand configuration, and the accuracy of body volume measurements was improved by customizing stereo matching and surface reconstruction algorithms that target specific problems in 3D body imaging. Body measurement functions dedicated to body composition assessment also were developed. The overall performance of the system was evaluated in human subjects by comparison to other conventional anthropometric methods, as well as air displacement plethysmography, for body fat assessment.

  16. Getting in touch—3D printing in Forensic Imaging

    Microsoft Academic Search

    Lars Chr. Ebert; Michael J. Thali; Steffen Ross

    2011-01-01

    With the increasing use of medical imaging in forensics, as well as the technological advances in rapid prototyping, we suggest combining these techniques to generate displays of forensic findings. We used computed tomography (CT), CT angiography, magnetic resonance imaging (MRI) and surface scanning with photogrammetry in conjunction with segmentation techniques to generate 3D polygon meshes. Based on these data sets,

  17. Computer-Generated Image Holograms for 3D-Display

    Microsoft Academic Search

    Andreas Jendral; Ralf Bräuer; Olof Bryngdahl

    1995-01-01

    Computer-generated image holograms are particularly useful for 3D-display applications. We discuss the properties of the reconstructions obtained with this hologram type. Conditions are given which must be fulfilled to ensure a disturbance free reconstruction. A new efficient algorithm is presented which implements the hidden surface effect for synthetic image holograms.

  18. 3-D Depth Reconstruction from a Single Still Image

    Microsoft Academic Search

    Ashutosh Saxena; Sung H. Chung; Andrew Y. Ng

    2008-01-01

    We consider the task of 3-d depth estimation from a single still image. We take a supervised learning approach to this problem, in which we begin by collecting a training set of monocular images (of unstructured indoor and outdoor environments which include forests, sidewalks, trees, buildings, etc.) and their corresponding ground-truth depthmaps. Then, we apply supervised learning to

  19. 2D and 3D Elasticity Imaging Using Freehand Ultrasound

    E-print Network

    Drummond, Tom

    2D and 3D Elasticity Imaging Using Freehand Ultrasound, Joel Edward Lindop, Pembroke College. Excerpt: ... mechanical properties (e.g., stiffness) to which conventional forms of ultrasound, X-ray and magnetic resonance imaging are insensitive ... that occur between the acquisition of multiple ultrasound images. Likely applications include improved

  20. 3D image analysis of abdominal aortic aneurysm

    NASA Astrophysics Data System (ADS)

    Subasic, Marko; Loncaric, Sven; Sorantin, Erich

    2001-07-01

    In this paper we propose a technique for 3-D segmentation of abdominal aortic aneurysm (AAA) from computed tomography angiography (CTA) images. The output data (a 3-D model) from the proposed method can be used for measurement of aortic shape and dimensions. Knowledge of aortic shape and size is very important in planning the minimally invasive procedure, that is, for selection of an appropriate stent graft device for treatment of the AAA. The technique is based on a 3-D deformable model and uses the level-set algorithm for its implementation. The method performs 3-D segmentation of CTA images and extracts a 3-D model of the aortic wall. Once the 3-D model of the aortic wall is available, it is easy to perform all measurements required for appropriate stent graft selection. The method proposed in this paper uses the level-set algorithm for deformable models instead of the classical snake algorithm. The main advantage of the level-set algorithm is that it enables easy segmentation of complex structures, overcoming most of the drawbacks of the classical approach. We have extended the deformable model to incorporate a priori knowledge about the shape of the AAA, which helps direct the evolution of the deformable model so that it correctly segments the aorta. The algorithm has been implemented in the IDL and C languages. Experiments have been performed using real patient CTA images and have shown good results.
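
    To illustrate the flavour of level-set-style segmentation, the sketch below uses scikit-image's morphological Chan-Vese implementation on a synthetic tube-like volume. It is a generic stand-in under assumed data, not the authors' shape-constrained level-set method, and the synthetic "CTA" volume is hypothetical.

      # Illustrative level-set-style segmentation with scikit-image's morphological
      # Chan-Vese; a generic stand-in, not the paper's algorithm.
      import numpy as np
      from skimage.segmentation import morphological_chan_vese, checkerboard_level_set

      # Synthetic volume: a bright tube (vessel-like) in a noisy background.
      z, y, x = np.mgrid[0:64, 0:64, 0:64]
      volume = np.exp(-(((y - 32) ** 2 + (x - 32) ** 2) / 50.0)) \
               + 0.1 * np.random.rand(64, 64, 64)

      init = checkerboard_level_set(volume.shape, square_size=8)
      segmentation = morphological_chan_vese(volume, 40, init_level_set=init, smoothing=2)

      print(segmentation.shape, segmentation.dtype, segmentation.sum())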

  1. A microfabricated 3-D stem cell delivery scaffold for retinal regenerative therapy

    E-print Network

    Sodha, Sonal

    2009-01-01

    Diseases affecting the retina, such as Age-related Macular Degeneration (AMD) and Retinitis Pigmentosa (RP), result in the degeneration of the photoreceptor cells and can ultimately lead to blindness in patients. There is ...

  2. 2D/3D Image Registration using Regression Learning

    PubMed Central

    Chou, Chen-Rui; Frederick, Brandon; Mageras, Gig; Chang, Sha; Pizer, Stephen

    2013-01-01

    In computer vision and image analysis, image registration between 2D projections and a 3D image that achieves high accuracy and near real-time computation is challenging. In this paper, we propose a novel method that can rapidly detect an object’s 3D rigid motion or deformation from a 2D projection image or a small set thereof. The method is called CLARET (Correction via Limited-Angle Residues in External Beam Therapy) and consists of two stages: registration preceded by shape space and regression learning. In the registration stage, linear operators are used to iteratively estimate the motion/deformation parameters based on the current intensity residue between the target projection(s) and the digitally reconstructed radiograph(s) (DRRs) of the estimated 3D image. The method determines the linear operators via a two-step learning process. First, it builds a low-order parametric model of the image region’s motion/deformation shape space from its prior 3D images. Second, using learning-time samples produced from the 3D images, it formulates the relationships between the model parameters and the co-varying 2D projection intensity residues by multi-scale linear regressions. The calculated multi-scale regression matrices yield the coarse-to-fine linear operators used in estimating the model parameters from the 2D projection intensity residues in the registration. The method’s application to Image-guided Radiation Therapy (IGRT) requires only a few seconds and yields good results in localizing a tumor under rigid motion in the head and neck and under respiratory deformation in the lung, using one treatment-time imaging 2D projection or a small set thereof. PMID:24058278

  3. Adaptive optics with pupil tracking for high resolution retinal imaging

    E-print Network

    Dainty, Chris

    Abstract excerpt: Adaptive optics, when integrated into retinal imaging systems, compensates for rapidly changing ocular aberrations ... with a compact adaptive optics flood illumination fundus camera where it was possible to compensate ...

  4. Segmentation of Retinal Arteries in Adaptive Optics Images

    E-print Network

    Boyer, Edmond

    In this paper, we present a method for automatically segmenting the walls of retinal arteries in adaptive optics images. Index terms include adaptive optics, approximate parallelism, and retina imaging. From the introduction: arterial hypertension (AH) and diabetic

  5. Single 3D cell segmentation from optical CT microscope images

    NASA Astrophysics Data System (ADS)

    Xie, Yiting; Reeves, Anthony P.

    2014-03-01

    The automated segmentation of the nucleus and cytoplasm regions in 3D optical CT microscope images has been achieved with two methods, a global threshold gradient based approach and a graph-cut approach. For the first method, the first two peaks of a gradient figure of merit curve are selected as the thresholds for cytoplasm and nucleus segmentation. The second method applies a graph-cut segmentation twice: the first identifies the nucleus region and the second identifies the cytoplasm region. Image segmentation of single cells is important for automated disease diagnostic systems. The segmentation methods were evaluated with 200 3D images consisting of 40 samples of 5 different cell types. The cell types consisted of columnar, macrophage, metaplastic and squamous human cells and cultured A549 cancer cells. The segmented cells were compared with both 2D and 3D reference images and the quality of segmentation was determined by the Dice Similarity Coefficient (DSC). In general, the graph-cut method had a superior performance to the gradient-based method. The graph-cut method achieved an average DSC of 86% and 72% for nucleus and cytoplasm segmentations respectively for the 2D reference images and 83% and 75% for the 3D reference images. The gradient method achieved an average DSC of 72% and 51% for nucleus and cytoplasm segmentation for the 2D reference images and 71% and 51% for the 3D reference images. The DSC of cytoplasm segmentation was significantly lower than for the nucleus since the cytoplasm was not differentiated as well by image intensity from the background.
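
    The Dice Similarity Coefficient used in this evaluation is straightforward to compute for binary masks: DSC = 2|A ∩ B| / (|A| + |B|). The sketch below is a generic implementation on hypothetical masks, not the study's data.

      # Dice Similarity Coefficient (DSC) for two binary segmentation masks.
      import numpy as np

      def dice_coefficient(seg: np.ndarray, ref: np.ndarray) -> float:
          seg = seg.astype(bool)
          ref = ref.astype(bool)
          denom = seg.sum() + ref.sum()
          if denom == 0:
              return 1.0  # both masks empty: treat as perfect agreement
          return 2.0 * np.logical_and(seg, ref).sum() / denom

      # Illustrative 3D masks (hypothetical, not from the study).
      a = np.zeros((32, 32, 32), bool); a[8:24, 8:24, 8:24] = True
      b = np.zeros((32, 32, 32), bool); b[10:26, 8:24, 8:24] = True
      print(round(dice_coefficient(a, b), 3))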

  6. Integrated optical 3D digital imaging based on DSP scheme

    NASA Astrophysics Data System (ADS)

    Wang, Xiaodong; Peng, Xiang; Gao, Bruce Z.

    2008-03-01

    We present a scheme for integrated optical 3-D digital imaging (IO3DI) based on a digital signal processor (DSP), which can acquire range images independently, without PC support. The scheme uses a parallel hardware structure built around the DSP and a field programmable gate array (FPGA) to realize 3-D imaging, and adopts phase measurement profilometry. To pipeline the fringe projection, image acquisition, and fringe pattern analysis, we present a multi-threaded application program developed under the DSP/BIOS RTOS (real-time operating system); because the RTOS provides a preemptive kernel and a powerful configuration tool, we are able to achieve real-time scheduling and synchronization. To accelerate automatic fringe analysis and phase unwrapping, we employ software optimization techniques. The proposed scheme reaches a performance of 39.5 f/s (frames per second), so it is well suited to real-time fringe-pattern analysis and fast 3-D imaging. Experimental results are presented to show the validity of the proposed scheme.

  7. Optimized Bayes variational regularization prior for 3D PET images.

    PubMed

    Rapisarda, Eugenio; Presotto, Luca; De Bernardi, Elisabetta; Gilardi, Maria Carla; Bettinardi, Valentino

    2014-09-01

    A new prior for variational Maximum a Posteriori regularization is proposed to be used in a 3D One-Step-Late (OSL) reconstruction algorithm accounting also for the Point Spread Function (PSF) of the PET system. The new regularization prior strongly smoothes background regions, while preserving transitions. A detectability index is proposed to optimize the prior. The new algorithm has been compared with different reconstruction algorithms such as 3D-OSEM+PSF, 3D-OSEM+PSF+post-filtering and 3D-OSL with a Gauss-Total Variation (GTV) prior. The proposed regularization allows controlling noise, while maintaining good signal recovery; compared to the other algorithms it demonstrates a very good compromise between an improved quantitation and good image quality. PMID:24958594

  8. Segmentation, registration,and selective watermarking of retinal images 

    E-print Network

    Wu, Di

    2006-08-16

    In this dissertation, I investigated some fundamental issues related to medical image segmentation, registration, and watermarking. I used color retinal fundus images to perform my study because of the rich representation ...

  9. Segmentation, registration,and selective watermarking of retinal images

    E-print Network

    Wu, Di

    2006-08-16

    ... algorithms to retinal images is a rapidly developing field, which has led to great advancements in retinal structure analysis. Automated methods can help detection and control of retinal diseases such as diabetic retinopathy. Screening for diabetic retinopathy by automated algorithms could reduce the occurrence of blindness by 50% [1][2] and lessen the expenses associated with examinations. The automated algorithms need to be able to screen patients for diabetic retinopathy and other conditions...

  10. 3D reconstruction based on CT image and its application

    NASA Astrophysics Data System (ADS)

    Zhang, Jianxun; Zhang, Mingmin

    2004-03-01

    Reconstructing the 3-D model of the liver and its internal piping system, and simulating the liver surgical operation, can increase the accuracy and safety of liver surgery, minimize the surgical wound, shorten operation time, increase the success rate of the operation, reduce medical expenses, and promote patient recovery. This paper describes the technology and methods by which the authors construct the 3-D model of the liver and its internal piping system from CT images and simulate the liver surgical operation. A direct volume rendering method establishes the 3-D model of the liver. Under the OPENGL environment, a space point rendering method is adopted to display the liver's internal piping system and to simulate the liver surgical operation. Finally, the wavelet transform method is adopted to compress the medical image data.

  11. Reconstruction of 3D scenes from sequences of images

    NASA Astrophysics Data System (ADS)

    Niu, Bei; Sang, Xinzhu; Chen, Duo; Cai, Yuanfa

    2013-08-01

    Reconstruction of three-dimensional (3D) scenes is an active research topic in the fields of computer vision and 3D display. It is a challenge to model 3D objects rapidly and effectively. A 3D model can be extracted from multiple images. The system only requires a sequence of images taken with cameras, without knowing the camera parameters, which provides a high degree of flexibility. We focus on quickly merging the point cloud of the object from depth map sequences. The whole system combines algorithms from different areas of computer vision, such as camera calibration, stereo correspondence, point cloud splicing, and surface reconstruction. The procedure of 3D reconstruction is decomposed into a number of successive steps. Firstly, image sequences are captured by a camera moving freely around the object. Secondly, the scene depth is obtained by a non-local stereo matching algorithm. Pairwise matching is realized with the Scale Invariant Feature Transform (SIFT) algorithm. An initial matching is made for the first two images of the sequence. For each subsequent image, which is processed against the previous image, the points of interest corresponding to those in previous images are refined or corrected, and the vertical parallax between the images is eliminated. The next step is camera calibration, in which the intrinsic and external parameters of the camera are calculated, giving the relative position and orientation of the camera. A sequence of depth maps is acquired using a non-local cost aggregation method for stereo matching. A point cloud sequence is then obtained from the scene depths, and a point cloud model is assembled from this sequence using the external camera parameters. The point cloud model is then approximated by a triangular wire-frame mesh to reduce geometric complexity and to tailor the model to the requirements of computer graphics visualization systems. Finally, the texture is mapped onto the wire-frame model, which can also be used for 3D display. According to the experimental results, we can reconstruct a 3D point cloud model more quickly and efficiently than other methods.
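
    The pairwise SIFT matching step mentioned above can be sketched with OpenCV as follows. This is only the feature-matching stage under assumed placeholder file names; calibration, depth estimation, point cloud merging, and meshing are not shown and are not claimed to follow the paper's pipeline.

      # Hedged sketch of pairwise SIFT matching with Lowe's ratio test.
      import cv2

      img1 = cv2.imread("frame_000.png", cv2.IMREAD_GRAYSCALE)  # placeholder paths
      img2 = cv2.imread("frame_001.png", cv2.IMREAD_GRAYSCALE)

      sift = cv2.SIFT_create()
      kp1, des1 = sift.detectAndCompute(img1, None)
      kp2, des2 = sift.detectAndCompute(img2, None)

      matcher = cv2.BFMatcher(cv2.NORM_L2)
      pairs = [p for p in matcher.knnMatch(des1, des2, k=2) if len(p) == 2]
      good = [m for m, n in pairs if m.distance < 0.75 * n.distance]  # ratio test
      print(f"{len(good)} putative correspondences between the two frames")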

  12. Classification of left and right eye retinal images

    NASA Astrophysics Data System (ADS)

    Tan, Ngan Meng; Liu, Jiang; Wong, Damon W. K.; Zhang, Zhuo; Lu, Shijian; Lim, Joo Hwee; Li, Huiqi; Wong, Tien Yin

    2010-03-01

    Retinal image analysis is used by clinicians to diagnose and identify any pathologies present in a patient's eye. The development and application of computer-aided diagnosis (CAD) systems in medical imaging have been increasing rapidly over the years. In this paper, we propose a system to classify left and right eye retinal images automatically. We describe a two-pronged approach that classifies left and right retinal images using the position of the central retinal vessel within the optic disc and the location of the macula with respect to the optic nerve head. We present a framework to automatically identify the locations of the key anatomical structures of the eye: the macula, the optic disc, the central retinal vessels within the optic disc, and the ISNT regions. An SVM model for left and right eye retinal image classification is trained on features obtained from the detection and segmentation. An advantage of this is that other image processing algorithms can be focused on regions where diseases or pathologies are more likely to occur, thereby increasing the efficiency and accuracy of the retinal CAD system and of pathology detection. We have tested our system on 102 retinal images, consisting of 51 left and 51 right images, and achieved an accuracy of 94.1176%. The high experimental accuracy and robustness of this system demonstrate its potential to be integrated with other retinal CAD systems, such as ARGALI, to provide a priori information in automatic mass screening and diagnosis of retinal diseases.
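
    A minimal sketch of the SVM training step is given below. The feature vectors here are random placeholders standing in for the anatomical features (optic disc, central retinal vessels, macula, ISNT regions) described in the abstract; only the general recipe is illustrated.

      # Sketch of training an SVM on per-image feature vectors for left/right
      # classification; features are hypothetical placeholders.
      import numpy as np
      from sklearn.model_selection import cross_val_score
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler
      from sklearn.svm import SVC

      rng = np.random.default_rng(0)
      X = rng.normal(size=(102, 6))      # 102 images, 6 hypothetical features
      y = np.repeat([0, 1], 51)          # 0 = left eye, 1 = right eye

      clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
      scores = cross_val_score(clf, X, y, cv=5)
      print("cross-validated accuracy:", scores.mean())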

  13. 3D Image Viz-Analysis Tools and V3D Development Hackathon, July 26 -August 8, 2010

    E-print Network

    Peng, Hanchuan

    Schedule excerpt from the 3D Image Viz-Analysis Tools and V3D Development Hackathon held at Janelia Farm, July 26 - August 8, 2010; named participants include Zongcai Ruan and Luis Ibanez.

  14. The role of endothelial cells in the retinal stem and progenitor cell niche within a 3D engineered hydrogel matrix.

    PubMed

    Aizawa, Yukie; Shoichet, Molly S

    2012-07-01

    Cell-cell interactions are critical to understanding functional tissues. A number of stem cell populations have been shown to receive key regulatory information from endothelial cells (ECs); however, the role of ECs in the retinal stem and progenitor cell (RSPC) niche has been largely unexplored. To gain greater insight into the role of ECs on RSPC fate, a three-dimensional (3D) co-culture model, incorporating cell-cell interactions, was designed by covalently-modifying agarose hydrogels with growth factors and cell-adhesive peptides in defined volumes. Therein ECs adopted tubular-like morphologies similar to those observed in vivo, but not observed in two-dimensional (2D) cultures. Unexpectedly, ECs inhibited proliferation and differentiation of RSPCs, revealing, for the first time, the possible role of ECs on RSPC fate. This 3D hydrogel scaffold provides a simple, reproducible and versatile method with which to answer biological questions related to the cellular microenvironment. PMID:22560669

  15. Volume Morphing Methods for Landmark Based 3D Image Deformation

    E-print Network

    Fang, Shiaofen

    Volume Morphing Methods for Landmark Based 3D Image Deformation Shiaofen Fang Raghu Raghavan The Johns Hopkins University School of Medicine, Baltimore, MD, 21205 ABSTRACT Volume morphing algorithms are developed for morphing transformations that create new forms and simulate shape deformation

  16. Landmine detection in high resolution 3D GPR images

    Microsoft Academic Search

    E. E. Ligthart; A. G. Yarovoy; F. Roth; L. P. Ligthart

    2004-01-01

    This work describes a novel landmine detection and classification algorithm for high resolution 3D ground penetrating radar (GPR) images. The algorithm was tested on data measured with a video impulse radar (VIR) system developed by the International Research Centre for Telecommunications-transmission and Radar (IRCTR). The algorithm detected all landmines (including the difficult to detect M14 mines) and classified almost all

  17. Two dimensional arrays for 3-D ultrasound imaging

    Microsoft Academic Search

    Stephen W. Smith; Warren Lee; Edward D. Light; Jesse T. Yen; Patrick Wolf; Salim Idriss

    2002-01-01

    Phased array ultrasound transducers have been fabricated in our laboratories at Duke University since 1970. In 1986, we began the development of 2-D arrays with a 20 × 20 element Mills cross array including 64 active channels operating at 1 MHz which produced the first real time 3-D ultrasound images. In our more recent arrays we have progressed to 108

  18. Mapping textures on 3D geometric model using reflectance image

    Microsoft Academic Search

    Ryo Kurazume; Ko Nishino; Mark D. Wheeler; Katsushi Ikeuchi

    2005-01-01

    Texture mapping on scanned objects, that is, the method to map current color images on a 3D geometric model measured by a range sensor, is a key technique of photometric modeling for virtual reality. Usually range and color images are obtained from different viewing positions, through two independent range and color sensors. Thus, in order to map

  19. Target penetration of laser-based 3D imaging systems

    Microsoft Academic Search

    Geraldine S. Cheok; Kamel S. Saidi; Marek Franaszek

    2009-01-01

    The ASTM E57.02 Test Methods Subcommittee is developing a test method to evaluate the ranging performance of a 3D imaging system. The test method will involve either measuring the distance between two targets or between an instrument and a target. The first option is necessary because some instruments cannot be centered over a point and will require registration of the

  20. 3D imaging lidar for lunar robotic exploration

    NASA Astrophysics Data System (ADS)

    Hussein, Marwan W.; Tripp, Jeffrey W.

    2009-05-01

    Part of the requirements of the future Constellation program is to optimize lunar surface operations and reduce hazards to astronauts. Toward this end, many robotic platforms, rovers in particular, are being sought to carry out a multitude of missions involving survey of potential EVA sites, surface reconnaissance, path planning, and obstacle detection and classification. 3D imaging lidar technology provides an enabling capability that allows fast, accurate and detailed collection of three-dimensional information about the rover's environment. The lidar images the region of interest by scanning a laser beam and measuring the pulse time-of-flight and the bearing. The accumulated set of laser ranges and bearings constitutes the three-dimensional image. As part of the ongoing NASA Ames Research Center activities in lunar robotics, the utility of 3D imaging lidar was evaluated by testing Optech's ILRIS-3D lidar on board the K-10 Red rover during the recent Human-Robotics Systems (HRS) field trials in Moses Lake, WA. This paper examines the results of the ILRIS-3D trials, presents the data obtained, and discusses its application in lunar surface robotic surveying and scouting.
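
    The basic geometry described above (pulse time of flight plus beam bearing yielding a 3D point) can be sketched as below. The angles and timing are illustrative values, not ILRIS-3D specifications.

      # Sketch: convert a scanned lidar return into a 3D point.
      import math

      C = 299_792_458.0  # speed of light, m/s

      def return_to_xyz(time_of_flight_s: float, azimuth_rad: float, elevation_rad: float):
          """Range from round-trip time, then spherical-to-Cartesian conversion."""
          r = 0.5 * C * time_of_flight_s
          x = r * math.cos(elevation_rad) * math.cos(azimuth_rad)
          y = r * math.cos(elevation_rad) * math.sin(azimuth_rad)
          z = r * math.sin(elevation_rad)
          return x, y, z

      print(return_to_xyz(66.7e-9, math.radians(10.0), math.radians(-2.0)))  # ~10 m range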

  1. Facial image comparison using 3D techniques Arnout Ruifroka

    E-print Network

    Veltkamp, Remco

    ... was not geared towards facial recognition, however, and used a relatively low-resolution scanning system. ... Techniques using three or more landmark points on the face have been proposed for matching ... Using 3D models one can deal with one main problem in 2D face recognition: the pose ...

  2. Nonlinear Probabilistic Estimation of 3D Geometry from Images

    E-print Network

    constraints involving non-Euclidean domains, such as those found in 3-D vision geometry problems. ... are geometrically poorly leveraged by the image features, involve nonlinear relationships, and have non-Euclidean state domains. To model such domains, a manifold-tangent framework is developed which allows non-Euclidean

  3. 3D Measurements in Images using CAD Models George Vosselman

    E-print Network

    Vosselman, George

    Keywords: Measurement, Matching, CAD-Models. Abstract excerpt: semi-automatic measurement of objects with regular ... are summarised in section six. Related work includes manipulation of wire frames and the tools available in CAD packages

  4. Enhancing retinal images by extracting structural information

    NASA Astrophysics Data System (ADS)

    Molodij, G.; Ribak, E. N.; Glanc, M.; Chenegros, G.

    2014-02-01

    High-resolution imaging of the retina has significant importance for science: physics and optics, biology, and medicine. The enhancement of images with poor contrast and the detection of faint structures require objective methods for assessing perceptual image quality. Under the assumption that human visual perception is highly adapted for extracting structural information from a scene, we introduce a framework for quality assessment based on the degradation of structural information. We implemented a new processing technique on a long sequence of retinal images of subjects with normal vision. We were able to perform a precise shift-and-add at the sub-pixel level in order to resolve the structures of the size of single cells in the living human retina. Last, we quantified the restoration reliability of the distorted images using an improved quality assessment. To that purpose, we used the single image restoration method based on the ergodic principle, which has originated in solar astronomy, to deconvolve aberrations after adaptive optics compensation.
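
    The sub-pixel shift-and-add step mentioned above can be illustrated with upsampled phase correlation followed by interpolated shifting and averaging. This is a generic registration recipe under assumed inputs, not the authors' exact pipeline or their ergodic-principle deconvolution.

      # Hedged sketch of sub-pixel shift-and-add over a sequence of frames.
      import numpy as np
      from scipy.ndimage import shift as nd_shift
      from skimage.registration import phase_cross_correlation

      def shift_and_add(frames, upsample_factor=20):
          """frames: iterable of equal-shape 2D arrays; returns the co-added image."""
          frames = [np.asarray(f, dtype=float) for f in frames]
          ref = frames[0]
          acc = np.zeros_like(ref)
          for f in frames:
              # Sub-pixel offset of this frame relative to the reference.
              offset, _, _ = phase_cross_correlation(ref, f, upsample_factor=upsample_factor)
              acc += nd_shift(f, offset)  # shift back via spline interpolation
          return acc / len(frames)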

  5. Method for 3D Rendering Based on Intersection Image Display Which Allows Representation of Internal Structure of 3D objects

    Microsoft Academic Search

    Kohei Arai

    2013-01-01

    A method for 3D rendering based on intersection image display, which allows representation of internal structure, is proposed. The proposed method is essentially different from conventional volume rendering based on a solid model, which allows representation of just the surface of 3D objects. By using afterimages, internal structure can be displayed through exchanging the intersection images with internal structure for the

  6. Automatic segmentation of the cerebellum of fetuses on 3D ultrasound images, using a 3D Point Distribution Model.

    PubMed

    Gutierrez Becker, Benjamin; Arambula Cosio, Fernando; Guzman Huerta, Mario E; Benavides-Serralde, Jesus Andres

    2010-01-01

    Analysis of fetal biometric parameters on ultrasound images is widely performed and is essential for estimating the gestational age as well as the fetal growth pattern. The use of three-dimensional ultrasound (3D US) is preferred over other tomographic modalities such as CT or MRI, due to its inherent safety and availability. However, the image quality of 3D US is not as good as that of MRI, and therefore there is little work on the automatic segmentation of anatomic structures in 3D US of fetal brains. In this work we present preliminary results of the development of a 3D Point Distribution Model (PDM) for automatic segmentation of the cerebellum in 3D US of the fetal brain. The model is adjusted to a fetal 3D ultrasound using a genetic algorithm which optimizes a model fitting function. Preliminary results show that the reported approach is able to automatically segment the cerebellum in 3D ultrasounds of fetal brains. PMID:21096244

  7. 3D-visual laser-diode-based photoacoustic imaging.

    PubMed

    Zeng, Lvming; Liu, Guodong; Yang, Diwu; Ji, Xuanrong

    2012-01-16

    We present a 3D-visual laser-diode-based photoacoustic imaging (LD-PAI) system with a pulsed semiconductor laser source, which has the properties of being inexpensive, portable, and durable. The laser source was operated at a wavelength of 905 nm with a repetition rate of 0.8 kHz. The energy density on the sample surface is about 2.35 mJ/cm² with a pulse energy as low as 5.6 μJ. By raster-scanning, preliminary 3D volumetric renderings of the knotted and helical blood vessel phantoms have been visualized integrally with an axial resolution of 1.1 mm and a lateral resolution of 0.5 mm, and typical 2D photoacoustic image slices with different thickness and orientation were produced with clarity for detailed comparison and analysis in 3D diagnostic visualization. In addition, the pulsed laser source was integrated with the optical lens group and the 3D adjustable rotational stage, with the result that the compact volume of the total radiation source is only 10 × 3 × 3 cm³. Our goal is to significantly reduce the costs and sizes of the deep 3D-visual PAI system for future producibility. PMID:22274468

  8. Imaging and visualization of 3-D cardiac electric activity

    Microsoft Academic Search

    Bin He; Dongsheng Wu

    2001-01-01

    Noninvasive imaging of cardiac electric activity is of importance for better understanding the underlying mechanisms and for aiding clinical diagnosis and intervention of cardiac abnormalities. We propose to image the three-dimensional (3-D) cardiac bioelectric source distribution from body-surface electrocardiograms. Cardiac electrical sources were modeled by a current dipole distribution throughout the entire myocardium, and estimated by

  9. 3D Correlative Imaging | High Resolution Electron Microscopy

    Cancer.gov

    One key area of interest for the lab has been to close the 3D imaging gap, finding ways to image whole cells and tissues at high resolution. Focused ion beam scanning electron microscopy (FIB-SEM, or otherwise known as ion abrasion scanning electron microscopy, IA-SEM) uses a scanning electron beam to image the face of a fixed, resin-embedded sample, and an ion beam to remove “slices” of the sample, resulting in a sequential stack of high resolution images.

  10. Stereotactic mammography imaging combined with 3D US imaging for image guided breast biopsy

    SciTech Connect

    Surry, K. J. M.; Mills, G. R.; Bevan, K.; Downey, D. B.; Fenster, A. [Imaging Research Labs, Robarts Research Institute, London, N6A 5K8 (Canada) and Department of Medical Biophysics, University of Western Ontario, London, N6A 5C1 (Canada); Imaging Research Labs, Robarts Research Institute, London, N6A 5K8 (Canada); Imaging Research Labs, Robarts Research Institute, London, N6A 5K8 (Canada) and Department of Radiology, London Health Sciences Centre, London, N6A 5K8 (Canada); Imaging Research Labs, Robarts Research Institute, London, N6A 5K8 (Canada) and Department of Medical Biophysics, University of Western Ontario, London, N6A 5C1 Canada

    2007-11-15

    Stereotactic X-ray mammography (SM) and ultrasound (US) guidance are both commonly used for breast biopsy. While SM provides three-dimensional (3D) targeting information and US provides real-time guidance, both have limitations. SM is a long and uncomfortable procedure and the US guided procedure is inherently two dimensional (2D), requiring a skilled physician for both safety and accuracy. The authors developed a 3D US-guided biopsy system to be integrated with, and to supplement SM imaging. Their goal is to be able to biopsy a larger percentage of suspicious masses using US, by clarifying ambiguous structures with SM imaging. Features from SM and US guided biopsy were combined, including breast stabilization, a confined needle trajectory, and dual modality imaging. The 3D US guided biopsy system uses a 7.5 MHz breast probe and is mounted on an upright SM machine for preprocedural imaging. Intraprocedural targeting and guidance was achieved with real-time 2D and near real-time 3D US imaging. Postbiopsy 3D US imaging allowed for confirmation that the needle was penetrating the target. The authors evaluated 3D US-guided biopsy accuracy of their system using test phantoms. To use mammographic imaging information, they registered the SM and 3D US coordinate systems. The 3D positions of targets identified in the SM images were determined with a target localization error (TLE) of 0.49 mm. The z component (x-ray tube to image) of the TLE dominated with a TLE_z of 0.47 mm. The SM system was then registered to 3D US, with a fiducial registration error (FRE) and target registration error (TRE) of 0.82 and 0.92 mm, respectively. Analysis of the FRE and TRE components showed that these errors were dominated by inaccuracies in the z component with a FRE_z of 0.76 mm and a TRE_z of 0.85 mm. A stereotactic mammography and 3D US guided breast biopsy system should include breast compression for stability and safety and dual modality imaging for target localization. The system will provide preprocedural x-ray mammography information in the form of SM imaging along with real-time US imaging for needle guidance to a target. 3D US imaging will also be available for targeting, guidance, and biopsy verification immediately postbiopsy.
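
    The fiducial registration error reported above can be illustrated with a generic rigid point-based registration (a Kabsch/Procrustes fit) and its RMS residual. The fiducial coordinates below are synthetic; computing TRE would additionally require held-out target points not used in the fit.

      # Sketch: rigid fit between corresponding fiducials and the resulting FRE.
      import numpy as np

      def rigid_fit(src, dst):
          """Least-squares rotation R and translation t mapping src -> dst."""
          src_c, dst_c = src.mean(0), dst.mean(0)
          H = (src - src_c).T @ (dst - dst_c)
          U, _, Vt = np.linalg.svd(H)
          D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
          R = Vt.T @ D @ U.T
          return R, dst_c - R @ src_c

      rng = np.random.default_rng(1)
      fid_us = rng.uniform(0, 50, size=(6, 3))                  # fiducials in 3D US coords (mm)
      t_true = np.array([5.0, -2.0, 1.0])
      fid_sm = fid_us + t_true + rng.normal(0, 0.3, (6, 3))     # noisy SM coordinates

      R, t = rigid_fit(fid_us, fid_sm)
      fre = np.sqrt(np.mean(np.sum((fid_us @ R.T + t - fid_sm) ** 2, axis=1)))
      print(f"FRE = {fre:.2f} mm")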

  11. Refraction Correction in 3D Transcranial Ultrasound Imaging

    PubMed Central

    Lindsey, Brooks D.; Smith, Stephen W.

    2014-01-01

    We present the first correction of refraction in three-dimensional (3D) ultrasound imaging using an iterative approach that traces propagation paths through a two-layer planar tissue model, applying Snell’s law in 3D. This approach is applied to real-time 3D transcranial ultrasound imaging by precomputing delays offline for several skull thicknesses, allowing the user to switch between three sets of delays for phased array imaging at the push of a button. Simulations indicate that refraction correction may be expected to increase sensitivity, reduce beam steering errors, and partially restore lost spatial resolution, with the greatest improvements occurring at the largest steering angles. Distorted images of cylindrical lesions were created by imaging through an acrylic plate in a tissue-mimicking phantom. As a result of correcting for refraction, lesions were restored to 93.6% of their original diameter in the lateral direction and 98.1% of their original shape along the long axis of the cylinders. In imaging two healthy volunteers, the mean brightness increased by 8.3% and showed no spatial dependency. PMID:24275538
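
    The geometric step named above, applying Snell's law in 3D, has a standard vector form. The sketch below is a generic illustration with made-up index values (for ultrasound, the ratio would correspond to sound speeds); it does not reproduce the paper's precomputed delay tables.

      # Sketch of Snell's law in 3D vector form: refract unit direction d at a
      # surface with unit normal n, going from medium n1 to medium n2.
      import numpy as np

      def refract(d, n, n1, n2):
          d = d / np.linalg.norm(d)
          n = n / np.linalg.norm(n)
          if np.dot(d, n) > 0:          # make the normal oppose the incident ray
              n = -n
          eta = n1 / n2
          cos_i = -np.dot(d, n)
          k = 1.0 - eta ** 2 * (1.0 - cos_i ** 2)
          if k < 0:
              return None               # total internal reflection
          return eta * d + (eta * cos_i - np.sqrt(k)) * n

      # Incident ray 20 degrees off normal, illustrative index ratio 1.0 -> 1.6.
      d = np.array([np.sin(np.radians(20)), 0.0, np.cos(np.radians(20))])
      print(refract(d, np.array([0.0, 0.0, -1.0]), 1.0, 1.6))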

  12. Large deformation 3D image registration in image-guided radiation therapy

    E-print Network

    Utah, University of

    Mark Foskey, Brad Davis. Excerpt: in radiation cancer therapy, the problem of organ motion over the course of treatment ... processing of serial 3D CT images used in image-guided radiation therapy. A major assumption in deformable

  13. Adaptive-optics SLO imaging combined with widefield OCT and SLO enables precise 3D localization of fluorescent cells in the mouse retina

    PubMed Central

    Zawadzki, Robert J.; Zhang, Pengfei; Zam, Azhar; Miller, Eric B.; Goswami, Mayank; Wang, Xinlei; Jonnal, Ravi S.; Lee, Sang-Hyuck; Kim, Dae Yu; Flannery, John G.; Werner, John S.; Burns, Marie E.; Pugh, Edward N.

    2015-01-01

    Adaptive optics scanning laser ophthalmoscopy (AO-SLO) has recently been used to achieve exquisite subcellular resolution imaging of the mouse retina. Wavefront sensing-based AO typically restricts the field of view to a few degrees of visual angle. As a consequence the relationship between AO-SLO data and larger scale retinal structures and cellular patterns can be difficult to assess. The retinal vasculature affords a large-scale 3D map on which cells and structures can be located during in vivo imaging. Phase-variance OCT (pv-OCT) can efficiently image the vasculature with near-infrared light in a label-free manner, allowing 3D vascular reconstruction with high precision. We combined widefield pv-OCT and SLO imaging with AO-SLO reflection and fluorescence imaging to localize two types of fluorescent cells within the retinal layers: GFP-expressing microglia, the resident macrophages of the retina, and GFP-expressing cone photoreceptor cells. We describe in detail a reflective afocal AO-SLO retinal imaging system designed for high resolution retinal imaging in mice. The optical performance of this instrument is compared to other state-of-the-art AO-based mouse retinal imaging systems. The spatial and temporal resolution of the new AO instrumentation was characterized with angiography of retinal capillaries, including blood-flow velocity analysis. Depth-resolved AO-SLO fluorescent images of microglia and cone photoreceptors are visualized in parallel with 469 nm and 663 nm reflectance images of the microvasculature and other structures. Additional applications of the new instrumentation are discussed. PMID:26114038

  14. Adaptive-optics SLO imaging combined with widefield OCT and SLO enables precise 3D localization of fluorescent cells in the mouse retina.

    PubMed

    Zawadzki, Robert J; Zhang, Pengfei; Zam, Azhar; Miller, Eric B; Goswami, Mayank; Wang, Xinlei; Jonnal, Ravi S; Lee, Sang-Hyuck; Kim, Dae Yu; Flannery, John G; Werner, John S; Burns, Marie E; Pugh, Edward N

    2015-06-01

    Adaptive optics scanning laser ophthalmoscopy (AO-SLO) has recently been used to achieve exquisite subcellular resolution imaging of the mouse retina. Wavefront sensing-based AO typically restricts the field of view to a few degrees of visual angle. As a consequence the relationship between AO-SLO data and larger scale retinal structures and cellular patterns can be difficult to assess. The retinal vasculature affords a large-scale 3D map on which cells and structures can be located during in vivo imaging. Phase-variance OCT (pv-OCT) can efficiently image the vasculature with near-infrared light in a label-free manner, allowing 3D vascular reconstruction with high precision. We combined widefield pv-OCT and SLO imaging with AO-SLO reflection and fluorescence imaging to localize two types of fluorescent cells within the retinal layers: GFP-expressing microglia, the resident macrophages of the retina, and GFP-expressing cone photoreceptor cells. We describe in detail a reflective afocal AO-SLO retinal imaging system designed for high resolution retinal imaging in mice. The optical performance of this instrument is compared to other state-of-the-art AO-based mouse retinal imaging systems. The spatial and temporal resolution of the new AO instrumentation was characterized with angiography of retinal capillaries, including blood-flow velocity analysis. Depth-resolved AO-SLO fluorescent images of microglia and cone photoreceptors are visualized in parallel with 469 nm and 663 nm reflectance images of the microvasculature and other structures. Additional applications of the new instrumentation are discussed. PMID:26114038

  15. Retinal Area Detector from Scanning Laser Ophthalmoscope (SLO) Images for Diagnosing Retinal Diseases.

    PubMed

    Haleem, Muhammad Salman; Han, Liangxiu; van Hemert, Jano; Li, Baihua; Fleming, Alan

    2014-08-26

    Scanning Laser Ophthalmoscopes (SLOs) can be used for early detection of retinal diseases. With the advent of the latest screening technology, the advantage of using the SLO is its wide Field of View (FOV), which can image a large part of the retina for better diagnosis of retinal diseases. On the other hand, during the imaging process, artefacts such as eyelashes and eyelids are imaged along with the retinal area, which poses the challenge of how to exclude them. In this paper, we propose a novel approach to automatically extract the true retinal area from an SLO image based on image processing and machine learning approaches. To reduce the complexity of the image processing tasks and provide a convenient primitive image pattern, we group pixels into regions, called superpixels, based on regional size and compactness. The framework then calculates image-based features reflecting textural and structural information and classifies between retinal area and artefacts. The experimental evaluation has shown good performance, with an overall accuracy of 92%. PMID:25167560
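
    The superpixel grouping step can be sketched with SLIC from scikit-image as a stand-in for the paper's regional grouping; the file name is a placeholder and the per-region features shown (mean intensity and area) only hint at the textural and structural features the paper uses.

      # Hedged sketch of superpixel grouping and per-region feature extraction.
      import numpy as np
      from skimage import io, segmentation, measure

      slo = io.imread("slo_image.png")                  # placeholder SLO frame
      if slo.ndim == 2:
          slo = np.stack([slo] * 3, axis=-1)            # make 3-channel for SLIC

      labels = segmentation.slic(slo, n_segments=400, compactness=10, start_label=1)
      regions = measure.regionprops(labels, intensity_image=slo[..., 0])

      # Per-superpixel features that would feed a retina-vs-artefact classifier.
      features = np.array([[r.mean_intensity, r.area] for r in regions])
      print(labels.max(), "superpixels,", features.shape, "feature matrix")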

  16. INTRODUCTION A 3D image of skeletal hard tissue can be obtained using

    E-print Network

    Nebel, Jean-Christophe

    Poster excerpt (Dr. B. S. Khambay): superimposition of the C3D stereophotographic image over the 3D spiral CT scan image of the skull shows that combining the two modalities, stereo photogrammetry and a 3D spiral CT scan, is possible; registration accuracy and image superimposition onto 3D CT scan images are discussed as the future of orthognathic surgery.

  17. Automated Detection of Optic Disc and Exudates in Retinal Images

    Microsoft Academic Search

    P. C. Siddalingaswamy; K. Gopalakrishna Prabhu

    Digital colour retinal images are used by ophthalmologists for the detection of many eye-related diseases such as diabetic retinopathy. These images are generated in large numbers during mass screening for the disease and may result in biased observation due to fatigue. An automated retinal image processing system could reduce the workload of the ophthalmologists and also assist them to

  18. Towards 3D map generation from digital aerial images

    NASA Astrophysics Data System (ADS)

    Zebedin, Lukas; Klaus, Andreas; Gruber-Geymayer, Barbara; Karner, Konrad

    This paper describes the fusion of information extracted from multispectral digital aerial images for highly automatic 3D map generation. The proposed approach integrates spectral classification and 3D reconstruction techniques. The multispectral digital aerial images consist of a high resolution panchromatic channel as well as lower resolution RGB and near infrared (NIR) channels and form the basis for information extraction. Our land use classification is a 2-step approach that uses RGB and NIR images for an initial classification and the panchromatic images as well as a digital surface model (DSM) for a refined classification. The DSM is generated from the high resolution panchromatic images of a specific photo mission. Based on the aerial triangulation using area and feature-based points of interest the algorithms are able to generate a dense DSM by a dense image matching procedure. Afterwards a true ortho image for classification, panchromatic or color input images can be computed. In a last step specific layers for buildings and vegetation are generated and the classification is updated.

  19. Texture blending on 3D models using casual images

    NASA Astrophysics Data System (ADS)

    Liu, Xingming; Liu, Xiaoli; Li, Ameng; Liu, Junyao; Wang, Huijing

    2013-12-01

    In this paper, a method for constructing a photorealistic textured model using a 3D structured light digitizer is presented. Our method acquires range images and texture images around the object; the range images are registered and integrated to construct a geometric model of the object. The system is calibrated and the poses of the texture camera are determined so that the relationship between the texture and the geometric model is established. After that, a global optimization is applied to assign compatible textures to adjacent surfaces, followed by a leveling procedure to remove artifacts due to varying lighting, the approximate geometric model, and so on. Lastly, we demonstrate the effect of our method by constructing a model of a real-world object.

  20. 3-D Reconstruction from Medical Images with Improved GVF Snake Model

    Microsoft Academic Search

    Jinyong Cheng; Yihui Liu; Li Bai

    2008-01-01

    3-D reconstruction from medical images is an important application of computer graphics and biomedicine image processing. Image segmentation is a crucial step in 3-D reconstruction. In this paper, an improved image segmentation method which is suitable for 3-D reconstruction is put forward. A 3-D reconstruction algorithm is used to reconstruct the 3-D model from images. First, rough edge is extracted

  1. 3-D Image Denoising By Local Smoothing And Nonparametric Partha Sarathi Mukherjee and Peihua Qiu

    E-print Network

    Qiu, Peihua

    Three-dimensional (3-D) images are becoming increasingly popular in image applications, such as magnetic resonance imaging (MRI), functional MRI (fMRI), and other image applications. Observed 3-D images often contain

  2. 3D DC/IP BOREHOLE-TO-BOREHOLE IMAGING

    NASA Astrophysics Data System (ADS)

    Milkereit, B.; Qian, W.; Bongajum, E. L.

    2009-12-01

    Our goal is the development of robust 3D DC/IP imaging technology for rock mass characterization. This work focuses on the use of multi-electrode array surface and borehole electric methods to build 3D conductivity and chargeability earth models. Over the past 3 years, we carried out field projects to evaluate the use of cross-borehole electrical methods for imaging subsurface conductive zones and to quantify chargeability effects. Several single borehole vertical resistivity profiles (VRP), borehole-to-borehole, and borehole-to-surface resistivity tomography (BRT) survey tests have been successfully conducted. The multichannel borehole DC/IP resistivity data acquisition system consists of multiple borehole cables, each with 24 electrodes which may act as either source or receiver. When a constant injection voltage is applied between electrodes, the boreholes need to be water filled so as the electrode array couples to the rock formation. The borehole cable design allows a seamless integration of borehole and surface measurements with or without simultaneous readings from surface electrodes. The system has the capacity to acquire more than 1000 full waveform resistance and chargeability readings per hour. We established a multi-step procedure for data acquisition, processing and interpretation. For the borehole-to-borehole application, we have successfully mapped conductive zones between boreholes up to 350m apart. Using at least two boreholes helps to constrain the direction (azimuth) of the imaged conductive zones. Borehole resistivity tomography test surveys were conducted to map three-dimensional massive sulfide zones between boreholes in the Sudbury area. Both surface and in-mine borehole acquisition geometries were tested. The 3D conductivity model for massive sulfides was derived from a four-borehole acquisition geometry. We continue to utilize the 3D IP (induced polarization) information in the inversion process and develop new 3D tomographic inversion schemes for arbitrary boreholes and surface electrode arrays.

  3. Advanced 3D imaging lidar concepts for long range sensing

    NASA Astrophysics Data System (ADS)

    Gordon, K. J.; Hiskett, P. A.; Lamb, R. A.

    2014-06-01

    Recent developments in 3D imaging lidar are presented. Long range 3D imaging using photon counting is now a possibility, offering a low-cost approach to integrated remote sensing with step changing advantages in size, weight and power compared to conventional analogue active imaging technology. We report results using a Geiger-mode array for time-of-flight, single photon counting lidar for depth profiling and determination of the shape and size of tree canopies and distributed surface reflections at a range of 9 km, with 4 μJ pulses at a frame rate of 100 kHz using a low-cost fibre laser operating at a wavelength of λ = 1.5 μm. The range resolution is less than 4 cm, providing very high depth resolution for target identification. This specification opens up several additional functionalities for advanced lidar, for example: absolute rangefinding and depth profiling for long range identification, optical communications, turbulence sensing and time-of-flight spectroscopy. Future concepts for 3D time-of-flight polarimetric and multispectral imaging lidar, with optical communications in a single integrated system, are also proposed.

  4. High-resolution 3-D imaging of objects through walls

    NASA Astrophysics Data System (ADS)

    Schechter, Richard S.; Chun, Sung-Taek

    2010-11-01

    This paper describes the use of microwaves to accurately image objects behind dielectric walls. The data are first simulated by using a finite-difference time-domain code. A large model of a room with walls and objects inside is used as a test case. Since the model and associated volume are big compared to wavelengths, the code is run on a parallel supercomputer. A fixed 2-D receiver array captures all the return data simultaneously. A time-domain backprojection algorithm with a correction for the time delay and refraction caused by the front wall then reconstructs high-fidelity 3-D images. A rigorous refraction correction using Snell's law and a simpler but faster linear correction are compared in both 2-D and 3-D. It is shown that imaging in 3-D and viewing an image in the plane parallel to the receiver array is necessary to identify objects by shape. It is also shown that a simple linear correction for the wall is sufficient.

  5. Method for extracting the aorta from 3D CT images

    NASA Astrophysics Data System (ADS)

    Taeprasartsit, Pinyo; Higgins, William E.

    2007-03-01

    Bronchoscopic biopsy of the central-chest lymph nodes is vital in the staging of lung cancer. Three-dimensional multi-detector CT (MDCT) images provide vivid anatomical detail for planning bronchoscopy. Unfortunately, many lymph nodes are situated close to the aorta, and an inadvertent needle biopsy could puncture the aorta, causing serious harm. As an eventual aid for more complete planning of lymph-node biopsy, it is important to define the aorta. This paper proposes a method for extracting the aorta from a 3D MDCT chest image. The method has two main phases: (1) Off-line Model Construction, which provides a set of training cases for fitting new images, and (2) On-Line Aorta Construction, which is used for new incoming 3D MDCT images. Off-Line Model Construction is done once using several representative human MDCT images and consists of the following steps: construct a likelihood image, select control points of the medial axis of the aortic arch, and recompute the control points to obtain a constant-interval medial-axis model. On-Line Aorta Construction consists of the following operations: construct a likelihood image, perform global fitting of the precomputed models to the current case's likelihood image to find the best fitting model, perform local fitting to adjust the medial axis to local data variations, and employ a region recovery method to arrive at the complete constructed 3D aorta. The region recovery method consists of two steps: model-based and region-growing steps. This region growing method can recover regions outside the model coverage and non-circular tube structures. In our experiments, we used three models and achieved satisfactory results on twelve of thirteen test cases.

  6. Image Appraisal for 2D and 3D Electromagnetic Inversion

    SciTech Connect

    Alumbaugh, D.L.; Newman, G.A.

    1999-01-28

    Linearized methods are presented for appraising image resolution and parameter accuracy in images generated with two- and three-dimensional non-linear electromagnetic inversion schemes. When direct matrix inversion is employed, the model resolution and posterior model covariance matrices can be directly calculated. A method to examine how the horizontal and vertical resolution varies spatially within the electromagnetic property image is developed by examining the columns of the model resolution matrix. Plotting the square root of the diagonal of the model covariance matrix yields an estimate of how errors in the inversion process, such as data noise and incorrect a priori assumptions about the imaged model, map into parameter error. This type of image is shown to be useful in analyzing spatial variations in the image sensitivity to the data. A method is analyzed for statistically estimating the model covariance matrix when the conjugate gradient method is employed rather than a direct inversion technique (for example, in 3D inversion). A method for calculating individual columns of the model resolution matrix using the conjugate gradient method is also developed. Examples of the image analysis techniques are provided on 2D and 3D synthetic cross-well EM data sets, as well as a field data set collected at the Lost Hills Oil Field in Central California.
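    As an illustration of the appraisal quantities discussed above, the following sketch computes a model resolution matrix and a parameter-error map for a regularized, linearized inversion; the regularized least-squares form, the data weighting and the regularization constant are assumptions, not the authors' exact formulation:

    ```python
    import numpy as np

    def appraise(J, data_variance, reg=1e-3):
        """Model resolution matrix and parameter-error estimate for a linearized inversion.

        J: (n_data, n_model) sensitivity (Jacobian) matrix;
        data_variance: (n_data,) noise variances of the data.
        """
        W = np.diag(1.0 / data_variance)            # inverse data covariance
        H = J.T @ W @ J + reg * np.eye(J.shape[1])  # regularized normal matrix
        H_inv = np.linalg.inv(H)
        R = H_inv @ (J.T @ W @ J)                   # model resolution matrix (columns show smearing)
        Cm = H_inv                                  # approximate posterior model covariance
        param_error = np.sqrt(np.diag(Cm))          # per-cell parameter uncertainty
        return R, param_error
    ```

    Plotting param_error over the model cells gives an image of spatial variations in sensitivity of the kind described in the abstract, and individual columns of R show how a spike in one cell is smeared over its neighbours.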

  7. Right main bronchus perforation detected by 3D-image

    PubMed Central

    Bense, László; Eklund, Gunnar; Jorulf, Hakan; Farkas, Árpád; Balásházy, Imre; Hedenstierna, Göran; Krebsz, Ádám; Madas, Balázs Gergely; Strindberg, Jerker Eden

    2011-01-01

    A male metal worker, who has never smoked, contracted debilitating dyspnoea in 2003 which then deteriorated until 2007. Spirometry and chest x-rays provided no diagnosis. A 3D-image of the airways was reconstructed from a high-resolution CT (HRCT) in 2007, showing peribronchial air on the right side, mostly along the presegmental airways. After digital subtraction of the image of the peribronchial air, a hole on the cranial side of the right main bronchus was detected. The perforation could be identified at the re-examination of HRCTs in 2007 and 2009, but not in 2010 when it had possibly healed. The occupational exposure of the patient to evaporating chemicals might have contributed to the perforation and hampered its healing. A 3D HRCT reconstruction should be considered to detect bronchial anomalies, including wall-perforation, when unexplained dyspnoea or other chest symptoms call for extended investigation. PMID:22679238

  8. Getting in touch--3D printing in forensic imaging.

    PubMed

    Ebert, Lars Chr; Thali, Michael J; Ross, Steffen

    2011-09-10

    With the increasing use of medical imaging in forensics, as well as the technological advances in rapid prototyping, we suggest combining these techniques to generate displays of forensic findings. We used computed tomography (CT), CT angiography, magnetic resonance imaging (MRI) and surface scanning with photogrammetry in conjunction with segmentation techniques to generate 3D polygon meshes. Based on these data sets, a 3D printer created colored models of the anatomical structures. Using this technique, we could create models of bone fractures, vessels, cardiac infarctions, ruptured organs as well as bitemark wounds. The final models are anatomically accurate, fully colored representations of bones, vessels and soft tissue, and they demonstrate radiologically visible pathologies. The models are more easily understood by laypersons than volume rendering or 2D reconstructions. Therefore, they are suitable for presentations in courtrooms and for educational purposes. PMID:21602004

  9. Retinal motion estimation and image dewarping in adaptive optics scanning

    E-print Network

    Parker, Albert E.

    Retinal motion estimation and image dewarping in adaptive optics scanning laser ophthalmoscopy. Curtis R. Vogel. © 2005 Optical Society of America. OCIS codes: (010.1080) Adaptive optics.

  10. Extracting 3D Layout From a Single Image Using Global Image Structures.

    PubMed

    Lou, Zhongyu; Gevers, Theo; Hu, Ninghang

    2015-10-01

    Extracting the pixel-level 3D layout from a single image is important for different applications, such as object localization and image and video categorization. Traditionally, the 3D layout is derived by solving a pixel-level classification problem. However, the image-level 3D structure can be very beneficial for extracting pixel-level 3D layout since it implies how pixels in the image are organized. In this paper, we propose an approach that first predicts the global image structure, and then we use the global structure for fine-grained pixel-level 3D layout extraction. In particular, image features are extracted based on multiple layout templates. We then learn a discriminative model for classifying the global layout at the image level. Using latent variables, we implicitly model the sublevel semantics of the image, which enriches the expressiveness of our model. After the image-level structure is obtained, it is used as the prior knowledge to infer pixel-wise 3D layout. Experiments show that the results of our model outperform the state-of-the-art methods by 11.7% for 3D structure classification. Moreover, we show that employing the 3D structure prior information yields accurate 3D scene layout segmentation. PMID:25966478

  11. 3D Image-Based Viscoplastic Response with Crystal Plasticity

    Microsoft Academic Search

    Anthony D. Rollett; Sukbin Lee; Ricardo A. Lebensohn

    An efficient digital FFT-based viscoplastic method was applied to calculating the viscoplastic stress-strain response on a 3D image of a serial sectioned nickel alloy. A single strain step under uniaxial tensile loading was calculated using crystal plasticity. Analysis of the results indicated higher stresses near grain boundaries than in the bulk of grains. All types of grain boundary gave similar

  12. Cup Products on Polyhedral Approximations of 3D Digital Images

    Microsoft Academic Search

    Rocío González-Díaz; Javier Lamar; Ronald Umble

    2011-01-01

    Let I be a 3D digital image, and let Q(I) be the associated cubical complex. In this paper we show how to simplify the combinatorial structure of Q(I) and obtain a homeomorphic cellular complex P(I) with fewer cells. We introduce formulas for a diagonal approximation on a general polygon and use it to compute cup products on the cohomology H

  13. Proposed traceable structural resolution protocols for 3D imaging systems

    Microsoft Academic Search

    David MacKinnon; J.-Angelo Beraldin; Luc Cournoyer; Benjamin Carrier; François Blais

    2009-01-01

    A protocol for determining structural resolution using a potentially-traceable reference material is proposed. Where possible, terminology was selected to conform to those published in ISO JCGM 200:2008 (VIM) and ASTM E 2544-08 documents. The concepts of resolvability and edge width are introduced to more completely describe the ability of an optical non-contact 3D imaging system to resolve small features. A

  14. “MEMSEye” for optical 3D tracking and imaging applications

    Microsoft Academic Search

    V. Milanovic; A. Kasturi; N. Siu; M. Radojicic; Y. Su

    2011-01-01

    We demonstrate a compact, low-power device which combines a laser source, a MEMS mirror, and photosensors to enable fast-motion tracking of an object in a 3D volume while obtaining its precise XYZ coordinates, as well as high resolution laser-based imaging. Any object can be tracked which is marked by retro-reflective tape, or a corner-cube retroreflector (CCR). Two separate subsystems which

  15. Laser-induced retinal damage threshold as a function of retinal image size

    NASA Astrophysics Data System (ADS)

    Zuclich, Joseph A.; Lund, David J.; Edsall, Peter R.; Hollins, Richard C.; Smith, Peter A.; Stuck, Bruce E.; McLin, Leon N.

    1999-06-01

    The dependence of retinal damage threshold on laser spot size was examined for two pulsewidth regimes: nanosecond-duration Q-switched pulses from a doubled Nd:YAG laser and microsecond-duration pulses from a flashlamp-pumped dye laser. Threshold determinations were conducted for nominal retinal image sizes ranging from 1.5 mrad to 100 mrad of visual field, corresponding to image diameters of approximately 22 μm to 1.4 mm on the primate retina. In addition, baseline collimated-beam damage thresholds were determined for comparison to the extended source data. Together, this set of retinal damage thresholds reveals the functional dependence of threshold on spot size. The threshold dose was found to vary with the area of the image for larger image sizes. The results are compared to previously published extended source damage thresholds and to the ANSI Z136.1 laser safety standard maximum permissible exposure levels for diffuse reflections.
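    For orientation, a small sketch relating visual angle to retinal spot size and applying the reported area scaling of threshold dose; the 14 mm effective focal length and the reference values are assumptions used only for illustration, not values taken from the study:

    ```python
    EYE_FOCAL_LENGTH_MM = 14.0  # assumed effective focal length of the primate eye

    def retinal_spot_diameter_mm(angle_mrad):
        """Convert a visual angle (mrad) to an approximate retinal spot diameter (mm)."""
        return angle_mrad * 1e-3 * EYE_FOCAL_LENGTH_MM

    def scaled_threshold(ref_dose, ref_diameter_mm, diameter_mm):
        """Scale a threshold dose with image area (valid only in the large-spot regime)."""
        return ref_dose * (diameter_mm / ref_diameter_mm) ** 2

    print(retinal_spot_diameter_mm(1.5))    # ~0.021 mm, close to the quoted ~22 um
    print(retinal_spot_diameter_mm(100.0))  # ~1.4 mm
    ```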

  16. Multi-View Image Coding Using 3-D Voxel Models Yongying Gao and Hayder Radha

    E-print Network

    Radha, Hayder

    Multi-View Image Coding Using 3-D Voxel Models. Yongying Gao and Hayder Radha. We propose a multi-view image coding system in 3-D space based on an improved volumetric 3-D reconstruction. Unlike existing multi-view image coding schemes, in which the 3-D scene

  17. Variation of laser-induced retinal damage threshold with retinal image size (Abstract Only)

    NASA Astrophysics Data System (ADS)

    Zuclich, Joseph A.; Lund, David J.; Edsall, Peter R.; Hollins, Richard C.; Smith, Peter A.; Stuck, Bruce E.; McLin, Leon N.; Kennedy, Paul K.; Till, Stephen

    2000-03-01

    The dependence of retinal damage threshold on laser spot size was examined for two pulsewidth regimes: nanosecond-duration Q-switched pulses from a doubled Nd:YAG laser and microsecond-duration pulses from a flashlamp-pumped dye laser. Threshold determinations were conducted for nominal retinal image sizes ranging from 1.5 mrad to 100 mrad of visual field, corresponding to image diameters of approximately 22 micrometers to 1.4 mm on the primate retina. Together, this set of retinal damage thresholds reveals the functional dependence of threshold on spot size. The threshold dose was found to vary with the area of the image for larger image sizes. The experimental results were compared to the predictions of the Thompson-Gerstman granular model of laser-induced retinal damage. The experimental and theoretical trends of threshold variation with retinal spot size were essentially the same, with both data sets showing threshold dose proportional to image area for spot sizes >= 150 micrometers. The absolute values predicted by the model, however, were significantly higher than the experimental values, possibly because of uncertainty in various biological input parameters, such as the melanosome absorption coefficient and the number of melanosomes per RPE cell.

  18. Radiometric Quality Evaluation of INSAT-3D Imager Data

    NASA Astrophysics Data System (ADS)

    Prakash, S.; Jindal, D.; Badal, N.; Kartikeyan, B.; Gopala Krishna, B.

    2014-11-01

    INSAT-3D is an advanced meteorological satellite of ISRO which acquires imagery in optical and infra-red (IR) channels for the study of weather dynamics in the Indian sub-continent region. In this paper, the methodology of radiometric quality evaluation for Level-1 products of the Imager, one of the payloads onboard INSAT-3D, is described. Firstly, the overall visual quality of a scene in terms of dynamic range, edge sharpness or modulation transfer function (MTF), presence of striping and other image artefacts is computed. Uniform targets in desert and sea regions are identified for which detailed radiometric performance evaluation for IR channels is carried out. Mean brightness temperature (BT) of the targets is computed and validated with independently generated radiometric references. Further, diurnal/seasonal trends in target BT values and radiometric uncertainty or sensor noise are studied. Results of radiometric quality evaluation over a duration of eight months (January to August 2014) and a comparison of radiometric consistency pre/post yaw flip of the satellite are presented. Radiometric analysis indicates that INSAT-3D images have high contrast (MTF > 0.2) and low striping effects. A bias of <4 K is observed in the brightness temperature values of the TIR-1 channel measured during January-August 2014, indicating consistent radiometric calibration. Diurnal and seasonal analysis shows that the noise-equivalent differential temperature (NEdT) for the IR channels is consistent and well within specifications.

  19. Target penetration of laser-based 3D imaging systems

    NASA Astrophysics Data System (ADS)

    Cheok, Geraldine S.; Saidi, Kamel S.; Franaszek, Marek

    2009-01-01

    The ASTM E57.02 Test Methods Subcommittee is developing a test method to evaluate the ranging performance of a 3D imaging system. The test method will involve either measuring the distance between two targets or between an instrument and a target. The first option is necessary because some instruments cannot be centered over a point and will require registration of the instrument coordinate frame into the target coordinate frame. The disadvantage of this option is that registration will introduce an additional error into the measurements. The advantage of this option is that this type of measurement, relative measurement, is what is typically used in field applications. A potential target geometry suggested for the test method is a planar target. The ideal target material would be diffuse, have uniform reflectivity for wavelengths between 500 nm and 1600 nm (wavelengths of most commercially-available 3D imaging systems), and have minimal or no penetration of the laser into the material. A possible candidate material for the target is Spectralon. However, several users have found that there is some penetration into the Spectralon by a laser and this is confirmed by the material manufacturer. The effect of this penetration on the range measurement is unknown. This paper will present an attempt to quantify the laser penetration depth into the Spectralon material for four 3D imaging systems.

  20. Femoroacetabular impingement with chronic acetabular rim fracture - 3D computed tomography, 3D magnetic resonance imaging and arthroscopic correlation

    PubMed Central

    Chhabra, Avneesh; Nordeck, Shaun; Wadhwa, Vibhor; Madhavapeddi, Sai; Robertson, William J

    2015-01-01

    Femoroacetabular impingement is uncommonly associated with a large rim fragment of bone along the superolateral acetabulum. We report an unusual case of femoroacetabular impingement (FAI) with chronic acetabular rim fracture. Radiographic, 3D computed tomography, 3D magnetic resonance imaging and arthroscopy correlation is presented with discussion of relative advantages and disadvantages of various modalities in the context of FAI.

  1. Femoroacetabular impingement with chronic acetabular rim fracture - 3D computed tomography, 3D magnetic resonance imaging and arthroscopic correlation.

    PubMed

    Chhabra, Avneesh; Nordeck, Shaun; Wadhwa, Vibhor; Madhavapeddi, Sai; Robertson, William J

    2015-07-18

    Femoroacetabular impingement is uncommonly associated with a large rim fragment of bone along the superolateral acetabulum. We report an unusual case of femoroacetabular impingement (FAI) with chronic acetabular rim fracture. Radiographic, 3D computed tomography, 3D magnetic resonance imaging and arthroscopy correlation is presented with discussion of relative advantages and disadvantages of various modalities in the context of FAI. PMID:26191497

  2. Error estimations of 3D digital image correlation measurements

    NASA Astrophysics Data System (ADS)

    Becker, Thomas; Splitthof, Karsten; Siebert, Thorsten; Kletting, Peter

    2006-08-01

    Systematic errors of digital image correlation (DIC) measurements are a limiting factor for the accuracy of the resulting quantities. A major source of systematic errors is the system calibration. We present a 3D digital image correlation system which provides error information not only for diverse error sources but also for the propagation of errors throughout the calculations to the resulting contours, displacements and strains. On the basis of this system we discuss error sources, error propagation and the impact on correlation results. Performance tests studying the impact of calibration errors on the resulting data are shown.

  3. Pavement cracking measurements using 3D laser-scan images

    NASA Astrophysics Data System (ADS)

    Ouyang, W.; Xu, B.

    2013-10-01

    Pavement condition surveying is vital for pavement maintenance programs that ensure ride quality and traffic safety. This paper first introduces an automated pavement inspection system which uses a three-dimensional (3D) camera and a structured laser light to acquire dense transverse profiles of a pavement lane surface from a moving vehicle. After calibration, the 3D system can yield a depth resolution of 0.5 mm and a transverse resolution of 1.56 mm pixel^-1 at 1.4 m camera height from the ground. The scanning rate of the camera can be set to its maximum of 5000 lines s^-1, allowing the density of scanned profiles to vary with the vehicle's speed. The paper then illustrates the algorithms that utilize 3D information to detect pavement distress, such as transverse, longitudinal and alligator cracking, and presents field tests on the system's repeatability when scanning a sample pavement in multiple runs at the same vehicle speed, at different vehicle speeds and under different weather conditions. The results show that this dedicated 3D system can capture accurate pavement images that detail surface distress, and obtain consistent crack measurements in repeated tests and under different driving and lighting conditions.

  4. 3D pupil plane imaging of opaque targets

    NASA Astrophysics Data System (ADS)

    Cain, Stephen C.

    2010-08-01

    Correlography is a technique that allows image formation from non-imaged speckle patterns via their relationship to the autocorrelation of the scene. Algorithms designed to form images from this type of data represent a particular type of phase retrieval algorithm since the autocorrelation function is related to the Fourier magnitude of the scene but not the Fourier phase. Methods for forming 2-D images from far-field intensity measurements have been explored previously, but no 3-D methods have been put forward for forming range images of a scene from this kind of measurement. Far-field intensity measurements are attractive because large focusing optics are not required to form images. Pupil plane intensity imaging is also attractive due to the fact that the effects of atmospheric turbulence close to the imaging system are mitigated by the cancellation of phase errors in the intensity operation. This paper suggests a method for obtaining 3-D images of a scene through the use of successive 2-D pupil plane intensity measurements sampled with an APD (Avalanche Photo-Diode) array. The 2-D array samples the returning pulse from a laser at a fast enough rate to avoid aliasing of the pulse shape in time. The spatial pattern received by the array allows the autocorrelation of the scene to be determined as a function of time. The temporal autocorrelation function contains range information to each point in the scene illuminated by the pulsed laser. The proposed algorithm uses a model for the LADAR pulse and its relation to the autocorrelation of the scene as a function of time to estimate the range to every point in the reconstructed scene assuming that all surfaces are opaque (meaning a second return from the same point in the scene is not anticipated). The method is demonstrated using a computer simulation.
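    A minimal sketch of the underlying correlography relation (a generic illustration, not the paper's algorithm): the frame-averaged power spectrum of the pupil-plane speckle intensity, with the DC term removed, approximates the autocorrelation of the scene; applied per time sample it yields the temporal autocorrelation used for ranging.

    ```python
    import numpy as np

    def scene_autocorrelation(speckle_frames):
        """Estimate the scene autocorrelation from pupil-plane speckle intensity frames.

        speckle_frames: (n_frames, ny, nx) array of non-imaged intensity measurements.
        """
        spectra = np.abs(np.fft.fft2(speckle_frames, axes=(-2, -1))) ** 2
        acf = spectra.mean(axis=0)      # average over speckle realizations
        acf[0, 0] = 0.0                 # suppress the DC/bias term
        return np.fft.fftshift(acf)
    ```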

  5. 3D Lunar Terrain Reconstruction from Apollo Images

    NASA Technical Reports Server (NTRS)

    Broxton, Michael J.; Nefian, Ara V.; Moratto, Zachary; Kim, Taemin; Lundy, Michael; Segal, Alkeksandr V.

    2009-01-01

    Generating accurate three-dimensional planetary models is becoming increasingly important as NASA plans manned missions to return to the Moon in the next decade. This paper describes a 3D surface reconstruction system called the Ames Stereo Pipeline that is designed to produce such models automatically by processing orbital stereo imagery. We discuss two important core aspects of this system: (1) refinement of satellite station positions and pose estimates through least squares bundle adjustment; and (2) a stochastic plane fitting algorithm that generalizes the Lucas-Kanade method for optimal matching between stereo pair images. These techniques allow us to automatically produce seamless, highly accurate digital elevation models from multiple stereo image pairs while significantly reducing the influence of image noise. Our technique is demonstrated on a set of 71 high resolution scanned images from the Apollo 15 mission.

  6. Large Scale 3D Image Reconstruction in Optical Interferometry

    E-print Network

    Schutz, Antony; Mary, David; Thiébaut, Eric; Soulez, Ferréol

    2015-01-01

    Astronomical optical interferometers (OI) sample the Fourier transform of the intensity distribution of a source at the observation wavelength. Because of rapid atmospheric perturbations, the phases of the complex Fourier samples (visibilities) cannot be directly exploited, and instead linear relationships between the phases are used (phase closures and differential phases). Consequently, specific image reconstruction methods have been devised in the last few decades. Modern polychromatic OI instruments are now paving the way to multiwavelength imaging. This paper presents the derivation of a spatio-spectral ("3D") image reconstruction algorithm called PAINTER (Polychromatic opticAl INTErferometric Reconstruction software). The algorithm is able to solve large scale problems. It relies on an iterative process, which alternates estimation of polychromatic images and of complex visibilities. The complex visibilities are not only estimated from squared moduli and closure phases, but also from differential phase...

  7. Quantitative 3D Optical Imaging: Applications in Dosimetry and Biophysics

    NASA Astrophysics Data System (ADS)

    Thomas, Andrew Stephen

    Optical-CT has been shown to be a potentially useful imaging tool for the two very different spheres of biologists and radiation therapy physicists, but it has yet to live up to that potential. In radiation therapy, researchers have used optical-CT for the readout of 3D dosimeters, but it is yet to be a clinically relevant tool as the technology is too slow to be considered practical. Biologists have used the technique for structural imaging, but have struggled with emission tomography as the reality of photon attenuation for both excitation and emission has made the images quantitatively irrelevant. Dosimetry. The DLOS (Duke Large field of view Optical-CT Scanner) was designed and constructed to make 3D dosimetry utilizing optical-CT a fast and practical tool while maintaining the accuracy of readout of the previous, slower readout technologies. Upon construction/optimization/implementation of several components including a diffuser, band pass filter, registration mount and fluid filtration system, the dosimetry system provides high quality data comparable to or exceeding that of commercial products. In addition, a stray light correction algorithm was tested and implemented. The DLOS, in combination with the 3D dosimeter it was designed for, PRESAGE™, then underwent rigorous commissioning and benchmarking tests validating its performance against gold standard data including a set of 6 irradiations. DLOS commissioning tests resulted in sub-mm isotropic spatial resolution (MTF > 0.5 for frequencies of 1.5 lp/mm) and a dynamic range of ~60 dB. Flood field uniformity was 10% and stable after 45 minutes. Stray light proved to be small, due to telecentricity, but even the residual can be removed through deconvolution. Benchmarking tests showed the mean 3D passing gamma rate (3%, 3 mm, 5% dose threshold) over the 6 benchmark data sets was 97.3% +/- 0.6% (range 96%-98%) for scans totaling ~10 minutes, indicating excellent ability to perform 3D dosimetry while improving the speed of readout. Noise was low at ~2% for 2 mm reconstructions. The DLOS/PRESAGE™ benchmark tests show consistently excellent performance, with very good agreement to simple known distributions. The telecentric design was critical to enabling fast (~15 min) imaging with minimal stray light artifacts. The system produces accurate isotropic 2 mm3 dose data over clinical volumes (e.g. 16 cm diameter phantoms, 12 cm height), and represents a uniquely useful and versatile new tool for commissioning complex radiotherapy techniques. The system also has wide versatility, and has successfully been used in preliminary tests with protons and with kV irradiations. Biology. Attenuation corrections for optical-emission-CT were done by modeling physical parameters in the imaging setup within the framework of an ordered subset expectation maximization (OSEM) iterative reconstruction algorithm. This process has a well documented history in single photon emission computed tomography (SPECT), but is inherently simpler in that modality due to the lack of excitation photons to account for. Excitation source strength distribution, excitation and emission attenuation were modeled. The accuracy of the correction was investigated by imaging phantoms containing known distributions of attenuation and fluorophores.
The correction was validated on a manufactured phantom designed to give uniform emission in a central cuboidal region and later applied to a cleared mouse brain with GFP (green-fluorescent protein) labeled vasculature and a cleared 4T1 xenograft flank tumor with constitutive RFP (red-fluorescent protein). Reconstructions were compared to corresponding slices imaged with a fluorescent dissection microscope. Significant optical-ECT attenuation artifacts were observed in the uncorrected phantom images, which appeared up to 80% less intense than the verification image in the central region. The corrected phantom images showed excellent agreement with the verification image with only slight variations. The corrected tissue sample reconstructions showed general agreement with the verification images. Comp
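    The attenuation correction described above sits inside an OSEM-type reconstruction. Below is a minimal MLEM-style sketch (a single subset, with assumed inputs; not the dissertation's implementation) in which the system matrix A is assumed to already carry the excitation- and emission-attenuation weights:

    ```python
    import numpy as np

    def mlem(A, y, n_iter=20, eps=1e-12):
        """Multiplicative MLEM update for emission reconstruction.

        A: (n_meas, n_vox) system matrix with attenuation factors folded into its weights;
        y: measured projection data (n_meas,).
        """
        x = np.ones(A.shape[1])                  # flat initial emission estimate
        sensitivity = A.sum(axis=0) + eps        # backprojection of ones
        for _ in range(n_iter):
            forward = A @ x + eps                # forward projection of current estimate
            x = x * (A.T @ (y / forward)) / sensitivity
        return x
    ```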

  8. Acquisition of 3D Image Representation in Multimedia Ambiance Communication using 3D Laser Scanner and Digital Camera

    Microsoft Academic Search

    Toshifumi Kanamaru; Kunio Yamada; Tadashi Ichikawa; Takeshi Naemura; Kiyoharu Aizawa; Takahiro Saito

    2000-01-01

    This paper addresses a new scheme for the acquisition of 3D image representation from range data and texture data. The concept of a layered structure defined for painting, with long-range, mid-range, and short-range views, can be applied to a 3D image. Long and mid-range views are located at a reasonable distance, and therefore do not require the perfect

  9. Detection of optic disc in retinal images by means of a geometrical model of vessel structure

    Microsoft Academic Search

    Marco Foracchia; Enrico Grisan; Alfredo Ruggeri

    2004-01-01

    We present here a new method to identify the position of the optic disc (OD) in retinal fundus images. The method is based on the preliminary detection of the main retinal vessels. All retinal vessels originate from the OD and their path follows a similar directional pattern (parabolic course) in all images. To describe the general direction of retinal vessels

  10. The 3D model control of image processing

    NASA Technical Reports Server (NTRS)

    Nguyen, An H.; Stark, Lawrence

    1989-01-01

    Telerobotics studies remote control of distant robots by a human operator using supervisory or direct control. Even if the robot manipulator has vision or other senses, problems arise involving control, communications, and delay. The communication delays that may be expected with telerobots working in space stations while being controlled from an Earth lab have led to a number of experiments attempting to circumvent the problem. This delay in communication is a main motivating factor in moving from well understood instantaneous hands-on manual control to less well understood supervisory control; the ultimate step would be the realization of a fully autonomous robot. The 3-D model control plays a crucial role in resolving many conflicting image processing problems that are inherent in the bottom-up approach of most current machine vision processes. The 3-D model control approach is also capable of providing the necessary visual feedback information for both the control algorithms and the human operator.

  11. 3D range scan enhancement using image-based methods

    NASA Astrophysics Data System (ADS)

    Herbort, Steffen; Gerken, Britta; Schugk, Daniel; Wöhler, Christian

    2013-10-01

    This paper addresses the problem of 3D surface scan refinement, which is desirable due to noise, outliers, and missing measurements being present in the 3D surfaces obtained with a laser scanner. We present a novel algorithm for the fusion of absolute laser scanner depth profiles and photometrically estimated surface normal data, which yields a noise-reduced and highly detailed depth profile with large-scale shape robustness. In contrast to other approaches published in the literature, the presented algorithm (1) handles non-Lambertian surfaces, (2) simultaneously computes surface reflectance (i.e. BRDF) parameters required for 3D reconstruction, (3) models pixelwise incident light and viewing directions, and (4) accounts for interreflections. The algorithm as such relies on the minimization of a three-component error term, which penalizes intensity deviations, integrability deviations, and deviations from the known large-scale surface shape. The solution of the error minimization is obtained iteratively based on a calculus of variations. BRDF parameters are estimated by initially reducing and then iteratively refining the optical resolution, which provides the required robust data basis. The 3D reconstruction of concave surface regions affected by interreflections is improved by compensating global illumination in the image data. The algorithm is evaluated based on eight objects with varying albedos and reflectance behaviors (diffuse, specular, metallic). The qualitative evaluation shows a removal of outliers and a strong reduction of noise, while the large-scale shape is preserved. Fine surface details, which were previously not contained in the surface scans, are incorporated using the image data. The algorithm is evaluated with respect to its absolute accuracy using two caliper objects of known shape, and based on synthetically generated data. The beneficial effect of interreflection compensation on the reconstruction accuracy is evaluated quantitatively in a Photometric Stereo framework.
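    A schematic version of the three-component error term follows; the weights, the discretization and the variable names are assumptions, and the actual functional in the paper additionally involves the reflectance model and interreflection handling:

    ```python
    import numpy as np

    def fusion_error(z, rendered, measured, p, q, z_scan,
                     w_intensity=1.0, w_integrability=1.0, w_shape=0.1):
        """Three-part error: intensity fit, integrability of normals, large-scale shape prior.

        z: current depth map; p, q: photometrically estimated surface gradients;
        z_scan: laser-scanner depth; rendered/measured: image intensities.
        """
        zy, zx = np.gradient(z)                                  # depth gradients
        e_intensity = np.sum((rendered - measured) ** 2)          # reflectance-model fit
        e_integrability = np.sum((zx - p) ** 2 + (zy - q) ** 2)   # normals vs. depth gradients
        e_shape = np.sum((z - z_scan) ** 2)                       # deviation from known large-scale shape
        return (w_intensity * e_intensity
                + w_integrability * e_integrability
                + w_shape * e_shape)
    ```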

  12. Non-rigid 2D-3D Medical Image Registration using Markov Random Fields

    E-print Network

    Paris-Sud XI, Université de

    Non-rigid 2D-3D Medical Image Registration using Markov Random Fields. Keywords: 2D-3D registration, medical imaging, Markov random fields, discrete optimization. 2D-3D image registration is an important problem in medical imaging and it can be applied

  13. Multi-Label Simple Points Definition for 3D Images Digital Deformable Model

    E-print Network

    Paris-Sud XI, Université de

    Alexandre Dupas. ... for segmenting 3D images into regions, a kind of deformable digital partition ... is shown on several 3D image segmentations. Key words: Simple Point, Deformable Model, Multi-Label Image

  14. Automating the Extraction of 3D Models from Medical Images for Virtual Reality and Haptic Simulations

    Microsoft Academic Search

    Silvio H. Rizzi; P. Pat Banerjee; Cristian J. Luciano

    2007-01-01

    The Sensimmer platform represents our ongoing research on simultaneous haptics and graphics rendering of 3D models. For simulation of medical and surgical procedures using Sensimmer, 3D models must be obtained from medical imaging data, such as Magnetic Resonance Imaging (MRI) or Computed Tomography (CT). Image segmentation techniques are used to determine the anatomies of interest from the images. 3D models

  15. Ultra-realistic 3-D imaging based on colour holography

    NASA Astrophysics Data System (ADS)

    Bjelkhagen, H. I.

    2013-02-01

    A review of recent progress in colour holography is provided, with new applications. Colour holography recording techniques in silver-halide emulsions are discussed. Both analogue, mainly Denisyuk, colour holograms and digitally-printed colour holograms are described, along with their recent improvements. An alternative to silver-halide materials is the panchromatic photopolymers, such as the DuPont and Bayer photopolymers, which are also covered. The light sources used to illuminate the recorded holograms are very important to obtain ultra-realistic 3-D images. In particular, the new light sources based on RGB LEDs are described. They show improved image quality over today's commonly used halogen lights. Recent work in colour holography by holographers and companies in different countries around the world is included. Recording and displaying ultra-realistic 3-D images with perfect colour rendering depend on the correct recording technique using the optimal recording laser wavelengths, on the availability of improved panchromatic recording materials, and on new display light sources.

  16. Digital holography for microscopic imaging and 3D shape measurement

    NASA Astrophysics Data System (ADS)

    Buehl, Johannes; Babovsky, Holger; Grosse, Marcus; Kiessling, Armin; Kowarschik, Richard

    2011-06-01

    Digital holography is used for a wide range of applications. Many techniques deal with holographic microscopy or the 3D shape measurement of objects. We present our approaches to these applications. To increase the resolution of a microscopic imaging system, a method for aperture synthesis is applied, where the spatial frequency shift, the global phase differences and the amplitude ratios of the individual sections of the Fourier spectrum are measured by using an overlap between them. It is shown that this method can be performed with sub-pixel accuracy. The experimental holographic setup uses tilted illumination beams realized by an LCoS SLM, which can be easily adapted to the numerical aperture of the microscope objective. For the 3D shape measurement of arbitrary diffuse-reflecting macroscopic objects a novel approach is demonstrated, which uses a common digital holographic setup together with a second CCD and an LCoS to modulate the object wave. Our idea is to capture a series of holograms from multiple positions and to apply concepts of structured light photogrammetry, which deliver more accurate depth information. The method yields a dense 3D point cloud of a scene.

  17. 3D-LZ helicopter ladar imaging system

    NASA Astrophysics Data System (ADS)

    Savage, James; Harrington, Walter; McKinley, R. Andrew; Burns, H. N.; Braddom, Steven; Szoboszlay, Zoltan

    2010-04-01

    A joint-service team led by the Air Force Research Laboratory's Munitions and Sensors Directorates completed a successful flight test demonstration of the 3D-LZ Helicopter LADAR Imaging System. This was a milestone demonstration in the development of technology solutions for a problem known as "helicopter brownout", the loss of situational awareness caused by swirling sand during approach and landing. The 3D-LZ LADAR was developed by H.N. Burns Engineering and integrated with the US Army Aeroflightdynamics Directorate's Brown-Out Symbology System aircraft state symbology aboard a US Army EH-60 Black Hawk helicopter. The combination of these systems provided an integrated degraded visual environment landing solution with landing zone situational awareness as well as aircraft guidance and obstacle avoidance information. Pilots from the U.S. Army, Air Force, Navy, and Marine Corps achieved a 77% landing rate in full brownout conditions at a test range at Yuma Proving Ground, Arizona. This paper will focus on the LADAR technology used in 3D-LZ and the results of this milestone demonstration.

  18. 3D laser optoacoustic ultrasonic imaging system for preclinical research

    NASA Astrophysics Data System (ADS)

    Ermilov, Sergey A.; Conjusteau, André; Hernandez, Travis; Su, Richard; Nadvoretskiy, Vyacheslav; Tsyboulski, Dmitri; Anis, Fatima; Anastasio, Mark A.; Oraevsky, Alexander A.

    2013-03-01

    In this work, we introduce a novel three-dimensional imaging system for in vivo high-resolution anatomical and functional whole-body visualization of small animal models developed for preclinical or other types of biomedical research. The system (LOUIS-3DM) combines multi-wavelength optoacoustic and ultrawide-band laser ultrasound tomography to obtain coregistered maps of tissue optical absorption and acoustic properties, displayed within the skin outline of the studied animal. The most promising applications of the LOUIS-3DM include 3D angiography, cancer research, and longitudinal studies of the biological distribution of optoacoustic contrast agents (carbon nanotubes, metal plasmonic nanoparticles, etc.).

  19. Prostate mechanical imaging: 3-D image composition and feature calculations

    Microsoft Academic Search

    Vladimir Egorov; Suren Ayrapetyan; Armen P. Sarvazyan

    2006-01-01

    We have developed a method and a device entitled prostate mechanical imager (PMI) for the real-time imaging of prostate using a transrectal probe equipped with a pressure sensor array and position tracking sensor. PMI operation is based on measurement of the stress pattern on the rectal wall when the probe is pressed against the prostate. Temporal and spatial changes in

  20. Automatic registration of multiple texel images (fused lidar/digital imagery) for 3D image creation

    NASA Astrophysics Data System (ADS)

    Budge, Scott E.; Badamikar, Neeraj

    2013-05-01

    Creation of 3D images through remote sensing is a topic of interest in many applications such as terrain/building modeling and automatic target recognition (ATR). Several photogrammetry-based methods have been proposed that derive 3D information from digital images from different perspectives, and lidar-based methods have been proposed that merge lidar point clouds and texture the merged point clouds with digital imagery. Image registration alone has difficulty with smooth regions with low contrast, whereas point cloud merging alone has difficulty with outliers and lack of proper convergence in the merging process. This paper presents a method to create 3D images that uses the unique properties of texel images (pixel-fused lidar and digital imagery) to improve the quality and robustness of fused 3D images. The proposed method uses both image processing and point-cloud merging to combine texel images in an iterative technique. Since the digital image pixels and the lidar 3D points are fused at the sensor level, more accurate 3D images are generated because registration of image data automatically improves the merging of the point clouds, and vice versa. Examples illustrate the value of this method over other methods.

  1. Application of 3D surface imaging in breast cancer radiotherapy

    NASA Astrophysics Data System (ADS)

    Alderliesten, Tanja; Sonke, Jan-Jakob; Betgen, Anja; Honnef, Joeri; van Vliet-Vroegindeweij, Corine; Remeijer, Peter

    2012-02-01

    Purpose: Accurate dose delivery in deep-inspiration breath-hold (DIBH) radiotherapy for patients with breast cancer relies on precise treatment setup and monitoring of the depth of the breath hold. This study entailed performance evaluation of a 3D surface imaging system for image guidance in DIBH radiotherapy by comparison with cone-beam computed tomography (CBCT). Materials and Methods: Fifteen patients, treated with DIBH radiotherapy after breast-conserving surgery, were included. The performance of surface imaging was compared to the use of CBCT for setup verification. Retrospectively, breast surface registrations were performed for CBCT to planning CT as well as for a 3D surface, captured concurrently with CBCT, to planning CT. The resulting setup errors were compared with linear regression analysis. For the differences between setup errors, group mean, systematic and random errors were calculated. Furthermore, a residual error after registration (RRE) was assessed for both systems by investigating the root-mean-square distance between the planning CT surface and the registered CBCT/captured surface. Results: Good correlation between setup errors was found: R² = 0.82, 0.86, 0.82 in the left-right, cranio-caudal and anterior-posterior directions, respectively. Systematic and random errors were ≤0.16 cm and ≤0.13 cm in all directions, respectively. RRE values for surface imaging and CBCT were on average 0.18 versus 0.19 cm with standard deviations of 0.10 and 0.09 cm, respectively. Wilcoxon-signed-ranks testing showed that CBCT registrations resulted in higher RRE values than surface imaging registrations (p=0.003). Conclusion: This performance evaluation study shows very promising results.
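    The systematic and random errors quoted above follow standard setup-error conventions; a minimal sketch (assumed conventions, not the paper's analysis script) for a single direction:

    ```python
    import numpy as np

    def setup_error_stats(diffs_per_patient):
        """Group mean, systematic error (SD of patient means), random error (RMS of patient SDs).

        diffs_per_patient: list of 1-D arrays of per-fraction setup-error differences (cm),
        one array per patient, for a single direction.
        """
        patient_means = np.array([d.mean() for d in diffs_per_patient])
        patient_sds = np.array([d.std(ddof=1) for d in diffs_per_patient])
        group_mean = patient_means.mean()
        systematic = patient_means.std(ddof=1)           # Sigma
        random_err = np.sqrt(np.mean(patient_sds ** 2))  # sigma
        return group_mean, systematic, random_err
    ```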

  2. Performance assessment of 3D surface imaging technique for medical imaging applications

    NASA Astrophysics Data System (ADS)

    Li, Tuotuo; Geng, Jason; Li, Shidong

    2013-03-01

    Recent developments in optical 3D surface imaging technologies provide better ways to digitize the 3D surface and its motion in real time. The non-invasive 3D surface imaging approach has great potential for many medical imaging applications, such as motion monitoring in radiotherapy and pre/post evaluation in plastic surgery and dermatology, to name a few. Various commercial 3D surface imaging systems have appeared on the market with different dimensions, speed and accuracy. For clinical applications, the accuracy, reproducibility and robustness across widely heterogeneous skin color, tone, texture, shape properties, and ambient lighting are crucial. Until now, a systematic approach for evaluating the performance of different 3D surface imaging systems has not existed. In this paper, we present a systematic performance assessment approach for 3D surface imaging systems in medical applications. We use this assessment approach to examine a new real-time surface imaging system we developed, dubbed "Neo3D Camera", for image-guided radiotherapy (IGRT). The assessments include accuracy, field of view, coverage, repeatability, speed and sensitivity to environment, texture and color.

  3. Ridge-based retinal image registration algorithm involving OCT fundus images

    NASA Astrophysics Data System (ADS)

    Li, Ying; Gregori, Giovanni; Knighton, Robert W.; Lujan, Brandon J.; Rosenfeld, Philip J.; Lam, Byron L.

    2011-03-01

    This paper proposes an algorithm for retinal image registration involving OCT fundus images (OFIs). The first application of the algorithm is to register OFIs with color fundus photographs; such registration between multimodal retinal images can help correlate features across imaging modalities, which is important for both clinical and research purposes. The second application is to perform the montage of several OFIs, which allows us to construct 3D OCT images over a large field of view out of separate OCT datasets. We use blood vessel ridges as registration features. A brute-force search and an Iterative Closest Point (ICP) algorithm are employed for image pair registration. Global alignment to minimize the distance between matching pixel pairs is used to obtain the montage of OFIs. The quality of the OFIs is the main limiting factor for the registration algorithm. In the first experiment, the effect of manual OFI enhancement on registration was evaluated for the affine model on 11 image pairs from diseased eyes. The average root mean square error (RMSE) decreases from 58 μm to 40 μm. This indicates that the registration algorithm is robust to manual enhancement. In the second experiment, for the montage of OFIs, the algorithm was tested on 6 sets from healthy eyes and 6 sets from diseased eyes, each set having 8 partially overlapping SD-OCT images. Visual evaluation showed that the montage performance was acceptable for normal cases, and not good for abnormal cases due to low visibility of blood vessels. The average RMSE for a typical montage case from a healthy eye is 2.3 pixels (69 μm).
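    A minimal ICP-style sketch of the pair-registration step (a generic illustration, not the authors' implementation; the least-squares affine fit and the RMSE definition are assumptions consistent with the abstract):

    ```python
    import numpy as np
    from scipy.spatial import cKDTree

    def icp_affine(moving, fixed, n_iter=30):
        """Align vessel-ridge points of a moving image to a fixed image with an affine model.

        moving: (N, 2), fixed: (M, 2) ridge-point pixel coordinates.
        Returns the aligned points and the final RMSE in pixels.
        """
        tree = cKDTree(fixed)
        pts = moving.astype(float).copy()
        for _ in range(n_iter):
            _, idx = tree.query(pts)                              # closest fixed point per moving point
            src = np.hstack([pts, np.ones((len(pts), 1))])        # homogeneous coordinates
            A, *_ = np.linalg.lstsq(src, fixed[idx], rcond=None)  # least-squares affine (3x2)
            pts = src @ A
        _, idx = tree.query(pts)
        rmse = np.sqrt(np.mean(np.sum((pts - fixed[idx]) ** 2, axis=1)))
        return pts, rmse
    ```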

  4. Recent progress in 3-D imaging of sea freight containers

    NASA Astrophysics Data System (ADS)

    Fuchs, Theobald; Schön, Tobias; Dittmann, Jonas; Sukowski, Frank; Hanke, Randolf

    2015-03-01

    The inspection of very large objects like sea freight containers with X-ray Computed Tomography (CT) is an emerging technology. A complete 3-D CT scan of a sea-freight container takes several hours. Of course, this is too slow to apply to a large number of containers. However, the benefits of 3-D CT for sealed freight are obvious: detection of potential threats or illicit cargo without being confronted with legal complications, high time consumption and risks for the security personnel during a manual inspection. Recently, distinct progress has been made in the field of reconstruction from projections with only a relatively low number of angular positions. Instead of today's 500 to 1000 rotational steps, as needed for conventional CT reconstruction techniques, this new class of algorithms provides the potential to reduce the number of projection angles approximately by a factor of 10. The main drawback of these advanced iterative methods is their high computational cost. But as computational power is getting steadily cheaper, there will be practical applications of these complex algorithms in the foreseeable future. In this paper, we discuss the properties of iterative image reconstruction algorithms and show results of their application to CT of extremely large objects by scanning a sea-freight container. A specific test specimen is used to quantitatively evaluate the image quality in terms of spatial and contrast resolution as a function of the number of projections.
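    The few-view iterative reconstruction referred to above can be illustrated with a SIRT-style update (a generic sketch under assumed inputs, not the authors' algorithm):

    ```python
    import numpy as np

    def sirt(A, y, n_iter=50, relax=0.9):
        """Simultaneous iterative reconstruction from few projections.

        A: (n_rays, n_vox) system matrix; y: measured projection data (n_rays,).
        """
        row_sum = A.sum(axis=1) + 1e-12    # per-ray normalization
        col_sum = A.sum(axis=0) + 1e-12    # per-voxel normalization
        x = np.zeros(A.shape[1])
        for _ in range(n_iter):
            residual = (y - A @ x) / row_sum
            x += relax * (A.T @ residual) / col_sum
            np.clip(x, 0.0, None, out=x)   # enforce non-negative attenuation
        return x
    ```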

  5. 3D Tomographic imaging of colliding cylindrical blast waves

    NASA Astrophysics Data System (ADS)

    Smith, R. A.; Lazarus, J.; Hohenberger, M.; Robinson, J.; Marocchino, A.; Chittenden, J.; Dunne, M.; Moore, A.; Gumbrell, E.

    2007-11-01

    The interaction of strong shocks and radiative blast waves is believed to give rise to the turbulent, knotted structures commonly observed in extended astrophysical objects. Modeling these systems is however extremely challenging due to the complex interplay between hydrodynamics, radiation and atomic physics. As a result we have been developing laboratory scale blast wave collision experiments to provide high quality data for code benchmarking, and to improve our physical understanding. We report on experimental and numerical investigations of the collision dynamics of counter-propagating strong (>Mach 50) cylindrical thin-shelled blast waves driven by focusing intense laser pulses into an extended medium of atomic clusters. In our test system the blast wave collision creates strongly asymmetric electron density profiles, precluding the use of Abel inversion methods. In consequence we have employed a new tomographic imaging technique, allowing us to recover the full 3D, time-framed electron density distribution. Tomography and streaked Schlieren imaging enabled tracking of radial and longitudinal mass flow and the investigation of Mach stem formation as pairs of blast waves collided. We have compared our experimental system to numerical simulations by the 3D magnetoresistive hydrocode GORGON.

  6. Computing 3D head orientation from a monocular image sequence

    NASA Astrophysics Data System (ADS)

    Horprasert, Thanarat; Yacoob, Yaser; Davis, Larry S.

    1997-02-01

    An approach for estimating 3D head orientation in a monocular image sequence is proposed. The approach employs recently developed image-based parameterized tracking for the face and face features to locate the area in which a sub-pixel parameterized shape estimation of the eye's boundary is performed. This involves tracking of five points (four at the eye corners and the fifth at the tip of the nose). We describe an approach that relies on the coarse structure of the face to compute orientation relative to the camera plane. Our approach employs projective invariance of the cross-ratios of the eye corners and anthropometric statistics to estimate the head yaw, roll and pitch. Analytical and experimental results are reported.
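    For reference, the projective cross-ratio of four (approximately) collinear image points, the invariant exploited above; a small illustrative sketch, not the authors' pose estimator:

    ```python
    import numpy as np

    def cross_ratio(p1, p2, p3, p4):
        """Cross-ratio of four roughly collinear 2-D points (e.g. the four eye corners)."""
        d = lambda a, b: float(np.linalg.norm(np.asarray(a) - np.asarray(b)))
        return (d(p1, p3) * d(p2, p4)) / (d(p2, p3) * d(p1, p4))
    ```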

  7. Adaptive optics-optical coherence tomography for in vivo retinal imaging: comparative analysis of two wavefront correctors

    Microsoft Academic Search

    Robert J. Zawadzki; Steven M. Jones; Mingtao Zhao; Stacey S. Choi; Sophie S. Laut; Scot S. Olivier; Joseph A. Izatt; John S. Werner

    2006-01-01

    Adaptive optics-optical coherence tomography (AO-OCT) has the potential to improve lateral resolution for OCT retinal imaging. Several reports have already described the successful combination of AO with a scanning confocal Fourier-domain OCT instrument to permit real-time three-dimensional (3D) imaging with high resolution (in all three dimensions). One of the key components that sets the performance limit of AO is the

  8. Automated Extraction of Lymph Nodes from 3-D Abdominal CT Images Using 3-D Minimum Directional Difference Filter

    Microsoft Academic Search

    Takayuki Kitasaka; Yukihiro Tsujimura; Yoshihiko Nakamura; Kensaku Mori; Yasuhito Suenaga; Masaaki Ito; Shigeru Nawano

    2007-01-01

    This paper presents a method for extracting lymph node regions from 3-D abdominal CT images using a 3-D minimum directional difference filter. In the case of surgery for colonic cancer, resection of metastasis lesions is performed with resection of a primary lesion. Lymph nodes are the main route of metastasis and are quite important for deciding the resection area. Diagnosis of enlarged lymph

  9. Enhanced visualization of MR angiogram with modified MIP and 3D image fusion

    Microsoft Academic Search

    Jong H. Kim; Kyoung M. Yeon; Man C. Han; Dong Hyuk Lee; Han I. Cho

    1997-01-01

    We have developed a 3D image processing and display technique that include image resampling, modification of MIP, volume rendering, and fusion of MIP image with volumetric rendered image. This technique facilitates the visualization of the 3D spatial relationship between vasculature and surrounding organs by overlapping the MIP image on the volumetric rendered image of the organ. We applied this technique

  10. A parallel algorithm to reconstruct bounding surfaces in 3D images

    E-print Network

    Genaud, Stéphane

    Growing size of 3D digital images causes sequential algorithms to be less and less usable on whole images, which can instead be processed by several parallel processors. Keywords: parallel applications, computer graphics, 3D digital images. In the last decade, 3D digitalization techniques such as Magnetic Resonance Imaging have been extensively

  11. PICTOMETRY'S PROPRIETARY AIRBORNE DIGITAL IMAGING SYSTEM AND ITS APPLICATION IN 3D CITY MODELLING

    E-print Network

    Salvaggio, Carl

    Issues in the generation of 3D city models using Pictometry digital oblique images will also be discussed. Approaches have been developed for creating 3D city models from digital images and other auxiliary data automatically or semi

  12. Atlas-based 3D-Shape Reconstruction from X-Ray Images Hans Lamecker

    E-print Network

    Andrzejak, Artur

    Hans Lamecker, Thomas H. Wenckebach. In many cases x-ray images are the only basis for surgery planning. Nevertheless, a method to reconstruct 3D shapes from few digital x-ray images on the basis of 3D-statistical shape models

  13. Speckle Suppression for 3-D Ultrasound Images Using Nonlinear Multiscale Wavelet Diffusion

    E-print Network

    Duncan, James S.

    ...ultrasound methods used for visualization of 3-D anatomy and pathology. Yong Yue. We introduce a new speckle suppression approach for 3-D ultrasound images. The proposed method

  14. Sewing Faces : A topological reconstruction of 6connected objects bounding surfaces in 3D digital images

    E-print Network

    Genaud, Stéphane

    One can get 3D digital images of any part of the human body. These images are characterized by ... ''Sewing Faces''. From 3D images defined by a block of voxels, this algorithm, based on a contour following

  15. High Resolution 3D Radar Imaging of Comet Interiors

    NASA Astrophysics Data System (ADS)

    Asphaug, E. I.; Gim, Y.; Belton, M.; Brophy, J.; Weissman, P. R.; Heggy, E.

    2012-12-01

    Knowing the interiors of comets and other primitive bodies is fundamental to our understanding of how planets formed. We have developed a Discovery-class mission formulation, Comet Radar Explorer (CORE), based on the use of previously flown planetary radar sounding techniques, with the goal of obtaining high resolution 3D images of the interior of a small primitive body. We focus on the Jupiter-Family Comets (JFCs) as these are among the most primitive bodies reachable by spacecraft. Scattered in from far beyond Neptune, they are ultimate targets of a cryogenic sample return mission according to the Decadal Survey. Other suitable targets include primitive NEOs, Main Belt Comets, and Jupiter Trojans. The approach is optimal for small icy bodies ~3-20 km diameter with spin periods faster than about 12 hours, since (a) navigation is relatively easy, (b) radar penetration is global for decameter wavelengths, and (c) repeated overlapping ground tracks are obtained. The science mission can be as short as ~1 month for a fast-rotating JFC. Bodies smaller than ~1 km can be globally imaged, but the navigation solutions are less accurate and the relative resolution is coarse. Larger comets are more interesting, but radar signal is unlikely to be reflected from depths greater than ~10 km. So, JFCs are excellent targets for a variety of reasons. We furthermore focus on the use of Solar Electric Propulsion (SEP) to rendezvous shortly after the comet's perihelion. This approach leaves us with ample power for science operations under dormant conditions beyond ~2-3 AU. This leads to a natural mission approach of distant observation, followed by closer inspection, terminated by a dedicated radar mapping orbit. Radar reflections are obtained from a polar orbit about the icy nucleus, which spins underneath. Echoes are obtained from a sounder operating at dual frequencies 5 and 15 MHz, with 1 and 10 MHz bandwidths respectively. The dense network of echoes is used to obtain global 3D images of interior structure to ~20 m, and to map dielectric properties (related to internal composition) to better than 200 m throughout. This is comparable in detail to modern 3D medical ultrasound, although we emphasize that the techniques are somewhat different. An interior mass distribution is obtained through spacecraft tracking, using data acquired during the close, quiet radar orbits. This is aligned with the radar-based images of the interior, and the shape model, to contribute to the multi-dimensional 3D global view. High-resolution visible imaging provides boundary conditions and geologic context to these interior views. An infrared spectroscopy and imaging campaign upon arrival reveals the time-evolving activity of the nucleus and the structure and composition of the inner coma, and the definition of surface units. CORE is designed to obtain a total view of a comet, from the coma to the active and evolving surface to the deep interior. Its primary science goal is to obtain clear images of internal structure and dielectric composition. These will reveal how the comet was formed, what it is made of, and how it 'works'. By making global yet detailed connections from interior to exterior, this knowledge will be an important complement to the Rosetta mission, and will lay the foundation for comet nucleus sample return by revealing the areas of shallow depth to 'bedrock', and relating accessible deposits to their originating provenances within the nucleus.

  16. Class-specific grasping of 3D objects from a single 2D image

    Microsoft Academic Search

    Han-Pang Chiu; Huan Liu; Leslie Pack Kaelbling; T. Lozano-Perez

    2010-01-01

    Our goal is to grasp 3D objects given a single image, by using prior 3D shape models of object classes. The shape models, defined as a collection of oriented primitive shapes centered at fixed 3D positions, can be learned from a few labeled images for each class. The 3D class model can then be used to estimate the 3D shape

  17. Improvements of 3-D image quality in integral display by reducing distortion errors

    Microsoft Academic Search

    Masahiro Kawakita; Hisayuki Sasaki; Jun Arai; Fumio Okano; Koya Suehiro; Yasuyuki Haino; Makoto Yoshimura; Masahito Sato

    2008-01-01

An integral three-dimensional (3-D) system based on the principle of integral photography can display natural 3-D images. We studied ways of improving the resolution and viewing angle of 3-D images by using extremely high-resolution (EHR) video in an integral 3-D video system. One of the problems with the EHR projection-type integral 3-D system is that positional errors appear between the

  18. Recent Advances in Retinal Imaging With Adaptive Optics

    E-print Network

    Williams, David

Recent Advances in Retinal Imaging With Adaptive Optics (Optics & Photonics News, January 2005). The use of adaptive optics was first suggested to improve ground-based astronomy, where the rapidly changing atmosphere blurs images; however, the use of adaptive optics is not limited to astronomical imaging, and in the past few decades there has

  19. Deconvolution of adaptive optics retinal images Julian C. Christou

    E-print Network

Retinal imaging is improved by using adaptive optics (AO); the wave-front correction is not perfect, however, and deconvolution can further improve the contrast of the adaptive optics images. In this work we demonstrate that quantitative information is also obtained. Although a diffraction

  20. Personal identification based on blood vessels of retinal fundus images

    Microsoft Academic Search

    Keisuke Fukuta; Toshiaki Nakagawa; Yoshinori Hayashi; Yuji Hatanaka; Takeshi Hara; Hiroshi Fujita

    2008-01-01

Biometric techniques have been implemented instead of conventional identification methods such as passwords in computers, automatic teller machines (ATM), and entrance and exit management systems. We propose a personal identification (PI) system using color retinal fundus images, which are unique to each individual. The proposed procedure for identification is based on comparison of an input fundus image with reference fundus

  1. Front and rear image generation module for depth-fused 3-D display

    Microsoft Academic Search

    Hideaki Takada; Shiro Suyama; M. Date; K. Kimura

    2006-01-01

    We have developed an image generation module for the depth-fused 3-D (DFD) display, which can show 3-D images using the DFD visual illusion. The module generates the front and rear images from the 2-D original image and depth-map image. The module uses a field programmable gate array (FPGA). This module shows 3-D images at the full video rate and easily

  2. Non-rigid Elastic Registration of Retinal Images using Local Window Mutual Information

    E-print Network

    Martin, Ralph R.

Non-rigid retinal image registration between colour fundus photographs and Scanning Laser Ophthalmoscope (SLO) images. The fundus image gives very high quality

  3. Three-dimensional reconstruction of blood vessels extracted from retinal fundus images.

    PubMed

    Martinez-Perez, M Elena; Espinosa-Romero, Arturo

    2012-05-01

We present a 3D reconstruction of retinal blood vessel trees using two views of fundus images. The problem is addressed by using well known computer vision techniques which consider: 1) The recovery of camera-eyeball model parameters by an auto-calibration method. The camera parameters are found via the solution of simplified Kruppa equations, based on correspondences found by an LMedS optimisation correlation between pairs of eight different views. 2) The extraction of blood vessels and skeletons from two fundus images. 3) The matching of corresponding points of the two skeleton trees. The trees are previously labelled during the analysis of 2D binary images. Finally, 4) the linear triangulation of matched correspondence points and the surface modelling via generalised cylinders using diameter measurements extracted from the 2D binary images. The method is nearly automatic and is tested with 2 sets of 10 fundus retinal images, each one taken from different subjects. Results of 3D vein and artery tree reconstructions are shown. PMID:22565765
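
    Step 4 of the pipeline is a standard linear (DLT) triangulation of matched skeleton points. The sketch below shows that step in isolation, assuming the two 3x4 projection matrices recovered by the auto-calibration are already available; the matrices and points in the toy example are placeholders, not values from the paper.

      import numpy as np

      def triangulate_point(P1, P2, x1, x2):
          """Linear (DLT) triangulation of one matched skeleton point.

          P1, P2 : 3x4 camera projection matrices (assumed known from auto-calibration).
          x1, x2 : matched 2D points (u, v) in the two fundus views.
          Returns the 3D point in inhomogeneous coordinates.
          """
          A = np.vstack([
              x1[0] * P1[2] - P1[0],
              x1[1] * P1[2] - P1[1],
              x2[0] * P2[2] - P2[0],
              x2[1] * P2[2] - P2[1],
          ])
          _, _, vt = np.linalg.svd(A)
          X = vt[-1]
          return X[:3] / X[3]

      # Toy example: two axis-aligned cameras observing the 3D point (1, 2, 10).
      P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
      P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
      X = np.array([1.0, 2.0, 10.0, 1.0])
      x1 = (P1 @ X)[:2] / (P1 @ X)[2]
      x2 = (P2 @ X)[:2] / (P2 @ X)[2]
      print(triangulate_point(P1, P2, x1, x2))  # ~ [1, 2, 10]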

  4. 3D geometric analysis of the aorta in 3D MRA follow-up pediatric image data

    NASA Astrophysics Data System (ADS)

    Wörz, Stefan; Alrajab, Abdulsattar; Arnold, Raoul; Eichhorn, Joachim; von Tengg-Kobligk, Hendrik; Schenk, Jens-Peter; Rohr, Karl

    2014-03-01

    We introduce a new model-based approach for the segmentation of the thoracic aorta and its main branches from follow-up pediatric 3D MRA image data. For robust segmentation of vessels even in difficult cases (e.g., neighboring structures), we propose a new extended parametric cylinder model which requires only relatively few model parameters. The new model is used in conjunction with a two-step fitting scheme for refining the segmentation result yielding an accurate segmentation of the vascular shape. Moreover, we include a novel adaptive background masking scheme and we describe a spatial normalization scheme to align the segmentation results from follow-up examinations. We have evaluated our proposed approach using different 3D synthetic images and we have successfully applied the approach to follow-up pediatric 3D MRA image data.

  5. Peripapillary retinal nerve fiber layer thickness distribution in Chinese with myopia measured by 3D-optical coherence tomography

    PubMed Central

    Zhao, Jing-Jing; Zhuang, Wen-Juan; Yang, Xue-Qiu; Li, Shan-Shan; Xiang, Wei

    2013-01-01

AIM To assess the effect of myopia on the thickness of the retinal nerve fiber layer (RNFL) measured by 3D optical coherence tomography (3D-OCT) in a group of nonglaucomatous Chinese subjects. METHODS Two hundred and fifty-eight eyes of 258 healthy Chinese myopic individuals were recruited, and four groups were classified according to their spherical equivalent (SE), beginning with low myopia (n=42, SE from -0.5D); RNFL thickness was measured with 3D-OCT. The RNFL thicknesses of the four sample groups were compared by one-way analysis of variance (one-way ANOVA) and the least significant difference test (LSD test). Correlations between RNFL thickness and axial length/spherical equivalent were assessed by linear regression analysis. RESULTS The overall RNFL parameters showed significant differences between groups, excluding the 7, 9, 10 and 11 o'clock hour thicknesses. The RNFL thickness of the superior, nasal, inferior and average measurements and of the 1, 2, 3, 4, 5, 6 and 12 o'clock sectors decreased with increasing axial length and higher degree of myopia. In contrast, as axial length and the degree of myopia increased, the temporal and 8 and 9 o'clock sector thicknesses increased. A considerable proportion of myopic eyes were classified as outside the normal limits; the 6 o'clock sector was the most notable, with 43.4% outside the normal limits. CONCLUSION The characteristics of the RNFL with changing degree of myopia were observed. As the degree of myopia increases, the RNFL thickness measured by 3D-OCT, including the average and the superior, nasal and inferior sectors, decreases. This change in RNFL thickness should be considered when using OCT to assess glaucomatous damage, especially in people with myopia. PMID:24195037
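
    The group comparison and correlation analysis described in the METHODS (one-way ANOVA with post-hoc testing, and linear regression of RNFL thickness against axial length) can be reproduced with SciPy as sketched below; the arrays are synthetic placeholders, not the study's measurements, and the group sizes and means are arbitrary.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(0)
      # Synthetic placeholder data standing in for the average RNFL thickness
      # (micrometres) of the four refraction groups; sizes and means are arbitrary.
      groups = [rng.normal(mu, 8.0, size=n)
                for mu, n in [(105, 42), (101, 72), (97, 84), (92, 60)]]

      f_stat, p_anova = stats.f_oneway(*groups)  # one-way ANOVA across the groups

      # Linear regression of average RNFL thickness on axial length (mm), also synthetic.
      axial = rng.normal(25.0, 1.2, size=258)
      rnfl = 160.0 - 2.5 * axial + rng.normal(0.0, 6.0, size=258)
      slope, intercept, r, p_reg, se = stats.linregress(axial, rnfl)

      print(f"ANOVA: F={f_stat:.2f}, p={p_anova:.3g}")
      print(f"RNFL vs axial length: slope={slope:.2f} um/mm, r={r:.2f}, p={p_reg:.3g}")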

  6. Imaging the 3D geometry of pseudotachylyte-bearing faults

    NASA Astrophysics Data System (ADS)

    Resor, Phil; Shervais, Katherine

    2013-04-01

Dynamic friction experiments in granitoid or gabbroic rocks that achieve earthquake slip velocities reveal significant weakening by melt-lubrication of the sliding surfaces. Extrapolation of these experimental results to seismic source depths (> 7 km) suggests that the slip weakening distance (Dw) over which this transition occurs is < 10 cm. The physics of this lubrication in the presence of a fluid (melt) is controlled by surface micro-topography. In order to characterize fault surface microroughness and its evolution during dynamic slip events on natural faults, we have undertaken an analysis of three-dimensional (3D) fault surface microtopography and its causes on a suite of pseudotachylyte-bearing fault strands from the Gole Larghe fault zone, Italy. The solidification of frictional melt soon after seismic slip ceases "freezes in" earthquake source geometries; however, it also precludes the development of extensive fault surface exposures that have enabled direct studies of fault surface roughness. We have overcome this difficulty by imaging the intact 3D geometry of the fault using high-resolution X-ray computed tomography (CT). We collected a suite of 2-3.5 cm diameter cores (2-8 cm long) from individual faults within the Gole Larghe fault zone with a range of orientations (+/- 45 degrees from average strike) and slip magnitudes (0-1 m). Samples were scanned at the University of Texas High Resolution X-ray CT Facility, using an Xradia MicroCT scanner with a 70 kV X-ray source. Individual voxels (3D pixels) are ~36 µm across. Fault geometry is thus imaged over ~4 orders of magnitude, from the micron scale up to ~Dw. Pseudotachylyte-bearing fault zones are imaged as tabular bodies of intermediate X-ray attenuation crosscutting high-attenuation biotite and low-attenuation quartz and feldspar of the surrounding tonalite. We extract the fault surfaces (the contact between the pseudotachylyte-bearing fault zone and the wall rock) using integrated manual mapping, automated edge detection, and statistical evaluation. This approach results in a digital elevation model for each side of the fault zone that we use to quantify melt thickness and volume as well as surface microroughness, and to explore the relationship between these properties and the geometry, slip magnitude, and wall rock mineralogy of the fault.

  7. 3D Chemical and Elemental Imaging by STXM Spectrotomography

    SciTech Connect

    Wang, J.; Karunakaran, C.; Lu, Y.; Hormes, J. [Canadian Light Source Inc., University of Saskatchewan, Saskatoon, SK S7N 0X4 (Canada); Hitchcock, A. P. [BIMR, McMaster University, Hamilton, ON L8S 4M1 (Canada); Prange, A. [Microbiology and Food Hygiene, Niederrhein University of Applied Sciences, Moenchengladbach (Germany); Institute for Microbiology and Virology, University of Witten/Herdecke, Witten (Germany); Center for Advanced Microstructures and Devices (CAMD), Louisiana State University, Baton Rouge, LA (United States); Franz, B. [Microbiology and Food Hygiene, Niederrhein University of Applied Sciences, Moenchengladbach (Germany); Harkness, T. [College of Medicine, University of Saskatchewan, Saskatoon, SK S7N 5E5 (Canada); Obst, M. [Center for Applied Geoscience, Tuebingen University, Tuebingen (Germany)

    2011-09-09

    Spectrotomography based on the scanning transmission x-ray microscope (STXM) at the 10ID-1 spectromicroscopy beamline of the Canadian Light Source was used to study two selected unicellular microorganisms. Spatial distributions of sulphur globules, calcium, protein, and polysaccharide in sulphur-metabolizing bacteria (Allochromatium vinosum) were determined at the S 2p, C 1s, and Ca 2p edges. 3D chemical mapping showed that the sulphur globules are located inside the bacteria with a strong spatial correlation with calcium ions (it is most probably calcium carbonate from the medium; however, with STXM the distribution and localization in the cell can be made visible, which is very interesting for a biologist) and polysaccharide-rich polymers, suggesting an influence of the organic components on the formation of the sulphur and calcium deposits. A second study investigated copper accumulating in yeast cells (Saccharomyces cerevisiae) treated with copper sulphate. 3D elemental imaging at the Cu 2p edge showed that Cu(II) is reduced to Cu(I) on the yeast cell wall. A novel needle-like wet cell sample holder for STXM spectrotomography studies of fully hydrated samples is discussed.

  8. Intra-operative 3D pose estimation of fractured bone segments for image guided orthopedic surgery

    Microsoft Academic Search

    P. Gamage; S. Q. Xie; P. Delmas; P. Xu; S. Mukherjee

    2009-01-01

The widespread adoption of minimally invasive surgical techniques has driven the need for 3D intra-operative image guidance. Hence the 3D pose estimation (position and orientation) performed through the registration of pre-operatively prepared 3D anatomical data to intra-operative 2D fluoroscopy images is one of the main research areas of image guided orthopedic surgery. The goal of this 2D-3D registration is to

  9. Vision expert system 3D-IMPRESS for automated construction of three dimensional image processing procedures

    Microsoft Academic Search

    Xiang-Rong Zhou; Akinobu Shimizu; Jun-ichi Hasegawa; Jun-ichiro Toriwaki; Takeshi Hara; Hiroshi Fujita

    2001-01-01

In this paper a three dimensional (3D) image processing expert system called 3D-IMPRESS is presented. This system can automatically construct a 3D image processing procedure by using pairs of an original input image and a desired output figure, called a sample figure, given by a user. This paper describes the outline of 3D-IMPRESS and presents a method of procedure consolidation for

  10. 3-D Imaging of the Heart Chambers with C-arm CT

    E-print Network

    Fiebig, Peter

3-D Imaging of the Heart Chambers with C-arm CT (3D-Bildgebung der Herzkammern mit C-Bogen-CT). Up to now, high-resolution 2-D X-ray images have been acquired with a C-arm system in standard views; imaging of the cardiac chambers can, however, also be performed in 3-D. In recent years, cardiac imaging in 3-D using a C-arm system

  11. Fully digital, phase-domain ?? 3D range image sensor in 130nm CMOS imaging technology 

    E-print Network

    Walker, Richard John

    2012-06-25

    Three-Dimensional (3D) optical range-imaging is a field experiencing rapid growth, expanding into a wide variety of machine vision applications, most recently including consumer gaming. Time of Flight (ToF) cameras, akin ...

  12. 3D imaging of semiconductor components by discrete laminography

    SciTech Connect

    Batenburg, K. J. [Centrum Wiskunde and Informatica, P.O. Box 94079, NL-1090 GB Amsterdam, The Netherlands and iMinds-Vision Lab, University of Antwerp, Universiteitsplein 1, B-2610 Wilrijk (Belgium); Palenstijn, W. J.; Sijbers, J. [iMinds-Vision Lab, University of Antwerp, Universiteitsplein 1, B-2610 Wilrijk (Belgium)

    2014-06-19

    X-ray laminography is a powerful technique for quality control of semiconductor components. Despite the advantages of nondestructive 3D imaging over 2D techniques based on sectioning, the acquisition time is still a major obstacle for practical use of the technique. In this paper, we consider the application of Discrete Tomography to laminography data, which can potentially reduce the scanning time while still maintaining a high reconstruction quality. By incorporating prior knowledge in the reconstruction algorithm about the materials present in the scanned object, far more accurate reconstructions can be obtained from the same measured data compared to classical reconstruction methods. We present a series of simulation experiments that illustrate the potential of the approach.

  13. GPU-accelerated denoising of 3D magnetic resonance images

    SciTech Connect

    Howison, Mark; Wes Bethel, E.

    2014-05-29

    The raw computational power of GPU accelerators enables fast denoising of 3D MR images using bilateral filtering, anisotropic diffusion, and non-local means. In practice, applying these filtering operations requires setting multiple parameters. This study was designed to provide better guidance to practitioners for choosing the most appropriate parameters by answering two questions: what parameters yield the best denoising results in practice? And what tuning is necessary to achieve optimal performance on a modern GPU? To answer the first question, we use two different metrics, mean squared error (MSE) and mean structural similarity (MSSIM), to compare denoising quality against a reference image. Surprisingly, the best improvement in structural similarity with the bilateral filter is achieved with a small stencil size that lies within the range of real-time execution on an NVIDIA Tesla M2050 GPU. Moreover, inappropriate choices for parameters, especially scaling parameters, can yield very poor denoising performance. To answer the second question, we perform an autotuning study to empirically determine optimal memory tiling on the GPU. The variation in these results suggests that such tuning is an essential step in achieving real-time performance. These results have important implications for the real-time application of denoising to MR images in clinical settings that require fast turn-around times.
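
    The two quality metrics used in the study, mean squared error and mean structural similarity, can be computed with scikit-image as in the sketch below. The CPU bilateral filter here is only a stand-in for the GPU kernels evaluated in the paper, and the noise level and filter parameters are arbitrary illustrative choices.

      import numpy as np
      from skimage import data, img_as_float
      from skimage.restoration import denoise_bilateral
      from skimage.metrics import mean_squared_error, structural_similarity

      # Reference image and a noisy version (a 2D stand-in for a 3D MR volume).
      reference = img_as_float(data.camera())
      rng = np.random.default_rng(0)
      noisy = np.clip(reference + rng.normal(0.0, 0.05, reference.shape), 0.0, 1.0)

      # CPU bilateral filter standing in for the GPU kernels; sigma_spatial plays
      # the role of the stencil-size parameter discussed above.
      denoised = denoise_bilateral(noisy, sigma_color=0.1, sigma_spatial=2)

      mse = mean_squared_error(reference, denoised)
      mssim = structural_similarity(reference, denoised, data_range=1.0)
      print(f"MSE={mse:.5f}, MSSIM={mssim:.4f}")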

  14. 3D lesion insertion in digital breast tomosynthesis images

    NASA Astrophysics Data System (ADS)

    Vaz, Michael S.; Besnehard, Quentin; Marchessoux, Cédric

    2011-03-01

    Digital breast tomosynthesis (DBT) is a new volumetric breast cancer screening modality. It is based on the principles of computed tomography (CT) and shows promise for improving sensitivity and specificity compared to digital mammography, which is the current standard protocol. A barrier to critically evaluating any new modality, including DBT, is the lack of patient data from which statistically significant conclusions can be drawn; such studies require large numbers of images from both diseased and healthy patients. Since the number of detected lesions is low in relation to the entire breast cancer screening population, there is a particular need to acquire or otherwise create diseased patient data. To meet this challenge, we propose a method to insert 3D lesions in the DBT images of healthy patients, such that the resulting images appear qualitatively faithful to the modality and could be used in future clinical trials or virtual clinical trials (VCTs). The method facilitates direct control of lesion placement and lesion-to-background contrast and is agnostic to the DBT reconstruction algorithm employed.
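
    A minimal illustration of controlled lesion insertion is sketched below: an additive blend of a synthetic 3D Gaussian "lesion" into a reconstructed volume with a user-chosen lesion-to-background contrast. This is only a toy stand-in; the method described above operates within the DBT projection and reconstruction chain and controls placement and contrast more carefully.

      import numpy as np

      def insert_gaussian_lesion(volume, center, sigma_vox, contrast):
          """Additively blend a synthetic 3D Gaussian 'lesion' into `volume`.

          center    : (z, y, x) voxel position of the lesion centre.
          sigma_vox : Gaussian radius in voxels.
          contrast  : peak amplitude relative to the background value at the centre.
          Illustrative only; not the insertion method of the paper.
          """
          z, y, x = np.indices(volume.shape)
          r2 = (z - center[0]) ** 2 + (y - center[1]) ** 2 + (x - center[2]) ** 2
          lesion = np.exp(-r2 / (2.0 * sigma_vox ** 2))
          background = volume[tuple(int(c) for c in center)]
          return volume + contrast * background * lesion

      vol = np.full((32, 64, 64), 100.0)  # placeholder "healthy" reconstructed volume
      out = insert_gaussian_lesion(vol, (16, 32, 32), sigma_vox=3.0, contrast=0.2)
      print(out.max())  # 120.0 at the lesion centre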

  15. 3D quantitative Fourier analysis of second harmonic generation microscopy images of collagen structure in cartilage

    NASA Astrophysics Data System (ADS)

    Romijn, Elisabeth I.; Lilledahl, Magnus B.

    2013-02-01

One of the main advantages of nonlinear microscopy is that it provides 3D imaging capability. Second harmonic generation is widely used to image the 3D structure of collagen fibers, and several works have highlighted the modification of the collagen fiber fabric in important diseases. By using an ellipsoid-specific fitting technique on the Fourier-transformed image, we show, using both synthetic images and SHG images from cartilage, that the 3D direction of the collagen fibers can be robustly determined.
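
    A simplified stand-in for the Fourier-domain orientation analysis is sketched below: instead of fitting an ellipsoid to the transformed image, it takes the second-moment matrix of the 3D power spectrum and reads the fibre direction from the eigenvector with the least spectral spread. The synthetic test volume is a placeholder, not the paper's data.

      import numpy as np

      def dominant_fiber_direction(volume):
          """Estimate a dominant fibre direction from the 3D power spectrum.

          For fibres aligned along a direction d, spectral energy concentrates in
          the plane perpendicular to d, so d is the eigenvector of the spectral
          second-moment matrix with the smallest eigenvalue.
          """
          spectrum = np.abs(np.fft.fftn(volume - volume.mean())) ** 2
          freqs = [np.fft.fftfreq(n) for n in volume.shape]
          fz, fy, fx = np.meshgrid(*freqs, indexing="ij")
          coords = np.stack([fz, fy, fx]).reshape(3, -1)
          w = spectrum.ravel()
          m = (coords * w) @ coords.T / w.sum()  # 3x3 spectral second-moment matrix
          eigvals, eigvecs = np.linalg.eigh(m)   # eigenvalues in ascending order
          return eigvecs[:, 0]                   # direction of least spectral spread

      # Synthetic test volume whose intensity is constant along z (fibres along z).
      n = 32
      z, y, x = np.indices((n, n, n))
      vol = np.cos(2 * np.pi * 5 * x / n) * np.cos(2 * np.pi * 7 * y / n)
      print(dominant_fiber_direction(vol))  # ~ [1, 0, 0], i.e. the z axis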

  16. Geometric Smoothing of 3D Surfaces and Nonlinear Diffusion of 3D Images

    E-print Network

Geometric smoothing of 3D surfaces and nonlinear diffusion of 3D images are important to a number of computer vision applications; we expect these techniques to be useful components for them. Keywords: shape representation, deformation, scale, 3D smoothing, curvature dependent flow

  17. Portable, low-priced retinal imager for eye disease screening

    NASA Astrophysics Data System (ADS)

    Soliz, Peter; Nemeth, Sheila; VanNess, Richard; Barriga, E. S.; Zamora, Gilberto

    2014-02-01

The objective of this project was to develop and demonstrate a portable, low-priced, easy to use non-mydriatic retinal camera for eye disease screening in underserved urban and rural locations. Existing portable retinal imagers do not meet the requirements of a low-cost camera with sufficient technical capabilities (field of view, image quality, portability, battery power, and ease-of-use) to be distributed widely to low volume clinics, such as the offices of single primary care physicians serving rural communities or other economically stressed healthcare facilities. Our approach for Smart i-Rx is based primarily on a significant departure from current generations of desktop and hand-held commercial retinal cameras as well as those under development. Our techniques include: 1) exclusive use of off-the-shelf components; 2) integration of the retinal imaging device into a low-cost, high utility camera mount and chin rest; 3) a unique optical and illumination design for a small form factor; 4) exploitation of the autofocus technology built into present digital SLR recreational cameras; and 5) integration of a polarization technique to avoid the corneal reflex. In a prospective study, 41 out of 44 diabetics were imaged successfully. No imaging was attempted on three of the subjects due to noticeably small pupils (less than 2mm). The images were of sufficient quality to detect abnormalities related to diabetic retinopathy, such as microaneurysms and exudates. These images were compared with ones taken non-mydriatically with a Canon CR-1 Mark II camera. No cases identified as having DR by expert retinal graders were missed in the Smart i-Rx images.

  18. Towards wide-field high-resolution retinal imaging

    E-print Network

    Kellerer, Aglae

    2015-01-01

Adaptive optical correction is an efficient technique to obtain high-resolution images of the retinal surface. A main limitation of adaptive optical correction, however, is the small size of the corrected image. For medical purposes it is important to increase the size of the corrected images. This can be done through composite imaging, but a major difficulty is then the introduction of reconstruction artifacts. Another approach is multi-conjugate adaptive optics (MCAO). MCAO comes in two flavors. The star-oriented approach has been demonstrated on the eye and allows the diameter of the corrected image to be increased by a factor of approximately 2-3. Difficulties in the tomographic reconstruction preclude the correction of larger fields. Here we investigate the possibility of applying a layer-oriented MCAO approach to retinal imaging.

  19. Non Conventional Imaging Systems for 3D Digitization of Transparent Objects.

    E-print Network

    Paris-Sud XI, Université de

Non-conventional imaging systems for the 3D digitization of transparent objects. Progress made in the field of non-conventional imaging techniques for the 3D digitization of transparent objects (shape from ...) has successfully been modified and applied to the digitization of specular objects. Keywords: 3D

  20. 3D Human Posture Estimation Using the HOG Features from Monocular Image

    E-print Network

    Takiguchi, Tetsuya

In this paper, we propose a method to estimate the 3D human posture from a monocular image without using markers. A 3D human body is expressed by a multi-joint model, and a set of the joint angles describes

  1. Omnidirectional Integral Photography images compression using the 3D-DCT

    E-print Network

    Athens, University of

A three-dimensional discrete cosine transform (3D-DCT) encoder for use in omnidirectional IP image compression. The encoder utilizes a 2D traversal scheme based on the Hilbert space-filling curve.

  2. Simulated 3D Ultrasound LV Cardiac Images for Active Shape Model Training

    E-print Network

    Frangi, Alejandro

A framework for 3D ultrasound cardiac segmentation using Active Shape Models (ASM) is presented. The proposed approach combines a shape model obtained from high-resolution MRI scans with an appearance model obtained from simulated 3D ultrasound images. Usually

  3. Wide field of view retinal imaging using one-micrometer adaptive optics scanning laser ophthalmoscope

    NASA Astrophysics Data System (ADS)

    Sasaki, Kazuhiro; Kurokawa, Kazuhiro; Makita, Shuichi; Tamada, Daiki; Lim, Yiheng; Cense, Barry; Yasuno, Yoshiaki

    2011-03-01

Wide field of view (FOV) retinal imaging with high resolution has been demonstrated for quantitative analysis of retinal microstructures. An adaptive optics scanning laser ophthalmoscope (AO-SLO) built in our laboratory was improved with a customized protocol for scanning a wide region, and a post-processing program was developed for generating wide-FOV retinal images. A high-resolution retinal image with a 1.7 degree by 3.0 degree FOV was obtained.

  4. Advanced 3D polarimetric flash ladar imaging through foliage

    NASA Astrophysics Data System (ADS)

    Murray, James T.; Moran, Steven E.; Roddier, Nicolas; Vercillo, Richard; Bridges, Robert; Austin, William

    2003-08-01

High-resolution three-dimensional flash ladar system technologies are under development that enable remote identification of vehicles and armament hidden by heavy tree canopies. We have developed a sensor architecture and design that employs a 3D flash ladar receiver to address this mission. The receiver captures 128×128×>30 three-dimensional images for each laser pulse fired. The voxel size of the image is 3"×3"×4" at the target location. A novel signal-processing algorithm has been developed that achieves sub-voxel (sub-inch) range precision estimates of target locations within each pixel. Polarization discrimination is implemented to augment the target-to-foliage contrast. When employed, this method improves the range resolution of the system beyond the classical limit (based on pulsewidth and detection bandwidth). Experiments were performed with a 6 ns transmitter pulsewidth that demonstrate 1-inch range resolution of a tank-like target occluded by foliage and a range precision of 0.3" for unoccluded targets.
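
    The "classical limit" mentioned above for a 6 ns pulse corresponds to a range resolution of c*tau/2, roughly 0.9 m; the short calculation below contrasts that with the reported sub-inch figures (only the numbers quoted in the abstract are used).

      # Pulse-limited ("classical") range resolution versus the reported precision.
      C = 299_792_458.0    # speed of light, m/s
      PULSE_WIDTH = 6e-9   # 6 ns transmitter pulse, as quoted in the abstract

      classical_res_m = C * PULSE_WIDTH / 2.0   # about 0.9 m
      classical_res_in = classical_res_m / 0.0254

      print(f"Classical range resolution: {classical_res_m:.2f} m "
            f"({classical_res_in:.0f} inches)")
      print("Reported: ~1 inch resolution through foliage, 0.3 inch precision unoccluded")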

  5. Seeing More Is Knowing More: V3D Enables Real-Time 3D Visualization and Quantitative Analysis of Large-Scale Biological Image Data Sets

    Microsoft Academic Search

    Hanchuan Peng; Fuhui Long

    2011-01-01

Everyone understands seeing more is knowing more. However, for large-scale 3D microscopic image analysis, it has not been an easy task to efficiently visualize, manipulate and understand high-dimensional data in 3D, 4D or 5D spaces. We developed a new 3D+ image visualization and analysis platform, V3D, to meet this need. The V3D system provides 3D visualization of gigabyte-sized microscopy image

  6. Statistical methods for 2D-3D registration of optical and LIDAR images

    E-print Network

    Mastin, Dana Andrew

    2009-01-01

    Fusion of 3D laser radar (LIDAR) imagery and aerial optical imagery is an efficient method for constructing 3D virtual reality models. One difficult aspect of creating such models is registering the optical image with the ...

  7. SEPARABLE BEAMFORMING FOR 3-D SYNTHETIC APERTURE ULTRASOUND IMAGING Ming Yang, Richard Sampson*

    E-print Network

    Kambhampati, Subbarao

Ultrasound imaging is one of the most popular medical imaging modalities; it is inexpensive, but power constraints have precluded practical implementation of high-resolution 3-D

  8. Automatic detection of optic disc and exudates in retinal images

    Microsoft Academic Search

    D. Kavitha; S. Shenbaga Devi

    2005-01-01

    A fast, reliable and efficient method for detecting the optic disc and exudates in retinal fundus images is presented in this work. The algorithm proceeds through three main steps: 1. Segmentation of blood vessels using median filtering and morphological operations and detection of the convergent point by fitting the blood vessels data using least square polynomial curve fitting algorithm. 2.

  9. Comparison of retinal image quality with spherical and customized aspheric

    E-print Network

    Dainty, Chris

Comparison of retinal image quality with spherical and customized aspheric intraocular lenses, calculated with real ray tracing.

  10. Blood Flow Magnetic Resonance Imaging of Retinal Degeneration

    E-print Network

    Duong, Timothy Q.

PURPOSE. This study aims to investigate quantitative basal blood flow as well as hypercapnia- and hyperoxia-induced blood flow changes in the retinas of Royal College of Surgeons (RCS) rats.

  11. ICER3D: A Progressive Wavelet-Based Compressor for Hyperspectral Images

    Microsoft Academic Search

    A. Kiely; M. Klimesh; H. Xie; N. Aranki

    2006-01-01

ICER-3D is a progressive, wavelet-based compressor for hyperspectral images. ICER-3D is derived from the ICER image compressor. ICER-3D can provide lossless and lossy compression, and incorporates an error-containment scheme to limit the effects of data loss during transmission. The three-dimensional wavelet decomposition structure used by ICER-3D exploits correlations in all three dimensions of hyperspectral data sets, while facilitating
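
    A hedged sketch of the kind of three-dimensional wavelet decomposition ICER-3D applies to a hyperspectral cube is given below using PyWavelets; the wavelet, level count, thresholding step and the random cube are arbitrary stand-ins and do not reflect ICER-3D's actual filter bank or bit-plane coder.

      import numpy as np
      import pywt

      # Placeholder hyperspectral cube (bands x rows x cols); random data only.
      rng = np.random.default_rng(0)
      cube = rng.normal(size=(64, 128, 128)).astype(np.float32)

      # 3D multilevel wavelet decomposition exploiting correlation along all three axes.
      coeffs = pywt.wavedecn(cube, wavelet="db2", level=3)

      # Crude "lossy" step for illustration: zero the smallest 90% of coefficients.
      arr, slices = pywt.coeffs_to_array(coeffs)
      thresh = np.quantile(np.abs(arr), 0.9)
      arr_lossy = np.where(np.abs(arr) >= thresh, arr, 0.0)
      recon = pywt.waverecn(
          pywt.array_to_coeffs(arr_lossy, slices, output_format="wavedecn"),
          wavelet="db2")
      recon = recon[tuple(slice(s) for s in cube.shape)]  # guard against padding
      print("RMS reconstruction error:", float(np.sqrt(np.mean((recon - cube) ** 2))))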

  12. Microwave image reconstruction from 3-D fields coupled to 2-D parameter estimation

    Microsoft Academic Search

    Qianqian Fang; Paul M. Meaney; Shireen D. Geimer; Anatoly V. Streltsov; Keith D. Paulsen

    2004-01-01

An efficient Gauss-Newton iterative imaging technique utilizing a three-dimensional (3-D) field solution coupled to a two-dimensional (2-D) parameter estimation scheme (3-D/2-D) is presented for microwave tomographic imaging in medical applications. While electromagnetic wave propagation is described fully by a 3-D vector field, a 3-D scalar model has been applied to improve the efficiency of the iterative reconstruction process with apparently

  13. Retinal functional imager (RFI): non-invasive functional imaging of the retina.

    PubMed

    Ganekal, S

    2013-01-01

Retinal functional imager (RFI) is a unique non-invasive functional imaging system with novel capabilities for visualizing the retina. The objective of this review was to show the utility of non-invasive functional imaging in various disorders. An electronic literature search was carried out using the websites www.pubmed.gov and www.google.com. The search words were retinal functional imager and non-invasive retinal imaging, used in combination. The articles published in or translated into English were studied. The RFI directly measures hemodynamic parameters such as retinal blood-flow velocity, oximetric state, and metabolic responses to photic activation, and generates capillary perfusion maps (CPM) that provide retinal vasculature detail similar to fluorescein angiography. All of these parameters stand in a direct relationship to the function, and therefore the health, of the retina, and are known to be degraded in the course of retinal diseases. Detecting changes in retinal function aids early diagnosis and treatment, as functional changes often precede structural changes in many retinal disorders. PMID:24172564

  14. Evidence of outer retinal changes in glaucoma patients as revealed by ultrahigh-resolution in vivo retinal imaging

    PubMed Central

    Choi, Stacey S; Zawadzki, Robert J; Lim, Michele C; Brandt, James D; Keltner, John L; Doble, Nathan; Werner, John S

    2010-01-01

Aims It is well established that glaucoma results in a thinning of the inner retina. To investigate whether the outer retina is also involved, ultrahigh-resolution retinal imaging techniques were utilised. Methods Eyes from 10 glaucoma patients (25–78 years old) were imaged using three research-grade instruments: (1) ultrahigh-resolution Fourier-domain optical coherence tomography (UHR-FD-OCT), (2) adaptive optics (AO) UHR-FD-OCT and (3) an AO-flood illuminated fundus camera (AO-FC). UHR-FD-OCT and AO-UHR-FD-OCT B-scans were examined for any abnormalities in the retinal layers. On some patients, cone density measurements were made from the AO-FC en face images. Correlations between retinal structure and visual sensitivity were measured by Humphrey visual-field (VF) testing made at the corresponding retinal locations. Results All three in vivo imaging modalities revealed evidence of outer retinal changes along with the expected thinning of the inner retina in glaucomatous eyes with VF loss. AO-UHR-FD-OCT images identified the exact location of structural changes within the cone photoreceptor layer, with the AO-FC en face images showing dark areas in the cone mosaic at the same retinal locations with reduced visual sensitivity. Conclusion Losses in cone density along with expected inner retinal changes were demonstrated in well-characterised glaucoma patients with VF loss. PMID:20956277

  15. Simulation of 3D MRI brain images for quantitative evaluation of image segmentation algorithms

    Microsoft Academic Search

    Gudrun Wagenknecht; Hans-Juergen Kaiser; Thorsten Obladen; Osama Sabri; Udalrich Buell

    2000-01-01

To model the true shape of MRI brain images, automatically classified T1-weighted 3D MRI images (gray matter, white matter, cerebrospinal fluid, scalp/bone and background) are utilized for simulation of grayscale data and imaging artifacts. For each class, Gaussian distribution of grayscale values is assumed, and mean and variance are computed from grayscale images. A random generator fills up the class

  16. Monocular 3D display unit using soft actuator for parallax image shift

    NASA Astrophysics Data System (ADS)

    Sakamoto, Kunio; Kodama, Yuuki

    2010-11-01

The human vision system has visual functions for viewing 3D images with correct depth. These functions are called accommodation, vergence and binocular stereopsis. Most 3D display systems utilize binocular stereopsis. The authors have developed a monocular 3D vision system with an accommodation mechanism, which is a useful function for perceiving depth. This vision unit needs image-shift optics for generating monocular parallax images, but the conventional image-shift mechanism is heavy because of its linear actuator system. To address this problem, we developed a light-weight 3D vision unit for presenting monocular stereoscopic images using a soft linear actuator made of a polypyrrole film.

  17. COMPARING THREE PCA-BASED METHODS FOR THE 3D VISUALIZATION OF IMAGING SPECTROSCOPY DATA

    E-print Network

    Liere, Robert van

Principal component analysis (PCA) based methods are compared for generating transfer functions for the 3D visualization of imaging spectroscopy data. Keywords: image processing and analysis, pattern analysis and recognition, transfer function, imaging spectroscopy, principal component analysis, multidimensional visualization.

  18. Direct writing of digital images onto 3D surfaces

    Microsoft Academic Search

    Raymond C. W. Sung; Jonathan R. Corney; David P. Towers; Ian Black; Duncan P. Hand; Finlay McPherson; Doug E. R. Clark; Markus S. Gross

    2006-01-01

Purpose – Aims to develop a greyscale "painting system" by enabling the physical reproduction of digital texture maps on arbitrary 3D objects, selectively exposing "pixels" of photographic emulsion with a robot-mounted light source. Design/methodology/approach – After reviewing existing methods of "decorating" 3D components, the properties of photographic emulsion are introduced and the nature of the rendering process's pixels is discussed.

  19. Methodology 3D Head Reconstruction from a Single Image

    E-print Network

    Barthelat, Francois

Faces in the database and faces submitted for recognition will be rotated to a standard pose and orientation. We use a generic head model to reconstruct a 3D face. Our method does not require storing a database and is limited to faces where both eyes are visible.

  20. Automatic Single-Image 3d Reconstructions of Indoor Manhattan World Scenes

    Microsoft Academic Search

    Erick Delage; Honglak Lee; Andrew Y. Ng

    2005-01-01

Summary. 3d reconstruction from a single image is inherently an ambiguous problem. Yet when we look at a picture, we can often infer 3d information about the scene. Humans perform single-image 3d reconstructions by using a variety of single-image depth cues, for example, by recognizing objects and surfaces, and reasoning about how these surfaces are connected to each

  1. Improvement of integral 3D image quality by compensating for lens position errors

    Microsoft Academic Search

    Makoto Okui; Jun Arai; Masaki Kobayashi; Fumio Okano

    2004-01-01

    Integral photography (IP) or integral imaging is a way to create natural-looking three-dimensional (3-D) images with full parallax. Integral three-dimensional television (integral 3-D TV) uses a method that electronically presents 3-D images in real time based on this IP method. The key component is a lens array comprising many micro-lenses for shooting and displaying. We have developed a prototype device

  2. A Level Set Method for Anisotropic Geometric Diffusion in 3D Image Processing

    E-print Network

    Preusser, Tobias

A new morphological multiscale method in 3D image processing is presented which combines the image processing methodology based on nonlinear diffusion equations and the theory

  3. DXSoil, a library for 3D image analysis in soil science

    Microsoft Academic Search

Jean-François Delerue; Edith Perrier

    2002-01-01

    A comprehensive series of routines has been developed to extract structural and topological information from 3D images of porous media. The main application aims at feeding a pore network approach to simulate unsaturated hydraulic properties from soil core images. Beyond the application example, the successive algorithms presented in the paper allow, from any 3D object image, the extraction of the

  4. A Range Image Refinement Technique for Multi-view 3D Model Reconstruction

    E-print Network

    Subbarao, Murali "Rao"

This paper presents a range image refinement technique for generating accurate 3D computer models of real objects. Range images obtained from a stereo-vision system typically

  5. Imaging simulation for 3-D laser radar based on target model

    Microsoft Academic Search

    Xiaowei Yan; Jiahao Deng

    2008-01-01

    Laser imaging radar has the unique capability to generate 3D images of objects, which is widely used in the civilian and military fields concerning target detection and identification. The imaging simulation for 3-D laser radar is presented to help in the design of the future laser radar systems and gauge their performances. Each stage from the laser source to the

  6. Connectivity Preserving Digitization of Blurred Binary Images in 2D and 3D

    E-print Network

    Hamburg,.Universität

Connectivity-preserving digitization of blurred binary images in 2D and 3D. The 3D results are interesting since up to now only digitization without blurring has been investigated. Since the input for any image analysis algorithm is a digital image, which does not need to have

  7. The Use of 3D Seismic Imaging in Making Groundwater Management Decisions At Hazardous Waste Sites

    Microsoft Academic Search

    Mary-Linda Adams

    Three-dimensional (3D) acoustic imaging is a highly developed technology that has produced a detailed image of the subsurface, at over 30 hazardous waste sites. 3D imaging has been used to provide the density of data necessary to analyze the pathways for fluid transport, whether in free phase or as a dissolved plume. This information has then been used to optimally

  8. Registration of 3-D CT and 2-D Flat Images of Mouse via Affine Transformation

    Microsoft Academic Search

    Zheng Xia; Xishi Huang; Xiaobo Zhou; Youxian Sun; V. Ntziachristos; Stephen Wong

    2008-01-01

It is difficult to directly coregister the 3-D fluorescence molecular tomography (FMT) image of a small tumor (maximal diameter of only a few millimeters) in a mouse with a larger CT image of the entire animal that spans about 10 cm. This paper proposes a new method to register the 2-D flat and 3-D CT images first to facilitate the

  9. Data compression for transmission of holographic 3D images using digital-SSTV

    NASA Astrophysics Data System (ADS)

    Takano, Kunihiko; Sato, Koki; Okumura, Toshimichi; Kanaoka, Takumi; Koizumi, Shinya; Muto, Kenji; Wakabayashi, Ryoji

    2006-02-01

In this paper, the quality of recovered holographic images produced by CGH with JPEG2000 compression is investigated. The results show that this process yields good 3D reconstructed images, indicating that transmission of 3D holographic images is feasible.

  10. Surface Reconstruction by Propagating 3D Stereo Data in Multiple 2D Images

    E-print Network

    Paris, Sylvain

Surface reconstruction from multiple images. The central idea is to explore the integration of both 3D stereo data and 2D calibrated images. This is motivated by the fact that only robust and accurate feature points that survived

  11. MULTIPLE IMAGE DISPARITY CORRECTION FOR 3-D SCENE REPRESENTATION Matthew Grum and Adrian G. Bors

    E-print Network

    Bors, Adrian

This paper addresses the representation of multiple-object 3-D scenes from a given sparse set of images. Multi-camera stereo vision has been addressed; methods with interpolation and generalisation properties have been widely used in pattern recognition and image

  12. Hydraulic conductivity imaging from 3-D transient hydraulic tomography at several pumping/observation densities

    E-print Network

    Barrash, Warren

3-D hydraulic tomography (3-D HT) estimates aquifer properties (primarily hydraulic conductivity, K) by joint inversion of head change data from multiple

  13. 3D Digital Volume Correlation of Synchrotron Radiation Laminography images of ductile

    E-print Network

A way of measuring 3D displacement fields in the bulk during ductile crack initiation via combined synchrotron radiation computed laminography (SRCL) and digital volume correlation. Compared to tomography, SRCL is a technique that is particularly adapted to obtaining 3D reconstructed volumes of objects

  14. Image-Based Model Acquisition and Interactive Rendering for Building 3D Digital Archives

    E-print Network

    Chang, Chun-Fa

Image-based model acquisition and interactive rendering for building three-dimensional (3D) digital archives of museum artifacts. Our system allows the targeted audience to observe the digitized artifacts in a manner similar to the 3D visualization of CT or ultrasound scans in the medical community. Therefore, building 3D digital archives

  15. IEEE TRANSACTIONS ON IMAGE PROCESSING 1 3D Discrete Shearlet Transform and Video

    E-print Network

    Labate, Demetrio

In this paper, we introduce a digital implementation of the 3D shearlet transform, adapted to the digital setting and with a more flexible mathematical structure. The 3D digital shearlet transform algorithm

  16. Topological Equivalence between a 3D Object and the Reconstruction of Its Digital Image

    E-print Network

    Latecki, Longin Jan

If one digitizes a 3D object, even with a dense sampling grid, the reconstructed digital object may differ topologically from the original; we give conditions under which it is homeomorphic and close to the 3D object. The resulting digital object is always well-composed, which has nice

  17. Non conventional Imaging Systems for 3D Digitization of transparent and/or specular

    E-print Network

    Paris-Sud XI, Université de

Non-conventional imaging systems for the 3D digitization of transparent and/or specular manufactured objects presented in the literature. Keywords: 3D digitization, non-diffuse surfaces, shape from polarization. 3D scanning has been investigated for several years and most of the proposed approaches assume

  18. Segmental reproducibility of retinal blood flow velocity measurements using retinal function imager

    PubMed Central

    Chhablani, Jay; Bartsch, Dirk-Uwe; Kozak, Igor; Cheng, Lingyun; Alshareef, Rayan A; Rezeq, Sami S; Sampat, Kapil M; Garg, Sunir J; Burgansky-Eliash, Zvia; Freeman, William R

    2013-01-01

Background To evaluate the reproducibility of blood flow velocity measurements of individual retinal blood vessel segments using retinal function imager (RFI). Methods Eighteen eyes of 15 healthy subjects were enrolled prospectively at three centers. All subjects underwent RFI imaging in two separate sessions 15 min apart by a single experienced photographer at each center. An average of five to seven serial RFI images were obtained. All images were transferred electronically to one center, and were analyzed by a single observer. Multiple blood vessel segments (each shorter than 100 µm) were co-localized on first and second session images taken at different times of the same fundus using built-in software. Velocities of corresponding segments were determined, and then the inter-session reproducibility of flow velocity was assessed by the concordance correlation co-efficient (CCC), coefficient of reproducibility (CR), and coefficient of variance (CV). Results Inter-session CCC for flow velocity was 0.97 (95% confidence interval (CI), 0.966 to 0.9797). The CR was 1.49 mm/sec (95% CI, 1.39 to 1.59 mm/sec), and CV was 10.9%. The average arterial blood flow velocity was 3.16 mm/sec, and average venous blood flow velocity was 3.15 mm/sec. The CR for arterial and venous blood flow velocity was 1.61 mm/sec and 1.27 mm/sec respectively. Conclusion RFI provides reproducible measurements for retinal blood flow velocity for individual blood vessel segments, with 10.9% variability. PMID:23700326
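
    The three agreement statistics reported (CCC, CR and CV) can be computed from paired session measurements as sketched below. The arrays are synthetic placeholders, and the CR here is taken as 1.96 times the standard deviation of the inter-session differences with the CV based on the within-subject standard deviation, common definitions that may differ in detail from the authors' exact formulas.

      import numpy as np

      def agreement_stats(session1, session2):
          """Inter-session agreement for paired velocity measurements (mm/s)."""
          s1, s2 = np.asarray(session1, float), np.asarray(session2, float)
          # Lin's concordance correlation coefficient.
          ccc = (2.0 * np.cov(s1, s2, bias=True)[0, 1]
                 / (s1.var() + s2.var() + (s1.mean() - s2.mean()) ** 2))
          diff = s1 - s2
          cr = 1.96 * diff.std(ddof=1)  # coefficient of reproducibility (assumed definition)
          cv = 100.0 * (diff.std(ddof=1) / np.sqrt(2.0)) / np.mean((s1 + s2) / 2.0)
          return ccc, cr, cv

      rng = np.random.default_rng(0)
      v1 = rng.normal(3.15, 1.0, size=200)        # synthetic session-1 velocities
      v2 = v1 + rng.normal(0.0, 0.35, size=200)   # synthetic session-2 velocities
      ccc, cr, cv = agreement_stats(v1, v2)
      print(f"CCC={ccc:.3f}, CR={cr:.2f} mm/s, CV={cv:.1f}%")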

  19. [Ophthalmologic diagnostic procedures and imaging of retinal vein occlusions].

    PubMed

    Mirshahi, A; Lorenz, K; Kramann, C; Stoffelns, B; Hattenbach, L-O

    2011-02-01

    Retinal vein occlusions are a common vascular disease of the eye. Ophthalmological diagnostic procedures and imaging are important for the prognosis of the disease, as are the systemic work-up and therapy. Besides routine ophthalmic tests (visual acuity, slit lamp examination, funduscopy) a work-up for glaucoma such as intraocular pressure, visual field or 24 h IOP profile is useful as a diagnostic procedure. Furthermore, new diagnostic and imaging tests such as central corneal thickness and optic nerve head imaging by Heidelberg retina tomography or optical coherence tomography (OCT) should be considered for glaucoma evaluation. Optical coherence tomography also plays a major role in treatment monitoring of macular edema secondary to retinal vein occlusions. Fluorescein angiography is well established and can provide information with regard to size and extent of the occlusion, degree of ischemia, areas of non-perfusion and neovascularization, as well as macular edema. PMID:21331683

  20. Assessing 3D tunnel position in ACL reconstruction using a novel single image 3D-2D registration

    NASA Astrophysics Data System (ADS)

    Kang, X.; Yau, W. P.; Otake, Y.; Cheung, P. Y. S.; Hu, Y.; Taylor, R. H.

    2012-02-01

    The routinely used procedure for evaluating tunnel positions following anterior cruciate ligament (ACL) reconstructions based on standard X-ray images is known to pose difficulties in terms of obtaining accurate measures, especially in providing three-dimensional tunnel positions. This is largely due to the variability in individual knee joint pose relative to X-ray plates. Accurate results were reported using postoperative CT. However, its extensive usage in clinical routine is hampered by its major requirement of having CT scans of individual patients, which is not available for most ACL reconstructions. These difficulties are addressed through the proposed method, which aligns a knee model to X-ray images using our novel single-image 3D-2D registration method and then estimates the 3D tunnel position. In the proposed method, the alignment is achieved by using a novel contour-based 3D-2D registration method wherein image contours are treated as a set of oriented points. However, instead of using some form of orientation weighting function and multiplying it with a distance function, we formulate the 3D-2D registration as a probability density estimation using a mixture of von Mises-Fisher-Gaussian (vMFG) distributions and solve it through an expectation maximization (EM) algorithm. Compared with the ground-truth established from postoperative CT, our registration method in an experiment using a plastic phantom showed accurate results with errors of (-0.43°+/-1.19°, 0.45°+/-2.17°, 0.23°+/-1.05°) and (0.03+/-0.55, -0.03+/-0.54, -2.73+/-1.64) mm. As for the entry point of the ACL tunnel, one of the key measurements, it was obtained with high accuracy of 0.53+/-0.30 mm distance errors.
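
    The registration above models each contour point as a position plus an orientation, with a Gaussian factor for position and a von Mises (circular von Mises-Fisher) factor for orientation. The sketch below evaluates such a per-component log-density, the building block an EM algorithm would weight and maximise; the parameter names and values are placeholders, and the paper's full mixture model and update equations are not reproduced.

      import numpy as np
      from scipy.stats import multivariate_normal, vonmises

      def vmfg_logpdf(points_xy, angles, mean_xy, cov_xy, mu_angle, kappa):
          """Log-density of 2D oriented points under one Gaussian x von Mises component.

          points_xy : (N, 2) contour point positions.
          angles    : (N,) contour orientations in radians.
          Positions follow a Gaussian(mean_xy, cov_xy); orientations follow a
          von Mises(mu_angle, kappa).  Illustrative only, not the paper's code.
          """
          log_pos = multivariate_normal.logpdf(points_xy, mean=mean_xy, cov=cov_xy)
          log_dir = vonmises.logpdf(angles, kappa, loc=mu_angle)
          return log_pos + log_dir

      rng = np.random.default_rng(0)
      pts = rng.normal([10.0, 20.0], 2.0, size=(5, 2))   # placeholder contour points
      ang = rng.normal(0.3, 0.1, size=5)                 # placeholder orientations
      print(vmfg_logpdf(pts, ang, mean_xy=[10.0, 20.0], cov_xy=4.0 * np.eye(2),
                        mu_angle=0.3, kappa=20.0))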

  1. Real-time computer-generated integral imaging and 3D image calibration for augmented reality surgical navigation.

    PubMed

    Wang, Junchen; Suenaga, Hideyuki; Liao, Hongen; Hoshi, Kazuto; Yang, Liangjing; Kobayashi, Etsuko; Sakuma, Ichiro

    2015-03-01

    Autostereoscopic 3D image overlay for augmented reality (AR) based surgical navigation has been studied and reported many times. For the purpose of surgical overlay, the 3D image is expected to have the same geometric shape as the original organ, and can be transformed to a specified location for image overlay. However, how to generate a 3D image with high geometric fidelity and quantitative evaluation of 3D image's geometric accuracy have not been addressed. This paper proposes a graphics processing unit (GPU) based computer-generated integral imaging pipeline for real-time autostereoscopic 3D display, and an automatic closed-loop 3D image calibration paradigm for displaying undistorted 3D images. Based on the proposed methods, a novel AR device for 3D image surgical overlay is presented, which mainly consists of a 3D display, an AR window, a stereo camera for 3D measurement, and a workstation for information processing. The evaluation on the 3D image rendering performance with 2560×1600 elemental image resolution shows the rendering speeds of 50-60 frames per second (fps) for surface models, and 5-8 fps for large medical volumes. The evaluation of the undistorted 3D image after the calibration yields sub-millimeter geometric accuracy. A phantom experiment simulating oral and maxillofacial surgery was also performed to evaluate the proposed AR overlay device in terms of the image registration accuracy, 3D image overlay accuracy, and the visual effects of the overlay. The experimental results show satisfactory image registration and image overlay accuracy, and confirm the system usability. PMID:25465067

  2. Motion parallax based restitution of 3D images on legacy consumer mobile devices

    Microsoft Academic Search

    Martin Rerabek; Lutz Goldmann; Jong-Seok Lee; Touradj Ebrahimi

    2011-01-01

    While 3D display technologies are already widely available for cinema and home or corporate use, only a few portable devices currently feature 3D display capabilities. Moreover, the large majority of 3D display solutions rely on binocular perception. In this paper, we study the alternative methods for restitution of 3D images on conventional 2D displays and analyze their respective performance. This

  3. 3-D Seismic Methods for Shallow Imaging Beneath Pavement

    E-print Network

    Miller, Brian

    2013-05-31

    The research presented in this dissertation focuses on survey design and acquisition of near-surface 3D seismic reflection and surface wave data on pavement. Increased efficiency for mapping simple subsurface interfaces through a combined use...

  4. 3D-Image Reconstruction in Highly Collimated 3D

    E-print Network

    Labate, Demetrio

The acceptance of new imaging instruments has been slow in the past due to concerns about radiation safety. The paper describes a highly collimated 3D imaging approach, which reduces the overall radiation exposure when primarily the reconstruction of a specified region

  5. Anaglyph of retinal stem cells and developing axons: selective volume enhancement in microscopy images.

    PubMed

    Carri, Néstor Gabriel; Bermúdez, Sebastián Noo; Fiore, Luciano; Di Napoli, Jennifer; Scicolone, Gabriel

    2014-04-01

Retinal stem cell culture has become a powerful research tool, but it requires reliable methods to obtain high-quality images of living and fixed cells. This study describes a procedure for using phase contrast microscopy to obtain three-dimensional (3-D) images for the study of living cells by photographing a living cell in a culture dish from bottom to top, as well as a procedure to increase the quality of scanning electron micrographs and laser confocal images. The procedure may also be used to photograph clusters of neural stem cells, and retinal explants with vigorous axonal growth. In the case of scanning electron microscopy and laser confocal images, a Gaussian procedure is applied to the original images. The methodology allows for the creation of anaglyphs and video reconstructions, and provides high-quality images for characterizing living cells or tissues, fixed cells or tissues, or organs observed with scanning electron and laser confocal microscopy. Its greatest advantage is that it is easy to obtain good results without expensive equipment. The procedure is fast, precise, simple, and offers a strategic tool for obtaining 3-D reconstructions of cells and axons suitable for easily determining the orientation and polarity of a specimen. It also enables video reconstructions to be created, even of specimens parallel to the plastic base of a tissue culture dish. It is also helpful for studying the distribution and organization of living cells in a culture, as it provides the same powerful information as optical tomography, which most confocal microscopes cannot do on sterile living cells. PMID:24510888

  6. A systematic approach for 2D-image to 3D-range registration in urban environments

    E-print Network

    Stamos, Ioannis

Keywords: 2D-to-3D registration, photorealistic 3D modeling. The approach matches features (extracted from 2D images) with 3D directions (derived from a 3D range model). Then, a hypothesis

  7. Multiview Geometry for Texture Mapping 2D Images Onto 3D Range Data Lingyun Liu and Ioannis Stamos

    E-print Network

    Wolberg, George

This work combines 3D range-scanning technology with traditional digital photography. A systematic way for registering 3D range scans and 2D images is presented, together with multiview geometry and 3D registration techniques for texture mapping 2D images onto 3D range data. The 3D range scans

  8. Production of 3D consistent image representation of outdoor scenery for multimedia ambiance communication from multiviewpoint range data measured with a 3D laser scanner

    Microsoft Academic Search

    Takahiro Saito; Hiroshi Imamura; Shin-ichi Sunaga; Takashi Komatsu

    2002-01-01

    Toward future 3D image communication, we have started studying the Multimedia Ambiance Communication, a kind of shared-space communication, and adopted an approach to design the 3D-image space using actual images of outdoor scenery, by introducing the concept of the three-layer model of long-, mid- and short-range views. The long- and mid-range views do not require precise representation of their 3D

  9. Determining an initial image pair for fixing the scale of a 3d reconstruction from an image

    E-print Network

    Abstract: Algorithms for metric 3D reconstruction of scenes from calibrated images ... of such a stable image pair is proposed. Based on this quality measure, a fully automatic initialization phase ...

  10. Retinally reconstructed images (RRIs): digital images having a resolution match with the human eye

    Microsoft Academic Search

    Turker Kuyel; Wilson S. Geisler; Joydeep Ghosh

    1998-01-01

    Current digital image/video storage, transmission and display technologies use uniformly sampled images. On the other hand, the human retina has a nonuniform sampling density that decreases dramatically as the solid angle from the visual fixation axis increases. Therefore, there is sampling mismatch between the uniformly sampled digital images and the retina. This paper introduces Retinally Reconstructed Images (RRIs), a novel

  11. Retinally reconstructed images: digital images having a resolution match with the human eye

    Microsoft Academic Search

    T. Kuyel; Wilson S. Geisler; Joydeep Ghosh

    1999-01-01

    Current digital image/video storage, transmission and display technologies use uniformly sampled images. On the other hand, the human retina has a nonuniform sampling density that decreases dramatically as the solid angle from the visual fixation axis increases. Therefore, there is sampling mismatch between the uniformly sampled digital images and the retina. This paper introduces retinally reconstructed images (RRI's),

  12. Seeing More Is Knowing More: V3D Enables Real-Time 3D Visualization and Quantitative Analysis of Large-Scale Biological Image Data Sets

    NASA Astrophysics Data System (ADS)

    Peng, Hanchuan; Long, Fuhui

    Everyone understands seeing more is knowing more. However, for large-scale 3D microscopic image analysis, it has not been an easy task to efficiently visualize, manipulate and understand high-dimensional data in 3D, 4D or 5D spaces. We developed a new 3D+ image visualization and analysis platform, V3D, to meet this need. The V3D system provides 3D visualization of gigabyte-sized microscopy image stacks in real time on current laptops and desktops. V3D streamlines the online analysis, measurement and proofreading of complicated image patterns by combining ergonomic functions for selecting a location in an image directly in 3D space and for displaying biological measurements, such as from fluorescent probes, using the overlaid surface objects. V3D runs on all major computer platforms and can be enhanced by software plug-ins to address specific biological problems. To demonstrate this extensibility, we built a V3D-based application, V3D-Neuron, to reconstruct complex 3D neuronal structures from high-resolution brain images. V3D-Neuron can precisely digitize the morphology of a single neuron in a fruitfly brain in minutes, with about a 17-fold improvement in reliability and tenfold savings in time compared with other neuron reconstruction tools. Using V3D-Neuron, we demonstrate the feasibility of building a high-resolution 3D digital atlas of neurite tracts in the fruitfly brain. V3D can be easily extended using a simple-to-use and comprehensive plugin interface.

  13. High Resolution MALDI Imaging Mass Spectrometry of Retinal Tissue Lipids

    NASA Astrophysics Data System (ADS)

    Anderson, David M. G.; Ablonczy, Zsolt; Koutalos, Yiannis; Spraggins, Jeffrey; Crouch, Rosalie K.; Caprioli, Richard M.; Schey, Kevin L.

    2014-08-01

    Matrix assisted laser desorption ionization imaging mass spectrometry (MALDI IMS) has the ability to provide an enormous amount of information on the abundances and spatial distributions of molecules within biological tissues. The rapid progress in the development of this technology significantly improves our ability to analyze smaller and smaller areas and features within tissues. The mammalian eye has evolved over millions of years to become an essential asset for survival, providing important sensory input of an organism's surroundings. The highly complex sensory retina of the eye is comprised of numerous cell types organized into specific layers with varying dimensions, the thinnest of which is the 10 μm retinal pigment epithelium (RPE). This single cell layer and the photoreceptor layer contain the complex biochemical machinery required to convert photons of light into electrical signals that are transported to the brain by axons of retinal ganglion cells. Diseases of the retina, including age-related macular degeneration (AMD), retinitis pigmentosa, and diabetic retinopathy, occur when the functions of these cells are interrupted by molecular processes that are not fully understood. In this report, we demonstrate the use of high spatial resolution MALDI IMS and FT-ICR tandem mass spectrometry in the Abca4 -/- knockout mouse model of Stargardt disease, a juvenile onset form of macular degeneration. The spatial distributions and identity of lipid and retinoid metabolites are shown to be unique to specific retinal cell layers.

  14. High-resolution retinal imaging: enhancement techniques

    NASA Astrophysics Data System (ADS)

    Mujat, Mircea; Patel, Ankit; Iftimia, Nicusor; Akula, James D.; Fulton, Anne B.; Ferguson, R. Daniel

    2015-03-01

    AO has achieved success in a range of applications in ophthalmology where microstructures need to be identified, counted, and mapped. Multiple images are averaged to improve the SNR or analyzed for temporal dynamics. For small patches, image registration by cross-correlation is straightforward. Larger images require more sophisticated registration techniques. Strip-based registration has been used successfully for photoreceptor mosaic alignment in small patches; however, if the deformations along long strips are not simple displacements, averaging will actually degrade the images. We have applied non-rigid registration that significantly improves the quality of processed images for mapping cones and rods, and microvasculature in dark-field imaging. Local grid deformations account for local image stretching and compression due to a number of causes. Individual blood cells can be traced along capillaries in high-speed imaging (130 fps) and flow dynamics can be analyzed.
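
    For the small-patch case, the cross-correlation alignment mentioned above can be sketched in a few lines; this is a generic FFT-based shift estimator over NumPy frames, not the authors' non-rigid registration pipeline.

      # Estimate the integer (dy, dx) shift between two frames by FFT cross-correlation,
      # then average the shift-corrected frames. Illustrative sketch only.
      import numpy as np

      def estimate_shift(ref, mov):
          corr = np.fft.ifft2(np.fft.fft2(ref) * np.conj(np.fft.fft2(mov))).real
          peak = np.unravel_index(np.argmax(corr), corr.shape)
          return tuple(p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape))

      def align_and_average(frames):
          ref = frames[0].astype(float)
          acc = ref.copy()
          for mov in frames[1:]:
              dy, dx = estimate_shift(ref, mov.astype(float))
              acc += np.roll(mov.astype(float), (dy, dx), axis=(0, 1))
          return acc / len(frames)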

  15. Simulation of a new 3D imaging sensor for identifying difficult military targets

    Microsoft Academic Search

    Christophe Harvey; Jonathan Wood; Peter Randall; Graham Watson; Gordon Smith

    2008-01-01

    This paper reports the successful application of automatic target recognition and identification (ATR\\/I) algorithms to simulated 3D imagery of 'difficult' military targets. QinetiQ and Selex S&AS are engaged in a joint programme to build a new 3D laser imaging sensor for UK MOD. The sensor is a 3D flash system giving an image containing range and intensity information suitable for

  16. 3-D imaging and quantitative comparison of human dentitions and simulated bite marks

    Microsoft Academic Search

    S. A. Blackwell; R. V. Taylor; I. Gordon; C. L. Ogleby; T. Tanijiri; M. Yoshino; M. R. Donald; J. G. Clement

    2007-01-01

    This study presents a technique developed for 3-D imaging and quantitative comparison of human dentitions and simulated bite marks. A sample of 42 study models and the corresponding bites, made by the same subjects in acrylic dental wax, were digitised by laser scanning. This technique allows image comparison of a 3-D dentition with a 3-D bite mark, eliminating distortion due

  17. Registration of bimodal retinal images - improving modifications

    Microsoft Academic Search

    L. Kubecka; J. Jan

    2004-01-01

    The proper optic disc segmentation in images provided by a confocal laser scanning ophthalmoscope and by a color fundus camera is a necessary step in early glaucoma or arteriosclerosis detection. Fusing information from both modalities into a vector-valued image is expected to improve the segmentation reliability. The paper describes a registration of these images using optimization based on mutual information criterion function extended

  18. 3D image display of fetal ultrasonic images by thin shell

    NASA Astrophysics Data System (ADS)

    Wang, Shyh-Roei; Sun, Yung-Nien; Chang, Fong-Ming; Jiang, Ching-Fen

    1999-05-01

    Due to its convenience and non-invasiveness, ultrasound has become an essential tool for the diagnosis of fetal abnormalities during pregnancy in obstetrics. However, the 'noisy and blurry' nature of ultrasound data makes the rendering of the data a challenge in comparison with MRI and CT images. In addition to the speckle noise, unwanted objects usually occlude the target to be observed. In this paper, we proposed a new system that can effectively suppress the speckle noise, extract the target object, and clearly render the 3D fetal image in almost real-time from 3D ultrasound image data. The system is based on a deformable model that detects contours of the object according to the local image features of ultrasound. In addition, in order to accelerate rendering speed, a thin shell is defined to separate the observed organ from unrelated structures based on the detected contours. In this way, we can support quick 3D display of ultrasound, and the efficient visualization of 3D fetal ultrasound thus becomes possible.
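
    The speckle-suppression step can be illustrated with a basic 3D median filter over the ultrasound volume; this is a generic pre-processing sketch, not the paper's deformable-model and thin-shell pipeline, and the volume here is a synthetic placeholder.

      # Simple speckle reduction on a 3D ultrasound volume with a median filter.
      import numpy as np
      from scipy.ndimage import median_filter

      volume = np.random.rand(64, 128, 128).astype(np.float32)  # placeholder volume
      despeckled = median_filter(volume, size=3)                 # 3x3x3 neighbourhood median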

  19. 3D color surface digitization of human head from sequence of structured light images

    NASA Astrophysics Data System (ADS)

    Jin, Gang; Li, Dehua; Huang, Jianzhong; Li, Zeyu

    1998-09-01

    Acquiring a 3D color model of the human head is desired in many applications. In this paper, we introduce a scheme to obtain 3D color information of the human head from an image sequence in a 3D laser color scanner. Structured light technology is employed to measure depth. We study the relationship among the object's images at different positions. By synthesizing this information, we can obtain the shape of the hair area from the contour image. True color information of sample points can be acquired from the specified image in the image sequence. The experimental results are satisfactory.

  20. On the Cohomology of 3D Digital Images

    E-print Network

    Gonzalez-Diaz, Rocio; 10.1016/j.dam.2004.09.014

    2011-01-01

    We propose a method for computing the cohomology ring of three-dimensional (3D) digital binary-valued pictures. We obtain the cohomology ring of a 3D digital binary-valued picture $I$ via a simplicial complex K(I) topologically representing (up to isomorphisms of pictures) the picture I. The usefulness of a simplicial description of the "digital" cohomology ring of 3D digital binary-valued pictures is tested by means of a small program visualizing the different steps of the method. Some examples concerning topological thinning, the visualization of representative (co)cycles of (co)homology generators and the computation of the cup product on the cohomology of simple pictures are shown.

  1. A Java program for stereo retinal image visualization.

    PubMed

    Zhu, Yang-Ming

    2007-03-01

    Stereo imaging of the optic-disc is a gold standard examination of glaucoma, and progression of glaucoma can be detected from temporal stereo images. A Java-based software system is reported here which automatically aligns the left and right stereo retinal images and presents the aligned images side by side, along with the anaglyph computed from the aligned images. Moreover, the disparity between two aligned images is computed and used as the depth cue to render the optic-disc images, which can be interactively edited, panned, zoomed, rotated, and animated, allowing one to examine the surface of the optic-nerve head from different view angles. Measurements including length, area, and volume of regions of interest can also be performed interactively. PMID:17257706
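
    The disparity-as-depth-cue idea can be sketched with a standard block-matching routine; the sketch below uses OpenCV on a rectified grayscale pair and is not the Java implementation described in this record. File names are placeholders.

      # Dense disparity map from an aligned stereo pair via OpenCV block matching;
      # the disparity then serves as a per-pixel depth cue.
      import cv2

      left = cv2.imread("stereo_left.png", cv2.IMREAD_GRAYSCALE)
      right = cv2.imread("stereo_right.png", cv2.IMREAD_GRAYSCALE)

      stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
      disparity = stereo.compute(left, right).astype(float) / 16.0   # fixed-point output is scaled by 16
      vis = cv2.normalize(disparity, None, 0, 255, cv2.NORM_MINMAX).astype("uint8")
      cv2.imwrite("disparity.png", vis)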

  2. The ASTM E57 file format for 3D imaging data exchange

    NASA Astrophysics Data System (ADS)

    Huber, Daniel

    2011-03-01

    There is currently no general-purpose, open standard for storing data produced by three dimensional (3D) imaging systems, such as laser scanners. As a result, producers and consumers of such data rely on proprietary or ad-hoc formats to store and exchange data. There is a critical need in the 3D imaging industry for open standards that promote data interoperability among 3D imaging hardware and software systems. For the past three years, a group of volunteers has been working within the ASTM E57 Committee on 3D Imaging Systems to develop an open standard for 3D imaging system data exchange to meet this need. The E57 File Format for 3D Imaging Data Exchange (E57 format hereafter) is capable of storing point cloud data from laser scanners and other 3D imaging systems, as well as associated 2D imagery and core meta-data. This paper describes the motivation, requirements, design, and implementation of the E57 format, and highlights the technical concepts developed for the standard. We also compare the format with other proprietary or special purpose 3D imaging formats, such as the LAS format, and we discuss the open source library implementation designed to read, write, and validate E57 files.

  3. Adaptive and Quality 3D Meshing from Imaging Data Yongjie Zhang

    E-print Network

    Zhang, Yongjie "Jessica"

    Yongjie Zhang, Chandrajit Bajaj, Bong-Soo Sohn. [Figure 1: Adaptive tetrahedral meshes extracted from UNC Head (CT, 129×129×129).] ... an algorithm to extract adaptive and quality 3D meshes directly from volumetric imaging data, primarily ...

  4. Opti-Acoustic Stereo Imaging, System Calibration and 3-D Reconstruction

    Microsoft Academic Search

    Shahriar Negahdaripour; Hicham Sekkati; Hamed Pirsiavash

    2007-01-01

    Utilization of an acoustic camera for range measurements is a key advantage for 3-D shape recovery of underwater targets by opti-acoustic stereo imaging, where the associated epipolar geometry of optical and acoustic image correspondences can be described in terms of conic sections. In this paper, we propose methods for system calibration and 3-D scene reconstruction

  5. Efficient Reconstruction Techniques for Post-Rendering 3D Image Warping

    E-print Network

    North Carolina at Chapel Hill, University of

    Efficient Reconstruction Techniques for Post-Rendering 3D Image Warping. UNC CS Technical Report #TR98-011, March 21, 1998. William R. Mark, Gary Bishop, Department of Computer Science.

  6. Improved resolution 3D object reconstruction using computational integral imaging with time multiplexing

    Microsoft Academic Search

    Seung-Hyun Hong; Bahram Javidi

    2004-01-01

    In the computational three-dimensional (3D) volumetric reconstruction integral imaging (II) system, volume pixels of the scene are reconstructed by superimposing the inversely mapped elemental images through a computationally simulated optical reconstruction process according to ray optics. Close placement of a 3D object to the lenslet array in the pickup process may result in significant variation in intensity between the adjacent

  7. Application of smoothing algorithms to enhance quality of 3D reconstructed images in tissues and cultures 

    E-print Network

    Enloe, Lillian Charity

    1999-01-01

    ... this very efficient, effective tool to smooth 3D reconstructed images of test spheres and plant cells. In order to reconstruct a series of segmented images in 3D, it is necessary to use the Marching Cubes algorithm, which takes a specified scalar value...
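
    The Marching Cubes step referred to above can be sketched with scikit-image; the sphere volume below is a synthetic stand-in for the data described in this record.

      # Extract an isosurface mesh from a scalar volume with Marching Cubes.
      import numpy as np
      from skimage.measure import marching_cubes

      z, y, x = np.mgrid[-32:32, -32:32, -32:32]
      volume = (np.sqrt(x**2 + y**2 + z**2) < 20).astype(np.float32)  # synthetic test sphere

      verts, faces, normals, values = marching_cubes(volume, level=0.5)
      print(len(verts), "vertices,", len(faces), "triangular faces")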

  8. 3D kinematics of the tarsal joints from magnetic resonance images

    Microsoft Academic Search

    Bruce E. Hirsch; Jayaram K. Udupa; Enyi Okereke; Howard J. Hillstrom; Sorin Siegler; Stacie I. Ringleb; Carl W. Imhauser

    2001-01-01

    We have developed a method for analyzing motion at skeletal joints based on the 3D reconstruction of magnetic resonance (MR) image data. Since the information about each voxel in MR images includes its location in the scanner, it follows that information is available for each organ whose 3D surface is computed from a series of MR slices. In addition, there
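
    Given corresponding surface points from two poses, the relative bone motion reduces to a least-squares rigid transform; a generic Kabsch/Procrustes sketch is shown below. It illustrates that step only, not the authors' pipeline, and the point correspondences are assumed given.

      # Least-squares rigid transform (R, t) mapping point set P onto point set Q.
      import numpy as np

      def rigid_transform(P, Q):
          """P, Q: (N, 3) corresponding points; returns R (3x3), t (3,) with Q ~ P @ R.T + t."""
          Pc, Qc = P.mean(axis=0), Q.mean(axis=0)
          H = (P - Pc).T @ (Q - Qc)
          U, _, Vt = np.linalg.svd(H)
          R = Vt.T @ U.T
          if np.linalg.det(R) < 0:      # correct an improper rotation (reflection)
              Vt[-1, :] *= -1
              R = Vt.T @ U.T
          t = Qc - R @ Pc
          return R, t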

  9. Spatio-Temporal Data Fusion for 3D+T Image Reconstruction in Cerebral Angiography

    Microsoft Academic Search

    Andrew D. Copeland; Rami S. Mangoubi; Mukund N. Desai; Sanjoy K. Mitter; Adel M. Malek

    2010-01-01

    This paper provides a framework for generating high resolution time sequences of 3D images that show the dynamics of cerebral blood flow. These sequences have the potential to allow image feedback during medical procedures that facilitate the detection and observation of pathological abnormalities such as stenoses, aneurysms, and blood clots. The 3D time series is constructed by fusing a single

  10. Alternating Direction Method of Multipliers Applied to 3D Light Sheet Fluorescence Microscopy Image Deblurring Using

    E-print Network

    Weiss, Pierre

    ... an imaging technique producing large 3D data sets: Light Sheet Fluorescence Microscopy. This paper details ... Light Sheet Fluorescence Microscopy (LSFM) is a recent and very promising imaging technique ...

  11. Surface Reconstruction by Propagating 3D Stereo Data in Multiple 2D Images

    E-print Network

    Boyer, Edmond

    ... the integration of both 3D stereo data and 2D calibrated images. This is motivated by the fact that only robust ... The density insufficiency and the inevitable holes in the stereo data should be filled in by using information ...

  12. Opti-Acoustic Stereo Imaging: On System Calibration and 3-D Target Reconstruction

    Microsoft Academic Search

    Shahriar Negahdaripour; Hicham Sekkati; Hamed Pirsiavash

    2009-01-01

    Utilization of an acoustic camera for range measurements is a key advantage for 3-D shape recovery of underwater targets by opti-acoustic stereo imaging, where the associated epipolar geometry of optical and acoustic image correspondences can be described in terms of conic sections. In this paper, we propose methods for system calibration and 3-D scene reconstruction by maximum likelihood estimation from

  13. Pragmatic fully 3D image reconstruction for the MiCES mouse imaging PET scanner

    NASA Astrophysics Data System (ADS)

    Lee, Kisung; Kinahan, Paul E.; Fessler, Jeffrey A.; Miyaoka, Robert S.; Janes, Marie; Lewellen, Tom K.

    2004-10-01

    We present a pragmatic approach to image reconstruction for data from the micro crystal elements system (MiCES) fully 3D mouse imaging positron emission tomography (PET) scanner under construction at the University of Washington. Our approach is modelled on fully 3D image reconstruction used in clinical PET scanners, which is based on Fourier rebinning (FORE) followed by 2D iterative image reconstruction using ordered-subsets expectation-maximization (OSEM). The use of iterative methods allows modelling of physical effects (e.g., statistical noise, detector blurring, attenuation, etc), while FORE accelerates the reconstruction process by reducing the fully 3D data to a stacked set of independent 2D sinograms. Previous investigations have indicated that non-stationary detector point-spread response effects, which are typically ignored for clinical imaging, significantly impact image quality for the MiCES scanner geometry. To model the effect of non-stationary detector blurring (DB) in the FORE+OSEM(DB) algorithm, we have added a factorized system matrix to the ASPIRE reconstruction library. Initial results indicate that the proposed approach produces an improvement in resolution without an undue increase in noise and without a significant increase in the computational burden. The impact on task performance, however, remains to be evaluated.
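
    The 2D ordered-subsets EM stage applied to each rebinned sinogram can be written compactly. The sketch below uses a generic dense system matrix A as a stand-in; the factorized system matrix with detector-blur modelling described in the abstract is not reproduced.

      # One OSEM pass for 2D emission reconstruction (generic sketch).
      import numpy as np

      def osem_pass(x, A, y, n_subsets=8, eps=1e-12):
          """x: image estimate (n_pix,); A: (n_bins, n_pix) system matrix; y: sinogram (n_bins,)."""
          n_bins = A.shape[0]
          for s in range(n_subsets):
              rows = np.arange(s, n_bins, n_subsets)            # interleaved projection subset
              As, ys = A[rows], y[rows]
              ratio = ys / (As @ x + eps)                       # measured / forward-projected
              x = x * (As.T @ ratio) / (As.sum(axis=0) + eps)   # multiplicative EM update
          return x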

  14. A systematic approach for 2D-image to 3D-range registration in urban environments

    E-print Network

    Stamos, Ioannis

    ... photography. A systematic way for registering 3D range scans and 2D images is thus essential. ... (extracted from 2D images) with 3D directions (derived from a 3D range model). Then, a hypothesis ...

  15. Exposing digital image forgeries by 3D reconstruction technology

    Microsoft Academic Search

    Yongqiang Wang; Xiaojing Xu; Zhihui Li; Haizhen Liu; Zhigang Li; Wei Huang

    2009-01-01

    Digital images are easy to tamper with and edit due to the availability of powerful image processing and editing software. In particular, for forged images produced by photographing a picture of a scene, because no manipulation is made after capture, usual methods such as digital watermarks and statistical correlation technology can hardly detect the traces of image tampering. According to image forgery characteristics, a method,

  16. Polarimetric imaging of retinal disease by polarization sensitive SLO

    NASA Astrophysics Data System (ADS)

    Miura, Masahiro; Elsner, Ann E.; Iwasaki, Takuya; Goto, Hiroshi

    2015-03-01

    Polarimetry imaging is used to evaluate different features of macular disease. Polarimetry images were recorded using a commercially available polarization-sensitive scanning laser ophthalmoscope at 780 nm (PS-SLO, GDx-N). From the PS-SLO data sets, we computed the average reflectance image, the depolarized light image, and the ratio depolarized light image. The average reflectance image is the grand mean of all input polarization states. The depolarized light image is the minimum of the crossed channel. The ratio depolarized light image is the ratio between the average reflectance image and the depolarized light image, and was used to compensate for variation of brightness. Each polarimetry image was compared with the autofluorescence image at 800 nm (NIR-AF) and the autofluorescence image at 500 nm (SW-AF). We evaluated four eyes with geographic atrophy in age-related macular degeneration, one eye with retinal pigment epithelium hyperplasia, and two eyes with chronic central serous chorioretinopathy. Polarization analysis could selectively emphasize different features of the retina. Findings in the ratio depolarized light image had similarities and differences with NIR-AF images. Areas of hyper-AF in NIR-AF images showed high intensity in the ratio depolarized light image, representing melanin accumulation. Areas of hypo-AF in NIR-AF images showed low intensity in the ratio depolarized light image, representing melanin loss. Drusen were high-intensity areas in the ratio depolarized light image, but NIR-AF images were insensitive to the presence of drusen. Unlike NIR-AF images, SW-AF images showed completely different features from the ratio depolarized images. Polarization sensitive imaging is an effective tool for non-invasive assessment of macular disease.

  17. 3D imaging of cone photoreceptors over extended time periods using optical coherence tomography with adaptive optics

    NASA Astrophysics Data System (ADS)

    Kocaoglu, Omer P.; Lee, Sangyeol; Jonnal, Ravi S.; Wang, Qiang; Herde, Ashley E.; Besecker, Jason; Gao, Weihua; Miller, Donald T.

    2011-03-01

    Optical coherence tomography with adaptive optics (AO-OCT) is a highly sensitive, noninvasive method for 3D imaging of the microscopic retina. The purpose of this study is to advance AO-OCT technology by enabling repeated imaging of cone photoreceptors over extended periods of time (days). This sort of longitudinal imaging permits monitoring of 3D cone dynamics in both normal and diseased eyes, in particular the physiological processes of disc renewal and phagocytosis, which are disrupted by retinal diseases such as age related macular degeneration and retinitis pigmentosa. For this study, the existing AO-OCT system at Indiana underwent several major hardware and software improvements to optimize system performance for 4D cone imaging. First, ultrahigh speed imaging was realized using a Basler Sprint camera. Second, a light source with adjustable spectrum was realized by integration of an Integral laser (Femto Lasers, λc=800 nm, Δλ=160 nm) and spectral filters in the source arm. For cone imaging, we used a bandpass filter with λc=809 nm and Δλ=81 nm (2.6 μm nominal axial resolution in tissue, and 167 kHz A-line rate using 1,408 px), which reduced the impact of eye motion compared to previous AO-OCT implementations. Third, eye motion artifacts were further reduced by custom ImageJ plugins that registered (axially and laterally) the volume videos. In two subjects, cone photoreceptors were imaged and tracked over a ten day period and their reflectance and outer segment (OS) lengths measured. High-speed imaging and image registration/dewarping were found to reduce eye motion to a fraction of a cone width (1 μm root mean square). The pattern of reflections in the cones was found to change dramatically and occurred on a spatial scale well below the resolution of clinical instruments. Normalized reflectance of connecting cilia (CC) and OS posterior tip (PT) of an exemplary cone was 54±4, 47±4, 48±6, 50±5, 56±1% and 46±4, 53±4, 52±6, 50±5, 44±1% for days #1,3,6,8,10 respectively. OS length of the same cone was 28.9, 26.4, 26.4, 30.6, and 28.1 μm for days #1,3,6,8,10 respectively. It is plausible these changes are an optical correlate of the natural process of OS renewal and shedding.
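
    The quoted nominal axial resolution follows from the standard Gaussian-spectrum relation, axial resolution = (2 ln 2 / pi) * (center wavelength)^2 / bandwidth, divided by the tissue refractive index. The short check below uses the bandpass values from the abstract; the index of 1.38 is an assumed value.

      # Nominal OCT axial resolution from center wavelength and bandwidth.
      import math

      lam_c = 809e-9     # center wavelength [m] (from the abstract)
      dlam = 81e-9       # bandwidth [m] (from the abstract)
      n_tissue = 1.38    # assumed refractive index of retinal tissue

      dz_air = (2 * math.log(2) / math.pi) * lam_c**2 / dlam
      print(f"{dz_air * 1e6:.2f} um in air, {dz_air / n_tissue * 1e6:.2f} um in tissue")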

  18. Constructing Complex 3D Biological Environments from Medical Imaging Using

    E-print Network

    Romano, Daniela

    ... information about the shape, size, and path followed by the mammalian oviduct, called the fallopian tube in humans, with a grounding in reality. ... processed to identify the individual cross sections and determine the 3D path that the tube follows through ...

  19. Statistical skull models from 3D X-ray images

    E-print Network

    Berar, M; Bailly, G; Payan, Y; Berar, Maxime; Desvignes, Michel; Payan, Yohan

    2006-01-01

    We present two statistical models of the skull and mandible built upon an elastic registration method for 3D meshes. The aim of this work is to relate degrees of freedom of skull anatomy, as static relations are of main interest for anthropology and legal medicine. Statistical models can effectively provide reconstructions together with statistical precision. In our applications, patient-specific meshes of the skull and the mandible are high-density meshes, extracted from 3D CT scans. All our patient-specific meshes are registered in a subject-shared reference system using our 3D-to-3D elastic matching algorithm. Registration is based upon the minimization of a distance between the high density mesh and a shared low density mesh, defined on the vertices, in a multi-resolution approach. A Principal Component Analysis is performed on the normalised registered data to build a statistical linear model of the skull and mandible shape variation. The accuracy of the reconstruction is under the millimetre in the shape...

  20. Retinal, anterior segment and full eye imaging using ultrahigh speed swept source OCT with vertical-cavity surface emitting lasers

    PubMed Central

    Grulkowski, Ireneusz; Liu, Jonathan J.; Potsaid, Benjamin; Jayaraman, Vijaysekhar; Lu, Chen D.; Jiang, James; Cable, Alex E.; Duker, Jay S.; Fujimoto, James G.

    2012-01-01

    We demonstrate swept source OCT utilizing vertical-cavity surface emitting laser (VCSEL) technology for in vivo high speed retinal, anterior segment and full eye imaging. The MEMS tunable VCSEL enables long coherence length, adjustable spectral sweep range and adjustable high sweeping rate (50–580 kHz axial scan rate). These features enable integration of multiple ophthalmic applications into one instrument. The operating modes of the device include: ultrahigh speed, high resolution retinal imaging (up to 580 kHz); high speed, long depth range anterior segment imaging (100 kHz) and ultralong range full eye imaging (50 kHz). High speed imaging enables wide-field retinal scanning, while increased light penetration at 1060 nm enables visualization of choroidal vasculature. Comprehensive volumetric data sets of the anterior segment from the cornea to posterior crystalline lens surface are also shown. The adjustable VCSEL sweep range and rate make it possible to achieve an extremely long imaging depth range of ~50 mm, and to demonstrate the first in vivo 3D OCT imaging spanning the entire eye for non-contact measurement of intraocular distances including axial eye length. Swept source OCT with VCSEL technology may be attractive for next generation integrated ophthalmic OCT instruments. PMID:23162712

  1. 100-inch 3D real-image rear-projection display system based on Fresnel lens

    NASA Astrophysics Data System (ADS)

    Jang, Sun-Joo; Kim, Seung-Chul; Koo, Jung-Sik; Park, Jung-Il; Kim, Eun-Soo

    2004-11-01

    In this paper, as an approach to a wide 3D real image display system without special glasses, a 100" Fresnel lens-based 3D real image rear-projection display system is implemented, with a physical size of 2800×2800×1600 mm in length, width and depth, respectively. In this display system, the conventional 2D video image is projected into the air through some projection optics and a pair of Fresnel lenses and, as a result, it can form a floating video image having a real depth. In experiments with the test video images, the floated 3D video images with some depth have been realistically viewed, in which the forward depth of the floated 3D image from the display screen is found to be 35~47 inches and the viewing angle to be 60 degrees, respectively. This feasibility test of the prototype 100" Fresnel lens-based 3D real image rear-projection display system suggests a possibility of practical applications to 3D advertisements, 3D animations, 3D games and so on.

  2. Quality assessment of stereoscopic 3D image compression by binocular integration behaviors.

    PubMed

    Lin, Yu-Hsun; Wu, Ja-Ling

    2014-04-01

    The objective approaches of 3D image quality assessment play a key role for the development of compression standards and various 3D multimedia applications. The quality assessment of 3D images faces more new challenges, such as asymmetric stereo compression, depth perception, and virtual view synthesis, than its 2D counterparts. In addition, the widely used 2D image quality metrics (e.g., PSNR and SSIM) cannot be directly applied to deal with these newly introduced challenges. This statement can be verified by the low correlation between the computed objective measures and the subjectively measured mean opinion scores (MOSs), when 3D images are the tested targets. In order to meet these newly introduced challenges, in this paper, besides traditional 2D image metrics, the binocular integration behaviors, namely the binocular combination and the binocular frequency integration, are utilized as the bases for measuring the quality of stereoscopic 3D images. The effectiveness of the proposed metrics is verified by conducting subjective evaluations on publicly available stereoscopic image databases. Experimental results show that significant consistency could be reached between the measured MOS and the proposed metrics, in which the correlation coefficient between them can go up to 0.88. Furthermore, we found that the proposed metrics can also address the quality assessment of the synthesized color-plus-depth 3D images well. Therefore, it is our belief that the binocular integration behaviors are important factors in the development of objective quality assessment for 3D images. PMID:24569441

  3. Human conjunctival microvasculature assessed with a retinal function imager (RFI)

    PubMed Central

    Jiang, Hong; Ye, Yufeng; DeBuc, Delia Cabrera; Lam, Byron L; Rundek, Tatjana; Tao, Aizhu; Shao, Yilei; Wang, Jianhua

    2012-01-01

    The conjunctival and cerebral vasculatures share similar embryological origins, with similar structural and physiological characteristics. Tracking the conjunctival microvasculature may provide useful information for predicting the onset, progression and prognosis of both systemic and central nervous system (CNS) vascular diseases. The bulbar conjunctival vasculature was imaged using a retinal function imager (RFI, Optical Imaging Ltd, Rehovot, Israel). Hemoglobin in red blood cells was used as an intrinsic motion-contrast agent in the generation of detailed noninvasive capillary-perfusion maps (nCPMs) and the calculation of the blood flow velocity. Five healthy subjects were imaged under normal conditions and again under the stress condition of wearing a contact lens. The retina was also imaged in one eye of one subject for comparison. The nCPMs showed the conjunctival microvasculature in exquisite detail, which appeared as clear as the retinal nCPMs. The blood flow velocities in the temporal conjunctival microvasculature were 0.86 ± 0.08 (mean ± SD, mm/s) for the bare eye and 0.99 ± 0.11 mm/s with contact lens wear. It is feasible to use RFI for imaging the conjunctival vasculature. PMID:23084966

  4. Human conjunctival microvasculature assessed with a retinal function imager (RFI).

    PubMed

    Jiang, Hong; Ye, Yufeng; DeBuc, Delia Cabrera; Lam, Byron L; Rundek, Tatjana; Tao, Aizhu; Shao, Yilei; Wang, Jianhua

    2013-01-01

    The conjunctival and cerebral vasculatures share similar embryological origins, with similar structural and physiological characteristics. Tracking the conjunctival microvasculature may provide useful information for predicting the onset, progression and prognosis of both systemic and central nervous system (CNS) vascular diseases. The bulbar conjunctival vasculature was imaged using a retinal function imager (RFI, Optical Imaging Ltd, Rehovot, Israel). Hemoglobin in red blood cells was used as an intrinsic motion-contrast agent in the generation of detailed noninvasive capillary-perfusion maps (nCPMs) and the calculation of the blood flow velocity. Five healthy subjects were imaged under normal conditions and again under the stress condition of wearing a contact lens. The retina was also imaged in one eye of one subject for comparison. The nCPMs showed the conjunctival microvasculature in exquisite detail, which appeared as clear as the retinal nCPMs. The blood flow velocities in the temporal conjunctival microvasculature were 0.86±0.08 (mean±SD, mm/s) for the bare eye and 0.99±0.11 mm/s with contact lens wear. It is feasible to use RFI for imaging the conjunctival vasculature. PMID:23084966

  5. Movement Analysis Of Digital 3D Images Derived From Serial Section Images

    NASA Astrophysics Data System (ADS)

    Tascini, Guido

    1984-08-01

    With the growth of digital and electronic imaging techniques in medicine, 3-D data are particularly useful for diagnosis and therapy. The representation of 3-D objects adopted uses the 'octree' data structure and is derived from serial section 2-D images, as in CT. The slice images, pre-processed with segmentation techniques and then processed to obtain quadtrees, allow a simple technique to reconstruct the 3-D representation with octrees. For the movement analysis and generation, a syntax-directed tree transducer is adopted. The time-varying images are represented by a sequence of 8-trees and the matching process is performed by a parser. Many rules adopted for motion analysis and generation are described. Keywords: quadtree, octree, serial section, motion primitive, tree translation, parsing.
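
    A minimal octree over a binary volume assembled from serial sections (the 3-D counterpart of the quadtree step described above) might be built as follows; this is a generic sketch, not the original system, and it assumes a cubic volume whose side is a power of two.

      # Recursive octree: a node is a leaf when its block is uniformly 0 or 1,
      # otherwise it splits into eight octants.
      import numpy as np

      def build_octree(vol):
          if vol.min() == vol.max():            # homogeneous block -> leaf
              return int(vol.flat[0])
          half = vol.shape[0] // 2
          return [build_octree(vol[z:z + half, y:y + half, x:x + half])
                  for z in (0, half) for y in (0, half) for x in (0, half)]

      slices = [np.zeros((8, 8), dtype=np.uint8) for _ in range(8)]  # placeholder serial sections
      slices[3][2:6, 2:6] = 1
      octree = build_octree(np.stack(slices, axis=0))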

  6. Semiautomatic detection and evaluation of autofluorescent areas in retinal images.

    PubMed

    Kolár, Radim; Jan, Jirí; Laemmer, Robert; Jirík, Radovan

    2007-01-01

    A semiautomatic approach to the detection and evaluation of the autofluorescent zones in retinal images, recognized as having a diagnostic value, has been designed based on fusing information from two Heidelberg Retina Angiograph imaging modalities - the autofluorescent and infrared modes. The procedure, initiated by automatic preprocessing and region-of-interest determination, continues with manually initiated segmentation via constrained region growing and ends with evaluating the size and geometrical coordinates of the AF regions with respect to the centre of the optic disc. Results are compared with those obtained by experienced ophthalmologists. PMID:18002708
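
    The manually seeded, constrained region-growing step can be sketched as a flood fill that only accepts neighbours whose intensity stays close to the seed value; this is a generic illustration, not the authors' implementation, and the tolerance is an arbitrary example value.

      # Constrained region growing from a user-supplied seed (4-connected).
      import numpy as np
      from collections import deque

      def region_grow(img, seed, tol=10):
          h, w = img.shape
          seed_val = float(img[seed])
          mask = np.zeros((h, w), dtype=bool)
          mask[seed] = True
          queue = deque([seed])
          while queue:
              y, x = queue.popleft()
              for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                  if 0 <= ny < h and 0 <= nx < w and not mask[ny, nx] \
                          and abs(float(img[ny, nx]) - seed_val) <= tol:
                      mask[ny, nx] = True
                      queue.append((ny, nx))
          return mask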

  7. Raman molecular chemical imaging: 3D Raman using deconvolution

    Microsoft Academic Search

    John S. Maier; Patrick J. Treado

    2004-01-01

    Chemical imaging is a powerful technique combining molecular spectroscopy and digital imaging for rapid, non-invasive and reagentless analysis of materials, including biological cells and tissues. Raman chemical imaging is suited to the characterization of molecular composition and structure of biomaterials at submicron spatial resolution (< 250 nm). As a result, Raman imaging has potential as a routine tool for the

  8. AUTOMATIC OPTIC DISK DETECTION FROM LOW CONTRAST RETINAL IMAGES OF ROP INFANT USING GVF SNAKE

    Microsoft Academic Search

    Viranee Thongnuch; Bunyarit Uyyanonvara

    2007-01-01

    Reliable and efficient optic disk localization and segmentation are important tasks in automated retinal screening. General-purpose edge detection algorithms often fail to segment the optic disk (OD) due to fuzzy boundaries, inconsistent image contrast or missing edge features, especially in infants' retinal images where the image acquisition process has to be very quick and in low light conditions. This paper

  9. Disease-Oriented Evaluation of Dual-Bootstrap Retinal Image Registration

    E-print Network

    " registration. In pairwise registration, the new Dual-Bootstrap Iterative Closest Point (DB-ICP) algorithm (Fig-oriented evaluation of two re- cent retinal image registration algorithms, one for aligning pairs of retinal images-Bootstrap ICP algorithm, worked nearly as well, successfully aligning 99.5% of the image pairs having

  10. Image quality of a cone beam O-arm 3D imaging system

    NASA Astrophysics Data System (ADS)

    Zhang, Jie; Weir, Victor; Lin, Jingying; Hsiung, Hsiang; Ritenour, E. Russell

    2009-02-01

    The O-arm is a cone beam imaging system designed primarily to support orthopedic surgery and is also used for image-guided and vascular surgery. Using a gantry that can be opened or closed, the O-arm can function as a 2-dimensional (2D) fluoroscopy device or collect 3-dimensional (3D) volumetric imaging data like a CT system. Clinical applications of the O-arm in spine surgical procedures, assessment of pedicle screw position, and kyphoplasty procedures show that the O-arm 3D mode provides enhanced imaging information compared to radiographs or fluoroscopy alone. In this study, the image quality of an O-arm system was quantitatively evaluated. A 20 cm diameter CATPHAN 424 phantom was scanned using the pre-programmed head protocols: small/medium (120 kVp, 100 mAs), large (120 kVp, 128 mAs), and extra-large (120 kVp, 160 mAs) in 3D mode. High resolution reconstruction mode (512×512×0.83 mm) was used to reconstruct images for the analysis of low and high contrast resolution, and noise power spectrum. MTF was measured using the point spread function. The results show that the O-arm image is uniform but with a noise pattern which cannot be removed by simply increasing the mAs. The high contrast resolution of the O-arm system was approximately 9 lp/cm. The system has a 10% MTF at 0.45 mm. The low-contrast resolution could not be determined due to the noise pattern. For surgery where locations of a structure are emphasized over a survey of all image details, the image quality of the O-arm is well accepted clinically.
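
    Measuring the MTF from a point spread function, as mentioned above, amounts to taking the magnitude of its Fourier transform and normalizing the zero-frequency value to one. The 1D sketch below uses a synthetic Gaussian PSF and an assumed pixel pitch as stand-ins for the measured data.

      # MTF from a (synthetic) point spread function.
      import numpy as np

      pixel_mm = 0.2                                   # assumed sampling pitch [mm]
      x = (np.arange(256) - 128) * pixel_mm
      psf = np.exp(-x**2 / (2 * 0.3**2))               # synthetic PSF, sigma = 0.3 mm

      mtf = np.abs(np.fft.rfft(psf))
      mtf /= mtf[0]                                    # normalize to 1 at zero frequency
      freqs = np.fft.rfftfreq(len(psf), d=pixel_mm)    # cycles/mm (line pairs per mm)
      f10 = freqs[np.argmax(mtf < 0.1)]                # first frequency with MTF below 10%
      print(f"10% MTF at about {f10:.2f} lp/mm")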

  11. Imaging of human differentiated 3D neural aggregates using light sheet fluorescence microscopy

    PubMed Central

    Gualda, Emilio J.; Simão, Daniel; Pinto, Catarina; Alves, Paula M.; Brito, Catarina

    2014-01-01

    The development of three dimensional (3D) cell cultures represents a big step towards the better understanding of cell behavior and disease in a more natural-like environment, providing not only single but multiple cell type interactions in a complex 3D matrix, highly resembling physiological conditions. Light sheet fluorescence microscopy (LSFM) is becoming an excellent tool for fast imaging of such 3D biological structures. We demonstrate the potential of this technique for the imaging of human differentiated 3D neural aggregates in fixed and live samples, namely calcium imaging and cell death processes, showing the power of this imaging modality compared with traditional microscopy. The combination of light sheet microscopy and 3D neural cultures will open the door to more challenging experiments involving drug testing at large scale as well as a better understanding of relevant biological processes in a more realistic environment. PMID:25161607

  12. Motion compensated frequency modulated continuous wave 3D coherent imaging ladar with scannerless architecture.

    PubMed

    Krause, Brian W; Tiemann, Bruce G; Gatt, Philip

    2012-12-20

    A principal difficulty of long dwell coherent imaging ladar is its extreme sensitivity to target or platform motion. This paper describes a motion compensated frequency modulated continuous wave 3D coherent imaging ladar method that overcomes this motion sensitivity, making it possible to work with nonstatic targets such as human faces, as well as imaging of targets through refractive turbulence. Key features of this method include scannerless imaging and high range resolution. The reduced motion sensitivity is shown with mathematical analysis and demonstration 3D images. Images of static and dynamic targets are provided demonstrating up to 600×800 pixel imaging with millimeter range resolution. PMID:23262614

  13. Coherent 3-D echo detection for ultrasonic imaging

    Microsoft Academic Search

    Bernard Chalmond; François Coldefy; Etienne Goubet; Blandine Lavayssière

    2003-01-01

    The purpose of the present paper is to present an ultrasonic processing set-up by which three-dimensional (3-D) echo location can be computed more efficiently than by other one-dimensional (1-D) methods. This set-up contains three successive tasks. The first one deals with a model for representing echoes. This model is based on a generic wavelet, which is a cosine function with

  14. Deformable M-Reps for 3D Medical Image Segmentation

    Microsoft Academic Search

    Stephen M. Pizer; P. Thomas Fletcher; Sarang C. Joshi; Andrew Thall; James Z. Chen; Yonatan Fridman; Daniel S. Fritsch; A. Graham Gash; John M. Glotzer; Michael R. Jiroutek; Conglin Lu; Keith E. Muller; Gregg Tracton; Paul A. Yushkevich; Edward L. Chaney

    2003-01-01

    M-reps (formerly called DSLs) are a multiscale medial means for modeling and rendering 3D solid geometry. They are particularly well suited to model anatomic objects and in particular to capture prior geometric information effectively in deformable models segmentation approaches. The representation is based on figural models, which define objects at coarse scale by a hierarchy of figures—each figure generally a

  15. 3D Lunar Terrain Reconstruction from Apollo Images

    Microsoft Academic Search

    Michael J. Broxton; Ara V. Nefian; Zachary Moratto; Taemin Kim; Michael Lundy; Aleksandr V. Segal

    2009-01-01

    Generating accurate three dimensional planetary models is becoming increasingly important as NASA plans manned missions to return to the Moon in the next decade. This paper describes a 3D surface reconstruction system called the Ames Stereo Pipeline that is designed to produce such models automatically by processing orbital stereo imagery. We discuss two important core aspects of this system: (1)

  16. 3D behaviour of Frieden filters in confocal imaging.

    PubMed

    Boyer, G

    2003-01-01

    The three-dimensional (3D) focal behaviour of the super-resolving Frieden filters is investigated numerically. It is shown that, as the central bright spot is sharpened, super-giant secondary maxima are formed on the optic axis. These lobes are much higher than the well-known side-lobes inherent to spatial filtering that surround the restricted, utilisable field, whose characteristics in the meridional plane are depicted for various values of the space-bandwidth parameter and for various numbers of terms that compose the window function. The two-term filter is found to present, for the first time to my knowledge, some axial apodizing properties. To be compatible with practical realisation, the use of this class of filters in a single- and two-photon confocally scanned system is discussed in terms of 3D super-resolution with an intentionally limited light-power loss. It is shown that these filters match particularly well with recently designed axial apodizers for the transmission-mode confocal scanning microscope and provide a 3D intensity point-spread volume reduction of variable amount as high as 37 percent. The filtering process is shown to vary significantly with the mode of operation. PMID:12932770

  17. A dual-modal retinal imaging system with adaptive optics.

    PubMed

    Meadway, Alexander; Girkin, Christopher A; Zhang, Yuhua

    2013-12-01

    An adaptive optics scanning laser ophthalmoscope (AO-SLO) is adapted to provide optical coherence tomography (OCT) imaging. The AO-SLO function is unchanged. The system uses the same light source, scanning optics, and adaptive optics in both imaging modes. The result is a dual-modal system that can acquire retinal images in both en face and cross-section planes at the single cell level. A new spectral shaping method is developed to reduce the large sidelobes in the coherence profile of the OCT imaging when a non-ideal source is used with a minimal introduction of noise. The technique uses a combination of two existing digital techniques. The thickness and position of the traditionally named inner segment/outer segment junction are measured from individual photoreceptors. In-vivo images of healthy and diseased human retinas are demonstrated. PMID:24514529

  18. A dual-modal retinal imaging system with adaptive optics

    PubMed Central

    Meadway, Alexander; Girkin, Christopher A.; Zhang, Yuhua

    2013-01-01

    An adaptive optics scanning laser ophthalmoscope (AO-SLO) is adapted to provide optical coherence tomography (OCT) imaging. The AO-SLO function is unchanged. The system uses the same light source, scanning optics, and adaptive optics in both imaging modes. The result is a dual-modal system that can acquire retinal images in both en face and cross-section planes at the single cell level. A new spectral shaping method is developed to reduce the large sidelobes in the coherence profile of the OCT imaging when a non-ideal source is used with a minimal introduction of noise. The technique uses a combination of two existing digital techniques. The thickness and position of the traditionally named inner segment/outer segment junction are measured from individual photoreceptors. In-vivo images of healthy and diseased human retinas are demonstrated. PMID:24514529
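
    One common way to reduce coherence-profile sidelobes from a non-ideal source spectrum is to apodize the measured spectral fringes before the inverse Fourier transform. The sketch below shows that generic windowing idea only; it is not the specific two-technique spectral shaping method developed by the authors.

      # Generic spectral apodization for Fourier-domain OCT: window the spectral
      # interferogram, then inverse-FFT to obtain the depth profile (A-scan).
      import numpy as np

      fringes = np.random.rand(2048)                        # placeholder spectral interferogram
      window = np.hanning(len(fringes))                     # apodization window
      ascan_raw = np.abs(np.fft.ifft(fringes))
      ascan_shaped = np.abs(np.fft.ifft(fringes * window))  # lower sidelobes, slightly broader peak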

  19. 3-D Discrete Analytical Ridgelet Transform

    E-print Network

    Paris-Sud XI, Université de

    ... an implementation of the 3-D ridgelet transform: the 3-D Discrete Analytical Ridgelet Transform (3-D DART). This transform uses the Fourier strategy for the computation of the associated 3-D discrete Radon transform ...

  20. Fast multicolor 3D imaging using aberration-corrected multifocus microscopy.

    PubMed

    Abrahamsson, Sara; Chen, Jiji; Hajj, Bassam; Stallinga, Sjoerd; Katsov, Alexander Y; Wisniewski, Jan; Mizuguchi, Gaku; Soule, Pierre; Mueller, Florian; Dugast Darzacq, Claire; Darzacq, Xavier; Wu, Carl; Bargmann, Cornelia I; Agard, David A; Dahan, Maxime; Gustafsson, Mats G L

    2013-01-01

    Conventional acquisition of three-dimensional (3D) microscopy data requires sequential z scanning and is often too slow to capture biological events. We report an aberration-corrected multifocus microscopy method capable of producing an instant focal stack of nine 2D images. Appended to an epifluorescence microscope, the multifocus system enables high-resolution 3D imaging in multiple colors with single-molecule sensitivity, at speeds limited by the camera readout time of a single image. PMID:23223154

  1. Non-contrast Enhanced MR Venography Using 3D Fresh Blood Imaging (FBI): Initial Experience

    Microsoft Academic Search

    Kenichi Yokoyama; Toshiaki Nitatori; Sayuki Inaoka; Taro Takahara; Junichi Hachiya

    Objective: This study examined the efficacy of 3D-fresh blood imaging (FBI) in patients with venous disease in the iliac region to lower extremity. Materials and Methods: Fourteen patients with venous disease were examined (8 deep venous thrombosis (DVT) and 6 varix) by 3D-FBI and 2D-TOF MRA. All FBI images and 2D-TOF images were evaluated in terms of visualization of the

  2. Facial feature detection and face recognition from 2D and 3D images

    Microsoft Academic Search

    Yingjie Wang; Chin-seng Chua; Yeong-khing Ho

    2002-01-01

    This paper presents a feature-based face recognition system based on both 3D range data as well as 2D gray-level facial images. Feature points are described by Gabor filter responses in the 2D domain and Point Signature in the 3D domain. Extracted shape features from 3D feature points and texture features from 2D feature points are first projected into their own

  3. Design of a 3-D Infrared Imaging System Using Structured Light

    Microsoft Academic Search

    Rongqian Yang; Yazhu Chen

    2011-01-01

    Two-dimensional infrared thermography (IRT) is widely used in various domains and can be extended to more applications if the spatial information of the temperature distribution is provided to form three-dimensional (3-D) thermography. A 3-D infrared (IR) imaging system based on structured light is designed to acquire the 3-D surface temperature distribution. The projector, color camera, and IR camera must

  4. 3D-imaging laser scanner for close-range metrology

    Microsoft Academic Search

    Aloysius Wehr

    1999-01-01

    This paper presents a 3D-Imaging Laser Scanner (3D-ILS) for close range survey for up to 10 meters. The 3D-ILS is eyesafe and works with a visible semiconductor laser diode transmitting at 670 nm. The large ranging dynamic is achieved by measuring the phase difference between the transmitted and received intensity modulated signal. Due to the high modulation frequency of 314

  5. Determining 3D flow fields via multi-camera light field imaging.

    PubMed

    Truscott, Tadd T; Belden, Jesse; Nielson, Joseph R; Daily, David J; Thomson, Scott L

    2013-01-01

    In the field of fluid mechanics, the resolution of computational schemes has outpaced experimental methods and widened the gap between predicted and observed phenomena in fluid flows. Thus, a need exists for an accessible method capable of resolving three-dimensional (3D) data sets for a range of problems. We present a novel technique for performing quantitative 3D imaging of many types of flow fields. The 3D technique enables investigation of complicated velocity fields and bubbly flows. Measurements of these types present a variety of challenges to the instrument. For instance, optically dense bubbly multiphase flows cannot be readily imaged by traditional, non-invasive flow measurement techniques due to the bubbles occluding optical access to the interior regions of the volume of interest. By using Light Field Imaging we are able to reparameterize images captured by an array of cameras to reconstruct a 3D volumetric map for every time instance, despite partial occlusions in the volume. The technique makes use of an algorithm known as synthetic aperture (SA) refocusing, whereby a 3D focal stack is generated by combining images from several cameras post-capture (1). Light Field Imaging allows for the capture of angular as well as spatial information about the light rays, and hence enables 3D scene reconstruction. Quantitative information can then be extracted from the 3D reconstructions using a variety of processing algorithms. In particular, we have developed measurement methods based on Light Field Imaging for performing 3D particle image velocimetry (PIV), extracting bubbles in a 3D field and tracking the boundary of a flickering flame. We present the fundamentals of the Light Field Imaging methodology in the context of our setup for performing 3DPIV of the airflow passing over a set of synthetic vocal folds, and show representative results from application of the technique to a bubble-entraining plunging jet. PMID:23486112
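
    The synthetic aperture refocusing step can be approximated as shift-and-average over the camera array: each camera image is shifted in proportion to its baseline and the chosen focal plane, then the shifted images are averaged so that objects at that depth reinforce while occluders blur out. The sketch below is a simplified planar-shift version, not the full calibrated reprojection used in practice.

      # Simplified synthetic-aperture (SA) refocusing by shift-and-average.
      import numpy as np

      def refocus(images, offsets_px, alpha):
          """images: list of (H, W) arrays; offsets_px: per-camera (dy, dx) pixel offsets
          at a reference depth; alpha: focal-plane parameter swept to build a focal stack."""
          acc = np.zeros_like(images[0], dtype=float)
          for img, (dy, dx) in zip(images, offsets_px):
              shift = (int(round(alpha * dy)), int(round(alpha * dx)))
              acc += np.roll(img.astype(float), shift, axis=(0, 1))
          return acc / len(images)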

  6. Recovering 3D Shape and Motion from Image Streams using Non-Linear Least Squares

    Microsoft Academic Search

    Richard Szeliski; Sing Bing Kang

    1993-01-01

    The simultaneous recovery of 3D shape and motion from image sequences is one of the more difficult problems in computer vision. Classical approaches to the problem rely on using algebraic techniques to solve for these unknowns given two or more images. More recently, a batch analysis of image streams (the temporal tracks of distinguishable image features) under orthography has resulted in highly accurate reconstructions.

  7. Cultural Relic 3D Reconstruction from Digital Images and Laser Point Clouds

    Microsoft Academic Search

    Jie Liu; Jianqing Zhang; Jia Xu

    2008-01-01

    This paper proposes a method to combine digital images and Laser point clouds to reconstruct the 3D model of the archaic glockenspiel. All the stations of the Laser point clouds are registered together according to the ICP algorithm. Then image matching is used to register the high resolution digital images and the Laser synchronous images to gain the corresponding texture

  8. Fully 3D Uniform Resolution Transmission microPET Image Reconstruction

    E-print Network

    Leahy, Richard M.

    Bing Bai, Patrick Chow. ... for use in localizing structures and image coregistration. The resolution of MAP images reconstructed ... are used here to achieve uniform resolution throughout the transmission image. We also investigate ...

  9. Inspection of High Magnification Fracture Surfaces using 3D from Stereo Images of Large Chamber SEM

    E-print Network

    Abidi, Mongi A.

    ... from affine stereo images. The images are captured with a Large Chamber Scanning Electron Microscope (LC-SEM). ... the stereo images shows the validity of our reconstruction. Furthermore, spectral information at two energy ...

  10. First results from the 3D near-infrared imaging array spectrometer

    Microsoft Academic Search

    Niranjan A. Thatte; L. Weitzel; M. Cameron; Lowell E. Tacconi-Garman; H. Kroker; Alfred Krabbe; R. Genzel

    1994-01-01

    We present the first astronomical results from the new 3D near IR imaging array spectrometer. These include K band (1.95 to 2.45 micrometers) spectra and images of nearby starburst galaxies and active galactic nuclei with a spectral resolution of 1000. A special image slicer allows simultaneous spectra and imaging of an 8 arc second field of view. The background

  11. A new approach towards image based virtual 3D city modeling by using close range photogrammetry

    NASA Astrophysics Data System (ADS)

    Singh, S. P.; Jain, K.; Mandla, V. R.

    2014-05-01

    A 3D city model is a digital representation of the Earth's surface and its related objects such as buildings, trees, vegetation, and man-made features belonging to an urban area. The demand for 3D city modeling is increasing day by day for various engineering and non-engineering applications. Generally, three main image based approaches are used for virtual 3D city model generation. In the first approach, researchers use sketch based modeling, the second method is procedural grammar based modeling, and the third approach is close range photogrammetry based modeling. The literature shows that, to date, there is no complete solution available to create a complete 3D city model by using images, and these image based methods also have limitations. This paper gives a new approach towards image based virtual 3D city modeling by using close range photogrammetry. This approach is divided into three sections. First is the data acquisition process, second is 3D data processing, and third is the data combination process. In the data acquisition process, a multi-camera setup was developed and used for video recording of an area. Image frames were created from the video data, and the minimum required and suitable video image frames were selected for 3D processing. In the second section, based on close range photogrammetric principles and computer vision techniques, a 3D model of the area was created. In the third section, this 3D model was exported for adding and merging of other pieces of the large area. Scaling and alignment of the 3D model were done. After applying texturing and rendering to this model, a final photo-realistic textured 3D model was created. This 3D model was transferred into a walk-through model or movie form. Most of the processing steps are automatic, so this method is cost effective and less laborious, and the accuracy of the model is good. For this research work, the study area is the campus of the Department of Civil Engineering, Indian Institute of Technology, Roorkee. This campus acts as a prototype for a city. Aerial photography is restricted in many countries and high resolution satellite images are costly. In this study, the proposed method is based on only simple video recording of the area; thus the proposed method is suitable for 3D city modeling. A photo-realistic, scalable, geo-referenced virtual 3D city model is useful for various kinds of applications such as planning, navigation, tourism, disaster management, transportation, municipality, urban and environmental management, and the real-estate industry. Thus this study will provide a good roadmap for the geomatics community to create photo-realistic virtual 3D city models by using close range photogrammetry.
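
    The frame-extraction stage of the data acquisition step can be sketched with OpenCV; the file name and the sampling interval below are placeholders.

      # Extract every Nth frame from a recorded video for photogrammetric processing.
      import cv2

      cap = cv2.VideoCapture("survey_video.mp4")   # hypothetical recording
      step, index, saved = 15, 0, 0
      while True:
          ok, frame = cap.read()
          if not ok:
              break
          if index % step == 0:
              cv2.imwrite(f"frame_{saved:04d}.png", frame)
              saved += 1
          index += 1
      cap.release()
      print(f"saved {saved} frames")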

  12. Reconstruction of 3d Digital Image of Weepingforsythia Pollen

    NASA Astrophysics Data System (ADS)

    Liu, Dongwu; Chen, Zhiwei; Xu, Hongzhi; Liu, Wenqi; Wang, Lina

    Confocal microscopy, which is a major advance upon normal light microscopy, has been used in a number of scientific fields. With confocal microscopy techniques, cells and tissues can be visualized at depth and three-dimensional images created. Compared with conventional microscopes, a confocal microscope improves image resolution by eliminating out-of-focus light. Moreover, a confocal microscope has a higher level of sensitivity due to highly sensitive light detectors and the ability to accumulate images captured over time. In the present study, a series of Weeping Forsythia pollen digital images (35 images in total) was acquired with a confocal microscope, and the three-dimensional digital image of the pollen was reconstructed. Our results indicate that it is straightforward to analyze the three-dimensional digital image of the pollen with a confocal microscope and the probe acridine orange (AO).
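    To make the reconstruction step concrete, the sketch below assembles a series of confocal optical sections into a 3-D array and computes a maximum-intensity projection for a quick look at the volume. It is not the authors' workflow; the file pattern is a placeholder.

    ```python
    # Minimal sketch, assuming the 35 optical sections are stored as equally
    # spaced image files; the glob pattern is hypothetical.
    import glob
    import numpy as np
    from imageio.v2 import imread

    def load_confocal_stack(pattern="pollen_slice_*.tif"):
        slices = [imread(path) for path in sorted(glob.glob(pattern))]
        return np.stack(slices, axis=0)        # shape: (n_slices, height, width)

    def max_intensity_projection(volume, axis=0):
        # collapse the stack along one axis for a simple 2-D view of the 3-D data
        return volume.max(axis=axis)
    ```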

  13. Holographic imaging of 3D objects on dichromated polymer systems

    NASA Astrophysics Data System (ADS)

    Lemelin, Guylain; Jourdain, Anne; Manivannan, Gurusamy; Lessard, Roger A.

    1996-01-01

    Conventional volume transmission holograms of a 3D scene were recorded on dichromated poly(acrylic acid) (DCPAA) films under 488 nm light. The holographic characterization and reconstruction quality have been studied by varying the influencing parameters, such as the concentration of dichromate and electron donor and the molecular weight of the polymer matrix. Ammonium and potassium dichromate have been employed to sensitize the poly(acrylic acid) matrix. The recorded hologram can be efficiently reconstructed either with red light or with low energy in the blue region, without any post-recording thermal or chemical processing.

  14. [3-D imaging of skin cancers and survival].

    PubMed

    Piérard, G E

    2009-04-01

    The incidence of skin cancers is still on the rise despite the information provided to the public and cancer screening initiatives. We designed a 3D movie with the support of all Belgian university departments of dermatology. The objective of this presentation was to examine the impact of ultraviolet light on cells. Several topical themes were addressed, including the genotoxicity of light, the primary prevention of skin cancers, field actinodermatosis and carcinogenesis, skin cancer epidemiology, the duality of skin melanomas with contrasting prognoses, and the recognition of melanoma stem cells. PMID:19514537

  15. Automatic extraction of retinal features from colour retinal images for glaucoma diagnosis: a review.

    PubMed

    Haleem, Muhammad Salman; Han, Liangxiu; van Hemert, Jano; Li, Baihua

    2013-01-01

    Glaucoma is a group of eye diseases that share common traits such as high eye pressure, damage to the optic nerve head, and gradual vision loss. It affects peripheral vision and eventually leads to blindness if left untreated. The current common methods of pre-diagnosis of glaucoma include measurement of Intra-Ocular Pressure (IOP) using a tonometer, pachymetry, and gonioscopy, which are performed manually by clinicians. These tests are usually followed by an Optic Nerve Head (ONH) appearance examination for the confirmed diagnosis of glaucoma. The diagnosis requires regular monitoring, which is costly and time consuming, and its accuracy and reliability are limited by the domain knowledge of different ophthalmologists. Automatic diagnosis of glaucoma therefore attracts a lot of attention. This paper surveys the state of the art in automatic extraction of anatomical features from retinal images to assist early diagnosis of glaucoma. We have conducted a critical evaluation of the existing automatic extraction methods based on features including the Optic Cup to Disc Ratio (CDR), Retinal Nerve Fibre Layer (RNFL), Peripapillary Atrophy (PPA), neuroretinal rim notching, and vasculature shift, which adds value to efficient feature extraction related to glaucoma diagnosis. PMID:24139134

  16. 3D imaging from theory to practice: the Mona Lisa story

    NASA Astrophysics Data System (ADS)

    Blais, Francois; Cournoyer, Luc; Beraldin, J.-Angelo; Picard, Michel

    2008-08-01

    The warped poplar panel and the technique developed by Leonardo to paint the Mona Lisa present a unique research and engineering challenge for the design of a complete optical 3D imaging system. This paper discusses the solution developed to precisely measure in 3D the world's most famous painting despite its highly contrasted paint surface and reflective varnish. The discussion focuses on the opto-mechanical design and the complete portable 3D imaging system used for this unique occasion. The challenges associated with obtaining 3D color images at a resolution of 0.05 mm and a depth precision of 0.01 mm are illustrated by exploring the virtual 3D model of the Mona Lisa.

  17. Computer generated hologram of deep 3D scene from the data captured by integral imaging

    NASA Astrophysics Data System (ADS)

    Wakunami, Koki; Yamaguchi, Masahiro; Javidi, Bahram

    2012-06-01

    Various techniques to visualize a 3-D object or scene have been proposed to date: stereoscopic displays, parallax barriers, lenticular approaches, integral imaging displays, and holographic displays. Application to a real, existing 3-D scene is one of the important issues. In this paper, the fundamental limitation of integral imaging display for deep 3-D scenes is first discussed. Then two main types of holographic display are overviewed, describing their principal advantages and disadvantages: a digital holography approach that digitally captures an interference pattern, and a computer generated hologram (CGH) approach based on a set of perspective images.

  18. Processing sequence for non-destructive inspection based on 3D terahertz images

    NASA Astrophysics Data System (ADS)

    Balacey, H.; Perraud, Jean-Baptiste; Bou Sleiman, J.; Guillet, Jean-Paul; Recur, B.; Mounaix, P.

    2014-11-01

    In this paper we present an innovative data and image processing sequence for performing non-destructive inspection from 3D terahertz (THz) images. We develop all the steps, starting from a 3D tomographic reconstruction of a sample from its radiographs acquired with a monochromatic millimetre wave imaging system. An automated segmentation then provides the different volumes of interest (VOI) composing the sample, after which 3D visualization and dimensional measurements are performed on these VOI separately, in order to provide accurate nondestructive testing (NDT) of the studied sample. The sequence is implemented in a single software package and validated through the analysis of different objects.

  19. Algorithm of pulmonary emphysema extraction using low dose thoracic 3D CT images

    NASA Astrophysics Data System (ADS)

    Saita, S.; Kubo, M.; Kawata, Y.; Niki, N.; Nakano, Y.; Omatsu, H.; Tominaga, K.; Eguchi, K.; Moriyama, N.

    2006-03-01

    Recently, due to aging and smoking, the number of emphysema patients has been increasing. Alveoli destroyed by emphysema cannot be restored, so early detection of emphysema is desirable. We describe a quantitative algorithm for extracting emphysematous lesions and quantitatively evaluating their distribution patterns using low dose thoracic 3-D CT images. The algorithm identifies lung anatomy and extracts low attenuation areas (LAA) as emphysematous lesion candidates. Applying the algorithm to 100 thoracic 3-D CT images and their follow-up 3-D CT scans, we demonstrate its potential effectiveness in assisting radiologists and physicians to quantitatively evaluate the distribution of emphysematous lesions and their evolution over time.

  20. Algorithm of pulmonary emphysema extraction using thoracic 3D CT images

    NASA Astrophysics Data System (ADS)

    Saita, Shinsuke; Kubo, Mitsuru; Kawata, Yoshiki; Niki, Noboru; Nakano, Yasutaka; Ohmatsu, Hironobu; Tominaga, Keigo; Eguchi, Kenji; Moriyama, Noriyuki

    2007-03-01

    Recently, due to aging and smoking, the number of emphysema patients has been increasing. Alveoli destroyed by emphysema cannot be restored, so early detection of emphysema is desirable. We describe a quantitative algorithm for extracting emphysematous lesions and quantitatively evaluating their distribution patterns using low dose thoracic 3-D CT images. The algorithm identifies lung anatomy and extracts low attenuation areas (LAA) as emphysematous lesion candidates. Applying the algorithm to thoracic 3-D CT images and their follow-up 3-D CT scans, we demonstrate its potential effectiveness in assisting radiologists and physicians to quantitatively evaluate the distribution of emphysematous lesions and their evolution over time.
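    The low-attenuation-area extraction used in the two records above boils down to counting lung voxels below an attenuation cutoff. The sketch below is a minimal illustration, assuming a CT volume already calibrated in Hounsfield units and a precomputed binary lung mask; the -950 HU threshold is a commonly used convention, not a value taken from these abstracts.

    ```python
    # Minimal LAA sketch, not the papers' full algorithm (which also identifies
    # lung anatomy and analyses lesion distribution patterns).
    import numpy as np

    def laa_percentage(ct_hu, lung_mask, threshold_hu=-950):
        """Percentage of lung voxels below the attenuation threshold.

        ct_hu: 3-D array of Hounsfield units; lung_mask: boolean array, same shape.
        """
        lung_voxels = ct_hu[lung_mask]
        laa_voxels = lung_voxels < threshold_hu
        return 100.0 * laa_voxels.sum() / lung_voxels.size
    ```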

  1. Spatial Mutual Information as Similarity Measure for 3-D Brain Image Registration

    PubMed Central

    RAZLIGHI, QOLAMREZA R.; KEHTARNAVAZ, NASSER

    2014-01-01

    Information theoretic-based similarity measures, in particular mutual information, are widely used for intermodal/intersubject 3-D brain image registration. However, conventional mutual information does not consider spatial dependency between adjacent voxels in images, thus reducing its efficacy as a similarity measure in image registration. This paper first presents a review of the existing attempts to incorporate spatial dependency into the computation of mutual information (MI). Then, a recently introduced spatially dependent similarity measure, named spatial MI, is extended to 3-D brain image registration. This extension also eliminates its artifact for translational misregistration. Finally, the effectiveness of the proposed 3-D spatial MI as a similarity measure is compared with three existing MI measures by applying controlled levels of noise degradation to 3-D simulated brain images. PMID:24851197
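    As background for the record above, the sketch below computes conventional (histogram-based) mutual information between two images, the baseline similarity measure that spatial MI extends; it is an illustration only, not the paper's spatial MI, and the bin count is an arbitrary choice.

    ```python
    # Conventional mutual information from a joint intensity histogram.
    import numpy as np

    def mutual_information(img_a, img_b, bins=64):
        joint_hist, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
        p_ab = joint_hist / joint_hist.sum()        # joint distribution
        p_a = p_ab.sum(axis=1, keepdims=True)       # marginal of image A
        p_b = p_ab.sum(axis=0, keepdims=True)       # marginal of image B
        nonzero = p_ab > 0
        return float(np.sum(p_ab[nonzero] *
                            np.log(p_ab[nonzero] / (p_a @ p_b)[nonzero])))
    ```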

  2. 3D-snapshot flash NMR imaging of the human heart.

    PubMed

    Henrich, D; Haase, A; Matthaei, D

    1990-01-01

    SNAPSHOT-FLASH is a recently developed, ultrafast imaging technique, based on conventional FLASH imaging. The application of this new variant to 3D imaging allows the acquisition of a 128 x 128 x 32 data set in 12.5 seconds without triggering, or for cardiac imaging with gating within 32 heartbeats. Compared to standard 3D-FLASH this is 128 times faster, because triggering is only required when the 3D phase-encoding gradient is incremented. The method depicts for the first time fast three-dimensional views of the human heart without motional artifacts. The images are spin-density weighted. Using suitable prepulses any desired T1- or T2-contrast may be achieved. The generation of 3D movies is possible without an increase of the total scan time. PMID:2392025

  3. Robust Gradient-Based 3-D/2-D Registration of CT and MR to X-Ray Images

    Microsoft Academic Search

    Primoz Markelj; Dejan Tomazevic; Franjo Pernus; Bostjan Likar

    2008-01-01

    One of the most important technical challenges in image-guided intervention is to obtain a precise transformation between the intrainterventional patient's anatomy and corresponding preinterventional 3-D image on which the intervention was planned. This goal can be achieved by acquiring intrainterventional 2-D images and matching them to the preinterventional 3-D image via 3-D/2-D image registration. A novel 3-D/2-D registration method is

  4. Mutual information as a measure of image quality for 3D dynamic lung imaging with EIT.

    PubMed

    Crabb, M G; Davidson, J L; Little, R; Wright, P; Morgan, A R; Miller, C A; Naish, J H; Parker, G J M; Kikinis, R; McCann, H; Lionheart, W R B

    2014-05-01

    We report on a pilot study of dynamic lung electrical impedance tomography (EIT) at the University of Manchester. Low-noise EIT data at 100 frames per second were obtained from healthy male subjects during controlled breathing, followed by magnetic resonance imaging (MRI) subsequently used for spatial validation of the EIT reconstruction. The torso surface in the MR image and electrode positions obtained using MRI fiducial markers informed the construction of a 3D finite element model extruded along the caudal-distal axis of the subject. Small changes in the boundary that occur during respiration were accounted for by incorporating the sensitivity with respect to boundary shape into a robust temporal difference reconstruction algorithm. EIT and MRI images were co-registered using the open source medical imaging software, 3D Slicer. A quantitative comparison of quality of different EIT reconstructions was achieved through calculation of the mutual information with a lung-segmented MR image. EIT reconstructions using a linear shape correction algorithm reduced boundary image artefacts, yielding better contrast of the lungs, and had 10% greater mutual information compared with a standard linear EIT reconstruction. PMID:24710978

  5. Validating retinal fundus image analysis algorithms: issues and a proposal.

    PubMed

    Trucco, Emanuele; Ruggeri, Alfredo; Karnowski, Thomas; Giancardo, Luca; Chaum, Edward; Hubschman, Jean Pierre; Al-Diri, Bashir; Cheung, Carol Y; Wong, Damon; Abràmoff, Michael; Lim, Gilbert; Kumar, Dinesh; Burlina, Philippe; Bressler, Neil M; Jelinek, Herbert F; Meriaudeau, Fabrice; Quellec, Gwénolé; Macgillivray, Tom; Dhillon, Bal

    2013-05-01

    This paper concerns the validation of automatic retinal image analysis (ARIA) algorithms. For reasons of space and consistency, we concentrate on the validation of algorithms processing color fundus camera images, currently the largest section of the ARIA literature. We sketch the context (imaging instruments and target tasks) of ARIA validation, summarizing the main image analysis and validation techniques. We then present a list of recommendations focusing on the creation of large repositories of test data created by international consortia, easily accessible via moderated Web sites, including multicenter annotations by multiple experts, specific to clinical tasks, and capable of running submitted software automatically on the data stored, with clear and widely agreed-on performance criteria, to provide a fair comparison. PMID:23794433

  6. Noninvasive Imaging of Retinal Morphology and Microvasculature in Obese Mice Using Optical Coherence Tomography and Optical Microangiography

    PubMed Central

    Zhi, Zhongwei; Chao, Jennifer R.; Wietecha, Tomasz; Hudkins, Kelly L.; Alpers, Charles E.; Wang, Ruikang K.

    2014-01-01

    Purpose. To evaluate early diabetes-induced changes in retinal thickness and microvasculature in a type 2 diabetic mouse model by using optical coherence tomography (OCT)/optical microangiography (OMAG). Methods. Twenty-two-week-old obese (OB) BTBR mice (n = 10) and wild-type (WT) control mice (n = 10) were imaged. Three-dimensional (3D) data volumes were captured with spectral domain OCT using an ultrahigh-sensitive OMAG scanning protocol for 3D volumetric angiography of the retina and dense A-scan protocol for measurement of the total retinal blood flow (RBF) rate. The thicknesses of the nerve fiber layer (NFL) and that of the NFL to the inner plexiform layer (IPL) were measured and compared between OB and WT mice. The linear capillary densities within intermediate and deep capillary layers were determined by the number of capillaries crossing a 500-μm line. The RBF rate was evaluated using an en face Doppler approach. These quantitative measurements were compared between OB and WT mice. Results. The retinal thickness of the NFL to IPL was significantly reduced in OB mice (P < 0.01) compared to that in WT mice, whereas the NFL thickness between the two was unchanged. 3D depth-resolved OMAG angiography revealed the first in vivo 3D model of mouse retinal microcirculation. Although no obvious differences in capillary vessel densities of the intermediate and deep capillary layers were detected between normal and OB mice, the total RBF rate was significantly lower (P < 0.05) in OB mice than in WT mice. Conclusions. We conclude that OB BTBR mice have significantly reduced NFL–IPL thicknesses and total RBF rates compared with those of WT mice, as imaged by OCT/OMAG. OMAG provides an unprecedented capability for high-resolution depth-resolved imaging of mouse retinal vessels and blood flow that may play a pivotal role in providing a noninvasive method for detecting early microvascular changes in patients with diabetic retinopathy. PMID:24458155

  7. Least Committed Splines in 3D Modelling of Free Form Objects from Intensity Images

    Microsoft Academic Search

    Kuntal Sengupta; Prabir Burman; Sumit Gupta

    2002-01-01

    Generating 3D models of objects from video sequences is an important problem in many multimedia applications ranging from teleconferencing to virtual reality. In this paper, we present a method of estimating the 3D face model from a monocular image sequence, using a few standard results from the affine camera geometry literature in computer vision, and spline fitting techniques using a

  8. Compact Ambient Light Cancellation Design and Optimization for 3D Time-of-Flight Image Sensors

    E-print Network

    Fossum, Eric R.

    A highly compact ambient-light-cancellation (ALC) circuit for 3D time-of-flight image sensors is presented, together with an analysis of its performance. The QVGA sensor has been demonstrated at up to 40k lux of ambient light.

  9. 3-D object recognition based on SVM and stereo-vision: Application in endoscopic imaging

    Microsoft Academic Search

    Jad Ayoub; Bertrand Granado; Olivier Romain; Yasser Mhanna

    2010-01-01

    In this paper we focus on the recognition of three-dimensional objects captured by an active stereo vision sensor. The study is related to our research project Cyclope; this embedded sensor, based on an active stereo-vision approach, allows real-time 3D object reconstruction. Our medical application requires differentiation between hyperplastic and adenomatous polyps during 3D endoscopic imaging. The detection algorithm consists of

  10. Adaptive Multiresolution Non-Local Means Filter for 3D MR Image Denoising

    E-print Network

    Paris-Sud XI, Université de

    An adaptive multiresolution version of the blockwise Non-Local (NL-) means filter is presented for 3D magnetic resonance image denoising. The multiresolution filter obtained competitive performance compared to recently proposed Rician NL-means filters.
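    For orientation, the sketch below applies plain blockwise non-local means denoising to a 3-D MR volume using scikit-image. It is not the authors' adaptive multiresolution, Rician-adapted filter; parameter values are illustrative.

    ```python
    # Minimal NL-means sketch for a 3-D grayscale volume (numpy array).
    import numpy as np
    from skimage.restoration import denoise_nl_means, estimate_sigma

    def denoise_mr_volume(volume):
        sigma = np.mean(estimate_sigma(volume))    # rough Gaussian noise estimate
        return denoise_nl_means(volume,
                                patch_size=3,      # small 3-D patches
                                patch_distance=5,  # search neighbourhood
                                h=0.8 * sigma,     # filtering strength
                                fast_mode=True)
    ```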

  11. Real-time pose estimation of 3D objects from camera images using neural networks

    Microsoft Academic Search

    P. Wunsch; S. Winkler; G. Hirzinger

    1997-01-01

    This paper deals with the problem of obtaining a rough estimate of three dimensional object position and orientation from a single two dimensional camera image. Such an estimate is required by most 3-D to 2-D registration and tracking methods that can efficiently refine an initial value by numerical optimization to precisely recover 3-D pose. However the analytic computation of an

  12. CONSTRUCTION 3D URBAN MODEL FROM LIDAR AND IMAGE SEQUENCE Fei Deng ; Zuxun Zhang ; Jianqing Zhang

    E-print Network

    Salvaggio, Carl

    The approach combines image sequences with geometric information from a laser-scanned DSM. A homologue line matching step, based on geometric constraints, is integrated into 3-D model reconstruction from multiple data sources using a probabilistic approach. Such 3D urban models are of interest to urban planners, architects, and telecommunication engineers, while manual 3D processing of aerial images

  13. 3-D Target Location from Stereoscopic SAR Images

    SciTech Connect

    DOERRY,ARMIN W.

    1999-10-01

    SAR range-Doppler images are inherently 2-dimensional. Targets with a height offset lay over onto offset range and azimuth locations. Just which image locations are laid upon depends on the imaging geometry, including depression angle, squint angle, and target bearing. This is the well known layover phenomenon. Images formed with different aperture geometries will exhibit different layover characteristics. These differences can be exploited to ascertain target height information, in a stereoscopic manner. Depending on the imaging geometries, height accuracy can be on the order of horizontal position accuracies, thereby rivaling the best IFSAR capabilities in fine resolution SAR images. All that is required for this to work are two distinct passes with suitably different geometries from any plain old SAR.

  14. 2D and 3D MALDI-imaging: conceptual strategies for visualization and data mining.

    PubMed

    Thiele, Herbert; Heldmann, Stefan; Trede, Dennis; Strehlow, Jan; Wirtz, Stefan; Dreher, Wolfgang; Berger, Judith; Oetjen, Janina; Kobarg, Jan Hendrik; Fischer, Bernd; Maass, Peter

    2014-01-01

    3D imaging has a significant impact on many challenges in the life sciences, because biology is a three-dimensional phenomenon. Current 3D imaging technologies (various types of MRI, PET, SPECT) are labeled, i.e. they trace the localization of a specific compound in the body. In contrast, 3D MALDI mass spectrometry imaging (MALDI-MSI) is a label-free method that images the spatial distribution of molecular compounds. It complements labeled 3D imaging methods, immunohistochemistry, and genetics-based methods. However, 3D MALDI-MSI cannot tap its full potential due to the lack of statistical methods for the analysis and interpretation of large and complex 3D datasets. To overcome this, we established a complete and robust 3D MALDI-MSI pipeline combined with efficient computational data analysis methods for 3D edge-preserving image denoising, 3D spatial segmentation, and finding colocalized m/z values, which are reviewed here in detail. Furthermore, we explain why integrating and correlating the MALDI imaging data with other imaging modalities enhances the interpretation of the molecular data and provides visualization of molecular patterns that may otherwise not be apparent. A 3D data acquisition workflow is therefore described that generates a set of three different image modalities representing the same anatomies. First, an in-vitro MRI measurement is performed, resulting in a three-dimensional image modality representing the 3D structure of the measured object. After sectioning the 3D object into N consecutive slices, all N slices are scanned using an optical digital scanner, enabling the MS measurements. Scanning the individual sections results in low-resolution images, which define the base coordinate system for the whole pipeline. The scanned images combine the information from the spatial (MRI) and the mass spectrometric (MALDI-MSI) dimensions and are used for the spatial three-dimensional reconstruction of the object, performed by image registration techniques. Different strategies for automatic serial image registration applied to MS datasets are outlined in detail. The third image modality is histology driven, i.e. a digital scan of the histologically stained slices in high resolution. After fusion of the reconstructed scan images and the MRI, the slice-related coordinates of the mass spectra can be propagated into 3D space. After image registration of the scan images and the histologically stained images, the anatomical information from histology is fused with the mass spectra from MALDI-MSI. As a result of the described pipeline we have a set of three-dimensional images representing the same anatomies, i.e. the reconstructed slice scans, the spectral images with corresponding clustering results, and the acquired MRI. Great emphasis is put on the fact that the co-registered MRI, providing anatomical detail, improves the interpretation of 3D MALDI images. The ability to relate mass spectrometry derived molecular information with in vivo and in vitro imaging has potentially important implications. This article is part of a Special Issue entitled: Computational Proteomics in the Post-Identification Era. Guest Editors: Martin Eisenacher and Christian Stephan. PMID:23467008

  15. 3D printing based on imaging data: review of medical applications

    Microsoft Academic Search

    F. Rengier; A. Mehndiratta; H. von Tengg-Kobligk; C. M. Zechmann; R. Unterhinninghofen; H.-U. Kauczor; F. L. Giesel

    2010-01-01

    Purpose: Generation of graspable three-dimensional objects applied for surgical planning, prosthetics and related applications using 3D printing or rapid prototyping is summarized and evaluated. Materials and methods: Graspable 3D objects overcome the limitations of 3D visualizations, which can only be displayed on flat screens. 3D objects can be produced based on CT or MRI volumetric medical images. Using dedicated post-processing algorithms, a

  16. 3D motion estimation of atmospheric layers from image sequences

    E-print Network

    Paris-Sud XI, Université de

    The paper addresses the problem of estimating three-dimensional motions of a stratified atmosphere from satellite image sequences, a problem in which we believe it is very important that the computer vision community gets involved. Tracking atmospheric layers is very difficult due to the sparsity of cloud systems. This makes the estimation of dense

  17. Intelligent auto tracking in 3D space by image processing

    Microsoft Academic Search

    Khalid Khalid Al Khateeb; Mat Kamil Awang; Othman O. Khalifa

    2009-01-01

    A robotic vision system has been designed and analyzed for real time tracking of maneuvering objects. Passive detection using live TV images provides the tracking signals derived from the video data. The calibration and orientation of two cameras is done by a bundle adjustment technique. The target location algorithm determines the centroid coordinates of the target in the image plane

  18. THE USE OF PANORAMIC IMAGES FOR 3-D ARCHAEOLOGICAL SURVEY

    Microsoft Academic Search

    Henrik Haggrén; Hanne Junnilainen; Jaakko Järvinen; Terhi Nuutinen; Mika Laventob; Mika Huotarib

    Panoramic images are efficiently used for documenting archaeological sites and objects. In our paper we present a new approach to developing the use of panoramic images for archaeological survey. The work is part of the Finnish Jabal Haroun Project, in Petra, Jordan. The primary motivation has been to develop a procedure for field inventory, in which photogrammetric documentation could be

  19. Automatic 3D Ultrasound Calibration for Image Guided Therapy Using Intramodality Image Registration

    PubMed Central

    Schlosser, Jeffrey; Kirmizibayrak, Can; Shamdasani, Vijay; Metz, Steve; Hristov, Dimitre

    2013-01-01

    Many real time ultrasound (US) guided therapies can benefit from management of motion-induced anatomical changes with respect to a previously acquired computerized anatomy model. Spatial calibration is a prerequisite to transforming US image information to the reference frame of the anatomy model. We present a new method for calibrating 3D US volumes using intramodality image registration, derived from the “hand eye” calibration technique. The method is fully automated by implementing data rejection based on sensor displacements, automatic registration over overlapping image regions, and a self-consistency error metric evaluated continuously during calibration. We also present a novel method for validating US calibrations based on measurement of physical phantom displacements within US images. Both calibration and validation can be performed on arbitrary phantoms. Results indicate that normalized mutual information and localized cross correlation produce the most accurate 3D US registrations for calibration. Volumetric image alignment is more accurate and reproducible than point selection for validating the calibrations, yielding <1.5 mm root mean square error, a significant improvement relative to previously reported hand eye US calibration results. Comparison of two different phantoms for calibration and for validation revealed significant differences for validation (p=0.003) but not for calibration (p=0.795). PMID:24099806

  20. Semi-implicit finite volume scheme for image processing in 3D cylindrical geometry

    NASA Astrophysics Data System (ADS)

    Mikula, Karol; Sgallari, Fiorella

    2003-12-01

    Nowadays, 3D echocardiography is a well-known technique in medical diagnosis. Inexpensive echocardiographic acquisition devices are applied to scan 2D slices rotated along a prescribed direction. Then the discrete 3D image information is given on a cylindrical grid. Usually, this original discrete image intensity function is interpolated to a uniform rectangular grid and then numerical schemes for 3D image processing operations (e.g. nonlinear smoothing) in the uniform rectangular geometry are used. However, due to the generally large amount of noise present in echocardiographic images, the interpolation step can yield undesirable results. In this paper, we avoid this step and suggest a 3D finite volume method for image selective smoothing directly in the cylindrical image geometry. Specifically, we study a semi-implicit 3D cylindrical finite volume scheme for solving a Perona-Malik-type nonlinear diffusion equation and apply the scheme to 3D cylindrical echocardiographic images. The L∞-stability and convergence of the scheme to the weak solution of the regularized Perona-Malik equation is proved.
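    To illustrate the kind of nonlinear diffusion being discretized, the sketch below performs one explicit Perona-Malik update on a regular 2-D grid. This is a simplified textbook version only; the paper itself uses a semi-implicit finite volume scheme formulated directly on the 3-D cylindrical acquisition grid.

    ```python
    # One explicit Perona-Malik step with conductance g(s) = 1 / (1 + (s/kappa)^2).
    import numpy as np

    def perona_malik_step(u, kappa=0.1, dt=0.2):
        # forward differences towards the four grid neighbours
        dn = np.roll(u, -1, axis=0) - u
        ds = np.roll(u,  1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u,  1, axis=1) - u
        g = lambda s: 1.0 / (1.0 + (s / kappa) ** 2)
        return u + dt * (g(np.abs(dn)) * dn + g(np.abs(ds)) * ds +
                         g(np.abs(de)) * de + g(np.abs(dw)) * dw)
    ```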

  1. A novel quality metric for evaluating depth distribution of artifacts in coded 3D images

    NASA Astrophysics Data System (ADS)

    Olsson, Roger; Sjöström, Mårten

    2008-02-01

    The two-dimensional quality metric Peak-Signal-To-Noise-Ratio (PSNR) is often used to evaluate the quality of coding schemes for different types of light field based 3D-images, e.g. integral imaging or multi-view. The metric results in a single accumulated quality value for the whole 3D-image. Evaluating single views -- seen from specific viewing angles -- gives a quality matrix that presents the 3D-image quality as a function of viewing angle. However, these two approaches do not capture all aspects of the induced distortion in a coded 3D-image. We have previously shown that coding schemes of a similar kind can distribute coding artifacts differently with respect to the 3D-image's depth. In this paper we propose a novel metric that captures the depth distribution of coding-induced distortion. Each element in the resulting quality vector corresponds to the quality at a specific depth. First we introduce the proposed full-reference metric and the operations on which it is based. Second, the experimental setup is presented. Finally, the metric is evaluated on a set of differently coded 3D-images and the results are compared, both with previously proposed quality metrics and with visual inspection.
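    For reference, the accumulated PSNR that the proposed depth-resolved metric refines is the standard definition below; the sketch simply evaluates it over a whole image pair and assumes 8-bit intensities.

    ```python
    # Standard full-reference PSNR between a reference image and a coded image.
    import numpy as np

    def psnr(reference, coded, max_value=255.0):
        mse = np.mean((reference.astype(np.float64) - coded.astype(np.float64)) ** 2)
        if mse == 0:
            return float("inf")      # identical images
        return 10.0 * np.log10(max_value ** 2 / mse)
    ```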

  2. Double depth-enhanced 3D integral imaging in projection-type system without diffuser

    NASA Astrophysics Data System (ADS)

    Zhang, Lei; Jiao, Xiao-xue; Sun, Yu; Xie, Yan; Liu, Shao-peng

    2015-05-01

    Integral imaging is a three-dimensional (3D) display technology that requires no additional viewing equipment. A new system is proposed in this paper which consists of the elemental images of real images in real mode (RIRM) and those of virtual images in real mode (VIRM). The real images in real mode are the same as in conventional integral imaging. The virtual images in real mode are obtained by changing the coordinates of the corresponding points in the elemental images, which can be reconstructed by the lens array in virtual space. In order to reduce the spot size of the reconstructed images, the diffuser used in conventional integral imaging is omitted in the proposed method; the spot size is then nearly 1/20 of that in the conventional system. An optical integral imaging system is constructed to confirm that the proposed method opens a new way for the application of passive 3D display technology.

  3. Towards Vessel Characterisation in the Vicinity of the Optic Disc in Digital Retinal Images

    Microsoft Academic Search

    H. F. Jelinek; C. Depardieu; C. Lucas; D. J. Cornforth; W. Huang; M. J. Cree

    Automated image processing has the potential to assist in the early detection of diabetes, by detecting changes in blood vessel patterns in the retina. This paper describes progress towards the development of an integrated automated analyser of the retinal blood vessels in the vicinity of the optic disc using digital colour retinal images. First the optic disc was detected using

  4. Edge Detection of the Optic Disc in Retinal Images Based on Identification of a Round Shape

    Microsoft Academic Search

    Thanapong Chaichana; Sarat Yoowattana; Zhonghua Sun; Supan Tangjitkusolmun; Supot Sookpotharom; Manas Sangworasil

    2008-01-01

    This paper presents a novel method for identifying the position of the optic disc in retinal images. The method is based on preliminary detection of the main edges in the retinal image. The segmented optic disc is estimated as a circular area. We searched for optic disc areas using the Hough transform, which detected several straight lines and
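    The circular-shape search can be illustrated with scikit-image's circular Hough transform, as in the sketch below. This is not the paper's implementation; the edge detector, radius range, and smoothing are placeholder choices.

    ```python
    # Minimal sketch: locate one circular candidate (e.g. an optic disc) in a
    # grayscale retinal image using edge detection and a circular Hough transform.
    import numpy as np
    from skimage.feature import canny
    from skimage.transform import hough_circle, hough_circle_peaks

    def find_optic_disc(gray_retina, radii=np.arange(30, 80, 2)):
        edges = canny(gray_retina, sigma=2.0)
        hough_spaces = hough_circle(edges, radii)
        _, cx, cy, found_radii = hough_circle_peaks(hough_spaces, radii,
                                                    total_num_peaks=1)
        return cx[0], cy[0], found_radii[0]   # centre (x, y) and radius in pixels
    ```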

  5. Multi-resolution Vessel Segmentation Using Normalized Cuts in Retinal Images

    E-print Network

    Chung, Albert C. S.

    In this paper, we focus on vessel segmentation in retinal images. Vessel segmentation is very important. Earlier work used a fuzzy C-means clustering algorithm and started the fuzzy tracking algorithm from the optic

  6. Improvement of in-vivo en-face OCT retinal images using adaptive optics

    E-print Network

    Dainty, Chris

    Improvement of in-vivo retinal images acquired using en-face optical coherence tomography is presented. The system also includes an adaptive optics closed-loop system that uses a Shack-Hartmann wavefront sensor and a 37 OKO membrane

  7. In Vivo Autofluorescence Imaging of the Human and Macaque Retinal Pigment Epithelial Cell Mosaic

    E-print Network

    Retinal pigment epithelial (RPE) cells are critical for the health of the retina, especially the photoreceptors. A recent study demonstrated that individual RPE cells could be imaged in macaque in vivo

  8. The Dual Bootstrap Iterative Closest Point Algorithm with Application to Retinal Image Registration

    E-print Network

    A retinal image registration algorithm called Dual-Bootstrap Iterative Closest Point (ICP) is presented. The approach is to start from one initial estimate, refined using the Iterative Closest Point (ICP) algorithm. In registering retinal image pairs, Dual-Bootstrap ICP is initialized

  9. Retinal layer segmentation of macular OCT images using boundary classification

    PubMed Central

    Lang, Andrew; Carass, Aaron; Hauser, Matthew; Sotirchos, Elias S.; Calabresi, Peter A.; Ying, Howard S.; Prince, Jerry L.

    2013-01-01

    Optical coherence tomography (OCT) has proven to be an essential imaging modality for ophthalmology and is proving to be very important in neurology. OCT enables high resolution imaging of the retina, both at the optic nerve head and the macula. Macular retinal layer thicknesses provide useful diagnostic information and have been shown to correlate well with measures of disease severity in several diseases. Since manual segmentation of these layers is time consuming and prone to bias, automatic segmentation methods are critical for full utilization of this technology. In this work, we build a random forest classifier to segment eight retinal layers in macular cube images acquired by OCT. The random forest classifier learns the boundary pixels between layers, producing an accurate probability map for each boundary, which is then processed to finalize the boundaries. Using this algorithm, we can accurately segment the entire retina contained in the macular cube to an accuracy of at least 4.3 microns for any of the nine boundaries. Experiments were carried out on both healthy and multiple sclerosis subjects, with no difference in the accuracy of our algorithm found between the groups. PMID:23847738
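    A very reduced sketch of the boundary-classification idea from this record follows: a random forest is trained on per-pixel feature vectors to predict boundary-class probabilities, which are then reshaped into probability maps. The feature matrix, label encoding, and tree count are hypothetical, and the actual method adds feature design and a boundary-finalisation step.

    ```python
    # Illustrative boundary classification with scikit-learn, not the authors' code.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    def train_boundary_classifier(features, boundary_labels, n_trees=60):
        """features: (n_pixels, n_features); boundary_labels: (n_pixels,) class ids."""
        forest = RandomForestClassifier(n_estimators=n_trees, n_jobs=-1)
        forest.fit(features, boundary_labels)
        return forest

    def boundary_probability_maps(forest, features, image_shape):
        probs = forest.predict_proba(features)        # (n_pixels, n_classes)
        return probs.reshape(image_shape + (probs.shape[1],))
    ```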

  10. Hands-on guide for 3D image creation for geological purposes

    NASA Astrophysics Data System (ADS)

    Frehner, Marcel; Tisato, Nicola

    2013-04-01

    Geological structures in outcrops or hand specimens are inherently three-dimensional (3D), and therefore better understandable if viewed in 3D. While 3D models can easily be created, manipulated, and looked at from all sides on the computer screen (e.g., using photogrammetry or laser scanning data), 3D visualizations for publications or conference posters are much more challenging as they have to live in a 2D world (i.e., on a sheet of paper). Perspective 2D visualizations of 3D models do not fully transmit the "feeling and depth of the third dimension" to the audience; but this feeling is desirable for a better examination and understanding in 3D of the structure under consideration. One of the very few possibilities to generate real 3D images, which work on a 2D display, is by using so-called stereoscopic images. Stereoscopic images are two images of the same object recorded from two slightly offset viewpoints. Special glasses and techniques have to be used to make sure that one image is seen only by one eye, and the other image is seen by the other eye, which together lead to the "3D effect". Geoscientists are often familiar with such 3D images. For example, geomorphologists traditionally view stereographic orthophotos by employing a mirror stereoscope. Nowadays, petroleum geoscientists examine high-resolution 3D seismic data sets in special 3D visualization rooms. One of the methods for generating and viewing a stereoscopic image, which does not require a high-tech viewing device, is to create a so-called anaglyph. The principle is to overlay two images saturated in red and cyan, respectively. The two images are then viewed through red-cyan stereoscopic glasses. This method is simple and cost-effective, but has some drawbacks in preserving colors accurately. A similar method is used in 3D movies, where polarized light or shuttering techniques are used to separate the left from the right image, which allows preserving the original colors. The advantage of red-cyan anaglyphs is their simplicity and the possibility to print them on normal paper or project them using a conventional projector. Producing 3D stereoscopic images is much easier than commonly thought. Our hands-on poster provides an easy-to-use guide for producing 3D stereoscopic images. A few simple rules of thumb are presented that define how photographs of any scene or object have to be shot to produce good-looking 3D images. We use the free software Stereophotomaker (http://stereo.jpn.org/eng/stphmkr) to produce anaglyphs and provide red-cyan 3D glasses for viewing them. Our hands-on poster is easy to adapt and helps any geologist to present his/her field or hand specimen photographs in a much more fashionable 3D way for future publications or conference posters.
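    The red-cyan overlay described above can be reproduced in a few lines, as in the sketch below: the red channel is taken from the left image and the green/blue (cyan) channels from the right image. File names are placeholders, and both photographs are assumed to be same-sized RGB images.

    ```python
    # Minimal red-cyan anaglyph sketch following the recipe in the abstract.
    import numpy as np
    from imageio.v2 import imread, imwrite

    def make_anaglyph(left_path, right_path, out_path="anaglyph.png"):
        left = imread(left_path)      # RGB photo from the left viewpoint
        right = imread(right_path)    # RGB photo from the right viewpoint
        anaglyph = np.zeros_like(left)
        anaglyph[..., 0] = left[..., 0]      # red channel from the left image
        anaglyph[..., 1:] = right[..., 1:]   # green and blue from the right image
        imwrite(out_path, anaglyph)
    ```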

  11. Developing New Image Registration Techniques and 3D Displays for Neuroimaging and Neurosurgery Yuese Zheng1

    E-print Network

    Zhou, Yaoqi

    New image registration techniques and 3D displays are being developed for neuroimaging and neurosurgery, with accuracy tuned to best fit the neurosurgery application. In the recent phase, we focus on feature extraction; for neurosurgery applications, intelligent preprocessing provides a promising solution.

  12. 3D reconstruction from 2D images and applications to cell cytoskeleton

    E-print Network

    Cheng, Yuan, 1971-

    2001-01-01

    Approaches to achieve three dimensional (3D) reconstruction from 2D images can be grouped into two categories: computer-vision-based reconstruction and tomographic reconstruction. By exploring both the differences and ...

  13. A low-cost defocus blur module for video rate quantified 3D imaging

    E-print Network

    Ho, Leeway, 1982-

    2004-01-01

    Existing three-dimensional surface imaging systems are expensive, difficult to use, time consuming, and do not always provide the best accuracy or resolution. By using an offset aperture on a rotating disc, the 3D Monocular ...

  14. 3D Quantitative microwave imaging from sparsely measured data with Huber regularization

    E-print Network

    Pizurica, Aleksandra


  15. A Comparison of Simularity Measures for use in 2D-3D Medical Image Registration

    Microsoft Academic Search

    Graeme P. Penney; Jürgen Weese; John A. Little; Paul Desmedt; Derek L. G. Hill; David J. Hawkes

    1998-01-01

    A comparison of six similarity measures for use in intensity-based two-dimensional-three-dimensional (2-D-3-D) image registration is presented. The accuracy of the similarity measures is compared to a

  16. Multiresolution 3-D reconstruction from side-scan sonar images.

    PubMed

    Coiras, Enrique; Petillot, Yvan; Lane, David M

    2007-02-01

    In this paper, a new method for the estimation of seabed elevation maps from side-scan sonar images is presented. The side-scan image formation process is represented by a Lambertian diffuse model, which is then inverted by a multiresolution optimization procedure inspired by expectation-maximization to account for the characteristics of the imaged seafloor region. On convergence of the model, approximations for seabed reflectivity, side-scan beam pattern, and seabed altitude are obtained. The performance of the system is evaluated against a real structure of known dimensions. Reconstruction results for images acquired by different sonar sensors are presented. Applications to augmented reality for the simulation of targets in sonar imagery are also discussed. PMID:17269632

  17. Hyperspectral image compression based on the framework of DSC using 3D-wavelet and LDPC

    Microsoft Academic Search

    Jiaji Wu; Kun Jiang; Yong Fang; Licheng Jiao

    2009-01-01

    In this paper, we propose a method based on both 3D-wavelet transform and low-density parity-check codes to realize the compression of hyperspectral images on the framework of DSC (Distributed Source Coding). The new approach which combines DSC and 3D-wavelet transform technique makes it possible to realize low encoding complexity at the encoder and achieve efficient performance of hyperspectral image compression.

  18. 3D processing of range image data for vision applications in manufacturing

    Microsoft Academic Search

    Dongming Zhao; Songtao Li; Jin Deng

    1998-01-01

    3D data present pertinent information about geometrical features of an object. It has been a classical approach that image data acquired by range sensors are processed as traditional 2.5D images. Range data have rich information that needs some special treatment in order to fully understand and utilize them. In this report, two case studies are presented to investigate the 3D

  19. On 3-D scene flow and structure recovery from multiview image sequences

    Microsoft Academic Search

    Ye Zhang; Chandra Kambhamettu

    2003-01-01

    Two novel systems computing dense three-dimensional (3-D) scene flow and structure from multiview image sequences are described in this paper. We do not assume rigidity of the scene motion, thus allowing for nonrigid motion in the scene. The first system, integrated model-based system (IMS), assumes that each small local image region is undergoing 3-D affine motion. Non-linear motion model fitting

  20. 3-D reconstruction of biological objects using underwater video technique and image processing

    Microsoft Academic Search

    S. Cocito; S. Sgorbini; A. Peirano; M. Valle

    2003-01-01

    This paper describes a 3-D reconstruction method which allows accurate measurements of volume, surface area and other morphometric parameters of three-dimensional biological objects, without removing them from the sea. It represents a novel approach based on multiple views (eight proved to be sufficient) from underwater video images and a new image processing procedure (MOD3D), whose application has met the basic

  1. A multi-emitter localization comparison of 3D superresolution imaging modalities

    PubMed Central

    Liu, Sheng

    2014-01-01

    Single-molecule localization-based superresolution imaging is complicated by emission from multiple emitters overlapping at the detector. The potential for overlapping emitters is even greater for 3D imaging than for 2D imaging due to the large effective ‘volume’ of the 3D point spread function (PSF). Overlapping emission can be accounted for in the estimation model, recovering the ability to localize the emitters, but with the caveat that the localization precision has a dependence on the amount of overlap from other emitters. We investigate if a particular 3D imaging modality has a significant advantage in facilitating the position estimation of overlapping emitters. We compare variants of two commonly used and easily implemented imaging modalities for 3D single-molecule imaging: astigmatic imaging; dual focal plane imaging; and the combination of the two approaches- dual focal plane imaging with astigmatism. We use the Cramér-Rao lower bound (CRLB) to quantify the multi-emitter estimation performance by calculating the theoretical best localization precision under a multi-emitter estimation model. We investigate the performance of these 3D modalities under a wide range of conditions including various distributions of collected photons per emitter, background counts, pixel sizes, and camera readout noise values. Differences between modalities were small and we therefore conclude that multi-emitter fitting performance should not be a primary factor in selecting between these modalities. PMID:24281982

  2. RECONSTRUCTION OF 3D DIGITAL IMAGE OF WEEPINGFORSYTHIA POLLEN

    Microsoft Academic Search

    Dongwu Liu; Zhiwei Chen; Hongzhi Xu; Wenqi Liu; Lina Wang

    2009-01-01

    Confocal microscopy, which is a major advance upon normal light microscopy, has been used in a number of scientific fields. By confocal microscopy techniques, cells and tissues can be visualized deeply, and three-dimensional images created. Compared with conventional microscopes, confocal microscope improves the resolution of images by eliminating out-of-focus light. Moreover, confocal microscope has a higher level of sensitivity due

  3. Improved 3D cellular imaging by multispectral focus assessment

    NASA Astrophysics Data System (ADS)

    Zhao, Tong; Xiong, Yizhi; Chung, Alice P.; Wachman, Elliot S.; Farkas, Daniel L.

    2005-03-01

    Biological specimens are three-dimensional structures. However, when capturing their images through a microscope, there is only one plane in the field of view that is in focus, and out-of-focus portions of the specimen affect image quality in the in-focus plane. It is well-established that the microscope's point spread function (PSF) can be used for blur quantitation, for the restoration of real images. However, this is an ill-posed problem, with no unique solution and with high computational complexity. In this work, instead of estimating and using the PSF, we studied focus quantitation in multi-spectral image sets. A gradient map we designed was used to evaluate the sharpness degree of each pixel, in order to identify blurred areas not to be considered. Experiments with realistic multi-spectral Pap smear images showed that measurement of their sharp gradients can provide depth information roughly comparable to human perception (through a microscope), while avoiding PSF estimation. Spectrum and morphometrics-based statistical analysis for abnormal cell detection can then be implemented in an image database where the axial structure has been refined.
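    In the spirit of the gradient-based focus assessment described above, the sketch below computes a simple gradient-magnitude sharpness map and a mask of pixels considered in focus. It is an illustration only, not the authors' gradient map, and the threshold is arbitrary.

    ```python
    # Sobel-gradient sharpness map; pixels with weak gradients are flagged as blurred.
    import numpy as np
    from scipy import ndimage

    def sharpness_mask(gray_image, threshold=10.0):
        img = gray_image.astype(np.float64)
        gx = ndimage.sobel(img, axis=1)
        gy = ndimage.sobel(img, axis=0)
        gradient_magnitude = np.hypot(gx, gy)
        return gradient_magnitude, gradient_magnitude > threshold
    ```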

  4. In vivo integrated photoacoustic ophthalmoscopy, optical coherence tomography, and scanning laser ophthalmoscopy for retinal imaging

    NASA Astrophysics Data System (ADS)

    Song, Wei; Zhang, Rui; Zhang, Hao F.; Wei, Qing; Cao, Wenwu

    2012-12-01

    The physiological and pathological properties of retina are closely associated with various optical contrasts. Hence, integrating different ophthalmic imaging technologies is more beneficial in both fundamental investigation and clinical diagnosis of several blinding diseases. Recently, photoacoustic ophthalmoscopy (PAOM) was developed for in vivo retinal imaging in small animals, which demonstrated the capability of imaging retinal vascular networks and retinal pigment epithelium (RPE) at high sensitivity. We combined PAOM with traditional imaging modalities, such as fluorescein angiography (FA), spectral-domain optical coherence tomography (SD-OCT), and auto-fluorescence scanning laser ophthalmoscopy (AF-SLO), for imaging rats and mice. The multimodal imaging system provided more comprehensive evaluation of the retina based on the complementary imaging contrast mechanisms. The high-quality retinal images show that the integrated ophthalmic imaging system has great potential in the investigation of blinding disorders.

  5. A novel 2D-3D registration algorithm for aligning fluoro images with 3D pre-op CT/MR images

    NASA Astrophysics Data System (ADS)

    Sundar, Hari; Khamene, Ali; Xu, Chenyang; Sauer, Frank; Davatzikos, Christos

    2006-03-01

    We propose a novel and fast way to perform 2D-3D registration between available intra-operative 2D images and pre-operative 3D images in order to provide better image guidance. The current work is a feature-based registration algorithm that allows the similarity to be evaluated more efficiently and faster than intensity-based approaches. The approach is focused on neuro-interventional applications, and we therefore use blood vessels, specifically their centerlines, as the features for registration. The blood vessels are segmented from the 3D datasets and their centerline is extracted using a sequential topological thinning algorithm. Segmentation of the 3D datasets is straightforward because of the injection of contrast agents. For the 2D image, segmentation of the blood vessel is performed by subtracting the image with no contrast (native) from the one with a contrast injection (fill). Following this we compute a modified version of the 2D distance transform. The modified distance transform is computed such that the distance is zero on the centerline and increases as we move away from the centerline. This gives a smooth metric that is minimal at the centerline and large as we move away from the vessel. It is a one-time computation and need not be reevaluated during the iterations. Also, we simply sum over all the points rather than evaluating distances over all point pairs, as would be done for similar Iterative Closest Point (ICP) based approaches. We estimate the three rotational and three translational parameters by minimizing this cost over all points in the 3D centerline. The speed improvement allows us to perform the registration in under a second on current workstations, and therefore provides interactive registration for the interventionalist.
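    The cost evaluation described above can be sketched as follows: the 2-D centerline is turned into a distance map once, and a candidate pose is scored by summing the map values at the projected 3-D centerline points. This is a simplified illustration, not the paper's implementation; the projection of the 3-D points under a candidate pose is left as a placeholder for the actual fluoroscope model.

    ```python
    # Distance-map-based registration cost for projected centerline points.
    import numpy as np
    from scipy.ndimage import distance_transform_edt

    def centerline_distance_map(centerline_mask_2d):
        """centerline_mask_2d: boolean image, True on the 2-D vessel centerline.
        Returns a map that is zero on the centerline and grows away from it."""
        return distance_transform_edt(~centerline_mask_2d)

    def pose_cost(distance_map, projected_points):
        """Sum of distance-map values at the (row, col) projections of 3-D points."""
        rows = np.clip(projected_points[:, 0].round().astype(int),
                       0, distance_map.shape[0] - 1)
        cols = np.clip(projected_points[:, 1].round().astype(int),
                       0, distance_map.shape[1] - 1)
        return float(distance_map[rows, cols].sum())
    ```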

  6. 3D Wavelet Sub-Bands Mixing for Image Pierrick Coupe1,2,4

    E-print Network

    Boyer, Edmond

    A 3D wavelet sub-bands mixing approach is presented for image denoising. Quantitative validation was carried out on synthetic datasets, and qualitative results on real data are presented.

  7. Stochastic Tracking of 3D Human Figures Using 2D Image Motion

    Microsoft Academic Search

    Hedvig Sidenbladh; Michael J. Black; David J. Fleet

    2000-01-01

    A probabilistic method for tracking 3D articulated human figures in monocular image sequences is presented. Within a Bayesian framework, we define a generative model of image appearance, a robust likelihood function based on image graylevel differences, and a prior probability distribution over pose and joint angles that models how humans move. The posterior probability distribution over model parameters is

  8. 3D Reconstruction of Buildings and Vegetation from Synthetic Aperture Radar (SAR) Images

    E-print Network

    3D reconstructions of buildings and vegetation can be obtained from the shadows in synthetic aperture radar (SAR) images. First, shadows are detected in the SAR image. Then the shadows are assigned to suitable elevated objects. Since it is assumed

  9. Lossless compression of hyperspectral images based on 3D context prediction

    Microsoft Academic Search

    Lin Bai; Mingyi He; Yuchao Dai

    2008-01-01

    Prediction algorithms play an important role in lossless compression of hyperspectral images. However, conventional lossless compression algorithms based on prediction are usually inefficient in exploiting correlation in hyperspectral images. In this paper, a new algorithm for lossless compression of hyperspectral images based on 3D context prediction is proposed. The proposed algorithm consists of three parts to exploit the high spectral

  10. MAPPING DIGITAL IMAGE TEXTURE ONTO 3D MODEL FROM LIDAR DATA

    Microsoft Academic Search

    Chunmei Hu; Yanmin Wang; Wentao Yu

    In this paper, an experimental system is developed to address the problem of mapping a digital image onto a 3D model from LIDAR data. First, corresponding points between the point cloud and the digital image are chosen; these corresponding points are then used to calculate the exterior and interior orientation elements and the systematic error corrections of the image. For the purpose of

  11. Pearling: 3D interactive extraction of tubular structures from volumetric images

    E-print Network

    Rossignac, Jarek

    This paper presents Pearling, a novel three-dimensional interactive technique for extracting tubular structures from a volumetric image. Given a user-supplied initialization, Pearling extracts runs of pearls (balls) from the image

  12. Synchrotron infrared confocal microscope: Application to infrared 3D spectral imaging

    E-print Network

    Paris-Sud XI, Université de

    A synchrotron source coupled to an infrared microscope allows imaging at the so-called diffraction limit. Thus, numerous infrared beamlines around the world have been developed for infrared chemical imaging. Infrared microscopes

  13. Construction of Animal Models and Motion Synthesis in 3D Virtual Environments using Image Sequences

    E-print Network

    Tziritas, Georgios

    The model is constructed from 2D images captured from specific views. The animation is synthesised by using physical motion models of the animal and tracking data from image sequences. Finally, the user selects some

  14. Classification and Characterization of Image Acquisition for 3D Scene Visualization and Reconstruction Applications

    Microsoft Academic Search

    Shou Kang Wei; Fay Huang; Reinhard Klette

    2000-01-01

    This paper discusses the techniques of image acquisition for 3D scene visualization and reconstruction (3DSVR) applications. The existing image acquisition approaches in 3DSVR applications are briefly reviewed. There is still a lack of studies about what principles are essential in the design and how the limitations of an image acquisition model can be characterized in a formal way. This paper addresses some of

  15. Estimation of In Situ 3-D Particle Distributions From a Stereo Laser Imaging Profiler

    Microsoft Academic Search

    Paul Leo Drinkwater Roberts; Jonah V. Steinbuck; Jules S. Jaffe; Alexander R. Horner-Devine; Peter J. S. Franks; Fernando Simonet

    2011-01-01

    In this paper, an image processing system for estimating 3-D particle distributions from stereo light scatter images is described. The system incorporates measured, three-component velocity data to mitigate particle blur associated with instrument motion. An iterative background estimation algorithm yields a local threshold operator that dramatically reduces bias in particle counts over the full image field. Algorithms are tested

  16. Investigating 3D Geometry of Porous Media from High Resolution Images

    E-print Network

    State University of New York at Stony Brook

    The study examines a core of basalt from a vesiculated lava flow imaged at 20 micron resolution.

  17. Enhanced 3D Perception using Super-Resolution and Saturation Control Techniques for Solar Images

    Microsoft Academic Search

    Anaglyphs are an interesting way of generating stereoscopic images, especially in a cost-efficient and technically simple way. An anaglyph is generated by combining stereo pair of images for left and right scenes with appropriate offset with respect to each other, where each image is shown using a different color in order to reflect the 3D effect for the users who

  18. 3D Image Interpolation Based on Directional Coherence Yongmei Wang Zhunping Zhang

    E-print Network

    Duncan, James S.

    The approach is evaluated in comparison with traditional image interpolation methods. The basis of DCI is a form of directional

  19. 3D Model Assisted Image Segmentation Srimal Jayawardena and Di Yang and Marcus Hutter

    E-print Network

    Hutter, Marcus

    We present our results on photographs of a real car. Keywords: image segmentation; 3D-2D registration. The car is sub-segmented into parts such as the windshield, fender, and front and back doors/windows. Many industry applications require such sub-segmentation; we focus our work on sub-segmentation of known car images. Cars pose a difficult segmentation task due

  20. Integration of virtual and real scenes within an integral 3D imaging environment

    NASA Astrophysics Data System (ADS)

    Ren, Jinsong; Aggoun, Amar; McCormick, Malcolm

    2002-11-01

    The Imaging Technologies group at De Montfort University has developed an integral 3D imaging system, which is seen as the most likely vehicle for 3D television free of adverse psychological effects. To create truly engaging three-dimensional television programs, a virtual studio that performs the tasks of generating, editing and integrating 3D content involving virtual and real scenes is required. The paper presents, for the first time, the procedures, factors and methods of integrating computer-generated virtual scenes with real objects captured using the 3D integral imaging camera system. The method of computer generation of 3D integral images, in which the lens array is modelled instead of the physical camera, is described. In the model, each micro-lens that captures different elemental images of the virtual scene is treated as an extended pinhole camera. An integration process named integrated rendering is illustrated. Detailed discussion and investigation are devoted to depth extraction from captured integral 3D images. The depth calculation method based on disparity, and the multiple-baseline method used to improve the precision of depth estimation, are also presented. The concept of colour SSD and its further improvement in precision are proposed and verified.
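
    The depth-from-disparity step above reduces, at its core, to block matching between elemental images with a colour sum-of-squared-differences (SSD) cost. The sketch below illustrates that core idea only, for a single baseline and with hypothetical function and parameter names; the paper's multiple-baseline refinement and integral-imaging geometry are not reproduced.

        import numpy as np

        def color_ssd_disparity(left, right, max_disp=32, block=7):
            # Brute-force colour-SSD block matching between two elemental images.
            # left, right: float arrays of shape (H, W, 3); returns an (H, W) integer disparity map.
            h, w, _ = left.shape
            half = block // 2
            disp = np.zeros((h, w), dtype=np.int32)
            for y in range(half, h - half):
                for x in range(half, w - half):
                    ref = left[y - half:y + half + 1, x - half:x + half + 1, :]
                    best_cost, best_d = np.inf, 0
                    for d in range(0, min(max_disp, x - half) + 1):
                        cand = right[y - half:y + half + 1, x - d - half:x - d + half + 1, :]
                        cost = np.sum((ref - cand) ** 2)  # colour SSD over all three channels
                        if cost < best_cost:
                            best_cost, best_d = cost, d
                    disp[y, x] = best_d
            return disp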

  1. FPGA-based real-time anisotropic diffusion filtering of 3D ultrasound images

    NASA Astrophysics Data System (ADS)

    Castro-Pareja, Carlos R.; Dandekar, Omkar S.; Shekhar, Raj

    2005-02-01

    Three-dimensional ultrasonic imaging, especially the emerging real-time version of it, is particularly valuable in medical applications such as echocardiography, obstetrics and surgical navigation. A known problem with ultrasound images is their high level of speckle noise. Anisotropic diffusion filtering has been shown to be effective in enhancing the visual quality of 3D ultrasound images and as preprocessing prior to advanced image processing. However, due to its arithmetic complexity and the sheer size of 3D ultrasound images, it is not possible to perform online, real-time anisotropic diffusion filtering using standard software implementations. We present an FPGA-based architecture that allows performing anisotropic diffusion filtering of 3D images at acquisition rates, thus enabling the use of this filtering technique in real-time applications, such as visualization, registration and volume rendering.
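
    For reference, the filtering operation that the FPGA architecture accelerates is, in its simplest software form, Perona-Malik anisotropic diffusion applied to the 3D volume. The sketch below is a minimal NumPy version of that operation under assumed parameter values; it says nothing about the hardware design described in the paper.

        import numpy as np

        def anisotropic_diffusion_3d(vol, n_iter=10, kappa=30.0, dt=0.1):
            # Minimal Perona-Malik diffusion of a 3D ultrasound volume (software reference only).
            u = vol.astype(np.float64).copy()
            for _ in range(n_iter):
                update = np.zeros_like(u)
                for axis in range(3):
                    fwd = np.roll(u, -1, axis=axis) - u    # forward difference
                    bwd = np.roll(u, 1, axis=axis) - u     # backward difference
                    # conductance g(s) = exp(-(s/kappa)^2) suppresses diffusion across edges
                    update += np.exp(-(fwd / kappa) ** 2) * fwd + np.exp(-(bwd / kappa) ** 2) * bwd
                u += dt * update
            return u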

  2. 3D segmentation and image annotation for quantitative diagnosis in lung CT images with pulmonary lesions

    NASA Astrophysics Data System (ADS)

    Li, Suo; Zhu, Yanjie; Sun, Jianyong; Zhang, Jianguo

    2013-03-01

    Pulmonary nodules and ground glass opacities are highly significant findings in high-resolution computed tomography (HRCT) of patients with pulmonary lesions. The appearances of pulmonary nodules and ground glass opacities show a relationship with different lung diseases. According to the corresponding characteristics of a lesion, pertinent segmentation methods and quantitative analysis are helpful for controlling and treating diseases at an earlier and potentially more curable stage. Currently, most studies have focused on two-dimensional quantitative analysis of these kinds of diseases. Compared to two-dimensional images, three-dimensional quantitative analysis can take full advantage of the isotropic image data acquired by using thin-slice HRCT and has better quantitative precision for clinical diagnosis. This presentation designs a computer-aided diagnosis component to segment 3D disease areas of nodules and ground glass opacities in lung CT images, and uses AIML (Annotation and Image Markup Language) to annotate the segmented 3D pulmonary lesions with quantitative measurement information, which may provide more features and information to radiologists in clinical diagnosis.

  3. 3D transvaginal ultrasound imaging for identification of endometrial abnormality

    NASA Astrophysics Data System (ADS)

    Olstad, Bjoern; Berg, Sevald; Torp, Anders H.; Schipper, Klaus P.; Eik-Nes, Sturla H.

    1995-05-01

    A multi-center study has previously evaluated the use of 2-dimensional transvaginal ultrasound (TVS) to measure the thickness of the endometrium as a risk indicator for endometrial abnormality in women with postmenopausal bleeding. In this paper we present methods using 3-dimensional TVS in order to improve the measurement, shape analysis and visualization of the endometrium. Active contour techniques are applied to identify the endometrium in a 3D dataset. The shape of the endometrium is then visualized and utilized to do quantitative measurements of the thickness. The voxels inside the endometrium are volume rendered in order to emphasize inhomogeneities. Since these inhomogeneities can exist both on the outside and the inside of the endometrium, the rendering algorithm has a controllable opacity function. A 3-dimensional distance transform is performed on the data volume measuring the shortest distance to the detected endometrium border for each voxel. This distance is used as a basis for opacity computations which allows the user to emphasize different regions of the endometrium. In particular, the opacity function can be computed such that regions that violate the risk indicator for the endometrium thickness are highlighted.
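
    The opacity control described above is driven by a distance transform to the detected endometrium border. A minimal sketch of that idea, with an assumed exponential fall-off and hypothetical function names, is:

        import numpy as np
        from scipy import ndimage

        def opacity_from_border_distance(endometrium_mask, falloff=5.0):
            # endometrium_mask: boolean 3D array of voxels inside the detected endometrium.
            # Border voxels are those removed by a one-voxel erosion of the mask.
            border = endometrium_mask & ~ndimage.binary_erosion(endometrium_mask)
            # Euclidean distance of every voxel to the nearest border voxel.
            dist = ndimage.distance_transform_edt(~border)
            # Opacity decays with distance from the border; the clinical system instead
            # exposes this mapping as a user-controllable function.
            return np.exp(-dist / falloff)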

  4. 3D imaging of nuclear reactions using GEM TPC

    NASA Astrophysics Data System (ADS)

    Bihałowicz, Jan S.; Ćwiok, Mikołaj; Dominik, Wojciech; Kasprowicz, Grzegorz; Poźniak, Krzysztof

    2014-11-01

    We present a prototype time projection chamber with planar electronic readout. The particular aspect of the readout is the arrangement and connection of pads into three linear arrays. A track of an ionizing particle may be reconstructed by applying specially developed algorithms to the signals generated simultaneously in the three linear arrays of strips rotated by 60°. This provides the measurement of the coordinates of the track segment corresponding to a defined time slice in the plane perpendicular to the drift vector. The relative coordinate in the orthogonal direction is provided by the measurement of the time sequence of signals at the known drift velocity. The precision of 3D reconstruction of charged tracks in low-energy nuclear reactions is expected to be comparable to that of a pixel readout, but with significantly reduced electronics costs. In this work the results of the first experiments using this TPC are presented. The reconstructed tracks of α particles from the decay of 222Rn, obtained using a simple algorithm, are shown. The encouraging results confirm the capability of such a TPC to measure low-energy charged products of nuclear reactions and nuclear decays.

  5. 3D imaging of microstructure of spruce wood.

    PubMed

    Trtik, P; Dual, J; Keunecke, D; Mannes, D; Niemz, P; Stähli, P; Kaestner, A; Groso, A; Stampanoni, M

    2007-07-01

    Synchrotron radiation phase-contrast X-ray tomographic microscopy (srPCXTM) was applied to observation and identification of the features of spruce anatomy at the cellular lengthscale. The pilot experiments presented in the paper clearly revealed the features of the heartwood of Spruce (Picea abies [L.] Karst.), such as lumina and pits connecting the lumina, with a theoretical voxel size of 0.7 x 0.7 x 0.7 microm(3). The experiments were carried out on microspecimens of heartwood, measuring approximately 200 by 200 micrometers in cross-section. The technique for production and preparation of wood microsamples was developed within the framework of this investigation. The total porosity of the samples was derived and the values of the microstructural parameters, such as the diameters of tracheid, cell wall thicknesses and pit diameters were assessed non-invasively. Microstructural features as thin/small as approximately 1.5 microm were revealed and reconstructed in 3D. It is suggested that the position of sub-voxel-sized features (such as position of tori in the bordered pit pairs) can be determined indirectly using watershed segmentation. Moreover, the paper discusses the practical issues connected with a pipelined phase-contrast synchrotron-based microtomography experiment and the possible future potentials of this technique in the domain of wood science. PMID:17398115

  6. Intensity-Based 2D-3D Spine Image Registration Incorporating One Fiducial Marker

    E-print Network

    Pratt, Vaughan

    The method registers a three-dimensional (3D) x-ray computed tomography (CT) image to one or more two-dimensional (2D) x-ray projection images, aligning the coordinate system of the CT image with that of the x-ray projection images and the operating room. In intensity-based 2D-3D registration, the reference image is an intra-operative x-ray projection (2D) image

  7. Robot control from sequential image planes of a 3D object

    NASA Astrophysics Data System (ADS)

    Premkumar, Saganti B.; Harman, Thomas L.; Houston, A. G.; Nguyen, Luong A.

    1993-10-01

    Movement of a robot head between desired points in a 3D volume from (x1,y1,z1) to (x2,y2,z2) is crucial for high accuracy. When the knowledge of a 3D volume is only partial, obtained as a data set of cross-sectional image planes, control parameters for movement of the robot head are critical for best accuracy. In the present approach an attempt is being made to develop an interface for transforming control parameters of a robot system for desired movements of the robot head in the 3D volume from a sequence of cross-sectional image planes. Coordinates of a desired location from image data are obtained, and their corresponding locations on the object are estimated. These coordinates are transformed through matrix transformation into control parameters for the desired movements of the robot system. Most diagnostic medical imaging modalities obtain cross-sectional image planes of vital human organs. Treatment procedures often require 3D volume considerations. In the present approach a hypothetical radiation treatment procedure for a prostate cancer tumor in a 3D volume from given 2D cross-sectional sequential image planes is presented. Diagnostic ultrasound images of the prostate are obtained as sequential cross-sectional image planes at 2 mm apart from base to apex of the gland. An approach for robot coordinate movements for a simple robotic system with five degrees of freedom (Eshed Robotics, ER VII) is presented.

  8. Digital breast tomosynthesis image reconstruction using 2D and 3D total variation minimization

    PubMed Central

    2013-01-01

    Background: Digital breast tomosynthesis (DBT) is an emerging imaging modality which produces three-dimensional radiographic images of the breast. DBT reconstructs tomographic images from a limited view angle, so the data acquired from DBT are not sufficient to reconstruct an exact image. It has been shown that a sparse image can be reconstructed from highly undersampled data via compressed sensing (CS) techniques. This can be done by minimizing the l1 norm of the gradient of the image, which can also be defined as total variation (TV) minimization. In the tomosynthesis imaging problem, this idea was utilized by minimizing the total variation of the image reconstructed by the algebraic reconstruction technique (ART). Previous studies have largely addressed 2-dimensional (2D) TV minimization and only few of them have mentioned 3-dimensional (3D) TV minimization. However, a quantitative analysis of 2D and 3D TV minimization with ART in DBT imaging has not been carried out. Methods: In this paper two DBT image reconstruction algorithms with total variation minimization have been developed and a comprehensive quantitative comparison of these two methods and ART has been carried out. The first method is ART + TV2D, where TV is applied to each slice independently. The other method is ART + TV3D, in which TV is applied by formulating the minimization problem in 3D, considering all slices. Results: A 3D phantom which roughly simulates a breast tomosynthesis image was designed to evaluate the performance of the methods both quantitatively and qualitatively in terms of visual assessment, structural similarity (SSIM), root mean square error (RMSE) of a specific layer of interest (LOI), and total error values. Both methods show superior results in reducing out-of-focus slice blur compared to ART. Conclusions: Computer simulations show that the ART + TV3D method substantially enhances the reconstructed image with fewer artifacts and smaller error rates than the other two algorithms under the same configuration and parameters, and it provides a faster convergence rate. PMID:24172584
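
    The ART + TV idea described above alternates an algebraic reconstruction sweep with a few total-variation reduction steps. The sketch below shows one such cycle for a small, explicit linear system; it is illustrative only (the paper works with full 3D DBT projection geometry, not a dense matrix), and the step sizes are assumed.

        import numpy as np

        def art_tv_cycle(A, b, x, shape, lam=0.1, tv_iters=5, relax=0.2, eps=1e-8):
            # One ART (Kaczmarz) sweep over the rays of a dense system A x = b ...
            for i in range(A.shape[0]):
                ai = A[i]
                denom = ai @ ai
                if denom > eps:
                    x = x + relax * (b[i] - ai @ x) / denom * ai
            # ... followed by a few steepest-descent steps that reduce the total variation.
            vol = x.reshape(shape)
            for _ in range(tv_iters):
                grads = [np.roll(vol, -1, axis=k) - vol for k in range(vol.ndim)]
                mag = np.sqrt(sum(g ** 2 for g in grads) + eps)
                div = sum(g / mag - np.roll(g / mag, 1, axis=k) for k, g in enumerate(grads))
                vol = vol + lam * div   # step along -dTV/dx, since dTV/dx = -div(grad/|grad|)
            return vol.ravel()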

  9. Making 3D Binary Digital Images Well-Composed Marcelo Siqueiraa, Longin Jan Lateckib and Jean Galliera

    E-print Network

    Latecki, Longin Jan

    A 3D binary digital image is said to be well-composed if and only if the set of points in the boundary between its foreground and background forms a 2D manifold. The paper presents a randomized algorithm for making 3D binary digital images that are not well-composed into well-composed ones.

  10. 3-D capacitance density imaging of fluidized bed

    DOEpatents

    Fasching, George E. (653 Vista Pl., Morgantown, WV 26505)

    1990-01-01

    A three-dimensional capacitance density imaging of a gasified bed or the like in a containment vessel is achieved using a plurality of electrodes provided circumferentially about the bed in levels and along the bed in channels. The electrodes are individually and selectively excited electrically at each level to produce a plurality of current flux field patterns generated in the bed at each level. The current flux field patterns are suitably sensed and a density pattern of the bed at each level determined. By combining the determined density patterns at each level, a three-dimensional density image of the bed is achieved.

  11. Applications of Panoramic Images: from 720° Panorama to Interior 3d Models of Augmented Reality

    NASA Astrophysics Data System (ADS)

    Lee, I.-C.; Tsai, F.

    2015-05-01

    A series of panoramic images is usually used to generate a 720° panorama image. Although panoramic images are typically used for establishing tour guiding systems, in this research we demonstrate the potential of using panoramic images acquired from multiple sites to create not only a 720° panorama, but also three-dimensional (3D) point clouds and 3D indoor models. Since 3D modeling is one of the goals of this research, the locations of the panoramic sites needed to be carefully planned in order to maintain a robust result for close-range photogrammetry. After the images are acquired, they are processed into 720° panoramas, which can be used directly in panorama guiding systems or other applications. In addition to these straightforward applications, interior orientation parameters can also be estimated while generating the 720° panorama. These parameters are the focal length, principal point, and lens radial distortion. The panoramic images can then be processed with close-range photogrammetry procedures to extract the exterior orientation parameters and generate 3D point clouds. In this research, VisualSFM, a structure-from-motion software package, is used to estimate the exterior orientation, and the CMVS toolkit is used to generate 3D point clouds. Next, the 3D point clouds are used as references to create building interior models. In this research, Trimble SketchUp was used to build the models, and the 3D point cloud assisted the determination of the locations of building objects using a plane-finding procedure. In the texturing process, the panorama images are used as the data source for creating model textures. This 3D indoor model was used as an augmented reality model replacing a guide map or a floor plan commonly used in an on-line touring guide system. The 3D indoor model generating procedure has been utilized in two research projects: a cultural heritage site at Kinmen, and the Taipei Main Station pedestrian zone guidance and navigation system. The results presented in this paper demonstrate the potential of using panoramic images to generate 3D point clouds and 3D models. However, it is currently a manual and labor-intensive process, and research is being carried out to increase the degree of automation of these procedures.

  12. Atherosclerosis imaging using 3D black blood TSE SPACE vs 2D TSE

    PubMed Central

    Wong, Stephanie K; Mobolaji-Iawal, Motunrayo; Arama, Leron; Cambe, Joy; Biso, Sylvia; Alie, Nadia; Fayad, Zahi A; Mani, Venkatesh

    2014-01-01

    AIM: To compare 3D Black Blood turbo spin echo (TSE) sampling perfection with application-optimized contrast using different flip angle evolution (SPACE) vs 2D TSE in evaluating atherosclerotic plaques in multiple vascular territories. METHODS: The carotid, aortic, and femoral arterial walls of 16 patients at risk for cardiovascular or atherosclerotic disease were studied using both 3D black blood magnetic resonance imaging SPACE and conventional 2D multi-contrast TSE sequences using a consolidated imaging approach in the same imaging session. Qualitative and quantitative analyses were performed on the images. Agreement of morphometric measurements between the two imaging sequences was assessed using a two-sample t-test, calculation of the intra-class correlation coefficient and by the method of linear regression and Bland-Altman analyses. RESULTS: No statistically significant qualitative differences were found between the 3D SPACE and 2D TSE techniques for images of the carotids and aorta. For images of the femoral arteries, however, there were statistically significant differences in all four qualitative scores between the two techniques. Using the current approach, 3D SPACE is suboptimal for femoral imaging. However, this may be due to coils not being optimized for femoral imaging. Quantitatively, in our study, higher mean total vessel area measurements for the 3D SPACE technique across all three vascular beds were observed. No significant differences in lumen area for both the right and left carotids were observed between the two techniques. Overall, a significant correlation existed between measures obtained with the two approaches. CONCLUSION: Qualitative and quantitative measurements between 3D SPACE and 2D TSE techniques are comparable. 3D-SPACE may be a feasible approach in the evaluation of cardiovascular patients. PMID:24876923

  13. Digital holography particle image velocimetry for the measurement of 3D t-3c flows

    NASA Astrophysics Data System (ADS)

    Shen, Gongxin; Wei, Runjie

    2005-10-01

    In this paper a digital in-line holographic recording and reconstruction system was set up and used for particle image velocimetry in 3D t-3c flow measurements (three-component (3c) velocity vector field measurements in a three-dimensional (3D) space with time history (t)), which make up the new full-flow-field experimental technique of digital holographic particle image velocimetry (DHPIV). The traditional holographic film was replaced by a CCD chip that records the interference fringes instantaneously and directly, without darkroom processing, and the virtual image slices at different positions were reconstructed by computation from the digital holographic image using the Fresnel-Kirchhoff integral method. A complex field signal filter (an analytic image computed from intensity and phase, i.e., from the real and imaginary parts obtained via the fast Fourier transform (FFT)) was also applied in the image reconstruction to achieve a thin focal depth of the image field, which strongly affects the resolution of the vertical velocity component. Using frame-straddling CCD techniques, the 3c velocity vector was computed by 3D cross-correlation, matching spatial interrogation blocks across the reconstructed image slices with the digital complex field signal filter. The 3D-3c velocity field (about 20 000 vectors), 3D streamline and 3D vorticity fields, and time-evolution movies (30 fields/s) for the 3D t-3c flows were then obtained by experimental measurement using this DHPIV method.
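
    Numerical refocusing of the in-line hologram to a chosen depth slice is the computational core of the method above. The paper uses the Fresnel-Kirchhoff integral; the closely related angular-spectrum propagation sketched below (with assumed units and without the complex-field filtering step) conveys the same refocusing idea.

        import numpy as np

        def refocus_slice(hologram, z, wavelength, dx):
            # hologram: 2D real array (recorded interference pattern); z: propagation distance;
            # wavelength and dx (pixel pitch) in the same length unit as z.
            ny, nx = hologram.shape
            fx = np.fft.fftfreq(nx, d=dx)
            fy = np.fft.fftfreq(ny, d=dx)
            FX, FY = np.meshgrid(fx, fy)
            k = 2.0 * np.pi / wavelength
            arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
            H = np.exp(1j * k * z * np.sqrt(np.maximum(arg, 0.0)))  # evanescent components dropped
            field = np.fft.ifft2(np.fft.fft2(hologram) * H)
            return field  # complex field of the reconstructed slice; particles appear in |field|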

  14. Adaptive optics retinal imaging in the living mouse eye

    PubMed Central

    Geng, Ying; Dubra, Alfredo; Yin, Lu; Merigan, William H.; Sharma, Robin; Libby, Richard T.; Williams, David R.

    2012-01-01

    Correction of the eye’s monochromatic aberrations using adaptive optics (AO) can improve the resolution of in vivo mouse retinal images [Biss et al., Opt. Lett. 32(6), 659 (2007) and Alt et al., Proc. SPIE 7550, 755019 (2010)], but previous attempts have been limited by poor spot quality in the Shack-Hartmann wavefront sensor (SHWS). Recent advances in mouse eye wavefront sensing using an adjustable focus beacon with an annular beam profile have improved the wavefront sensor spot quality [Geng et al., Biomed. Opt. Express 2(4), 717 (2011)], and we have incorporated them into a fluorescence adaptive optics scanning laser ophthalmoscope (AOSLO). The performance of the instrument was tested on the living mouse eye, and images of multiple retinal structures, including the photoreceptor mosaic, nerve fiber bundles, fine capillaries and fluorescently labeled ganglion cells were obtained. The in vivo transverse and axial resolutions of the fluorescence channel of the AOSLO were estimated from the full width half maximum (FWHM) of the line and point spread functions (LSF and PSF), and were found to be better than 0.79 μm ± 0.03 μm (STD) (45% wider than the diffraction limit) and 10.8 μm ± 0.7 μm (STD) (two times the diffraction limit), respectively. The axial positional accuracy was estimated to be 0.36 μm. This resolution and positional accuracy has allowed us to classify many ganglion cell types, such as bistratified ganglion cells, in vivo. PMID:22574260

  15. Automated retinal vessel type classification in color fundus images

    NASA Astrophysics Data System (ADS)

    Yu, H.; Barriga, S.; Agurto, C.; Nemeth, S.; Bauman, W.; Soliz, P.

    2013-02-01

    Automated retinal vessel type classification is an essential first step toward machine-based quantitative measurement of various vessel topological parameters and identifying vessel abnormalities and alternations in cardiovascular disease risk analysis. This paper presents a new and accurate automatic artery and vein classification method developed for arteriolar-to-venular width ratio (AVR) and artery and vein tortuosity measurements in regions of interest (ROI) of 1.5 and 2.5 optic disc diameters from the disc center, respectively. This method includes illumination normalization, automatic optic disc detection and retinal vessel segmentation, feature extraction, and a partial least squares (PLS) classification. Normalized multi-color information, color variation, and multi-scale morphological features are extracted on each vessel segment. We trained the algorithm on a set of 51 color fundus images using manually marked arteries and veins. We tested the proposed method in a previously unseen test data set consisting of 42 images. We obtained an area under the ROC curve (AUC) of 93.7% in the ROI of AVR measurement and 91.5% of AUC in the ROI of tortuosity measurement. The proposed AV classification method has the potential to assist automatic cardiovascular disease early detection and risk analysis.
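
    The partial least squares (PLS) step above can be prototyped with scikit-learn by regressing the per-segment feature vectors onto a binary artery/vein label and thresholding the continuous response. This is a sketch under assumed feature extraction; the function names and the 0.5 threshold are illustrative, not the authors' implementation.

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression

        def train_pls_vessel_classifier(features, labels, n_components=10):
            # features: (n_segments, n_features) array; labels: 1 for artery, 0 for vein.
            pls = PLSRegression(n_components=n_components)
            pls.fit(features, labels.astype(float))
            return pls

        def classify_vessel_segments(pls, features, threshold=0.5):
            scores = pls.predict(features).ravel()      # continuous PLS response per segment
            return (scores >= threshold).astype(int), scores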

  16. Adaptive optics retinal imaging in the living mouse eye.

    PubMed

    Geng, Ying; Dubra, Alfredo; Yin, Lu; Merigan, William H; Sharma, Robin; Libby, Richard T; Williams, David R

    2012-04-01

    Correction of the eye's monochromatic aberrations using adaptive optics (AO) can improve the resolution of in vivo mouse retinal images [Biss et al., Opt. Lett. 32(6), 659 (2007) and Alt et al., Proc. SPIE 7550, 755019 (2010)], but previous attempts have been limited by poor spot quality in the Shack-Hartmann wavefront sensor (SHWS). Recent advances in mouse eye wavefront sensing using an adjustable focus beacon with an annular beam profile have improved the wavefront sensor spot quality [Geng et al., Biomed. Opt. Express 2(4), 717 (2011)], and we have incorporated them into a fluorescence adaptive optics scanning laser ophthalmoscope (AOSLO). The performance of the instrument was tested on the living mouse eye, and images of multiple retinal structures, including the photoreceptor mosaic, nerve fiber bundles, fine capillaries and fluorescently labeled ganglion cells were obtained. The in vivo transverse and axial resolutions of the fluorescence channel of the AOSLO were estimated from the full width half maximum (FWHM) of the line and point spread functions (LSF and PSF), and were found to be better than 0.79 μm ± 0.03 μm (STD) (45% wider than the diffraction limit) and 10.8 μm ± 0.7 μm (STD) (two times the diffraction limit), respectively. The axial positional accuracy was estimated to be 0.36 μm. This resolution and positional accuracy has allowed us to classify many ganglion cell types, such as bistratified ganglion cells, in vivo. PMID:22574260

  17. A survey of 3D medical imaging technologies

    Microsoft Academic Search

    G. T. Herman

    1990-01-01

    Three-dimensional medical imaging methodologies are surveyed with respect to hardware versus software, stand-alone versus on-the-scanner, speed, interaction, rendering methodology, fidelity, ease of use, cost, and quantitative capability. The question of volume versus surface rendering is considered in more detail. Research results are cited to illustrate the capabilities discussed

  18. Nondestructive imaging of stem cell in 3D scaffold

    NASA Astrophysics Data System (ADS)

    Chen, Chao-Wei; Yeatts, Andrew B.; Fisher, John P.; Chen, Yu

    2012-06-01

    We have developed a line-scanning angled fluorescent laminar optical tomography (LS-aFLOT) system. This system enables three-dimensional imaging of fluorescent-labeled stem cell distribution within engineered tissue scaffold over a several-millimeter field-of-view.

  19. Radiometric modeling of a 3D imaging laser scanner

    Microsoft Academic Search

    Sergio Ortiz; Jose Diaz-Caro; Rosario Pareja

    2005-01-01

    Active imaging systems allow obtaining data in more than two dimensions. In addition to the spatial information, these systems are able to provide the intensity distribution of a scene. From this data channel, a number of physical quantities that characterize the illuminated surface can be recovered. The different behaviours of the scene elements with respect to the directionality

  20. Space Radar Image of Kilauea, Hawaii in 3-D

    NASA Technical Reports Server (NTRS)

    1999-01-01

    This is a three-dimensional perspective view of a false-color image of the eastern part of the Big Island of Hawaii. It was produced using all three radar frequencies -- X-band, C-band and L-band -- from the Spaceborne Imaging Radar-C/X-Band Synthetic Aperture Radar (SIR-C/X-SAR) flying on the space shuttle Endeavour, overlaid on a U.S. Geological Survey digital elevation map. Visible in the center of the image in blue are the summit crater (Kilauea Caldera) which contains the smaller Halemaumau Crater, and the line of collapse craters below them that form the Chain of Craters Road. The image was acquired on April 12, 1994 during orbit 52 of the space shuttle. The area shown is approximately 34 by 57 kilometers (21 by 35 miles) with the top of the image pointing toward northwest. The image is centered at about 155.25 degrees west longitude and 19.5 degrees north latitude. The false colors are created by displaying three radar channels of different frequency. Red areas correspond to high backscatter at L-HV polarization, while green areas exhibit high backscatter at C-HV polarization. Finally, blue shows high return at X-VV polarization. Using this color scheme, the rain forest appears bright on the image, while the green areas correspond to lower vegetation. The lava flows have different colors depending on their types and are easily recognizable due to their shapes. The flows at the top of the image originated from the Mauna Loa volcano. Kilauea volcano has been almost continuously active for more than the last 11 years. Field teams that were on the ground specifically to support these radar observations report that there was vigorous surface activity about 400 meters (one-quarter mile) inland from the coast. A moving lava flow about 200 meters (650 feet) in length was observed at the time of the shuttle overflight, raising the possibility that subsequent images taken during this mission will show changes in the landscape. Currently, most of the lava that is erupted travels the 8 kilometers (5 miles) from the Pu'u O'o crater (the active vent) just outside this image to the coast through a series of lava tubes, but in the past there have been many large lava flows that have traveled this distance, destroying houses and parts of the Hawaii Volcanoes National Park. This SIR-C/X-SAR image shows two types of lava flows that are common to Hawaiian volcanoes. Pahoehoe lava flows are relatively smooth, and appear very dark blue because much of the radar energy is reflected away from the radar. In contrast, other lava flows are relatively rough and bounce much of the radar energy back to the radar, making that part of the image bright blue. This radar image is valuable because it allows scientists to study an evolving lava flow field from the Pu'u O'o vent. Much of the area on the northeast side (right) of the volcano is covered with tropical rain forest, and because trees reflect a lot of the radar energy, the forest appears bright in this radar scene. The linear feature running from Kilauea Crater to the right of the image is Highway 11 leading to the city of Hilo which is located just beyond the right edge of this image. Spaceborne Imaging Radar-C and X-Synthetic Aperture Radar (SIR-C/X-SAR) is part of NASA's Mission to Planet Earth. The radars illuminate Earth with microwaves allowing detailed observations at any time, regardless of weather or sunlight conditions. SIR-C/X-SAR uses three microwave wavelengths: L-band (24 cm), C-band (6 cm) and X-band (3 cm). 
The multi-frequency data will be used by the international scientific community to better understand the global environment and how it is changing. The SIR-C/X-SAR data, complemented by aircraft and ground studies, will give scientists clearer insights into those environmental changes which are caused by nature and those changes which are induced by human activity. SIR-C was developed by NASA's Jet Propulsion Laboratory. X-SAR was developed by the Dornier and Alenia Spazio companies for the German space agency, Deutsche Agentur fuer Raumfahrtangelegenheiten (DARA)

  1. High-resolution 3D X-ray imaging of intracranial nitinol stents

    Microsoft Academic Search

    Rudolph M. Snoeren; Michael Söderman; Johannes N. Kroon; Ruben B. Roijers; Drazenko Babic

    2011-01-01

    Introduction: To assess an optimized 3D imaging protocol for intracranial nitinol stents in 3D C-arm flat detector imaging. For this purpose, an image quality simulation and an in vitro study were carried out. Methods: Nitinol stents of various brands were placed inside an anthropomorphic head phantom, using iodine contrast. Experiments with objects were preceded by image quality and dose simulations. We varied

  2. Simultaneous whole-animal 3D-imaging of neuronal activity using light field microscopy

    E-print Network

    Prevedel, R; Hoffmann, M; Pak, N; Wetzstein, G; Kato, S; Schrödel, T; Raskar, R; Zimmer, M; Boyden, E S; Vaziri, A

    2014-01-01

    3D functional imaging of neuronal activity in entire organisms at single cell level and physiologically relevant time scales faces major obstacles due to trade-offs between the size of the imaged volumes and spatial and temporal resolution. Here, using light-field microscopy in combination with 3D deconvolution, we demonstrate intrinsically simultaneous volumetric functional imaging of neuronal population activity at single neuron resolution for an entire organism, the nematode Caenorhabditis elegans. The simplicity of our technique and the possibility of integration into epi-fluorescence microscopes make it an attractive tool for high-speed volumetric calcium imaging.

  3. Mixed reality orthognathic surgical simulation by entity model manipulation and 3D-image display

    NASA Astrophysics Data System (ADS)

    Shimonagayoshi, Tatsunari; Aoki, Yoshimitsu; Fushima, Kenji; Kobayashi, Masaru

    2005-12-01

    In orthognathic surgery, the framing of a 3D surgical plan that considers the balance between the front and back positions and the symmetry of the jawbone, as well as the dental occlusion of the teeth, is essential. In this study, a support system for orthognathic surgery has been developed to visualize the changes in the mandible and the occlusal condition and to determine the optimum position in mandibular osteotomy. By integrating the operating portion, in which the optimum occlusal position is determined by manipulating the entity (physical) tooth model, with the 3D image display portion, in which the 3D-CT skeletal images are simultaneously displayed in real time, the mandibular position and posture that account for the improvement of skeletal morphology and occlusal condition can be determined. The realistic operation of the entity model and the virtual 3D image display enabled the construction of a surgical simulation system that involves augmented reality.

  4. An adaptive-optics scanning laser ophthalmoscope for imaging murine retinal microstructure

    NASA Astrophysics Data System (ADS)

    Alt, Clemens; Biss, David P.; Tajouri, Nadja; Jakobs, Tatjana C.; Lin, Charles P.

    2010-02-01

    In vivo retinal imaging is an outstanding tool to observe biological processes unfold in real-time. The ability to image microstructure in vivo can greatly enhance our understanding of function in retinal microanatomy under normal conditions and in disease. Transgenic mice are frequently used for mouse models of retinal diseases. However, commercially available retinal imaging instruments lack the optical resolution and spectral flexibility necessary to visualize detail comprehensively. We developed an adaptive optics scanning laser ophthalmoscope (AO-SLO) specifically for mouse eyes. Our SLO is a sensor-less adaptive optics system (no Shack-Hartmann sensor) that employs a stochastic parallel gradient descent algorithm to modulate a deformable mirror, ultimately aiming to correct wavefront aberrations by optimizing confocal image sharpness. The resulting resolution allows detailed observation of retinal microstructure. The AO-SLO can resolve retinal microglia and their moving processes, demonstrating that microglia processes are highly motile, constantly probing their immediate environment. Similarly, retinal ganglion cells are imaged along with their axons and sprouting dendrites. Retinal blood vessels are imaged both using Evans blue fluorescence and backscattering contrast.

  5. Mechanically assisted 3D prostate ultrasound imaging and biopsy needle-guidance system

    NASA Astrophysics Data System (ADS)

    Bax, Jeffrey; Williams, Jackie; Cool, Derek; Gardi, Lori; Montreuil, Jacques; Karnik, Vaishali; Sherebrin, Shi; Romagnoli, Cesare; Fenster, Aaron

    2010-02-01

    Prostate biopsy procedures are currently limited to using 2D transrectal ultrasound (TRUS) imaging to guide the biopsy needle. Being limited to 2D causes ambiguity in needle guidance and provides an insufficient record to allow guidance to the same suspicious locations or avoid regions that are negative during previous biopsy sessions. We have developed a mechanically assisted 3D ultrasound imaging and needle tracking system, which supports a commercially available TRUS probe and integrated needle guide for prostate biopsy. The mechanical device is fixed to a cart and the mechanical tracking linkage allows its joints to be manually manipulated while fully supporting the weight of the ultrasound probe. The computer interface is provided in order to track the needle trajectory and display its path on a corresponding 3D TRUS image, allowing the physician to aim the needle-guide at predefined targets within the prostate. The system has been designed for use with several end-fired transducers that can be rotated about the longitudinal axis of the probe in order to generate 3D image for 3D navigation. Using the system, 3D TRUS prostate images can be generated in approximately 10 seconds. The system reduces most of the user variability from conventional hand-held probes, which make them unsuitable for precision biopsy, while preserving some of the user familiarity and procedural workflow. In this paper, we describe the 3D TRUS guided biopsy system and report on the initial clinical use of this system for prostate biopsy.

  6. Segmentation of vertebral bodies in CT and MR images based on 3D deterministic models

    NASA Astrophysics Data System (ADS)

    Štern, Darko; Vrtovec, Tomaž; Pernuš, Franjo; Likar, Boštjan

    2011-03-01

    The evaluation of vertebral deformations is of great importance in clinical diagnostics and therapy of pathological conditions affecting the spine. Although modern clinical practice is oriented towards the computed tomography (CT) and magnetic resonance (MR) imaging techniques, as they can provide a detailed 3D representation of vertebrae, the established methods for the evaluation of vertebral deformations still provide only a two-dimensional (2D) geometrical description. Segmentation of vertebrae in 3D may therefore not only improve their visualization, but also provide reliable and accurate 3D measurements of vertebral deformations. In this paper we propose a method for 3D segmentation of individual vertebral bodies that can be performed in CT and MR images. Initialized with a single point inside the vertebral body, the segmentation is performed by optimizing the parameters of a 3D deterministic model of the vertebral body to achieve the best match of the model to the vertebral body in the image. The performance of the proposed method was evaluated on five CT (40 vertebrae) and five T2-weighted MR (40 vertebrae) spine images, among them five are normal and five are pathological. The results show that the proposed method can be used for 3D segmentation of vertebral bodies in CT and MR images and that the proposed model can describe a variety of vertebral body shapes. The method may be therefore used for initializing whole vertebra segmentation or reliably describing vertebral body deformations.

  7. Simulation of a new 3D imaging sensor for identifying difficult military targets

    NASA Astrophysics Data System (ADS)

    Harvey, Christophe; Wood, Jonathan; Randall, Peter; Watson, Graham; Smith, Gordon

    2008-04-01

    This paper reports the successful application of automatic target recognition and identification (ATR/I) algorithms to simulated 3D imagery of 'difficult' military targets. QinetiQ and Selex S&AS are engaged in a joint programme to build a new 3D laser imaging sensor for UK MOD. The sensor is a 3D flash system giving an image containing range and intensity information suitable for targeting operations from fast jet platforms, and is currently being integrated with an ATR/I suite for demonstration and testing. The sensor has been extensively modelled and a set of high fidelity simulated imagery has been generated using the CAMEO-SIM scene generation software tool. These include a variety of different scenarios (varying range, platform altitude, target orientation and environments), and some 'difficult' targets such as concealed military vehicles. The ATR/I algorithms have been tested on this image set and their performance compared to 2D passive imagery from the airborne trials using a Wescam MX-15 infrared sensor and real-time ATR/I suite. This paper outlines the principles behind the sensor model and the methodology of 3D scene simulation. An overview of the 3D ATR/I programme and algorithms is presented, and the relative performance of the ATR/I against the simulated image set is reported. Comparisons are made to the performance of typical 2D sensors, confirming the benefits of 3D imaging for targeting applications.

  8. Model-based 3-D scene analysis from stereoscopic image sequences

    NASA Astrophysics Data System (ADS)

    Koch, Reinhard

    A vision-based 3-D scene analysis system is described that is capable of modeling complex real-world scenes, such as buildings, automatically from stereoscopic image pairs. Input to the system is a sequence of stereoscopic images taken with two standard CCD cameras and TV lenses. The relative orientation of the two cameras to each other is known by calibration. The camera pair is then moved through the scene and a long sequence of closely spaced views is recorded. Each of the stereoscopic image pairs is rectified and a dense map of 3-D surface points is obtained by area correlation, object segmentation, interpolation, and triangulation. The 3-D camera motion relative to the scene coordinate system is tracked directly from the image sequence, which allows 3-D surface measurements from different viewpoints to be fused into a consistent 3-D scene model. The surface geometry of each scene object is approximated by a triangular surface mesh which stores the surface texture in a texture map. From the textured 3-D models, realistic-looking image sequences from arbitrary viewpoints can be synthesized using computer graphics.

  9. Space Radar Image of Long Valley, California - 3D view

    NASA Technical Reports Server (NTRS)

    1994-01-01

    This is a three-dimensional perspective view of Long Valley, California by the Spaceborne Imaging Radar-C/X-band Synthetic Aperture Radar on board the space shuttle Endeavour. This view was constructed by overlaying a color composite SIR-C image on a digital elevation map. The digital elevation map was produced using radar interferometry, a process by which radar data are acquired on different passes of the space shuttle and, which then, are compared to obtain elevation information. The data were acquired on April 13, 1994 and on October 3, 1994, during the first and second flights of the SIR-C/X-SAR radar instrument. The color composite radar image was produced by assigning red to the C-band (horizontally transmitted and vertically received) polarization; green to the C-band (vertically transmitted and received) polarization; and blue to the ratio of the two data sets. Blue areas in the image are smooth and yellow areas are rock outcrops with varying amounts of snow and vegetation. The view is looking north along the northeastern edge of the Long Valley caldera, a volcanic collapse feature created 750,000 years ago and the site of continued subsurface activity. Crowley Lake is off the image to the left. Spaceborne Imaging Radar-C and X-band Synthetic Aperture Radar (SIR-C/X-SAR) is part of NASA's Mission to Planet Earth. The radars illuminate Earth with microwaves, allowing detailed observations at any time, regardless of weather or sunlight conditions. SIR-C/X-SAR uses three microwave wavelengths: L-band (24 cm), C-band (6 cm) and X-band (3 cm). The multi-frequency data will be used by the international scientific community to better understand the global environment and how it is changing. The SIR-C/X-SAR data, complemented by aircraft and ground studies, will give scientists clearer insights into those environmental changes which are caused by nature and those changes which are induced by human activity. SIR-C was developed by NASA's Jet Propulsion Laboratory. X-SAR was developed by the Dornier and Alenia Spazio companies for the German space agency, Deutsche Agentur fuer Raumfahrtangelegenheiten (DARA), and the Italian space agency, Agenzia Spaziale Italiana (ASI), with the Deutsche Forschungsanstalt fuer Luft und Raumfahrt e.V.(DLR), the major partner in science, operations and data processing of X-SAR.

  10. Space Radar Image of Long Valley, California in 3-D

    NASA Technical Reports Server (NTRS)

    1994-01-01

    This three-dimensional perspective view of Long Valley, California was created from data taken by the Spaceborne Imaging Radar-C/X-band Synthetic Aperture Radar on board the space shuttle Endeavour. This image was constructed by overlaying a color composite SIR-C radar image on a digital elevation map. The digital elevation map was produced using radar interferometry, a process by which radar data are acquired on different passes of the space shuttle. The two data passes are compared to obtain elevation information. The interferometry data were acquired on April 13,1994 and on October 3, 1994, during the first and second flights of the SIR-C/X-SAR instrument. The color composite radar image was taken in October and was produced by assigning red to the C-band (horizontally transmitted and vertically received) polarization; green to the C-band (vertically transmitted and received) polarization; and blue to the ratio of the two data sets. Blue areas in the image are smooth and yellow areas are rock outcrops with varying amounts of snow and vegetation. The view is looking north along the northeastern edge of the Long Valley caldera, a volcanic collapse feature created 750,000 years ago and the site of continued subsurface activity. Crowley Lake is the large dark feature in the foreground. Spaceborne Imaging Radar-C and X-band Synthetic Aperture Radar (SIR-C/X-SAR) is part of NASA's Mission to Planet Earth. The radars illuminate Earth with microwaves, allowing detailed observations at any time, regardless of weather or sunlight conditions. SIR-C/X-SAR uses three microwave wavelengths: L-band (24 cm), C-band (6 cm) and X-band (3 cm). The multi-frequency data will be used by the international scientific community to better understand the global environment and how it is changing. The SIR-C/X-SAR data, complemented by aircraft and ground studies, will give scientists clearer insights into those environmental changes which are caused by nature and those changes which are induced by human activity. SIR-C was developed by NASA's Jet Propulsion Laboratory. X-SAR was developed by the Dornier and Alenia Spazio companies for the German space agency, Deutsche Agentur fuer Raumfahrtangelegenheiten (DARA), and the Italian space agency, Agenzia Spaziale Italiana (ASI), with the Deutsche Forschungsanstalt fuer Luft und Raumfahrt e.v. (DLR), the major partner in science, operations and data processing of X-SAR.

  11. Augmented reality navigation with automatic marker-free image registration using 3-D image overlay for dental surgery.

    PubMed

    Wang, Junchen; Suenaga, Hideyuki; Hoshi, Kazuto; Yang, Liangjing; Kobayashi, Etsuko; Sakuma, Ichiro; Liao, Hongen

    2014-04-01

    Computer-assisted oral and maxillofacial surgery (OMS) has been rapidly evolving since the last decade. State-of-the-art surgical navigation in OMS still suffers from bulky tracking sensors, troublesome image registration procedures, patient movement, loss of depth perception in visual guidance, and low navigation accuracy. We present an augmented reality navigation system with automatic marker-free image registration using 3-D image overlay and stereo tracking for dental surgery. A customized stereo camera is designed to track both the patient and instrument. Image registration is performed by patient tracking and real-time 3-D contour matching, without requiring any fiducial and reference markers. Real-time autostereoscopic 3-D imaging is implemented with the help of a consumer-level graphics processing unit. The resulting 3-D image of the patient's anatomy is overlaid on the surgical site by a half-silvered mirror using image registration and IP-camera registration to guide the surgeon by exposing hidden critical structures. The 3-D image of the surgical instrument is also overlaid over the real one for an augmented display. The 3-D images present both stereo and motion parallax from which depth perception can be obtained. Experiments were performed to evaluate various aspects of the system; the overall image overlay error of the proposed system was 0.71 mm. PMID:24658253

  12. Free-Breathing 3D Whole Heart Black Blood Imaging with Motion Sensitized Driven Equilibrium

    PubMed Central

    Srinivasan, Subashini; Hu, Peng; Kissinger, Kraig V.; Goddu, Beth; Goepfert, Lois; Schmidt, Ehud J.; Kozerke, Sebastian; Nezafat, Reza

    2012-01-01

    Purpose: To assess the efficacy and robustness of motion sensitized driven equilibrium (MSDE) for blood suppression in volumetric 3D whole-heart cardiac MR. Materials and Methods: To investigate the efficacy of MSDE for blood suppression and the associated myocardial SNR loss with different imaging sequences, 7 healthy adult subjects were imaged using 3D ECG-triggered MSDE-prep T1-weighted turbo spin echo (TSE) and spoiled gradient echo (GRE) sequences, after optimization of the MSDE parameters in a pilot study of 5 subjects. Imaging artifacts and myocardial and blood SNR were assessed. Subsequently, the feasibility of isotropic-spatial-resolution MSDE-prep black-blood imaging was assessed in 6 subjects. Finally, 15 patients with known or suspected cardiovascular disease were recruited and imaged using a conventional multi-slice 2D DIR TSE imaging sequence and 3D MSDE-prep spoiled GRE. Results: The MSDE-prep yields significant blood suppression (75-92%), enabling a volumetric 3D black-blood assessment of the whole heart with significantly improved visualization of the chamber walls. The MSDE-prep also allowed successful acquisition of black-blood images with isotropic spatial resolution. In the patient study, 3D black-blood MSDE-prep and DIR resulted in similar blood suppression in the LV and RV walls, but the MSDE-prep had superior myocardial signal and wall sharpness. Conclusion: The MSDE-prep allows volumetric black-blood imaging of the heart. PMID:22517477

  13. A molecular image-directed, 3D ultrasound-guided biopsy system for the prostate

    NASA Astrophysics Data System (ADS)

    Fei, Baowei; Schuster, David M.; Master, Viraj; Akbari, Hamed; Fenster, Aaron; Nieh, Peter

    2012-02-01

    Systematic transrectal ultrasound (TRUS)-guided biopsy is the standard method for a definitive diagnosis of prostate cancer. However, this biopsy approach uses two-dimensional (2D) ultrasound images to guide biopsy and can miss up to 30% of prostate cancers. We are developing a molecular image-directed, three-dimensional (3D) ultrasound image-guided biopsy system for improved detection of prostate cancer. The system consists of a 3D mechanical localization system and a software workstation for image segmentation, registration, and biopsy planning. In order to plan biopsies in the 3D prostate, we developed an automatic segmentation method based on the wavelet transform. In order to incorporate PET/CT images into ultrasound-guided biopsy, we developed image registration methods to fuse TRUS and PET/CT images. The segmentation method was tested in ten patients with a DICE overlap ratio of 92.4% ± 1.1%. The registration method has been tested in phantoms. The biopsy system was tested in prostate phantoms and 3D ultrasound images were acquired from two human patients. We are integrating the system for PET/CT-directed, 3D ultrasound-guided, targeted biopsy in human patients.
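
    The DICE overlap ratio quoted above (92.4% ± 1.1%) is a standard agreement measure between a binary segmentation and a reference mask; a minimal implementation is:

        import numpy as np

        def dice_overlap(segmentation, reference):
            # Both inputs are binary masks of the same shape.
            seg = segmentation.astype(bool)
            ref = reference.astype(bool)
            intersection = np.logical_and(seg, ref).sum()
            return 2.0 * intersection / (seg.sum() + ref.sum())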

  14. Critical Comparison of 3-d Imaging Approaches for NGST

    E-print Network

    Charles L. Bennett

    1999-08-22

    Currently three imaging spectrometer architectures, tunable filter, dispersive, and Fourier transform, are viable for imaging the universe in three dimensions. There are domains of greatest utility for each of these architectures. The optimum choice among the various alternative architectures is dependent on the nature of the desired observations, the maturity of the relevant technology, and the character of the backgrounds. The domain appropriate for each of the alternatives is delineated; both for instruments having ideal performance as well as for instrumentation based on currently available technology. The environment and science objectives for the Next Generation Space Telescope will be used as a specific representative case to provide a basis for comparison of the various alternatives.

  15. Make3D: Depth Perception from a Single Still Image

    Microsoft Academic Search

    Ashutosh Saxena; Min Sun; Andrew Y. Ng

    2008-01-01

    Humans have an amazing ability to perceive depth from a single still image; however, it remains a challenging problem for current computer vision systems. In this paper, we will present algorithms for estimating depth from a single still image. There are numerous monocular cues, such as texture variations and gradients, defocus, and color/haze, that can be used for depth perception.

  16. 3D and 4D magnetic susceptibility tomography based on complex MR images

    DOEpatents

    Chen, Zikuan; Calhoun, Vince D

    2014-11-11

    Magnetic susceptibility is the physical property for T2*-weighted magnetic resonance imaging (T2*MRI). The invention relates to methods for reconstructing an internal distribution (3D map) of magnetic susceptibility values, χ(x,y,z), of an object from 3D T2*MRI phase images, by using Computed Inverse Magnetic Resonance Imaging (CIMRI) tomography. The CIMRI technique solves the inverse problem of the 3D convolution by executing a 3D Total Variation (TV) regularized iterative convolution scheme, using a split Bregman iteration algorithm. The reconstruction of χ(x,y,z) can be designed for low-pass, band-pass, and high-pass features by using a convolution kernel that is modified from the standard dipole kernel. Multiple reconstructions can be implemented in parallel, and averaging the reconstructions can suppress noise. 4D dynamic magnetic susceptibility tomography can be implemented by reconstructing a 3D susceptibility volume from a 3D phase volume by performing 3D CIMRI magnetic susceptibility tomography at each snapshot time.
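
    The convolutional forward model that CIMRI inverts maps the susceptibility map χ(x,y,z) to a field (phase-derived) map through the unit magnetic dipole kernel, conveniently evaluated in k-space. The sketch below shows only that forward step, with B0 assumed along z; the TV-regularized split-Bregman inversion itself is not reproduced, and the normalization is an assumption.

        import numpy as np

        def dipole_forward_field(chi, voxel_size=(1.0, 1.0, 1.0)):
            # chi: 3D susceptibility volume; returns the dimensionless field perturbation
            # obtained by convolving chi with the unit dipole kernel (k-space multiplication).
            nz, ny, nx = chi.shape
            kz = np.fft.fftfreq(nz, d=voxel_size[0])
            ky = np.fft.fftfreq(ny, d=voxel_size[1])
            kx = np.fft.fftfreq(nx, d=voxel_size[2])
            KZ, KY, KX = np.meshgrid(kz, ky, kx, indexing="ij")
            k2 = KX ** 2 + KY ** 2 + KZ ** 2
            # dipole kernel D(k) = 1/3 - kz^2/|k|^2, with the undefined DC term set to zero
            D = 1.0 / 3.0 - np.where(k2 > 0, KZ ** 2 / np.where(k2 > 0, k2, 1.0), 0.0)
            D[0, 0, 0] = 0.0
            return np.real(np.fft.ifftn(D * np.fft.fftn(chi)))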

  17. High-resolution wide-field imaging of retinal and choroidal blood perfusion with optical microangiography

    PubMed Central

    An, Lin; Subhush, Hrebesh M.; Wilson, David J.; Wang, Ruikang K.

    2010-01-01

    We present high-resolution wide-field imaging of retinal and choroidal blood perfusion with optical microangiography (OMAG) technology. Based on spatial frequency analysis, OMAG is capable of visualizing the vascular perfusion map down to capillary-level resolution. An OMAG system operating at 840 nm is used with an A-scan rate of 27,000 Hz, axial resolution of 8 μm, and sensitivity of 98 dB. To achieve wide-field imaging, we capture 16 optical coherence tomography (OCT) 3-D datasets in a sequential order, which together provide an area of ~7.4×7.4 mm² at the posterior segment of the human eye. For each of these datasets, the bulk tissue motion artifacts are eliminated by applying a phase compensation method based on histogram estimation of bulk motion phases, while the displacements occurring between adjacent B-frames are compensated for by 2-D cross correlation between two adjacent OMAG flow images. The depth-resolved capability of OMAG imaging also provides volumetric information on the ocular circulations. Finally, we compare the clinical fluorescein angiography and indocyanine green angiography imaging results with the OMAG results of blood perfusion map within the retina and choroid, and show excellent agreement between these modalities. PMID:20459256
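
    The inter-B-frame displacement compensation mentioned above amounts to finding the 2D shift that maximizes the cross correlation between adjacent OMAG flow images. A minimal integer-pixel sketch using phase correlation (an assumed variant; sub-pixel refinement omitted) is:

        import numpy as np

        def bframe_shift(frame_a, frame_b):
            # Returns the (row, col) shift of frame_b relative to frame_a.
            Fa = np.fft.fft2(frame_a)
            Fb = np.fft.fft2(frame_b)
            cross = Fa * np.conj(Fb)
            corr = np.fft.ifft2(cross / (np.abs(cross) + 1e-12))   # normalized (phase) correlation
            peak = np.array(np.unravel_index(np.argmax(np.abs(corr)), corr.shape))
            dims = np.array(corr.shape)
            peak[peak > dims // 2] -= dims[peak > dims // 2]       # wrap to signed displacements
            return tuple(int(p) for p in peak)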

  19. Studying disagreements among retinal experts through image analysis.

    PubMed

    Quellec, Gwénolé; Lamard, Mathieu; Cochener, Béatrice; Droueche, Zakarya; Lay, Bruno; Chabouis, Agnès; Roux, Christian; Cazuguel, Guy

    2012-01-01

    In recent years, many image analysis algorithms have been presented to assist Diabetic Retinopathy (DR) screening. The goal was usually to detect healthy examination records automatically, in order to reduce the number of records that should be analyzed by retinal experts. In this paper, a novel application is presented: these algorithms are used to 1) discover image characteristics that sometimes cause an expert to disagree with his/her peers and 2) warn the expert whenever these characteristics are detected in an examination record. In a DR screening program, each examination record is analyzed by only one expert; therefore, analyzing disagreements among experts is challenging. A statistical framework, based on Parzen-windowing and the Patrick-Fischer distance, is presented to solve this problem. Disagreements among eleven experts from the Ophdiat screening program were analyzed, using an archive of 25,702 examination records. PMID:23367286

  20. Three factors that influence the overall quality of the stereoscopic 3D content: image quality, comfort and realism

    E-print Network

    Paris-Sud XI, Université de

    Keywords: 3D quality, stereoscopic quality, subjective evaluation, 3D database classification. There are many discussions on controlling and improving the 3D quality, but what does this notion represent?

  1. Discrimination of retinal images containing bright lesions using sparse coded features and SVM.

    PubMed

    Sidibé, Désiré; Sadek, Ibrahim; Mériaudeau, Fabrice

    2015-07-01

    Diabetic Retinopathy (DR) is a chronic progressive disease of the retinal microvasculature which is among the major causes of vision loss in the world. The diagnosis of DR is based on the detection of retinal lesions such as microaneurysms, exudates and drusen in retinal images acquired by a fundus camera. However, bright lesions such as exudates and drusen share similar appearances while being signs of different diseases. Therefore, discriminating between different types of lesions is of interest for improving screening performances. In this paper, we propose to use sparse coding techniques for retinal images classification. In particular, we are interested in discriminating between retinal images containing either exudates or drusen, and normal images free of lesions. Extensive experiments show that dictionary learning techniques can capture strong structures of retinal images and produce discriminant descriptors for classification. In particular, using a linear SVM with the obtained sparse coded features, the proposed method achieves superior performance as compared with the popular Bag-of-Visual-Word approach for image classification. Experiments with a dataset of 828 retinal images collected from various sources show that the proposed approach provides excellent discrimination results for normal, drusen and exudates images. It achieves a sensitivity and a specificity of 96.50% and 97.70% for the normal class; 99.10% and 100% for the drusen class; and 97.40% and 98.20% for the exudates class with a medium size dictionary of 100 atoms. PMID:25935125
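
    A minimal sketch of such a pipeline, under the assumption that image patches have already been extracted from the fundus photographs: a dictionary of 100 atoms is learned, the sparse codes of each image's patches are max-pooled into a single descriptor, and a linear SVM separates the normal, drusen, and exudate classes. The scikit-learn components and all parameter values are illustrative, not the authors' exact configuration.

      import numpy as np
      from sklearn.decomposition import MiniBatchDictionaryLearning
      from sklearn.svm import LinearSVC

      def image_descriptor(patches, dico):
          # Max-pool the absolute sparse codes of one image's patches into a single vector.
          codes = dico.transform(patches)          # shape: (n_patches, n_atoms)
          return np.abs(codes).max(axis=0)         # shape: (n_atoms,)

      def train_classifier(train_patches, images_patches, labels, n_atoms=100):
          # train_patches: stacked patches from all training images, (n_total_patches, patch_dim)
          # images_patches: list of per-image patch arrays; labels: normal / drusen / exudates
          dico = MiniBatchDictionaryLearning(n_components=n_atoms,
                                             transform_algorithm="omp",
                                             transform_n_nonzero_coefs=5,
                                             random_state=0).fit(train_patches)
          X = np.vstack([image_descriptor(p, dico) for p in images_patches])
          clf = LinearSVC(C=1.0).fit(X, labels)
          return dico, clf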

  2. Dual wavelength digital holography for 3D particle image velocimetry

    NASA Astrophysics Data System (ADS)

    Grare, S.; Coëtmellec, S.; Allano, D.; Grehan, G.; Brunel, M.; Lebrun, D.

    2015-02-01

    A multi-exposure digital in-line hologram of a moving particle field is recorded with two different wavelengths at different times. As a result, during the reconstruction step, the hologram associated with each wavelength can be reconstructed independently and accurately. This procedure avoids the superimposition of particle images that may lie close to each other in multi-exposure holography. The feasibility is demonstrated using a standard particle sizing reticle, showing the potential of this method for particle velocity measurement.
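
    Since each exposure is tagged by its recording wavelength, each can be back-propagated with its own propagator. A hedged sketch of one standard reconstruction route (the angular spectrum method, which may differ from the authors' exact reconstruction) is given below; the wavelengths, propagation distance, and pixel pitch in the usage comments are illustrative.

      import numpy as np

      def angular_spectrum(hologram, wavelength, z, pixel_pitch):
          # Back-propagate an in-line hologram to distance z via the angular spectrum method.
          ny, nx = hologram.shape
          fx = np.fft.fftfreq(nx, d=pixel_pitch)
          fy = np.fft.fftfreq(ny, d=pixel_pitch)
          FX, FY = np.meshgrid(fx, fy)
          arg = np.maximum(1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2, 0.0)
          H = np.exp(1j * 2.0 * np.pi * z / wavelength * np.sqrt(arg))
          return np.fft.ifft2(np.fft.fft2(hologram) * H)

      # Each exposure is reconstructed independently with its own wavelength, e.g.:
      # field_t1 = angular_spectrum(hologram, wavelength=532e-9, z=0.10, pixel_pitch=4.4e-6)
      # field_t2 = angular_spectrum(hologram, wavelength=635e-9, z=0.10, pixel_pitch=4.4e-6)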

  3. A new gold-standard dataset for 2D/3D image registration evaluation

    NASA Astrophysics Data System (ADS)

    Pawiro, Supriyanto; Markelj, Primoz; Gendrin, Christelle; Figl, Michael; Stock, Markus; Bloch, Christoph; Weber, Christoph; Unger, Ewald; Nöbauer, Iris; Kainberger, Franz; Bergmeister, Helga; Georg, Dietmar; Bergmann, Helmar; Birkfellner, Wolfgang

    2010-02-01

    In this paper, we propose a new gold standard data set for the validation of 2D/3D image registration algorithms for image guided radiotherapy. A gold standard data set was calculated using a pig head with attached fiducial markers. We used several imaging modalities common in diagnostic imaging or radiotherapy, including 64-slice computed tomography (CT), magnetic resonance imaging (MRI) using T1, T2 and proton density (PD) sequences, and cone beam CT (CBCT) imaging data. Radiographic data were acquired using kilovoltage (kV) and megavoltage (MV) imaging techniques. The image information reflects both anatomy and reliable fiducial marker information, and improves over existing data sets in the level of anatomical detail and image data quality. The fiducial markers in the three-dimensional (3D) and two-dimensional (2D) images were segmented using Analyze 9.0 (AnalyzeDirect, Inc.) and in-house software. The projection distance errors (PDE) and the expected target registration errors (TRE) over all the image data sets were found to be less than 1.7 mm and 1.3 mm, respectively. The gold standard data set, obtained with state-of-the-art imaging technology, has the potential to improve the validation of 2D/3D registration algorithms for image guided therapy.
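
    The quoted target registration error can be understood with the usual leave-markers-out convention: a rigid transform is fitted to a subset of the fiducials and evaluated on the held-out markers. The sketch below uses the standard SVD (Kabsch/Horn) solution; the function and variable names are illustrative and not part of the published data set.

      import numpy as np

      def rigid_fit(src, dst):
          # Least-squares rigid transform (R, t) mapping src points onto dst points (Kabsch/Horn).
          src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
          H = (src - src_c).T @ (dst - dst_c)
          U, _, Vt = np.linalg.svd(H)
          d = np.sign(np.linalg.det(Vt.T @ U.T))
          R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
          return R, dst_c - R @ src_c

      def target_registration_error(fid_src, fid_dst, tgt_src, tgt_dst):
          # Fit on the fiducial markers, evaluate on the held-out target markers (per-target TRE).
          R, t = rigid_fit(fid_src, fid_dst)
          return np.linalg.norm(tgt_src @ R.T + t - tgt_dst, axis=1)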

  4. Fully automatic 2D to 3D conversion with aid of high-level image features

    NASA Astrophysics Data System (ADS)

    Appia, Vikram; Batur, Umit

    2014-03-01

    With the recent advent of 3D display technology, there is an increasing need for conversion of existing 2D content into rendered 3D views. We propose a fully automatic 2D to 3D conversion algorithm that assigns relative depth values to the various objects in a given 2D image/scene and generates two different views (stereo pair) using a Depth Image Based Rendering (DIBR) algorithm for 3D displays. The algorithm described in this paper creates a scene model for each image based on certain low-level features like texture, gradient and pixel location and estimates a pseudo depth map. Since the capture environment is unknown, using low-level features alone creates inaccuracies in the depth map. Using such a flawed depth map for 3D rendering will result in various artifacts, causing an unpleasant viewing experience. The proposed algorithm also uses certain high-level image features to overcome these imperfections and generates an enhanced depth map for improved viewing experience. Finally, we show several 3D results generated with our algorithm in the results section.
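
    The rendering step (generating a stereo pair from an image and its estimated depth map) can be sketched as a simple horizontal-shift DIBR in which each pixel is displaced by a disparity proportional to its relative nearness. Hole filling and the depth estimation itself are omitted, and the disparity range is an illustrative assumption.

      import numpy as np

      def dibr_stereo_pair(image, depth, max_disparity=16):
          # Render left/right views by shifting pixels horizontally; larger depth values
          # are assumed to mean "nearer" and therefore receive larger disparities.
          h, w = depth.shape
          nearness = (depth - depth.min()) / (np.ptp(depth) + 1e-6)
          disparity = np.round(nearness * max_disparity).astype(int)
          left, right = np.zeros_like(image), np.zeros_like(image)
          cols = np.arange(w)
          for y in range(h):
              left[y, np.clip(cols + disparity[y], 0, w - 1)] = image[y, cols]
              right[y, np.clip(cols - disparity[y], 0, w - 1)] = image[y, cols]
          return left, right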

  5. 3D Imaging for hand gesture recognition: Exploring the software-hardware interaction of current technologies

    NASA Astrophysics Data System (ADS)

    Periverzov, Frol; Ilieş, Horea T.

    2012-09-01

    Interaction with 3D information is one of the fundamental and most familiar tasks in virtually all areas of engineering and science. Several recent technological advances pave the way for developing hand gesture recognition capabilities available to all, which will lead to more intuitive and efficient 3D user interfaces (3DUI). These developments can unlock new levels of expression and productivity in all activities concerned with the creation and manipulation of virtual 3D shapes and, specifically, in engineering design. Building fully automated systems for tracking and interpreting hand gestures requires robust and efficient 3D imaging techniques as well as potent shape classifiers. We survey and explore current and emerging 3D imaging technologies, and focus, in particular, on those that can be used to build interfaces between the users' hands and the machine. The purpose of this paper is to categorize and highlight the relevant differences between these existing 3D imaging approaches in terms of the nature of the information provided, output data format, as well as the specific conditions under which these approaches yield reliable data. Furthermore we explore the impact of each of these approaches on the computational cost and reliability of the required image processing algorithms. Finally we highlight the main challenges and opportunities in developing natural user interfaces based on hand gestures, and conclude with some promising directions for future research.

  6. Sample preparation for 3D SIMS chemical imaging of cells.

    PubMed

    Winograd, Nicholas; Bloom, Anna

    2015-01-01

    Time-of-flight secondary ion mass spectrometry (ToF-SIMS) is an emerging technique for the characterization of biological systems. With the development of novel ion sources such as cluster ion beams, ionization efficiency has been increased, allowing for greater amounts of information to be obtained from the sample of interest. This enables the plotting of the distribution of chemical compounds against position with submicrometer resolution, yielding a chemical map of the material. In addition, by combining imaging with molecular depth profiling, a complete 3-dimensional rendering of the object is possible. The study of single biological cells presents significant challenges due to the fundamental complexity associated with any biological material. Sample preparation is of critical importance in controlling this complexity, owing to the fragile nature of biological cells and to the need to characterize them in their native state, free of chemical or physical changes. Here, we describe the four most widely used sample preparation methods for cellular imaging using ToF-SIMS, and provide guidance for data collection and analysis procedures. PMID:25361662

  7. 3D Imaging of Twin Domain Defects in Gold Nanoparticles.

    PubMed

    Ulvestad, Andrew; Clark, Jesse N; Harder, Ross; Robinson, Ian K; Shpyrko, Oleg G

    2015-06-10

    Topological defects are ubiquitous in physics and include crystallographic imperfections such as defects in condensed matter systems. Defects can determine many of the material's properties, thus providing novel opportunities for defect engineering. However, it is difficult to track buried defects and their interfaces in three dimensions with nanoscale resolution. Here, we report three-dimensional visualization of gold nanocrystal twin domains using Bragg coherent X-ray diffractive imaging in an aqueous environment. We capture the size and location of twin domains, which appear as voids in the Bragg electron density, in addition to a component of the strain field. Twin domains can interrupt the stacking order of the parent crystal, leading to a phase offset between the separated parent crystal pieces. We utilize this phase offset to estimate the roughness of the twin boundary. We measure the diffraction signal from the crystal twin and show its Bragg electron density fits into the parent crystal void. Defect imaging will likely facilitate improvement and rational design of nanostructured materials. PMID:25965558

  8. Registration of 3-D images using weighted geometrical features

    SciTech Connect

    Maurer, C.R. Jr.; Aboutanos, G.B.; Dawant, B.M.; Maciunas, R.J.; Fitzpatrick, J.M. (Vanderbilt Univ., Nashville, TN, United States)

    1996-12-01

    In this paper, the authors present a weighted geometrical features (WGF) registration algorithm. Its efficacy is demonstrated by combining points and a surface. The technique is an extension of Besl and McKay's iterative closest point (ICP) algorithm. The authors use the WGF algorithm to register X-ray computed tomography (CT) and T2-weighted magnetic resonance (MR) volume head images acquired from eleven patients who underwent craniotomies in a neurosurgical clinical trial. Each patient had five external markers attached to transcutaneous posts screwed into the outer table of the skull. The authors define registration error as the distance between positions of corresponding markers that are not used for registration. The CT and MR images are registered using fiducial points (marker positions) only, a surface only, and various weighted combinations of points and a surface. The CT surface is derived from contours corresponding to the inner surface of the skull. The MR surface is derived from contours corresponding to the cerebrospinal fluid (CSF)-dura interface. Registration using points and a surface is found to be significantly more accurate than registration using only points or a surface.
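
    The weighted combination of points and a surface can be sketched as an ICP-style loop in which each iteration matches the surface points to their closest counterparts on the target surface and then solves a weighted rigid fit, with separate weights for the fiducial-point pairs and the surface pairs. This is a hedged reading of the WGF idea rather than the published algorithm; the KD-tree matching, weights, and names are assumptions.

      import numpy as np
      from scipy.spatial import cKDTree

      def weighted_rigid_fit(src, dst, w):
          # Minimize sum_i w_i * ||R src_i + t - dst_i||^2 (weighted Kabsch/Horn solution).
          w = w / w.sum()
          src_c = (w[:, None] * src).sum(axis=0)
          dst_c = (w[:, None] * dst).sum(axis=0)
          H = (w[:, None] * (src - src_c)).T @ (dst - dst_c)
          U, _, Vt = np.linalg.svd(H)
          d = np.sign(np.linalg.det(Vt.T @ U.T))
          R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
          return R, dst_c - R @ src_c

      def wgf_like_icp(fid_src, fid_dst, surf_src, surface_vertices,
                       w_points=0.5, w_surface=0.5, iters=50):
          # Alternate closest-point matching on the target surface with a weighted rigid fit.
          tree = cKDTree(surface_vertices)
          R, t = np.eye(3), np.zeros(3)
          for _ in range(iters):
              matched = surface_vertices[tree.query(surf_src @ R.T + t)[1]]
              src = np.vstack([fid_src, surf_src])
              dst = np.vstack([fid_dst, matched])
              w = np.concatenate([np.full(len(fid_src), w_points / len(fid_src)),
                                  np.full(len(surf_src), w_surface / len(surf_src))])
              R, t = weighted_rigid_fit(src, dst, w)
          return R, t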

  9. Segmentation of Three-dimensional Retinal Image Data

    Microsoft Academic Search

    Alfred R. Fuller; Robert J. Zawadzki; Stacey Choi; David F. Wiley; John S. Werner; Bernd Hamann

    2007-01-01

    We have combined methods from volume visualization and data analysis to support better diagnosis and treatment of human retinal diseases. Many diseases can be identified by abnormalities in the thicknesses of various retinal layers captured using optical coherence tomography (OCT). We used a support vector machine (SVM) to perform semi-automatic segmentation of retinal layers for subsequent analysis including a comparison

  10. Terahertz Lasers Reveal Information for 3D Images

    NASA Technical Reports Server (NTRS)

    2013-01-01

    After taking off her shoes and jacket, she places them in a bin. She then takes her laptop out of its case and places it in a separate bin. As the items move through the x-ray machine, the woman waits for a sign from security personnel to pass through the metal detector. Today, she was lucky; she did not encounter any delays. The man behind her, however, was asked to step inside a large circular tube, raise his hands above his head, and have his whole body scanned. If you have ever witnessed a full-body scan at the airport, you may have witnessed terahertz imaging. Terahertz wavelengths are located between microwave and infrared on the electromagnetic spectrum. When exposed to these wavelengths, certain materials such as clothing, thin metal, sheet rock, and insulation become transparent. At airports, terahertz radiation can illuminate guns, knives, or explosives hidden underneath a passenger's clothing. At NASA's Kennedy Space Center, terahertz wavelengths have assisted in the inspection of materials like insulating foam on the external tanks of the now-retired space shuttle. "The foam we used on the external tank was a little denser than Styrofoam, but not much," says Robert Youngquist, a physicist at Kennedy. The problem, he explains, was that "we lost a space shuttle by having a chunk of foam fall off from the external fuel tank and hit the orbiter." To uncover any potential defects in the foam covering, such as voids or air pockets, that could keep the material from staying in place, NASA employed terahertz imaging to see through the foam. For many years, the technique ensured the integrity of the material on the external tanks.

  11. Optimized Protocol for Retinal Wholemount Preparation for Imaging and Immunohistochemistry

    PubMed Central

    Ivanova, Elena; Toychiev, Abduqodir H; Yee, Christopher W; Sagdullaev, Botir T

    2014-01-01

    Working with delicate tissue can be a complicating factor when performing immunohistochemical assessment. Here, we present a method that utilizes a ring-supported hydrophilized PTFE membrane to provide structural support to both living and fixed tissue during immunohistochemical processing, which allows for the use of a variety of protocols that would otherwise cause damage to the tissue. First, this is demonstrated with bolus loading of fluorescent markers into living retinal tissue. This method allows for quick visualization of targeted structures, while the membrane support maintains tissue integrity during the injection and allows for easy transfer of the preparation for further imaging or processing. Second, a procedure for antibody staining in tissue fixed with carbodiimide is described. Though paraformaldehyde fixation is more common, carbodiimide fixation provides a superior substrate for the visualization of synaptic proteins. A limitation of carbodiimide is that the resulting fixed tissue is relatively fragile; however, this is overcome with the use of the supporting membrane. Retinal tissue is used to demonstrate these techniques, but they may be applied to any fragile tissue. PMID:24379013

  12. Denoising for 3-d photon-limited imaging data using nonseparable filterbanks.

    PubMed

    Santamaria-Pang, Alberto; Bildea, Teodor Stefan; Tan, Shan; Kakadiaris, Ioannis A

    2008-12-01

    In this paper, we present a novel frame-based denoising algorithm for photon-limited 3-D images. We first construct a new 3-D nonseparable filterbank by adding elements to an existing frame in a structurally stable way. In contrast with the traditional 3-D separable wavelet system, the new filterbank is capable of using edge information in multiple directions. We then propose a data-adaptive hysteresis thresholding algorithm based on this new 3-D nonseparable filterbank. In addition, we develop a new validation strategy for denoising of photon-limited images containing sparse structures, such as neurons (the structure of interest is less than 5% of total volume). The validation method, based on tubular neighborhoods around the structure, is used to determine the optimal threshold of the proposed denoising algorithm. We compare our method with other state-of-the-art methods and report very encouraging results on applications utilizing both synthetic and real data. PMID:19004704
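
    The hysteresis idea can be illustrated on transform coefficients: coefficients above a high threshold are kept, and coefficients above a low threshold are kept only if connected to a kept one. The sketch below uses a separable PyWavelets decomposition purely as a stand-in for the paper's nonseparable filterbank, and the threshold factors are illustrative rather than the data-adaptive values derived in the paper.

      import numpy as np
      import pywt
      from skimage.filters import apply_hysteresis_threshold

      def hysteresis_denoise(volume, wavelet="db2", level=2, k_low=1.5, k_high=3.0):
          # Keep detail coefficients whose magnitudes survive hysteresis thresholding.
          coeffs = pywt.wavedecn(volume, wavelet, level=level)
          for detail in coeffs[1:]:                        # per-level dictionaries of subbands
              for key, band in detail.items():
                  sigma = np.median(np.abs(band)) / 0.6745         # robust noise estimate
                  keep = apply_hysteresis_threshold(np.abs(band),
                                                    k_low * sigma, k_high * sigma)
                  detail[key] = band * keep
          return pywt.waverecn(coeffs, wavelet)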

  13. Empower PACS with content-based queries and 3D image visualization

    NASA Astrophysics Data System (ADS)

    Wong, Stephen T. C.; Hoo, Kent S., Jr.; Huang, H. K.

    1996-05-01

    The current generation of picture archiving and communication systems (PACS) lacks the capability to perform content-based searches on image data and to visualize and render 3D image data in a cost-effective manner. The purpose of this research project is to investigate a framework that will combine the storage and communication components of PACS with the power of content-based image indexing and 3D visualization. This presentation will describe the integrated architecture and tools of our experimental system, with examples taken from applications in neurological surgical planning and assessment of pediatric bone age.

  14. Segmentation and length measurement of the abdominal blood vessels in 3-D MRI images.

    PubMed

    Babin, Danilo; Vansteenkiste, Ewout; Pizurica, Aleksandra; Philips, Wilfried

    2009-01-01

    In diagnosing diseases and planning surgeries, the structure and length of blood vessels are of great importance. In this research we develop a novel method for the segmentation of 2-D and 3-D images with an application to blood vessel length measurements in 3-D abdominal MRI images. Our approach is robust to noise and does not require contrast-enhanced images for segmentation. We use an effective algorithm for skeletonization, graph construction and shortest path estimation to measure the length of blood vessels of interest. PMID:19964361
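
    Assuming a binary vessel segmentation is already available, the length-measurement step can be sketched as: skeletonize the mask, build a graph whose nodes are skeleton voxels with Euclidean edge weights between 26-connected neighbours, and take the shortest-path distance between two chosen endpoint voxels. Endpoint selection and the segmentation itself are outside this sketch, and recent scikit-image releases are assumed (skeletonize accepts 3-D input there; older ones expose skeletonize_3d).

      import numpy as np
      import networkx as nx
      from itertools import product
      from skimage.morphology import skeletonize

      def vessel_length(mask, start, end, voxel_size=(1.0, 1.0, 1.0)):
          # Physical length of the shortest skeleton path between two voxels (z, y, x).
          skel = skeletonize(mask.astype(bool))     # 3-D input uses the Lee et al. method
          voxels = set(map(tuple, np.argwhere(skel)))
          spacing = np.asarray(voxel_size, dtype=float)
          offsets = [o for o in product((-1, 0, 1), repeat=3) if any(o)]
          G = nx.Graph()
          for v in voxels:
              for o in offsets:
                  n = tuple(np.add(v, o))
                  if n in voxels:
                      G.add_edge(v, n, weight=float(np.linalg.norm(spacing * np.asarray(o))))
          return nx.shortest_path_length(G, tuple(start), tuple(end), weight="weight")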

  15. Nonrigid registration and classification of the kidneys in 3D dynamic contrast enhanced (DCE) MR images

    NASA Astrophysics Data System (ADS)

    Yang, Xiaofeng; Ghafourian, Pegah; Sharma, Puneet; Salman, Khalil; Martin, Diego; Fei, Baowei

    2012-02-01

    We have applied image analysis methods in the assessment of human kidney perfusion based on 3D dynamic contrast-enhanced (DCE) MRI data. This approach consists of 3D non-rigid image registration of the kidneys and fuzzy C-mean classification of kidney tissues. The proposed registration method reduced motion artifacts in the dynamic images and improved the analysis of kidney compartments (cortex, medulla, and cavities). The dynamic intensity curves show the successive transition of the contrast agent through kidney compartments. The proposed method for motion correction and kidney compartment classification may be used to improve the validity and usefulness of further model-based pharmacokinetic analysis of kidney function.
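
    A compact sketch of the fuzzy C-mean step, applied for instance to per-voxel intensity time courses after registration: the textbook algorithm with fuzzifier m alternates membership and centroid updates. Three clusters (cortex, medulla, cavities) and the feature construction in the comments are assumptions for illustration, not the authors' exact configuration.

      import numpy as np

      def fuzzy_cmeans(X, n_clusters=3, m=2.0, iters=100, tol=1e-5, seed=0):
          # Textbook fuzzy C-means. X: (n_samples, n_features). Returns (centers, memberships).
          rng = np.random.default_rng(seed)
          U = rng.random((X.shape[0], n_clusters))
          U /= U.sum(axis=1, keepdims=True)
          for _ in range(iters):
              Um = U ** m
              centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
              dist = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
              U_new = 1.0 / dist ** (2.0 / (m - 1.0))
              U_new /= U_new.sum(axis=1, keepdims=True)
              if np.abs(U_new - U).max() < tol:
                  U = U_new
                  break
              U = U_new
          return centers, U

      # Example feature construction: one intensity time course per voxel of the registered series.
      # series: (n_timepoints, nz, ny, nx) -> features = series.reshape(len(series), -1).T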

  16. Polarization imaging of a 3D object by use of digital holography and its application

    NASA Astrophysics Data System (ADS)

    Nomura, Takanori; Javidi, Bahram

    2010-04-01

    A polarimetric imaging method of a 3D object by use of on-axis phase-shifting digital holography is presented. The polarimetric image results from a combination of two kinds of holographic imaging using orthogonal polarized reference waves. Experimental demonstration of a 3D polarimetric imaging is presented. Pattern recognition by use of polarimetric phase-shifting digital holography is also presented. Using holography, the amplitude and phase difference distributions between two orthogonal polarizations of 3D phase objects are obtained. This information contains both complex amplitude and polarimetric characteristics of the object, and it can be used for improving the discrimination capability of object recognition. Preliminary experimental results are presented to demonstrate the idea.

  17. Real-time 3D adaptive filtering for portable imaging systems

    NASA Astrophysics Data System (ADS)

    Bockenbach, Olivier; Ali, Murtaza; Wainwright, Ian; Nadeski, Mark

    2015-03-01

    Portable imaging devices have proven valuable for emergency medical services both in the field and hospital environments and are becoming more prevalent in clinical settings where the use of larger imaging machines is impractical. 3D adaptive filtering is one of the most advanced techniques aimed at noise reduction and feature enhancement, but is computationally very demanding and hence often not able to run with sufficient performance on a portable platform. In recent years, advanced multicore DSPs have been introduced that attain high processing performance while maintaining low levels of power dissipation. These processors enable the implementation of complex algorithms like 3D adaptive filtering, improving the image quality of portable medical imaging devices. In this study, the performance of a 3D adaptive filtering algorithm on a digital signal processor (DSP) is investigated. The performance is assessed by filtering a volume of size 512x256x128 voxels sampled at a pace of 10 MVoxels/sec.
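
    One common form of 3D adaptive filtering (a local-variance, Lee/Wiener-type filter, not necessarily the algorithm benchmarked in the paper) and a way of measuring a MVoxels/sec throughput figure are sketched below; the window size and noise estimate are illustrative assumptions.

      import time
      import numpy as np
      from scipy.ndimage import uniform_filter

      def adaptive_filter_3d(volume, size=5, noise_var=None):
          # Smooth strongly where the local variance is near the noise level,
          # preserve detail where it is well above it (Lee/Wiener-type gain).
          mean = uniform_filter(volume, size)
          sq_mean = uniform_filter(volume * volume, size)
          var = np.maximum(sq_mean - mean * mean, 0.0)
          if noise_var is None:
              noise_var = np.median(var)            # crude global noise estimate
          gain = np.maximum(var - noise_var, 0.0) / (var + 1e-12)
          return mean + gain * (volume - mean)

      vol = np.random.rand(512, 256, 128).astype(np.float32)
      t0 = time.perf_counter()
      out = adaptive_filter_3d(vol)
      throughput_mvox_per_s = vol.size / (time.perf_counter() - t0) / 1e6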

  18. Synthesis of 3D Model of a Magnetic Field-Influenced Body from a Single Image

    NASA Technical Reports Server (NTRS)

    Wang, Cuilan; Newman, Timothy; Gallagher, Dennis

    2006-01-01

    A method for recovery of a 3D model of a cloud-like structure that is in motion and deforming but approximately governed by magnetic field properties is described. The method allows recovery of the model from a single intensity image in which the structure's silhouette can be observed. The method exploits envelope theory and a magnetic field model. Given one intensity image and the segmented silhouette in the image, the method proceeds without human intervention to produce the 3D model. In addition to allowing 3D model synthesis, the method's capability to yield a very compact description offers further utility. Application of the method to several real-world images is demonstrated.

  19. Reconstruction of 3-D road geometry from images for autonomous land vehicles

    Microsoft Academic Search

    K. Kanatani; K. Watanabe

    1990-01-01

    A novel algorithm for reconstructing 3-D road geometry from images is presented for the purpose of autonomously navigating land vehicles. The reconstruction is based on an idealized road model: a road is assumed to be generated by a horizontal line segment of a fixed length sweeping in the scene. The constraints that ideal road images must satisfy are expressed as

  20. Structured light 3D tracking system for measuring motions in PET brain imaging

    Microsoft Academic Search

    Oline V. Olesen; Morten R. Jørgensen; Rasmus R. Paulsen; Liselotte Højgaard; Bjarne Roed; Rasmus Larsen

    2010-01-01

    Patient motion during scanning deteriorates image quality, especially for high resolution PET scanners. A new proposal for a 3D head tracking system for motion correction in high resolution PET brain imaging is set up and demonstrated. A prototype tracking system based on structured light with a DLP projector and a CCD camera is set up on a model of the