Science.gov

Sample records for 3-d retinal imaging

  1. Stereo capture: local rematching driven by binocularly attended 3-D configuration rather than retinal images.

    PubMed

    Wu, X N; Zhou, Q; Qi, X L; Wang, Y J

    1998-07-01

    Previous explanations for stereo capture were mainly based on low-level perceptual processing of binocular stereopsis, which usually assumes that one pair of retinal images corresponds to only one 3-D perceptual configuration. Stereo capture, however, may encounter multiple perceptual configurations due to the matching ambiguity of wallpaper elements, which may not be resolved merely by bottom-up processing of the retinal stimuli. The present study suggests that binocular attention plays an important role in stereo capture by selecting and enhancing a perceptual configuration that would otherwise remain ambiguous. Stereo capture results from the wallpaper's local rematching driven by the binocularly attended 3-D configuration rather than by the retinal images. PMID:9797968

  2. Automated 3-D retinal layer segmentation of macular optical coherence tomography images with serous pigment epithelial detachments.

    PubMed

    Shi, Fei; Chen, Xinjian; Zhao, Heming; Zhu, Weifang; Xiang, Dehui; Gao, Enting; Sonka, Milan; Chen, Haoyu

    2015-02-01

    Automated retinal layer segmentation of optical coherence tomography (OCT) images has been successful for normal eyes but becomes challenging for eyes in which retinal morphology is critically altered by disease. We propose a method to automatically segment the retinal layers in 3-D OCT data with serous retinal pigment epithelial detachment (PED), a prominent feature of many chorioretinal disease processes. The proposed framework consists of the following steps: fast denoising and B-scan alignment, multi-resolution graph-search-based surface detection, PED region detection, and surface correction above the PED region. The proposed technique was evaluated on a dataset with OCT images from 20 subjects diagnosed with PED. The experimental results showed the following. 1) The overall mean unsigned border positioning error for layer segmentation is 7.87±3.36 μm, comparable to the mean inter-observer variability (7.81±2.56 μm). 2) The true positive volume fraction (TPVF), false positive volume fraction (FPVF) and positive predictive value (PPV) for PED volume segmentation are 87.1%, 0.37%, and 81.2%, respectively. 3) The average running time is 220 s for OCT data of 512 × 64 × 480 voxels. PMID:25265605
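
    The evaluation metrics quoted in this abstract (TPVF, FPVF, PPV) are simple voxel-overlap ratios; a minimal sketch follows, with the FPVF denominator taken as the non-lesion volume, an assumption since the record does not restate the exact definition:

```python
def overlap_metrics(truth, pred, domain_size):
    """Compute TPVF, FPVF and PPV from voxel index sets.

    truth: set of voxel indices in the reference segmentation
    pred:  set of voxel indices in the automated segmentation
    domain_size: total number of voxels in the scan
    """
    tp = len(truth & pred)                   # correctly segmented voxels
    fp = len(pred - truth)                   # segmented voxels not in reference
    tpvf = tp / len(truth)                   # true positive volume fraction
    fpvf = fp / (domain_size - len(truth))   # false positives over background (assumed denominator)
    ppv = tp / len(pred)                     # positive predictive value
    return tpvf, fpvf, ppv

# Toy example: 4-voxel reference lesion in a 10-voxel domain
truth = {1, 2, 3, 4}
pred = {2, 3, 4, 5}
tpvf, fpvf, ppv = overlap_metrics(truth, pred, 10)
print(tpvf, fpvf, ppv)  # → 0.75 0.16666666666666666 0.75
```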

  3. A statistical model for 3D segmentation of retinal choroid in optical coherence tomography images

    NASA Astrophysics Data System (ADS)

    Ghasemi, F.; Rabbani, H.

    2014-03-01

    The choroid is a densely vascularized layer beneath the retinal pigment epithelium (RPE). Its deeper boundary is formed by the sclera, the outer fibrous shell of the eye. However, the inhomogeneity within the layers of choroidal Optical Coherence Tomography (OCT) tomograms presents a significant challenge to existing segmentation algorithms. In this paper, we perform a statistical study of retinal OCT data to extract the choroid. The model fits a Gaussian mixture model (GMM) to image intensities with the Expectation-Maximization (EM) algorithm. The goodness of fit of the proposed GMM, computed using a chi-square measure, is below 0.04 for our dataset. After fitting the GMM to the OCT data, Bayesian classification is employed to segment the upper and lower boundaries of the retinal choroid. Our simulations show signed and unsigned errors of -1.44 +/- 0.5 and 1.6 +/- 0.53 for the upper border, and -5.7 +/- 13.76 and 6.3 +/- 13.4 for the lower border, respectively.
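
    The pipeline described above (a GMM fitted by EM, followed by Bayesian classification of intensities) can be sketched in one dimension; the intensity values and two-component setup below are assumptions for illustration, not the authors' data or implementation:

```python
import math, random

random.seed(0)
# Synthetic OCT-like intensities: dark background vs brighter tissue (assumed values)
data = [random.gauss(30, 5) for _ in range(300)] + [random.gauss(80, 8) for _ in range(300)]

def normal_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

# EM for a two-component 1-D Gaussian mixture
pi, mu, sigma = [0.5, 0.5], [20.0, 100.0], [10.0, 10.0]
for _ in range(50):
    # E-step: posterior responsibility of each component for each sample
    resp = []
    for x in data:
        w = [pi[k] * normal_pdf(x, mu[k], sigma[k]) for k in range(2)]
        s = sum(w)
        resp.append([wk / s for wk in w])
    # M-step: re-estimate weights, means, standard deviations
    for k in range(2):
        nk = sum(r[k] for r in resp)
        pi[k] = nk / len(data)
        mu[k] = sum(r[k] * x for r, x in zip(resp, data)) / nk
        sigma[k] = math.sqrt(sum(r[k] * (x - mu[k]) ** 2 for r, x in zip(resp, data)) / nk)

# Bayesian (MAP) classification: label each intensity by the larger posterior
labels = [max(range(2), key=lambda k: pi[k] * normal_pdf(x, mu[k], sigma[k])) for x in data]
```

In the paper the classification is applied per pixel to delineate the choroidal boundaries; here it simply separates the two synthetic intensity populations.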

  4. Retinal Imaging and Image Analysis

    PubMed Central

    Abràmoff, Michael D.; Garvin, Mona K.; Sonka, Milan

    2011-01-01

    Many important eye diseases as well as systemic diseases manifest themselves in the retina. While a number of other anatomical structures contribute to the process of vision, this review focuses on retinal imaging and image analysis. Following a brief overview of the most prevalent causes of blindness in the industrialized world that includes age-related macular degeneration, diabetic retinopathy, and glaucoma, the review is devoted to retinal imaging and image analysis methods and their clinical implications. Methods for 2-D fundus imaging and techniques for 3-D optical coherence tomography (OCT) imaging are reviewed. Special attention is given to quantitative techniques for analysis of fundus photographs with a focus on clinically relevant assessment of retinal vasculature, identification of retinal lesions, assessment of optic nerve head (ONH) shape, building retinal atlases, and to automated methods for population screening for retinal diseases. A separate section is devoted to 3-D analysis of OCT images, describing methods for segmentation and analysis of retinal layers, retinal vasculature, and 2-D/3-D detection of symptomatic exudate-associated derangements, as well as to OCT-based analysis of ONH morphology and shape. Throughout the paper, aspects of image acquisition, image analysis, and clinical relevance are treated together considering their mutually interlinked relationships. PMID:21743764

  5. 3D Imaging.

    ERIC Educational Resources Information Center

    Hastings, S. K.

    2002-01-01

    Discusses 3-D imaging as it relates to digital representations in virtual library collections. Highlights include X-ray computed tomography (X-ray CT); the National Science Foundation (NSF) Digital Library Initiatives; output peripherals; image retrieval systems, including metadata; and applications of 3-D imaging for libraries and museums. (LRW)

  6. 3D photoacoustic imaging

    NASA Astrophysics Data System (ADS)

    Carson, Jeffrey J. L.; Roumeliotis, Michael; Chaudhary, Govind; Stodilka, Robert Z.; Anastasio, Mark A.

    2010-06-01

    Our group has concentrated on development of a 3D photoacoustic imaging system for biomedical imaging research. The technology employs a sparse parallel detection scheme and specialized reconstruction software to obtain 3D optical images using a single laser pulse. With the technology we have been able to capture 3D movies of translating point targets and rotating line targets. The current limitation of our 3D photoacoustic imaging approach is its inability to reconstruct complex objects in the field of view. This is primarily due to the relatively small number of projections used to reconstruct objects. However, in many photoacoustic imaging situations, only a few objects may be present in the field of view and these objects may have very high contrast compared to background. That is, the objects have sparse properties. Therefore, our work had two objectives: (i) to utilize mathematical tools to evaluate 3D photoacoustic imaging performance, and (ii) to test image reconstruction algorithms that prefer sparseness in the reconstructed images. Our approach was to utilize singular value decomposition techniques to study the imaging operator of the system and evaluate the complexity of objects that could potentially be reconstructed. We also compared the performance of two image reconstruction algorithms (algebraic reconstruction and l1-norm techniques) at reconstructing objects of increasing sparseness. We observed that for a 15-element detection scheme, the number of measurable singular vectors representative of the imaging operator was consistent with the demonstrated ability to reconstruct point and line targets in the field of view. We also observed that the l1-norm reconstruction technique, which is known to prefer sparseness in reconstructed images, was superior to the algebraic reconstruction technique.
Based on these findings, we concluded (i) that singular value decomposition of the imaging operator provides valuable insight into the capabilities of a 3D photoacoustic imaging system, and (ii) that reconstruction algorithms which favor sparseness can significantly improve imaging performance. These methodologies should provide a means to optimize detector count and geometry for a multitude of 3D photoacoustic imaging applications.
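
    The l1-norm preference for sparse reconstructions mentioned above can be illustrated with ISTA (iterative soft-thresholding), a standard l1 solver; the random Gaussian measurement operator below is a stand-in for the photoacoustic imaging operator, not the authors' system model:

```python
import random

random.seed(1)
m, n = 12, 16          # 12 simulated measurements, 16 unknown voxels
# Random Gaussian operator standing in for the photoacoustic imaging operator
A = [[random.gauss(0, 1) / m ** 0.5 for _ in range(n)] for _ in range(m)]
x_true = [0.0] * n
x_true[3], x_true[12] = 1.0, -1.0        # sparse object: two nonzero voxels
y = [sum(A[i][j] * x_true[j] for j in range(n)) for i in range(m)]

def soft(v, t):
    """Soft-thresholding: the proximal operator of the l1 norm."""
    return (abs(v) - t) * (1.0 if v > 0 else -1.0) if abs(v) > t else 0.0

x = [0.0] * n
step, lam = 0.1, 0.02
for _ in range(2000):                    # ISTA: gradient step followed by shrinkage
    r = [y[i] - sum(A[i][j] * x[j] for j in range(n)) for i in range(m)]   # residual
    g = [sum(A[i][j] * r[i] for i in range(m)) for j in range(n)]          # gradient
    x = [soft(x[j] + step * g[j], step * lam) for j in range(n)]
# x is now sparse, with its large entries on the true support
```

An algebraic (least-squares) solver would instead spread energy over all 16 voxels, which is why the l1 approach wins when only a few high-contrast objects are present.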

  7. A framework for retinal layer intensity analysis for retinal artery occlusion patient based on 3D OCT

    NASA Astrophysics Data System (ADS)

    Liao, Jianping; Chen, Haoyu; Zhou, Chunlei; Chen, Xinjian

    2014-03-01

    Occlusion of a retinal artery leads to severe ischemia and dysfunction of the retina. Quantitative analysis of the reflectivity in the retina is needed for quantitative assessment of the severity of retinal ischemia. In this paper, we propose a framework for retinal layer intensity analysis for retinal artery occlusion (RAO) patients based on 3D OCT images. The proposed framework consists of five main steps. First, a pre-processing step is applied to the input OCT images. Second, a graph search method is applied to segment multiple surfaces in the OCT images. Third, the RAO region is detected based on a texture classification method. Fourth, the layer segmentation is refined using the detected RAO regions. Finally, the retinal layer intensity analysis is performed. The proposed method was tested on 27 clinical spectral-domain OCT images. The preliminary results show the feasibility and efficacy of the proposed method.

  8. 3-D segmentation of retinal blood vessels in spectral-domain OCT volumes of the optic nerve head

    NASA Astrophysics Data System (ADS)

    Lee, Kyungmoo; Abràmoff, Michael D.; Niemeijer, Meindert; Garvin, Mona K.; Sonka, Milan

    2010-03-01

    Segmentation of retinal blood vessels can provide important information for detecting and tracking retinal vascular diseases including diabetic retinopathy, arterial hypertension, arteriosclerosis and retinopathy of prematurity (ROP). Many studies on 2-D segmentation of retinal blood vessels from a variety of medical images have been performed. However, 3-D segmentation of retinal blood vessels from spectral-domain optical coherence tomography (OCT) volumes, which is capable of providing geometrically accurate vessel models, to the best of our knowledge, has not been previously studied. The purpose of this study is to develop and evaluate a method that can automatically detect 3-D retinal blood vessels from spectral-domain OCT scans centered on the optic nerve head (ONH). The proposed method utilized a fast multiscale 3-D graph search to segment retinal surfaces as well as a triangular mesh-based 3-D graph search to detect retinal blood vessels. An experiment on 30 ONH-centered OCT scans (15 right eye scans and 15 left eye scans) from 15 subjects was performed, and the mean unsigned error in 3-D of the computer segmentations compared with the independent standard obtained from a retinal specialist was 3.4 +/- 2.5 voxels (0.10 +/- 0.07 mm).
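
    The terrain-like graph search used here to delineate retinal surfaces reduces, in a single B-scan, to a dynamic program over per-pixel costs with a smoothness constraint; this toy 2-D version with hypothetical costs is a sketch of the principle, not the authors' multiscale 3-D implementation:

```python
# Columns = A-scans, rows = depths; cost[z][x] is low where the surface likely lies
def detect_surface(cost, max_jump=1):
    depths, cols = len(cost), len(cost[0])
    dp = [[cost[z][0] for z in range(depths)]]      # accumulated cost per depth
    back = []
    for x in range(1, cols):
        col, bk = [], []
        for z in range(depths):
            # best predecessor depth within the smoothness constraint
            lo, hi = max(0, z - max_jump), min(depths - 1, z + max_jump)
            zp = min(range(lo, hi + 1), key=lambda q: dp[-1][q])
            col.append(dp[-1][zp] + cost[z][x])
            bk.append(zp)
        dp.append(col)
        back.append(bk)
    # Trace back the minimum-cost surface
    z = min(range(depths), key=lambda q: dp[-1][q])
    surface = [z]
    for bk in reversed(back):
        z = bk[z]
        surface.append(z)
    return surface[::-1]

# Toy cost favouring a gently sloping surface at depths 2,2,3,4,4,3
true_z = [2, 2, 3, 4, 4, 3]
cost = [[(z - true_z[x]) ** 2 for x in range(6)] for z in range(8)]
print(detect_surface(cost))  # → [2, 2, 3, 4, 4, 3]
```

The full method extends this idea to 3-D graphs (and, for the vessels, to a triangular mesh), but the smoothness-constrained minimum-cost search is the same ingredient.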

  9. 3D Computational Ghost Imaging

    E-print Network

    Baoqing Sun; Matthew P. Edgar; Richard Bowman; Liberty E. Vittert; Stephen S. Welsh; Ardrian Bowman; Miles J. Padgett

    2013-05-15

    Computational ghost imaging retrieves the spatial information of a scene using a single-pixel detector. By projecting a series of known random patterns and measuring the back-reflected intensity for each one, it is possible to reconstruct a 2D image of the scene. In this work we overcome previous limitations of computational ghost imaging and capture the 3D spatial form of an object by using several single-pixel detectors in different locations. From each detector we derive a 2D image of the object that appears to be illuminated from a different direction, using only a single digital projector as illumination. Comparing the shading of the images allows the surface gradient and hence the 3D form of the object to be reconstructed. We compare our result to that obtained from a stereo-photogrammetric system utilizing multiple high-resolution cameras. Our low-cost approach is compatible with consumer applications and can readily be extended to non-visible wavebands.
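
    The 2D reconstruction step, correlating each projected pattern with the single-pixel ("bucket") signal, can be sketched as follows; the tiny scene and pattern count are assumptions for illustration:

```python
import random

random.seed(0)
W, H, N = 4, 4, 3000
# Hidden reflectivity scene: two bright pixels on a dark background
scene = [[0.0] * W for _ in range(H)]
scene[1][2] = scene[3][0] = 1.0
bright = {(1, 2), (3, 0)}

records = []
for _ in range(N):
    # Known random binary illumination pattern
    P = [[random.randint(0, 1) for _ in range(W)] for _ in range(H)]
    # Single-pixel "bucket" measurement: total reflected intensity
    I = sum(P[y][x] * scene[y][x] for y in range(H) for x in range(W))
    records.append((P, I))

mean_I = sum(I for _, I in records) / N
# Correlation image: G(y, x) = <P(y, x) * I> - <P(y, x)> * <I>
G = [[sum(P[y][x] * I for P, I in records) / N
      - (sum(P[y][x] for P, _ in records) / N) * mean_I
      for x in range(W)] for y in range(H)]
# G is largest at the bright scene pixels
```

The paper's 3D step then repeats this with several detectors and compares the shading of the resulting 2D images.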

  10. Quantitative analysis of retinal layers' optical intensities on 3D optical coherence tomography for central retinal artery occlusion

    PubMed Central

    Chen, Haoyu; Chen, Xinjian; Qiu, Zhiqiao; Xiang, Dehui; Chen, Weiqi; Shi, Fei; Zheng, Jianlong; Zhu, Weifang; Sonka, Milan

    2015-01-01

    Optical coherence tomography (OCT) provides not only morphological information but also information about layer-specific optical intensities, which may represent the underlying tissue properties. The purpose of this study is to quantitatively investigate the optical intensity of each retinal layer in central retinal artery occlusion (CRAO). Twenty-nine CRAO cases at the acute phase and 33 normal controls were included. Macula-centered 3D OCT images were segmented with the fully-automated Iowa Reference Algorithm into 10 layers. Layer-specific mean intensities were determined and compared between the patient and control groups using multiple regression analysis while adjusting for age and optical intensity of the entire region. The optical intensities were higher in CRAO than in controls in layers spanning from the retinal ganglion cell layer to the outer plexiform layer (standardized beta = 0.657 to 0.777, all p < 0.001), possibly due to ischemia. Optical intensities were lower at the photoreceptor, retinal pigment epithelium (RPE), and choroid layers (standardized beta = −0.412 to −0.611, all p < 0.01), possibly due to shadowing effects. Among the intraretinal layers, the inner nuclear layer was identified as the best indicator of CRAO. Our study provides in vivo information on the optical intensity changes in each retinal layer in CRAO patients. PMID:25784298

  11. Synthetic 3D diamond-based electrodes for flexible retinal neuroprostheses: Model, production and in vivo biocompatibility.

    PubMed

    Bendali, Amel; Rousseau, Lionel; Lissorgues, Gaëlle; Scorsone, Emmanuel; Djilas, Milan; Dégardin, Julie; Dubus, Elisabeth; Fouquet, Stéphane; Benosman, Ryad; Bergonzo, Philippe; Sahel, José-Alain; Picaud, Serge

    2015-10-01

    Two retinal implants have recently received the CE mark and one has obtained FDA approval for the restoration of useful vision in blind patients. Since the spatial resolution of current vision prostheses is not sufficient for most patients to detect faces or perform activities of daily living, more electrodes with less crosstalk are needed to transfer complex images to the retina. In this study, we modelled planar and three-dimensional (3D) implants with a distant ground or a ground grid, to demonstrate the greater spatial resolution of 3D structures. Using such flexible 3D implant prototypes, we showed that the degenerated retina could mould itself to the inside of the wells, thereby isolating bipolar neurones for specific, independent stimulation. To investigate the in vivo biocompatibility of diamond as an electrode or an insulating material, we developed a procedure for depositing diamond onto flexible 3D retinal implants. Taking polyimide 3D implants as a reference, we compared the number of neurones integrating the 3D diamond structures, and their ratio to the total number of cells, including glial cells. The number of bipolar neurones was increased, whereas the total cell number showed no increase, and even a decrease. SEM examinations of implants confirmed the stability of the diamond after its implantation in vivo. This study further demonstrates the potential of 3D designs for increasing the resolution of retinal implants and validates the safety of diamond materials for retinal implants and neuroprostheses in general. PMID:26210174

  12. 3D Imaging Of Wet Granular Matter

    E-print Network

    Anlage, Steven

    3D Imaging of Wet Granular Matter, with application to penetrometer insertion. Leonard Goff; Advisor: Dr. Wolfgang Losert. (Slide presentation; no abstract available.)

  13. 3D Imaging Technology Conference & Applications Workshop

    E-print Network

    Aristomenis, Antoniadis

    2nd London 3D Imaging Technology Conference & Applications Workshop: 3D scanning. Greece, bilalis@dpem.tuc.gr. Abstract: The new 3D scanning technology has changed working practices and opened new possibilities, illustrated by several 3D scanning approaches that were applied for the first time in the southern part of Europe.

  14. SPATIAL COMPOUNDING OF 3D ULTRASOUND IMAGES

    E-print Network

    Drummond, Tom

    The amplitudes of the echoes are used to create a 2-D grey-level image (B-scan) of a cross-section of the body in the scan plane. A conventional scanner cannot reconstruct 3-D anatomy given multiple 2-D slices; research is underway to overcome this limitation using 3-D acquisition. Subsequent processing can then build up a 3-D description of the imaged anatomy, in much the same manner ...

  15. Analysis of posterior retinal layers in spectral optical coherence tomography images of the normal retina and retinal pathologies

    E-print Network

    Szkulmowski, Maciej

    We present a computationally efficient, semiautomated method for analysis of posterior retinal layers in three-dimensional (3-D) images obtained by spectral optical coherence tomography (SOCT). The method consists of two ...

  16. Retinal Image Quality During Accommodation

    PubMed Central

    López-Gil, N.; Martin, J.; Liu, T.; Bradley, A.; Díaz-Muñoz, D.; Thibos, L.

    2013-01-01

    Purpose We asked if retinal image quality is maximum during accommodation, or sub-optimal due to accommodative error, when subjects perform an acuity task. Methods Subjects viewed a monochromatic (552 nm), high-contrast letter target placed at various viewing distances. Wavefront aberrations of the accommodating eye were measured near the endpoint of an acuity staircase paradigm. Refractive state, defined as the optimum target vergence for maximizing retinal image quality, was computed by through-focus wavefront analysis to find the power of the virtual correcting lens that maximizes the visual Strehl ratio. Results Despite changes in ocular aberrations and pupil size during binocular viewing, retinal image quality and visual acuity typically remain high for all target vergences. When accommodative errors lead to sub-optimal retinal image quality, acuity and measured image quality both decline. However, the effect of accommodative errors on visual acuity is mitigated by the pupillary constriction associated with accommodation and binocular convergence, and also by binocular summation of dissimilar retinal image blur. Under monocular viewing conditions some subjects displayed significant accommodative lag that reduced visual performance, an effect that was exacerbated by pharmacological dilation of the pupil. Conclusions Spurious measurement of accommodative error can be avoided when the image quality metric used to determine refractive state is compatible with the focusing criteria used by the visual system to control accommodation. Real focusing errors of the accommodating eye do not necessarily produce a reliably measurable loss of image quality or clinically significant loss of visual performance, probably because of the increased depth-of-focus due to pupil constriction. When retinal image quality is close to the maximum achievable (given the eye's higher-order aberrations), acuity is also near maximum.
A combination of accommodative lag, reduced image quality, and reduced visual function may be a useful sign for diagnosing functionally-significant accommodative errors indicating the need for therapeutic intervention. PMID:23786386

  17. 3D holoscopic video imaging system

    NASA Astrophysics Data System (ADS)

    Steurer, Johannes H.; Pesch, Matthias; Hahne, Christopher

    2012-03-01

    For many years, integral imaging has been discussed as a technique to overcome the limitations of standard still-photography imaging systems, in which a three-dimensional scene is irrevocably projected onto two dimensions. With the success of 3D stereoscopic movies, huge interest in capturing three-dimensional motion picture scenes has been generated. In this paper, we present a test-bench integral imaging camera system aiming to tailor the methods of light field imaging towards capturing integral 3D motion picture content. We estimate the hardware requirements needed to generate high-quality 3D holoscopic images and show a prototype camera setup that allows us to study these requirements using existing technology. The necessary steps involved in the calibration of the system, as well as the technique of generating human-readable holoscopic images from the recorded data, are discussed.

  18. Miniaturized 3D microscope imaging system

    NASA Astrophysics Data System (ADS)

    Lan, Yung-Sung; Chang, Chir-Weei; Sung, Hsin-Yueh; Wang, Yen-Chang; Chang, Cheng-Yi

    2015-05-01

    We designed and assembled a portable 3-D miniature microscopic imaging system with a size of 35×35×105 mm³. By integrating a microlens array (MLA) into the optical train of a handheld microscope, an image of the biological specimen is captured in a single shot for ease of use. With the light field raw data and software, the focal plane can be changed digitally and the 3-D image can be reconstructed after the image is taken. To localize an object in a 3-D volume, an automated data analysis algorithm is needed to precisely distinguish depth position. The ability to create focal stacks from a single image allows moving specimens to be recorded. Applying a light field microscope algorithm to these focal stacks produces a set of cross sections, which can be visualized using 3-D rendering. Furthermore, we have developed a series of design rules to enhance pixel-usage efficiency and reduce the crosstalk between microlenses to obtain good image quality. In this paper, we demonstrate a handheld light field microscope (HLFM) that distinguishes two different-color fluorescent particles separated by a cover glass over a 600 μm range, and show its focal stacks and 3-D positions.
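
    Digital refocusing from light field data amounts to shifting each sub-aperture view in proportion to its angular coordinate and averaging; this toy shift-and-sum sketch (synthetic 3×3 views of a single point, hypothetical one-pixel disparity) illustrates the principle, not the authors' algorithm:

```python
# 3x3 sub-aperture views of a single point at disparity d = 1 pixel per view offset
U = V = 3
W = H = 9
d = 1
x0 = y0 = 4
views = {}
for u in range(-1, 2):
    for v in range(-1, 2):
        img = [[0.0] * W for _ in range(H)]
        img[y0 + v * d][x0 + u * d] = 1.0      # the point shifts with viewpoint
        views[(u, v)] = img

def refocus(views, s):
    """Shift each view by -s*(u, v) and average (synthetic-aperture refocus)."""
    out = [[0.0] * W for _ in range(H)]
    for (u, v), img in views.items():
        for y in range(H):
            for x in range(W):
                ys, xs = y + v * s, x + u * s   # source pixel for this shift
                if 0 <= ys < H and 0 <= xs < W:
                    out[y][x] += img[ys][xs] / len(views)
    return out

in_focus = refocus(views, 1)    # shift matches the true disparity: sharp peak
defocused = refocus(views, 0)   # no shift: the point is smeared across views
```

Sweeping the shift parameter s produces the focal stack mentioned in the abstract, one refocused slice per candidate depth.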

  19. 3D Segmentation of Fluid-Associated Abnormalities in Retinal OCT: Probability Constrained Graph-Search–Graph-Cut

    PubMed Central

    Chen, Xinjian; Niemeijer, Meindert; Zhang, Li; Lee, Kyungmoo; Abràmoff, Michael D.; Sonka, Milan

    2013-01-01

    An automated method is reported for segmenting 3D fluid and fluid-associated abnormalities in the retina, so-called Symptomatic Exudate-Associated Derangements (SEAD), from 3D OCT retinal images of subjects suffering from exudative age-related macular degeneration. In the first stage of a two-stage approach, retinal layers are segmented, candidate SEAD regions identified, and the retinal OCT image is flattened using a candidate-SEAD aware approach. In the second stage, a probability constrained combined graph search – graph cut method refines the candidate SEADs by integrating the candidate volumes into the graph cut cost function as probability constraints. The proposed method was evaluated on 15 spectral domain OCT images from 15 subjects undergoing intravitreal anti-VEGF injection treatment. Leave-one-out evaluation resulted in a true positive volume fraction (TPVF), false positive volume fraction (FPVF) and relative volume difference ratio (RVDR) of 86.5%, 1.7% and 12.8%, respectively. The new graph cut – graph search method significantly outperformed both the traditional graph cut and traditional graph search approaches (p<0.01, p<0.04) and has the potential to improve clinical management of patients with choroidal neovascularization due to exudative age-related macular degeneration. PMID:22453610

  20. [Improvements in 3-D ultrasonic imaging].

    PubMed

    Sohn, C; Stolz, W; Nuber, B; Hesse, A; Hornung, B; Wallwiener, D; Bastert, G

    1991-01-01

    The problem of acquiring a coordinated sequence of section cuts by ultrasound is solved by rotating the section plane around a vertical and a horizontal axis. There are two possibilities for presenting the 3D image of the examined organ: a ring-structure image or a transparent image. The ring-structure image requires contouring, which is time-consuming and error-prone since it has to be done by hand using a cursor. These pitfalls can be avoided by using a method without contouring that presents the 3D image transparently; each section cut is then rendered transparently. Both methods are presented in this paper and compared as to their applicability in medicine. The drawback of the transparent 3D presentation is that it cannot be shown well on printed paper, as is done here; the moving picture on the computer screen gives an optimal 3D impression. This new method seems to be useful in tumor diagnostics and in the diagnosis of malformations in early pregnancies. PMID:1747557
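
    A common minimal stand-in for such see-through volume rendering is maximum intensity projection (MIP), sketched here as an assumption, since the record does not specify the exact transparency computation:

```python
# A tiny 4x4x4 ultrasound-like volume, indexed volume[z][y][x]
D = 4
volume = [[[10.0] * D for _ in range(D)] for _ in range(D)]  # uniform background
volume[2][1][3] = 200.0   # one strong reflector

def mip(volume):
    """Project the volume along z, keeping the brightest voxel per ray."""
    depth, rows, cols = len(volume), len(volume[0]), len(volume[0][0])
    return [[max(volume[z][y][x] for z in range(depth))
             for x in range(cols)] for y in range(rows)]

image = mip(volume)
print(image[1][3])  # → 200.0, the reflector shows through without any contouring
```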

  1. Acquisition and applications of 3D images

    NASA Astrophysics Data System (ADS)

    Sterian, Paul; Mocanu, Elena

    2007-08-01

    The moiré fringe method and its analysis, up to medical and entertainment applications, are discussed in this paper. We describe the procedure of capturing 3D images with an Inspeck camera, a real-time 3D shape acquisition system based on structured-light techniques. The method offers high resolution. After processing the images on a computer, the data can be used to create fashionable laser-engraved objects with a Q-switched Nd:YAG laser. In the medical field we mention plastic surgery and the replacement of X-rays, especially in pediatric use.

  2. 3D Biological Tissue Image Rendering Software

    Cancer.gov

    Available for commercial development is software that provides automatic visualization of features inside biological image volumes in 3D. The software provides a simple and interactive visualization for the exploration of biological datasets through dataset-specific transfer functions and direct volume rendering.

  3. 3D Imaging Symposium, Friday 11:00 3D APPROACHES IN PALEOANTHROPOLOGY USING GEOMETRIC MORPHOMETRICS

    E-print Network

    Delson, Eric

    3D Imaging Symposium, Friday 11:00. 3D APPROACHES IN PALEOANTHROPOLOGY USING GEOMETRIC MORPHOMETRICS. Eugene, OR; ROSENBERGER, Alfred, Brooklyn College/CUNY, Brooklyn, NY. The emergence of 3D GM (geometric morphometrics) makes it possible to easily collect data in a true 3D sense, such as sets of homologous landmarks or complete ...

  4. 3D Thermography Imaging Standardization Technique for Inflammation Diagnosis

    E-print Network

    Nebel, Jean-Christophe

    3D Thermography Imaging Standardization Technique for Inflammation Diagnosis. Xiangyang Ju; Jean-Christophe Nebel (dcs.gla.ac.uk). Abstract: We develop a 3D thermography imaging standardization technique to allow quantitative data analysis. The technique maps the 2D thermography images onto a 3D anatomical model; we then rectify the 3D thermogram ...

  5. 3D goes digital: from stereoscopy to modern 3D imaging techniques

    NASA Astrophysics Data System (ADS)

    Kerwien, N.

    2014-11-01

    In the 19th century, English physicist Charles Wheatstone discovered stereopsis, the basis for 3D perception. His construction of the first stereoscope established the foundation for stereoscopic 3D imaging. Since then, many optical instruments were influenced by these basic ideas. In recent decades, the advent of digital technologies revolutionized 3D imaging. Powerful readily available sensors and displays combined with efficient pre- or post-processing enable new methods for 3D imaging and applications. This paper draws an arc from basic concepts of 3D imaging to modern digital implementations, highlighting instructive examples from its 175 years of history.

  6. 3D Reconstruction from Hyperspectral Images (Yongsheng Gao et al.)

    E-print Network

    Zhou, Jun

    We construct a 3D model from hyperspectral images. Our proposed method first generates 3D point sets from the images. In earlier work [18], a 3D model was generated from depth data captured by a laser scanner, with hyperspectral information mapped onto the 3D model; that approach first constructed the 3D model using two different hardware devices and then mapped the spectral data onto it.

  7. 3D GPR Imaging of Wooden Logs

    NASA Astrophysics Data System (ADS)

    Halabe, Udaya B.; Pyakurel, Sandeep

    2007-03-01

    There has been a lack of an effective NDE technique to locate internal defects within wooden logs. The few available elastic wave propagation based techniques are limited to predicting E values. Other techniques such as X-rays have not been very successful in detecting internal defects in logs. If defects such as embedded metals could be identified before the sawing process, the saw mills could significantly increase their production by reducing the probability of damage to the saw blade and the associated downtime and the repair cost. Also, if the internal defects such as knots and decayed areas could be identified in logs, the sawing blade can be oriented to exclude the defective portion and optimize the volume of high valued lumber that can be obtained from the logs. In this research, GPR has been successfully used to locate internal defects (knots, decays and embedded metals) within the logs. This paper discusses GPR imaging and mapping of the internal defects using both 2D and 3D interpretation methodology. Metal pieces were inserted in a log and the reflection patterns from these metals were interpreted from the radargrams acquired using 900 MHz antenna. Also, GPR was able to accurately identify the location of knots and decays. Scans from several orientations of the log were collected to generate 3D cylindrical volume. The actual location of the defects showed good correlation with the interpreted defects in the 3D volume. The time/depth slices from 3D cylindrical volume data were useful in understanding the extent of defects inside the log.

  8. Integrated computational system for portable retinal imaging

    E-print Network

    Boggess, Jason (Jason Robert)

    2012-01-01

    This thesis introduces a system to improve image quality obtained from a low-light CMOS camera-specifically designed to image the surface of the retina. The retinal tissue, as well as having various diseases of its own, ...

  9. Ames Lab 101: Real-Time 3D Imaging

    SciTech Connect

    Zhang, Song

    2010-01-01

    Ames Laboratory scientist Song Zhang explains his real-time 3-D imaging technology. The technique can be used to create high-resolution, real-time, precise, 3-D images for use in healthcare, security, and entertainment applications.

  10. Ames Lab 101: Real-Time 3D Imaging

    ScienceCinema

    Zhang, Song

    2012-08-29

    Ames Laboratory scientist Song Zhang explains his real-time 3-D imaging technology. The technique can be used to create high-resolution, real-time, precise, 3-D images for use in healthcare, security, and entertainment applications.

  11. Computational 3D and reflectivity imaging with high photon efficiency

    E-print Network

    Shin, Dongeek

    2014-01-01

    Imaging the 3D structure and reflectivity of a scene can be done using photon-counting detectors. Traditional imagers of this type typically require hundreds of detected photons per pixel for accurate 3D and reflectivity ...

  12. 3D imaging and ranging by time-correlated single photon counting

    E-print Network

    Buller, Gerald S.

    3D imaging and ranging by time-correlated single photon counting, by A. M. Wallace, G. S. Buller and A. C. Walker. 3D imaging is an important tool for metrology, reverse engineering of components, and architectural surveying. In this article, we review briefly the principal methods in current use for 3D imaging ...

  13. 3D RECONSTRUCTION FROM A SINGLE IMAGE Diego Rother

    E-print Network

    3D Reconstruction from a Single Image, by Diego Rother and Guillermo Sapiro. IMA Preprint Series. Abstract: A probabilistic framework for 3D object reconstruction from a single image is introduced in this work. First ...

  14. Fundus autofluorescence applications in retinal imaging

    PubMed Central

    Gabai, Andrea; Veritti, Daniele; Lanzetta, Paolo

    2015-01-01

    Fundus autofluorescence (FAF) is a relatively new imaging technique that can be used to study retinal diseases. It provides information on retinal metabolism and health. Several different pathologies can be detected. Peculiar AF alterations can help the clinician to monitor disease progression and to better understand its pathogenesis. In the present article, we review FAF principles and clinical applications. PMID:26139802

  15. Photogrammetric 3D reconstruction using mobile imaging

    NASA Astrophysics Data System (ADS)

    Fritsch, Dieter; Syll, Miguel

    2015-03-01

    In our paper we demonstrate the development of an Android application (AndroidSfM) for photogrammetric 3D reconstruction that works on smartphones and tablets alike. The photos are taken with mobile devices and can thereafter be calibrated directly on the device using standard calibration algorithms of photogrammetry and computer vision. Because computing resources on mobile devices are still limited, a client-server handshake using Dropbox transfers the photos to the server, which runs AndroidSfM for the pose estimation of all photos by Structure-from-Motion and thereafter uses the oriented set of photos for dense point cloud estimation by dense image matching algorithms. The result is transferred back to the mobile device for visualization and ad-hoc on-screen measurements.

  16. Imaging a Sustainable Future in 3D

    NASA Astrophysics Data System (ADS)

    Schuhr, W.; Lee, J. D.; Kanngieser, E.

    2012-07-01

    It is the intention of this paper to contribute to a sustainable future by providing objective object information based on 3D photography, and to promote 3D photography not only to scientists but also to amateurs. Because this article is presented by CIPA Task Group 3 on "3D Photographs in Cultural Heritage", the samples shown are masterpieces of historic as well as current 3D photography concentrating on cultural heritage. In addition to a report on exemplary access to international archives of 3D photographs, the paper covers new 3D photographs taken with modern 3D cameras, with a ground-based high-resolution XLITE staff camera, from a captive balloon, and from civil drone platforms. To advise on the best-suited 3D methodology, as well as to capture new trends in 3D, an updated synoptic overview of 3D visualization technology, which even aims at completeness, has been compiled as the result of a systematic survey. In this respect, for example, today's lasered crystals might be "early bird" 3D products which, owing to their limited resolution, contrast and color, recall the earliest stage of the invention of photography.

  17. 3D Laser Imaging at Highway Speed Kelvin CP Wang

    E-print Network

    3D Laser Imaging at Highway Speed, by Kelvin CP Wang and the team, formerly at the University ...; presented at the Concrete Consortium Meeting, Oklahoma City Sheraton Hotel. Slide topics: 3D laser imaging for pavements is mature; potential to cover most if not all data collection on the pavement surface; how to obtain true 1 mm 3D visual data.

  18. 3D ULTRASONIC STRAIN IMAGING USING FREEHAND SCANNING AND

    E-print Network

    Drummond, Tom

    3D Ultrasonic Strain Imaging Using Freehand Scanning and a Mechanically-Swept Probe, by R. J. Housden, Department of Engineering, Trumpington Street, Cambridge CB2 1PZ. Abstract: This paper compares two approaches to 3D ultrasonic ...

  19. Computers in Physics Simple Programs Create 3D Images

    E-print Network

    Sprott, Julien Clinton

    Anaglyphs have also been widely used in comic books. ... the early 1900s was accompanied by 3-D movies using overlapping red-green images, which were viewed through ...

  20. Toward a compact underwater structured light 3-D imaging system

    E-print Network

    Dawson, Geoffrey E

    2013-01-01

    A compact underwater 3-D imaging system based on the principles of structured light was created for classroom demonstration and laboratory research purposes. The 3-D scanner design was based on research by the Hackengineer ...

  1. Image Selection for 3d Measurement Based on Network Design

    NASA Astrophysics Data System (ADS)

    Fuse, T.; Harada, R.

    2015-05-01

    3D models have become widely used through the spread of freely available software. At the same time, large numbers of images can be acquired easily, and such images are increasingly used to create 3D models. However, creating 3D models from a huge number of images takes a great deal of time and effort, so efficient 3D measurement is required, without sacrificing accuracy. This paper develops an image selection method based on network design, i.e., the construction of a surveying network. The proposed method uses an image connectivity graph. The image selection problem is thereby regarded as a combinatorial optimization problem, to which the graph cuts technique can be applied. Additionally, in the process of 3D reconstruction, low-quality images and near-duplicate images are detected and removed. Experiments confirm the significance of the proposed method and imply its potential for efficient and accurate 3D measurement.
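
    The selection idea can be illustrated with a small sketch. Note that this is a greedy near-duplicate filter standing in for the paper's graph-cut formulation on the image connectivity graph; the similarity matrix and threshold below are hypothetical.

```python
import numpy as np

def select_images(similarity, redundancy_thresh=0.95):
    """Greedily keep images for 3D measurement, dropping any image
    whose similarity to an already-kept image exceeds the threshold
    (treated as a near-duplicate contributing little new geometry)."""
    n = similarity.shape[0]
    kept = []
    for i in range(n):
        if all(similarity[i, j] < redundancy_thresh for j in kept):
            kept.append(i)
    return kept

# Pairwise similarity for three images: 0 and 1 are near-duplicates.
sim = np.array([[1.0, 0.97, 0.2],
                [0.97, 1.0, 0.3],
                [0.2, 0.3, 1.0]])
```

    A real implementation would weight graph edges by shared tie points and solve for the cut that preserves network accuracy rather than filtering greedily.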

  2. Automatic Detection, Segmentation and Classification of Retinal Horizontal Neurons in Large-scale 3D Confocal Imagery

    SciTech Connect

    Karakaya, Mahmut; Kerekes, Ryan A; Gleason, Shaun Scott; Martins, Rodrigo; Dyer, Michael

    2011-01-01

    Automatic analysis of neuronal structure from wide-field-of-view 3D image stacks of retinal neurons is essential for statistically characterizing neuronal abnormalities that may be causally related to neural malfunctions or may be early indicators for a variety of neuropathies. In this paper, we study classification of neuron fields in large-scale 3D confocal image stacks, a challenging neurobiological problem because of the low spatial resolution imagery and presence of intertwined dendrites from different neurons. We present a fully automated, four-step processing approach for neuron classification with respect to the morphological structure of their dendrites. In our approach, we first localize each individual soma in the image by using morphological operators and active contours. By using each soma position as a seed point, we automatically determine an appropriate threshold to segment dendrites of each neuron. We then use skeletonization and network analysis to generate the morphological structures of segmented dendrites, and shape-based features are extracted from network representations of each neuron to characterize the neuron. Based on qualitative results and quantitative comparisons, we show that we are able to automatically compute relevant features that clearly distinguish between normal and abnormal cases for postnatal day 6 (P6) horizontal neurons.

  3. Automated multilayer segmentation and characterization in 3D spectral-domain optical coherence tomography images

    NASA Astrophysics Data System (ADS)

    Hu, Zhihong; Wu, Xiaodong; Hariri, Amirhossein; Sadda, SriniVas R.

    2013-03-01

    Spectral-domain optical coherence tomography (SD-OCT) is a 3-D imaging technique, allowing direct visualization of retinal morphology and architecture. The various layers of the retina may be affected differentially by various diseases. In this study, an automated graph-based multilayer approach was developed to sequentially segment eleven retinal surfaces including the inner retinal bands to the outer retinal bands in normal SD-OCT volume scans at three different stages. For stage 1, the four most detectable and/or distinct surfaces were identified in the four-times-downsampled images and were used as a priori positional information to limit the graph search for other surfaces at stage 2. Eleven surfaces were then detected in the two-times-downsampled images at stage 2, and refined in the original image space at stage 3 using the graph search integrating the estimated morphological shape models. Twenty macular SD-OCT (Heidelberg Spectralis) volume scans from 20 normal subjects (one eye per subject) were used in this study. The overall mean and absolute mean differences in border positions between the automated and manual segmentation for all 11 segmented surfaces were -0.20 +/- 0.53 voxels (-0.76 +/- 2.06 ?m) and 0.82 +/- 0.64 voxels (3.19 +/- 2.46 ?m). Intensity and thickness properties in the resultant retinal layers were investigated. This investigation in normal subjects may provide a comparative reference for subsequent investigations in eyes with disease.
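
    The graph-search step can be illustrated with a minimal sketch: a single retinal surface is found as a minimum-cost path across columns under a smoothness constraint, solved here by dynamic programming. This is a simplification of the multi-surface, multi-resolution graph search described above; the cost image and smoothness limit are illustrative.

```python
import numpy as np

def detect_surface(cost, max_shift=1):
    """For each column of a cost image (low cost = likely boundary),
    find the row of one surface by minimising total cost subject to
    |row[x] - row[x-1]| <= max_shift, via dynamic programming."""
    rows, cols = cost.shape
    dp = cost.copy().astype(float)
    back = np.zeros((rows, cols), dtype=int)
    for x in range(1, cols):
        for r in range(rows):
            lo, hi = max(0, r - max_shift), min(rows, r + max_shift + 1)
            prev = dp[lo:hi, x - 1]
            k = int(np.argmin(prev))
            dp[r, x] += prev[k]
            back[r, x] = lo + k
    # Backtrack from the cheapest terminal row.
    surface = np.zeros(cols, dtype=int)
    surface[-1] = int(np.argmin(dp[:, -1]))
    for x in range(cols - 1, 0, -1):
        surface[x - 1] = back[surface[x], x]
    return surface

# Toy cost image with a zero-cost boundary along row 2.
cost = np.ones((5, 6))
cost[2, :] = 0.0
```

    The full method detects eleven surfaces sequentially and uses coarse-stage results as positional priors, but each single-surface search reduces to a shortest-path problem of this kind.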

  4. Digital tracking and control of retinal images

    NASA Astrophysics Data System (ADS)

    Barrett, Steven F.; Jerath, Maya R.; Rylander, Henry G., III; Welch, Ashley J.

    1993-06-01

    Laser induced retinal lesions are used to treat a variety of eye diseases such as diabetic retinopathy and retinal detachment. An instrumentation system has been developed to track a specific lesion coordinate on the retinal surface and provide corrective signals to maintain laser position on the coordinate. High resolution retinal images are acquired via a CCD camera coupled to a fundus camera and video frame grabber. Optical filtering and histogram modification are used to enhance the retinal vessel network against the lighter retinal background. Six distinct retinal landmarks are tracked on the high contrast image obtained from the frame grabber using two-dimensional blood vessel templates. The frame grabber is hosted on a 486 PC. The PC performs correction signal calculations using an exhaustive search on selected image portions. An X and Y laser correction signal is derived from the landmark tracking information and provided to a pair of galvanometer steered mirrors via a data acquisition and control subsystem. This subsystem also responds to patient inputs and the system monitoring lesion growth. This paper begins with an overview of the robotic laser system design followed by implementation and testing of a development system for proof of concept. The paper concludes with specifications for a real time system.
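
    The landmark-tracking idea (exhaustive search for a blood-vessel template over selected image portions) can be sketched with normalised cross-correlation; the template and test frame below are illustrative, not the system's actual vessel templates.

```python
import numpy as np

def track_landmark(frame, template):
    """Locate a retinal landmark template in a frame by exhaustive
    normalised cross-correlation; returns (row, col) of best match."""
    th, tw = template.shape
    t = template - template.mean()
    tn = np.sqrt((t ** 2).sum())
    best, best_pos = -np.inf, (0, 0)
    for r in range(frame.shape[0] - th + 1):
        for c in range(frame.shape[1] - tw + 1):
            w = frame[r:r + th, c:c + tw].astype(float)
            w = w - w.mean()
            denom = tn * np.sqrt((w ** 2).sum())
            if denom == 0:
                continue  # flat window, no correlation defined
            score = (t * w).sum() / denom
            if score > best:
                best, best_pos = score, (r, c)
    return best_pos

# Toy frame containing one copy of the landmark pattern.
frame = np.zeros((10, 10))
template = np.array([[1.0, 2.0], [3.0, 4.0]])
frame[4:6, 5:7] = template
```

    In the full system, the offset between the best-match positions in successive frames would drive the X and Y galvanometer correction signals.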

  5. 3D widefield light microscope image reconstruction without dyes

    NASA Astrophysics Data System (ADS)

    Larkin, S.; Larson, J.; Holmes, C.; Vaicik, M.; Turturro, M.; Jurkevich, A.; Sinha, S.; Ezashi, T.; Papavasiliou, G.; Brey, E.; Holmes, T.

    2015-03-01

    3D image reconstruction using light microscope modalities without exogenous contrast agents is proposed and investigated as an approach to produce 3D images of biological samples for live imaging applications. Multimodality and multispectral imaging, used in concert with this 3D optical sectioning approach is also proposed as a way to further produce contrast that could be specific to components in the sample. The methods avoid usage of contrast agents. Contrast agents, such as fluorescent or absorbing dyes, can be toxic to cells or alter cell behavior. Current modes of producing 3D image sets from a light microscope, such as 3D deconvolution algorithms and confocal microscopy generally require contrast agents. Zernike phase contrast (ZPC), transmitted light brightfield (TLB), darkfield microscopy and others can produce contrast without dyes. Some of these modalities have not previously benefitted from 3D image reconstruction algorithms, however. The 3D image reconstruction algorithm is based on an underlying physical model of scattering potential, expressed as the sample's 3D absorption and phase quantities. The algorithm is based upon optimizing an objective function - the I-divergence - while solving for the 3D absorption and phase quantities. Unlike typical deconvolution algorithms, each microscope modality, such as ZPC or TLB, produces two output image sets instead of one. Contrast in the displayed image and 3D renderings is further enabled by treating the multispectral/multimodal data as a feature set in a mathematical formulation that uses the principal component method of statistics.

  6. 3D ultrasound imaging for prosthesis fabrication and diagnostic imaging

    SciTech Connect

    Morimoto, A.K.; Bow, W.J.; Strong, D.S.

    1995-06-01

    The fabrication of a prosthetic socket for a below-the-knee amputee requires knowledge of the underlying bone structure in order to provide pressure relief for sensitive areas and support for load bearing areas. The goal is to enable the residual limb to bear pressure with greater ease and utility. Conventional methods of prosthesis fabrication are based on limited knowledge about the patient's underlying bone structure. A 3D ultrasound imaging system was developed at Sandia National Laboratories. The imaging system provides information about the location of the bones in the residual limb along with the shape of the skin surface. Computer assisted design (CAD) software can use this data to design prosthetic sockets for amputees. Ultrasound was selected as the imaging modality. A computer model was developed to analyze the effect of the various scanning parameters and to assist in the design of the overall system. The 3D ultrasound imaging system combines off-the-shelf technology for image capturing, custom hardware, and control and image processing software to generate two types of image data -- volumetric and planar. Both volumetric and planar images reveal definition of skin and bone geometry with planar images providing details on muscle fascial planes, muscle/fat interfaces, and blood vessel definition. The 3D ultrasound imaging system was tested on 9 unilateral below-the-knee amputees. Image data was acquired from both the sound limb and the residual limb. The imaging system was operated in both volumetric and planar formats. An x-ray CT (Computed Tomography) scan was performed on each amputee for comparison. Results of the test indicate beneficial use of ultrasound to generate databases for fabrication of prostheses at a lower cost and with better initial fit as compared to manually fabricated prostheses.

  7. 3D Imaging by Mass Spectrometry: A New Frontier

    PubMed Central

    Seeley, Erin H.; Caprioli, Richard M.

    2012-01-01

    Summary Imaging mass spectrometry can generate three-dimensional volumes showing molecular distributions in an entire organ or animal through registration and stacking of serial tissue sections. Here we review the current state of 3D imaging mass spectrometry as well as provide insights and perspectives on the process of generating 3D mass spectral data along with a discussion of the process necessary to generate a 3D image volume. PMID:22276611

  8. High Volume Rate, High Resolution 3D Plane Wave Imaging

    E-print Network

    Wenisch, Thomas F.

    High Volume Rate, High Resolution 3D Plane Wave Imaging, by Ming Yang, Richard Sampson, Siyuan Wei ...; Department of Radiology, University of Michigan, Ann Arbor, MI 48109. Abstract: 3D plane-wave imaging systems ... the image quality of plane-wave systems at the expense of a significant increase in beamforming computational ...

  9. Live-cell 3D super-resolution imaging in

    E-print Network

    Cai, Long

    Live-cell 3D super-resolution imaging in thick biological samples, by Francesca Cella Zanacchi, Zeno ... We demonstrate three-dimensional (3D) super-resolution live-cell imaging through thick specimens (50 ...), with a running read-out laser to image the nuclear protein distribution during the overall experiment, and we ...

  10. Retinal imaging using adaptive optics technology

    PubMed Central

    Kozak, Igor

    2014-01-01

    Adaptive optics (AO) is a technology used to improve the performance of optical systems by reducing the effect of wavefront distortions. Retinal imaging using AO aims to compensate for higher order aberrations originating from the cornea and the lens by using a deformable mirror. The main application of AO retinal imaging has been to assess photoreceptor cell density, spacing, and mosaic regularity in normal and diseased eyes. Apart from photoreceptors, the retinal pigment epithelium, retinal nerve fiber layer, retinal vessel wall and lamina cribrosa can also be visualized with AO technology. Recent interest in AO technology in eye research has resulted in a growing number of reports and publications utilizing this technology in both animals and humans. With the first commercial instruments now available, AO technology is transforming from a research tool into a diagnostic instrument. The current challenges include imaging eyes with less than perfect optical media, formation of normative databases for acquired images such as cone mosaics, and the cost of the technology. The opportunities for AO will include more detailed diagnosis, with description of new findings in retinal diseases and glaucoma, as well as the expansion of AO into clinical trials, which has already started. PMID:24843304

  11. Automatic 2D-to-3D image conversion using 3D examples from the internet

    NASA Astrophysics Data System (ADS)

    Konrad, J.; Brown, G.; Wang, M.; Ishwar, P.; Wu, C.; Mukherjee, D.

    2012-03-01

    The availability of 3D hardware has so far outpaced the production of 3D content. Although to date many methods have been proposed to convert 2D images to 3D stereopairs, the most successful ones involve human operators and, therefore, are time-consuming and costly, while the fully-automatic ones have not yet achieved the same level of quality. This subpar performance is due to the fact that automatic methods usually rely on assumptions about the captured 3D scene that are often violated in practice. In this paper, we explore a radically different approach inspired by our work on saliency detection in images. Instead of relying on a deterministic scene model for the input 2D image, we propose to "learn" the model from a large dictionary of stereopairs, such as YouTube 3D. Our new approach is built upon a key observation and an assumption. The key observation is that among millions of stereopairs available on-line, there likely exist many stereopairs whose 3D content matches that of the 2D input (query). We assume that two stereopairs whose left images are photometrically similar are likely to have similar disparity fields. Our approach first finds a number of on-line stereopairs whose left image is a close photometric match to the 2D query and then extracts depth information from these stereopairs. Since disparities for the selected stereopairs differ due to differences in underlying image content, level of noise, distortions, etc., we combine them by using the median. We apply the resulting median disparity field to the 2D query to obtain the corresponding right image, while handling occlusions and newly-exposed areas in the usual way. We have applied our method in two scenarios. First, we used YouTube 3D videos in search of the most similar frames. Then, we repeated the experiments on a small, but carefully-selected, dictionary of stereopairs closely matching the query. 
This, to a degree, emulates the results one would expect from the use of an extremely large 3D repository. While far from perfect, the presented results demonstrate that on-line repositories of 3D content can be used for effective 2D-to-3D image conversion. With the continuously increasing amount of 3D data on-line and with the rapidly growing computing power in the cloud, the proposed framework seems a promising alternative to operator-assisted 2D-to-3D conversion.
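
    The fusion and rendering steps can be sketched as follows. This is a toy version (integer disparities, horizontal shifts only) of the median-based combination and right-view synthesis described above.

```python
import numpy as np

def fuse_disparities(disparity_fields):
    """Combine per-stereopair disparity estimates by the pixel-wise
    median, a robust vote across the retrieved stereopairs."""
    return np.median(np.stack(disparity_fields, axis=0), axis=0)

def render_right_view(left, disparity):
    """Shift each pixel of the left image by its (rounded) disparity
    to synthesise a right view; naive handling of occlusions (last
    writer wins) and newly exposed areas (left as zeros)."""
    h, w = left.shape
    right = np.zeros_like(left)
    for y in range(h):
        for x in range(w):
            xr = x - int(round(disparity[y, x]))
            if 0 <= xr < w:
                right[y, xr] = left[y, x]
    return right

# Three candidate disparity fields; the outlier (100) is voted out.
fields = [np.full((1, 4), v) for v in (1.0, 2.0, 100.0)]
left = np.array([[1, 2, 3, 4]])
```

    Production systems would instead forward-warp with sub-pixel interpolation and inpaint the newly exposed areas, as the abstract notes.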

  12. A 3D image analysis tool for SPECT imaging

    NASA Astrophysics Data System (ADS)

    Kontos, Despina; Wang, Qiang; Megalooikonomou, Vasileios; Maurer, Alan H.; Knight, Linda C.; Kantor, Steve; Fisher, Robert S.; Simonian, Hrair P.; Parkman, Henry P.

    2005-04-01

    We have developed semi-automated and fully-automated tools for the analysis of 3D single-photon emission computed tomography (SPECT) images. The focus is on the efficient boundary delineation of complex 3D structures that enables accurate measurement of their structural and physiologic properties. We employ intensity based thresholding algorithms for interactive and semi-automated analysis. We also explore fuzzy-connectedness concepts for fully automating the segmentation process. We apply the proposed tools to SPECT image data capturing variation of gastric accommodation and emptying. These image analysis tools were developed within the framework of a noninvasive scintigraphic test to measure simultaneously both gastric emptying and gastric volume after ingestion of a solid or a liquid meal. The clinical focus of the particular analysis was to probe associations between gastric accommodation/emptying and functional dyspepsia. Employing the proposed tools, we outline effectively the complex three dimensional gastric boundaries shown in the 3D SPECT images. We also perform accurate volume calculations in order to quantitatively assess the gastric mass variation. This analysis was performed both with the semi-automated and fully-automated tools. The results were validated against manual segmentation performed by a human expert. We believe that the development of an automated segmentation tool for SPECT imaging of the gastric volume variability will allow for other new applications of SPECT imaging where there is a need to evaluate complex organ function or tumor masses.
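
    The interactive thresholding and volume computation can be sketched in a few lines; the threshold and voxel size below are illustrative, and the fuzzy-connectedness variant is not shown.

```python
import numpy as np

def segment_by_threshold(volume, threshold):
    """Binary segmentation of a SPECT volume by intensity thresholding."""
    return volume >= threshold

def region_volume(mask, voxel_volume_mm3=1.0):
    """Volume estimate: voxel count times per-voxel volume."""
    return float(mask.sum()) * voxel_volume_mm3

# Toy volume: an 8-voxel bright region inside a dark background.
vol = np.zeros((4, 4, 4))
vol[1:3, 1:3, 1:3] = 10.0
```

    With the gastric data, such masks would be computed per time point to quantify accommodation and emptying curves.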

  13. Screening Diabetic Retinopathy Through Color Retinal Images

    NASA Astrophysics Data System (ADS)

    Li, Qin; Jin, Xue-Min; Gao, Quan-Xue; You, Jane; Bhattacharya, Prabir

    Diabetic Retinopathy (DR) is a common complication of diabetes that damages the eye's retina. Recognizing DR as early as possible is very important to protect patients' vision. We propose a method for screening DR and distinguishing Proliferative Diabetic Retinopathy (PDR) from Non-Proliferative Diabetic Retinopathy (NPDR) automatically through color retinal images. This method evaluates the severity of DR by analyzing the appearance of bright lesions and retinal vessel patterns. The bright lesions are extracted through morphological reconstruction. After that, the retinal vessels are automatically extracted using multiscale matched filters. Then the vessel patterns are analyzed by extracting the vessel net density. The experimental results demonstrate that it is an effective solution to screen DR and distinguish PDR from NPDR by only using color retinal images.
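
    A single-scale, two-orientation sketch of the matched-filter idea (vessels as dark, roughly Gaussian ridges); the kernel length, sigma, and test image are illustrative, and the real method uses multiple scales and orientations.

```python
import numpy as np

def correlate_same(image, kernel):
    """Direct 2-D correlation with edge padding (same-size output)."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(image, ((ph, ph), (pw, pw)), mode="edge")
    out = np.zeros(image.shape, dtype=float)
    for r in range(image.shape[0]):
        for c in range(image.shape[1]):
            out[r, c] = (padded[r:r + kh, c:c + kw] * kernel).sum()
    return out

def vessel_response(image, sigma=1.0, half_len=2):
    """Matched-filter response: an inverted, zero-mean Gaussian
    cross-section swept in two orientations; per-pixel maximum."""
    xs = np.arange(-half_len, half_len + 1)
    profile = -np.exp(-xs ** 2 / (2 * sigma ** 2))
    profile -= profile.mean()           # zero mean: flat regions give 0
    kernel = np.tile(profile, (3, 1))   # responds to vertical vessels
    return np.maximum(correlate_same(image, kernel),
                      correlate_same(image, kernel.T))

# Bright background with one dark vertical "vessel" at column 4.
img = np.ones((9, 9))
img[:, 4] = 0.0
```

    The vessel net density used for PDR/NPDR discrimination would then be, for example, the fraction of pixels whose response exceeds a threshold.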

  14. Retinal detachment repair - series (image)

    MedlinePLUS

    Retinal detachments are associated with a tear or hole in the retina through which the internal fluids of the eye may leak, causing separation of the retina from the underlying tissues. This is most often caused by trauma, and ...

  15. Measurable realistic image-based 3D mapping

    NASA Astrophysics Data System (ADS)

    Liu, W.; Wang, J.; Wang, J. J.; Ding, W.; Almagbile, A.

    2011-12-01

    Maps with 3D visual models are becoming a remarkable feature of 3D map services. High-resolution image data is obtained for the construction of 3D visualized models. The 3D map not only provides the capabilities of 3D measurement and knowledge mining, but also provides a virtual experience of places of interest, as demonstrated in Google Earth. Applications of 3D maps are expanding into the areas of architecture, property management, and urban environment monitoring. However, the reconstruction of high-quality 3D models is time consuming, and requires robust hardware and powerful software to handle the enormous amount of data. This is especially true for the automatic generation of 3D models and the representation of complicated surfaces, which still need improvements in visualisation techniques. The shortcoming of 3D model-based maps is their limited coverage of detail, since a user can only view and measure objects that are already modelled in the virtual environment. This paper proposes and demonstrates a 3D map concept that is realistic and image-based, and that enables geometric measurements and geo-location services. Additionally, image-based 3D maps provide more detailed information about the real world than 3D model-based maps. The image-based 3D maps use geo-referenced stereo images or panoramic images. The geometric relationships between objects in the images can be resolved from the geometric model of stereo images. The panoramic function makes 3D maps more interactive for users and also creates an immersive experience. Unmeasurable image-based 3D maps already exist, such as Google Street View, but they only provide virtual experiences in the form of photos; topographic and terrain attributes, such as shapes and heights, are omitted.
    This paper also discusses the potential for using a low-cost land Mobile Mapping System (MMS) to implement realistic image-based 3D mapping, and evaluates the positioning accuracy that a measurable realistic image-based (MRI) system can produce. The major contribution here is the implementation of measurable images on 3D maps to obtain various measurements from real scenes.

  16. Abstract Title: Image Informatics Tools for the Analysis of Retinal Images

    E-print Network

    California at Santa Barbara, University of

    Abstract Title: Image Informatics Tools for the Analysis of Retinal Images. University of California at Santa Barbara, Santa Barbara, CA. Keywords: 682 retinal detachment, 541 image processing, 543 imaging/image analysis. Purpose: ... and quantitative analysis of retinal images, and to test these methods on a large retinal image database. Methods: ...

  17. 3D Imaging with Structured Illumination for Advanced Security Applications

    SciTech Connect

    Birch, Gabriel Carisle; Dagel, Amber Lynn; Kast, Brian A.; Smith, Collin S.

    2015-09-01

    Three-dimensional (3D) information in a physical security system is a highly useful discriminator. Two-dimensional data from an imaging system fails to provide target distance and the three-dimensional motion vector, which can be used to reduce nuisance alarm rates and increase system effectiveness. However, 3D imaging devices designed primarily for use in physical security systems are uncommon. This report discusses an architecture favorable to physical security systems: an inexpensive snapshot 3D imaging system utilizing a simple illumination system. The method of acquiring 3D data, tests to understand illumination design, and software modifications possible to maximize information-gathering capability are discussed.
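
    The depth-recovery step behind structured-light snapshot imaging can be sketched with pinhole triangulation; the z = f * b / d relation and the numbers below are illustrative, not the report's calibration.

```python
import numpy as np

def depth_map(disparity_px, baseline_mm, focal_px):
    """Pinhole triangulation z = f * b / d for each pixel where the
    projected pattern is observed with a valid (non-zero) disparity;
    pixels with no measured disparity are marked as infinitely far."""
    d = np.asarray(disparity_px, dtype=float)
    z = np.full_like(d, np.inf)
    valid = d > 0
    z[valid] = focal_px * baseline_mm / d[valid]
    return z
```

    A per-pixel depth map of this form is exactly what supplies the target distance and 3D motion vector the abstract says 2D imagery lacks.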

  18. Fringe projection 3D microscopy with the general imaging model.

    PubMed

    Yin, Yongkai; Wang, Meng; Gao, Bruce Z; Liu, Xiaoli; Peng, Xiang

    2015-03-01

    Three-dimensional (3D) imaging and metrology of microstructures is a critical task for the design, fabrication, and inspection of microelements. Newly developed fringe projection 3D microscopy is presented in this paper. The system is configured according to camera-projector layout and long working distance lenses. The Scheimpflug principle is employed to make full use of the limited depth of field. For such a specific system, the general imaging model is introduced to reach a full 3D reconstruction. A dedicated calibration procedure is developed to realize quantitative 3D imaging. Experiments with a prototype demonstrate the accessibility of the proposed configuration, model, and calibration approach. PMID:25836904

  19. Digital imaging-based retinal photocoagulation system

    NASA Astrophysics Data System (ADS)

    Barrett, Steven F.; Wright, Cameron H. G.; Oberg, Erik D.; Rockwell, Benjamin A.; Cain, Clarence P.; Rylander, Henry G., III; Welch, Ashley J.

    1997-05-01

    Researchers at the USAF Academy and the University of Texas are developing a computer-assisted retinal photocoagulation system for the treatment of retinal disorders (i.e. diabetic retinopathy, retinal tears). Currently, ophthalmologists manually place therapeutic retinal lesions, an acquired technique that is tiring for both the patient and physician. The computer-assisted system under development can rapidly and safely place multiple therapeutic lesions at desired locations on the retina in a matter of seconds. Separate prototype subsystems have been developed to control lesion depth during irradiation and lesion placement to compensate for retinal movement. Both subsystems have been successfully demonstrated in vivo on pigmented rabbits using an argon continuous wave laser. Two different design approaches are being pursued to combine the capabilities of both subsystems: a digital imaging-based system and a hybrid analog-digital system. This paper will focus on progress with the digital imaging-based prototype system. A separate paper on the hybrid analog-digital system, `Hybrid Retinal Photocoagulation System', is also presented in this session.

  20. ULTRASONIC IMAGING OF 3D DISPLACEMENT VECTORS USING

    E-print Network

    Drummond, Tom

    Ultrasonic Imaging of 3D Displacement Vectors Using a Simulated 2D Array and Beamsteering, by R. James ... ... can be obtained using beamsteering. Introduction: Ultrasonic elastography is a technique ...

  1. Imaging retinal mosaics in the living

    E-print Network

    ... for both basic scientists and clinicians. Recent advances in adaptive optics retinal imaging ... the L- and M-cone ratio in persons with normal color vision. Adaptive optics imaging of persons ... Genetic techniques can determine specific mutations in the cone opsin, revealing the genotype ...

  2. Grain Segmentation of 3D Superalloy Images Using Multichannel EWCVT

    E-print Network

    Wang, Song

    Abstract: Grain segmentation of 3D superalloy images provides the superalloy's micro-structures ... based on annotations/segmentations of 2D superalloy image slices, the 2D results are then combined to reconstruct the 3D ...

  3. 3D surface reconstruction using optical flow for medical imaging

    SciTech Connect

    Weng, Nan; Yang, Yee-Hong; Pierson, R.

    1996-12-31

    The recovery of a 3D model from a sequence of 2D images is very useful in medical image analysis. Image sequences obtained from the relative motion between the object and the camera or the scanner contain more 3D information than a single image. Methods to visualize the computed tomograms can be divided into two approaches: the surface rendering approach and the volume rendering approach. A new surface rendering method using optical flow is proposed. Optical flow is the apparent motion in the image plane produced by the projection of the real 3D motion onto 2D image. In this paper, the object remains stationary while the scanner undergoes translational motion. The 3D motion of an object can be recovered from the optical flow field using additional constraints. By extracting the surface information from 3D motion, it is possible to get an accurate 3D model of the object. Both synthetic and real image sequences have been used to illustrate the feasibility of the proposed method. The experimental results suggest that the proposed method is suitable for the reconstruction of 3D models from ultrasound medical images as well as other computed tomograms.
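
    The optical-flow constraint at the heart of the method can be sketched for the simplest case of a single global translation (matching the scanner's translational motion); a real system adds the additional constraints mentioned above to recover full 3D motion and surface shape. The test images are illustrative.

```python
import numpy as np

def translational_flow(f0, f1):
    """Estimate one translational optical-flow vector (u, v) between
    two frames by least squares on the brightness-constancy equation
    Ix*u + Iy*v + It = 0 (Lucas-Kanade over the whole frame)."""
    Iy, Ix = np.gradient(f0.astype(float))   # gradients along y, x
    It = f1.astype(float) - f0.astype(float)
    A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)
    b = -It.ravel()
    (u, v), *_ = np.linalg.lstsq(A, b, rcond=None)
    return u, v

# A bilinear test surface shifted by 0.5 pixels in x between frames.
y, x = np.mgrid[0:6, 0:6]
f0 = (x * y).astype(float)
f1 = (x - 0.5) * y
```

    The per-pixel version of this estimate, taken over the whole image sequence, yields the optical flow field from which the 3D surface is recovered.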

  4. On Alternative Approaches to 3D Image Perception: Monoscopic 3D Techniques

    NASA Astrophysics Data System (ADS)

    Blundell, Barry G.

    2015-06-01

    In the eighteenth century, techniques that enabled a strong sense of 3D perception to be experienced without recourse to binocular disparities (arising from the spatial separation of the eyes) underpinned the first significant commercial sales of 3D viewing devices and associated content. However following the advent of stereoscopic techniques in the nineteenth century, 3D image depiction has become inextricably linked to binocular parallax and outside the vision science and arts communities relatively little attention has been directed towards earlier approaches. Here we introduce relevant concepts and terminology and consider a number of techniques and optical devices that enable 3D perception to be experienced on the basis of planar images rendered from a single vantage point. Subsequently we allude to possible mechanisms for non-binocular parallax based 3D perception. Particular attention is given to reviewing areas likely to be thought-provoking to those involved in 3D display development, spatial visualization, HCI, and other related areas of interdisciplinary research.

  5. Phase Sensitive Cueing for 3D Objects in Overhead Images

    SciTech Connect

    Paglieroni, D

    2005-02-04

    Locating specific 3D objects in overhead images is an important problem in many remote sensing applications. 3D objects may contain either one connected component or multiple disconnected components. Solutions must accommodate images acquired with diverse sensors at various times of the day, in various seasons of the year, or under various weather conditions. Moreover, the physical manifestation of a 3D object with fixed physical dimensions in an overhead image is highly dependent on object physical dimensions, object position/orientation, image spatial resolution, and imaging geometry (e.g., obliqueness). This paper describes a two-stage computer-assisted approach for locating 3D objects in overhead images. In the matching stage, the computer matches models of 3D objects to overhead images. The strongest degree of match over all object orientations is computed at each pixel. Unambiguous local maxima in the degree of match as a function of pixel location are then found. In the cueing stage, the computer sorts image thumbnails in descending order of figure-of-merit and presents them to human analysts for visual inspection and interpretation. The figure-of-merit associated with an image thumbnail is computed from the degrees of match to a 3D object model associated with unambiguous local maxima that lie within the thumbnail. This form of computer assistance is invaluable when most of the relevant thumbnails are highly ranked, and the amount of inspection time needed is much less for the highly ranked thumbnails than for images as a whole.

  6. Image performance evaluation of a 3D surgical imaging platform

    NASA Astrophysics Data System (ADS)

    Petrov, Ivailo E.; Nikolov, Hristo N.; Holdsworth, David W.; Drangova, Maria

    2011-03-01

    The O-arm (Medtronic Inc.) is a multi-dimensional surgical imaging platform. The purpose of this study was to perform a quantitative evaluation of the imaging performance of the O-arm in an effort to understand its potential for future nonorthopedic applications. Performance of the reconstructed 3D images was evaluated, using a custom-built phantom, in terms of resolution, linearity, uniformity and geometrical accuracy. Both the standard (SD, 13 s) and high definition (HD, 26 s) modes were evaluated, with the imaging parameters set to image the head (120 kVp, 100 mAs and 150 mAs, respectively). For quantitative noise characterization, the images were converted to Hounsfield units (HU) off-line. Measurement of the modulation transfer function revealed a limiting resolution (at 10% level) of 1.0 mm⁻¹ in the axial dimension. Image noise varied between 15 and 19 HU for the HD and SD modes, respectively. Image intensities varied linearly over the measured range, up to 1300 HU. Geometric accuracy was maintained in all three dimensions over the field of view. The present study has evaluated the performance characteristics of the O-arm, and demonstrates feasibility for use in interventional applications and quantitative imaging tasks outside those currently targeted by the manufacturer. Further improvements to the reconstruction algorithms may further enhance performance for lower-contrast applications.
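
    The limiting resolution quoted above is read off the measured MTF at the 10% level. A minimal sketch of that readout, using a synthetic Gaussian MTF as assumed input data (the study's actual MTF samples are not reproduced here):

```python
import numpy as np

def limiting_resolution(freqs, mtf, level=0.10):
    """Spatial frequency at which a sampled, monotonically decreasing
    MTF curve first falls to `level`, via linear interpolation."""
    below = np.where(mtf <= level)[0]
    if len(below) == 0:
        raise ValueError("MTF never reaches the requested level")
    i = below[0]
    if i == 0:
        return freqs[0]
    f0, f1 = freqs[i - 1], freqs[i]     # bracketing samples
    m0, m1 = mtf[i - 1], mtf[i]
    return f0 + (m0 - level) * (f1 - f0) / (m0 - m1)

# Synthetic Gaussian MTF: exp(-(f/fc)^2) reaches 0.10 at f = fc*sqrt(ln 10).
freqs = np.linspace(0.0, 2.0, 201)            # cycles/mm
mtf = np.exp(-(freqs / 0.7) ** 2)
print(round(limiting_resolution(freqs, mtf), 3))  # ≈ 1.062
```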

  7. Recovering 3D tumor locations from 2D bioluminescence images

    E-print Network

    Huang, Xiaolei

    Bioluminescence imaging (BLI) is an emerging technique for sensitive and noninvasive imaging, which can be used to recover 3D tumor locations from 2D bioluminescence images, then register and visualize the reconstructed tumor with detailed animal geometry.

  8. MR image denoising method for brain surface 3D modeling

    NASA Astrophysics Data System (ADS)

    Zhao, De-xin; Liu, Peng-jie; Zhang, De-gan

    2014-11-01

    Three-dimensional (3D) modeling of medical images is a critical part of surgical simulation. In this paper, we focus on denoising magnetic resonance (MR) images for brain model reconstruction and present a practical solution. We attempt to remove the noise in the MR imaging signal while preserving the image characteristics. A wavelet-based adaptive curve shrinkage function is presented in the spherical coordinate system. Comparative experiments show that the denoising method preserves image details better and enhances the coefficients of contours. Using these denoised images, a 3D visualization of the brain is produced through a surface triangle mesh model, which demonstrates the effectiveness of the proposed method.
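
    The paper's shrinkage function operates on wavelet coefficients in spherical coordinates; as a hedged stand-in for that idea, the sketch below soft-thresholds the detail bands of a one-level 2D Haar transform. The threshold value (about 3× the detail-band noise standard deviation for σ = 0.1) and the synthetic test image are assumptions for the demo, not from the paper.

```python
import numpy as np

def haar2(x):
    """One-level 2D Haar transform of an even-sized image -> (LL, (LH, HL, HH))."""
    a = (x[0::2] + x[1::2]) / 2.0           # row averages
    d = (x[0::2] - x[1::2]) / 2.0           # row differences
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return ll, (lh, hl, hh)

def ihaar2(ll, details):
    """Exact inverse of haar2."""
    lh, hl, hh = details
    a = np.empty((ll.shape[0], ll.shape[1] * 2))
    d = np.empty_like(a)
    a[:, 0::2], a[:, 1::2] = ll + lh, ll - lh
    d[:, 0::2], d[:, 1::2] = hl + hh, hl - hh
    x = np.empty((a.shape[0] * 2, a.shape[1]))
    x[0::2], x[1::2] = a + d, a - d
    return x

def soft(c, t):
    """Soft-threshold (shrinkage) operator."""
    return np.sign(c) * np.maximum(np.abs(c) - t, 0.0)

def denoise(img, threshold):
    ll, details = haar2(img)
    return ihaar2(ll, tuple(soft(c, threshold) for c in details))

# Demo: shrinkage reduces the error on a smooth image with Gaussian noise.
rng = np.random.default_rng(1)
clean = np.outer(np.sin(np.linspace(0, np.pi, 64)), np.sin(np.linspace(0, np.pi, 64)))
noisy = clean + rng.normal(0.0, 0.1, clean.shape)
print(np.mean((denoise(noisy, 0.15) - clean) ** 2) < np.mean((noisy - clean) ** 2))  # -> True
```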

  9. RECONSTRUCTION OF 3D TOOTH IMAGES S. Buchaillard1

    E-print Network

    Payan, Yohan

    This work reconstructs the shape of a tooth using only 3D crown information, without the use of X-rays, CT or MRI, for applications such as treatment simulations; for example, a dental implant can be inserted into the jawbone when a tooth is missing. Computed tomography (CT) is the most efficient way of generating 3D objects; however, CT imaging of dental patients…

  10. Adaptive Metamorphs Model for 3D Medical Image Segmentation

    E-print Network

    Huang, Junzhou

    A solid model deforms toward the object boundary. Our 3D segmentation method stems from Metamorphs. Metamorphs [1] is proposed as a new class of deformable models that integrate boundary information…

  11. 3D quantitative phase imaging of neural networks using WDT

    NASA Astrophysics Data System (ADS)

    Kim, Taewoo; Liu, S. C.; Iyer, Raj; Gillette, Martha U.; Popescu, Gabriel

    2015-03-01

    White-light diffraction tomography (WDT) is a recently developed 3D imaging technique based on a quantitative phase imaging system called spatial light interference microscopy (SLIM). The technique has achieved a sub-micron resolution in all three directions with high sensitivity granted by the low-coherence of a white-light source. Demonstrations of the technique on single cell imaging have been presented previously; however, imaging on any larger sample, including a cluster of cells, has not been demonstrated using the technique. Neurons in an animal body form a highly complex and spatially organized 3D structure, which can be characterized by neuronal networks or circuits. Currently, the most common method of studying the 3D structure of neuron networks is by using a confocal fluorescence microscope, which requires fluorescence tagging with either transient membrane dyes or after fixation of the cells. Therefore, studies on neurons are often limited to samples that are chemically treated and/or dead. WDT presents a solution for imaging live neuron networks with a high spatial and temporal resolution, because it is a 3D imaging method that is label-free and non-invasive. Using this method, a mouse or rat hippocampal neuron culture and a mouse dorsal root ganglion (DRG) neuron culture have been imaged in order to see the extension of processes between the cells in 3D. Furthermore, the tomogram is compared with a confocal fluorescence image in order to investigate the 3D structure at synapses.

  12. AUTOMATIC REGISTRATION OF 3D ULTRASOUND IMAGES

    E-print Network

    Drummond, Tom

    …and timing of the echoes are used to create a 2-D grey-level image (B-scan) of a cross-section of the body… description of the imaged anatomy, in much the same manner as is possible with CT or MRI, but with less…

  13. Efficient volume visualization of 3D ultrasound images

    NASA Astrophysics Data System (ADS)

    Kim, Cheol-An; Oh, Jeong Hwan; Park, Hyun Wook

    1999-05-01

    Visualization of 3D data from ultrasound images is a challenging task due to the noisy and fuzzy nature of ultrasound images and the large amount of computation required. This paper presents an efficient volume rendering technique for visualization of 3D ultrasound images using noise-reduction filtering and extraction of the boundary surface with good image quality. A truncated-median filter on 2D ultrasound images is proposed for reducing speckle noise within acceptable processing time. To accommodate the fuzzy nature of the boundary surface in ultrasound images, an adaptive thresholding is also proposed. The threshold decision is based on the idea that the effective boundary surface is estimated from the gray level above an adequate noise threshold and its width along the pixel ray. The proposed rendering method is demonstrated with a 3D fetal ultrasound image of 148 X 194 X 112 voxels. Several preprocessing methods were tested and compared with respect to computation time and subjective image quality. According to this comparison, the proposed volume rendering method shows good performance for volume visualization of 3D ultrasound images.
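
    The adaptive thresholding above takes the boundary surface as the first stretch of voxels along a pixel ray whose gray level stays above a noise threshold for a sufficient width. The following is a hypothetical reading of that rule; the function name, parameters, and sample ray are illustrative, not from the paper.

```python
import numpy as np

def first_surface(ray, noise_level, min_width):
    """Index where a ray of voxel intensities first stays above
    `noise_level` for at least `min_width` consecutive samples, or -1.
    Isolated noise spikes shorter than `min_width` are rejected."""
    run = 0
    for i, v in enumerate(ray):
        run = run + 1 if v > noise_level else 0
        if run >= min_width:
            return i - min_width + 1
    return -1

ray = np.array([2, 5, 3, 40, 4, 30, 35, 33, 31, 6], dtype=float)
print(first_surface(ray, 20, 3))  # -> 5 (the lone spike at index 3 is rejected)
```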

  14. Accommodation response measurements for integral 3D image

    NASA Astrophysics Data System (ADS)

    Hiura, H.; Mishina, T.; Arai, J.; Iwadate, Y.

    2014-03-01

    We measured accommodation responses under integral photography (IP), binocular stereoscopic, and real-object display conditions, each under binocular and monocular viewing. The equipment used was an optometric device and a 3D display. We developed the 3D display for IP and binocular stereoscopic images; it comprises a high-resolution liquid crystal display (LCD) and a high-density lens array. The LCD has a resolution of 468 dpi and a diagonal size of 4.8 inches. The high-density lens array comprises 106 x 69 micro lenses with a focal length of 3 mm and a diameter of 1 mm, arranged in a honeycomb pattern. The 3D display was positioned 60 cm from the observer under the IP and binocular stereoscopic display conditions. The target was presented at eight depth positions relative to the 3D display: 15, 10, and 5 cm in front of the display, on the display panel, and 5, 10, 15 and 30 cm behind it. Under the real-object display condition, the target was displayed on the 3D display panel and the display itself was placed at the eight positions. The results suggest that the IP image induced more natural accommodation responses than the binocular stereoscopic image. The accommodation responses to the IP image were weaker than those to a real object; however, they showed a similar tendency to those of the real object under both viewing conditions. Therefore, IP can induce accommodation to the depth positions of 3D images.

  15. 3D/3D registration of coronary CTA and biplane XA reconstructions for improved image guidance

    SciTech Connect

    Dibildox, Gerardo; Baka, Nora; Walsum, Theo van; Punt, Mark; Aben, Jean-Paul; Schultz, Carl; Niessen, Wiro

    2014-09-15

    Purpose: The authors aim to improve image guidance during percutaneous coronary interventions of chronic total occlusions (CTO) by providing information obtained from computed tomography angiography (CTA) to the cardiac interventionist. To this end, the authors investigate a method to register a 3D CTA model to biplane reconstructions. Methods: The authors developed a method for registering preoperative coronary CTA with intraoperative biplane x-ray angiography (XA) images via 3D models of the coronary arteries. The models are extracted from the CTA and biplane XA images, and are temporally aligned based on CTA reconstruction phase and XA ECG signals. Rigid spatial alignment is achieved with a robust probabilistic point set registration approach using Gaussian mixture models (GMMs). This approach is extended by including orientation in the Gaussian mixtures and by weighting bifurcation points. The method is evaluated on retrospectively acquired coronary CTA datasets of 23 CTO patients for which biplane XA images are available. Results: The Gaussian mixture model approach achieved a median registration accuracy of 1.7 mm. The extended GMM approach including orientation was not significantly different (P > 0.1) but did improve robustness with regards to the initialization of the 3D models. Conclusions: The authors demonstrated that the GMM approach can effectively be applied to register CTA to biplane XA images for the purpose of improving image guidance in percutaneous coronary interventions.
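
    The authors use probabilistic point-set registration with Gaussian mixture models, which handles unknown correspondences. As a simplified stand-in that assumes correspondences are known, rigid spatial alignment can be sketched with the Kabsch/SVD method; the synthetic data below are illustrative.

```python
import numpy as np

def kabsch(P, Q):
    """Least-squares rotation R and translation t mapping point set P
    onto Q (rows are corresponding 3D points)."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)                   # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cq - R @ cp

# Synthetic check: recover a known rotation + translation.
rng = np.random.default_rng(2)
P = rng.random((50, 3))
theta = 0.3
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0, 0.0, 1.0]])
Q = P @ Rz.T + np.array([1.0, -2.0, 0.5])
R, t = kabsch(P, Q)
print(np.allclose(P @ R.T + t, Q))  # -> True
```

    A GMM registration replaces the fixed correspondences with soft assignments updated by expectation-maximization, which is what makes the paper's approach robust to missing and spurious centerline points.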

  16. Computational imaging: Machine learning for 3D microscopy

    NASA Astrophysics Data System (ADS)

    Waller, Laura; Tian, Lei

    2015-07-01

    Artificial neural networks have been combined with microscopy to visualize the 3D structure of biological cells. This could lead to solutions for difficult imaging problems, such as the multiple scattering of light.

  17. Retinal imaging with virtual reality stimulus for studying Salticidae retinas

    NASA Astrophysics Data System (ADS)

    Schiesser, Eric; Canavesi, Cristina; Long, Skye; Jakob, Elizabeth; Rolland, Jannick P.

    2014-12-01

    We present a 3-path optical system for studying the retinal movement of jumping spiders: a visible OLED virtual reality system presents stimulus, while NIR illumination and imaging systems observe retinal movement.

  18. Image based 3D city modeling : Comparative study

    NASA Astrophysics Data System (ADS)

    Singh, S. P.; Jain, K.; Mandla, V. R.

    2014-06-01

    A 3D city model is a digital representation of the Earth's surface and related objects such as buildings, trees, vegetation, and man-made features belonging to urban areas. The demand for 3D city modeling is increasing rapidly for various engineering and non-engineering applications. Four main image-based approaches are generally used to generate virtual 3D city models: sketch-based modeling, procedural-grammar-based modeling, close-range-photogrammetry-based modeling, and modeling based on computer vision techniques. SketchUp, CityEngine, Photomodeler and Agisoft Photoscan are the main software packages representing these approaches, respectively, and each offers different methods suitable for image-based 3D city modeling. A literature review shows that, to date, no comprehensive comparative study exists on creating a complete 3D city model from images. This paper gives a comparative assessment of these four image-based 3D modeling approaches, focusing on data acquisition methods, data processing techniques, and the output 3D model products. The study area for this work is the campus of the civil engineering department, Indian Institute of Technology, Roorkee (India), which acts as a prototype for a city. The study also discusses the governing parameters, practical factors, and work experience with each approach, gives a brief introduction to the strengths and weaknesses of the four techniques, and comments on what each software package can and cannot do. It concludes that every package has advantages and limitations, and that the choice of software depends on the requirements of the 3D project: for simple visualization, SketchUp is a good option; for 3D documentation records, Photomodeler gives good results; for large-city reconstruction, CityEngine is a good product; and Agisoft Photoscan creates a much better 3D model with good texture quality and automatic processing. This comparative study thus provides a useful roadmap for the geomatics user community to create photo-realistic virtual 3D city models using image-based techniques.

  19. Imaging hypoxia using 3D photoacoustic spectroscopy

    NASA Astrophysics Data System (ADS)

    Stantz, Keith M.

    2010-02-01

    Purpose: The objective is to develop a multivariate in vivo hemodynamic model of tissue oxygenation (MiHMO2) based on 3D photoacoustic spectroscopy. Introduction: Low oxygen levels, or hypoxia, deprive cancer cells of oxygen and confer resistance to irradiation, some chemotherapeutic drugs, and oxygen-dependent therapies (phototherapy), leading to treatment failure and poor disease-free and overall survival. For example, clinical studies show that patients with breast carcinomas, cervical cancer, and head and neck carcinomas (HNC) are more likely to suffer local recurrence and metastasis if their tumors are hypoxic. A novel method to noninvasively measure tumor hypoxia, identify its type, and monitor its heterogeneity is devised by measuring tumor hemodynamics, MiHMO2. Material and Methods: Simulations are performed to compare tumor pO2 levels and hypoxia based on physiology - perfusion, fractional plasma volume, fractional cellular volume - and hemoglobin status - oxygen saturation and hemoglobin concentration - using in vivo measurements of breast, prostate, and ovarian tumors. Simulations of MiHMO2 are performed to assess the influence of scanner resolution and of different mathematical models of oxygen delivery. Results: The sensitivity of pO2 and hypoxic fraction to photoacoustic scanner resolution, and their dependence on model complexity, will be presented using hemodynamic parameters for different tumors. Conclusions: Photoacoustic CT spectroscopy provides a unique ability to monitor hemodynamic and cellular physiology in tissue, which can be used to longitudinally monitor tumor oxygenation and its response to anti-angiogenic therapies.

  20. Dedicated 3D photoacoustic breast imaging

    PubMed Central

    Kruger, Robert A.; Kuzmiak, Cherie M.; Lam, Richard B.; Reinecke, Daniel R.; Del Rio, Stephen P.; Steed, Doreen

    2013-01-01

    Purpose: To report the design and imaging methodology of a photoacoustic scanner dedicated to imaging hemoglobin distribution throughout a human breast. Methods: The authors developed a dedicated breast photoacoustic mammography (PAM) system using a spherical detector aperture based on our previous photoacoustic tomography scanner. The system uses 512 detectors with rectilinear scanning. The scan shape is a spiral pattern whose radius varies from 24 to 96 mm, thereby allowing a field of view that accommodates a wide range of breast sizes. The authors measured the contrast-to-noise ratio (CNR) using a target comprised of 1-mm dots printed on clear plastic. Each dot absorption coefficient was approximately the same as a 1-mm thickness of whole blood at 756 nm, the output wavelength of the Alexandrite laser used by this imaging system. The target was immersed in varying depths of an 8% solution of stock Liposyn II-20%, which mimics the attenuation of breast tissue (1.1 cm⁻¹). The spatial resolution was measured using a 6 μm-diameter carbon fiber embedded in agar. The breasts of four healthy female volunteers, spanning a range of breast size from a brassiere C cup to a DD cup, were imaged using a 96-mm spiral protocol. Results: The CNR target was clearly visualized to a depth of 53 mm. Spatial resolution, which was estimated from the full width at half-maximum of a profile across the PAM image of a carbon fiber, was 0.42 mm. In the four human volunteers, the vasculature was well visualized throughout the breast tissue, including to the chest wall. Conclusions: CNR, lateral field-of-view and penetration depth of our dedicated PAM scanning system is sufficient to image breasts as large as 1335 mL, which should accommodate up to 90% of the women in the United States. PMID:24320471
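
    The 0.42 mm spatial-resolution figure above is the full width at half maximum (FWHM) of a profile across the carbon-fiber image. A minimal sketch of an FWHM measurement on a sampled profile, using a synthetic Gaussian as a stand-in for the measured data:

```python
import numpy as np

def fwhm(x, y):
    """Full width at half maximum of a single-peaked profile y(x),
    with linear interpolation at the two half-maximum crossings."""
    half = y.max() / 2.0
    above = np.where(y >= half)[0]
    i, j = above[0], above[-1]
    # np.interp needs increasing xp; order each bracket accordingly.
    xl = np.interp(half, [y[i - 1], y[i]], [x[i - 1], x[i]])
    xr = np.interp(half, [y[j + 1], y[j]], [x[j + 1], x[j]])
    return xr - xl

# Gaussian check: FWHM = 2*sqrt(2*ln 2)*sigma ≈ 2.355*sigma.
x = np.linspace(-3.0, 3.0, 601)
y = np.exp(-x ** 2 / (2 * 0.42 ** 2))
print(round(fwhm(x, y), 3))  # ≈ 0.989 (= 2.355 * 0.42)
```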

  1. 3-D capacitance density imaging system

    DOEpatents

    Fasching, G.E.

    1988-03-18

    Three-dimensional capacitance density imaging of a gasified bed or the like in a containment vessel is achieved using a plurality of electrodes provided circumferentially about the bed in levels and along the bed in channels. The electrodes are individually and selectively excited electrically at each level to produce a plurality of current flux field patterns in the bed at each level. The current flux field patterns are sensed, and a density pattern of the bed at each level is determined. By combining the determined density patterns at each level, a three-dimensional density image of the bed is achieved. 7 figs.

  2. 2D and 3D Elasticity Imaging Using Freehand Ultrasound

    E-print Network

    Drummond, Tom

    …property (e.g., density). Various new scanning techniques are aimed at producing elasticity images… for almost two decades. Elasticity images are produced by estimating and analysing quasistatic deformations…

  3. Fully Automatic 3D Reconstruction of Histological Images

    E-print Network

    Bagci, Ulas

    2009-01-01

    In this paper, we propose a computational framework for 3D volume reconstruction from 2D histological slices using registration algorithms in feature space. To improve the quality of reconstructed 3D volume, first, intensity variations in images are corrected by an intensity standardization process which maps image intensity scale to a standard scale where similar intensities correspond to similar tissues. Second, a subvolume approach is proposed for 3D reconstruction by dividing standardized slices into groups. Third, in order to improve the quality of the reconstruction process, an automatic best reference slice selection algorithm is developed based on an iterative assessment of image entropy and mean square error of the registration process. Finally, we demonstrate that the choice of the reference slice has a significant impact on registration quality and subsequent 3D reconstruction.
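
    The best-reference-slice selection above combines image entropy with the registration's mean square error. The entropy criterion alone can be sketched as follows; this is a simplified stand-in for the paper's iterative assessment, and the histogram range and test slices are assumptions for the demo.

```python
import numpy as np

def shannon_entropy(img, bins=256):
    """Shannon entropy (bits) of an image's intensity histogram,
    assuming intensities standardized to [0, 1]."""
    hist, _ = np.histogram(img, bins=bins, range=(0.0, 1.0))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def pick_reference(slices):
    """Index of the slice with maximal histogram entropy."""
    return int(np.argmax([shannon_entropy(s) for s in slices]))

rng = np.random.default_rng(3)
stack = [np.full((32, 32), 0.5),   # featureless slice: zero entropy
         rng.random((32, 32)),     # feature-rich slice: high entropy
         np.full((32, 32), 0.2)]
print(pick_reference(stack))  # -> 1
```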

  4. Acoustic 3D imaging of dental structures

    SciTech Connect

    Lewis, D.K.; Hume, W.R.; Douglass, G.D.

    1997-02-01

    Our goals for the first year of this three-dimensional electrodynamic imaging project were to determine how to combine flexible, individually addressable arrays; preprocessing of array source signals; spectral extrapolation of received signals; acoustic tomography codes; and acoustic propagation modeling codes. We investigated flexible, individually addressable acoustic array materials to find the best match in power, sensitivity and cost, and settled on PVDF sheet arrays and 3-1 composite material.

  5. Accelerated 3D catheter visualization from triplanar MR projection images.

    PubMed

    Schirra, Carsten Oliver; Weiss, Steffen; Krueger, Sascha; Caulfield, Denis; Pedersen, Steen F; Razavi, Reza; Kozerke, Sebastian; Schaeffter, Tobias

    2010-07-01

    One major obstacle for MR-guided catheterizations is long acquisition times associated with visualizing interventional devices. Therefore, most techniques presented hitherto rely on single-plane imaging to visualize the catheter. Recently, accelerated three-dimensional (3D) imaging based on compressed sensing has been proposed to reduce acquisition times. However, frame rates with this technique remain low, and the 3D reconstruction problem yields a considerable computational load. In X-ray angiography, it is well understood that the shape of interventional devices can be derived in 3D space from a limited number of projection images. In this work, this fact is exploited to develop a method for 3D visualization of active catheters from multiplanar two-dimensional (2D) projection MR images. This is favorable to 3D MRI as the overall number of acquired profiles, and consequently the acquisition time, is reduced. To further reduce measurement times, compressed sensing is employed. Furthermore, a novel single-channel catheter design is presented that combines a solenoidal tip coil in series with a single-loop antenna, enabling simultaneous tip tracking and shape visualization. The tracked tip and catheter properties provide constraints for compressed sensing reconstruction and subsequent 2D/3D curve fitting. The feasibility of the method is demonstrated in phantoms and in an in vivo pig experiment. PMID:20572136

  6. 3D thermography imaging standardization technique for inflammation diagnosis

    NASA Astrophysics Data System (ADS)

    Ju, Xiangyang; Nebel, Jean-Christophe; Siebert, J. Paul

    2005-01-01

    We develop a 3D thermography imaging standardization technique to allow quantitative data analysis. Medical digital infrared thermal imaging is a very sensitive and reliable means of graphically mapping and displaying skin surface temperature, allowing doctors to visualise and quantify temperature changes on the skin surface in colour. The spectrum of colours indicates both hot and cold responses, which may co-exist if the pain associated with an inflammatory focus excites an increase in sympathetic activity. However, because thermography provides only qualitative diagnostic information, it has not gained acceptance in the medical and veterinary communities as a necessary or effective tool in inflammation and tumor detection. Our technique is based on the combination of visual 3D imaging and thermal imaging, mapping 2D thermography images onto a 3D anatomical model. We then rectify the 3D thermogram into a view-independent thermogram and conform it to a standard shape template. The combination of these imaging facilities allows the generation of combined 3D and thermal data from which thermal signatures can be quantified.

  7. Signal Quality Assessment of Retinal Optical Coherence Tomography Images

    PubMed Central

    Huang, Yijun; Gangaputra, Sapna; Lee, Kristine E.; Narkar, Ashwini R.; Klein, Ronald; Klein, Barbara E. K.; Meuer, Stacy M.; Danis, Ronald P.

    2012-01-01

    Purpose The purpose of this article was to assess signal quality of retinal optical coherence tomography (OCT) images from multiple devices using subjective and quantitative measurements. Methods A total of 120 multiframe OCT images from 4 spectral domain OCT devices (Cirrus, RTVue, Spectralis, and 3D OCT-1000) were evaluated subjectively by trained graders, and measured quantitatively using a derived parameter, maximum tissue contrast index (mTCI). An intensity histogram decomposition model was proposed to separate the foreground and background information of OCT images and to calculate the mTCI. The mTCI results were compared with the manufacturer signal index (MSI) provided by the respective devices, and to the subjective grading scores (SGS). Results Statistically significant correlations were observed between the paired methods (i.e., SGS and MSI, SGS and mTCI, and mTCI and MSI). Fisher's Z transformation indicated the Pearson correlation coefficient ρ ≥ 0.8 for all devices. Using the Deming regression, correlation parameters between the paired methods were established. This allowed conversion from the proprietary MSI values to SGS and mTCI that are universally applied to each device. Conclusions The study suggests signal quality of retinal OCT images can be evaluated subjectively and objectively, independent of the devices. Together with the proposed histogram decomposition model, mTCI may be used as a standardization metric for OCT signal quality that would affect measurements. PMID:22427567

  8. A 3D surface imaging system for assessing human obesity

    NASA Astrophysics Data System (ADS)

    Xu, B.; Yu, W.; Yao, M.; Yao, X.; Li, Q.; Pepper, M. R.; Freeland-Graves, J. H.

    2009-08-01

    The increasing prevalence of obesity suggests a need to develop a convenient, reliable and economical tool for assessment of this condition. Three-dimensional (3D) body surface imaging has emerged as an exciting technology for estimation of body composition. This paper presents a new 3D body imaging system, which was designed for enhanced portability, affordability, and functionality. In this system, stereo vision technology was used to satisfy the requirements for a simple hardware setup and fast image acquisitions. The portability of the system was created via a two-stand configuration, and the accuracy of body volume measurements was improved by customizing stereo matching and surface reconstruction algorithms that target specific problems in 3D body imaging. Body measurement functions dedicated to body composition assessment also were developed. The overall performance of the system was evaluated in human subjects by comparison to other conventional anthropometric methods, as well as air displacement plethysmography, for body fat assessment.

  9. 3D ACQUISITION OF ARCHAEOLOGICAL HERITAGE FROM IMAGES

    E-print Network

    Pollefeys, Marc

    Archaeology is one of the sciences where annotations and precise documentation are most important, because evidence… KEY WORDS: photogrammetry, archaeology, heritage conservation, image…

  10. Hybrid segmentation framework for 3D medical image analysis

    NASA Astrophysics Data System (ADS)

    Chen, Ting; Metaxas, Dimitri N.

    2003-05-01

    Medical image segmentation is the process that defines the region of interest in the image volume. Classical segmentation methods such as region-based and boundary-based methods cannot make full use of the information provided by the image. In this paper we propose a general hybrid framework for 3D medical image segmentation that combines the Gibbs Prior model and the deformable model. First, Gibbs Prior models are applied to each slice in a 3D medical image volume and the segmentation results are combined into a 3D binary mask of the object. We then create a deformable mesh based on this 3D binary mask. The deformable model is led to the edge features in the volume with the help of image-derived external forces. The deformable model segmentation result can then be used to update the parameters of the Gibbs Prior models. These methods work recursively to reach a global segmentation solution. The hybrid segmentation framework has been applied to images of the lung, heart, colon, jaw, tumor, and brain. The experimental data include MRI (T1, T2, PD), CT, X-ray, and ultrasound images. High-quality results are achieved at relatively efficient time cost. We also performed validation using expert manual segmentation as the ground truth; the results show that the hybrid segmentation may have further clinical use.

  11. 3-D Terahertz Synthetic-Aperture Imaging and Spectroscopy

    NASA Astrophysics Data System (ADS)

    Henry, Samuel C.

    Terahertz (THz) wavelengths have attracted recent interest in multiple disciplines within engineering and science. Situated between the infrared and the microwave region of the electromagnetic spectrum, THz energy can propagate through non-polar materials such as clothing or packaging layers. Moreover, many chemical compounds, including explosives and many drugs, reveal strong absorption signatures in the THz range. For these reasons, THz wavelengths have great potential for non-destructive evaluation and explosive detection. Three-dimensional (3-D) reflection imaging with considerable depth resolution is also possible using pulsed THz systems. While THz imaging (especially 3-D) systems typically operate in transmission mode, reflection offers the most practical configuration for standoff detection, especially for objects with high water content (like human tissue) which are opaque at THz frequencies. In this research, reflection-based THz synthetic-aperture (SA) imaging is investigated as a potential imaging solution. THz SA imaging results presented in this dissertation are unique in that a 2-D planar synthetic array was used to generate a 3-D image without relying on a narrow time-window for depth isolation [Shen 2005]. Novel THz chemical detection techniques are developed and combined with broadband THz SA capabilities to provide concurrent 3-D spectral imaging. All algorithms are tested with various objects and pressed pellets using a pulsed THz time-domain system in the Northwest Electromagnetics and Acoustics Research Laboratory (NEAR-Lab).
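
    The dissertation's imaging uses a 2-D planar synthetic array with pulsed THz signals; the underlying backprojection step can be illustrated with a minimal monostatic delay-and-sum sketch. The 1-D aperture, ultrasound-like units, and simulated point scatterer below are assumptions for the demo, not the dissertation's configuration.

```python
import numpy as np

def delay_and_sum(signals, elem_x, xs, zs, c, fs):
    """Monostatic delay-and-sum backprojection: each image pixel sums
    every element's echo trace sampled at the round-trip delay from
    that element to the pixel."""
    img = np.zeros((len(zs), len(xs)))
    for sig, ex in zip(signals, elem_x):
        for iz, z in enumerate(zs):
            d = np.sqrt((xs - ex) ** 2 + z ** 2)               # element -> pixel
            idx = np.clip((2 * d / c * fs).astype(int), 0, len(sig) - 1)
            img[iz] += sig[idx]
    return img

# Simulated echoes from a single point scatterer at (x, z) = (0, 30) mm.
c, fs = 1.5, 100.0                    # speed (mm/us) and sample rate (1/us)
elem_x = np.linspace(-10, 10, 21)     # element positions along the aperture (mm)
t = np.arange(8192) / fs              # time axis (us)
signals = [np.exp(-((t - 2 * np.hypot(ex, 30.0) / c) ** 2) / (2 * 0.05 ** 2))
           for ex in elem_x]

xs, zs = np.linspace(-10, 10, 41), np.linspace(20, 40, 41)
img = delay_and_sum(signals, elem_x, xs, zs, c, fs)
iz, ix = np.unravel_index(np.argmax(img), img.shape)
print(float(xs[ix]), float(zs[iz]))   # image peak near (0.0, 30.0)
```

    Delays add coherently only at the true scatterer position, which is why the image peaks there; the dissertation's broadband pulses sharpen this focus in depth without needing a narrow time-window.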

  12. Scalable 3D image conversion and ergonomic evaluation

    NASA Astrophysics Data System (ADS)

    Kishi, Shinsuke; Kim, Sang Hyun; Shibata, Takashi; Kawai, Takashi; Häkkinen, Jukka; Takatalo, Jari; Nyman, Göte

    2008-02-01

    Digital 3D cinema has recently become popular and a number of high-quality 3D films have been produced. However, in contrast with advances in 3D display technology, it has been pointed out that there is a lack of suitable 3D content and content creators. Since 3D display methods and viewing environments vary widely, high-quality content is expected to be reused across multiple platforms. On the other hand, there is increasing interest in the bio-medical effects of image content of various types and there are moves toward international standardization, so 3D content production needs to take into consideration safety and conformity with international guidelines. The aim of the authors' research is to contribute to the production and application of 3D content that is safe and comfortable to watch by developing a scalable 3D conversion technology. In this paper, the authors focus on the process of changing the screen size, examining a conversion algorithm and its effectiveness. The authors evaluated the visual load imposed during the viewing of various 3D content converted by the prototype algorithm as compared with ideal conditions and with content expanded without conversion. Scheffé's paired comparison method was used for evaluation. To examine the effects of screen size reduction on viewers, changes in user impression and experience were elucidated using the IBQ methodology. The results of the evaluation are presented along with a discussion of the effectiveness and potential of the developed scalable 3D conversion algorithm and future research tasks.

  13. User-guided segmentation for volumetric retinal optical coherence tomography images

    PubMed Central

    Yin, Xin; Chao, Jennifer R.; Wang, Ruikang K.

    2014-01-01

    Despite the existence of automatic segmentation techniques, trained graders still rely on manual segmentation to provide retinal layers and features from clinical optical coherence tomography (OCT) images for accurate measurements. To bridge the gap between this time-consuming manual segmentation and currently available automatic segmentation techniques, this paper proposes a user-guided method for segmenting retinal layers and features in OCT images. With this method, by interactively navigating three-dimensional (3-D) OCT images, the user first manually defines user-drawn (sketched) lines at regions where the retinal layers appear so irregular that automatic segmentation often fails to provide satisfactory results. The algorithm is then guided by these sketched lines to trace the entire 3-D retinal layer and anatomical features, using novel layer and edge detectors based on robust likelihood estimation. The layer and edge boundaries are finally obtained to achieve segmentation. Segmentation of retinal layers in mouse and human OCT images demonstrates the reliability and efficiency of the proposed user-guided segmentation method. PMID:25147962

  14. Computerized analysis of pelvic incidence from 3D images

    NASA Astrophysics Data System (ADS)

    Vrtovec, Tomaž; Janssen, Michiel M. A.; Pernuš, Franjo; Castelein, René M.; Viergever, Max A.

    2012-02-01

    The sagittal alignment of the pelvis can be evaluated by the angle of pelvic incidence (PI), which is constant for an arbitrary subject position and orientation and can therefore be compared among subjects in standing, sitting or supine position. In this study, PI was measured from three-dimensional (3D) computed tomography (CT) images of normal subjects that were acquired in the supine position. A novel computerized method, based on image processing techniques, was developed to automatically determine the anatomical references required to measure PI, i.e. the centers of the femoral heads and the center and inclination of the sacral endplate, in 3D. Multiplanar image reformation was applied to obtain perfect sagittal views with all anatomical structures completely in line with the hip axis, from which PI was calculated. The resulting PI (mean ± standard deviation) was 46.6° ± 9.2° for male subjects (N = 189), 47.6° ± 10.7° for female subjects (N = 181), and 47.1° ± 10.0° for all subjects (N = 370). The obtained measurements of PI from 3D images were not biased by acquisition projection or structure orientation, because all anatomical structures were completely in line with the hip axis. The measurements in 3D therefore represent PI according to the actual geometrical relationships among the anatomical structures of the sacrum, pelvis and hips, as observed from the perfect sagittal views.
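Once the landmarks are detected, the angle computation itself is simple vector geometry: PI is the angle between the sacral endplate normal and the line joining the endplate centre to the hip axis. A minimal sketch with hypothetical landmark coordinates (the paper's contribution is the automated landmark detection, not this final step):

```python
import numpy as np

def pelvic_incidence(endplate_center, endplate_normal, head_left, head_right):
    """Angle (degrees) between the sacral endplate normal and the line from
    the endplate centre to the midpoint of the two femoral head centres."""
    hip_centre = (np.asarray(head_left, float) + np.asarray(head_right, float)) / 2
    v = hip_centre - np.asarray(endplate_center, float)
    n = np.asarray(endplate_normal, float)
    cosang = abs(v @ n) / (np.linalg.norm(v) * np.linalg.norm(n))
    return np.degrees(np.arccos(np.clip(cosang, 0.0, 1.0)))
```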

  15. A microfabricated 3-D stem cell delivery scaffold for retinal regenerative therapy

    E-print Network

    Sodha, Sonal

    2009-01-01

    Diseases affecting the retina, such as Age-related Macular Degeneration (AMD) and Retinitis Pigmentosa (RP), result in the degeneration of the photoreceptor cells and can ultimately lead to blindness in patients. There is ...

  16. Automatic 3D Mapping Using Multiple Uncalibrated Close Range Images

    NASA Astrophysics Data System (ADS)

    Rafiei, M.; Saadatseresht, M.

    2013-09-01

    Automatic three-dimensional modelling of the real world has been an important research topic in the geomatics and computer vision fields for many years. With the development of commercial digital cameras and modern image processing techniques, close range photogrammetry is widely utilized in many fields, such as structural measurement, topographic surveying, and architectural and archaeological surveying. As a non-contact technique, photogrammetry provides methods to determine the 3D locations of objects from two-dimensional (2D) images. The problem of estimating the locations of 3D points from multiple images, which often involves simultaneously estimating both 3D geometry (structure) and camera pose (motion), is commonly known as structure from motion (SfM). In this research, a step-by-step approach to generating the 3D point cloud of a scene is considered. After taking images with a camera, corresponding points must be detected in each pair of views; here an efficient SIFT method is used for image matching across large baselines. Next, the camera motion and the 3D positions of the matched feature points are retrieved up to a projective transformation (projective reconstruction). Lacking additional information on the camera or the scene, parallel lines appear non-parallel in such a reconstruction, so the results of the SfM computation are much more useful if a metric reconstruction is obtained; therefore, multiple-view Euclidean reconstruction is applied and discussed. To refine the 3D points and achieve precise results, a more general approach, bundle adjustment, is used. Finally, two real cases (an excavation and a tower) are reconstructed.
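A core building block of any such SfM pipeline is triangulating a 3D point from its matched 2D projections in two views. A minimal sketch using the standard linear (DLT) formulation, with hypothetical camera matrices rather than the authors' data:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two views.
    P1, P2: 3x4 camera projection matrices; x1, x2: 2D image points."""
    # Each view contributes two linear constraints on the homogeneous point X.
    A = np.stack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)   # null vector of A is the homogeneous X
    X = Vt[-1]
    return X[:3] / X[3]           # dehomogenize
```

In a real pipeline this runs for every SIFT correspondence surviving the matching stage, and bundle adjustment then refines all points and cameras jointly.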

  17. 3D printed biomimetic vascular phantoms for assessment of hyperspectral imaging systems

    NASA Astrophysics Data System (ADS)

    Wang, Jianting; Ghassemi, Pejhman; Melchiorri, Anthony; Ramella-Roman, Jessica; Mathews, Scott A.; Coburn, James; Sorg, Brian; Chen, Yu; Pfefer, Joshua

    2015-03-01

    The emerging technique of three-dimensional (3D) printing provides a revolutionary way to fabricate objects with biologically realistic geometries. Previously we have performed optical and morphological characterization of basic 3D printed tissue-simulating phantoms and found them suitable for use in evaluating biophotonic imaging systems. In this study we assess the potential for printing phantoms with irregular, image-defined vascular networks that can be used to provide clinically-relevant insights into device performance. A previously acquired fundus camera image of the human retina was segmented, embedded into a 3D matrix, edited to incorporate the tubular shape of vessels and converted into a digital format suitable for printing. A polymer with biologically realistic optical properties was identified by spectrophotometer measurements of several commercially available samples. Phantoms were printed with the retinal vascular network reproduced as ~1.0 mm diameter channels at a range of depths up to ~3 mm. The morphology of the printed vessels was verified by volumetric imaging with μ-CT. Channels were filled with hemoglobin solutions at controlled oxygenation levels, and the phantoms were imaged by a near-infrared hyperspectral reflectance imaging system. The effect of vessel depth on hemoglobin saturation estimates was studied. Additionally, a phantom incorporating the vascular network at two depths was printed and filled with hemoglobin solution at two different saturation levels. Overall, results indicated that 3D printed phantoms are useful for assessing biophotonic system performance and have the potential to form the basis of clinically-relevant standardized test methods for assessment of medical imaging modalities.

  18. 3D Tumor Shape Reconstruction from 2D Bioluminescence Images and Registration with CT Images

    E-print Network

    Huang, Junzhou

    An efficient algorithm is presented for reconstructing the 3D shapes of tumors from a set of 2D bioluminescence images and registering them with CT images. The bioluminescence images are registered according to the projection of the animal's rotation axis.

  19. 3D reconstruction based on CT image and its application

    NASA Astrophysics Data System (ADS)

    Zhang, Jianxun; Zhang, Mingmin

    2004-03-01

    Reconstructing a 3-D model of the liver and its internal piping system, and simulating the liver operation, can increase the accuracy and safety of liver surgery, minimize surgical trauma, shorten operating time, increase the surgical success rate, reduce medical expenses and promote patient recovery. This paper describes the technology and methods by which the authors construct the 3-D model of the liver and its internal piping system and simulate the liver operation from CT images. A direct volume rendering method establishes the 3-D model of the liver. Under the OpenGL environment, a point-based rendering method displays the liver's internal piping system and the surgical simulation. Finally, the wavelet transform is adopted to compress the medical image data.

  20. 3D FaceCam: a fast and accurate 3D facial imaging device for biometrics applications

    NASA Astrophysics Data System (ADS)

    Geng, Jason; Zhuang, Ping; May, Patrick; Yi, Steven; Tunnell, David

    2004-08-01

    Human faces are fundamentally three-dimensional (3D) objects, and each face has its unique 3D geometric profile. The 3D geometric features of a human face can be used, together with its 2D texture, for rapid and accurate face recognition purposes. Due to the lack of low-cost and robust 3D sensors and effective 3D facial recognition (FR) algorithms, almost all existing FR systems use 2D face images. Genex has developed 3D solutions that overcome the inherent problems in 2D while also addressing limitations in other 3D alternatives. One important aspect of our solution is a unique 3D camera (the 3D FaceCam) that combines multiple imaging sensors within a single compact device to provide instantaneous, ear-to-ear coverage of a human face. This 3D camera uses three high-resolution CCD sensors and a color-encoded pattern projection system. The RGB color information from each pixel is used to compute the range data and generate an accurate 3D surface map. The imaging system uses no moving parts and combines multiple 3D views to provide detailed and complete 3D coverage of the entire face. Images are captured within a fraction of a second and full-frame 3D data is produced within a few seconds. The method provides much better data coverage and accuracy in areas with sharp features or fine details (such as the nose and eyes). Using this 3D data, we have been able to demonstrate that a 3D approach can significantly improve the performance of facial recognition. We have conducted tests in the field in which we varied the lighting conditions and the angle of image acquisition. These tests have shown that the matching results are significantly improved when enrolling a 3D image rather than a single 2D image. With its 3D solutions, Genex is working toward unlocking the promise of powerful 3D FR and transferring FR from a lab technology into a real-world biometric solution.

  1. A pilot study: 3D stereo photogrammetric image superimposition on to 3D CT scan images the future of orthognathic surgery

    E-print Network

    Nebel, Jean-Christophe

    Imaging each side of the face allows software to construct a photo-realistic 3D facial model, which can then be superimposed on to a 3D CT scan image. (Authors include Walker and Donald Hadley; Glasgow Dental Hospital and School, Glasgow, Scotland, UK.)

  2. Automated curved planar reformation of 3D spine images

    NASA Astrophysics Data System (ADS)

    Vrtovec, Tomaz; Likar, Bostjan; Pernus, Franjo

    2005-10-01

    Traditional techniques for visualizing anatomical structures are based on planar cross-sections from volume images, such as images obtained by computed tomography (CT) or magnetic resonance imaging (MRI). However, planar cross-sections taken in the coordinate system of the 3D image often do not provide sufficient or high enough quality diagnostic information, because planar cross-sections cannot follow curved anatomical structures (e.g. arteries, colon, spine). Therefore, not all of the important details can be shown simultaneously in any planar cross-section. To overcome this problem, reformatted images in the coordinate system of the inspected structure must be created. This operation is usually referred to as curved planar reformation (CPR). In this paper we propose an automated method for CPR of 3D spine images, which is based on the image transformation from the standard image-based to a novel spine-based coordinate system. The axes of the proposed spine-based coordinate system are determined from the curve that represents the vertebral column and the rotation of the vertebrae around the spine curve, both of which are described by polynomial models. The optimal polynomial parameters are obtained in an image analysis based optimization framework. The proposed method was qualitatively and quantitatively evaluated on five CT spine images. The method performed well on both normal and pathological cases and was consistent with manually obtained ground truth data. The proposed spine-based CPR benefits from reduced structural complexity in favour of improved feature perception of the spine. The reformatted images are diagnostically valuable and enable easier navigation, manipulation and orientation in 3D space. Moreover, reformatted images may prove useful for segmentation and other image analysis tasks.
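The spine-based coordinate system rests on a polynomial model of the vertebral column curve. A minimal sketch of that idea, fitting polynomials x(z), y(z) to hypothetical centre-line samples (the paper's actual optimization framework is more involved):

```python
import numpy as np

def fit_spine_curve(z, x, y, degree=4):
    """Fit polynomial models x(z), y(z) to vertebral centre-line samples,
    yielding a parametric spine curve usable for curved planar reformation."""
    return np.polyfit(z, x, degree), np.polyfit(z, y, degree)

def curve_point(px, py, z):
    """Evaluate the fitted spine curve at axial position z."""
    return np.polyval(px, z), np.polyval(py, z)
```

Reformatted slices would then be resampled perpendicular to this curve rather than along the scanner axes.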

  3. Adaptive Optics Technology for High-Resolution Retinal Imaging

    PubMed Central

    Lombardo, Marco; Serrao, Sebastiano; Devaney, Nicholas; Parravano, Mariacristina; Lombardo, Giuseppe

    2013-01-01

    Adaptive optics (AO) is a technology used to improve the performance of optical systems by reducing the effects of optical aberrations. The direct visualization of the photoreceptor cells, capillaries and nerve fiber bundles represents the major benefit of adding AO to retinal imaging. Adaptive optics is opening a new frontier for clinical research in ophthalmology, providing new information on the early pathological changes of the retinal microstructures in various retinal diseases. We have reviewed AO technology for retinal imaging, providing information on the core components of an AO retinal camera. The most commonly used wavefront sensing and correcting elements are discussed. Furthermore, we discuss current applications of AO imaging to a population of healthy adults and to the most frequent causes of blindness, including diabetic retinopathy, age-related macular degeneration and glaucoma. We conclude our work with a discussion on future clinical prospects for AO retinal imaging. PMID:23271600

  4. High-speed, digitally refocused retinal imaging with line-field parallel swept source OCT

    NASA Astrophysics Data System (ADS)

    Fechtig, Daniel J.; Kumar, Abhishek; Ginner, Laurin; Drexler, Wolfgang; Leitgeb, Rainer A.

    2015-03-01

    MHz OCT allows mitigating the undesired influence of motion artifacts during retinal assessment, but in state-of-the-art point-scanning OCT this speed comes at the price of increased system complexity. By changing the paradigm from scanning to parallel OCT for in vivo retinal imaging, the three-dimensional (3D) acquisition time is reduced without a trade-off between speed, sensitivity and technological requirements. Furthermore, the intrinsic phase stability allows digital refocusing methods to be applied, increasing the in-focus imaging depth range. Line-field parallel interferometric imaging (LPSI) utilizes a commercially available swept source, a single-axis galvo scanner and a line-scan camera to record 3D data at up to a 1 MHz A-scan rate. Besides line-focus illumination and parallel detection, we mitigate the need for high-speed sensor and laser technology through holographic full-range imaging, which allows the imaging speed to be increased by lower sampling of the optical spectrum. High B-scan rates of up to 1 kHz further allow implementation of label-free optical angiography in 3D by calculating the inter-B-scan speckle variance. We achieve a detection sensitivity of 93.5 (96.5) dB at an equivalent A-scan rate of 1 (0.6) MHz and present 3D in vivo retinal structural and functional imaging utilizing digital refocusing. Our results demonstrate for the first time competitive imaging sensitivity, resolution and speed with a parallel OCT modality. LPSI is in fact currently the fastest OCT device applied to retinal imaging operating at a central wavelength window around 800 nm, with a detection sensitivity higher than 93.5 dB.

  5. 3D Measurements in Images using CAD Models George Vosselman

    E-print Network

    Vosselman, George

    George Vosselman, Delft University of Technology. Keywords: Measurement, Matching, CAD-Models. The paper describes semi-automatic measurement of objects with regular shapes in images using CAD models; related work covers the manipulation of wire frames and the tools available in CAD packages.

  6. 3D Correlative Imaging | High Resolution Electron Microscopy

    Cancer.gov

    One key area of interest for the lab has been to close the 3D imaging gap, finding ways to image whole cells and tissues at high resolution. Focused ion beam scanning electron microscopy (FIB-SEM, also known as ion abrasion scanning electron microscopy, IA-SEM) uses a scanning electron beam to image the face of a fixed, resin-embedded sample, and an ion beam to remove "slices" of the sample, resulting in a sequential stack of high-resolution images.

  7. 3-D object-oriented image analysis of geophysical data

    NASA Astrophysics Data System (ADS)

    Fadel, I.; Kerle, N.; van der Meijde, M.

    2014-07-01

    Geophysical data are the main source of information about the subsurface. Geophysical techniques are, however, highly non-unique in determining specific physical parameters and boundaries of subsurface objects. To obtain actual physical information, an inversion process is often applied, in which measurements at or above the Earth surface are inverted into a 2- or 3-D subsurface spatial distribution of the physical property. Interpreting these models into structural objects, related to physical processes, requires a priori knowledge and expert analysis which is susceptible to subjective choices and is therefore often non-repeatable. In this research, we implemented a recently introduced object-based approach to interpret the 3-D inversion results of a single geophysical technique using the available a priori information and the physical and geometrical characteristics of the interpreted objects. The introduced methodology is semi-automatic and repeatable, and allows the extraction of subsurface structures using 3-D object-oriented image analysis (3-D OOA) in an objective knowledge-based classification scheme. The approach allows for a semi-objective setting of thresholds that can be tested and, if necessary, changed in a very fast and efficient way. These changes require only changing the thresholds used in a so-called ruleset, which is composed of algorithms that extract objects from a 3-D data cube. The approach is tested on a synthetic model, which is based on a priori knowledge on objects present in the study area (Tanzania). Object characteristics and thresholds were well defined in a 3-D histogram of velocity versus depth, and objects were fully retrieved. The real model results showed how 3-D OOA can deal with realistic 3-D subsurface conditions in which the boundaries become fuzzy, the object extensions become unclear and the model characteristics vary with depth due to the different physical conditions. 
As expected, the 3-D histogram of the real data was substantially more complex. Still, the 3-D OOA-derived objects were extracted based on their velocity and their depth location. Spatially defined boundaries, based on physical variations, can improve the modelling with spatially dependent parameter information. With 3-D OOA, the non-uniqueness on the location of objects and their physical properties can be potentially significantly reduced.

  8. An automated 3D reconstruction method of UAV images

    NASA Astrophysics Data System (ADS)

    Liu, Jun; Wang, He; Liu, Xiaoyang; Li, Feng; Sun, Guangtong; Song, Ping

    2015-10-01

    In this paper a novel, fully automated 3D reconstruction approach based on low-altitude unmanned aerial vehicle (UAV) images is presented, which requires neither prior camera calibration nor any other external knowledge. Dense 3D point clouds are generated by integrating orderly feature extraction, image matching, structure from motion (SfM) and multi-view stereo (MVS) algorithms, overcoming many of the cost and time limitations of rigorous photogrammetry techniques. An image topology analysis strategy is introduced to speed up large-scene reconstruction by taking advantage of the flight-control data acquired by the UAV: the image topology map can significantly reduce the running time of feature matching by limiting the combinations of images. A high-resolution digital surface model of the study area is produced based on the UAV point clouds by constructing a triangular irregular network. Experimental results show that the proposed approach is robust and feasible for automatic 3D reconstruction from low-altitude UAV images, and has great potential for the acquisition of spatial information for large-scale mapping, being especially suitable for rapid response and precise modelling in disaster emergencies.

  9. Automated lesion detectors in retinal fundus images.

    PubMed

    Figueiredo, I N; Kumar, S; Oliveira, C M; Ramos, J D; Engquist, B

    2015-11-01

    Diabetic retinopathy (DR) is a sight-threatening condition occurring in persons with diabetes, which causes progressive damage to the retina. The early detection and diagnosis of DR is vital for saving the vision of diabetic persons. The early signs of DR which appear on the surface of the retina are dark lesions such as microaneurysms (MAs) and hemorrhages (HEMs), and bright lesions (BLs) such as exudates. In this paper, we propose a novel automated system for the detection and diagnosis of these retinal lesions by processing retinal fundus images. We devise appropriate binary classifiers for these three different types of lesions. Some novel contextual/numerical features are derived for each lesion type, depending on its inherent properties. This is performed by analysing several wavelet bands (resulting from the isotropic undecimated wavelet transform decomposition of the retinal image green channel) and by using an appropriate combination of Hessian multiscale analysis, variational segmentation and cartoon+texture decomposition. The proposed methodology has been validated on several medical datasets, with a total of 45,770 images, using standard performance measures such as sensitivity and specificity. The individual per-frame performance of the MA detector is 93% sensitivity and 89% specificity, of the HEM detector 86% sensitivity and 90% specificity, and of the BL detector 90% sensitivity and 97% specificity. Regarding the collective performance of these binary detectors as an automated screening system for DR (meaning that a patient is considered to have DR if positive for at least one of the detectors), it achieves an average of 95-100% sensitivity and 70% specificity on a per-patient basis. Furthermore, evaluation conducted on publicly available datasets, for comparison with other existing techniques, shows the promising potential of the proposed detectors. PMID:26378502
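The quoted figures are standard confusion-matrix measures. For reference, a small helper computing sensitivity and specificity from binary detector outputs (illustrative only, not the authors' evaluation code):

```python
def sensitivity_specificity(y_true, y_pred):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP),
    for binary ground-truth labels and binary detector outputs."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t and p)
    tn = sum(1 for t, p in zip(y_true, y_pred) if not t and not p)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t and not p)
    fp = sum(1 for t, p in zip(y_true, y_pred) if not t and p)
    return tp / (tp + fn), tn / (tn + fp)
```

For the per-patient screening figure, `y_pred` would be the OR of the three lesion detectors over each patient's images.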

  10. Automated registration of 3D-range with 2D-color images: an overview

    E-print Network

    Stamos, Ioannis

    [Pipeline overview: range images feed 3D line extraction and 3D line clustering, while 2D images feed 2D line extraction and 2D feature extraction, prior to registration.]

  11. 3D acoustic imaging applied to the Baikal Neutrino Telescope

    E-print Network

    K. G. Kebkal; R. Bannasch; O. G. Kebkal; A. I. Panfilov; R. Wischnewski

    2008-11-07

    A hydro-acoustic imaging system was tested in a pilot study on distant localization of elements of the Baikal underwater neutrino telescope. For this innovative approach, based on broad band acoustic echo signals and strictly avoiding any active acoustic elements on the telescope, the imaging system was temporarily installed just below the ice surface, while the telescope stayed in its standard position at 1100 m depth. The system comprised an antenna with four acoustic projectors positioned at the corners of a 50 meter square; acoustic pulses were "linear sweep-spread signals" - multiple-modulated wide-band signals (10-22 kHz) of 51.2 s duration. Three large objects (two string buoys and the central electronics module) were localized by the 3D acoustic imaging, with an accuracy of ~0.2 m (along the beam) and ~1.0 m (transverse). We discuss signal forms and parameters necessary for improved 3D acoustic imaging of the telescope, and suggest a layout of a possible stationary bottom based 3D imaging setup. The presented technique may be of interest for neutrino telescopes of km3-scale and beyond, as a flexible temporary or as a stationary tool to localize basic telescope elements, while these are completely passive.
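The ranging principle behind such sweep-spread signals is matched filtering: cross-correlate the received waveform with the known sweep and take the correlation peak as the arrival time. A minimal sketch with a hypothetical sample rate and chirp, not the Baikal system's actual 10-22 kHz signals:

```python
import numpy as np

def toa_by_matched_filter(tx, rx, fs):
    """Estimate the arrival time (s) of a known sweep `tx` inside a noisy
    recording `rx` sampled at `fs` Hz, via cross-correlation."""
    corr = np.correlate(rx, tx, mode="valid")   # matched filter output
    return np.argmax(corr) / fs                 # peak lag -> arrival time
```

Multiplying the arrival time by the sound speed in water gives the range; four projectors then allow 3D localization by multilateration.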

  12. 1024 pixels single photon imaging array for 3D ranging

    NASA Astrophysics Data System (ADS)

    Bellisai, S.; Guerrieri, F.; Tisa, S.; Zappa, F.; Tosi, A.; Giudice, A.

    2011-01-01

    Three-dimensional (3D) acquisition systems are driving applications in many research fields. Nowadays 3D acquisition systems are used in a wide range of applications, such as the cinema industry or automotive (for active safety systems). Depending on the application, systems present different features, for example color sensitivity, bi-dimensional image resolution, distance measurement accuracy and acquisition frame rate. The system we developed acquires 3D movies using indirect time of flight (iTOF), starting from phase delay measurements of sinusoidally modulated light. The system acquires live movies with a frame rate of up to 50 frames/s at distances between 10 cm and 7.5 m.
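The iTOF distance recovery from phase delay follows d = c·Δφ/(4π·f_mod), with an unambiguous range of c/(2·f_mod). The abstract does not state the modulation frequency; the quoted 7.5 m maximum is consistent with a hypothetical ~20 MHz modulation, used below purely for illustration:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def itof_distance(phase_delay_rad, f_mod_hz):
    """Distance from the measured phase delay of sinusoidally modulated light."""
    return C * phase_delay_rad / (4 * math.pi * f_mod_hz)

def unambiguous_range(f_mod_hz):
    """Maximum distance before the round-trip phase wraps past 2*pi."""
    return C / (2 * f_mod_hz)
```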

  13. INTRODUCTION A 3D image of skeletal hard tissue can be obtained using

    E-print Network

    Nebel, Jean-Christophe

    Superimposition of the C3D stereophotographic image over the 3D spiral CT scan image of the skull demonstrates that registration of the two modalities, stereo photogrammetry and 3D spiral CT, is possible, with good registration accuracy. (Dr. B. S. Khambay and colleagues; see the companion pilot study on 3D stereo photogrammetric image superimposition on to 3D CT scan images, the future of orthognathic surgery.)

  14. Large distance 3D imaging of hidden objects

    NASA Astrophysics Data System (ADS)

    Rozban, Daniel; Aharon Akram, Avihai; Kopeika, N. S.; Abramovich, A.; Levanon, Assaf

    2014-06-01

    Imaging systems in millimeter waves are required for applications in medicine, communications, homeland security, and space technology. This is because there is no known ionization hazard for biological tissue, and atmospheric attenuation in this range of the spectrum is low compared to that of infrared and optical rays. The lack of an inexpensive room temperature detector makes it difficult to realize suitable real-time implementations for the above applications. A 3D MMW imaging system based on chirp radar was studied previously using a scanning imaging system with a single detector. The system presented here employs the chirp radar method with a Glow Discharge Detector (GDD) focal plane array (an FPA of plasma-based detectors) using heterodyne detection. The intensity at each pixel in the GDD FPA yields the usual 2D image, while the value of the IF frequency yields the range information at each pixel. This enables 3D MMW imaging. In this work we experimentally demonstrate the feasibility of implementing an imaging system based on radar principles and an FPA of inexpensive detectors. This imaging system is shown to be capable of imaging objects from distances of at least 10 meters.
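In chirp (FMCW) radar the IF beat frequency maps linearly to range via R = c·f_IF·T/(2B), where T is the sweep duration and B the sweep bandwidth. A small sketch of that mapping with hypothetical parameters (the paper's chirp settings are not given in this abstract):

```python
C = 299_792_458.0  # speed of light, m/s

def chirp_range(f_if_hz, sweep_time_s, bandwidth_hz):
    """Target range (m) from the measured IF beat frequency of an FMCW chirp."""
    return C * f_if_hz * sweep_time_s / (2 * bandwidth_hz)
```

Applied per pixel of the GDD array, this converts the 2D intensity image plus per-pixel IF frequencies into a 3D image.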

  15. 3D imaging of fetus vertebra by synchrotron radiation microtomography

    NASA Astrophysics Data System (ADS)

    Peyrin, Francoise; Pateyron-Salome, Murielle; Denis, Frederic; Braillon, Pierre; Laval-Jeantet, Anne-Marie; Cloetens, Peter

    1997-10-01

    A synchrotron radiation computed microtomography system allowing high-resolution 3D imaging of bone samples has been developed at the ESRF. The system uses a high-resolution 2D detector based on a CCD camera coupled to a fluorescent screen through light optics. The spatial resolution of the device is particularly well adapted to imaging bone structure. To study growth, fetal vertebra samples at different gestational ages were imaged. The first results show that the fetal vertebra is quite different from adult bone, both in terms of density and organization.

  16. Texture blending on 3D models using casual images

    NASA Astrophysics Data System (ADS)

    Liu, Xingming; Liu, Xiaoli; Li, Ameng; Liu, Junyao; Wang, Huijing

    2013-12-01

    In this paper, a method for constructing a photorealistic textured model using a 3D structured light digitizer is presented. Our method acquires range images and texture images around the object; the range images are registered and integrated to construct a geometric model of the object. The system is calibrated and the poses of the texture camera are determined so that the relationship between the texture and the geometric model is established. After that, a global optimization is applied to assign compatible textures to adjacent surfaces, followed by a leveling procedure to remove artifacts due to varying lighting, the approximate geometric model and so on. Lastly, we demonstrate the effect of our method by constructing a model of a real-world object.

  17. Automated reconstruction of 3D scenes from sequences of images

    NASA Astrophysics Data System (ADS)

    Pollefeys, M.; Koch, R.; Vergauwen, M.; Van Gool, L.

    Modelling of 3D objects from image sequences is a challenging problem and has been an important research topic in the areas of photogrammetry and computer vision for many years. In this paper, a system is presented which automatically extracts a textured 3D surface model from a sequence of images of a scene. The system can deal with unknown camera settings. In addition, the parameters of this camera are allowed to change during acquisition (e.g., by zooming or focusing). No prior knowledge about the scene is necessary to build the 3D models. Therefore, this system offers a high degree of flexibility. The system is based on state-of-the-art algorithms recently developed in computer vision. The 3D modelling task is decomposed into a number of successive steps. Gradually, more knowledge of the scene and the camera setup is retrieved. At this point, the obtained accuracy is not yet at the level required for most metrology applications, but the visual quality is very convincing. This system has been applied to a number of applications in archaeology. The Roman site of Sagalassos (southwest Turkey) was used as a test case to illustrate the potential of this new approach.

  18. High-resolution 3-D imaging of objects through walls

    NASA Astrophysics Data System (ADS)

    Schechter, Richard S.; Chun, Sung-Taek

    2010-11-01

    This paper describes the use of microwaves to accurately image objects behind dielectric walls. The data are first simulated by using a finite-difference time-domain code. A large model of a room with walls and objects inside is used as a test case. Since the model and associated volume are big compared to wavelengths, the code is run on a parallel supercomputer. A fixed 2-D receiver array captures all the return data simultaneously. A time-domain backprojection algorithm with a correction for the time delay and refraction caused by the front wall then reconstructs high-fidelity 3-D images. A rigorous refraction correction using Snell's law and a simpler but faster linear correction are compared in both 2-D and 3-D. It is shown that imaging in 3-D and viewing an image in the plane parallel to the receiver array is necessary to identify objects by shape. It is also shown that a simple linear correction for the wall is sufficient.
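
    The reconstruction core is time-domain delay-and-sum backprojection. The free-space sketch below uses straight-line propagation delays; the paper's wall correction would replace these with refracted-path delays. Array shapes and names are illustrative assumptions:

```python
import numpy as np

# Hedged sketch of free-space time-domain backprojection (delay-and-sum):
# each voxel sums the received traces sampled at the round-trip travel time
# from transmitter to voxel to receiver. Straight-line delays only; the
# refraction correction described in the paper is omitted here.

C = 3.0e8  # propagation speed, m/s

def backproject(traces, t, rx_positions, tx_position, voxels):
    """traces: (n_rx, n_t) received signals sampled at times t (seconds).
    rx_positions: (n_rx, 3); tx_position: (3,); voxels: (n_v, 3).
    Returns one backprojected amplitude per voxel."""
    image = np.zeros(len(voxels))
    dt = t[1] - t[0]
    for i, v in enumerate(voxels):
        d_tx = np.linalg.norm(v - tx_position)
        d_rx = np.linalg.norm(rx_positions - v, axis=1)
        delays = (d_tx + d_rx) / C                  # round-trip travel times
        idx = np.round((delays - t[0]) / dt).astype(int)
        valid = (idx >= 0) & (idx < traces.shape[1])
        image[i] = traces[np.arange(len(idx))[valid], idx[valid]].sum()
    return image
```

    With a 2D receiver array, evaluating this over a voxel grid yields the 3-D image whose planes parallel to the array reveal object shape.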

  19. An image reconstruction algorithm for 3-D electrical impedance mammography.

    PubMed

    Zhang, Xiaolin; Wang, Wei; Sze, Gerald; Barber, David; Chatwin, Chris

    2014-12-01

    The Sussex MK4 electrical impedance mammography system is especially designed for 3-D breast screening. It aims to diagnose breast cancer at an early stage, when it is most treatable. Planar electrodes are employed in this system. The challenge with planar electrodes is the inaccuracy and poor sensitivity in the vertical direction for 3-D imaging. An enhanced image reconstruction algorithm using a duo-mesh method is proposed to improve the vertical accuracy and sensitivity. The novel part of the enhanced image reconstruction algorithm is the correction term. To evaluate the new algorithm, an image-processing-based error analysis method is presented, which not only can precisely assess the error of the reconstructed image but can also locate the center and outline the shape of the objects of interest. Although the enhanced image reconstruction algorithm and the image-processing-based error analysis method are designed for the Sussex MK4 system, they are applicable to all electrical impedance tomography systems, regardless of the hardware design. To validate the enhanced algorithm, performance results from simulations, phantoms and patients are presented. PMID:25014954

  20. Alignment of multimodality, 2D and 3D breast images

    NASA Astrophysics Data System (ADS)

    Grevera, George J.; Udupa, Jayaram K.

    2003-05-01

    In a larger effort, we are studying methods to improve the specificity of the diagnosis of breast cancer by combining the complementary information available from multiple imaging modalities. Merging information is important for a number of reasons. For example, contrast uptake curves are an indication of malignancy. The determination of anatomical locations in corresponding images from various modalities is necessary to ascertain the extent of regions of tissue. To facilitate this fusion, registration becomes necessary. We describe in this paper a framework in which 2D and 3D breast images from MRI, PET, Ultrasound, and Digital Mammography can be registered to facilitate this goal. Briefly, prior to image acquisition, an alignment grid is drawn on the breast skin. Modality-specific markers are then placed at the indicated grid points. Images are then acquired by a specific modality with the modality-specific external markers in place, causing the markers to appear in the images. This is the first study that we are aware of that has undertaken the difficult task of registering 2D and 3D images of such a highly deformable organ (the breast) across such a wide variety of modalities. This paper reports some very preliminary results from this project.

  1. Adaptive-optics SLO imaging combined with widefield OCT and SLO enables precise 3D localization of fluorescent cells in the mouse retina.

    PubMed

    Zawadzki, Robert J; Zhang, Pengfei; Zam, Azhar; Miller, Eric B; Goswami, Mayank; Wang, Xinlei; Jonnal, Ravi S; Lee, Sang-Hyuck; Kim, Dae Yu; Flannery, John G; Werner, John S; Burns, Marie E; Pugh, Edward N

    2015-06-01

    Adaptive optics scanning laser ophthalmoscopy (AO-SLO) has recently been used to achieve exquisite subcellular resolution imaging of the mouse retina. Wavefront sensing-based AO typically restricts the field of view to a few degrees of visual angle. As a consequence the relationship between AO-SLO data and larger scale retinal structures and cellular patterns can be difficult to assess. The retinal vasculature affords a large-scale 3D map on which cells and structures can be located during in vivo imaging. Phase-variance OCT (pv-OCT) can efficiently image the vasculature with near-infrared light in a label-free manner, allowing 3D vascular reconstruction with high precision. We combined widefield pv-OCT and SLO imaging with AO-SLO reflection and fluorescence imaging to localize two types of fluorescent cells within the retinal layers: GFP-expressing microglia, the resident macrophages of the retina, and GFP-expressing cone photoreceptor cells. We describe in detail a reflective afocal AO-SLO retinal imaging system designed for high resolution retinal imaging in mice. The optical performance of this instrument is compared to other state-of-the-art AO-based mouse retinal imaging systems. The spatial and temporal resolution of the new AO instrumentation was characterized with angiography of retinal capillaries, including blood-flow velocity analysis. Depth-resolved AO-SLO fluorescent images of microglia and cone photoreceptors are visualized in parallel with 469 nm and 663 nm reflectance images of the microvasculature and other structures. Additional applications of the new instrumentation are discussed. PMID:26114038

  2. Adaptive-optics SLO imaging combined with widefield OCT and SLO enables precise 3D localization of fluorescent cells in the mouse retina

    PubMed Central

    Zawadzki, Robert J.; Zhang, Pengfei; Zam, Azhar; Miller, Eric B.; Goswami, Mayank; Wang, Xinlei; Jonnal, Ravi S.; Lee, Sang-Hyuck; Kim, Dae Yu; Flannery, John G.; Werner, John S.; Burns, Marie E.; Pugh, Edward N.

    2015-01-01

    Adaptive optics scanning laser ophthalmoscopy (AO-SLO) has recently been used to achieve exquisite subcellular resolution imaging of the mouse retina. Wavefront sensing-based AO typically restricts the field of view to a few degrees of visual angle. As a consequence the relationship between AO-SLO data and larger scale retinal structures and cellular patterns can be difficult to assess. The retinal vasculature affords a large-scale 3D map on which cells and structures can be located during in vivo imaging. Phase-variance OCT (pv-OCT) can efficiently image the vasculature with near-infrared light in a label-free manner, allowing 3D vascular reconstruction with high precision. We combined widefield pv-OCT and SLO imaging with AO-SLO reflection and fluorescence imaging to localize two types of fluorescent cells within the retinal layers: GFP-expressing microglia, the resident macrophages of the retina, and GFP-expressing cone photoreceptor cells. We describe in detail a reflective afocal AO-SLO retinal imaging system designed for high resolution retinal imaging in mice. The optical performance of this instrument is compared to other state-of-the-art AO-based mouse retinal imaging systems. The spatial and temporal resolution of the new AO instrumentation was characterized with angiography of retinal capillaries, including blood-flow velocity analysis. Depth-resolved AO-SLO fluorescent images of microglia and cone photoreceptors are visualized in parallel with 469 nm and 663 nm reflectance images of the microvasculature and other structures. Additional applications of the new instrumentation are discussed. PMID:26114038

  3. Automated Recognition of 3D Features in GPIR Images

    NASA Technical Reports Server (NTRS)

    Park, Han; Stough, Timothy; Fijany, Amir

    2007-01-01

    A method of automated recognition of three-dimensional (3D) features in images generated by ground-penetrating imaging radar (GPIR) is undergoing development. GPIR 3D images can be analyzed to detect and identify such subsurface features as pipes and other utility conduits. Until now, much of the analysis of GPIR images has been performed manually by expert operators who must visually identify and track each feature. The present method is intended to satisfy a need for more efficient and accurate analysis by means of algorithms that can automatically identify and track subsurface features, with minimal supervision by human operators. In this method, data from multiple sources (for example, data on different features extracted by different algorithms) are fused together for identifying subsurface objects. The algorithms of this method can be classified in several different ways. In one classification, the algorithms fall into three classes: (1) image-processing algorithms, (2) feature-extraction algorithms, and (3) a multiaxis data-fusion/pattern-recognition algorithm that includes a combination of machine-learning, pattern-recognition, and object-linking algorithms. The image-processing class includes preprocessing algorithms for reducing noise and enhancing target features for pattern recognition. The feature-extraction algorithms operate on preprocessed data to extract such specific features in images as two-dimensional (2D) slices of a pipe. Then the multiaxis data-fusion/pattern-recognition algorithm identifies, classifies, and reconstructs 3D objects from the extracted features. In this process, multiple 2D features extracted by use of different algorithms and representing views along different directions are used to identify and reconstruct 3D objects.
In object linking, which is an essential part of this process, features identified in successive 2D slices and located within a threshold radius of identical features in adjacent slices are linked in a directed-graph data structure. Relative to past approaches, this multiaxis approach offers the advantages of more reliable detections, better discrimination of objects, and provision of redundant information, which can be helpful in filling gaps in feature recognition by one of the component algorithms. The image-processing class also includes postprocessing algorithms that enhance identified features to prepare them for further scrutiny by human analysts (see figure). Enhancement of images as a postprocessing step is a significant departure from traditional practice, in which enhancement of images is a preprocessing step.
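
    The object-linking step lends itself to a compact sketch: features detected in successive 2D slices are chained when they fall within a threshold radius of a feature in the adjacent slice. The feature coordinates and the greedy nearest-match rule below are illustrative assumptions, not the paper's exact directed-graph implementation:

```python
import numpy as np

# Hedged sketch of object linking: centroids of 2D features in successive
# slices are linked into chains when within a threshold radius of a feature
# in the previous slice; unmatched features start new objects.

def link_features(slices, radius):
    """slices: list of (n_i, 2) arrays of feature centroids, one per depth slice.
    Returns a list of chains, each a list of (slice_index, feature_index)."""
    chains = [[(0, j)] for j in range(len(slices[0]))]
    for s in range(1, len(slices)):
        # chains whose most recent feature lies in the previous slice
        anchors = [(c, slices[s - 1][c[-1][1]]) for c in chains if c[-1][0] == s - 1]
        used = set()
        for j, p in enumerate(slices[s]):
            best, best_d = None, radius
            for k, (c, q) in enumerate(anchors):
                d = float(np.hypot(*(p - q)))
                if k not in used and d <= best_d:
                    best, best_d = k, d
            if best is not None:
                used.add(best)
                anchors[best][0].append((s, j))   # extend the existing object
            else:
                chains.append([(s, j)])           # feature starts a new object
    return chains
```

    A pipe appears as one long chain drifting slowly across slices, while isolated clutter produces short chains that can be discarded.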

  4. Optimal Point Spread Function Design for 3D Imaging

    NASA Astrophysics Data System (ADS)

    Shechtman, Yoav; Sahl, Steffen J.; Backer, Adam S.; Moerner, W. E.

    2014-09-01

    To extract from an image of a single nanoscale object maximum physical information about its position, we propose and demonstrate a framework for pupil-plane modulation for 3D imaging applications requiring precise localization, including single-particle tracking and superresolution microscopy. The method is based on maximizing the information content of the system, by formulating and solving the appropriate optimization problem: finding the pupil-plane phase pattern that would yield a point spread function (PSF) with optimal Fisher information properties. We use our method to generate and experimentally demonstrate two example PSFs: one optimized for 3D localization precision over a 3 μm depth of field, and another with an unprecedented 5 μm depth of field, both designed to perform under physically common conditions of high background signals.
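
    The optimization criterion is the Fisher information of the imaging model. The sketch below evaluates it for 1D localization with a plain Gaussian PSF under Poisson noise with background, a stand-in for the optimized pupil-plane designs of the paper; all parameter values are illustrative:

```python
import numpy as np

# Hedged sketch of the Fisher-information criterion behind PSF optimization:
# for Poisson statistics with background b, the information an image carries
# about emitter position x is I(x) = sum_i (d mu_i / dx)^2 / (mu_i + b),
# where mu_i is the expected signal in pixel i. A Gaussian PSF is used here
# as a simple stand-in, not an optimized pupil-plane design.

def fisher_info_x(photons, sigma, background, pixels):
    x = pixels                                      # pixel centers
    mu = photons * np.exp(-x**2 / (2 * sigma**2))
    mu /= mu.sum() / photons                        # normalize to total photons
    dmu_dx = mu * x / sigma**2                      # derivative w.r.t. position
    return np.sum(dmu_dx**2 / (mu + background))

pix = np.arange(-10, 11, dtype=float)
info = fisher_info_x(1000.0, 1.5, 2.0, pix)
crlb = 1.0 / np.sqrt(info)   # Cramér-Rao lower bound on precision (pixels)
```

    The design task is then to search over pupil-plane phase patterns for the PSF maximizing this information over the desired depth range.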

  5. Optimal point spread function design for 3D imaging.

    PubMed

    Shechtman, Yoav; Sahl, Steffen J; Backer, Adam S; Moerner, W E

    2014-09-26

    To extract from an image of a single nanoscale object maximum physical information about its position, we propose and demonstrate a framework for pupil-plane modulation for 3D imaging applications requiring precise localization, including single-particle tracking and superresolution microscopy. The method is based on maximizing the information content of the system, by formulating and solving the appropriate optimization problem: finding the pupil-plane phase pattern that would yield a point spread function (PSF) with optimal Fisher information properties. We use our method to generate and experimentally demonstrate two example PSFs: one optimized for 3D localization precision over a 3 μm depth of field, and another with an unprecedented 5 μm depth of field, both designed to perform under physically common conditions of high background signals. PMID:25302889

  6. Right main bronchus perforation detected by 3D-image

    PubMed Central

    Bense, László; Eklund, Gunnar; Jorulf, Hakan; Farkas, Árpád; Balásházy, Imre; Hedenstierna, Göran; Krebsz, Ádám; Madas, Balázs Gergely; Strindberg, Jerker Eden

    2011-01-01

    A male metal worker, who has never smoked, contracted debilitating dyspnoea in 2003 which then deteriorated until 2007. Spirometry and chest x-rays provided no diagnosis. A 3D-image of the airways was reconstructed from a high-resolution CT (HRCT) in 2007, showing peribronchial air on the right side, mostly along the presegmental airways. After digital subtraction of the image of the peribronchial air, a hole on the cranial side of the right main bronchus was detected. The perforation could be identified at the re-examination of HRCTs in 2007 and 2009, but not in 2010 when it had possibly healed. The occupational exposure of the patient to evaporating chemicals might have contributed to the perforation and hampered its healing. A 3D HRCT reconstruction should be considered to detect bronchial anomalies, including wall-perforation, when unexplained dyspnoea or other chest symptoms call for extended investigation. PMID:22679238

  7. 3D atomic imaging by internal-detector electron holography.

    PubMed

    Uesaka, Akio; Hayashi, Kouichi; Matsushita, Tomohiro; Arai, Shigetoshi

    2011-07-22

    A method of internal-detector electron holography is the time-reversed version of photoelectron holography. Using an energy-dispersive x-ray detector, an electron gun, and a computer-controllable sample stage, we measured a multiple-energy hologram of the atomic arrangement around the Ti atom in SrTiO3 by recording the characteristic Ti Kα x-ray spectra for different electron beam angles and wavelengths. A real-space image was obtained by using a fitting-based reconstruction algorithm. 3D atomic images of the elements Sr, Ti, and O in SrTiO3 were clearly visualized. The present work reveals that internal-detector electron holography has great potential for reproducing 3D atomic arrangements, even for light elements. PMID:21867018

  8. Femoroacetabular impingement with chronic acetabular rim fracture - 3D computed tomography, 3D magnetic resonance imaging and arthroscopic correlation

    PubMed Central

    Chhabra, Avneesh; Nordeck, Shaun; Wadhwa, Vibhor; Madhavapeddi, Sai; Robertson, William J

    2015-01-01

    Femoroacetabular impingement is uncommonly associated with a large rim fragment of bone along the superolateral acetabulum. We report an unusual case of femoroacetabular impingement (FAI) with chronic acetabular rim fracture. Radiographic, 3D computed tomography, 3D magnetic resonance imaging and arthroscopy correlation is presented with discussion of relative advantages and disadvantages of various modalities in the context of FAI. PMID:26191497

  9. Femoroacetabular impingement with chronic acetabular rim fracture - 3D computed tomography, 3D magnetic resonance imaging and arthroscopic correlation.

    PubMed

    Chhabra, Avneesh; Nordeck, Shaun; Wadhwa, Vibhor; Madhavapeddi, Sai; Robertson, William J

    2015-07-18

    Femoroacetabular impingement is uncommonly associated with a large rim fragment of bone along the superolateral acetabulum. We report an unusual case of femoroacetabular impingement (FAI) with chronic acetabular rim fracture. Radiographic, 3D computed tomography, 3D magnetic resonance imaging and arthroscopy correlation is presented with discussion of relative advantages and disadvantages of various modalities in the context of FAI. PMID:26191497

  10. Extracting 3D layout from a single image using global image structures.

    PubMed

    Lou, Zhongyu; Gevers, Theo; Hu, Ninghang

    2015-10-01

    Extracting the pixel-level 3D layout from a single image is important for different applications, such as object localization and image and video categorization. Traditionally, the 3D layout is derived by solving a pixel-level classification problem. However, the image-level 3D structure can be very beneficial for extracting pixel-level 3D layout, since it implies how pixels in the image are organized. In this paper, we propose an approach that first predicts the global image structure, and then we use the global structure for fine-grained pixel-level 3D layout extraction. In particular, image features are extracted based on multiple layout templates. We then learn a discriminative model for classifying the global layout at the image level. Using latent variables, we implicitly model the sublevel semantics of the image, which enriches the expressiveness of our model. After the image-level structure is obtained, it is used as the prior knowledge to infer pixel-wise 3D layout. Experiments show that the results of our model outperform the state-of-the-art methods by 11.7% for 3D structure classification. Moreover, we show that employing the 3D structure prior information yields accurate 3D scene layout segmentation. PMID:25966478

  11. Virtual image display as a backlight for 3D.

    PubMed

    Travis, Adrian; MacCrann, Niall; Emerton, Neil; Kollin, Joel; Georgiou, Andreas; Lanier, Jaron; Bathiche, Stephen

    2013-07-29

    We describe a device which has the potential to be used both as a virtual image display and as a backlight. The pupil of the emitted light fills the device approximately to its periphery and the collimated emission can be scanned both horizontally and vertically in the manner needed to illuminate an eye in any position. The aim is to reduce the power needed to illuminate a liquid crystal panel but also to enable a smooth transition from 3D to a virtual image as the user nears the screen. PMID:23938645

  12. Combined registration of 3D tibia and femur implant models in 3D magnetic resonance images

    NASA Astrophysics Data System (ADS)

    Englmeier, Karl-Hans; Siebert, Markus; von Eisenhart-Rothe, Ruediger; Graichen, Heiko

    2008-03-01

    The most frequent reasons for revision of total knee arthroplasty are loosening and abnormal axial alignment leading to an unphysiological kinematic of the knee implant. To get an idea of the postoperative kinematics of the implant, it is essential to determine the position and orientation of the tibial and femoral prostheses. We therefore developed a registration method for fitting 3D CAD models of knee-joint prostheses into a 3D MR image. This rigid registration is the basis for a quantitative analysis of the kinematics of knee implants. First, the surface data of the prosthesis models are converted into a voxel representation; a recursive algorithm determines all boundary voxels of the original triangular surface data. Second, an initial rough preconfiguration of both prosthesis models by the user is necessary, so that the subsequent fine-matching process gets a reasonable starting point. After that, an automated gradient-based fine-matching process determines the best absolute position and orientation: this iterative process changes each of the 6 parameters (3 rotational and 3 translational) of a model by a minimal amount until a maximum value of the matching function is reached. To examine the spread of the final solutions of the registration, the interobserver variability was measured in a group of testers. This variability, calculated as the relative standard deviation, improved from about 50% (purely manual registration) to 0.5% (rough manual preconfiguration followed by the automatic fine-matching process).
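
    The fine-matching loop described above amounts to a local search over the 6 pose parameters. A minimal sketch with a placeholder matching function (the actual function scores agreement between the projected prosthesis model and the MR image; step sizes and the stopping rule are assumptions):

```python
import numpy as np

# Hedged sketch of iterative pose fine matching: starting from a rough
# preconfiguration, each of the 6 pose parameters (3 rotations, 3
# translations) is nudged by a small step whenever that increases the
# matching score, until no single-parameter change improves it.

def fine_match(score, pose0, steps, max_iter=1000):
    """score: callable mapping a 6-vector pose to a scalar (higher = better).
    pose0: initial 6-vector; steps: per-parameter step sizes."""
    pose = np.asarray(pose0, dtype=float)
    best = score(pose)
    for _ in range(max_iter):
        improved = False
        for k in range(6):
            for sign in (+1.0, -1.0):
                trial = pose.copy()
                trial[k] += sign * steps[k]
                s = score(trial)
                if s > best:
                    pose, best, improved = trial, s, True
        if not improved:
            return pose, best          # local maximum reached
    return pose, best
```

    Because the search is local, the user's rough preconfiguration matters: it must place the start inside the basin of attraction of the correct pose.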

  13. Multidimensional feature extraction from 3D hyperspectral images

    NASA Astrophysics Data System (ADS)

    Mehrubeoglu, Mehrube; McLauchlan, Lifford

    2011-09-01

    A hyperspectral imaging system has been set up and used to capture hyperspectral image cubes from various samples in the 400-1000 nm spectral region. The system consists of an imaging spectrometer attached to a CCD camera with fiber optic light source as the illuminator. The significance of this system lies in its capability to capture 3D spectral and spatial data that can then be analyzed to extract information about the underlying samples, monitor the variations in their response to perturbation or changing environmental conditions, and compare optical properties. In this paper preliminary results are presented that analyze the 3D spatial and spectral data in reflection mode to extract features to differentiate among different classes of interest using biological and metallic samples. Studied biological samples possess homogenous as well as non-homogenous properties. Metals are analyzed for their response to different surface treatments, including polishing. Similarities and differences in the feature extraction process and results are presented. The mathematical approach taken is discussed. The hyperspectral imaging system offers a unique imaging modality that captures both spatial and spectral information that can then be correlated for future sample predictions.

  14. 3D Lunar Terrain Reconstruction from Apollo Images

    NASA Technical Reports Server (NTRS)

    Broxton, Michael J.; Nefian, Ara V.; Moratto, Zachary; Kim, Taemin; Lundy, Michael; Segal, Alkeksandr V.

    2009-01-01

    Generating accurate three-dimensional planetary models is becoming increasingly important as NASA plans manned missions to return to the Moon in the next decade. This paper describes a 3D surface reconstruction system called the Ames Stereo Pipeline that is designed to produce such models automatically by processing orbital stereo imagery. We discuss two important core aspects of this system: (1) refinement of satellite station positions and pose estimates through least-squares bundle adjustment; and (2) a stochastic plane-fitting algorithm that generalizes the Lucas-Kanade method for optimal matching between stereo pair images. These techniques allow us to automatically produce seamless, highly accurate digital elevation models from multiple stereo image pairs while significantly reducing the influence of image noise. Our technique is demonstrated on a set of 71 high-resolution scanned images from the Apollo 15 mission.

  15. Feature detection on 3D images of dental imprints

    NASA Astrophysics Data System (ADS)

    Mokhtari, Marielle; Laurendeau, Denis

    1994-09-01

    A computer vision approach for the extraction of feature points on 3D images of dental imprints is presented. The position of feature points are needed for the measurement of a set of parameters for automatic diagnosis of malocclusion problems in orthodontics. The system for the acquisition of the 3D profile of the imprint, the procedure for the detection of the interstices between teeth, and the approach for the identification of the type of tooth are described, as well as the algorithm for the reconstruction of the surface of each type of tooth. A new approach for the detection of feature points, called the watershed algorithm, is described in detail. The algorithm is a two-stage procedure which tracks the position of local minima at four different scales and produces a final map of the position of the minima. Experimental results of the application of the watershed algorithm on actual 3D images of dental imprints are presented for molars, premolars and canines. The segmentation approach for the analysis of the shape of incisors is also described in detail.
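
    The minima-tracking idea behind the described watershed algorithm can be illustrated in 1D: local minima are detected after smoothing at several scales, and only minima that persist across scales are kept as feature points. Box-filter smoothing, the scale widths, and the tolerance below are illustrative assumptions, not the paper's exact four-scale procedure:

```python
import numpy as np

# Hedged sketch of multi-scale minima tracking: detect local minima of a
# height profile at several smoothing scales and keep only those that
# persist (within a tolerance) at every scale.

def smooth(profile, width):
    kernel = np.ones(width) / width          # simple box filter
    return np.convolve(profile, kernel, mode="same")

def local_minima(profile):
    p = profile
    return [i for i in range(1, len(p) - 1) if p[i] < p[i - 1] and p[i] < p[i + 1]]

def persistent_minima(profile, widths, tol=3):
    per_scale = [local_minima(smooth(profile, w)) for w in widths]
    kept = []
    for m in per_scale[0]:
        if all(any(abs(m - n) <= tol for n in mins) for mins in per_scale[1:]):
            kept.append(m)
    return kept
```

    On a dental imprint profile, spurious minima from noise vanish at coarse scales, while true interstice or cusp minima survive at all scales.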

  16. Quantitative 3D Optical Imaging: Applications in Dosimetry and Biophysics

    NASA Astrophysics Data System (ADS)

    Thomas, Andrew Stephen

    Optical-CT has been shown to be a potentially useful imaging tool for the two very different spheres of biologists and radiation therapy physicists, but it has yet to live up to that potential. In radiation therapy, researchers have used optical-CT for the readout of 3D dosimeters, but it is yet to be a clinically relevant tool as the technology is too slow to be considered practical. Biologists have used the technique for structural imaging, but have struggled with emission tomography, as the reality of photon attenuation for both excitation and emission has made the images quantitatively irrelevant. Dosimetry. The DLOS (Duke Large field-of-view Optical-CT Scanner) was designed and constructed to make 3D dosimetry utilizing optical-CT a fast and practical tool while maintaining the readout accuracy of the previous, slower technologies. Upon construction, optimization and implementation of several components, including a diffuser, band-pass filter, registration mount and fluid filtration system, the dosimetry system provides high-quality data comparable to or exceeding that of commercial products. In addition, a stray-light correction algorithm was tested and implemented. The DLOS, in combination with the 3D dosimeter it was designed for, PRESAGE™, then underwent rigorous commissioning and benchmarking tests validating its performance against gold-standard data, including a set of 6 irradiations. DLOS commissioning tests resulted in sub-mm isotropic spatial resolution (MTF > 0.5 for frequencies of 1.5 lp/mm) and a dynamic range of ~60 dB. Flood-field uniformity was 10% and stable after 45 minutes. Stray light proved to be small, due to telecentricity, but even the residual can be removed through deconvolution.
    Benchmarking tests showed the mean 3D gamma passing rate (3%, 3 mm, 5% dose threshold) over the 6 benchmark data sets was 97.3% +/- 0.6% (range 96%-98%), with scans totaling ~10 minutes, indicating excellent ability to perform 3D dosimetry while improving the speed of readout. Noise was low, at ~2% for 2 mm reconstructions. The DLOS/PRESAGE™ benchmark tests show consistently excellent performance, with very good agreement with simple known distributions. The telecentric design was critical to enabling fast (~15 min) imaging with minimal stray-light artifacts. The system produces accurate isotropic 2 mm³ dose data over clinical volumes (e.g., phantoms of 16 cm diameter and 12 cm height), and represents a uniquely useful and versatile new tool for commissioning complex radiotherapy techniques. The system also has wide versatility, and has successfully been used in preliminary tests with protons and with kV irradiations. Biology. Attenuation corrections for optical-emission-CT were made by modeling physical parameters of the imaging setup within the framework of an ordered-subsets expectation-maximization (OSEM) iterative reconstruction algorithm. This process has a well-documented history in single photon emission computed tomography (SPECT), but is inherently simpler here due to the lack of excitation photons to account for. The excitation source strength distribution and the excitation and emission attenuation were modeled. The accuracy of the correction was investigated by imaging phantoms containing known distributions of attenuation and fluorophores. The correction was validated on a manufactured phantom designed to give uniform emission in a central cuboidal region, and later applied to a cleared mouse brain with GFP (green fluorescent protein)-labeled vasculature and a cleared 4T1 xenograft flank tumor with constitutive RFP (red fluorescent protein). Reconstructions were compared to corresponding slices imaged with a fluorescent dissection microscope.
    Significant optical-ECT attenuation artifacts were observed in the uncorrected phantom images, which appeared up to 80% less intense than the verification image in the central region. The corrected phantom images showed excellent agreement with the verification image, with only slight variations. The corrected tissue-sample reconstructions showed general agreement with the verification images. Comp

  17. The 3D model control of image processing

    NASA Technical Reports Server (NTRS)

    Nguyen, An H.; Stark, Lawrence

    1989-01-01

    Telerobotics studies remote control of distant robots by a human operator using supervisory or direct control. Even if the robot manipulator has vision or other senses, problems arise involving control, communications, and delay. The communication delays that may be expected with telerobots working in space stations while being controlled from an Earth lab have led to a number of experiments attempting to circumvent the problem. This delay in communication is a main motivating factor in moving from well-understood instantaneous hands-on manual control to less well understood supervisory control; the ultimate step would be the realization of a fully autonomous robot. The 3-D model control plays a crucial role in resolving many conflicting image-processing problems that are inherent in the bottom-up approach of most current machine-vision processes. The 3-D model control approach is also capable of providing the necessary visual feedback information for both the control algorithms and the human operator.

  18. 3D range scan enhancement using image-based methods

    NASA Astrophysics Data System (ADS)

    Herbort, Steffen; Gerken, Britta; Schugk, Daniel; Wöhler, Christian

    2013-10-01

    This paper addresses the problem of 3D surface scan refinement, which is desirable due to noise, outliers, and missing measurements being present in the 3D surfaces obtained with a laser scanner. We present a novel algorithm for the fusion of absolute laser scanner depth profiles and photometrically estimated surface normal data, which yields a noise-reduced and highly detailed depth profile with large scale shape robustness. In contrast to other approaches published in the literature, the presented algorithm (1) regards non-Lambertian surfaces, (2) simultaneously computes surface reflectance (i.e. BRDF) parameters required for 3D reconstruction, (3) models pixelwise incident light and viewing directions, and (4) accounts for interreflections. The algorithm as such relies on the minimization of a three-component error term, which penalizes intensity deviations, integrability deviations, and deviations from the known large-scale surface shape. The solution of the error minimization is obtained iteratively based on a calculus of variations. BRDF parameters are estimated by initially reducing and then iteratively refining the optical resolution, which provides the required robust data basis. The 3D reconstruction of concave surface regions affected by interreflections is improved by compensating global illumination in the image data. The algorithm is evaluated based on eight objects with varying albedos and reflectance behaviors (diffuse, specular, metallic). The qualitative evaluation shows a removal of outliers and a strong reduction of noise, while the large scale shape is preserved. Fine surface details Which are previously not contained in the surface scans, are incorporated through using image data. The algorithm is evaluated with respect to its absolute accuracy using two caliper objects of known shape, and based on synthetically generated data. 
The beneficial effect of interreflection compensation on the reconstruction accuracy is evaluated quantitatively in a Photometric Stereo framework.
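The core fusion idea can be sketched in a heavily simplified form: minimize an energy with a gradient-agreement term plus a large-scale-shape term, by gradient descent. This is not the paper's algorithm (its intensity and integrability terms and its variational solver are omitted), and all names here are illustrative:

```python
import numpy as np

def fuse_depth(z_scan, p, q, lam=0.1, iters=200, step=0.1):
    """Simplified sketch: fuse a coarse scanner depth map z_scan with
    photometrically estimated surface gradients (p, q) by gradient
    descent on
        E(z) = sum((z_x - p)**2 + (z_y - q)**2) + lam * sum((z - z_scan)**2),
    i.e. a gradient-agreement term plus a large-scale-shape term."""
    z = z_scan.astype(float).copy()
    for _ in range(iters):
        zy, zx = np.gradient(z)            # axis 0 = y, axis 1 = x
        rx, ry = zx - p, zy - q            # gradient residuals
        # divergence of the residual field drives the gradient term
        div = np.gradient(rx, axis=1) + np.gradient(ry, axis=0)
        grad_E = -2.0 * div + 2.0 * lam * (z - z_scan)
        z -= step * grad_E
    return z
```

The shape term anchors the absolute depth (which gradients alone cannot fix), while the gradient term injects the photometric fine detail.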

  19. [Cystic-structure echographic imaging in retinal detachment].

    PubMed

    Sireteanu, L

    1994-01-01

The B-type echographic exam in retinal detachment shows the detached retina most accurately. In total detachments, the aspect reproduces the V-image because, in the perpendicular section, the retinal folds join at the papilla. When the transducer is rotated and the oblique echographic section approaches the perpendicular section, the image may appear on the monitor as two heterogeneous planes or as a cystic echographic image in the vitreous. This shows the importance of the cystic echographic image in the differential diagnosis of retinal detachment. PMID:8155616

  20. Improving 3D Wavelet-Based Compression of Hyperspectral Images

    NASA Technical Reports Server (NTRS)

    Klimesh, Matthew; Kiely, Aaron; Xie, Hua; Aranki, Nazeeh

    2009-01-01

    Two methods of increasing the effectiveness of three-dimensional (3D) wavelet-based compression of hyperspectral images have been developed. (As used here, images signifies both images and digital data representing images.) The methods are oriented toward reducing or eliminating detrimental effects of a phenomenon, referred to as spectral ringing, that is described below. In 3D wavelet-based compression, an image is represented by a multiresolution wavelet decomposition consisting of several subbands obtained by applying wavelet transforms in the two spatial dimensions corresponding to the two spatial coordinate axes of the image plane, and by applying wavelet transforms in the spectral dimension. Spectral ringing is named after the more familiar spatial ringing (spurious spatial oscillations) that can be seen parallel to and near edges in ordinary images reconstructed from compressed data. These ringing phenomena are attributable to effects of quantization. In hyperspectral data, the individual spectral bands play the role of edges, causing spurious oscillations to occur in the spectral dimension. In the absence of such corrective measures as the present two methods, spectral ringing can manifest itself as systematic biases in some reconstructed spectral bands and can reduce the effectiveness of compression of spatially-low-pass subbands. One of the two methods is denoted mean subtraction. The basic idea of this method is to subtract mean values from spatial planes of spatially low-pass subbands prior to encoding, because (a) such spatial planes often have mean values that are far from zero and (b) zero-mean data are better suited for compression by methods that are effective for subbands of two-dimensional (2D) images. In this method, after the 3D wavelet decomposition is performed, mean values are computed for and subtracted from each spatial plane of each spatially-low-pass subband. 
The resulting data are converted to sign-magnitude form and compressed in a manner similar to that of a baseline hyperspectral-image-compression method. The mean values are encoded in the compressed bit stream and added back to the data at the appropriate decompression step. The overhead incurred by encoding the mean values (only a few bits per spectral band) is negligible with respect to the huge size of a typical hyperspectral data set. The other method is denoted modified decomposition. This method is so named because it involves a modified version of a commonly used multiresolution wavelet decomposition, known in the art as the 3D Mallat decomposition, in which (a) the first of multiple stages of a 3D wavelet transform is applied to the entire dataset and (b) subsequent stages are applied only to the horizontally-, vertically-, and spectrally-low-pass subband from the preceding stage. In the modified decomposition, in stages after the first, not only is the spatially-low-pass, spectrally-low-pass subband further decomposed, but spatially-low-pass, spectrally-high-pass subbands are also further decomposed spatially. Either method can be used alone to improve the quality of a reconstructed image (see figure). Alternatively, the two methods can be combined by first performing the modified decomposition, then subtracting the mean values from spatial planes of spatially-low-pass subbands.
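The mean-subtraction step itself is simple to sketch; the subband layout and function names below are illustrative, not taken from the baseline compressor:

```python
import numpy as np

def subtract_plane_means(subband):
    """Mean-subtraction step as described: for a spatially-low-pass subband
    of shape (bands, rows, cols), remove the mean of each spatial plane so
    the data become zero-mean, keeping the means as side information to be
    encoded and added back at decompression."""
    means = subband.mean(axis=(1, 2))            # one mean per spectral plane
    zero_mean = subband - means[:, None, None]   # broadcast subtraction
    return zero_mean, means

def restore_plane_means(zero_mean, means):
    """Decoder side: add the transmitted means back."""
    return zero_mean + means[:, None, None]
```

The round trip is lossless apart from how the means themselves are quantized in the bit stream.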

  1. Effect of Refractive Status and Axial Length on Peripapillary Retinal Nerve Fibre Layer Thickness: An Analysis Using 3D OCT

    PubMed Central

    Sowmya, V.; Venkataramanan, V.R.

    2015-01-01

Background: Accurate measurement of the retinal nerve fiber layer (RNFL) is now possible with high-resolution optical coherence tomography (OCT). The effect of the refractive status of the eye on RNFL thickness may be relevant in the diagnosis of glaucoma and other optic nerve diseases. Aim: To assess RNFL thickness and its correlation with the refractive status and axial length of the eye. Material and Methods: Three hundred eyes of 150 patients who underwent RNFL analysis using the TOPCON 3D OCT 2000 were included in this study. Analysis of variance was used to test the significance of study parameters between the study groups. Results: The study showed that refractive status/axial length affected peripapillary RNFL thickness significantly. Conclusion: The study suggests that the diagnostic accuracy of OCT may be improved by considering the refractive status and axial length of the eye when RNFL is measured. PMID:26500931
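The one-way ANOVA used for the group comparison reduces to a ratio of between-group to within-group variance; a minimal sketch with illustrative data (not the study's measurements):

```python
import numpy as np

def one_way_anova_F(groups):
    """One-way ANOVA F statistic: mean square between groups divided by
    mean square within groups. Illustrative implementation only."""
    groups = [np.asarray(g, dtype=float) for g in groups]
    k = len(groups)
    n_total = sum(len(g) for g in groups)
    grand = np.concatenate(groups).mean()
    ss_between = sum(len(g) * (g.mean() - grand) ** 2 for g in groups)
    ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n_total - k))
```

The resulting F value is then compared against the F distribution with (k − 1, N − k) degrees of freedom to obtain a p-value.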

  2. Development of 3D microwave imaging reflectometry in LHD (invited).

    PubMed

    Nagayama, Y; Kuwahara, D; Yoshinaga, T; Hamada, Y; Kogi, Y; Mase, A; Tsuchiya, H; Tsuji-Iio, S; Yamaguchi, S

    2012-10-01

Three-dimensional (3D) microwave imaging reflectometry has been developed on the Large Helical Device to visualize the fluctuating reflection surface caused by density fluctuations. The plasma is illuminated by a probe wave with four frequencies, which correspond to four radial positions. The imaging optics forms the image of the cut-off surface onto 2D (7 × 7 channel) horn-antenna mixer arrays. Multi-channel receivers have also been developed using micro-strip-line technology to handle many channels at reasonable cost. This system was first applied to observe the edge harmonic oscillation (EHO), an MHD mode with many harmonics that appears in the edge plasma. A narrow structure along field lines is observed during EHO. PMID:23126965

  3. Intrinsic optical signal imaging of retinal physiology: a review.

    PubMed

    Yao, Xincheng; Wang, Benquan

    2015-09-01

Intrinsic optical signal (IOS) imaging promises to be a noninvasive method for high-resolution examination of retinal physiology, which can advance the study and diagnosis of eye diseases. While specialized optical instruments are desirable for functional IOS imaging of retinal physiology, an in-depth understanding of the multiple IOS sources in the complex retinal neural network is essential for optimizing instrument designs. We provide a brief overview of IOS studies and relationships in rod outer segment suspensions, isolated retinas, and intact eyes. Recent developments of line-scan confocal and functional optical coherence tomography (OCT) instruments have allowed in vivo IOS mapping of photoreceptor physiology. Further improvements of the line-scan confocal and functional OCT systems may provide a feasible solution for functional IOS mapping of human photoreceptors. Some interesting IOSs have already been detected in inner retinal layers, but further development of IOS instruments and software algorithms is required to achieve optimal physiological assessment of inner retinal neurons. PMID:26405819

  4. Realistic Surface Reconstruction of 3D Scenes from Uncalibrated Image Sequences

    E-print Network

    Pollefeys, Marc

This work by Reinhard Koch addresses the problem of obtaining 3D models from image sequences: a 3D surface description of the scene is reconstructed from an uncalibrated image sequence. Alternative approaches include the use of stereo rigs, laser range scanners and other 3D digitizing devices.

  5. Ultra-realistic 3-D imaging based on colour holography

    NASA Astrophysics Data System (ADS)

    Bjelkhagen, H. I.

    2013-02-01

A review of recent progress in colour holography is provided, with new applications. Colour holography recording techniques in silver-halide emulsions are discussed. Both analogue (mainly Denisyuk) colour holograms and digitally-printed colour holograms are described, along with their recent improvements. An alternative to silver-halide materials is panchromatic photopolymers, such as the DuPont and Bayer photopolymers, which are also covered. The light sources used to illuminate the recorded holograms are very important for obtaining ultra-realistic 3-D images. In particular, the new light sources based on RGB LEDs are described; they show improved image quality over today's commonly used halogen lights. Recent work in colour holography by holographers and companies in different countries around the world is included. Recording and displaying ultra-realistic 3-D images with perfect colour rendering depend on the correct recording technique using the optimal recording laser wavelengths and the availability of improved panchromatic recording materials, combined with new display light sources.

  6. Precise 3D image alignment in micro-axial tomography.

    PubMed

    Matula, P; Kozubek, M; Staier, F; Hausmann, M

    2003-02-01

Micro-axial (μ-axial) tomography is a challenging technique in microscopy which improves quantitative imaging, especially in cytogenetic applications, by means of defined sample rotation under the microscope objective. The advantage of micro-axial tomography is an effective improvement of the precision of distance measurements between point-like objects. Under certain circumstances, the effective (3D) resolution can be improved by optimized acquisition, based on subsequent, multi-perspective image recording of the same objects followed by reconstruction methods. This requires, however, a very precise alignment of the tilted views. We present a novel feature-based image alignment method with a precision better than the full width at half maximum of the point spread function. The features are the positions (centres of gravity) of all fluorescent objects observed in the images (e.g. cell nuclei, fluorescent signals inside cell nuclei, fluorescent beads, etc.). Thus, the real alignment precision depends on the localization precision of these objects. The method automatically determines the corresponding objects in subsequently tilted perspectives using a weighted bipartite graph. The optimum transformation function is computed in a least squares manner based on the coordinates of the centres of gravity of the matched objects. The theoretically feasible precision of the method was calculated using computer-generated data and confirmed by tests on real image series obtained from data sets of 200 nm fluorescent nano-particles. The advantages of the proposed algorithm are its speed and accuracy, which means that if enough objects are included, the real alignment precision is better than the axial localization precision of a single object. The alignment precision can be assessed directly from the algorithm's output.
Thus, the method can be applied not only for image alignment and object matching in tilted view series in order to reconstruct (3D) images, but also to validate the experimental performance (e.g. mechanical precision of the tilting). In practice, the key application of the method is an improvement of the effective spatial (3D) resolution, because the well-known spatial anisotropy in light microscopy can be overcome. This allows more precise distance measurements between point-like objects. PMID:12588530
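The least-squares transformation step — computing the optimal rotation and translation from matched centres of gravity — is typically solved in closed form via SVD (the Kabsch/Procrustes solution). A sketch, assuming the bipartite matching has already produced the point correspondences:

```python
import numpy as np

def rigid_align(src, dst):
    """Least-squares rigid transform (rotation R, translation t) mapping
    matched object centroids `src` (n x d) onto `dst` (n x d), via the
    SVD-based Kabsch solution. Illustrative; not the paper's code."""
    src_mean, dst_mean = src.mean(axis=0), dst.mean(axis=0)
    src_c, dst_c = src - src_mean, dst - dst_mean
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
    d = np.sign(np.linalg.det(U @ Vt))          # guard against reflections
    D = np.diag([1.0] * (src.shape[1] - 1) + [d])
    R = (U @ D @ Vt).T                          # optimal rotation
    t = dst_mean - R @ src_mean                 # optimal translation
    return R, t
```

With enough matched objects, the averaging inherent in this least-squares fit is what pushes the alignment precision below the localization precision of any single object.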

  7. 3D-LZ helicopter ladar imaging system

    NASA Astrophysics Data System (ADS)

    Savage, James; Harrington, Walter; McKinley, R. Andrew; Burns, H. N.; Braddom, Steven; Szoboszlay, Zoltan

    2010-04-01

    A joint-service team led by the Air Force Research Laboratory's Munitions and Sensors Directorates completed a successful flight test demonstration of the 3D-LZ Helicopter LADAR Imaging System. This was a milestone demonstration in the development of technology solutions for a problem known as "helicopter brownout", the loss of situational awareness caused by swirling sand during approach and landing. The 3D-LZ LADAR was developed by H.N. Burns Engineering and integrated with the US Army Aeroflightdynamics Directorate's Brown-Out Symbology System aircraft state symbology aboard a US Army EH-60 Black Hawk helicopter. The combination of these systems provided an integrated degraded visual environment landing solution with landing zone situational awareness as well as aircraft guidance and obstacle avoidance information. Pilots from the U.S. Army, Air Force, Navy, and Marine Corps achieved a 77% landing rate in full brownout conditions at a test range at Yuma Proving Ground, Arizona. This paper will focus on the LADAR technology used in 3D-LZ and the results of this milestone demonstration.

  8. 3D Multispectral Light Propagation Model For Subcutaneous Veins Imaging

    SciTech Connect

    Paquit, Vincent C; Price, Jeffery R; Meriaudeau, Fabrice; Tobin Jr, Kenneth William

    2008-01-01

    In this paper, we describe a new 3D light propagation model aimed at understanding the effects of various physiological properties on subcutaneous vein imaging. In particular, we build upon the well known MCML (Monte Carlo Multi Layer) code and present a tissue model that improves upon the current state-of-the-art by: incorporating physiological variation, such as melanin concentration, fat content, and layer thickness; including veins of varying depth and diameter; using curved surfaces from real arm shapes; and modeling the vessel wall interface. We describe our model, present results from the Monte Carlo modeling, and compare these results with those obtained with other Monte Carlo methods.

  9. Optimization of the open-loop liquid crystal adaptive optics retinal imaging system

    NASA Astrophysics Data System (ADS)

    Kong, Ningning; Li, Chao; Xia, Mingliang; Li, Dayu; Qi, Yue; Xuan, Li

    2012-02-01

An open-loop adaptive optics (AO) system for retinal imaging was constructed using a liquid crystal spatial light modulator (LC-SLM) as the wavefront compensator. Due to the dispersion of the LC-SLM, there was only one illumination source for both aberration detection and retinal imaging in this system. To increase the field of view (FOV) for retinal imaging, a modified mechanical shutter was integrated into the illumination channel to control the size of the illumination spot on the fundus. The AO loop was operated in a pulsing mode, and the fundus was illuminated twice by two laser impulses in a single AO correction loop. As a result, the FOV for retinal imaging was increased to 1.7 deg without compromising the aberration detection accuracy. The correction precision of the open-loop AO system was evaluated in a closed-loop configuration; the residual error is approximately 0.0909λ (root-mean-square, RMS), and the Strehl ratio reaches 0.7217. Two subjects with differing degrees of myopia (−3 D and −5 D) were tested. High-resolution images of capillaries and photoreceptors were obtained.
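The reported residual error and Strehl ratio are mutually consistent under the extended Maréchal approximation S ≈ exp(−(2πσ)²), with σ the RMS wavefront error in waves. This formula is standard adaptive-optics practice, not stated in the abstract; a quick check:

```python
import math

def strehl_marechal(rms_waves):
    """Extended Marechal approximation: Strehl ratio from RMS wavefront
    error expressed in waves, S ~ exp(-(2*pi*sigma)**2)."""
    return math.exp(-(2.0 * math.pi * rms_waves) ** 2)
```

Evaluating `strehl_marechal(0.0909)` reproduces the reported Strehl ratio of about 0.72.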

  10. Illumination correction of retinal images using Laplace interpolation

    E-print Network

    Dainty, Chris

…with the particular goal of minimizing its influence upon features of interest. This is achieved by making use … and so filtering methods cannot be relied upon to perform illumination correction of retinal images.

  11. Adaptive colour transformation of retinal images for stroke prediction.

    PubMed

    Unnikrishnan, Premith; Aliahmad, Behzad; Kawasaki, Ryo; Kumar, Dinesh

    2013-01-01

Identifying lesions in the retinal vasculature from retinal images is most often done on the green channel. However, the effect of colour and of single-channel analysis on feature extraction has not yet been studied. In this paper, an adaptive colour transformation based on principal component analysis (PCA) has been investigated and validated on retinal images associated with 10-year stroke prediction. Histogram analysis indicated that, while each colour channel image had a unimodal distribution, the second PCA component had a bimodal distribution and showed significantly improved separation between the retinal vasculature and the background. The experiments showed that, using the adaptive colour transformation, sensitivity and specificity were both higher (AUC 0.73) than when the single green channel was used (AUC 0.63) for the same database and image features. PMID:24111451
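A minimal sketch of such an adaptive colour transformation — PCA over the image's own RGB distribution, keeping the second component — with hypothetical names, not the paper's exact preprocessing:

```python
import numpy as np

def pca_second_component(img_rgb):
    """Project each RGB pixel onto the principal components of the image's
    own colour distribution and return the second component as a grayscale
    plane. Illustrative sketch of an adaptive (per-image) colour transform."""
    h, w, _ = img_rgb.shape
    X = img_rgb.reshape(-1, 3).astype(float)
    X -= X.mean(axis=0)
    cov = np.cov(X, rowvar=False)              # 3x3 channel covariance
    vals, vecs = np.linalg.eigh(cov)           # ascending eigenvalues
    order = np.argsort(vals)[::-1]             # sort by explained variance
    second = X @ vecs[:, order[1]]             # projection onto 2nd component
    return second.reshape(h, w)
```

Because the components are recomputed per image, the transform adapts to each retina's colour statistics rather than fixing a single channel in advance.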

  12. Performance assessment of 3D surface imaging technique for medical imaging applications

    NASA Astrophysics Data System (ADS)

    Li, Tuotuo; Geng, Jason; Li, Shidong

    2013-03-01

Recent developments in optical 3D surface imaging technologies provide better ways to digitize a 3D surface and its motion in real time. The non-invasive 3D surface imaging approach has great potential for many medical imaging applications, such as motion monitoring in radiotherapy and pre/post evaluation in plastic surgery and dermatology, to name a few. Various commercial 3D surface imaging systems have appeared on the market, differing in dimensions, speed, and accuracy. For clinical applications, accuracy, reproducibility, and robustness across widely heterogeneous skin colors, tones, textures, and shapes, and across ambient lighting conditions, are crucial. Until now, no systematic approach for evaluating the performance of different 3D surface imaging systems has existed. In this paper, we present a systematic performance assessment approach for 3D surface imaging systems in medical applications. We use this approach to examine a new real-time surface imaging system we developed, dubbed the "Neo3D Camera", for image-guided radiotherapy (IGRT). The assessments include accuracy, field of view, coverage, repeatability, speed, and sensitivity to environment, texture, and color.

  13. 3D Reconstruction of virtual colon structures from colonoscopy images.

    PubMed

    Hong, DongHo; Tavanapong, Wallapak; Wong, Johnny; Oh, JungHwan; de Groen, Piet C

    2014-01-01

    This paper presents the first fully automated reconstruction technique of 3D virtual colon segments from individual colonoscopy images. It is the basis of new software applications that may offer great benefits for improving quality of care for colonoscopy patients. For example, a 3D map of the areas inspected and uninspected during colonoscopy can be shown on request of the endoscopist during the procedure. The endoscopist may revisit the suggested uninspected areas to reduce the chance of missing polyps that reside in these areas. The percentage of the colon surface seen by the endoscopist can be used as a coarse objective indicator of the quality of the procedure. The derived virtual colon models can be stored for post-procedure training of new endoscopists to teach navigation techniques that result in a higher level of procedure quality. Our technique does not require a prior CT scan of the colon or any global positioning device. Our experiments on endoscopy images of an Olympus synthetic colon model reveal encouraging results with small average reconstruction errors (4.1 mm for the fold depths and 12.1 mm for the fold circumferences). PMID:24225230

  14. 3-D visualization and animation technologies in anatomical imaging

    PubMed Central

    McGhee, John

    2010-01-01

    This paper explores a 3-D computer artist’s approach to the creation of three-dimensional computer-generated imagery (CGI) derived from clinical scan data. Interpretation of scientific imagery, such as magnetic resonance imaging (MRI), is restricted to the eye of the trained medical practitioner in a clinical or scientific context. In the research work described here, MRI data are visualized and interpreted by a 3-D computer artist using the tools of the digital animator to navigate image complexity and widen interaction. In this process, the artefact moves across disciplines; it is no longer tethered to its diagnostic origins. It becomes an object that has visual attributes such as light, texture and composition, and a visual aesthetic of its own. The introduction of these visual attributes provides a platform for improved accessibility by a lay audience. The paper argues that this more artisan approach to clinical data visualization has a potential real-world application as a communicative tool for clinicians and patients during consultation. PMID:20002229

  15. Computing 3D head orientation from a monocular image sequence

    NASA Astrophysics Data System (ADS)

    Horprasert, Thanarat; Yacoob, Yaser; Davis, Larry S.

    1997-02-01

An approach for estimating 3D head orientation in a monocular image sequence is proposed. The approach employs recently developed image-based parameterized tracking of the face and face features to locate the area in which a sub-pixel parameterized shape estimation of the eye's boundary is performed. This involves tracking five points (four at the eye corners and one at the tip of the nose). We describe an approach that relies on the coarse structure of the face to compute orientation relative to the camera plane. Our approach employs the projective invariance of cross-ratios of the eye corners and anthropometric statistics to estimate head yaw, roll and pitch. Analytical and experimental results are reported.
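The cross-ratio underlying the projective-invariance argument is simple to state for four collinear points parameterized along their line; a sketch (illustrative only, not the paper's full pose computation):

```python
def cross_ratio(a, b, c, d):
    """Cross-ratio (a, b; c, d) of four collinear points given as scalar
    parameters along their line. Invariant under projective (Mobius)
    transformations of the line, which is what makes it usable for pose
    estimation from perspective images."""
    return ((c - a) * (d - b)) / ((c - b) * (d - a))
```

Because the cross-ratio of the four eye-corner points is preserved under perspective projection, its measured image value constrains the head orientation relative to the camera plane.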

  16. High-resolution 3-D refractive index imaging and Its biological applications

    E-print Network

    Sung, Yongjin

    2011-01-01

This thesis presents a theory of 3-D imaging in partially coherent light under a non-paraxial condition. The transmission cross-coefficient (TCC) has been used to characterize partially coherent imaging in 2-D and 3-D …

  17. 3D Slicer as an Image Computing Platform for the Quantitative Imaging Network

    PubMed Central

    Fedorov, Andriy; Beichel, Reinhard; Kalpathy-Cramer, Jayashree; Finet, Julien; Fillion-Robin, Jean-Christophe; Pujol, Sonia; Bauer, Christian; Jennings, Dominique; Fennessy, Fiona; Sonka, Milan; Buatti, John; Aylward, Stephen; Miller, James V.; Pieper, Steve; Kikinis, Ron

    2012-01-01

Quantitative analysis has tremendous but mostly unrealized potential in healthcare to support objective and accurate interpretation of clinical imaging. In 2008, the National Cancer Institute began building the Quantitative Imaging Network (QIN) initiative with the goal of advancing quantitative imaging in the context of personalized therapy and evaluation of treatment response. Computerized analysis is an important component contributing to reproducibility and efficiency of the quantitative imaging techniques. The success of quantitative imaging is contingent on robust analysis methods and software tools to bring these methods from bench to bedside. 3D Slicer is a free open source software application for medical image computing. As a clinical research tool, 3D Slicer is similar to a radiology workstation that supports versatile visualizations but also provides advanced functionality such as automated segmentation and registration for a variety of application domains. Unlike a typical radiology workstation, 3D Slicer is free and is not tied to specific hardware. As a programming platform, 3D Slicer facilitates translation and evaluation of new quantitative methods by allowing the biomedical researcher to focus on the implementation of the algorithm, and providing abstractions for the common tasks of data communication, visualization and user interface development. Compared to other tools that provide aspects of this functionality, 3D Slicer is fully open source and can be readily extended and redistributed. In addition, 3D Slicer is designed to facilitate the development of new functionality in the form of 3D Slicer extensions. In this paper, we present an overview of 3D Slicer as a platform for prototyping, development and evaluation of image analysis tools for clinical research applications.
To illustrate the utility of the platform in the scope of QIN, we discuss several use cases of 3D Slicer by the existing QIN teams, and we elaborate on the future directions that can further facilitate development and validation of imaging biomarkers using 3D Slicer. PMID:22770690

  18. High Resolution 3D Radar Imaging of Comet Interiors

    NASA Astrophysics Data System (ADS)

    Asphaug, E. I.; Gim, Y.; Belton, M.; Brophy, J.; Weissman, P. R.; Heggy, E.

    2012-12-01

    Knowing the interiors of comets and other primitive bodies is fundamental to our understanding of how planets formed. We have developed a Discovery-class mission formulation, Comet Radar Explorer (CORE), based on the use of previously flown planetary radar sounding techniques, with the goal of obtaining high resolution 3D images of the interior of a small primitive body. We focus on the Jupiter-Family Comets (JFCs) as these are among the most primitive bodies reachable by spacecraft. Scattered in from far beyond Neptune, they are ultimate targets of a cryogenic sample return mission according to the Decadal Survey. Other suitable targets include primitive NEOs, Main Belt Comets, and Jupiter Trojans. The approach is optimal for small icy bodies ~3-20 km diameter with spin periods faster than about 12 hours, since (a) navigation is relatively easy, (b) radar penetration is global for decameter wavelengths, and (c) repeated overlapping ground tracks are obtained. The science mission can be as short as ~1 month for a fast-rotating JFC. Bodies smaller than ~1 km can be globally imaged, but the navigation solutions are less accurate and the relative resolution is coarse. Larger comets are more interesting, but radar signal is unlikely to be reflected from depths greater than ~10 km. So, JFCs are excellent targets for a variety of reasons. We furthermore focus on the use of Solar Electric Propulsion (SEP) to rendezvous shortly after the comet's perihelion. This approach leaves us with ample power for science operations under dormant conditions beyond ~2-3 AU. This leads to a natural mission approach of distant observation, followed by closer inspection, terminated by a dedicated radar mapping orbit. Radar reflections are obtained from a polar orbit about the icy nucleus, which spins underneath. Echoes are obtained from a sounder operating at dual frequencies 5 and 15 MHz, with 1 and 10 MHz bandwidths respectively. 
The dense network of echoes is used to obtain global 3D images of interior structure to ~20 m, and to map dielectric properties (related to internal composition) to better than 200 m throughout. This is comparable in detail to modern 3D medical ultrasound, although we emphasize that the techniques are somewhat different. An interior mass distribution is obtained through spacecraft tracking, using data acquired during the close, quiet radar orbits. This is aligned with the radar-based images of the interior, and the shape model, to contribute to the multi-dimensional 3D global view. High-resolution visible imaging provides boundary conditions and geologic context to these interior views. An infrared spectroscopy and imaging campaign upon arrival reveals the time-evolving activity of the nucleus and the structure and composition of the inner coma, and the definition of surface units. CORE is designed to obtain a total view of a comet, from the coma to the active and evolving surface to the deep interior. Its primary science goal is to obtain clear images of internal structure and dielectric composition. These will reveal how the comet was formed, what it is made of, and how it 'works'. By making global yet detailed connections from interior to exterior, this knowledge will be an important complement to the Rosetta mission, and will lay the foundation for comet nucleus sample return by revealing the areas of shallow depth to 'bedrock', and relating accessible deposits to their originating provenances within the nucleus.

  19. Retinal image restoration by means of blind deconvolution

    NASA Astrophysics Data System (ADS)

    Marrugo, Andrés G.; Šorel, Michal; Šroubek, Filip; Millán, María S.

    2011-11-01

    Retinal imaging plays a key role in the diagnosis and management of ophthalmologic disorders, such as diabetic retinopathy, glaucoma, and age-related macular degeneration. Because of the acquisition process, retinal images often suffer from blurring and uneven illumination. This problem may seriously affect disease diagnosis and progression assessment. Here we present a method for color retinal image restoration by means of multichannel blind deconvolution. The method is applied to a pair of retinal images acquired within a lapse of time, ranging from several minutes to months. It consists of a series of preprocessing steps to adjust the images so they comply with the considered degradation model, followed by the estimation of the point-spread function and, ultimately, image deconvolution. The preprocessing is mainly composed of image registration, uneven illumination compensation, and segmentation of areas with structural changes. In addition, we have developed a procedure for the detection and visualization of structural changes. This enables the identification of subtle developments in the retina not caused by variation in illumination or blur. The method was tested on synthetic and real images. Encouraging experimental results show that the method is capable of significant restoration of degraded retinal images.
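The final deconvolution step of such a pipeline is often a Richardson-Lucy iteration; shown here is a non-blind sketch assuming the point-spread function has already been estimated (the paper's multichannel blind PSF estimation is not reproduced):

```python
import numpy as np

def richardson_lucy(obs, psf, iters=30, eps=1e-12):
    """Classical (non-blind) Richardson-Lucy deconvolution via FFT, with
    the PSF rolled so its centre sits at the origin and circular
    convolution introduces no shift. Illustrative sketch only."""
    k0, k1 = psf.shape
    pad = np.zeros(obs.shape)
    pad[:k0, :k1] = psf / psf.sum()                     # normalize PSF
    pad = np.roll(pad, (-(k0 // 2), -(k1 // 2)), axis=(0, 1))
    F = np.fft.rfft2(pad)                               # forward blur filter
    Fc = np.conj(F)                                     # correlation filter
    est = np.full(obs.shape, obs.mean())                # flat initial estimate
    for _ in range(iters):
        denom = np.fft.irfft2(np.fft.rfft2(est) * F, obs.shape)
        ratio = obs / np.maximum(denom, eps)            # multiplicative update
        est = est * np.fft.irfft2(np.fft.rfft2(ratio) * Fc, obs.shape)
    return est
```

The multiplicative update preserves non-negativity, which is why Richardson-Lucy is a common choice for photon-limited images such as fundus photographs.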

  20. Using silhouette coherence for 3D image-based object modeling under circular motion

    E-print Network

    Esteban, Carlos Hernández

Using Silhouette Coherence for 3D Image-based Object Modeling under Circular Motion. This work proposes an approach for image-based 3D object modeling under circular motion. We first discuss the silhouette coherence notion …

  1. Complex Resistivity 3D Imaging for Ground Reinforcement Site

    NASA Astrophysics Data System (ADS)

    Son, J.; Kim, J.; Park, S.

    2012-12-01

The induced polarization (IP) method is used for mineral exploration and is generally classified into two categories: time-domain and frequency-domain methods. The frequency-domain IP method measures amplitude and absolute phase relative to the transmitted currents, and is often called spectral induced polarization (SIP) when measurements are made over a wide band of frequencies. Our research group has been studying modeling and inversion algorithms for the complex resistivity method for several years and recently started to apply the method to various field problems. We have completed the development of 2D/3D modeling and inversion programs and are developing another algorithm that uses all wide-band data together. Until now, the complex resistivity (CR) method has mainly been used for surface or tomographic surveys in mineral exploration. Through this experience, we found that the resistivity section from the CR method is very similar to that of the conventional resistivity method, and that interpretation of the phase section generally matches the geological information of the survey area well. However, because most survey areas have rough and complex terrain, 2D surveys and interpretation are generally used. In this study, we introduce a case study of a 3D CR survey conducted at a site where ground reinforcement had been carried out to prevent subsidence. Data were acquired with the Zeta system, a complex resistivity measurement system produced by Zonge Co., using 8 frequencies from 0.125 to 16 Hz. The 2D survey was conducted along 6 lines with 5 m dipole spacing and 20 electrodes; each line is 95 m long. Of the 8 frequencies, only data below 1 Hz were used, considering data quality. With the 6 lines of data, 3D inversion was conducted. First, 2D interpretation was performed with the acquired data and its results were compared with those of a resistivity survey. The resulting resistivity image sections of the CR and resistivity methods were very similar.
Anomalies in phase image section showed good agreement with those identified by the 4D interpretation of resistivity monitoring data. These results in phase section come from the fact that cement mortar used as a grouting material has very strong IP property. With the 3D inversion anomalies were discriminated more clearly that was somewhat obscure in the 2D interpretation. And phase anomalies were also well matched with the 4D interpretation of resistivity monitoring data. Phase anomalies in 2D interpretation were extended deeper area and its boundary was not clear, but we clearly identified its lower boundary and location in the 3D inverted result. CR method is very effective if the target anomaly has strong IP property and can be used for various purposes. But it has some difficulties in data acquisition that it takes more time and efforts compared to normal resistivity survey. If these problems were to be solved, it would be a very effective and prominent method in some area. In this study, we only show the results from single frequency data but we could infer more information when all the multi-frequency data were used in inversion. We will continue to develop 3D multi-frequency inversion algorithm in near future.
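The data reduction behind the amplitude and phase sections can be sketched per frequency. The impedance values and geometric factor below are purely illustrative, not field data from this survey:

```python
import numpy as np

# Hypothetical measured transfer impedances (V/I) at the survey's eight
# frequencies (0.125-16 Hz); the values are illustrative, not field data.
freqs_hz = np.array([0.125, 0.25, 0.5, 1.0, 2.0, 4.0, 8.0, 16.0])
z_measured = np.array([100 - 2j, 99 - 3j, 98 - 4j, 97 - 5j,
                       95 - 7j, 92 - 9j, 88 - 12j, 83 - 15j])  # ohms

geometric_factor = 2 * np.pi * 5.0  # hypothetical, for a 5 m electrode spacing

# Complex apparent resistivity: the amplitude feeds the resistivity section,
# the phase angle feeds the IP phase section.
rho_complex = geometric_factor * z_measured
amplitude = np.abs(rho_complex)            # ohm-m
phase_mrad = 1e3 * np.angle(rho_complex)   # milliradians

for f, a, p in zip(freqs_hz, amplitude, phase_mrad):
    print(f"{f:6.3f} Hz  |rho| = {a:7.1f} ohm-m  phase = {p:6.1f} mrad")
```

A strongly polarizable target such as the cement mortar grout shows up as large negative phase angles while barely changing the amplitude section.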

  2. Applications of azo-based probes for imaging retinal hypoxia.

    PubMed

    Uddin, Md Imam; Evans, Stephanie M; Craft, Jason R; Marnett, Lawrence J; Uddin, Md Jashim; Jayagopal, Ashwath

    2015-04-01

We report the design and synthesis of an activatable molecular imaging probe to detect hypoxia in mouse models of retinal vascular diseases. Hypoxia of the retina has been associated with the initiation and progression of blinding retinal vascular diseases including age-related macular degeneration, diabetic retinopathy, and retinopathy of prematurity. In vivo retinal imaging of hypoxia may be useful for early detection and timely treatment of retinal diseases. To achieve this goal, we synthesized HYPOX-3, a near-infrared (NIR) imaging agent coupled to a dark quencher, Black Hole Quencher 3 (BHQ3), which has been previously reported to contain a hypoxia-sensitive cleavable azo bond. HYPOX-3 was cleaved in hypoxic retinal cell culture and animal models, enabling detection of hypoxia with high signal-to-noise ratios and without acute toxicity. HYPOX-3 fluoresces in hypoxic cells and tissues and is undetectable under normoxia. These imaging agents are promising candidates for imaging retinal hypoxia in preclinical disease models and patients. PMID:25893047

  3. Fast 3D subsurface imaging with stepped-frequency GPR

    NASA Astrophysics Data System (ADS)

    Masarik, Matthew P.; Burns, Joseph; Thelen, Brian T.; Sutter, Lena

    2015-05-01

This paper investigates an algorithm for forming 3D images of the subsurface using stepped-frequency GPR data. The algorithm is specifically designed for a handheld GPR and therefore accounts for the irregular sampling pattern in the data and the spatially-variant air-ground interface by estimating an effective "ground plane" and then registering the data to that plane. The algorithm efficiently solves the 4th-order polynomial for the Snell reflection points using a fully vectorized iterative scheme. The forward operator is implemented efficiently using an accelerated nonuniform FFT (Greengard and Lee, 2004); the adjoint operator is implemented efficiently using an interpolation step coupled with an upsampled FFT. The imaging is done as a linearized version of the full inverse problem, which is regularized using a sparsity constraint to reduce sidelobes and thereby improve image localization. With an appropriate sparsity constraint, the algorithm eliminates most of the surrounding clutter and sidelobes while still preserving valuable image properties such as shape and size. The algorithm is applied to simulated data, controlled experimental data (made available by Dr. Waymond Scott, Georgia Institute of Technology), and government-provided data with irregular sampling and air-ground interface.
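The sparsity-regularized linearized inversion can be illustrated with a toy solver. The sketch below uses ISTA (iterative shrinkage-thresholding), one standard way to impose an l1 sparsity constraint; a random matrix stands in for the paper's NUFFT-based forward operator, and all sizes and values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the linearized GPR forward operator (the real one uses a
# nonuniform FFT); A maps a sparse subsurface image x to measured data y.
m, n = 60, 128
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[[10, 40, 90]] = [1.0, -0.7, 0.5]      # a few point scatterers
y = A @ x_true + 0.01 * rng.standard_normal(m)

def ista(A, y, lam=0.02, n_iter=300):
    """Iterative shrinkage-thresholding for min ||Ax-y||^2/2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2             # Lipschitz constant of gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = A.T @ (A @ x - y)                 # gradient of the data term
        z = x - g / L
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return x

x_hat = ista(A, y)
print("largest recovered entries:", np.argsort(-np.abs(x_hat))[:3])
```

The soft-thresholding step is what suppresses small sidelobe-like coefficients while keeping the dominant scatterers, mirroring the clutter reduction described in the abstract.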

  4. A joint estimation detection of Glaucoma progression in 3D spectral domain optical coherence tomography optic nerve head images

    PubMed Central

    Belghith, Akram; Bowd, Christopher; Weinreb, Robert N.; Zangwill, Linda M.

    2014-01-01

Glaucoma is an ocular disease characterized by distinctive changes in the optic nerve head (ONH) and visual field. Glaucoma can strike without symptoms and causes blindness if it remains untreated. Therefore, early disease detection is important so that treatment can be initiated and blindness prevented. In this context, important advances in technology for non-invasive imaging of the eye have been made, providing quantitative tools to measure structural changes in ONH topography, an essential element for glaucoma detection and monitoring. 3D spectral domain optical coherence tomography (SD-OCT), an optical imaging technique, has been commonly used to discriminate glaucomatous from healthy subjects. In this paper, we present a new framework for detection of glaucoma progression using 3D SD-OCT images. In contrast to previous works that used the retinal nerve fiber layer (RNFL) thickness measurement provided by commercially available spectral-domain optical coherence tomographs, we consider the whole 3D volume for change detection. To integrate a priori knowledge, and in particular the spatial voxel dependency in the change detection map, we propose the use of a Markov Random Field to handle this dependency. To accommodate the presence of false positive detections, the estimated change detection map is then used to classify a 3D SD-OCT image into the “non-progressing” and “progressing” glaucoma classes, based on a fuzzy logic classifier. We compared the diagnostic performance of the proposed framework to existing methods of progression detection. PMID:25606299
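The effect of a Markov Random Field prior on a voxelwise change map can be illustrated with a toy 2D example. The sketch below uses iterated conditional modes on a Potts-style model, a common way to optimize such a prior; it is a stand-in, not the paper's actual estimator, and all data are synthetic:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy noisy binary change map: a true square "progressing" region plus
# speckle noise, standing in for voxelwise OCT change detections.
truth = np.zeros((32, 32), dtype=int)
truth[10:20, 10:20] = 1
noisy = np.where(rng.random(truth.shape) < 0.15, 1 - truth, truth)

def icm_smooth(labels, beta=1.5, unary=1.0, n_sweeps=5):
    """Iterated conditional modes for a 2-label Potts MRF.

    Per-pixel energy: unary * [label != observed] - beta * (#agreeing neighbors).
    """
    obs = labels.copy()
    cur = labels.copy()
    h, w = cur.shape
    for _ in range(n_sweeps):
        for i in range(h):
            for j in range(w):
                best, best_e = cur[i, j], np.inf
                for lab in (0, 1):
                    e = unary * (lab != obs[i, j])
                    for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                        ni, nj = i + di, j + dj
                        if 0 <= ni < h and 0 <= nj < w:
                            e -= beta * (cur[ni, nj] == lab)
                    if e < best_e:
                        best, best_e = lab, e
                cur[i, j] = best
    return cur

smoothed = icm_smooth(noisy)
print("label flips vs truth: noisy", int((noisy != truth).sum()),
      "-> smoothed", int((smoothed != truth).sum()))
```

Isolated false positives are flipped by their neighbors while the coherent changed region survives, which is the behavior the spatial-dependency prior is meant to provide before the fuzzy-logic classification stage.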

  5. A joint estimation detection of Glaucoma progression in 3D spectral domain optical coherence tomography optic nerve head images

    NASA Astrophysics Data System (ADS)

    Belghith, Akram; Bowd, Christopher; Weinreb, Robert N.; Zangwill, Linda M.

    2014-03-01

Glaucoma is an ocular disease characterized by distinctive changes in the optic nerve head (ONH) and visual field. Glaucoma can strike without symptoms and causes blindness if it remains untreated. Therefore, early disease detection is important so that treatment can be initiated and blindness prevented. In this context, important advances in technology for non-invasive imaging of the eye have been made, providing quantitative tools to measure structural changes in ONH topography, an essential element for glaucoma detection and monitoring. 3D spectral domain optical coherence tomography (SD-OCT), an optical imaging technique, has been commonly used to discriminate glaucomatous from healthy subjects. In this paper, we present a new framework for detection of glaucoma progression using 3D SD-OCT images. In contrast to previous works that used the retinal nerve fiber layer (RNFL) thickness measurement provided by commercially available spectral-domain optical coherence tomographs, we consider the whole 3D volume for change detection. To integrate a priori knowledge, and in particular the spatial voxel dependency in the change detection map, we propose the use of a Markov Random Field to handle this dependency. To accommodate the presence of false positive detections, the estimated change detection map is then used to classify a 3D SD-OCT image into the "non-progressing" and "progressing" glaucoma classes, based on a fuzzy logic classifier. We compared the diagnostic performance of the proposed framework to existing methods of progression detection.

  6. 3D painting documentation: evaluation of conservation conditions with 3D imaging and ranging techniques

    NASA Astrophysics Data System (ADS)

    Abate, D.; Menna, F.; Remondino, F.; Gattari, M. G.

    2014-06-01

The monitoring of paintings, both on canvas and on wooden supports, is a crucial issue for the preservation and conservation of this kind of artwork. Many environmental factors (e.g. humidity, temperature, illumination, etc.), as well as bad conservation practices (e.g. wrong restorations, inappropriate locations, etc.), can compromise the material condition over time and deteriorate an artwork. The article presents an on-going project realized by a multidisciplinary team composed of the ENEA UTICT 3D GraphLab, the 3D Optical Metrology Unit of the Bruno Kessler Foundation and the Soprintendenza per i Beni Storico Artistici ed Etnoantropologici of Bologna (Italy). The goal of the project is the multi-temporal 3D documentation and monitoring of paintings - currently in a poor state of conservation - and the provision of metrics to quantify the deformations and damage.

  7. Imaging the 3D geometry of pseudotachylyte-bearing faults

    NASA Astrophysics Data System (ADS)

    Resor, Phil; Shervais, Katherine

    2013-04-01

Dynamic friction experiments in granitoid or gabbroic rocks that achieve earthquake slip velocities reveal significant weakening by melt-lubrication of the sliding surfaces. Extrapolation of these experimental results to seismic source depths (> 7 km) suggests that the slip weakening distance (Dw) over which this transition occurs is < 10 cm. The physics of this lubrication in the presence of a fluid (melt) is controlled by surface micro-topography. In order to characterize fault surface microroughness and its evolution during dynamic slip events on natural faults, we have undertaken an analysis of three-dimensional (3D) fault surface microtopography and its causes on a suite of pseudotachylyte-bearing fault strands from the Gole Larghe fault zone, Italy. The solidification of frictional melt soon after seismic slip ceases "freezes in" earthquake source geometries; however, it also precludes the development of extensive fault surface exposures that have enabled direct studies of fault surface roughness. We have overcome this difficulty by imaging the intact 3D geometry of the fault using high-resolution X-ray computed tomography (CT). We collected a suite of 2-3.5 cm diameter cores (2-8 cm long) from individual faults within the Gole Larghe fault zone with a range of orientations (±45° from average strike) and slip magnitudes (0-1 m). Samples were scanned at the University of Texas High Resolution X-ray CT Facility, using an Xradia MicroCT scanner with a 70 kV X-ray source. Individual voxels (3D pixels) are ~36 µm across. Fault geometry is thus imaged over ~4 orders of magnitude, from the micron scale up to ~Dw. Pseudotachylyte-bearing fault zones are imaged as tabular bodies of intermediate X-ray attenuation crosscutting high-attenuation biotite and low-attenuation quartz and feldspar of the surrounding tonalite. 
We extract the fault surfaces (contact between the pseudotachylyte bearing fault zone and the wall rock) using integrated manual mapping, automated edge detection, and statistical evaluation. This approach results in a digital elevation model for each side of the fault zone that we use to quantify melt thickness and volume as well as surface microroughness and explore the relationship between these properties and the geometry, slip magnitude, and wall rock mineralogy of the fault.

  8. 3D Chemical and Elemental Imaging by STXM Spectrotomography

    SciTech Connect

    Wang, J.; Karunakaran, C.; Lu, Y.; Hormes, J.; Hitchcock, A. P.; Prange, A.; Franz, B.; Harkness, T.; Obst, M.

    2011-09-09

Spectrotomography based on the scanning transmission x-ray microscope (STXM) at the 10ID-1 spectromicroscopy beamline of the Canadian Light Source was used to study two selected unicellular microorganisms. Spatial distributions of sulphur globules, calcium, protein, and polysaccharide in sulphur-metabolizing bacteria (Allochromatium vinosum) were determined at the S 2p, C 1s, and Ca 2p edges. 3D chemical mapping showed that the sulphur globules are located inside the bacteria with a strong spatial correlation with calcium ions (most probably calcium carbonate from the medium; STXM nevertheless makes their distribution and localization within the cell visible, which is of particular interest to biologists) and polysaccharide-rich polymers, suggesting an influence of the organic components on the formation of the sulphur and calcium deposits. A second study investigated copper accumulating in yeast cells (Saccharomyces cerevisiae) treated with copper sulphate. 3D elemental imaging at the Cu 2p edge showed that Cu(II) is reduced to Cu(I) on the yeast cell wall. A novel needle-like wet cell sample holder for STXM spectrotomography studies of fully hydrated samples is discussed.

  9. 3D Scene Modeling and Understanding from Image Sequences

    E-print Network

    Zhu, Zhigang

    Hao Tang, CUNY Graduate Center. A method for 3D modeling is proposed which generates a content-based 3D mosaic (CB3M) representation for long video sequences of 3D, dynamic urban scenes captured by a camera on a mobile platform.

  10. Recent Advances in Retinal Imaging With Adaptive Optics

    E-print Network

    Optics & Photonics News, January 2005, © Optical Society of America. Adaptive optics imaging systems use active optical elements to compensate for aberrations in the optical path between the camera and the object being imaged. In 1953, Babcock first proposed the concept of adaptive optics.

  11. 3D imaging of semiconductor components by discrete laminography

    SciTech Connect

    Batenburg, K. J.; Palenstijn, W. J.; Sijbers, J.

    2014-06-19

    X-ray laminography is a powerful technique for quality control of semiconductor components. Despite the advantages of nondestructive 3D imaging over 2D techniques based on sectioning, the acquisition time is still a major obstacle for practical use of the technique. In this paper, we consider the application of Discrete Tomography to laminography data, which can potentially reduce the scanning time while still maintaining a high reconstruction quality. By incorporating prior knowledge in the reconstruction algorithm about the materials present in the scanned object, far more accurate reconstructions can be obtained from the same measured data compared to classical reconstruction methods. We present a series of simulation experiments that illustrate the potential of the approach.

  12. 3D and multispectral imaging for subcutaneous veins detection.

    PubMed

    Paquit, Vincent C; Tobin, Kenneth W; Price, Jeffery R; Mèriaudeau, Fabrice

    2009-07-01

The first and perhaps most important phase of a surgical procedure is the insertion of an intravenous (IV) catheter. Currently, this is performed manually by trained personnel. In some visions of future operating rooms, however, this process is to be replaced by an automated system. Experiments to determine the best NIR wavelengths to optimize vein contrast across physiological differences such as skin tone and/or the presence of hair on the arm or wrist surface are presented. For illumination, our system is composed of a mercury arc lamp coupled to a 10 nm band-pass spectrometer. A structured lighting system is also coupled to our multispectral system in order to provide 3D information on the patient's arm orientation. Images of each patient's arm are captured under every possible combination of illuminants, and the optimal combination of wavelengths that maximizes vein contrast for a given subject is determined using linear discriminant analysis. PMID:19582050
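Wavelength selection for vein contrast can be sketched with a per-band Fisher discriminant ratio, a one-dimensional simplification of the linear discriminant analysis named in the abstract. All intensities and wavelengths below are simulated, not measured:

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulated NIR intensities for vein vs. surrounding-tissue pixels at a few
# candidate wavelengths (nm); the numbers are illustrative only.
wavelengths = [760, 800, 850, 910, 970]
n_pix = 500
vein_means   = np.array([0.55, 0.50, 0.42, 0.48, 0.52])  # veins darker at 850
tissue_means = np.array([0.60, 0.58, 0.60, 0.55, 0.54])
veins  = vein_means   + 0.05 * rng.standard_normal((n_pix, 5))
tissue = tissue_means + 0.05 * rng.standard_normal((n_pix, 5))

# Fisher discriminant ratio per wavelength: (mean gap)^2 / pooled variance.
gap = veins.mean(axis=0) - tissue.mean(axis=0)
pooled_var = veins.var(axis=0) + tissue.var(axis=0)
fisher = gap ** 2 / pooled_var

best = wavelengths[int(np.argmax(fisher))]
print("Fisher ratio per band:", np.round(fisher, 2))
print("highest-contrast wavelength:", best, "nm")
```

A full LDA would instead learn a weighted combination of all bands; ranking single bands by Fisher ratio is the simplest version of the same contrast-maximization idea.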

  13. Spectral ladar: towards active 3D multispectral imaging

    NASA Astrophysics Data System (ADS)

    Powers, Michael A.; Davis, Christopher C.

    2010-04-01

    In this paper we present our Spectral LADAR concept, an augmented implementation of traditional LADAR. This sensor uses a polychromatic source to obtain range-resolved 3D spectral images which are used to identify objects based on combined spatial and spectral features, resolving positions in three dimensions and up to hundreds of meters in distance. We report on a proof-of-concept Spectral LADAR demonstrator that generates spectral point clouds from static scenes. The demonstrator transmits nanosecond supercontinuum pulses generated in a photonic crystal fiber. Currently we use a rapidly tuned receiver with a high-speed InGaAs APD for 25 spectral bands with the future expectation of implementing a linear APD array spectrograph. Each spectral band is independently range resolved with multiple return pulse recognition. This is a critical feature, enabling simultaneous spectral and spatial unmixing of partially obscured objects when not achievable using image fusion of monochromatic LADAR and passive spectral imagers. This enables higher identification confidence in highly cluttered environments such as forested or urban areas (e.g. vehicles behind camouflage or foliage). These environments present challenges for situational awareness and robotic perception which can benefit from the unique attributes of Spectral LADAR. Results from this demonstrator unit are presented for scenes typical of military operations and characterize the operation of the device. The results are discussed here in the context of autonomous vehicle navigation and target recognition.
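Independently range-resolving each spectral band with multiple-return recognition can be sketched as peak detection on a return waveform. Everything below (pulse shapes, ranges, threshold, noise level) is illustrative, not data from the demonstrator:

```python
import numpy as np

C = 299_792_458.0                 # speed of light, m/s
dt = 1e-9                         # 1 ns sampling of the return waveform

# Hypothetical return waveform in one spectral band: a partial return from
# foliage at ~30 m and the object behind it at ~45 m.
t = np.arange(0, 400) * dt
def pulse(t0, amp):
    return amp * np.exp(-((t - t0) / 3e-9) ** 2)
waveform = pulse(2 * 30.0 / C, 0.6) + pulse(2 * 45.0 / C, 1.0)
waveform += 0.01 * np.random.default_rng(4).standard_normal(t.size)

# Simple multiple-return recognition: local maxima above a threshold.
thresh = 0.3
is_peak = ((waveform[1:-1] > waveform[:-2])
           & (waveform[1:-1] > waveform[2:])
           & (waveform[1:-1] > thresh))
peak_times = t[1:-1][is_peak]
ranges_m = C * peak_times / 2      # round-trip time to one-way range

print("detected ranges (m):", np.round(ranges_m, 2))
```

Detecting both returns in every band is what allows the spectral and spatial unmixing of a partially obscured object, rather than averaging the obscurant and target together.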

  14. 3D imaging studies of rigid-fiber sedimentation

    NASA Astrophysics Data System (ADS)

    Vahey, David W.; Tozzi, Emilio J.; Scott, C. Tim; Klingenberg, Daniel J.

    2011-03-01

    Fibers are industrially important particles that experience coupling between rotational and translational motion during sedimentation. This leads to helical trajectories that have yet to be accurately predicted or measured. Sedimentation experiments and hydrodynamic analysis were performed on 11 copper "fibers" of average length 10.3 mm and diameter 0.20 mm. Each fiber contained three linear but non-coplanar segments. Fiber dimensions were measured by imaging their 2D projections on three planes. The fibers were sequentially released into silicone oil contained in a transparent cylinder of square cross section. Identical, synchronized cameras were mounted to a moveable platform and imaged the cylinder from orthogonal directions. The cameras were fixed in position during the time that a fiber remained in the field of view. Subsequently, the cameras were controllably moved to the next lower field of view. The trajectories of descending fibers were followed over distances up to 250 mm. Custom software was written to extract fiber orientation and trajectory from the 3D images. Fibers with similar terminal velocity often had significantly different terminal angular velocities. Both were well-predicted by theory. The radius of the helical trajectory was hard to predict when angular velocity was high, probably reflecting uncertainties in fiber shape, initial velocity, and fluid conditions associated with launch. Nevertheless, lateral excursion of fibers during sedimentation was reasonably predicted by fiber curl and asymmetry, suggesting the possibility of sorting fibers according to their shape.
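Recovering a 3D trajectory point from two synchronized orthogonal cameras can be sketched with a simple orthographic model; this is a simplification of the authors' custom software, and the coordinates below are hypothetical:

```python
import numpy as np

# Camera A looks along +y and images the (x, z) plane; camera B looks along
# +x and images the (y, z) plane. The shared vertical coordinate z lets one
# 3D point be fused from a matched pair of 2D detections.
def fuse_views(xz_from_A, yz_from_B, z_tol=1e-3):
    xA, zA = xz_from_A
    yB, zB = yz_from_B
    if abs(zA - zB) > z_tol:
        raise ValueError("views disagree on height; matching failed")
    return np.array([xA, yB, 0.5 * (zA + zB)])

p = fuse_views((12.5, -40.0), (3.2, -40.0))
print("fused 3D point (mm):", p)
```

The redundant z coordinate doubles as a consistency check when matching a fiber between the two synchronized views.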

  15. GPU-accelerated denoising of 3D magnetic resonance images

    SciTech Connect

    Howison, Mark; Wes Bethel, E.

    2014-05-29

    The raw computational power of GPU accelerators enables fast denoising of 3D MR images using bilateral filtering, anisotropic diffusion, and non-local means. In practice, applying these filtering operations requires setting multiple parameters. This study was designed to provide better guidance to practitioners for choosing the most appropriate parameters by answering two questions: what parameters yield the best denoising results in practice? And what tuning is necessary to achieve optimal performance on a modern GPU? To answer the first question, we use two different metrics, mean squared error (MSE) and mean structural similarity (MSSIM), to compare denoising quality against a reference image. Surprisingly, the best improvement in structural similarity with the bilateral filter is achieved with a small stencil size that lies within the range of real-time execution on an NVIDIA Tesla M2050 GPU. Moreover, inappropriate choices for parameters, especially scaling parameters, can yield very poor denoising performance. To answer the second question, we perform an autotuning study to empirically determine optimal memory tiling on the GPU. The variation in these results suggests that such tuning is an essential step in achieving real-time performance. These results have important implications for the real-time application of denoising to MR images in clinical settings that require fast turn-around times.
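The two comparison metrics used in the study are standard. Below is a minimal sketch of MSE and a single-window SSIM on a synthetic volume; real MSSIM averages the SSIM statistic over local windows, which this sketch omits:

```python
import numpy as np

def mse(a, b):
    return float(np.mean((a - b) ** 2))

def global_ssim(a, b, data_range=1.0):
    """Global (single-window) structural similarity; MSSIM averages this
    statistic over local windows instead of computing it once globally."""
    c1, c2 = (0.01 * data_range) ** 2, (0.03 * data_range) ** 2
    mu_a, mu_b = a.mean(), b.mean()
    va, vb = a.var(), b.var()
    cov = ((a - mu_a) * (b - mu_b)).mean()
    return float(((2 * mu_a * mu_b + c1) * (2 * cov + c2))
                 / ((mu_a ** 2 + mu_b ** 2 + c1) * (va + vb + c2)))

rng = np.random.default_rng(5)
reference = rng.random((32, 32, 32))   # stand-in for a clean 3D MR volume
noisy = reference + 0.1 * rng.standard_normal(reference.shape)

print("MSE :", round(mse(reference, noisy), 4))
print("SSIM:", round(global_ssim(reference, noisy), 4))
```

In a parameter sweep like the paper's, a denoised volume replaces `noisy` and the parameter setting minimizing MSE (or maximizing MSSIM) against the reference is selected.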

  16. Fully digital, phase-domain ΔΣ 3D range image sensor in 130nm CMOS imaging technology 

    E-print Network

    Walker, Richard John

    2012-06-25

    Three-Dimensional (3D) optical range-imaging is a field experiencing rapid growth, expanding into a wide variety of machine vision applications, most recently including consumer gaming. Time of Flight (ToF) cameras, akin ...

  17. 3D Medical Image Enhancement based on Wavelet Transforms

    E-print Network

    Bartoli, Adrien

    Amir Yavariabdi, ISIT, Université d'Auvergne, Clermont-Ferrand, 63000, France. This paper studies 2D and 3D wavelet-domain enhancement based on the Discrete Wavelet Transform (DWT). Experimental results on both 2D and 3D images show how our method enhances

  18. Enhanced 3D fluorescence live cell imaging on nanoplasmonic substrate

    NASA Astrophysics Data System (ADS)

    Ranjan Gartia, Manas; Hsiao, Austin; Sivaguru, Mayandi; Chen, Yi; Logan Liu, G.

    2011-09-01

We have created a randomly distributed nanocone substrate on silicon coated with silver for surface-plasmon-enhanced fluorescence detection and 3D cell imaging. Optical characterization of the nanocone substrate showed it can support several plasmonic modes (in the 300-800 nm wavelength range) that can be coupled to a fluorophore on the surface of the substrate, which gives rise to the enhanced fluorescence. Spectral analysis suggests that a nanocone substrate can create more excitons and shorter lifetime in the model fluorophore Rhodamine 6G (R6G) due to plasmon resonance energy transfer from the nanocone substrate to the nearby fluorophore. We observed three-dimensional fluorescence enhancement on our substrate shown from the confocal fluorescence imaging of Chinese hamster ovary (CHO) cells grown on the substrate. The fluorescence intensity from the fluorophores bound on the cell membrane was amplified more than 100-fold as compared to that on a glass substrate. We believe that strong scattering within the nanostructured area coupled with random scattering inside the cell resulted in the observed three-dimensional enhancement in fluorescence with higher photostability on the substrate surface.

  19. A Statistical Image-Based Shape Model for Visual Hull Reconstruction and 3D Structure Inference

    E-print Network

    Grauman, Kristen

    2003-05-22

    We present a statistical image-based shape + structure model for Bayesian visual hull reconstruction and 3D structure inference. The 3D shape of a class of objects is represented by sets of contours from silhouette ...

  20. Inferring 3D Structure with a Statistical Image-Based Shape Model

    E-print Network

    Grauman, Kristen

    2003-04-17

    We present an image-based approach to infer 3D structure parameters using a probabilistic "shape+structure'' model. The 3D shape of a class of objects may be represented by sets of contours from silhouette views ...

  1. 3-D Reconstruction From 2-D Radiographic Images and Its Application to Clinical Veterinary Medicine

    NASA Astrophysics Data System (ADS)

    Hamamoto, Kazuhiko; Sato, Motoyoshi

    3D imaging techniques are important and indispensable in diagnosis. The mainstream approach reconstructs a 3D image from a set of slice images, as in X-ray CT and MRI. However, these systems require large space and high costs. On the other hand, a low-cost, small-size 3D imaging system is needed in clinical veterinary medicine, for example for diagnosis in an X-ray car or in pasture areas. We propose a novel 3D imaging technique using 2-D X-ray radiographic images. The system can be realized far more cheaply than X-ray CT and makes it possible to obtain 3D images in an X-ray car or with portable X-ray equipment. In this paper, a 3D visualization technique from 2-D radiographic images is proposed and several reconstructions are shown. These reconstructions are evaluated by veterinarians.

  2. Estimation of 3D Left Ventricular Deformation from Medical Images Using Biomechanical Models.

    E-print Network

    Duncan, James S.

    Estimation of 3D left ventricular deformation from three-dimensional medical images using biomechanical models. We also explore some of their theoretical constraints, which can be used to guide the estimation.

  3. Simultaneous Interpolation and Deconvolution Model for the 3-D Reconstruction of Cell Images

    E-print Network

    Three-dimensional imaging of intracellular structures in living cells is one of these problems, and its solution is of utmost importance. Such microscopes are an important imaging technique in cell biology: due to their depth sensitivity they allow direct 3-D imaging.

  4. Identifying Age-related Macular Degeneration In Volumetric Retinal Images

    E-print Network

    Coenen, Frans

    Abdulrahman Albarrak. Age-related Macular Degeneration (AMD) is a retinal disorder which is currently on the increase. OCT images clearly indicate different

  5. High speed spectral domain optical coherence tomography for retinal imaging at 500,000 A-lines per second

    PubMed Central

    An, Lin; Li, Peng; Shen, Tueng T.; Wang, Ruikang

    2011-01-01

We present a new development of ultrahigh speed spectral domain optical coherence tomography (SDOCT) for human retinal imaging at 850 nm central wavelength by employing two high-speed line scan CMOS cameras, each running at 250 kHz. By precisely controlling the recording and reading time periods of the two cameras, the SDOCT system achieves an imaging speed of 500,000 A-lines per second, while maintaining both high axial resolution (~8 µm) and acceptable depth ranging (~2.5 mm). With this system, we propose two scanning protocols for human retinal imaging. The first is aimed at isotropic dense sampling and fast scanning speed, enabling 3D imaging within 0.72 s for a region covering 4×4 mm². In this case, the B-frame rate is 700 Hz and the isotropic dense sampling is 500 A-lines along both the fast and slow axes. This scanning protocol minimizes motion artifacts, making it possible to perform two-directional averaging so that the signal-to-noise ratio of the system is enhanced while the degradation of its resolution is minimized. The second protocol is designed to scan the retina over a large field of view, in which 1200 A-lines are captured along both the fast and slow axes, covering 10 mm², to provide overall information about the retinal status. Because of the relatively long imaging time (4 seconds for a 3D scan), motion artifacts are inevitable, making it difficult to interpret the 3D data set, particularly in the form of depth-resolved en face fundus images. To mitigate this difficulty, we propose to use the relatively highly reflecting retinal pigment epithelium layer as the reference to flatten the original 3D data set along both the fast and slow axes. We show that the proposed system delivers superb performance for human retinal imaging. PMID:22025983
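The timing quoted for the first scanning protocol can be checked with back-of-envelope arithmetic from the numbers in the abstract:

```python
# Two 250 kHz cameras interleaved give 500,000 A-lines/s; protocol 1 records
# 500 x 500 A-lines per volume at a 700 Hz B-frame rate.
a_line_rate = 500_000          # A-lines per second
fast, slow = 500, 500          # A-lines per B-frame, B-frames per volume
b_frame_rate = 700             # B-frames per second

volume_time = slow / b_frame_rate
print(f"volume time from frame rate: {volume_time:.2f} s")

duty = fast * b_frame_rate / a_line_rate
print(f"fraction of A-line budget used while scanning: {duty:.0%}")
```

500 frames at 700 Hz gives ~0.71 s, consistent with the quoted 0.72 s volume time; the remaining ~30% of the A-line budget plausibly covers camera read-out and flyback, though the abstract does not break this down.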

  6. Image-Based 3d Reconstruction and Analysis for Orthodontia

    NASA Astrophysics Data System (ADS)

    Knyaz, V. A.

    2012-08-01

Among the main tasks of orthodontia are the analysis of dental arches and treatment planning to provide the correct position for every tooth. The treatment plan is based on measurement of tooth parameters and on designing the ideal arch curve that the teeth are to form after treatment. The most common technique for moving teeth uses standard brackets attached to the teeth and a wire of given shape clamped by these brackets, producing the forces needed to move each tooth in a given direction. The disadvantages of the standard bracket technique are the low accuracy of tooth dimension measurements and the difficulty of applying a standard approach to the wide variety of complex orthodontic cases. An image-based technique for orthodontic planning, treatment and documentation aimed at overcoming these disadvantages is proposed. The proposed approach provides accurate measurements of the tooth parameters needed for adequate planning, designing correct tooth positions and monitoring the treatment process. The developed technique applies photogrammetric means for dental arch 3D model generation, bracket position determination and tooth-shift analysis.

  7. 3D fingerprint imaging system based on full-field fringe projection profilometry

    NASA Astrophysics Data System (ADS)

    Huang, Shujun; Zhang, Zonghua; Zhao, Yan; Dai, Jie; Chen, Chao; Xu, Yongjia; Zhang, E.; Xie, Lili

    2014-01-01

As a unique, unchangeable and easily acquired biometric, the fingerprint has been widely studied in academia and applied in many fields over the years. Traditional fingerprint recognition methods are based on 2D features of the fingerprint. However, the fingerprint is a 3D biological characteristic: the mapping from 3D to 2D loses one dimension of information and causes nonlinear distortion of the captured fingerprint. Therefore, it is becoming more and more important to obtain 3D fingerprint information for recognition. In this paper, a novel 3D fingerprint imaging system is presented based on the fringe projection technique to obtain 3D features and the corresponding color texture information. A series of color sinusoidal fringe patterns with optimum three-fringe numbers are projected onto a finger surface. From another viewpoint, the fringe patterns are deformed by the finger surface and captured by a CCD camera. 3D shape data of the finger can be obtained from the captured fringe pattern images. This paper studies the prototype of the 3D fingerprint imaging system, including the principle of 3D fingerprint acquisition, hardware design of the 3D imaging system, 3D calibration of the system, and software development. Experiments were carried out acquiring several 3D fingerprint data sets. The experimental results demonstrate the feasibility of the proposed 3D fingerprint imaging system.
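Phase recovery from projected fringes can be sketched with the standard four-step phase-shifting formula. The system's optimum three-fringe-number unwrapping is omitted here, and the fringe images below are synthetic:

```python
import numpy as np

# Four fringe images with phase shifts of 0, pi/2, pi and 3*pi/2 give the
# wrapped phase per pixel; intensity model I_k = A + B*cos(phi + shift_k).
h, w = 4, 6
x = np.arange(w)
true_phase = 2 * np.pi * (x + 0.5) / w            # synthetic phase ramp
shifts = [0, np.pi / 2, np.pi, 3 * np.pi / 2]
I1, I2, I3, I4 = [np.ones((h, 1)) * (0.5 + 0.4 * np.cos(true_phase + s))
                  for s in shifts]

# I4 - I2 = 2B*sin(phi) and I1 - I3 = 2B*cos(phi), so:
wrapped = np.arctan2(I4 - I2, I1 - I3)            # wrapped phase in (-pi, pi]
print(np.round(wrapped[0], 3))
```

The wrapped phase still contains 2π ambiguities; the optimum three-fringe-number technique mentioned in the abstract resolves these by combining fringe sets of different spatial frequencies before converting phase to finger height.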

  8. 3D high-density localization microscopy using hybrid astigmatic/ biplane imaging and sparse image reconstruction

    PubMed Central

    Min, Junhong; Holden, Seamus J.; Carlini, Lina; Unser, Michael; Manley, Suliana; Ye, Jong Chul

    2014-01-01

Localization microscopy achieves nanoscale spatial resolution by iterative localization of sparsely activated molecules, which generally leads to a long acquisition time. By implementing advanced algorithms to treat overlapping point spread functions (PSFs), imaging of densely activated molecules can improve the limited temporal resolution, as has been well demonstrated in two-dimensional imaging. However, three-dimensional (3D) localization of high-density data remains challenging since PSFs are far more similar along the axial dimension than the lateral dimensions. Here, we present a new, high-density 3D imaging system and algorithm. The hybrid system is implemented by combining astigmatic and biplane imaging. The proposed 3D reconstruction algorithm is extended from our state-of-the-art 2D high-density localization algorithm. Using mutual coherence analysis of model PSFs, we validated that the hybrid system is more suitable than astigmatic or biplane imaging alone for 3D localization of high-density data. The efficacy of the proposed method was confirmed via simulation and real data of microtubules. Furthermore, we also successfully demonstrated fluorescent-protein-based live cell 3D localization microscopy with a temporal resolution of just 3 seconds, capturing fast dynamics of the endoplasmic reticulum. PMID:26526603

  9. A general framework for vessel segmentation in retinal images

    E-print Network

    Stanchev, Peter

    Changhua Wu, Gady Agam, Peter Stanchev. We present a general framework for vessel segmentation in retinal images, with a particular focus on small vessels. The retinal images are first processed by a nonlinear diffusion filter.

  10. Hierarchical Discriminative Framework for Detecting Tubular Structures in 3D Images

    E-print Network

    Siemens Corporate Technology, Princeton, NJ, USA; Imaging & Therapy Systems, Siemens Healthcare. A robust measure of tubular presence is computed using a discriminative classifier at multiple image scales.

  11. Fast iterative image reconstruction methods for fully 3D multispectral bioluminescence tomography

    E-print Network

    Fast iterative image reconstruction methods for fully 3D multispectral bioluminescence tomography are developed for applications in small animal imaging. Our forward model uses a diffusion algorithm. Bioluminescence tomography is an in vivo imaging technique that localizes bioluminescent sources.

  12. Assessing 3D tunnel position in ACL reconstruction using a novel single image 3D-2D registration

    NASA Astrophysics Data System (ADS)

    Kang, X.; Yau, W. P.; Otake, Y.; Cheung, P. Y. S.; Hu, Y.; Taylor, R. H.

    2012-02-01

    The routinely used procedure for evaluating tunnel positions following anterior cruciate ligament (ACL) reconstructions based on standard X-ray images is known to pose difficulties in obtaining accurate measures, especially three-dimensional tunnel positions. This is largely due to the variability of individual knee joint pose relative to the X-ray plates. Accurate results have been reported using postoperative CT, but its extensive use in clinical routine is hampered by the requirement of CT scans of individual patients, which are not available for most ACL reconstructions. These difficulties are addressed through the proposed method, which aligns a knee model to X-ray images using our novel single-image 3D-2D registration method and then estimates the 3D tunnel position. In the proposed method, the alignment is achieved by a novel contour-based 3D-2D registration method wherein image contours are treated as a set of oriented points. However, instead of using some form of orientation weighting function and multiplying it with a distance function, we formulate the 3D-2D registration as a probability density estimation using a mixture of von Mises-Fisher-Gaussian (vMFG) distributions and solve it through an expectation-maximization (EM) algorithm. Compared with the ground truth established from postoperative CT, our registration method in an experiment using a plastic phantom showed accurate results with errors of (-0.43°±1.19°, 0.45°±2.17°, 0.23°±1.05°) and (0.03±0.55, -0.03±0.54, -2.73±1.64) mm. The entry point of the ACL tunnel, one of the key measurements, was obtained with distance errors of 0.53±0.30 mm.

  13. Portable, low-priced retinal imager for eye disease screening

    NASA Astrophysics Data System (ADS)

    Soliz, Peter; Nemeth, Sheila; VanNess, Richard; Barriga, E. S.; Zamora, Gilberto

    2014-02-01

    The objective of this project was to develop and demonstrate a portable, low-priced, easy to use non-mydriatic retinal camera for eye disease screening in underserved urban and rural locations. Existing portable retinal imagers do not meet the requirements of a low-cost camera with sufficient technical capabilities (field of view, image quality, portability, battery power, and ease-of-use) to be distributed widely to low volume clinics, such as the offices of single primary care physicians serving rural communities or other economically stressed healthcare facilities. Our approach for Smart i-Rx represents a significant departure from current generations of desktop and hand-held commercial retinal cameras, as well as those under development. Our techniques include: 1) exclusive use of off-the-shelf components; 2) integration of the retinal imaging device into a low-cost, high-utility camera mount and chin rest; 3) unique optics and illumination designed for a small form factor; 4) exploitation of the autofocus technology built into present digital SLR recreational cameras; and 5) integration of a polarization technique to avoid the corneal reflex. In a prospective study, 41 out of 44 diabetics were imaged successfully; no imaging was attempted on three of the subjects due to noticeably small pupils (less than 2 mm). The images were of sufficient quality to detect abnormalities related to diabetic retinopathy, such as microaneurysms and exudates, and were compared with images taken non-mydriatically with a Canon CR-1 Mark II camera. No cases identified as having DR by expert retinal graders were missed in the Smart i-Rx images.

  14. High image quality 3D displays with polarizer glasses based on active retarder technology

    NASA Astrophysics Data System (ADS)

    Jung, Sung-min; Lee, Young-bok; Park, Hyung-ju; Park, Jin-woo; Lee, Dong-hoon; Jeong, Woo-nam; Kim, Jeong-hyun; Chung, In-Jae

    2011-03-01

    In this study, we proposed methods to reduce black-white (BW) 3D crosstalk and gray-to-gray (GtoG) 3D crosstalk in active retarder 3D (AR3D) technology. To reduce the BW 3D crosstalk to 2.5% without backlight scanning, we first optimized the switching time of the AR panel. A BW 3D crosstalk of 1.0% was then achieved by scanning the backlight synchronized with the liquid crystal display (LCD) panel and the AR panel. Finally, with the over-driving method applied to various gray-to-gray transitions, the average GtoG 3D crosstalk was reduced to under 1.8%, yielding clear 3D images. With these methods of reducing 3D crosstalk in AR3D technology, we developed an AR3D monitor with a 23-inch diagonal and full HD resolution. The luminance of our prototype is 73 nits in 3D mode through polarizer glasses and 300 nits in 2D mode for bare eyes, corresponding to 24% light efficiency in 3D mode. In conclusion, our prototype shows clear 3D images with full HD resolution and high luminance even though it employs polarizer glasses.

  15. Semi-implicit finite volume scheme for image processing in 3D cylindrical geometry

    E-print Network

    Sgallari, Fiorella

    Karol Mikula. We present a semi-implicit finite volume method for image selective smoothing directly in the cylindrical image geometry. Namely, we study a semi-implicit 3D cylindrical finite volume scheme for solving Perona-Malik-type nonlinear diffusion.

  16. 3D Force Prediction Using Fingernail Imaging with Automated Calibration

    E-print Network

    Hollerbach, John M.

    Thomas Grieve, University of Utah. This paper demonstrates a system for 3D force prediction using fingernail imaging, in which video images of the human fingernail are used to predict the normal and shear forces.

  17. Correcting Motion-Induced Registration Errors in 3D Ultrasound Images

    E-print Network

    Drummond, Tom

    The magnitude and timing of the echoes are used to create a 2-D grey-level image (B-scan) of a cross-section, and the physician must mentally reconstruct the 3-D anatomy given multiple 2-D images. Research is underway to overcome this limitation: subsequent processing can build up a 3-D description of the imaged anatomy.

  18. A LEVEL SET METHOD FOR ANISOTROPIC GEOMETRIC DIFFUSION IN 3D IMAGE PROCESSING

    E-print Network

    Rumpf, Martin

    Tobias Preusser and Martin Rumpf. Multiscale methods have proved to be successful tools in image denoising and edge enhancement. A new morphological multiscale method in 3D image processing is presented.

  19. Towards wide-field high-resolution retinal imaging

    E-print Network

    Kellerer, Aglae

    2015-01-01

    Adaptive optical correction is an efficient technique for obtaining high-resolution images of the retinal surface. A main limitation of adaptive optical correction, however, is the small size of the corrected image; for medical purposes it is important to increase the size of the corrected images. This can be done through composite imaging, but a major difficulty is then the introduction of reconstruction artifacts. Another approach is multi-conjugate adaptive optics (MCAO), which comes in two flavors. The star-oriented approach has been demonstrated on the eye and allows the diameter of the corrected image to be increased by a factor of approximately 2-3; difficulties in the tomographic reconstruction preclude the correction of larger fields. Here we investigate the possibility of applying a layer-oriented MCAO approach to retinal imaging.

  20. Regularized Estimation of Retinal Vascular Oxygen Tension From Phosphorescence Images

    PubMed Central

    Ansari, Rashid; Wanek, Justin; Yetik, Imam Samil; Shahidi, Mahnaz

    2010-01-01

    The level of retinal oxygenation is potentially an important cue to the onset or presence of some common retinal diseases. An improved method for assessing oxygen tension in retinal blood vessels from phosphorescence lifetime imaging data is reported in this paper. The optimum estimate for phosphorescence lifetime and oxygen tension is obtained by regularizing the least-squares (LS) method. The estimation method is implemented with an iterative algorithm to minimize a regularized LS cost function. The effectiveness of the proposed method is demonstrated by applying it to simulated data as well as image data acquired from rat retinas. The method is shown to yield estimates that are robust to noise and whose variance is lower than that obtained with the classical LS method. PMID:19389690
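As a rough single-pixel illustration of the estimation idea (the paper's regularizer acts across the image, and all constants here, such as `tau_true`, the noise level, and `lam`, are invented), a ridge-regularized least-squares fit of a phosphorescence decay might look like:

```python
import numpy as np

rng = np.random.default_rng(0)
tau_true = 50.0                           # decay lifetime (arbitrary units)
t = np.linspace(0.0, 200.0, 40)
intensity = 100.0 * np.exp(-t / tau_true) * np.exp(rng.normal(0.0, 0.01, t.size))

# Linearise: log I = log A - t/tau, then solve the ridge-regularised
# normal equations (A^T A + lam*I) theta = A^T y for theta = [log A, 1/tau].
A = np.column_stack([np.ones_like(t), -t])
y = np.log(intensity)
lam = 1e-6                                # small Tikhonov weight
theta = np.linalg.solve(A.T @ A + lam * np.eye(2), A.T @ y)
tau_est = 1.0 / theta[1]
print(round(float(tau_est), 1))           # close to 50.0
```

In phosphorescence lifetime imaging, the fitted lifetime is then typically converted to oxygen tension via the Stern-Volmer relation.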

  1. 3D Harmonic Mapping and Tetrahedral Meshing of Brain Imaging Data

    E-print Network

    Thompson, Paul

    Yalin Wang, Xianfeng Gu, Paul Thompson. Harvard University, Cambridge, MA, USA. We developed two techniques to address 3D volume mapping and meshing of brain imaging data: the first algorithm finds a harmonic map from a 3-manifold to a 3D solid sphere, and the second is a novel sphere-based tetrahedral meshing technique.

  2. 3D Geometric and Optical Modeling of Warped Document Images from Scanners

    E-print Network

    Tan, Chew Lim

    Li Zhang, Zheng Zhang. We propose two models, namely a 3D geometric model and a 3D optical model, for the practical scanning conditions, and use them to build shading and de-warping models for reconstruction. Finally, we evaluate the restoration results by comparing the OCR (Optical Character Recognition) results.

  3. Hydraulic conductivity imaging from 3-D transient hydraulic tomography at several pumping/observation densities

    E-print Network

    Barrash, Warren

    Received August 2013; accepted 7 September 2013; published 13 November 2013. In 3-D hydraulic tomography (3-D HT), aquifer properties (primarily hydraulic conductivity, K) are estimated by joint inversion of head change data from multiple pumping tests.

  4. Imaging cellular network dynamics in three dimensions using fast 3D laser scanning

    E-print Network

    Cai, Long

    Werner Göbel et al. Methods for imaging cellular network dynamics in three dimensions have been lacking. Here we introduce a three-dimensional (3D) line-scan technology for two-photon microscopy, using x-y scanners to repeatedly scan the laser focus along a closed 3D trajectory. More than 90% of cell somata were sampled.

  5. Incremental 3D Ultrasound Imaging from a 2D scanner

    E-print Network

    Pollefeys, Marc

    Ryutarou Ohbuchi, Henry Fuchs. We have been developing an interactive system that will display 3D structures from a series of 2D images. Acquiring volumes of 128 x 128 x 128 voxels or more in real time (30 3D-frames/s) requires parallel processing.

  6. Rectification and 3D Reconstruction of Curved Document Images

    E-print Network

    Narasimhan, Srinivasa G.

    Yuandong Tian and Srinivasa G. Narasimhan. We present an approach that automatically reconstructs the 3D shape and rectifies a deformed text document from a single image. Other approaches rely on specialized hardware such as laser scanners [1] or structured light projectors [3, 2] to measure the 3D deformation in the document.

  7. Automated Retrieval of 3D CAD Model Objects in Construction Range Images

    E-print Network

    Bosché, Frédéric

    F. Bosché, C.T. Haas. This paper addresses the automated and robust retrieval of three-dimensional (3D) Computer-Aided Design (CAD) objects from laser range point clouds in the Architectural/Engineering/Construction & Facility Management industry.

  8. Topological Equivalence between a 3D Object and the Reconstruction of Its Digital Image

    E-print Network

    Latecki, Longin Jan

    Peer Stelldinger. If one digitizes a 3D object, even with a dense sampling grid, the reconstructed digital object may have a different topology. We give conditions under which the reconstruction is homeomorphic and close to the 3D object. The resulting digital object is always well-composed.

  9. An optical image of a 3D multifunctional

    E-print Network

    Rogers, John A.

    Such devices interface with the human body, its organs and various tissues [6-12]. Recently described devices, referred to as 3D multifunctional devices, allow simultaneous coverage of the tissue. An RF catheter provides precise access to anatomical regions but can only cover a limited area.

  10. 3D particle tracking velocimetry using synthetic aperture imaging

    E-print Network

    Bajpayee, Abhishek

    2014-01-01

    3D visualization of fluid flow is of critical significance to a number of applications ranging from micro-scale medical devices to design of ships and airplanes. Of the various techniques currently used to visualize flow ...

  11. Automated tool for nuclei detection in digital microscopic images: Application to retinal images

    E-print Network

    California at Santa Barbara, University of

    We developed an automated nuclei detection tool that provides reliable and consistently accurate results for counting cell nuclei. Counts of cells and nuclei from histological sections provide central quantitative information.

  12. Accuracy of volume measurement using 3D ultrasound and development of CT-3D US image fusion algorithm for prostate cancer radiotherapy

    SciTech Connect

    Baek, Jihye; Huh, Jangyoung; Hyun An, So; Oh, Yoonjin; Kim, Myungsoo; Kim, DongYoung; Chung, Kwangzoo; Cho, Sungho; Lee, Rena

    2013-02-15

    Purpose: To evaluate the accuracy of measuring volumes using three-dimensional ultrasound (3D US), and to verify the feasibility of replacing CT-MR fusion images with CT-3D US in radiotherapy treatment planning. Methods: Phantoms, consisting of water, contrast agent, and agarose, were manufactured. The volume was measured using 3D US, CT, and MR devices. CT-3D US and MR-3D US image fusion software was developed using the Insight Toolkit library in order to acquire three-dimensional fusion images. The quality of the image fusion was evaluated using metric values and fusion images. Results: Volume measurement using 3D US shows a 2.8 ± 1.5% error, versus a 4.4 ± 3.0% error for CT and a 3.1 ± 2.0% error for MR. The results imply that volume measurement using the 3D US devices has a similar accuracy level to that of CT and MR. Three-dimensional image fusion of CT-3D US and MR-3D US was successfully performed using phantom images. Moreover, MR-3D US image fusion was performed using human bladder images. Conclusions: 3D US could be used in the volume measurement of human bladders and prostates. CT-3D US image fusion could be used in monitoring the target position in each fraction of external beam radiation therapy. Moreover, the feasibility of replacing CT-MR image fusion with CT-3D US in radiotherapy treatment planning was verified.
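The error figures above are percent volume errors, |V_measured − V_true| / V_true × 100, summarised as mean ± standard deviation. A minimal sketch with invented phantom readings (not the paper's data):

```python
import numpy as np

# Percent volume error per measurement, then mean +/- sample std.
# The readings below are made up for illustration only.
v_true = np.array([40.0, 40.0, 40.0, 40.0])      # phantom volume (ml)
v_us = np.array([41.2, 39.0, 40.9, 38.6])        # hypothetical 3D-US readings
err = np.abs(v_us - v_true) / v_true * 100.0
print(f"{err.mean():.1f} +/- {err.std(ddof=1):.1f} %")
```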

  13. Hyperspectral imaging for the detection of retinal disease

    NASA Astrophysics Data System (ADS)

    Harvey, Andrew R.; Lawlor, Joanne; McNaught, Andrew I.; Williams, John W.; Fletcher-Holmes, David W.

    2002-11-01

    Hyperspectral imaging (HSI) shows great promise for the detection and classification of several diseases, particularly in the fields of "optical biopsy" as applied to oncology, and functional retinal imaging in ophthalmology. In this paper, we discuss the application of HSI to the detection of retinal diseases and technological solutions that address some of the fundamental difficulties of spectral imaging within the eye. HSI of the retina offers a route to non-invasively deduce biochemical and metabolic processes within the retina. For example, it shows promise for the mapping of retinal blood perfusion using the spectral signatures of oxygenated and deoxygenated hemoglobin. Compared with techniques using just a few spectral measurements, it offers improved classification in the presence of spectral cross-contamination by pigments and other components within the retina. There are potential applications for this imaging technique in the investigation and treatment of the eye complications of diabetes, and other diseases involving disturbances to the retinal or optic-nerve-head circulation. It is well known that high-performance HSI requires high signal-to-noise ratios (SNR), whereas any imaging technique within the eye must cope with the twin limitations of the small numerical aperture provided by the entrance pupil of the eye and the limit on the radiant power at the retina. We advocate the use of spectrally-multiplexed spectral imaging techniques (the filter wheel is a traditional example). These approaches enable a flexible approach to spectral imaging, with wider spectral range, higher SNRs, and lower light intensity at the retina than could be achieved using a Fourier-transform (FT) approach. We report the use of spectral imaging to provide calibrated spectral albedo images of healthy and diseased retinas and the use of these data for screening purposes. These images clearly demonstrate the ability to distinguish between oxygenated and deoxygenated hemoglobin using spectral imaging, which shows promise for the early detection of various retinopathies.
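The oximetry described above rests on the idea that measured absorbance is approximately a linear mix of oxy- and deoxyhemoglobin spectra, so the mix can be recovered by least squares. A minimal unmixing sketch; the extinction matrix `E` contains illustrative numbers, not calibrated ophthalmic data:

```python
import numpy as np

# Columns of E: [HbO2, Hb] extinction at four wavelengths (illustrative).
E = np.array([
    [0.9, 0.3],
    [0.5, 0.7],
    [0.6, 0.6],
    [0.2, 1.0],
])
true_conc = np.array([0.8, 0.2])          # 80% oxygenated mix
absorbance = E @ true_conc                # simulated multi-band measurement

# Recover concentrations by linear least squares, then oxygen saturation.
conc, *_ = np.linalg.lstsq(E, absorbance, rcond=None)
saturation = conc[0] / conc.sum()
print(round(float(saturation), 2))        # recovers 0.8
```

Using many bands rather than two is what gives HSI its robustness to the spectral cross-contamination mentioned in the abstract.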

  14. Real-time computer-generated integral imaging and 3D image calibration for augmented reality surgical navigation.

    PubMed

    Wang, Junchen; Suenaga, Hideyuki; Liao, Hongen; Hoshi, Kazuto; Yang, Liangjing; Kobayashi, Etsuko; Sakuma, Ichiro

    2015-03-01

    Autostereoscopic 3D image overlay for augmented reality (AR) based surgical navigation has been studied and reported many times. For surgical overlay, the 3D image is expected to have the same geometric shape as the original organ and to be transformable to a specified location for image overlay. However, how to generate a 3D image with high geometric fidelity, and how to quantitatively evaluate its geometric accuracy, had not been addressed. This paper proposes a graphics processing unit (GPU) based computer-generated integral imaging pipeline for real-time autostereoscopic 3D display, and an automatic closed-loop 3D image calibration paradigm for displaying undistorted 3D images. Based on the proposed methods, a novel AR device for 3D image surgical overlay is presented, which mainly consists of a 3D display, an AR window, a stereo camera for 3D measurement, and a workstation for information processing. Evaluation of the 3D image rendering performance at 2560×1600 elemental image resolution shows rendering speeds of 50-60 frames per second (fps) for surface models and 5-8 fps for large medical volumes. Evaluation of the undistorted 3D image after calibration yields sub-millimeter geometric accuracy. A phantom experiment simulating oral and maxillofacial surgery was also performed to evaluate the proposed AR overlay device in terms of image registration accuracy, 3D image overlay accuracy, and the visual effects of the overlay. The experimental results show satisfactory image registration and image overlay accuracy, and confirm the system usability. PMID:25465067

  15. Imaging system for creating 3D block-face cryo-images of whole mice

    NASA Astrophysics Data System (ADS)

    Roy, Debashish; Breen, Michael; Salvado, Olivier; Heinzel, Meredith; McKinley, Eliot; Wilson, David

    2006-03-01

    We developed a cryomicrotome/imaging system that provides high resolution, high sensitivity block-face images of whole mice or excised organs, and applied it to a variety of biological applications. With this cryo-imaging system, we sectioned cryo-preserved tissues at 2-40 μm thickness and acquired high resolution brightfield and fluorescence images with microscopic in-plane resolution (as good as 1.2 μm). Brightfield images of normal and pathological anatomy show exquisite detail, especially in the abdominal cavity. Multi-planar reformatting and 3D renderings allow one to interrogate 3D structures. In this report, we present brightfield images of mouse anatomy, as well as 3D renderings of organs. For the BPK mouse model of polycystic kidney disease, we compared brightfield cryo-images and kidney volumes to MRI. The color images provided greater contrast and resolution of cysts as compared to in vivo MRI. We note that color cryo-images are closer to what a researcher sees in dissection, making it easier to interpret the image data. The combination of field of view, depth of field, ultra high resolution, and color/fluorescence contrast enables cryo-image volumes to provide details that cannot be found through in vivo imaging or other ex vivo optical imaging approaches. We believe that this novel imaging system will have applications that include identification of mouse phenotypes; characterization of diseases like blood vessel disease, kidney disease, and cancer; assessment of drug and gene therapy delivery and efficacy; and validation of other imaging modalities.

  16. Comparison of retinal image quality with spherical and customized aspheric intraocular lenses

    E-print Network

    Dainty, Chris

    Customized aspheric intraocular lenses are hypothesized to improve retinal image quality despite the misalignments that accompany cataract surgery; this study tests that hypothesis.

  17. Aberrations and retinal image quality of the normal human eye

    E-print Network

    Junzhong Liang and David R. Williams. Perhaps the most important optical instrument is the human eye, yet its optical performance has not been completely characterized. We measure the irregular as well as the classical aberrations of the eye, providing a more complete description of its optical quality.

  18. Adaptive optics with pupil tracking for high resolution retinal imaging

    E-print Network

    Dainty, Chris

    Pupil tracking is combined with adaptive optics for high-resolution retinal imaging because eye movements constitute an important part of the ocular dynamics.

  19. Deconvolution of adaptive optics retinal images

    E-print Network

    Julian C. Christou, Center for Adaptive Optics. Residual wave-front aberrations are removed by using deconvolution, which qualitatively improves the adaptive optics retinal images. Deconvolution has also been proposed as an alternative technique to adaptive wave-front correction.

  20. Implementation of wireless 3D stereo image capture system and 3D exaggeration algorithm for the region of interest

    NASA Astrophysics Data System (ADS)

    Ham, Woonchul; Song, Chulgyu; Lee, Kangsan; Badarch, Luubaatar

    2015-05-01

    In this paper, we introduce a mobile embedded system implemented for capturing stereo images based on two CMOS camera modules. We use WinCE as the operating system and capture the stereo images using a device driver for the CMOS camera interface and DirectDraw API functions. We also comment on the GPU hardware and CUDA programming used to implement a 3D exaggeration algorithm for the ROI (region of interest), which adjusts and synthesizes the disparity values of the ROI in real time. We discuss the aperture pattern for deblurring of the CMOS camera module based on the Kirchhoff diffraction formula, and clarify why a sharper and clearer image can be obtained by blocking some portion of the aperture or by geometric sampling. The synthesized stereo image is monitored in real time on a shutter-glass-type three-dimensional LCD monitor, and the disparity values of each segment are analyzed to demonstrate the validity of the ROI emphasis effect.
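A minimal sketch of the ROI "exaggeration" idea, amplifying disparity deviations inside a region of interest so it appears to pop out. The gain, the ROI layout, and the mean-preserving amplification rule are assumptions for illustration, not the authors' algorithm:

```python
import numpy as np

def exaggerate_roi(disparity, roi, gain=1.5):
    """roi = (row0, row1, col0, col1); returns a new disparity map."""
    out = disparity.copy()
    r0, r1, c0, c1 = roi
    region = out[r0:r1, c0:c1]
    # Amplify deviation from the region's mean disparity so the ROI
    # "pops out" without shifting its overall depth.
    out[r0:r1, c0:c1] = region.mean() + gain * (region - region.mean())
    return out

d = np.full((4, 4), 10.0)        # toy disparity map: flat background
d[1:3, 1:3] = 12.0               # a nearer object in the middle
d2 = exaggerate_roi(d, (0, 4, 0, 4), gain=2.0)
print(d2[1, 1], d2[0, 0])        # object pushed nearer, background farther
```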

  1. GDx-MM: An imaging Mueller matrix retinal polarimeter

    NASA Astrophysics Data System (ADS)

    Twietmeyer, Karen Marie

    2007-12-01

    Retinal diseases are a major cause of blindness worldwide. Although widely studied, disease mechanisms are not completely understood, and diagnostic tests may not detect disease early enough for timely intervention. The goal of this research is to contribute to more sensitive diagnostic tests that might use the interaction of polarized light with retinal tissue to detect subtle changes in the microstructure. This dissertation describes the GDx-MM, a scanning laser polarimeter which measures a complete 16-element Mueller matrix image of the retina. This full polarization signature may provide new comparative information on the structure of healthy and diseased retinal tissue by highlighting depolarizing structures as well as structures with varying magnitudes and orientations of retardance and diattenuation. The three major components of this dissertation are: (1) development of methods for polarimeter optimization and error analysis; (2) design, optimization, assembly, calibration, and validation of the GDx-MM polarimeter; and (3) analysis of data for several human subjects. Development involved modifications to a Laser Diagnostics GDx, a commercially available scanning laser ophthalmoscope with incomplete polarization capability. Modifications included installation of polarization components, development of a data acquisition system, and implementation of algorithms to convert raw data into polarization parameter images. Optimization involved visualization of polarimeter state trajectories on the Poincaré sphere and a condition number analysis of the instrument matrix. Retinal images are collected non-invasively at 20 μm resolution over a 15° visual field in four seconds. Validation of the polarimeter demonstrates a polarimetric measurement accuracy of approximately ±5%. Retinal polarization data were collected on normal human subjects at the University of Arizona and at the Indiana University School of Optometry. The calculated polarization parameter images reveal properties of the tissue microstructure. For example, retardance images indicate nerve fiber layer thickness and orientation, and depolarization images (uniform for these normal subjects) are predicted to indicate regions of disease-related tissue disruption. This research demonstrates a method for obtaining a full polarization signature of the retina in one measurement using a polarimetrically optimized instrument, and provides a step toward the use of complete retinal imaging polarimetry in the diagnosis and monitoring of retinal disease.

  2. Retinal functional imager (RFI): non-invasive functional imaging of the retina.

    PubMed

    Ganekal, S

    2013-01-01

    Retinal functional imager (RFI) is a unique non-invasive functional imaging system with novel capabilities for visualizing the retina. The objective of this review was to show the utility of non-invasive functional imaging in various disorders. An electronic literature search was carried out using the websites www.pubmed.gov and www.google.com. The search terms were "retinal functional imager" and "non-invasive retinal imaging", used in combination. Articles published in or translated into English were studied. The RFI directly measures hemodynamic parameters such as retinal blood-flow velocity, oximetric state, and metabolic responses to photic activation, and generates capillary perfusion maps (CPM) that provide retinal vasculature detail similar to fluorescein angiography. All of these parameters stand in a direct relationship to the function, and therefore the health, of the retina, and are known to be degraded in the course of retinal diseases. Detecting changes in retinal function aids early diagnosis and treatment, as functional changes often precede structural changes in many retinal disorders. PMID:24172564

  3. Phase aided 3D imaging and modeling: dedicated systems and case studies

    NASA Astrophysics Data System (ADS)

    Yin, Yongkai; He, Dong; Liu, Zeyi; Liu, Xiaoli; Peng, Xiang

    2014-05-01

    Dedicated prototype systems for 3D imaging and modeling (3DIM) are presented. The 3D imaging systems are based on the principle of phase-aided active stereo, developed in our laboratory over the past few years. The reported 3D imaging prototypes range from a single 3D sensor to an optical measurement network composed of multiple node 3D sensors. To enable these 3D imaging systems, we briefly discuss the corresponding calibration techniques for both a single sensor and a multi-sensor optical measurement network, which allow good performance of the 3DIM prototype systems in terms of measurement accuracy and repeatability. Furthermore, two case studies, the generation of a high-quality color model of movable cultural heritage and a photo booth based on body scanning, are presented to demonstrate our approach.

  4. Evidence of outer retinal changes in glaucoma patients as revealed by ultrahigh-resolution in vivo retinal imaging

    PubMed Central

    Choi, Stacey S; Zawadzki, Robert J; Lim, Michele C; Brandt, James D; Keltner, John L; Doble, Nathan; Werner, John S

    2010-01-01

    Aims It is well established that glaucoma results in a thinning of the inner retina. To investigate whether the outer retina is also involved, ultrahigh-resolution retinal imaging techniques were utilised. Methods Eyes from 10 glaucoma patients (25–78 years old), were imaged using three research-grade instruments: (1) ultrahigh-resolution Fourier-domain optical coherence tomography (UHR-FD-OCT), (2) adaptive optics (AO) UHR-FD-OCT and (3) AO-flood illuminated fundus camera (AO-FC). UHR-FD-OCT and AO-UHR-FD-OCT B-scans were examined for any abnormalities in the retinal layers. On some patients, cone density measurements were made from the AO-FC en face images. Correlations between retinal structure and visual sensitivity were measured by Humphrey visual-field (VF) testing made at the corresponding retinal locations. Results All three in vivo imaging modalities revealed evidence of outer retinal changes along with the expected thinning of the inner retina in glaucomatous eyes with VF loss. AO-UHR-FD-OCT images identified the exact location of structural changes within the cone photoreceptor layer with the AO-FC en face images showing dark areas in the cone mosaic at the same retinal locations with reduced visual sensitivity. Conclusion Losses in cone density along with expected inner retinal changes were demonstrated in well-characterised glaucoma patients with VF loss. PMID:20956277

  5. Remote spectral imaging with simultaneous extraction of 3D topography for historical wall paintings

    NASA Astrophysics Data System (ADS)

    Liang, Haida; Lucian, Andrei; Lange, Rebecca; Cheung, Chi Shing; Su, Bomin

    2014-09-01

    PRISMS (Portable Remote Imaging System for Multispectral Scanning) is designed for in situ, simultaneous high resolution spectral and 3D topographic imaging of wall paintings and other large surfaces. In particular, it can image at transverse resolutions of tens of microns remotely from distances of tens of metres, making high resolution imaging possible from a fixed position on the ground for areas at heights that are difficult to access. The spectral imaging system is fully automated, giving 3D topographic mapping at millimetre accuracy as a by-product of the image focusing process. PRISMS is the first imaging device capable of both 3D mapping and spectral imaging simultaneously without additional distance measuring devices. Examples from applications of PRISMS to wall paintings at a UNESCO site in the Gobi desert are presented to demonstrate the potential of the instrument for large scale 3D spectral imaging, revealing faded writing and enabling material identification.
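
    PRISMS recovers topography as a by-product of focusing: the focus setting that maximizes image sharpness at a point also encodes its distance. The toy depth-from-focus sketch below illustrates only that principle; the instrument's actual autofocus metric is not described here, so the Laplacian-energy sharpness measure is an assumption.

```python
import numpy as np

def depth_from_focus(stack, focus_positions):
    """Per-pixel depth as the focus position maximizing local sharpness.
    `stack`: (n_focus, H, W) images acquired at the distances listed in
    `focus_positions`.  Sharpness is the squared response of a discrete
    Laplacian (a common focus metric; an assumption, not PRISMS's metric)."""
    sharp = []
    for img in stack:
        lap = (np.roll(img, 1, 0) + np.roll(img, -1, 0)
               + np.roll(img, 1, 1) + np.roll(img, -1, 1) - 4 * img)
        sharp.append(lap ** 2)
    best = np.argmax(np.array(sharp), axis=0)   # sharpest focus index per pixel
    return np.asarray(focus_positions)[best]
```

    In practice the discrete maximum would be refined by interpolating the sharpness curve, which is how millimetre accuracy becomes plausible from coarse focus steps.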

  6. A novel method for detection of preferred retinal locus (PRL) through simple retinal image processing using MATLAB

    NASA Astrophysics Data System (ADS)

    Kalikivayi, V.; Pal, Sudip; Ganesan, A. R.

    2013-09-01

    A simple new technique for detection of the `Preferred Retinal Locus' (PRL) in the human eye is proposed in this paper. Simple MATLAB algorithms estimating RGB pixel intensity values of retinal images were used. The technique demonstrated the non-existence of `S' cones in the Fovea Centralis and also suggests that rods are involved in blue color perception. Retinal images of central vision loss and of the normal retina were taken for image processing. Blue minimum, Red maximum and Red+Green maximum were the three methods used in detecting the PRL. Comparative analyses of these methods against patients' age and visual acuity were also performed.
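
    The blue-minimum idea can be sketched in a few lines: the fovea appears darkest in the blue channel, so a candidate locus is the centre of the darkest smoothed region. The paper's MATLAB code and thresholds are not given, so the NumPy version below, including its smoothing window, is only an illustration of the principle.

```python
import numpy as np

def detect_prl_blue_minimum(rgb, window=15):
    """Candidate PRL as the centre of the darkest region of the blue
    channel.  `rgb`: (H, W, 3) uint8 image; `window`: box-blur size used
    to suppress pixel noise before taking the minimum."""
    blue = rgb[:, :, 2].astype(float)
    # Box blur via a summed-area table (cumulative sums in both axes).
    pad = window // 2
    padded = np.pad(blue, pad, mode="edge")
    c = padded.cumsum(0).cumsum(1)
    c = np.pad(c, ((1, 0), (1, 0)))          # zero border for the differences
    h, w = blue.shape
    smoothed = (c[window:window + h, window:window + w]
                - c[:h, window:window + w]
                - c[window:window + h, :w]
                + c[:h, :w]) / window ** 2
    row, col = np.unravel_index(np.argmin(smoothed), smoothed.shape)
    return row, col
```

    The red-maximum and red+green-maximum variants would differ only in the channel combination and in taking an argmax instead of an argmin.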

  7. Quantitative 3-D imaging topogrammetry for telemedicine applications

    NASA Technical Reports Server (NTRS)

    Altschuler, Bruce R.

    1994-01-01

    The technology to reliably transmit high-resolution visual imagery over short to medium distances in real time has led to serious consideration of the use of telemedicine, telepresence, and telerobotics in the delivery of health care. These concepts may involve, and evolve toward: consultation from remote expert teaching centers; diagnosis; triage; real-time remote advice to the surgeon; and real-time remote surgical instrument manipulation (telerobotics with virtual reality). Further extrapolation leads to teledesign and telereplication of spare surgical parts through quantitative teleimaging of 3-D surfaces tied to CAD/CAM devices and an artificially intelligent archival database of 'normal' shapes. The ability to generate 'topograms' or 3-D surface numerical tables of coordinate values capable of creating computer-generated virtual holographic-like displays, machine part replication, and statistical diagnostic shape assessment is critical to the progression of telemedicine. Any virtual reality simulation will remain in the 'video-game' realm until realistic dimensional and spatial relational inputs from real measurements in vivo during surgeries are added to an ever-growing statistical data archive. The challenges of managing and interpreting this 3-D database, which would include radiographic and surface quantitative data, are considerable. As technology drives toward dynamic and continuous 3-D surface measurements, presenting millions of X, Y, Z data points per second of flexing, stretching, moving human organs, the knowledge base and interpretive capabilities of 'brilliant robots' working as a surgeon's tireless assistants become imaginable. The brilliant robot would 'see' what the surgeon sees--and more, for the robot could quantify its 3-D sensing, would 'see' in a wider spectral range than humans, and could zoom its 'eyes' from the macro world to long-distance microscopy. 
Unerring robot hands could rapidly perform machine-aided suturing with precision micro-sewing machines, splice neural connections with laser welds, micro-bore through constricted vessels, and computer combine ultrasound, microradiography, and 3-D mini-borescopes to quickly assess and trace vascular problems in situ. The spatial relationships between organs, robotic arms, and end-effector diagnostic, manipulative, and surgical instruments would be constantly monitored by the robot 'brain' using inputs from its multiple 3-D quantitative 'eyes' remote sensing, as well as by contact and proximity force measuring devices. Methods to create accurate and quantitative 3-D topograms at continuous video data rates are described.

  8. Update on wide- and ultra-widefield retinal imaging

    PubMed Central

    Shoughy, Samir S; Arevalo, J Fernando; Kozak, Igor

    2015-01-01

    The peripheral retina is the site of pathology in many ocular diseases and ultra-widefield (UWF) imaging is one of the new technologies available to ophthalmologists to manage some of these diseases. Currently, there are several imaging systems used in practice for the purpose of diagnostic, monitoring disease progression or response to therapy, and telemedicine. These include modalities for both adults and pediatric patients. The current systems are capable of producing wide- and UWF color fundus photographs, fluorescein and indocyanine green angiograms, and autofluorescence images. Using this technology, important clinical observations have been made in diseases such as diabetic retinopathy, uveitides, retinal vascular occlusions and tumors, intraocular tumors, retinopathy of prematurity, and age-related macular degeneration. Widefield imaging offers excellent postoperative documentation of retinal detachment surgery. New applications will soon be available to integrate this technology into large volume routine clinical practice. PMID:26458474

  9. Hyperspectral retinal imaging with a spectrally tunable light source

    NASA Astrophysics Data System (ADS)

    Francis, Robert P.; Zuzak, Karel J.; Ufret-Vincenty, Rafael

    2011-03-01

    Hyperspectral retinal imaging can measure oxygenation and identify areas of ischemia in human patients, but the devices used by current researchers are inflexible in spatial and spectral resolution. We have developed a flexible research prototype consisting of a DLP®-based spectrally tunable light source coupled to a fundus camera to quickly explore the effects of spatial resolution, spectral resolution, and spectral range on hyperspectral imaging of the retina. The goal of this prototype is to (1) identify spectral and spatial regions of interest for early diagnosis of diseases such as glaucoma, age-related macular degeneration (AMD), and diabetic retinopathy (DR); and (2) define required specifications for commercial products. In this paper, we describe the challenges and advantages of using a spectrally tunable light source for hyperspectral retinal imaging, present clinical results of initial imaging sessions, and describe how this research can be leveraged into specifying a commercial product.

  10. 3D image display of fetal ultrasonic images by thin shell

    NASA Astrophysics Data System (ADS)

    Wang, Shyh-Roei; Sun, Yung-Nien; Chang, Fong-Ming; Jiang, Ching-Fen

    1999-05-01

    Due to its convenience and non-invasiveness, ultrasound has become an essential tool for the diagnosis of fetal abnormalities during pregnancy in obstetrics. However, the 'noisy and blurry' nature of ultrasound data makes rendering the data a challenge in comparison with MRI and CT images. Besides the speckle noise, unwanted objects often occlude the target to be observed. In this paper, we propose a new system that can effectively suppress the speckle noise, extract the target object, and clearly render the 3D fetal image in almost real time from 3D ultrasound image data. The system is based on a deformable model that detects contours of the object according to the local image features of ultrasound. In addition, to accelerate rendering, a thin shell is defined that separates the observed organ from unrelated structures based on the detected contours. In this way, we can support quick 3D display of ultrasound, and efficient visualization of 3D fetal ultrasound thus becomes possible.

  11. 3D image reconstruction algorithms for cryo-electron-microscopy images of virus particles

    NASA Astrophysics Data System (ADS)

    Doerschuk, Peter C.; Johnson, John E.

    2000-11-01

    A statistical model for the object and the complete image formation process in cryo electron microscopy of viruses is presented. Using this model, maximum likelihood reconstructions of the 3D structure of viruses are computed using the expectation maximization algorithm and an example based on Cowpea mosaic virus is provided.
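
    The expectation-maximization reconstruction mentioned above has a compact multiplicative form for a Poisson imaging model: the current estimate is repeatedly scaled by the back-projected ratio of measured to predicted data. The generic ML-EM sketch below uses a small dense matrix as a stand-in for the imaging operator; a real cryo-EM pipeline also estimates particle orientations and models the contrast transfer function, none of which is shown here.

```python
import numpy as np

def mlem(A, y, n_iter=100, eps=1e-12):
    """ML-EM for a Poisson linear model y ~ Poisson(A @ x).
    A: (n_measurements, n_voxels) system matrix; y: observed counts.
    The multiplicative update preserves non-negativity and increases the
    Poisson likelihood at every iteration."""
    x = np.ones(A.shape[1])
    sens = A.sum(axis=0)                      # sensitivity: back-projection of ones
    for _ in range(n_iter):
        ratio = y / np.maximum(A @ x, eps)    # measured / predicted
        x *= (A.T @ ratio) / np.maximum(sens, eps)
    return x
```

    With noiseless, consistent data the iteration converges to the exact solution; with real counts it converges to the maximum-likelihood estimate and is typically stopped early or regularized.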

  12. Integrated Endoscope for Real-Time 3D Ultrasound Imaging and Hyperthermia: Feasibility Study

    E-print Network

    Smith, Stephen

    Integrated Endoscope for Real-Time 3D Ultrasound Imaging and Hyperthermia: Feasibility Study ERIC C of treatments for prostate, cervical and esophageal cancer. The ability to combine ultrasound hyperthermia and 3 to facilitate drug delivery therapy. Key words: 3D; endoscope; hyperthermia; imaging; ultrasound. I

  13. Curve skeletonization of surface-like objects in 3D images guided by voxel classification

    E-print Network

    Nyström, Ingela

    Curve skeletonization of surface-like objects in 3D images guided by voxel classification S (Naples), Italy Abstract Skeletonization is a way to reduce dimensionality of digital objects. Here, we present an algorithm that computes the curve skeleton of a surface-like object in a 3D image, i

  14. Dual-Mode Intracranial Catheter Integrating 3D Ultrasound Imaging and Hyperthermia for Neuro-oncology

    E-print Network

    Smith, Stephen

    Dual-Mode Intracranial Catheter Integrating 3D Ultrasound Imaging and Hyperthermia for Neuro (RT3D) imaging and ultrasound hyperthermia, for application in the visualiza- tion and treatment and both probes were used in an in vivo canine brain model to im- age anatomical structures and color

  15. 3D RECONSTRUCTION OF PLANT ROOTS FROM MRI IMAGES Hannes Schulz1

    E-print Network

    Behnke, Sven

    3D RECONSTRUCTION OF PLANT ROOTS FROM MRI IMAGES Hannes Schulz1 , Johannes A. Postma2 , Dagmar van.scharr}@fz-juelich.de Keywords: root modeling, plant phenotyping, roots in soil, maize, barley Abstract: We present a novel method for deriving a structural model of a plant root system from 3D Magnetic Resonance Imaging (MRI

  16. Estimation of 3D Left Ventricular Deformation from Medical Images Using Biomechanical Models

    E-print Network

    Duncan, James S.

    Estimation of 3D Left Ventricular Deformation from Medical Images Using Biomechanical Models;Abstract Estimation of 3D Left Ventricular Deformation from Medical Images Using Biomechanical Models a general framework for estimating soft tissue deformation from sequences of three-dimensional medical

  17. LETHA: Learning from High Quality Inputs for 3D Pose Estimation in Low Quality Images

    E-print Network

    Moreno-Noguer, Francesc

    LETHA: Learning from High Quality Inputs for 3D Pose Estimation in Low Quality Images Adrian Penate quality training data, and combining them with discriminative machine learning to deal with low- quality image with the 3D model [13]. Machine learning approaches on the other hand, annotate training imagery

  18. Comstat2 -a modern 3D image analysis environment for biofilms

    E-print Network

    Comstat2 - a modern 3D image analysis environment for biofilms Martin Vorregaard s053247 Kongens for the analysis and treatment of biofilm images in 3D. Various algorithms for gathering knowledge on biofilm compatibility with an earlier version of the program. An, in this area, new method for the evaluation of biofilm

  19. Speckle reducing anisotropic diffusion for 3D ultrasound images Qingling Suna

    E-print Network

    Acton, Scott

    Speckle reducing anisotropic diffusion for 3D ultrasound images Qingling Suna , John A. Hossackb Abstract This paper presents an approach for reducing speckle in three dimensional (3D) ultrasound images. A 2D speckle reduction technique, speckle reducing anisotropic diffusion (SRAD), is explored

  20. A MOVABLE TOMOGRAPHIC DISPLAY FOR 3D MEDICAL IMAGES

    E-print Network

    Stetten, George

    amounts to an "invisible patient" will preserve the perception of 3D anatomic relationships in a way. With grab-a-slice, the user experiences the il- lusion of slicing through an invisible patient. The touch boon to physicians seeking to learn non-invasively about a patient's condition. In clinical practice, 3

  1. Statistical skull models from 3D X-ray images

    E-print Network

    Berar, M; Bailly, G; Payan, Y; Berar, Maxime; Desvignes, Michel; Payan, Yohan

    2006-01-01

    We present 2 statistical models of the skull and mandible built upon an elastic registration method for 3D meshes. The aim of this work is to relate degrees of freedom of skull anatomy, as static relations are of main interest for anthropology and legal medicine. Statistical models can effectively provide reconstructions together with statistical precision. In our applications, patient-specific meshes of the skull and the mandible are high-density meshes, extracted from 3D CT scans. All our patient-specific meshes are registered in a subject-shared reference system using our 3D-to-3D elastic matching algorithm. Registration is based upon the minimization of a distance between the high-density mesh and a shared low-density mesh, defined on the vertices, in a multi-resolution approach. A Principal Component Analysis is performed on the normalised registered data to build a statistical linear model of the skull and mandible shape variation. The accuracy of the reconstruction is under the millimetre in the shape...
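
    The statistical linear model described above amounts to PCA on the registered, concatenated vertex coordinates: a new skull instance is the mean mesh plus a weighted sum of the leading variation modes. A minimal sketch of that step follows; the array shapes and the 95% variance cut-off are illustrative choices, not the authors' parameters.

```python
import numpy as np

def build_shape_model(meshes, var_keep=0.95):
    """Linear statistical shape model from registered meshes.
    `meshes`: (n_subjects, n_vertices * 3) array of corresponding vertex
    coordinates.  Returns the mean shape, the principal modes explaining
    `var_keep` of the total variance, and their variances."""
    mean = meshes.mean(axis=0)
    X = meshes - mean
    # SVD of the centred data matrix yields the principal components.
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    var = s ** 2 / (len(meshes) - 1)
    k = int(np.searchsorted(np.cumsum(var) / var.sum(), var_keep)) + 1
    return mean, Vt[:k], var[:k]

def reconstruct(mean, modes, coeffs):
    """New shape instance = mean + sum_i b_i * mode_i."""
    return mean + coeffs @ modes
```

    Constraining each coefficient to a few standard deviations of its mode keeps generated skulls within the statistically plausible range.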

  2. 3-D Facial Imaging for Identification Anselmo Lastra

    E-print Network

    McShea, Daniel W.

    if above flicker frequency - looks like white light) 2. Infrared light source and infrared sensitive Views · Notice that they're different, like from your 2 eyes Left Camera Right Camera Left Camera making 3D model of object with a solid color Structured Light · The idea is to project texture

  3. INTERACTION WITH 3D IMAGE DATA THROUGH VOLUME RENDERED VIEWS.

    E-print Network

    Pelizzari, Charles A.

    of small or intricate structures such as lymph nodes or vessels for another. Geometric rendering produces effectively real time interaction with such models, in that they may be moved, shaded, lighted, picked, disarticulated, etc. Volumetric rendering, by contrast, produces 3D views of anatomy directly

  4. Spatio-Temporal Data Fusion for 3D+T Image Reconstruction in Cerebral Angiography

    E-print Network

    Copeland, Andrew D.

    This paper provides a framework for generating high resolution time sequences of 3D images that show the dynamics of cerebral blood flow. These sequences have the potential to allow image feedback during medical procedures ...

  5. Effective 3D Object Detection and Regression Using Probabilistic Segmentation Features in CT Images

    E-print Network

    Chandy, John A.

    , but the proposed method can be extended to other medical imaging applications (e.g., lung nodule de- tection-looping" strat- egy and apply to lung nodule detection in 3D CT images. In our work, however, segmentation

  6. 3-D ultrafast Doppler imaging applied to the noninvasive mapping of blood vessels in vivo.

    PubMed

    Provost, Jean; Papadacci, Clement; Demene, Charlie; Gennisson, Jean-Luc; Tanter, Mickael; Pernot, Mathieu

    2015-08-01

    Ultrafast Doppler imaging was introduced as a technique to quantify blood flow in an entire 2-D field of view, expanding the field of application of ultrasound imaging to the highly sensitive anatomical and functional mapping of blood vessels. We have recently developed 3-D ultrafast ultrasound imaging, a technique that can produce thousands of ultrasound volumes per second, based on a 3-D plane and diverging wave emissions, and demonstrated its clinical feasibility in human subjects in vivo. In this study, we show that noninvasive 3-D ultrafast power Doppler, pulsed Doppler, and color Doppler imaging can be used to perform imaging of blood vessels in humans when using coherent compounding of 3-D tilted plane waves. A customized, programmable, 1024-channel ultrasound system was designed to perform 3-D ultrafast imaging. Using a 32 × 32, 3-MHz matrix phased array (Vermon, Tours, France), volumes were beamformed by coherently compounding successive tilted plane wave emissions. Doppler processing was then applied in a voxel-wise fashion. The proof of principle of 3-D ultrafast power Doppler imaging was first performed by imaging Tygon tubes of various diameters, and in vivo feasibility was demonstrated by imaging small vessels in the human thyroid. Simultaneous 3-D color and pulsed Doppler imaging using compounded emissions were also applied in the carotid artery and the jugular vein in one healthy volunteer. PMID:26276956
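
    Power Doppler on an ensemble of compounded volumes reduces to removing the quasi-static tissue signal along slow time and integrating the power of what remains. The sketch below uses an SVD clutter filter, one common choice in ultrafast Doppler processing; the paper does not specify its wall filter, so treat the filtering step as an assumption.

```python
import numpy as np

def power_doppler(iq, n_clutter=2):
    """Power Doppler from an ensemble of compounded frames.
    `iq`: complex array of shape (n_frames, ...) of beamformed IQ data,
    one entry per compounded volume.  Slow-moving tissue is suppressed by
    zeroing the `n_clutter` largest singular components of the Casorati
    matrix (an SVD clutter filter); power is then the mean squared
    magnitude of the residual blood signal."""
    n = iq.shape[0]
    casorati = iq.reshape(n, -1)                 # frames x voxels
    U, s, Vt = np.linalg.svd(casorati, full_matrices=False)
    s[:n_clutter] = 0.0                          # remove the tissue subspace
    blood = (U * s) @ Vt
    power = (np.abs(blood) ** 2).mean(axis=0)
    return power.reshape(iq.shape[1:])
```

    Color and pulsed Doppler would instead estimate phase shifts between successive frames of the same filtered ensemble.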

  7. Retinal Image Quality during Accommodation in Adult Myopic Eyes

    PubMed Central

    Sreenivasan, Vidhyapriya; Aslakson, Emily; Kornaus, Andrew; Thibos, Larry N.

    2014-01-01

    Purpose Reduced retinal image contrast produced by accommodative lag is implicated in myopia development. Here, we measure accommodative error and retinal image quality from wavefront aberrations in myopes and emmetropes as they perform visually demanding and naturalistic tasks. Methods Wavefront aberrations were measured in 10 emmetropic and 11 myopic adults at three distances (100, 40, and 20 cm) while performing four tasks (monocular acuity, binocular acuity, reading, and movie watching). For the acuity tasks, measurements of wavefront error were obtained near the end point of the acuity experiment. Refractive state was defined as the target vergence that optimizes image quality using a visual contrast metric (VSMTF) computed from wavefront errors. Results Accommodation was most accurate (and image quality best) during binocular acuity, whereas accommodation was least accurate (and image quality worst) while watching a movie. When viewing distance was reduced, accommodative lag increased and image quality (as quantified by VSMTF) declined for all tasks in both refractive groups. For any given viewing distance, computed image quality was consistently worse in myopes than in emmetropes, more so for the acuity tasks than for reading/movie watching. Although myopes showed greater lags and worse image quality in the acuity experiments, acuity was not measurably worse in myopes than in emmetropes. Conclusions Retinal image quality present when performing a visually demanding task (e.g., during clinical examination) is likely to be greater than for less demanding tasks (e.g., reading/movie watching). Although reductions in image quality lead to reductions in acuity, the image quality metric VSMTF is not necessarily an absolute indicator of visual performance because myopes achieved slightly better acuity than emmetropes despite showing greater lags and worse image quality. 
Reduced visual contrast in myopes compared to emmetropes is consistent with theories of myopia progression that point to image contrast as an inhibitory signal for ocular growth. PMID:24152885

  8. Computer-assisted 3D design software for teaching neuro-ophthalmology of the oculomotor system and training new retinal surgery techniques

    NASA Astrophysics Data System (ADS)

    Glittenberg, Carl; Binder, Susanne

    2004-07-01

    Purpose: To create a more effective method of demonstrating complex subject matter in ophthalmology with the use of high-end 3-D computer-aided animation and interactive multimedia technologies. Specifically, to explore the possibilities of demonstrating the complex nature of the neuro-ophthalmological basics of the human oculomotor system in a clear, unambiguous way, and to demonstrate new forms of retinal surgery in a manner that makes the procedures easier for other retinal surgeons to understand. Methods and Materials: Using Reflektions 4.3, Monzoom Pro 4.5, Cinema 4D XL 5.03, Cinema 4D XL 8 Studio Bundle, Mediator 4.0, Mediator Pro 5.03, Fujitsu-Siemens Pentium III and IV, a Gericom Webgine laptop, M.G.I. Video Wave 1.0 and 5, Micrografix Picture Publisher 6.0 and 8, Amorphium 1.0, and Blobs for Windows, we created 3-D animations showing the origin, insertion, course, main direction of pull, and auxiliary direction of pull of the six extra-ocular eye muscles. We created 3-D animations that (a) show the intra-cranial path of the relevant oculomotor cranial nerves and which muscles are supplied by them, (b) show which muscles are active in each of the ten lines of sight, (c) demonstrate the various malfunctions of oculomotor systems, and (d) show the surgical techniques and the challenges in radial optic neurotomies and subretinal surgeries. Most of the 3-D animations were integrated into interactive multimedia teaching programs. Their effectiveness was compared to conventional teaching methods in a comparative study performed at the University of Vienna. We also performed a survey to examine the response of students taught with the interactive programs. We are currently placing most of the animations in an interactive web site in order to make them freely available to everyone who is interested. 
Results: Although learning how to use complex 3-D computer animation and multimedia authoring software can be very time consuming and frustrating, we found that once the programs are mastered they can be used to create 3-D animations that drastically improve the quality of medical demonstrations. The comparative study showed a significant advantage of using these technologies over conventional teaching methods. The feedback from medical students, doctors, and retinal surgeons was overwhelmingly positive. A strong interest was expressed to have more subjects and techniques demonstrated in this fashion. Conclusion: 3-D computer technologies should be used in the demonstration of all complex medical subjects. More effort and resources need to be given to the development of these technologies that can improve the understanding of medicine for students, doctors, and patients alike.

  9. Pragmatic fully 3D image reconstruction for the MiCES mouse imaging PET scanner

    NASA Astrophysics Data System (ADS)

    Lee, Kisung; Kinahan, Paul E.; Fessler, Jeffrey A.; Miyaoka, Robert S.; Janes, Marie; Lewellen, Tom K.

    2004-10-01

    We present a pragmatic approach to image reconstruction for data from the micro crystal elements system (MiCES) fully 3D mouse imaging positron emission tomography (PET) scanner under construction at the University of Washington. Our approach is modelled on fully 3D image reconstruction used in clinical PET scanners, which is based on Fourier rebinning (FORE) followed by 2D iterative image reconstruction using ordered-subsets expectation-maximization (OSEM). The use of iterative methods allows modelling of physical effects (e.g., statistical noise, detector blurring, attenuation, etc), while FORE accelerates the reconstruction process by reducing the fully 3D data to a stacked set of independent 2D sinograms. Previous investigations have indicated that non-stationary detector point-spread response effects, which are typically ignored for clinical imaging, significantly impact image quality for the MiCES scanner geometry. To model the effect of non-stationary detector blurring (DB) in the FORE+OSEM(DB) algorithm, we have added a factorized system matrix to the ASPIRE reconstruction library. Initial results indicate that the proposed approach produces an improvement in resolution without an undue increase in noise and without a significant increase in the computational burden. The impact on task performance, however, remains to be evaluated.

  10. Pragmatic fully 3D image reconstruction for the MiCES mouse imaging PET scanner.

    PubMed

    Lee, Kisung; Kinahan, Paul E; Fessler, Jeffrey A; Miyaoka, Robert S; Janes, Marie; Lewellen, Tom K

    2004-10-01

    We present a pragmatic approach to image reconstruction for data from the micro crystal elements system (MiCES) fully 3D mouse imaging positron emission tomography (PET) scanner under construction at the University of Washington. Our approach is modelled on fully 3D image reconstruction used in clinical PET scanners, which is based on Fourier rebinning (FORE) followed by 2D iterative image reconstruction using ordered-subsets expectation-maximization (OSEM). The use of iterative methods allows modelling of physical effects (e.g., statistical noise, detector blurring, attenuation, etc), while FORE accelerates the reconstruction process by reducing the fully 3D data to a stacked set of independent 2D sinograms. Previous investigations have indicated that non-stationary detector point-spread response effects, which are typically ignored for clinical imaging, significantly impact image quality for the MiCES scanner geometry. To model the effect of non-stationary detector blurring (DB) in the FORE+OSEM(DB) algorithm, we have added a factorized system matrix to the ASPIRE reconstruction library. Initial results indicate that the proposed approach produces an improvement in resolution without an undue increase in noise and without a significant increase in the computational burden. The impact on task performance, however, remains to be evaluated. PMID:15552417
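
    The OSEM stage of FORE+OSEM updates the image with one subset of projections at a time, so a single pass over the data applies several multiplicative updates instead of ML-EM's one, giving roughly subset-fold acceleration. A small dense-matrix sketch of the update follows; real implementations use on-the-fly projectors and, as in the paper, a factorized system matrix to model detector blurring, neither of which is shown here.

```python
import numpy as np

def osem(A, y, n_subsets=4, n_iter=10, eps=1e-12):
    """Ordered-subsets EM for y ~ Poisson(A @ x).  Each sub-iteration
    uses one interleaved subset of the measurement rows; one pass over
    the data therefore applies n_subsets multiplicative updates."""
    x = np.ones(A.shape[1])
    subsets = [np.arange(i, len(y), n_subsets) for i in range(n_subsets)]
    for _ in range(n_iter):
        for rows in subsets:
            As, ys = A[rows], y[rows]
            ratio = ys / np.maximum(As @ x, eps)      # measured / predicted
            x *= (As.T @ ratio) / np.maximum(As.sum(axis=0), eps)
    return x
```

    Subsets are interleaved so each one samples the angular range evenly; with too many subsets the iteration cycles rather than converges, which is why clinical settings keep the subset count modest.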

  11. Robust Retrieval of 3D Structures from Image Stacks

    E-print Network

    Introduction The sequences of 2D cross-sectional slices produced by X-ray Computed Tomography and Magnetic produced by the scanner [39], which may vary from set to set, from image to image, and even within an image

  12. Computation of tooth axes of existent and missing teeth from 3D CT images.

    PubMed

    Wang, Yang; Wu, Lin; Guo, Huayan; Qiu, Tiantian; Huang, Yuanliang; Lin, Bin; Wang, Lisheng

    2015-12-01

    Orientations of tooth axes are important quantitative information used in dental diagnosis and surgery planning. However, their computation is a complex problem, and existing methods have their respective limitations. This paper proposes new methods to compute 3D tooth axes from 3D CT images for existent teeth with a single root or multiple roots, and to estimate 3D tooth axes from 3D CT images for missing teeth. The axis of a single-root tooth is determined by segmenting the pulp cavity of the tooth and computing the principal direction of the pulp cavity, and the estimation of the tooth axes of missing teeth is modeled as an interpolation of quaternions along a 3D curve. The proposed methods either avoid the difficult tooth segmentation problem or improve upon the limitations of existing methods. Their effectiveness and practicality are demonstrated by experimental results on different clinical 3D CT images. PMID:25941910
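
    For a single-root tooth, the "principal direction of the pulp cavity" is the dominant eigenvector of the covariance of the segmented voxel coordinates. The sketch below shows just that step, taking the segmentation as given; it is a generic principal-axis computation, not the authors' full pipeline.

```python
import numpy as np

def tooth_axis_from_pulp(voxels):
    """Principal direction of a segmented pulp-cavity point cloud:
    the eigenvector of the coordinate covariance matrix with the
    largest eigenvalue.  `voxels`: (N, 3) array of voxel coordinates."""
    centred = voxels - voxels.mean(axis=0)
    cov = centred.T @ centred / (len(voxels) - 1)
    w, v = np.linalg.eigh(cov)          # eigenvalues in ascending order
    axis = v[:, -1]                     # direction of greatest spread
    return axis / np.linalg.norm(axis)
```

    The sign of the eigenvector is arbitrary, so a consistent crown-to-root orientation would be fixed afterwards, e.g. against the occlusal plane normal.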

  13. 3D Image Reconstructions and the Nyquist-Shannon Theorem

    NASA Astrophysics Data System (ADS)

    Ficker, T.; Martišek, D.

    2015-09-01

    Fracture surfaces are occasionally modelled by Fourier's two-dimensional series that can be converted into digital 3D reliefs mapping the morphology of solid surfaces. Such digital replicas may suffer from various artefacts when processed inconveniently. Spatial aliasing is one of those artefacts that may devalue Fourier's replicas. According to the Nyquist-Shannon sampling theorem the spatial aliasing occurs when Fourier's frequencies exceed the Nyquist critical frequency. In the present paper it is shown that the Nyquist frequency is not the only critical limit determining aliasing artefacts but there are some other frequencies that intensify aliasing phenomena and form an infinite set of points at which numerical results abruptly and dramatically change their values. This unusual type of spatial aliasing is explored and some consequences for 3D computer reconstructions are presented.
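
    Folding about the Nyquist frequency is easy to verify numerically: a sinusoid above f_s/2 produces exactly the same samples as one at its folded, lower frequency. The short demonstration below shows the standard folding rule; it illustrates classical aliasing only, not the additional critical frequencies the paper identifies.

```python
import numpy as np

def aliased_frequency(f_signal, f_sample):
    """Apparent frequency of a sinusoid of frequency f_signal after
    sampling at f_sample: frequencies fold about the Nyquist limit
    f_sample / 2."""
    f = f_signal % f_sample
    return min(f, f_sample - f)

# A 9 Hz sine sampled at 10 Hz yields exactly the samples of a 1 Hz sine
# (with inverted sign), so the two are indistinguishable after sampling.
n = np.arange(20)
indistinguishable = np.allclose(np.sin(2 * np.pi * 9 * n / 10),
                                -np.sin(2 * np.pi * 1 * n / 10))
```
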

  14. ICER-3D: A Progressive Wavelet-Based Compressor for Hyperspectral Images

    NASA Technical Reports Server (NTRS)

    Kiely, A.; Klimesh, M.; Xie, H.; Aranki, N.

    2005-01-01

    ICER-3D is a progressive, wavelet-based compressor for hyperspectral images. ICER-3D is derived from the ICER image compressor. ICER-3D can provide lossless and lossy compression, and incorporates an error-containment scheme to limit the effects of data loss during transmission. The three-dimensional wavelet decomposition structure used by ICER-3D exploits correlations in all three dimensions of hyperspectral data sets, while facilitating elimination of spectral ringing artifacts. Correlation is further exploited by a context modeler that effectively exploits spectral dependencies in the wavelet-transformed hyperspectral data. Performance results illustrating the benefits of these features are presented.

  15. Real-time auto-stereoscopic visualization of 3D medical images

    NASA Astrophysics Data System (ADS)

    Portoni, Luisa; Patak, Alexandre; Noirard, Pierre; Grossetie, Jean-Claude; van Berkel, Cees

    2000-04-01

    The work described here concerns multi-viewer auto-stereoscopic visualization of 3D models of anatomical structures and organs of the human body. High-quality 3D models of more than 1600 anatomical structures have been reconstructed using the Visualization Toolkit, a freely available C++ class library for 3D graphics and visualization. The 2D images used for 3D reconstruction come from the Visible Human Data Set. Auto-stereoscopic 3D image visualization is obtained using a prototype monitor developed at Philips Research Labs, UK. This special multiview 3D-LCD screen has been connected directly to an SGI workstation, where 3D reconstruction and medical imaging applications are executed. Dedicated software has been developed to implement the multiview capability. A number of static or animated views of the same object can be seen simultaneously on the 3D-LCD screen by several observers, who have a real 3D perception of the visualized scene without the use of extra media such as dedicated glasses or head-mounted displays. The developed software applications allow real-time interaction with the visualized 3D models, and didactic animations and movies have been realized as well.

  16. Image processing for a high-resolution optoelectronic retinal prosthesis.

    PubMed

    Asher, Alon; Segal, William A; Baccus, Stephen A; Yaroslavsky, Leonid P; Palanker, Daniel V

    2007-06-01

    In an effort to restore visual perception in retinal diseases such as age-related macular degeneration or retinitis pigmentosa, a design was recently presented for a high-resolution optoelectronic retinal prosthesis having thousands of electrodes. This system requires real-time image processing fast enough to convert a video stream of images into electrical stimulus patterns that can be properly interpreted by the brain. Here, we present image-processing and tracking algorithms for a subretinal implant designed to stimulate the second neuron in the visual pathway, bypassing the degenerated first synaptic layer. For this task, we have developed and implemented: 1) A tracking algorithm that determines the implant's position in each frame. 2) Image cropping outside of the implant boundaries. 3) A geometrical transformation that distorts the image appropriate to the geometry of the fovea. 4) Spatio-temporal image filtering to reproduce the visual processing normally occurring in photoreceptors and at the photoreceptor-bipolar cell synapse. 5) Conversion of the filtered visual information into a pattern of electrical current. Methods to accelerate real-time transformations include the exploitation of data redundancy in the time domain, and the use of precomputed lookup tables that are adjustable to retinal physiology and allow flexible control of stimulation parameters. A software implementation of these algorithms processes natural visual scenes with sufficient speed for real-time operation. This computationally efficient algorithm resembles, in some aspects, biological strategies of efficient coding in the retina and could provide a refresh rate higher than fifty frames per second on our system. PMID:17554819
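
    The lookup-table strategy in step 5 trades memory for speed: however elaborate the intensity-to-current transfer function, applying it costs one array lookup per pixel. A sketch of that idea follows; the power-law transfer, its ceiling, and its exponent are hypothetical illustrations, not values from the paper.

```python
import numpy as np

def build_current_lut(i_max=60.0, gamma=2.0):
    """Precompute a table mapping each 8-bit pixel value to a stimulation
    current.  The power-law transfer with ceiling `i_max` stands in for
    the adjustable, retina-specific mapping; both parameters here are
    illustrative assumptions."""
    levels = np.arange(256) / 255.0
    return i_max * levels ** gamma

def frame_to_currents(frame, lut):
    """Convert a grayscale uint8 frame to per-electrode currents with
    one table lookup per pixel -- O(1) per pixel regardless of how
    costly the transfer function was to evaluate."""
    return lut[frame]
```

    Because the table is rebuilt only when stimulation parameters change, per-frame cost stays constant even as the transfer function is tuned to an individual retina.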

  17. 3D imaging of cone photoreceptors over extended time periods using optical coherence tomography with adaptive optics

    NASA Astrophysics Data System (ADS)

    Kocaoglu, Omer P.; Lee, Sangyeol; Jonnal, Ravi S.; Wang, Qiang; Herde, Ashley E.; Besecker, Jason; Gao, Weihua; Miller, Donald T.

    2011-03-01

    Optical coherence tomography with adaptive optics (AO-OCT) is a highly sensitive, noninvasive method for 3D imaging of the microscopic retina. The purpose of this study is to advance AO-OCT technology by enabling repeated imaging of cone photoreceptors over extended periods of time (days). This sort of longitudinal imaging permits monitoring of 3D cone dynamics in both normal and diseased eyes, in particular the physiological processes of disc renewal and phagocytosis, which are disrupted by retinal diseases such as age-related macular degeneration and retinitis pigmentosa. For this study, the existing AO-OCT system at Indiana underwent several major hardware and software improvements to optimize system performance for 4D cone imaging. First, ultrahigh speed imaging was realized using a Basler Sprint camera. Second, a light source with adjustable spectrum was realized by integration of an Integral laser (Femto Lasers, λc = 800 nm, Δλ = 160 nm) and spectral filters in the source arm. For cone imaging, we used a bandpass filter with λc = 809 nm and Δλ = 81 nm (2.6 µm nominal axial resolution in tissue, and 167 kHz A-line rate using 1,408 px), which reduced the impact of eye motion compared to previous AO-OCT implementations. Third, eye motion artifacts were further reduced by custom ImageJ plugins that registered (axially and laterally) the volume videos. In two subjects, cone photoreceptors were imaged and tracked over a ten day period and their reflectance and outer segment (OS) lengths measured. High-speed imaging and image registration/dewarping were found to reduce eye motion to a fraction of a cone width (1 µm root mean square). The pattern of reflections in the cones was found to change dramatically and occurred on a spatial scale well below the resolution of clinical instruments. 
Normalized reflectance of the connecting cilium (CC) and OS posterior tip (PT) of an exemplary cone was 54±4, 47±4, 48±6, 50±5, 56±1% and 46±4, 53±4, 52±6, 50±5, 44±1% for days 1, 3, 6, 8, and 10, respectively. OS length of the same cone was 28.9, 26.4, 26.4, 30.6, and 28.1 µm for the same days. It is plausible that these changes are an optical correlate of the natural process of OS renewal and shedding.
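    The lateral registration step can be illustrated with a minimal integer-shift estimator based on cross-correlation. This is a toy sketch on a 1D profile; the custom ImageJ plugins mentioned above perform full axial-plus-lateral registration of 3D volume videos.

```python
# Toy shift estimation: find the integer lateral shift of `strip` that
# maximizes its cross-correlation against a reference profile.

def best_shift(reference, strip, max_shift=5):
    n = len(reference)
    best, best_score = 0, float("-inf")
    for s in range(-max_shift, max_shift + 1):
        # correlate reference with the strip displaced by s samples
        score = sum(reference[i] * strip[i - s]
                    for i in range(n) if 0 <= i - s < n)
        if score > best_score:
            best, best_score = s, score
    return best

ref = [0, 0, 1, 5, 9, 5, 1, 0, 0]
shifted = [0, 1, 5, 9, 5, 1, 0, 0, 0]   # same profile moved by one sample
```

    Real registration pipelines refine this to subpixel precision and allow the shift to vary along the strip (dewarping), which is what keeps residual motion below a cone width.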

  18. Imaging of human differentiated 3D neural aggregates using light sheet fluorescence microscopy.

    PubMed

    Gualda, Emilio J; Simão, Daniel; Pinto, Catarina; Alves, Paula M; Brito, Catarina

    2014-01-01

    The development of three-dimensional (3D) cell cultures represents a big step towards a better understanding of cell behavior and disease in a more natural-like environment, providing not only single but multiple cell type interactions in a complex 3D matrix that highly resembles physiological conditions. Light sheet fluorescence microscopy (LSFM) is becoming an excellent tool for fast imaging of such 3D biological structures. We demonstrate the potential of this technique for the imaging of human differentiated 3D neural aggregates in fixed and live samples, namely calcium imaging and cell death processes, showing the power of this imaging modality compared with traditional microscopy. The combination of light sheet microscopy and 3D neural cultures will open the door to more challenging experiments involving drug testing at large scale, as well as to a better understanding of relevant biological processes in a more realistic environment. PMID:25161607

  19. Remote laboratory for phase-aided 3D microscopic imaging and metrology

    NASA Astrophysics Data System (ADS)

    Wang, Meng; Yin, Yongkai; Liu, Zeyi; He, Wenqi; Li, Boqun; Peng, Xiang

    2014-05-01

    In this paper, the establishment of a remote laboratory for phase-aided 3D microscopic imaging and metrology is presented. The proposed remote laboratory consists of three major components: the network-based infrastructure for remote control and data management, the identity verification scheme for user authentication and management, and the local experimental system for phase-aided 3D microscopic imaging and metrology. Virtual network computing (VNC) is introduced to remotely control the 3D microscopic imaging system. Data storage and management are handled through the open source project eSciDoc. Considering the security of the remote laboratory, fingerprints are used for authentication with an optical joint transform correlation (JTC) system. The phase-aided fringe projection 3D microscope (FP-3DM), which can be remotely controlled, is employed to achieve 3D imaging and metrology of micro objects.

  20. Imaging of human differentiated 3D neural aggregates using light sheet fluorescence microscopy

    PubMed Central

    Gualda, Emilio J.; Simão, Daniel; Pinto, Catarina; Alves, Paula M.; Brito, Catarina

    2014-01-01

    The development of three-dimensional (3D) cell cultures represents a big step towards a better understanding of cell behavior and disease in a more natural-like environment, providing not only single but multiple cell type interactions in a complex 3D matrix that highly resembles physiological conditions. Light sheet fluorescence microscopy (LSFM) is becoming an excellent tool for fast imaging of such 3D biological structures. We demonstrate the potential of this technique for the imaging of human differentiated 3D neural aggregates in fixed and live samples, namely calcium imaging and cell death processes, showing the power of this imaging modality compared with traditional microscopy. The combination of light sheet microscopy and 3D neural cultures will open the door to more challenging experiments involving drug testing at large scale, as well as to a better understanding of relevant biological processes in a more realistic environment. PMID:25161607

  1. A Featureless Approach to 3D Polyhedral Building Modeling from Aerial Images

    PubMed Central

    Hammoudi, Karim; Dornaika, Fadi

    2011-01-01

    This paper presents a model-based approach for reconstructing 3D polyhedral building models from aerial images. The proposed approach exploits some geometric and photometric properties resulting from the perspective projection of planar structures. Data are provided by calibrated aerial images. The novelty of the approach lies in its featurelessness and in its use of direct optimization based on raw image brightness. The proposed framework avoids feature extraction and matching. The 3D polyhedral model is directly estimated by optimizing an objective function that combines an image-based dissimilarity measure and a gradient score over several aerial images. The optimization process is carried out by the Differential Evolution algorithm. The proposed approach is intended to provide more accurate 3D reconstruction than feature-based approaches. Fast 3D model rectification and updating can take advantage of the proposed method. Several results and evaluations of performance from real and synthetic images show the feasibility and robustness of the proposed approach. PMID:22346575
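    The Differential Evolution optimizer the paper relies on is a derivative-free population method. A minimal DE/rand/1/bin sketch is shown below, minimizing a toy 2-parameter function rather than the paper's image-based dissimilarity measure; population size, F, CR and the bounds are illustrative defaults.

```python
import random

# Minimal differential evolution (DE/rand/1/bin) sketch.
def differential_evolution(objective, bounds, pop_size=20, F=0.8, CR=0.9,
                           generations=200, seed=1):
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    scores = [objective(x) for x in pop]
    for _ in range(generations):
        for i in range(pop_size):
            # pick three distinct partners for the mutant vector
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            jrand = rng.randrange(dim)
            trial = []
            for j in range(dim):
                if rng.random() < CR or j == jrand:
                    v = pop[a][j] + F * (pop[b][j] - pop[c][j])
                    lo, hi = bounds[j]
                    v = min(max(v, lo), hi)     # clamp to the search box
                else:
                    v = pop[i][j]
                trial.append(v)
            s = objective(trial)
            if s <= scores[i]:                  # greedy selection
                pop[i], scores[i] = trial, s
    best = min(range(pop_size), key=scores.__getitem__)
    return pop[best], scores[best]

# Toy stand-in for the image-based dissimilarity measure:
sphere = lambda x: sum(v * v for v in x)
best_x, best_f = differential_evolution(sphere, [(-5, 5), (-5, 5)])
```

    In the paper's setting the objective would render the candidate polyhedral model into each calibrated aerial view and score the brightness dissimilarity, but the optimizer loop is unchanged.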

  2. 3D Modeling from Multi-views Images for Cultural Heritage in Wat-Pho, Thailand

    NASA Astrophysics Data System (ADS)

    Soontranon, N.; Srestasathiern, P.; Lawawirojwong, S.

    2015-08-01

    In Thailand, there are several types of (tangible) cultural heritage. This work focuses on 3D modeling of heritage objects from multi-view images. The images are acquired using a DSLR camera which costs around 1,500 (camera and lens). Compared with a 3D laser scanner, the camera is cheaper and lighter, and is therefore available to public users and convenient for accessing narrow areas. The acquired images consist of various sculptures and architectures in Wat-Pho, a Buddhist temple located behind the Grand Palace (Bangkok, Thailand). Wat-Pho is known as the temple of the reclining Buddha and the birthplace of traditional Thai massage. To compute the 3D models, the workflow is separated into the following steps: data acquisition, image matching, image calibration and orientation, dense matching, and point cloud processing. For this initial work, small heritage objects less than 3 meters in height are considered for the experimental results. A set of multi-view images of an object of interest is used as input data for 3D modeling. In our experiments, 3D models are obtained with the MICMAC (open source) software developed by IGN, France. The output 3D models are represented using standard formats for 3D point clouds and triangulated surfaces such as .ply, .off, .obj, etc. To obtain efficient 3D models, post-processing techniques such as noise reduction, surface simplification and reconstruction are required for the final results. The reconstructed 3D models can be provided for public access via websites, DVDs, or printed materials. Highly accurate 3D models can also be used as reference data for heritage objects that must be restored after deterioration over their lifetime, natural disasters, etc.
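    One of the standard output formats mentioned (.ply) is simple enough to write directly. A minimal, hypothetical helper (not part of MICMAC) for an ASCII point cloud might look like:

```python
# Write a point cloud as an ASCII PLY file: a small header declaring the
# vertex count and per-vertex properties, followed by one line per point.

def write_ply(path, points):
    """points: list of (x, y, z) tuples."""
    with open(path, "w") as f:
        f.write("ply\nformat ascii 1.0\n")
        f.write("element vertex %d\n" % len(points))
        f.write("property float x\nproperty float y\nproperty float z\n")
        f.write("end_header\n")
        for x, y, z in points:
            f.write("%f %f %f\n" % (x, y, z))

write_ply("cloud.ply", [(0.0, 0.0, 0.0), (1.0, 2.0, 3.0)])
```

    Real exports typically add per-vertex color or normal properties and, for triangulated surfaces, a face element; the header mechanism is the same.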

  3. Automated retinal image analysis for diabetic retinopathy in telemedicine.

    PubMed

    Sim, Dawn A; Keane, Pearse A; Tufail, Adnan; Egan, Catherine A; Aiello, Lloyd Paul; Silva, Paolo S

    2015-03-01

    There will be an estimated 552 million persons with diabetes globally by the year 2030. Over half of these individuals will develop diabetic retinopathy, representing a nearly insurmountable burden for providing diabetes eye care. Telemedicine programmes have the capability to distribute quality eye care to virtually any location and address the lack of access to ophthalmic services. In most programmes, there is currently a heavy reliance on specially trained retinal image graders, a resource in short supply worldwide. These factors necessitate an image grading automation process to increase the speed of retinal image evaluation while maintaining accuracy and cost effectiveness. Several automatic retinal image analysis systems designed for use in telemedicine have recently become commercially available. Such systems have the potential to substantially improve the manner by which diabetes eye care is delivered by providing automated real-time evaluation to expedite diagnosis and referral if required. Furthermore, integration with electronic medical records may allow a more accurate prognostication for individual patients and may provide predictive modelling of medical risk factors based on broad population data. PMID:25697773

  4. Visualization of 3D images from multiple texel images created from fused LADAR/digital imagery

    NASA Astrophysics Data System (ADS)

    Killpack, Cody C.; Budge, Scott E.

    2015-05-01

    The ability to create 3D models, using registered texel images (fused ladar and digital imagery), is an important topic in remote sensing. These models are automatically generated by matching multiple texel images into a single common reference frame. However, rendering a sequence of independently registered texel images often provides challenges. Although accurately registered, the model textures are often incorrectly overlapped and interwoven when using standard rendering techniques. Consequently, corrections must be done after all the primitives have been rendered, by determining the best texture for any viewable fragment in the model. Determining the best texture is difficult, as each texel image remains independent after registration. The depth data is not merged to form a single 3D mesh, thus eliminating the possibility of generating a fused texture atlas. It is therefore necessary to determine which textures are overlapping and how to best combine them dynamically during the render process. The best texture for a particular pixel can be defined using 3D geometric criteria, in conjunction with a real-time, view-dependent ranking algorithm. As a result, overlapping texture fragments can now be hidden, exposed, or blended according to their computed measure of reliability.
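    The 3D geometric ranking idea can be sketched as a simple view-dependent score, assuming reliability is approximated by how head-on a camera viewed the surface fragment; the paper's actual reliability measure may combine additional criteria, so this is only an illustrative reduction.

```python
import math

# Rank candidate textures for a surface fragment by the cosine between the
# surface normal and each camera's viewing direction (1.0 = head-on view).

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def rank_textures(surface_normal, camera_dirs):
    """Return camera indices sorted from most to least reliable."""
    n = normalize(surface_normal)
    scores = [sum(a * b for a, b in zip(n, normalize(d))) for d in camera_dirs]
    return sorted(range(len(camera_dirs)), key=lambda i: -scores[i])

# Fragment facing +z, three candidate cameras:
order = rank_textures((0, 0, 1), [(0, 0, 1), (1, 0, 1), (1, 0, 0)])
```

    At render time such a score lets overlapping fragments be hidden (low score), exposed (highest score), or blended with score-proportional weights, recomputed as the viewpoint changes.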

  5. Synthesis of image sequences for Korean sign language using 3D shape model

    NASA Astrophysics Data System (ADS)

    Hong, Mun-Ho; Choi, Chang-Seok; Kim, Chang-Seok; Jeon, Joon-Hyeon

    1995-05-01

    This paper proposes a method for offering information and enabling communication for deaf people. The deaf communicate with others by means of sign language, but most hearing people are unfamiliar with it. This method converts text data into the corresponding image sequences of Korean sign language (KSL). A general 3D shape model of the upper body is used to generate the 3D motions of KSL; this general model must be constructed with regard to the anatomical structure of the human body. To obtain a personal 3D shape model, the general model is adjusted to personal base images. Image synthesis for KSL consists of deforming a personal 3D shape model and texture-mapping the personal images onto the deformed model. The 3D motions for KSL comprise facial expressions and 3D movements of the head, trunk, arms and hands, and are parameterized for easy deformation of the model. These motion parameters of the upper body are extracted from a skilled signer's motion for each KSL sign and stored in a database. Editing the parameters according to the input text data generates the image sequences of 3D motions.

  6. In vivo imaging of the retinal pigment epithelial cells

    NASA Astrophysics Data System (ADS)

    Morgan, Jessica Ijams Wolfing

    The retinal pigment epithelial (RPE) cells form an important layer of the retina because they are responsible for providing metabolic support to the photoreceptors. Techniques to image the RPE layer include autofluorescence imaging with a scanning laser ophthalmoscope (SLO). However, previous studies were unable to resolve single RPE cells in vivo. This thesis describes the technique of combining autofluorescence, SLO, adaptive optics (AO), and dual-wavelength simultaneous imaging and registration to visualize the individual cells in the RPE mosaic in human and primate retina for the first time in vivo. After imaging the RPE mosaic non-invasively, the cell layer's structure and regularity were characterized using quantitative metrics of cell density, spacing, and nearest neighbor distances. The RPE mosaic was compared to the cone mosaic, and RPE imaging methods were confirmed using histology. The ability to image the RPE mosaic led to the discovery of a novel retinal change following light exposure; 568 nm exposures caused an immediate reduction in autofluorescence followed by either full recovery or permanent damage in the RPE layer. A safety study was conducted to determine the range of exposure irradiances that caused permanent damage or transient autofluorescence reductions. Additionally, the threshold exposure causing autofluorescence reduction was determined and reciprocity of radiant exposure was confirmed. Light exposures delivered by the AOSLO were not significantly different than those delivered by a uniform source. As all exposures tested were near or below the permissible light levels of safety standards, this thesis provides evidence that the current light safety standards need to be revised. Finally, with the retinal damage and autofluorescence reduction thresholds identified, the methods of RPE imaging were modified to allow successful imaging of the individual cells in the RPE mosaic while still ensuring retinal safety. 
This thesis has provided a highly sensitive method for studying the in vivo morphology of individual RPE cells in normal, diseased, and damaged retinas. The methods presented here also will allow longitudinal studies for tracking disease progression and assessing treatment efficacy in human patients and animal models of retinal diseases affecting the RPE.
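    One of the mosaic-regularity metrics mentioned above, the nearest-neighbor distance of cell centers, is straightforward to compute. A brute-force sketch (adequate for the few thousand cells in a typical imaged patch; a k-d tree would scale better):

```python
import math

# Mean nearest-neighbor distance of a set of 2D cell centers, one of the
# standard regularity metrics for photoreceptor and RPE mosaics.

def mean_nn_distance(centers):
    total = 0.0
    for i, (xi, yi) in enumerate(centers):
        nearest = min(math.hypot(xi - xj, yi - yj)
                      for j, (xj, yj) in enumerate(centers) if j != i)
        total += nearest
    return total / len(centers)

# Unit grid of four cells: every cell's nearest neighbor is 1 unit away.
cells = [(0, 0), (1, 0), (0, 1), (1, 1)]
mean_nn = mean_nn_distance(cells)
```

    Cell density and spacing follow from the same center list, and the ratio of mean to standard deviation of nearest-neighbor distances is a common regularity index for comparing mosaics such as RPE versus cones.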

  7. A new multi-planar reconstruction method using voxel based beamforming for 3D ultrasound imaging

    NASA Astrophysics Data System (ADS)

    Ju, Hyunseok; Kang, Jinbum; Song, Ilseob; Yoo, Yangmo

    2015-03-01

    For multi-planar reconstruction in 3D ultrasound imaging, direct and separable 3D scan conversion (SC) have been used for transforming ultrasound data acquired in the 3D polar coordinate system to the 3D Cartesian coordinate system. These 3D SC methods can visualize an arbitrary plane of 3D ultrasound volume data. However, they suffer from blurring and blocking artifacts due to resampling during SC. In this paper, a new multi-planar reconstruction method based on voxel based beamforming (VBF) is proposed for reducing blurring and blocking artifacts. In VBF, unlike direct and separable 3D SC, each voxel on an arbitrary imaging plane is directly reconstructed by applying the focusing delay to radio-frequency (RF) data, so that the blurring and blocking artifacts can be removed. In the phantom study, the proposed VBF method showed higher contrast and less blurring than the separable and direct 3D SC methods, consistent with the measured information entropy contrast (IEC) values (98.9 vs. 42.0 vs. 47.9, respectively). In addition, the VBF and 3D SC methods were implemented on a high-end GPU by using CUDA programming. The execution times for the VBF, separable, and direct 3D SC methods are 1656.1 ms, 1633.3 ms, and 1631.4 ms respectively, which are I/O bounded. These results indicate that the proposed VBF method can improve the image quality of 3D ultrasound B-mode imaging by removing the blurring and blocking artifacts associated with 3D scan conversion, and show the feasibility of pseudo-real-time operation.
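    The core VBF idea, applying a per-voxel focusing delay directly to RF data, can be sketched in a simplified one-way delay-and-sum form. The sound speed, sampling rate, and geometry below are illustrative assumptions, and a real beamformer would also apply transmit delays, apodization, and interpolation.

```python
import math

# Illustrative delay-and-sum sketch of voxel-based beamforming: each output
# voxel sums the RF samples taken at each element's computed focusing delay.

def beamform_voxel(voxel, elements, rf, fs=40e6, c=1540.0):
    """voxel, elements: (x, y, z) in metres; rf[e][k] = RF sample k of
    element e. One-way receive delay only, for brevity."""
    value = 0.0
    for e, (ex, ey, ez) in enumerate(elements):
        dist = math.dist(voxel, (ex, ey, ez))
        k = int(round(dist / c * fs))          # focusing delay in samples
        if 0 <= k < len(rf[e]):
            value += rf[e][k]
    return value

# Single element, echo arriving at sample 40 (= 1.54 mm at 1540 m/s, 40 MHz):
elements = [(0.0, 0.0, 0.0)]
rf = [[0.0] * 100]
rf[0][40] = 1.0
v = beamform_voxel((0.0, 0.0, 0.00154), elements, rf)
```

    Because the delay is computed per voxel of the display plane, no polar-to-Cartesian resampling step is needed, which is what removes the interpolation blurring of conventional scan conversion.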

  8. Quality Prediction of Asymmetrically Distorted Stereoscopic 3D Images.

    PubMed

    Wang, Jiheng; Rehman, Abdul; Zeng, Kai; Wang, Shiqi; Wang, Zhou

    2015-11-01

    Objective quality assessment of distorted stereoscopic images is a challenging problem, especially when the distortions in the left and right views are asymmetric. Existing studies suggest that simply averaging the quality of the left and right views well predicts the quality of symmetrically distorted stereoscopic images, but generates substantial prediction bias when applied to asymmetrically distorted stereoscopic images. In this paper, we first build a database that contains both single-view and symmetrically and asymmetrically distorted stereoscopic images. We then carry out a subjective test, where we find that the quality prediction bias of the asymmetrically distorted images could lean toward opposite directions (overestimate or underestimate), depending on the distortion types and levels. Our subjective test also suggests that eye dominance effect does not have strong impact on the visual quality decisions of stereoscopic images. Furthermore, we develop an information content and divisive normalization-based pooling scheme that improves upon structural similarity in estimating the quality of single-view images. Finally, we propose a binocular rivalry-inspired multi-scale model to predict the quality of stereoscopic images from that of the single-view images. Our results show that the proposed model, without explicitly identifying image distortion types, successfully eliminates the prediction bias, leading to significantly improved quality prediction of the stereoscopic images. PMID:26087491
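    The information-content-weighted pooling idea can be caricatured with local variance as a crude proxy for information content. This is only a hedged sketch; the paper's actual scheme applies divisive normalization and operates on structural-similarity maps rather than two scalar regions.

```python
# Pool local quality scores with information-content weights: regions with
# more signal variance (more "information") dominate the pooled score.

def weighted_pool(quality, variance, eps=1e-6):
    """quality, variance: per-region lists; eps avoids a zero denominator."""
    weights = [v + eps for v in variance]
    return sum(q * w for q, w in zip(quality, weights)) / sum(weights)

# A high-variance region scored 0.9 outweighs a flat region scored 0.5:
pooled = weighted_pool([0.9, 0.5], [10.0, 0.1])
```

    A plain average of the two scores would give 0.7; weighting by information content pulls the pooled value toward the textured region, which is where distortions are most visible.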

  9. 3D fluorescence anisotropy imaging using selective plane illumination microscopy.

    PubMed

    Hedde, Per Niklas; Ranjit, Suman; Gratton, Enrico

    2015-08-24

    Fluorescence anisotropy imaging is a popular method to visualize changes in organization and conformation of biomolecules within cells and tissues. In such an experiment, depolarization effects resulting from differences in orientation, proximity and rotational mobility of fluorescently labeled molecules are probed with high spatial resolution. Fluorescence anisotropy is typically imaged using laser scanning and epifluorescence-based approaches. Unfortunately, those techniques are limited in either axial resolution, image acquisition speed, or by photobleaching. In the last decade, however, selective plane illumination microscopy has emerged as the preferred choice for three-dimensional time lapse imaging combining axial sectioning capability with fast, camera-based image acquisition, and minimal light exposure. We demonstrate how selective plane illumination microscopy can be utilized for three-dimensional fluorescence anisotropy imaging of live cells. We further examined the formation of focal adhesions by three-dimensional time lapse anisotropy imaging of CHO-K1 cells expressing an EGFP-paxillin fusion protein. PMID:26368202
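    The anisotropy values imaged in such experiments follow the standard steady-state definition, which is easy to state in code; the G factor correcting detector polarization bias is included for completeness.

```python
# Steady-state fluorescence anisotropy from parallel- and perpendicular-
# polarized intensities: r = (I_par - G*I_perp) / (I_par + 2*G*I_perp).

def anisotropy(i_par, i_perp, g=1.0):
    return (i_par - g * i_perp) / (i_par + 2.0 * g * i_perp)

r = anisotropy(3.0, 1.0)   # -> 0.4, the one-photon limit for an immobile dye
```

    In an imaging context this ratio is evaluated per pixel from the two polarization channels, so depolarization from rotation or homo-FRET shows up as a spatial map of reduced r.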

  10. High-resolution retinal imaging: enhancement techniques

    NASA Astrophysics Data System (ADS)

    Mujat, Mircea; Patel, Ankit; Iftimia, Nicusor; Akula, James D.; Fulton, Anne B.; Ferguson, R. Daniel

    2015-03-01

    Adaptive optics (AO) has achieved success in a range of applications in ophthalmology where microstructures need to be identified, counted, and mapped. Multiple images are averaged to improve the SNR or analyzed for temporal dynamics. For small patches, image registration by cross-correlation is straightforward. Larger images require more sophisticated registration techniques. Strip-based registration has been used successfully for photoreceptor mosaic alignment in small patches; however, if the deformations along long strips are not simple displacements, averaging will actually degrade the images. We have applied non-rigid registration that significantly improves the quality of processed images for mapping cones and rods, and microvasculature in dark-field imaging. Local grid deformations account for local image stretching and compression due to a number of causes. Individual blood cells can be traced along capillaries in high-speed imaging (130 fps) and flow dynamics can be analyzed.

  11. Determining 3D flow fields via multi-camera light field imaging.

    PubMed

    Truscott, Tadd T; Belden, Jesse; Nielson, Joseph R; Daily, David J; Thomson, Scott L

    2013-01-01

    In the field of fluid mechanics, the resolution of computational schemes has outpaced experimental methods and widened the gap between predicted and observed phenomena in fluid flows. Thus, a need exists for an accessible method capable of resolving three-dimensional (3D) data sets for a range of problems. We present a novel technique for performing quantitative 3D imaging of many types of flow fields. The 3D technique enables investigation of complicated velocity fields and bubbly flows. Measurements of these types present a variety of challenges to the instrument. For instance, optically dense bubbly multiphase flows cannot be readily imaged by traditional, non-invasive flow measurement techniques due to the bubbles occluding optical access to the interior regions of the volume of interest. By using Light Field Imaging we are able to reparameterize images captured by an array of cameras to reconstruct a 3D volumetric map for every time instance, despite partial occlusions in the volume. The technique makes use of an algorithm known as synthetic aperture (SA) refocusing, whereby a 3D focal stack is generated by combining images from several cameras post-capture (1). Light Field Imaging allows for the capture of angular as well as spatial information about the light rays, and hence enables 3D scene reconstruction. Quantitative information can then be extracted from the 3D reconstructions using a variety of processing algorithms. In particular, we have developed measurement methods based on Light Field Imaging for performing 3D particle image velocimetry (PIV), extracting bubbles in a 3D field and tracking the boundary of a flickering flame. We present the fundamentals of the Light Field Imaging methodology in the context of our setup for performing 3D PIV of the airflow passing over a set of synthetic vocal folds, and show representative results from application of the technique to a bubble-entraining plunging jet. PMID:23486112
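    Synthetic aperture refocusing reduces, in its simplest form, to shifting each camera's image by a depth-dependent amount and averaging, so that objects at the chosen depth reinforce while off-depth objects blur out. A minimal sketch on toy data follows; the offsets and the depth parameter alpha are illustrative, and real implementations use calibrated homographies and subpixel resampling.

```python
# Shift-and-add synthetic aperture refocusing over a camera array.

def sa_refocus(images, offsets, alpha):
    """images: dict cam -> 2D list; offsets: cam -> (dx, dy) in pixels at
    unit depth; alpha scales the shift to select the focal plane."""
    h = len(next(iter(images.values())))
    w = len(next(iter(images.values()))[0])
    out = [[0.0] * w for _ in range(h)]
    for cam, img in images.items():
        dx, dy = offsets[cam]
        sx, sy = int(round(alpha * dx)), int(round(alpha * dy))
        for y in range(h):
            for x in range(w):
                xs, ys = x + sx, y + sy
                if 0 <= xs < w and 0 <= ys < h:
                    out[y][x] += img[ys][xs] / len(images)
    return out

# A point target seen by two cameras with one pixel of parallax:
img0 = [[0.0] * 4 for _ in range(4)]; img0[1][1] = 1.0
img1 = [[0.0] * 4 for _ in range(4)]; img1[1][2] = 1.0
focused = sa_refocus({0: img0, 1: img1}, {0: (0, 0), 1: (1, 0)}, alpha=1.0)
misfocused = sa_refocus({0: img0, 1: img1}, {0: (0, 0), 1: (1, 0)}, alpha=0.0)
```

    Sweeping alpha generates the 3D focal stack described in the abstract; at the correct alpha the target's intensity is fully reinforced, while at other values it is spread across the contributing views.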

  12. Segmentation of Intra-Retinal Layers from Optical Coherence Tomography Images

    E-print Network

    Hamarneh, Ghassan

    ...and monitor a variety of retinal diseases, including macular edema, macular holes, and degeneration (thinning) ... are the critical layers for glaucomatous degeneration. Index Terms--Optical Coherence Tomography (OCT), retinal ... therapy to combat retinal degeneration [5]-[9]. Imaging the retina in rodents is significantly more...

  13. A new approach towards image based virtual 3D city modeling by using close range photogrammetry

    NASA Astrophysics Data System (ADS)

    Singh, S. P.; Jain, K.; Mandla, V. R.

    2014-05-01

    A 3D city model is a digital representation of the Earth's surface and its related objects such as buildings, trees, vegetation, and man-made features belonging to urban areas. The demand for 3D city modeling is increasing day by day for various engineering and non-engineering applications. Generally, three main image-based approaches are used for virtual 3D city model generation: sketch-based modeling, procedural-grammar-based modeling, and close range photogrammetry based modeling. The literature shows that, to date, there is no complete solution for creating a complete 3D city model from images, and these image-based methods have limitations. This paper gives a new approach towards image based virtual 3D city modeling by using close range photogrammetry. The approach is divided into three sections: data acquisition, 3D data processing, and data combination. In the data acquisition process, a multi-camera setup was developed and used for video recording of an area; image frames were created from the video data, and the minimum required and most suitable frames were selected for 3D processing. In the second section, a 3D model of the area was created based on close range photogrammetric principles and computer vision techniques. In the third section, this 3D model was exported for adding and merging with other pieces of the larger area, and scaling and alignment of the 3D model were done. After applying texturing and rendering to this model, a final photo-realistic textured 3D model was created and transferred into a walk-through model or movie form. Most of the processing steps are automatic, so the method is cost effective and less laborious, and the accuracy of the model is good. For this research work, the study area is the campus of the Department of Civil Engineering, Indian Institute of Technology, Roorkee, which acts as a prototype for a city. 
    Aerial photography is restricted in many countries and high resolution satellite images are costly. The proposed method is based only on simple video recording of the area, and is thus suitable for 3D city modeling. A photo-realistic, scalable, geo-referenced virtual 3D city model is useful for many kinds of applications, such as planning in navigation, tourism, disaster management, transportation, municipal, urban and environmental management, and the real-estate industry. This study thus provides a good roadmap for the geomatics community to create photo-realistic virtual 3D city models by using close range photogrammetry.

  14. High Resolution MALDI Imaging Mass Spectrometry of Retinal Tissue Lipids

    NASA Astrophysics Data System (ADS)

    Anderson, David M. G.; Ablonczy, Zsolt; Koutalos, Yiannis; Spraggins, Jeffrey; Crouch, Rosalie K.; Caprioli, Richard M.; Schey, Kevin L.

    2014-08-01

    Matrix assisted laser desorption ionization imaging mass spectrometry (MALDI IMS) has the ability to provide an enormous amount of information on the abundances and spatial distributions of molecules within biological tissues. The rapid progress in the development of this technology significantly improves our ability to analyze smaller and smaller areas and features within tissues. The mammalian eye has evolved over millions of years to become an essential asset for survival, providing important sensory input of an organism's surroundings. The highly complex sensory retina of the eye is comprised of numerous cell types organized into specific layers with varying dimensions, the thinnest of which is the 10 µm retinal pigment epithelium (RPE). This single cell layer and the photoreceptor layer contain the complex biochemical machinery required to convert photons of light into electrical signals that are transported to the brain by axons of retinal ganglion cells. Diseases of the retina, including age-related macular degeneration (AMD), retinitis pigmentosa, and diabetic retinopathy, occur when the functions of these cells are interrupted by molecular processes that are not fully understood. In this report, we demonstrate the use of high spatial resolution MALDI IMS and FT-ICR tandem mass spectrometry in the Abca4 -/- knockout mouse model of Stargardt disease, a juvenile onset form of macular degeneration. The spatial distributions and identity of lipid and retinoid metabolites are shown to be unique to specific retinal cell layers.

  15. The agreement between 3D, standard 2D and triplane 2D speckle tracking: effects of image quality and 3D volume rate

    PubMed Central

    Stöbe, Stephan; Tarr, Adrienn; Pfeiffer, Dietrich; Hagendorff, Andreas

    2014-01-01

    Comparison of 3D and 2D speckle tracking performed on standard 2D and triplane 2D datasets of normal and pathological left ventricular (LV) wall-motion patterns, with a focus on the effects that 3D volume rate (3DVR), image quality and tracking artifacts have on the agreement between 2D and 3D speckle tracking. 37 patients with normal LV function and 18 patients with ischaemic wall-motion abnormalities underwent 2D and 3D echocardiography, followed by offline speckle tracking measurements. The values of 3D global, regional and segmental strain were compared with the standard 2D and triplane 2D strain values. Correlation analysis with the LV ejection fraction (LVEF) was also performed. The 3D and 2D global strain values correlated well in both normally and abnormally contracting hearts, though systematic differences between the two methods were observed. Of the 3D strain parameters, the area strain showed the best correlation with the LVEF. The numerical agreement of the 3D and 2D analyses varied significantly with the volume rate and image quality of the 3D datasets. The highest correlation between 2D and 3D peak systolic strain values was found between 3D area strain and standard 2D longitudinal strain. Regional wall-motion abnormalities were detected similarly by 2D and 3D speckle tracking, and 2D speckle tracking of triplane datasets showed results similar to those of conventional 2D datasets. In summary, 2D and 3D speckle tracking similarly detect normal and pathological wall-motion patterns, but limited image quality has a significant impact on the agreement between 3D and 2D numerical strain values. PMID:26693303

  16. Infrared imaging of the polymer 3D-printing process

    NASA Astrophysics Data System (ADS)

    Dinwiddie, Ralph B.; Kunc, Vlastimil; Lindal, John M.; Post, Brian; Smith, Rachel J.; Love, Lonnie; Duty, Chad E.

    2014-05-01

    Both mid-wave and long-wave IR cameras are used to measure various temperature profiles in thermoplastic parts as they are printed. Two significantly different 3D-printers are used in this study. The first is a small scale commercially available Solidoodle 3 printer, which prints parts with layer thicknesses on the order of 125 µm. The second is a "Big Area Additive Manufacturing" (BAAM) 3D-printer developed at Oak Ridge National Laboratory, which prints parts with a layer thickness of 4.06 mm. Of particular interest is the temperature of the previously deposited layer as the new hot layer is about to be extruded onto it: the two layers are expected to have a stronger bond if the temperature of the substrate layer is above the glass transition temperature. This paper describes the measurement technique and results for a study of temperature decay and substrate layer temperature for ABS thermoplastic with and without the addition of chopped carbon fibers.
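    Whether the substrate layer is still above the glass transition temperature when the next layer arrives can be estimated with a simple exponential-cooling model. This is illustrative only, not the paper's analysis; the extrusion and ambient temperatures, the time constant, and the Tg value below are assumed values typical of ABS.

```python
import math

# Newton-cooling sketch: layer temperature decays exponentially toward
# ambient, T(t) = T_amb + (T_extrude - T_amb) * exp(-t / tau).

def substrate_temp(t, t_extrude=230.0, t_amb=25.0, tau=20.0):
    """Temperature (deg C) of a deposited layer t seconds after extrusion."""
    return t_amb + (t_extrude - t_amb) * math.exp(-t / tau)

def above_tg(t, tg=105.0, **kw):
    """Is the substrate still above the glass transition when revisited?"""
    return substrate_temp(t, **kw) >= tg
```

    Fitting tau to the measured IR decay curves would let a printer's layer time be chosen so that each pass returns while the substrate is still above Tg, which is the bonding condition the abstract describes.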

  17. 3D TUMOR SHAPE RECONSTRUCTION FROM 2D BIOLUMINESCENCE IMAGES Junzhou Huang, Xiaolei Huang, Dimitris Metaxas

    E-print Network

    Huang, Junzhou

    ... spots (corresponding to tumors) are segmented in the set of bioluminescence images. Second, the images ... of our reconstruction method. Bioluminescence imaging (BLI) is an emerging technique ...

  18. 3-D Deep Penetration Photoacoustic Imaging with a 2-D CMUT Array.

    PubMed

    Ma, Te-Jen; Kothapalli, Sri Rajasekhar; Vaithilingam, Srikant; Oralkan, Omer; Kamaya, Aya; Wygant, Ira O; Zhuang, Xuefeng; Gambhir, Sanjiv S; Jeffrey, R Brooke; Khuri-Yakub, Butrus T

    2010-10-11

    In this work, we demonstrate 3-D photoacoustic imaging of optically absorbing targets embedded as deep as 5 cm inside a highly scattering background medium using a 2-D capacitive micromachined ultrasonic transducer (CMUT) array with a center frequency of 5.5 MHz. 3-D volumetric images and 2-D maximum intensity projection images are presented to show the objects imaged at different depths. Due to the close proximity of the CMUT to the integrated frontend circuits, the CMUT array imaging system has a low noise floor. This makes the CMUT a promising technology for deep tissue photoacoustic imaging. PMID:22977296

  20. Processing sequence for non-destructive inspection based on 3D terahertz images

    NASA Astrophysics Data System (ADS)

    Balacey, H.; Perraud, Jean-Baptiste; Bou Sleiman, J.; Guillet, Jean-Paul; Recur, B.; Mounaix, P.

    2014-11-01

    In this paper we present an innovative data and image processing sequence to perform non-destructive inspection from 3D terahertz (THz) images. We develop all the steps, starting from a 3D tomographic reconstruction of a sample from its radiographs acquired with a monochromatic millimetre-wave imaging system. An automated segmentation then provides the different volumes of interest (VOI) composing the sample. Next, 3D visualization and dimensional measurements are performed on these VOI separately, in order to provide accurate non-destructive testing (NDT) of the studied sample. The entire sequence is implemented in a single software package and validated through the analysis of different objects.

  1. Algorithm of pulmonary emphysema extraction using low dose thoracic 3D CT images

    NASA Astrophysics Data System (ADS)

    Saita, S.; Kubo, M.; Kawata, Y.; Niki, N.; Nakano, Y.; Omatsu, H.; Tominaga, K.; Eguchi, K.; Moriyama, N.

    2006-03-01

    Recently, due to aging and smoking, the number of emphysema patients has been increasing. Alveoli destroyed by emphysema cannot be restored, so early detection of emphysema is desired. We describe a quantitative algorithm for extracting emphysematous lesions and quantitatively evaluating their distribution patterns using low dose thoracic 3-D CT images. The algorithm identifies lung anatomies and extracts low attenuation areas (LAA) as emphysematous lesion candidates. Applying the algorithm to 100 thoracic 3-D CT images and their follow-up 3-D CT images, we demonstrate its potential effectiveness in assisting radiologists and physicians to quantitatively evaluate the distribution of emphysematous lesions and their evolution over time.
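A minimal sketch of the low-attenuation-area step: lung voxels below a Hounsfield-unit threshold are counted as emphysematous candidates. The -950 HU cutoff is a commonly used emphysema threshold assumed here for illustration; the abstract does not state the authors' exact value.

```python
def laa_percentage(hu_values, threshold=-950):
    """Percentage of lung voxels below the low-attenuation threshold (HU).
    The -950 HU cutoff is an assumed, commonly used emphysema threshold."""
    laa = sum(1 for v in hu_values if v < threshold)
    return 100.0 * laa / len(hu_values)

# Hypothetical lung-voxel HU samples (already masked to the lung region)
voxels = [-980, -960, -940, -870, -910, -955, -990, -820]
pct = laa_percentage(voxels)  # fraction of candidate emphysematous voxels
```

Tracking `pct` across a baseline scan and its follow-up scans gives the kind of time-interval change the abstract describes.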

  2. Algorithm of pulmonary emphysema extraction using thoracic 3D CT images

    NASA Astrophysics Data System (ADS)

    Saita, Shinsuke; Kubo, Mitsuru; Kawata, Yoshiki; Niki, Noboru; Nakano, Yasutaka; Ohmatsu, Hironobu; Tominaga, Keigo; Eguchi, Kenji; Moriyama, Noriyuki

    2007-03-01

    Recently, due to aging and smoking, the number of emphysema patients has been increasing. Alveoli destroyed by emphysema cannot be restored, so early detection of emphysema is desired. We describe a quantitative algorithm for extracting emphysematous lesions and quantitatively evaluating their distribution patterns using low dose thoracic 3-D CT images. The algorithm identifies lung anatomies and extracts low attenuation areas (LAA) as emphysematous lesion candidates. Applying the algorithm to thoracic 3-D CT images and their follow-up 3-D CT images, we demonstrate its potential effectiveness in assisting radiologists and physicians to quantitatively evaluate the distribution of emphysematous lesions and their evolution over time.

  3. Textureless Macula Swelling Detection with Multiple Retinal Fundus Images

    SciTech Connect

    Giancardo, Luca; Meriaudeau, Fabrice; Karnowski, Thomas Paul; Tobin Jr, Kenneth William; Grisan, Enrico; Favaro, Paolo; Ruggeri, Alfredo; Chaum, Edward

    2010-01-01

    Retinal fundus images acquired with non-mydriatic digital fundus cameras are a versatile tool for the diagnosis of various retinal diseases. Because of the ease of use of newer camera models and their relatively low cost, these cameras can be employed by operators with limited training for telemedicine or Point-of-Care applications. We propose a novel technique that uses uncalibrated multiple-view fundus images to analyse the swelling of the macula. This innovation enables the detection and quantitative measurement of swollen areas by remote ophthalmologists, a capability that is not available with a single image and is prone to error with stereo fundus cameras. We also present automatic algorithms to measure features from the reconstructed image which are useful in Point-of-Care automated diagnosis of early macular edema, e.g., before the appearance of exudation. The technique presented is divided into three parts: first, a preprocessing technique simultaneously enhances the dark microstructures of the macula and equalises the image; second, all available views are registered using non-morphological sparse features; finally, a dense pyramidal optical flow is calculated for all the images and statistically combined to build a naïve height map of the macula. Results are presented on three sets of synthetic images and two sets of real world images. These preliminary tests show the ability to infer a minimum swelling of 300 microns and to correlate the reconstruction with the swollen location.
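The final step, statistically combining per-view optical-flow results into a height map, might look like the sketch below. The per-pixel median rule and the toy parallax values are assumptions for illustration; the abstract does not specify which statistic is used.

```python
import statistics

def combine_height_maps(maps):
    """Combine per-view parallax maps (equal-size 2-D lists) into one
    height map by taking the per-pixel median, a robust choice when the
    exact combination statistic is unspecified."""
    rows, cols = len(maps[0]), len(maps[0][0])
    return [[statistics.median(m[r][c] for m in maps) for c in range(cols)]
            for r in range(rows)]

# Three hypothetical 2x2 parallax-magnitude maps (arbitrary units);
# the bottom-right pixel parallax is consistently large -> swelling there.
views = [
    [[0.0, 0.1], [0.3, 0.9]],
    [[0.0, 0.2], [0.4, 1.1]],
    [[0.1, 0.1], [0.3, 1.0]],
]
height = combine_height_maps(views)
```

The median suppresses a single outlier view, which matters when registration of one of the uncalibrated views is imperfect.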

  4. 3-D Target Location from Stereoscopic SAR Images

    SciTech Connect

    DOERRY,ARMIN W.

    1999-10-01

    SAR range-Doppler images are inherently 2-dimensional. Targets with a height offset lay over onto offset range and azimuth locations. Just which image locations are laid upon depends on the imaging geometry, including depression angle, squint angle, and target bearing. This is the well-known layover phenomenon. Images formed with different aperture geometries will exhibit different layover characteristics. These differences can be exploited to ascertain target height information in a stereoscopic manner. Depending on the imaging geometries, height accuracy can be on the order of horizontal position accuracies, thereby rivaling the best IFSAR capabilities in fine resolution SAR images. All that is required for this to work are two distinct passes with suitably different geometries from any plain old SAR.
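The stereoscopic height recovery can be sketched with a simplified broadside ground-plane model in which a target of height h is displaced in ground range by -h*tan(depression angle); two passes at different depression angles then determine h. This simplified geometry is an illustration, not the paper's full squint-dependent model.

```python
import math

def target_height(x1, x2, dep1_deg, dep2_deg):
    """Recover target height (m) from the apparent ground-range positions
    x1, x2 of the same target in two SAR images taken at depression angles
    dep1, dep2.  Assumes the simplified broadside layover model
    x_i = X - h * tan(dep_i)."""
    t1 = math.tan(math.radians(dep1_deg))
    t2 = math.tan(math.radians(dep2_deg))
    return (x2 - x1) / (t1 - t2)

# Synthetic check: a 10 m target at true ground range 500 m,
# imaged at 30 and 45 degrees depression.
h_true, X = 10.0, 500.0
x1 = X - h_true * math.tan(math.radians(30))
x2 = X - h_true * math.tan(math.radians(45))
h_est = target_height(x1, x2, 30, 45)
```

The larger the difference in depression angles, the better conditioned the division, which is why "suitably different geometries" between the two passes matter.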

  5. Advanced 3D Geophysical Imaging Technologies for Geothermal Resource Characterization

    E-print Network

    Zhang, Haijiang

    2012-01-01

    We describe the ongoing development of joint geophysical imaging methodologies for geothermal site characterization and demonstrate their potential in two regions: Krafla volcano and associated geothermal fields in ...

  6. 3D Human Posture Estimation Using the HOG Features from Monocular Image

    E-print Network

    Takiguchi, Tetsuya

    ... the markers. A 3D human body is expressed by a multi-joint model, and a set of joint angles describes ... There are methods to extract features from images based on the structure of the human body, for example, using skin regions ... by PCA at every HOG block. Using the proposed methods, 3D human posture is estimated by linear ...

  7. VIRTUAL VIDEO CONFERENCING USING 3D MODEL-ASSISTED IMAGE-BASED RENDERING

    E-print Network

    Eisert, Peter

    ...-dimensional triangle mesh. By interpolating novel views from a 3-D image volume together with a 3-D model, natural ... from Washington D.C. to an auditorium in Manhattan was demonstrated. The Bell picture phone shown in ... communication. First with the introduction of digital computers ... (Figure 1: Bell's picture phone system of 1927.)

  8. DETECTING AND TRACKING SEVERE STORMS IN 3D DOPPLER RADAR IMAGES

    E-print Network

    Barron, John

    ... in Doppler radar datasets. This 3D detection and tracking algorithm is posed in a relaxation labelling ... (Robert E. Mercer, John L. Barron; Paul Joe, King City Radar Station, Meteorological Service of Canada, Toronto)

  9. Robust Estimation of 3D Human Poses from a Single Image

    E-print Network

    Wang, Chunyu

    2014-06-10

    Human pose estimation is a key step to action recognition. We propose a method of estimating 3D human poses from a single image, which works in conjunction with an existing 2D pose/joint detector. 3D pose estimation is ...

  10. 3D reconstructions with pixel-based images are made possible by digitally clearing plant and animal tissue

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Reconstruction of 3D images from a series of 2D images has been restricted by the limited capacity to decrease the opacity of surrounding tissue. Commercial software that allows color-keying and manipulation of 2D images in true 3D space allowed us to produce 3D reconstructions from pixel based imag...

  11. Efficient RPG detection in noisy 3D image data

    NASA Astrophysics Data System (ADS)

    Pipitone, Frank

    2011-06-01

    We address the automatic detection of ambush weapons such as rocket propelled grenades (RPGs) from range data, which might be derived from multiple-camera stereo with textured illumination or by other means. We describe our initial work in a new project involving the efficient acquisition of 3D scene data, as well as discrete point invariant techniques to perform real-time search for threats to a convoy. The shapes of the jump boundaries in the scene are exploited in this paper, rather than on-surface points, due to the large error typical of depth measurement at long range and the relatively high resolution obtainable in the transverse direction. We describe examples of the generation of a novel range-scaled chain code for detecting and matching jump boundaries.

  12. 3D registration through pseudo x-ray image generation.

    PubMed

    Viant, W J; Barnel, F

    2001-01-01

    Registration of a preoperative plan with the intraoperative position of the patient is still a largely unsolved problem. Current techniques generally require fiducials, either artificial or anatomical, to achieve the registration solution. Invariably these fiducials require implantation and/or direct digitisation. The technique described in this paper requires no digitisation or implantation of fiducials, but instead relies on the shape and form of the anatomy through a fully automated image comparison process. A pseudo image, generated from a virtual image intensifier's view of a CT dataset, is intraoperatively compared with a real x-ray image. The principle is to align the virtual with the real image intensifier. The technique is an extension of the work undertaken by Domergue [1] and based on original ideas by Weese [4]. PMID:11317805
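The automated image comparison at the heart of this approach scores how well a pseudo image rendered from the CT matches the real x-ray; a normalized cross-correlation score is one standard choice, used here for illustration since the abstract does not name the exact metric. The pixel values are toy data.

```python
import math

def ncc(a, b):
    """Normalized cross-correlation between two equal-size images given as
    flattened pixel lists; 1.0 means identical up to brightness/contrast."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = math.sqrt(sum((x - ma) ** 2 for x in a) *
                    sum((y - mb) ** 2 for y in b))
    return num / den

real_xray   = [10, 20, 30, 40, 50, 60]
pseudo_good = [12, 22, 29, 41, 52, 59]  # virtual intensifier nearly aligned
pseudo_bad  = [60, 10, 50, 20, 40, 30]  # virtual intensifier misaligned

s_good = ncc(real_xray, pseudo_good)
s_bad  = ncc(real_xray, pseudo_bad)
```

An optimizer would adjust the virtual intensifier's pose to maximize this score, aligning the virtual with the real image intensifier.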

  13. The Mathematical Foundations of 3D Compton Scatter Emission Imaging

    PubMed Central

    Truong, T. T.; Nguyen, M. K.; Zaidi, H.

    2007-01-01

    The mathematical principles of tomographic imaging using detected (unscattered) X- or gamma-rays are based on the two-dimensional Radon transform and many of its variants. In this paper, we show that two new generalizations, called conical Radon transforms, are related to three-dimensional imaging processes based on detected Compton scattered radiation. The first class of conical Radon transform has been introduced recently to support imaging principles of collimated detector systems. The second class is new and is closely related to the Compton camera imaging principles and invertible under special conditions. As they are poised to play a major role in future designs of biomedical imaging systems, we present an account of their most important properties which may be relevant for active researchers in the field. PMID:18382608

  14. Retinal oxygen saturation evaluation by multi-spectral fundus imaging

    NASA Astrophysics Data System (ADS)

    Khoobehi, Bahram; Ning, Jinfeng; Puissegur, Elise; Bordeaux, Kimberly; Balasubramanian, Madhusudhanan; Beach, James

    2007-03-01

    Purpose: To develop a multi-spectral method to measure oxygen saturation of the retina in the human eye. Methods: Five Cynomolgus monkeys with normal eyes were anesthetized with intramuscular ketamine/xylazine and intravenous pentobarbital. Multi-spectral fundus imaging was performed in five monkeys with a commercial fundus camera equipped with a liquid crystal tuned filter in the illumination light path and a 16-bit digital camera. Recording parameters were controlled with software written specifically for the application. Seven images at successively longer oxygen-sensing wavelengths were recorded within 4 seconds. Individual images for each wavelength were captured in less than 100 msec of flash illumination. Slightly misaligned images of separate wavelengths due to slight eye motion were registered and corrected by translational and rotational image registration prior to analysis. Numerical values of relative oxygen saturation of retinal arteries and veins and the underlying tissue in between the artery/vein pairs were evaluated by an algorithm previously described, but which is now corrected for blood volume from averaged pixels (n > 1000). Color saturation maps were constructed by applying the algorithm at each image pixel using a Matlab script. Results: Both the numerical values of relative oxygen saturation and the saturation maps correspond to the physiological condition, that is, in a normal retina, the artery is more saturated than the tissue and the tissue is more saturated than the vein. With the multi-spectral fundus camera and proper registration of the multi-wavelength images, we were able to determine oxygen saturation in the primate retinal structures on a tolerable time scale which is applicable to human subjects. Conclusions: Seven wavelength multi-spectral imagery can be used to measure oxygen saturation in retinal artery, vein, and tissue (microcirculation). This technique is safe and can be used to monitor oxygen uptake in humans. 
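A generic two-wavelength form of the saturation calculation: vessel optical densities at an oxygen-sensitive and an isosbestic wavelength are combined into an optical density ratio (ODR) that maps approximately linearly to saturation. The calibration constants and intensity values below are placeholders for illustration, not the algorithm's published values.

```python
import math

def optical_density(i_vessel, i_background):
    """Vessel optical density from vessel and adjacent-tissue intensities."""
    return math.log10(i_background / i_vessel)

def saturation_estimate(od_sensitive, od_isosbestic, a=1.0, b=-0.5):
    """Two-wavelength oximetry sketch: SO2 is approximately linear in
    ODR = OD_sensitive / OD_isosbestic.  a and b are placeholder
    calibration constants, not the paper's values."""
    odr = od_sensitive / od_isosbestic
    so2 = a + b * odr
    return min(max(so2, 0.0), 1.0)  # clamp to the physical range [0, 1]

# Hypothetical intensities: arteries absorb less at the sensitive wavelength.
art  = saturation_estimate(optical_density(80, 100), optical_density(55, 100))
vein = saturation_estimate(optical_density(60, 100), optical_density(55, 100))
```

With any negative slope `b`, a lower ODR yields a higher saturation, reproducing the expected artery > tissue > vein ordering the abstract reports.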

  15. Polarimetric imaging of retinal disease by polarization sensitive SLO

    NASA Astrophysics Data System (ADS)

    Miura, Masahiro; Elsner, Ann E.; Iwasaki, Takuya; Goto, Hiroshi

    2015-03-01

    Polarimetric imaging is used to evaluate different features of macular disease. Polarimetry images were recorded using a commercially available polarization-sensitive scanning laser ophthalmoscope at 780 nm (PS-SLO, GDx-N). From the PS-SLO data sets, we computed the average reflectance image, depolarized light image, and ratio depolarized light image. The average reflectance image is the grand mean of all input polarization states. The depolarized light image is the minimum of the crossed channel. The ratio depolarized light image is the ratio between the average reflectance image and the depolarized light image, and was used to compensate for variation in brightness. Each polarimetry image was compared with the autofluorescence image at 800 nm (NIR-AF) and the autofluorescence image at 500 nm (SW-AF). We evaluated four eyes with geographic atrophy in age-related macular degeneration, one eye with retinal pigment epithelium hyperplasia, and two eyes with chronic central serous chorioretinopathy. Polarization analysis could selectively emphasize different features of the retina. Findings in the ratio depolarized light images showed both similarities to and differences from the NIR-AF images. Areas of hyper-AF in NIR-AF images appeared as high intensity areas in the ratio depolarized light images, representing melanin accumulation. Areas of hypo-AF in NIR-AF images appeared as low intensity areas in the ratio depolarized light images, representing melanin loss. Drusen appeared as high-intensity areas in the ratio depolarized light images, but NIR-AF images were insensitive to the presence of drusen. SW-AF images, in contrast, showed completely different features from the ratio depolarized light images. Polarization sensitive imaging is an effective tool for non-invasive assessment of macular disease.

  16. Personal identification based on blood vessels of retinal fundus images

    NASA Astrophysics Data System (ADS)

    Fukuta, Keisuke; Nakagawa, Toshiaki; Hayashi, Yoshinori; Hatanaka, Yuji; Hara, Takeshi; Fujita, Hiroshi

    2008-03-01

    Biometric techniques have been implemented in place of conventional identification methods such as passwords in computers, automatic teller machines (ATM), and entrance and exit management systems. We propose a personal identification (PI) system using color retinal fundus images, which are unique to each individual. The proposed identification procedure is based on comparison of an input fundus image with reference fundus images in the database. In the first step, registration between the input image and the reference image is performed. This step includes translational and rotational movement. The PI is based on a measure of similarity between blood vessel images generated from the input and reference images. The similarity measure is defined as the cross-correlation coefficient calculated from the pixel values. When the similarity is greater than a predetermined threshold, the input image is identified, meaning that the input and reference images belong to the same person. Four hundred sixty-two fundus images, including forty-one image pairs from the same individuals, were used for the evaluation of the proposed technique. The false rejection rate and the false acceptance rate were 9.9×10⁻⁵% and 4.3×10⁻⁵%, respectively. The results indicate that the proposed method has higher performance than other biometrics except for DNA. For practical application in public settings, a device that can take retinal fundus images easily is needed. The proposed method is applicable not only to PI but also to systems that warn about misfiling of fundus images in medical facilities.
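The similarity test described above, a cross-correlation coefficient of vessel-image pixels against each reference with acceptance above a threshold, can be sketched as follows; the tiny "images" and the 0.6 threshold are illustrative, not the paper's operating point.

```python
import math

def cross_correlation(a, b):
    """Cross-correlation coefficient between two vessel images given as
    flattened pixel-value lists (the paper's similarity measure)."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = math.sqrt(sum((x - ma) ** 2 for x in a) *
                    sum((y - mb) ** 2 for y in b))
    return num / den

def identify(input_img, reference_imgs, threshold=0.6):
    """Return the index of the best-matching reference image, or None when
    no similarity exceeds the (illustrative) threshold."""
    scores = [cross_correlation(input_img, ref) for ref in reference_imgs]
    best = max(range(len(scores)), key=lambda i: scores[i])
    return best if scores[best] > threshold else None

# Toy vessel images: probe resembles refs[0] but not refs[1]
refs = [[0, 9, 8, 0, 7, 0], [5, 0, 0, 6, 0, 7]]
probe = [1, 8, 9, 0, 8, 1]
who = identify(probe, refs)
```

In the full system the probe would first be translationally and rotationally registered to each reference before scoring.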

  17. Retinal, anterior segment and full eye imaging using ultrahigh speed swept source OCT with vertical-cavity surface emitting lasers.

    PubMed

    Grulkowski, Ireneusz; Liu, Jonathan J; Potsaid, Benjamin; Jayaraman, Vijaysekhar; Lu, Chen D; Jiang, James; Cable, Alex E; Duker, Jay S; Fujimoto, James G

    2012-11-01

    We demonstrate swept source OCT utilizing vertical-cavity surface emitting laser (VCSEL) technology for in vivo high speed retinal, anterior segment and full eye imaging. The MEMS tunable VCSEL enables long coherence length, adjustable spectral sweep range and adjustable high sweeping rate (50-580 kHz axial scan rate). These features enable integration of multiple ophthalmic applications into one instrument. The operating modes of the device include: ultrahigh speed, high resolution retinal imaging (up to 580 kHz); high speed, long depth range anterior segment imaging (100 kHz) and ultralong range full eye imaging (50 kHz). High speed imaging enables wide-field retinal scanning, while increased light penetration at 1060 nm enables visualization of choroidal vasculature. Comprehensive volumetric data sets of the anterior segment from the cornea to posterior crystalline lens surface are also shown. The adjustable VCSEL sweep range and rate make it possible to achieve an extremely long imaging depth range of ~50 mm, and to demonstrate the first in vivo 3D OCT imaging spanning the entire eye for non-contact measurement of intraocular distances including axial eye length. Swept source OCT with VCSEL technology may be attractive for next generation integrated ophthalmic OCT instruments. PMID:23162712

  19. Thermal Plasma Imager (TPI): An Imaging Thermal Ion Mass and 3-D Velocity Analyzer

    NASA Astrophysics Data System (ADS)

    Yau, A. W.; Amerl, P. V.; King, E. P.; Miyake, W.; Abe, T.

    2003-04-01

    The Thermal Plasma Imager (TPI) is an imaging thermal ion mass and 3-dimensional (3-D) velocity analyzer. It is designed to measure the instantaneous mass composition and detailed, mass-resolved, 3-dimensional, velocity distributions of thermal-energy (0.5-50 eV/q) ions on a 3-axis stabilized spacecraft. It consists of a pair of semi-toroidal deflection and fast-switching time-of-flight (TOF) electrodes, a hemispherical electrostatic analyzer (HEA), and a micro-channel plate (MCP) detector. It uses the TOF electrodes to clock the flight times of individual incident ions, and the HEA to focus ions of a given energy-per-charge and incident angle (elevation and azimuth) onto a single point on the MCP. The TOF/HEA combination produces an instantaneous and mass-resolved "image" of a 2-D cone of the 3-D velocity distribution for each ion species, and combines a sequence of concentric 2-D conical samples into a 3-D distribution covering 360° in azimuth and 120° in elevation. It is currently under development for the Enhanced Polar Outflow Probe (e-POP) and Planet-C Venus missions. It is an improved, "3-dimensional" version of the SS520-2 Thermal Suprathermal Analyzer (TSA), which samples ions in its entrance aperture plane and uses the spacecraft spin to achieve 3-D ion sampling. In this paper, we present its detailed design characteristics and prototype instrument performance, and compare these with the ion velocity measurement performances from its 2-D TSA predecessor on SS520-2.
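The time-of-flight principle behind the mass discrimination: at a fixed energy-per-charge, flight time scales with the square root of the ion mass, so H+ and O+ of the same energy separate cleanly in time. The 0.1 m path length below is an assumed figure for illustration, not the TPI's actual geometry.

```python
import math

E_CHARGE = 1.602176634e-19   # elementary charge, C
AMU = 1.66053906660e-27      # atomic mass unit, kg

def flight_time(mass_amu, energy_eV_per_q, path_m=0.1, charge=1):
    """Time of flight (s) over path_m for an ion of the given mass and
    energy-per-charge; path_m is an assumed instrument dimension."""
    v = math.sqrt(2 * charge * energy_eV_per_q * E_CHARGE / (mass_amu * AMU))
    return path_m / v

# Thermal ions at 10 eV/q (within the 0.5-50 eV/q range quoted above)
t_H = flight_time(1, 10)    # H+
t_O = flight_time(16, 10)   # O+  -> 4x longer, since t scales as sqrt(m)
```

Clocking these arrival times with the fast-switching TOF electrodes is what lets a single energy-focused MCP image be resolved by ion species.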

  20. Automatic 3D ultrasound calibration for image guided therapy using intramodality image registration.

    PubMed

    Schlosser, Jeffrey; Kirmizibayrak, Can; Shamdasani, Vijay; Metz, Steve; Hristov, Dimitre

    2013-11-01

    Many real time ultrasound (US) guided therapies can benefit from management of motion-induced anatomical changes with respect to a previously acquired computerized anatomy model. Spatial calibration is a prerequisite to transforming US image information to the reference frame of the anatomy model. We present a new method for calibrating 3D US volumes using intramodality image registration, derived from the 'hand-eye' calibration technique. The method is fully automated by implementing data rejection based on sensor displacements, automatic registration over overlapping image regions, and a self-consistency error metric evaluated continuously during calibration. We also present a novel method for validating US calibrations based on measurement of physical phantom displacements within US images. Both calibration and validation can be performed on arbitrary phantoms. Results indicate that normalized mutual information and localized cross correlation produce the most accurate 3D US registrations for calibration. Volumetric image alignment is more accurate and reproducible than point selection for validating the calibrations, yielding <1.5 mm root mean square error, a significant improvement relative to previously reported hand-eye US calibration results. Comparison of two different phantoms for calibration and for validation revealed significant differences for validation (p = 0.003) but not for calibration (p = 0.795). PMID:24099806

  3. Hands-on guide for 3D image creation for geological purposes

    NASA Astrophysics Data System (ADS)

    Frehner, Marcel; Tisato, Nicola

    2013-04-01

    Geological structures in outcrops or hand specimens are inherently three dimensional (3D), and therefore better understandable if viewed in 3D. While 3D models can easily be created, manipulated, and looked at from all sides on the computer screen (e.g., using photogrammetry or laser scanning data), 3D visualizations for publications or conference posters are much more challenging as they have to live in a 2D-world (i.e., on a sheet of paper). Perspective 2D visualizations of 3D models do not fully transmit the "feeling and depth of the third dimension" to the audience; but this feeling is desirable for a better examination and understanding in 3D of the structure under consideration. One of the very few possibilities to generate real 3D images, which work on a 2D display, is by using so-called stereoscopic images. Stereoscopic images are two images of the same object recorded from two slightly offset viewpoints. Special glasses and techniques have to be used to make sure that one image is seen only by one eye, and the other image is seen by the other eye, which together lead to the "3D effect". Geoscientists are often familiar with such 3D images. For example, geomorphologists traditionally view stereographic orthophotos by employing a mirror stereoscope. Nowadays, petroleum geoscientists examine high-resolution 3D seismic data sets in special 3D visualization rooms. One of the methods for generating and viewing a stereoscopic image, which does not require a high-tech viewing device, is to create a so-called anaglyph. The principle is to overlay two images saturated in red and cyan, respectively. The two images are then viewed through red-cyan-stereoscopic glasses. This method is simple and cost-effective, but has some drawbacks in preserving colors accurately. A similar method is used in 3D movies, where polarized light or shuttering techniques are used to separate the left from the right image, which allows preserving the original colors.
The advantage of red-cyan anaglyphs is their simplicity and the possibility to print them on normal paper or project them using a conventional projector. Producing 3D stereoscopic images is much easier than commonly thought. Our hands-on poster provides an easy-to-use guide for producing 3D stereoscopic images. A few simple rules of thumb are presented that define how photographs of any scene or object have to be shot to produce good-looking 3D images. We use the free software Stereophotomaker (http://stereo.jpn.org/eng/stphmkr) to produce anaglyphs and provide red-cyan 3D glasses for viewing them. Our hands-on poster is easy to adapt and helps any geologist to present his/her field or hand specimen photographs in a much more fashionable 3D way for future publications or conference posters.
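The red-cyan overlay described above reduces to a per-pixel channel swap: the red channel is taken from the left-eye view and the green and blue (cyan) channels from the right-eye view. A minimal sketch, with tiny hypothetical RGB images:

```python
def make_anaglyph(left, right):
    """Red-cyan anaglyph from two same-size RGB images given as nested lists
    of (r, g, b) tuples: red from the left view, green/blue from the right."""
    return [
        [(l[0], r[1], r[2]) for l, r in zip(lrow, rrow)]
        for lrow, rrow in zip(left, right)
    ]

# Two hypothetical 1x2 RGB images (the left view is shifted slightly,
# as it would be for a stereo pair shot from two offset viewpoints)
left_img  = [[(200, 10, 10), (50, 60, 70)]]
right_img = [[(40, 55, 65), (190, 15, 12)]]
ana = make_anaglyph(left_img, right_img)
```

Viewed through red-cyan glasses, each eye then sees only its own view, which is exactly the channel separation Stereophotomaker automates (along with alignment).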

  4. Beat the MTurkers: Automatic Image Labeling from Weak 3D Supervision Liang-Chieh Chen1

    E-print Network

    Toronto, University of

    Beat the MTurkers: Automatic Image Labeling from Weak 3D Supervision. Liang-Chieh Chen, Sanja Fidler, Alan Yuille, Raquel Urtasun (UCLA; University of Toronto). Abstract: Labeling large-scale datasets with very ...

  5. Head and Neck Lymph Node Region Delineation with 3-D CT Image Registration

    E-print Network

    Shapiro, Linda

    Head and Neck Lymph Node Region Delineation with 3-D CT Image Registration. Chia-Chi Teng, Mary M. ..., Department of Otolaryngology-Head and Neck Surgery and Department of Computer Science, University of Washington.

  6. Server-based approach to web visualization of integrated 3-D medical image data.

    PubMed Central

    Poliakov, A. V.; Albright, E.; Corina, D.; Ojemann, G.; Martin, R. F.; Brinkley, J. F.

    2001-01-01

    Although computer processing power and network bandwidth are rapidly increasing, the average desktop is still not able to rapidly process large datasets such as 3-D medical image volumes. We have therefore developed a server-side approach to this problem, in which a high-performance graphics server accepts commands from web clients to load, process and render 3-D image volumes and models. The renderings are saved as 2-D snapshots on the server, from which they are retrieved and displayed on the client. User interactions with the graphical interface on the client side are translated into additional commands to manipulate the 3-D scene, after which the server re-renders the scene and sends a new image to the client. Example forms-based and Java-based clients are described for a brain mapping application, but the techniques should be applicable to multiple domains where 3-D medical image visualization is of interest. PMID:11825248

  7. A single lens with no moving parts for rapid high-resolution 3D image capture

    NASA Astrophysics Data System (ADS)

    Gray, Dan; Chen, Hongquiang; Czechowski, Joseph; Zhang, Kang; Tu, Jilin; Wheeler, Frederick; Yamada, Masako; Pablo Cilia, Juan; DeMuth, Russell; Heidari, Esmaeil; Abramovich, Gil; Harding, Kevin

    2013-02-01

    There are many visual inspection and sensing applications where both a high-resolution image and a depth map of the imaged object are desirable at high speed. Presently available methods to capture 3D data (stereo cameras and structured illumination) are limited in speed, complexity, and transverse resolution. Additionally, these techniques rely on a separated baseline for triangulation, precluding use in confined spaces. Typically, off-the-shelf lenses are used, and performance in resolution, field of view, and depth of field is sacrificed in order to achieve a useful balance. Here we present a novel lens system with high resolution and a wide field of view for rapid 3D image capture. The design achieves this using a single lens with no moving parts. A depth-from-defocus algorithm is implemented to reconstruct 3D object point clouds, which are matched with a fused image to create a 3D rendered view.
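The record does not give the authors' depth-from-defocus algorithm itself; as a rough illustration of the closely related depth-from-focus idea, the sketch below (NumPy; the function names and the two-plane toy focal stack are assumptions, not the paper's method) scores per-pixel sharpness with a discrete Laplacian across a focal stack and takes the index of the sharpest plane as a coarse depth map:

```python
import numpy as np

def focus_measure(img):
    """Discrete-Laplacian focus measure: large where the image is sharp."""
    return np.abs(4 * img
                  - np.roll(img, 1, 0) - np.roll(img, -1, 0)
                  - np.roll(img, 1, 1) - np.roll(img, -1, 1))

def depth_from_focus(stack):
    """stack: (planes, H, W) focal stack -> per-pixel index of the
    sharpest plane, i.e., a coarse quantized depth map."""
    return np.argmax([focus_measure(s) for s in stack], axis=0)

# Toy stack: a checkerboard patch is sharp in plane 0, washed out in plane 1.
plane0 = np.zeros((10, 10))
plane0[2:6, 2:6] = np.indices((4, 4)).sum(axis=0) % 2  # sharp checkerboard
plane1 = np.zeros((10, 10))
plane1[2:6, 2:6] = 0.5                                 # "defocused" uniform grey
depth = depth_from_focus([plane0, plane1])
```

True depth-from-defocus inverts a blur model from as few as two images rather than searching a dense stack, but the per-pixel sharpness scoring is the common core.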

  8. 3D seismic imaging of buried Younger Dryas mass movement flows: Lake Windermere, UK

    E-print Network

    National Oceanography Centre Southampton

    ... submarine canyons (e.g., Lykousis et al., 2007). While several large submarine landslides have been imaged ... Keywords: high-resolution 3D seismic; submarine landslides; Younger Dryas; Lake District; Windermere; debris ...

  9. Ultra-Shallow Imaging Using 2D & 3D Seismic Reflection Methods

    E-print Network

    Sloan, Steven D.

    2008-01-01

    The research presented in this dissertation focuses on the survey design, acquisition, processing, and interpretation of ultra-shallow seismic reflection (USR) data in two and three dimensions. The application of 3D USR methods to image multiple...

  10. Double depth-enhanced 3D integral imaging in projection-type system without diffuser

    NASA Astrophysics Data System (ADS)

    Zhang, Lei; Jiao, Xiao-xue; Sun, Yu; Xie, Yan; Liu, Shao-peng

    2015-05-01

    Integral imaging is a three-dimensional (3D) display technology that requires no additional viewing equipment. A new system is proposed in this paper which combines the elemental images of real images in real mode (RIRM) with those of virtual images in real mode (VIRM). The real images in real mode are the same as conventional integral images. The virtual images in real mode are obtained by changing the coordinates of the corresponding points in the elemental images, so that they can be reconstructed by the lens array in virtual space. To reduce the spot size of the reconstructed images, the diffuser of conventional integral imaging is omitted in the proposed method; the spot size is then nearly 1/20 of that in the conventional system. An optical integral imaging system is constructed to confirm that the proposed method opens a new way for the application of passive 3D display technology.

  11. Retinal Imaging of Infants on Spectral Domain Optical Coherence Tomography

    PubMed Central

    Vinekar, Anand; Mangalesh, Shwetha; Jayadev, Chaitra; Maldonado, Ramiro S.; Bauer, Noel; Toth, Cynthia A.

    2015-01-01

    Spectral domain coherence tomography (SD OCT) has become an important tool in the management of pediatric retinal diseases. It is a noncontact imaging device that provides detailed assessment of the microanatomy and pathology of the infant retina with a short acquisition time allowing office examination without the requirement of anesthesia. Our understanding of the development and maturation of the infant fovea has been enhanced by SD OCT allowing an in vivo assessment that correlates with histopathology. This has helped us understand the critical correlation of foveal development with visual potential in the first year of life and beyond. In this review, we summarize the recent literature on the clinical applications of SD OCT in studying the pathoanatomy of the infant macula, its ability to detect subclinical features, and its correlation with disease and vision. Retinopathy of prematurity and macular edema have been discussed in detail. The review also summarizes the current status of SD OCT in other infant retinal conditions, imaging the optic nerve, the choroid, and the retinal nerve fibre in infants and children, and suggests future areas of research. PMID:26221606

  12. An image encryption algorithm based on 3D cellular automata and chaotic maps

    NASA Astrophysics Data System (ADS)

    Del Rey, A. Martín; Sánchez, G. Rodríguez

    2015-05-01

    A novel encryption algorithm to cipher digital images is presented in this work. The digital image is rendered into a three-dimensional (3D) lattice and the protocol consists of two phases: a confusion phase in which 24 chaotic cat maps are applied, and a diffusion phase in which a 3D cellular automaton is evolved. The encryption method is shown to be secure against the most important cryptanalytic attacks.
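The abstract names chaotic cat maps as the confusion primitive. A minimal sketch of one such map, assuming the classic 2-D Arnold cat map on a square image (the paper's exact 24 maps and 3D lattice layout are not specified here):

```python
import numpy as np

def cat_map(img, iterations=1):
    """Arnold's cat map on an N x N image: (x, y) -> (x+y mod N, x+2y mod N).
    The transform matrix has determinant 1, so the map is a bijection:
    pixel values are only shuffled (confusion), never changed."""
    n = img.shape[0]
    x, y = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    out = img.copy()
    for _ in range(iterations):
        shuffled = np.empty_like(out)
        shuffled[(x + y) % n, (x + 2 * y) % n] = out[x, y]
        out = shuffled
    return out

img = np.arange(16, dtype=np.uint8).reshape(4, 4)
scrambled = cat_map(img, iterations=2)
```

Because the map is a permutation of pixel positions, the histogram is unchanged; that is exactly why a separate diffusion phase (here, the 3D cellular automaton) is needed to alter pixel values as well.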

  13. Influence of surface material characteristics on laser radar 3D imaging of targets

    NASA Astrophysics Data System (ADS)

    Wang, Biao; Wu, Zhen-Sen; Gong, Yanjun

    2010-11-01

    Characteristics of laser radar (LADAR) 3D images depend on the ladar transceiver, propagation effects, target/beam interactions, and the data processing or detection algorithms. The target/beam interaction is determined by the target's surface optical scattering properties, which can be characterized by the bidirectional reflectivity distribution function (BRDF). Based on alternative monostatic BRDF models, we report here a backscattering model of optical radiation for ladar 3D range imaging. Atmospheric turbulence effects are not treated.

  14. 3D spectral imaging system for anterior chamber metrology

    NASA Astrophysics Data System (ADS)

    Anderson, Trevor; Segref, Armin; Frisken, Grant; Frisken, Steven

    2015-03-01

    Accurate metrology of the anterior chamber of the eye is useful for a number of diagnostic and clinical applications. In particular, accurate corneal topography and corneal thickness data are desirable for fitting contact lenses, screening for diseases and monitoring corneal changes. Anterior OCT systems can be used to measure anterior chamber surfaces; however, accurate curvature measurements from single-point scanning systems are known to be very sensitive to patient movement. To overcome this problem we have developed a parallel 3D spectral metrology system that captures simultaneous A-scans on a 2D lateral grid. This approach enables estimates of the elevation and curvature of anterior and posterior corneal surfaces that are robust to sample movement. Furthermore, multiple simultaneous surface measurements greatly improve the ability to register consecutive frames and enable aggregate measurements over a finer lateral grid. A key element of our approach has been to exploit standard low-cost optical components, including lenslet arrays and a 2D sensor, to provide a path towards low-cost implementation. We demonstrate first prototypes based on a 6 Mpixel sensor using a 250 µm pitch lenslet array with 300 sample beams to achieve an RMS elevation accuracy of 1 µm with 95 dB sensitivity and a 7.0 mm range. Initial tests on porcine eyes, model eyes and calibration spheres demonstrate the validity of the concept. With the next iteration of designs we expect to achieve over 1000 simultaneous A-scans at more than 75 frames per second.

  15. Space Radar Image Isla Isabela in 3-D

    NASA Technical Reports Server (NTRS)

    1999-01-01

    This is a three-dimensional view of Isabela, one of the Galapagos Islands located off the western coast of Ecuador, South America. This view was constructed by overlaying a Spaceborne Imaging Radar-C/X-band Synthetic Aperture Radar (SIR-C/X-SAR) image on a digital elevation map produced by TOPSAR, a prototype airborne interferometric radar which produces simultaneous image and elevation data. The vertical scale in this image is exaggerated by a factor of 1.87. The SIR-C/X-SAR image was taken on the 40th orbit of space shuttle Endeavour. The image is centered at about 0.5 degree south latitude and 91 degrees west longitude and covers an area of 75 by 60 kilometers (47 by 37 miles). The radar incidence angle at the center of the image is about 20 degrees. The western Galapagos Islands, which lie about 1,200 kilometers (750 miles) west of Ecuador in the eastern Pacific, have six active volcanoes similar to the volcanoes found in Hawaii and reflect the volcanic processes that occur where the ocean floor is created. Since the time of Charles Darwin's visit to the area in 1835, there have been more than 60 recorded eruptions on these volcanoes. This SIR-C/X-SAR image of Alcedo and Sierra Negra volcanoes shows the rougher lava flows as bright features, while ash deposits and smooth pahoehoe lava flows appear dark. Vertical exaggeration of relief is a common tool scientists use to detect relationships between structure (for example, faults, and fractures) and topography. Spaceborne Imaging Radar-C and X-Synthetic Aperture Radar (SIR-C/X-SAR) is part of NASA's Mission to Planet Earth. The radars illuminate Earth with microwaves allowing detailed observations at any time, regardless of weather or sunlight conditions. SIR-C/X-SAR uses three microwave wavelengths: L-band (24 cm), C-band (6 cm) and X-band (3 cm). The multi-frequency data will be used by the international scientific community to better understand the global environment and how it is changing. 
The SIR-C/X-SAR data, complemented by aircraft and ground studies, will give scientists clearer insights into those environmental changes which are caused by nature and those changes which are induced by human activity. SIR-C was developed by NASA's Jet Propulsion Laboratory. X-SAR was developed by the Dornier and Alenia Spazio companies for the German space agency, Deutsche Agentur fuer Raumfahrtangelegenheiten (DARA), and the Italian space agency, Agenzia Spaziale Italiana (ASI).

  16. Radar Imaging of Spheres in 3D using MUSIC

    SciTech Connect

    Chambers, D H; Berryman, J G

    2003-01-21

    We have shown that multiple spheres can be imaged by linear and planar EM arrays using only one component of polarization. The imaging approach involves calculating the SVD of the scattering response matrix, selecting a subset of singular values that represents noise, and evaluating the MUSIC functional. The noise threshold applied to the spectrum of singular values for optimal performance is typically around 1%. The resulting signal subspace includes more than one singular value per sphere. The presence of reflections from the ground improves height localization, even for a linear array parallel to the ground. However, the interference between direct and reflected energy modulates the field, creating periodic nulls that can obscure targets in typical images. These nulls are largely eliminated by normalizing the MUSIC functional with the broadside beam pattern of the array. The resulting images show excellent localization for one and two spheres. The performance for the three-sphere configurations is complicated by shadowing effects and the greater range of the third sphere in case 2. Two of the three spheres are easily located by MUSIC, but the third is difficult to distinguish from other local maxima of the complex imaging functional. Improvement is seen when the linear array is replaced with a planar array, which increases the effective aperture height. Further analysis of the singular values and their relationship to modes of scattering from the spheres, as well as better ways to exploit polarization, should improve performance. Work along these lines is currently being pursued by the authors.
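The imaging steps described (SVD of the response matrix, noise-subspace selection by a roughly 1% singular-value threshold, evaluation of the MUSIC functional) can be sketched on a toy problem. Everything below — array geometry, wavenumber, and the Born-approximation response matrix — is an illustrative assumption, not the report's actual configuration:

```python
import numpy as np

# Hypothetical 2-D toy setup: a 17-element line array and two point scatterers.
k = 2 * np.pi  # wavenumber (wavelength = 1)
sensors = np.stack([np.linspace(-4, 4, 17), np.zeros(17)], axis=1)
targets = np.array([[-1.0, 5.0], [2.0, 6.0]])

def steering(p):
    """Free-space steering vector from every sensor to point p."""
    r = np.linalg.norm(sensors - p, axis=1)
    return np.exp(1j * k * r) / r

# Multistatic response matrix under the Born approximation.
K = sum(np.outer(steering(t), steering(t)) for t in targets)

# MUSIC: SVD, then split signal/noise subspaces with a ~1% threshold.
U, s, _ = np.linalg.svd(K)
signal_dim = int(np.sum(s > 0.01 * s[0]))
Un = U[:, signal_dim:]  # noise subspace

def music(p):
    """MUSIC pseudospectrum: large where the steering vector is (nearly)
    orthogonal to the noise subspace, i.e., at scatterer locations."""
    g = steering(p)
    g = g / np.linalg.norm(g)
    return 1.0 / np.linalg.norm(Un.conj().T @ g)
```

Evaluating `music` on a grid produces the imaging functional whose peaks localize the spheres; the broadside beam-pattern normalization mentioned in the report would divide this functional by the array response.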

  17. Parallel line scanning ophthalmoscope for retinal imaging.

    PubMed

    Vienola, Kari V; Damodaran, Mathi; Braaf, Boy; Vermeer, Koenraad A; de Boer, Johannes F

    2015-11-15

    A parallel line scanning ophthalmoscope (PLSO) is presented using a digital micromirror device (DMD) for parallel confocal line imaging of the retina. The posterior part of the eye is illuminated using up to seven parallel lines, which were projected at 100 Hz. The DMD offers a high degree of parallelism in illuminating the retina compared to traditional scanning laser ophthalmoscope systems utilizing scanning mirrors. The system operated at the shot-noise limit with a signal-to-noise ratio of 28 for an optical power measured at the cornea of 100 µW. To demonstrate the imaging capabilities of the system, the macula and the optic nerve head of a healthy volunteer were imaged. Confocal images show good contrast and lateral resolution with a 10°×10° field of view. PMID:26565868

  18. Retinal Vessel Cannulation with an Image-Guided Handheld Robot

    PubMed Central

    Becker, Brian C.; Voros, Sandrine; Lobes, Louis A.; Handa, James T.; Hager, Gregory D.; Riviere, Cameron N.

    2012-01-01

    Cannulation of small retinal vessels is often prohibitively difficult for surgeons, since physiological tremor often exceeds the narrow diameter of the vessel (40–120 µm). Using an active handheld micromanipulator, we introduce an image-guided robotic system that reduces tremor and provides smooth, scaled motion during the procedure. The micromanipulator assists the surgeon during the approach, puncture, and injection stages of the procedure by tracking the pipette and anatomy viewed under the microscope. In experiments performed ex vivo by an experienced retinal surgeon on 40–60 µm vessels in porcine eyes, the success rate was 29% (2/7) without the aid of the system and 63% (5/8) with the aid of the system. PMID:21096274

  19. Blind multispectral image decomposition by 3D nonnegative tensor factorization.

    PubMed

    Kopriva, Ivica; Cichocki, Andrzej

    2009-07-15

    Alpha-divergence-based nonnegative tensor factorization (NTF) is applied to blind multispectral image (MSI) decomposition. The matrix of spectral profiles and the matrix of spatial distributions of the materials resident in the image are identified from the factors in Tucker3 and PARAFAC models. NTF preserves local structure in the MSI that is lost as a result of vectorization of the image when nonnegative matrix factorization (NMF)- or independent component analysis (ICA)-based decompositions are used. Moreover, NTF based on the PARAFAC model is unique up to permutation and scale under mild conditions. To achieve this, NMF- and ICA-based factorizations, respectively, require enforcement of sparseness (orthogonality) and statistical independence constraints on the spatial distributions of the materials resident in the MSI, and these conditions do not hold. We demonstrate efficiency of the NTF-based factorization in relation to NMF- and ICA-based factorizations on blind decomposition of the experimental MSI with the known ground truth. PMID:19823551
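For contrast with the tensor approach, here is a minimal sketch of the NMF baseline the abstract refers to, applied to a vectorized toy "MSI" (classic Lee-Seung multiplicative updates; the synthetic spectra and abundances are assumptions, not the paper's data). This is exactly the vectorization step — pixels flattened into matrix columns — whose loss of local structure motivates NTF:

```python
import numpy as np

def nmf(V, rank, iters=500, seed=0):
    """Lee-Seung multiplicative updates minimizing ||V - W @ H||_F.
    V (m x n, nonnegative) is factorized as W (m x rank) @ H (rank x n);
    the updates preserve nonnegativity of W and H."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, rank)) + 1e-3
    H = rng.random((rank, n)) + 1e-3
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + 1e-12)
        W *= (V @ H.T) / (W @ H @ H.T + 1e-12)
    return W, H

# Toy vectorized "MSI": 3 spectral bands x 100 pixels from 2 materials.
S = np.array([[1.0, 0.1], [0.5, 0.8], [0.1, 1.0]])    # spectra (bands x materials)
A = np.random.default_rng(1).random((2, 100))         # spatial abundances
V = S @ A
W, H = nmf(V, rank=2)
err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
```

As the abstract notes, such a factorization is generally not unique without extra sparseness or independence constraints, whereas the PARAFAC-based NTF is essentially unique under mild conditions.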

  20. OVERALL PROCEDURES PROTOCOL AND PATIENT ENROLLMENT PROTOCOL: TESTING FEASIBILITY OF 3D ULTRASOUND DATA ACQUISITION AND RELIABILITY OF DATA RETRIEVAL FROM STORED 3D IMAGES

    EPA Science Inventory

    The purpose of this study is to examine the feasibility of collecting, transmitting, and analyzing 3-D ultrasound data in the context of a multi-center study of pregnant women. The study will also examine the reliability of measurements obtained from 3-D imag ...

  1. Snapshot 3D optical coherence tomography system using image mapping spectrometry

    PubMed Central

    Nguyen, Thuc-Uyen; Pierce, Mark C; Higgins, Laura; Tkaczyk, Tomasz S

    2013-01-01

    A snapshot 3-dimensional optical coherence tomography system was developed using Image Mapping Spectrometry. This system can give depth information (Z) at different spatial positions (X, Y) within one camera integration time, to potentially reduce motion artifact and enhance throughput. The current (x, y, λ) datacube of (85×356×117) provides a 3D visualization of the sample with 400 µm depth and 13.4 µm transverse resolution. An axial resolution of 16.0 µm can also be achieved in this proof-of-concept system. We present an analysis of the theoretical constraints which will guide development of future systems with increased imaging depth and improved axial and lateral resolutions. PMID:23736629

  2. 3-D capacitance density imaging of fluidized bed

    DOEpatents

    Fasching, George E. (653 Vista Pl., Morgantown, WV 26505)

    1990-01-01

    A three-dimensional capacitance density imaging of a gasified bed or the like in a containment vessel is achieved using a plurality of electrodes provided circumferentially about the bed in levels and along the bed in channels. The electrodes are individually and selectively excited electrically at each level to produce a plurality of current flux field patterns generated in the bed at each level. The current flux field patterns are suitably sensed and a density pattern of the bed at each level determined. By combining the determined density patterns at each level, a three-dimensional density image of the bed is achieved.

  3. QBISM: A Prototype 3-D Medical Image Database System Manish Arya, William Cody, Christos Faloutsos

    E-print Network

    Faloutsos, Christos

    QBISM: A Prototype 3-D Medical Image Database System. Manish Arya, William Cody, Christos Faloutsos ... medical images. Our specific application is the Functional Brain Mapping project at the Laboratory of Neuro ... support queries across multiple medical image studies in a very investigative, interactive, and iterative ...

  4. Pearling: 3D interactive extraction of tubular structures from volumetric images

    E-print Network

    Rossignac, Jarek

    Pearling: 3D interactive extraction of tubular structures from volumetric images. J. Rossignac, B. ... and Reasoning Department, Princeton, NJ 08540. Abstract: This paper presents Pearling, a novel three ... image. Given a user-supplied initialization, Pearling extracts runs of pearls (balls) from the image ...

  5. Integration of virtual and real scenes within an integral 3D imaging environment

    NASA Astrophysics Data System (ADS)

    Ren, Jinsong; Aggoun, Amar; McCormick, Malcolm

    2002-11-01

    The Imaging Technologies group at De Montfort University has developed an integral 3D imaging system, which is seen as the most likely vehicle for 3D television avoiding adverse psychological effects. To create truly engaging three-dimensional television programs, a virtual studio that performs the tasks of generating, editing and integrating 3D content involving virtual and real scenes is required. The paper presents, for the first time, the procedures, factors and methods for integrating computer-generated virtual scenes with real objects captured using the 3D integral imaging camera system. The method of computer generation of 3D integral images, in which the lens array rather than the physical camera is modelled, is described. In the model, each micro-lens that captures different elemental images of the virtual scene is treated as an extended pinhole camera. An integration process named integrated rendering is illustrated. Detailed discussion focuses on depth extraction from captured 3D integral images. The depth calculation method from disparity, and the multiple-baseline method used to improve the precision of depth estimation, are also presented. The concept of colour SSD and its further improvement in precision are proposed and verified.
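The disparity-based depth calculation and multiple-baseline refinement mentioned above can be illustrated on 1-D scanlines. This toy sketch (using the pinhole relation d = f·B/Z; all values, names, and the SSD cost are illustrative assumptions, not the paper's implementation) sums SSD costs over several baselines and keeps the candidate depth with the lowest total cost:

```python
import numpy as np

def multibaseline_depth(ref, others, baselines, f, candidates):
    """Toy multiple-baseline stereo (Okutomi-Kanade style) on 1-D scanlines:
    for each candidate depth Z, shift every auxiliary scanline by its
    predicted disparity d = f*B/Z, sum the SSD against the reference,
    and return the depth with the lowest total cost."""
    costs = []
    for Z in candidates:
        cost = 0.0
        for img, B in zip(others, baselines):
            d = int(round(f * B / Z))  # disparity grows with baseline B
            cost += np.sum((ref[:len(ref) - d] - img[d:]) ** 2)
        costs.append(cost)
    return candidates[int(np.argmin(costs))]

# Synthetic scene at depth Z = 10 observed with focal length f = 100:
rng = np.random.default_rng(0)
ref = rng.random(60)
others = [np.roll(ref, 10), np.roll(ref, 20)]  # disparities f*B/Z for B = 1, 2
z_hat = multibaseline_depth(ref, others, baselines=[1, 2], f=100,
                            candidates=[5, 10, 20])
```

Summing costs across baselines is what removes the ambiguity a single short baseline leaves: a wrong depth may fit one baseline by accident, but rarely all of them.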

  6. Detecting Wedge Shaped Defects in Polarimetric Images of the Retinal Nerve Fiber Layer.

    E-print Network

    van Vliet, Lucas J.

    Detecting Wedge Shaped Defects in Polarimetric Images of the Retinal Nerve Fiber Layer. Koen ... detection of wedge shaped defects in Scanning Laser Polarimetry images of the retinal nerve fiber layer. [Fig. 1. Retardation images: (a) healthy eye; (b) wedge shaped defect, marked by white arrows.]

  7. Laser speckle imaging of rat retinal blood flow with hybrid temporal and spatial analysis method

    E-print Network

    Duong, Timothy Q.

    Laser speckle imaging of rat retinal blood flow with hybrid temporal and spatial analysis method. ... is needed for retinal imaging. Laser speckle imaging (LSI) is such a method. Currently, there are two ... to artifacts from stationary speckle. We proposed a hybrid temporal and spatial analysis method (HTS ...

  8. 3D segmentation and image annotation for quantitative diagnosis in lung CT images with pulmonary lesions

    NASA Astrophysics Data System (ADS)

    Li, Suo; Zhu, Yanjie; Sun, Jianyong; Zhang, Jianguo

    2013-03-01

    Pulmonary nodules and ground glass opacities are highly significant findings in high-resolution computed tomography (HRCT) of patients with pulmonary lesions. The appearances of pulmonary nodules and ground glass opacities show a relationship with different lung diseases. According to the characteristics of the lesion, pertinent segmentation methods and quantitative analysis are helpful for controlling and treating diseases at an earlier and potentially more curable stage. Currently, most studies have focused on two-dimensional quantitative analysis of these kinds of diseases. Compared to two-dimensional images, three-dimensional quantitative analysis can take full advantage of the isotropic image data acquired by thin-slice HRCT and has better quantitative precision for clinical diagnosis. This presentation designs a computer-aided diagnosis component to segment 3D disease areas of nodules and ground glass opacities in lung CT images, and uses AIML (Annotation and Image Markup Language) to annotate the segmented 3D pulmonary lesions with quantitative measurement information, which may provide more features and information to radiologists in clinical diagnosis.

  9. Determining 3-D motion and structure from image sequences

    NASA Technical Reports Server (NTRS)

    Huang, T. S.

    1982-01-01

    A method of determining three-dimensional motion and structure from two image frames is presented. The method requires eight point correspondences between the two frames, from which motion and structure parameters are determined by solving a set of eight linear equations and performing a singular value decomposition of a 3×3 matrix. It is shown that the solution thus obtained is unique.
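The eight-point procedure summarized above — one linear equation per correspondence, then an SVD — can be sketched as follows. The linear estimation step is shown; the synthetic camera motion used to exercise it is an illustrative assumption:

```python
import numpy as np

def essential_from_eight_points(x1, x2):
    """Linear eight-point estimate of the essential matrix E satisfying
    x2_hom^T @ E @ x1_hom = 0 for normalized image coordinates (N x 2)."""
    h1 = np.hstack([x1, np.ones((len(x1), 1))])
    h2 = np.hstack([x2, np.ones((len(x2), 1))])
    # Each correspondence contributes one linear equation in the 9 entries of E.
    A = np.stack([np.outer(p2, p1).ravel() for p1, p2 in zip(h1, h2)])
    _, _, Vt = np.linalg.svd(A)      # null vector of A = flattened E
    E = Vt[-1].reshape(3, 3)
    # Motion (R, t) and structure then follow from the SVD of this 3x3 matrix.
    return E / np.linalg.norm(E)

# Synthetic check: known small rotation about the y-axis plus a translation.
rng = np.random.default_rng(0)
P = rng.uniform([-1, -1, 4], [1, 1, 8], size=(8, 3))  # points, camera-1 frame
a = 0.1
R = np.array([[np.cos(a), 0, np.sin(a)],
              [0, 1, 0],
              [-np.sin(a), 0, np.cos(a)]])
t = np.array([1.0, 0.0, 0.2])
Q = P @ R.T + t                                       # same points, camera-2 frame
x1 = P[:, :2] / P[:, 2:3]                             # normalized projections
x2 = Q[:, :2] / Q[:, 2:3]
E = essential_from_eight_points(x1, x2)
```

With exact correspondences the recovered E satisfies the epipolar constraint to numerical precision; recovering the rotation and translation from E's own SVD is the "3x3 matrix" step the abstract mentions.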

  10. Space Radar Image of Kilauea, Hawaii in 3-D

    NASA Technical Reports Server (NTRS)

    1999-01-01

    This is a three-dimensional perspective view of a false-color image of the eastern part of the Big Island of Hawaii. It was produced using all three radar frequencies -- X-band, C-band and L-band -- from the Spaceborne Imaging Radar-C/X-Band Synthetic Aperture Radar (SIR-C/X-SAR) flying on the space shuttle Endeavour, overlaid on a U.S. Geological Survey digital elevation map. Visible in the center of the image in blue are the summit crater (Kilauea Caldera) which contains the smaller Halemaumau Crater, and the line of collapse craters below them that form the Chain of Craters Road. The image was acquired on April 12, 1994 during orbit 52 of the space shuttle. The area shown is approximately 34 by 57 kilometers (21 by 35 miles) with the top of the image pointing toward northwest. The image is centered at about 155.25 degrees west longitude and 19.5 degrees north latitude. The false colors are created by displaying three radar channels of different frequency. Red areas correspond to high backscatter at L-HV polarization, while green areas exhibit high backscatter at C-HV polarization. Finally, blue shows high return at X-VV polarization. Using this color scheme, the rain forest appears bright on the image, while the green areas correspond to lower vegetation. The lava flows have different colors depending on their types and are easily recognizable due to their shapes. The flows at the top of the image originated from the Mauna Loa volcano. Kilauea volcano has been almost continuously active for more than the last 11 years. Field teams that were on the ground specifically to support these radar observations report that there was vigorous surface activity about 400 meters (one-quarter mile) inland from the coast. A moving lava flow about 200 meters (650 feet) in length was observed at the time of the shuttle overflight, raising the possibility that subsequent images taken during this mission will show changes in the landscape. 
Currently, most of the lava that is erupted travels the 8 kilometers (5 miles) from the Pu'u O'o crater (the active vent) just outside this image to the coast through a series of lava tubes, but in the past there have been many large lava flows that have traveled this distance, destroying houses and parts of the Hawaii Volcanoes National Park. This SIR-C/X-SAR image shows two types of lava flows that are common to Hawaiian volcanoes. Pahoehoe lava flows are relatively smooth, and appear very dark blue because much of the radar energy is reflected away from the radar. In contrast, other lava flows are relatively rough and bounce much of the radar energy back to the radar, making that part of the image bright blue. This radar image is valuable because it allows scientists to study an evolving lava flow field from the Pu'u O'o vent. Much of the area on the northeast side (right) of the volcano is covered with tropical rain forest, and because trees reflect a lot of the radar energy, the forest appears bright in this radar scene. The linear feature running from Kilauea Crater to the right of the image is Highway 11 leading to the city of Hilo which is located just beyond the right edge of this image. Spaceborne Imaging Radar-C and X-Synthetic Aperture Radar (SIR-C/X-SAR) is part of NASA's Mission to Planet Earth. The radars illuminate Earth with microwaves allowing detailed observations at any time, regardless of weather or sunlight conditions. SIR-C/X-SAR uses three microwave wavelengths: L-band (24 cm), C-band (6 cm) and X-band (3 cm). The multi-frequency data will be used by the international scientific community to better understand the global environment and how it is changing. The SIR-C/X-SAR data, complemented by aircraft and ground studies, will give scientists clearer insights into those environmental changes which are caused by nature and those changes which are induced by human activity. SIR-C was developed by NASA's Jet Propulsion Laboratory. 
X-SAR was developed by the Dornier and Alenia Spazio companies for the German space agency, Deutsche Agentur fuer Raumfahrtangelegenheiten (DARA), and the Italian space agency, Agenzia Spaziale Italiana (ASI).

  11. Fast 3-D optical imaging with transient fluorescence signals

    E-print Network

    Guo, Zhixiong "James"

    ... in the imaging of a small cubic tumor embedded in a cubical tissue phantom with a preassigned uptake distribution ...

  12. Registration and 3D visualization of large microscopy images

    NASA Astrophysics Data System (ADS)

    Mosaliganti, Kishore; Pan, Tony; Sharp, Richard; Ridgway, Randall; Iyengar, Srivathsan; Gulacy, Alexandra; Wenzel, Pamela; de Bruin, Alain; Machiraju, Raghu; Huang, Kun; Leone, Gustavo; Saltz, Joel

    2006-03-01

    Inactivation of the retinoblastoma gene in mouse embryos causes tissue infiltrations into critical sections of the placenta, which has been shown to affect fetal survivability. Our collaborators in cancer genetics are extremely interested in examining the three-dimensional nature of these infiltrations given a stack of two-dimensional light microscopy images. Three sets of wildtype and mutant placentas were sectioned serially and digitized using a commercial light microscopy scanner. Each individual placenta dataset consisted of approximately 1000 images totaling 700 GB in size, which were registered into a volumetric dataset using the National Library of Medicine's (NIH/NLM) Insight Segmentation and Registration Toolkit (ITK). This paper describes our method for image registration to aid in volume visualization of tissue-level intermixing for both wildtype and Rb− specimens. The registration process faces many challenges arising from the large image sizes, damage during sectioning, staining gradients both within and across sections, and background noise. These issues limit the direct application of standard registration techniques due to frequent convergence to local solutions. In this work, we develop a mixture of automated and semi-automated enhancements with ground-truth validation for the mutual information-based registration algorithm. Our final volume renderings clearly show tissue intermixing differences between wildtype and Rb− specimens which are not obvious prior to registration.
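A minimal sketch of the mutual-information similarity measure underlying the registration described above (the histogram bin count and variable names are assumptions; real pipelines such as ITK add interpolation and optimization on top of this):

```python
import numpy as np

def mutual_information(a, b, bins=16):
    """Mutual information (in nats) between two equally shaped images,
    estimated from their joint intensity histogram. Registration seeks the
    transform of one image that maximizes this value against the other."""
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = hist / hist.sum()                      # joint distribution
    px = pxy.sum(axis=1, keepdims=True)          # marginal of a
    py = pxy.sum(axis=0, keepdims=True)          # marginal of b
    nz = pxy > 0                                 # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

rng = np.random.default_rng(0)
img = rng.random((32, 32))
shuffled = rng.permutation(img.ravel()).reshape(32, 32)
mi_self = mutual_information(img, img)           # perfectly aligned
mi_rand = mutual_information(img, shuffled)      # statistically unrelated
```

Because it depends only on the joint intensity statistics, not on intensities matching directly, mutual information tolerates the staining gradients across sections that defeat simpler correlation measures.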

  13. A dual-modal retinal imaging system with adaptive optics

    PubMed Central

    Meadway, Alexander; Girkin, Christopher A.; Zhang, Yuhua

    2013-01-01

    An adaptive optics scanning laser ophthalmoscope (AO-SLO) is adapted to provide optical coherence tomography (OCT) imaging. The AO-SLO function is unchanged. The system uses the same light source, scanning optics, and adaptive optics in both imaging modes. The result is a dual-modal system that can acquire retinal images in both en face and cross-section planes at the single cell level. A new spectral shaping method is developed to reduce the large sidelobes in the coherence profile of the OCT imaging when a non-ideal source is used with a minimal introduction of noise. The technique uses a combination of two existing digital techniques. The thickness and position of the traditionally named inner segment/outer segment junction are measured from individual photoreceptors. In-vivo images of healthy and diseased human retinas are demonstrated. PMID:24514529

  14. Contrast-based sensorless adaptive optics for retinal imaging

    PubMed Central

    Zhou, Xiaolin; Bedggood, Phillip; Bui, Bang; Nguyen, Christine T.O.; He, Zheng; Metha, Andrew

    2015-01-01

    Conventional adaptive optics ophthalmoscopes use wavefront sensing methods to characterize ocular aberrations for real-time correction. However, there are important situations in which the wavefront sensing step is susceptible to difficulties that affect the accuracy of the correction. To circumvent these difficulties, wavefront sensorless adaptive optics (or non-wavefront-sensing AO; NS-AO) imaging has recently been developed and applied to point-scanning retinal imaging modalities. In this study we show, for the first time, contrast-based NS-AO ophthalmoscopy for full-frame in vivo imaging of human and animal eyes. We propose a robust image quality metric that could be used for any imaging modality, and test its performance against other metrics using (physical) model eyes. PMID:26417525
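
    The contrast-maximization loop of sensorless AO can be sketched with a toy model in which a hypothetical corrector coefficient `c` must match an unknown aberration (the coefficient, the RMS-contrast metric, and the Gaussian-blur aberration model are all illustrative assumptions, not the paper's metric):

```python
import numpy as np

def contrast(img):
    """Simple image-quality metric: RMS contrast (std / mean)."""
    return img.std() / img.mean()

def blur(img, width):
    """Toy aberration model: Gaussian low-pass in the Fourier domain."""
    fy = np.fft.fftfreq(img.shape[0])[:, None]
    fx = np.fft.fftfreq(img.shape[1])[None, :]
    h = np.exp(-2 * (np.pi * width) ** 2 * (fx ** 2 + fy ** 2))
    return np.real(np.fft.ifft2(np.fft.fft2(img) * h))

rng = np.random.default_rng(1)
scene = rng.random((64, 64))
true_coeff = 0.7   # hypothetical corrector setting that nulls the aberration

def metric(c):
    return contrast(blur(scene, abs(c - true_coeff)))

# Sensorless AO: scan the corrector and keep the setting with best contrast.
trials = np.linspace(0.0, 2.0, 41)
best = trials[np.argmax([metric(c) for c in trials])]
```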

  15. Applications of Panoramic Images: from 720° Panorama to Interior 3d Models of Augmented Reality

    NASA Astrophysics Data System (ADS)

    Lee, I.-C.; Tsai, F.

    2015-05-01

    A series of panoramic images is usually used to generate a 720° panorama image. Although panoramic images are typically used for establishing tour guiding systems, in this research we demonstrate the potential of using panoramic images acquired from multiple sites to create not only a 720° panorama, but also three-dimensional (3D) point clouds and 3D indoor models. Since 3D modeling is one of the goals of this research, the locations of the panoramic sites needed to be carefully planned in order to maintain a robust result for close-range photogrammetry. After the images are acquired, they are processed into 720° panoramas, and these panoramas can be used directly as panorama guiding systems or in other applications. In addition to these straightforward applications, interior orientation parameters can also be estimated while generating the 720° panorama. These parameters are the focal length, principal point, and lens radial distortion. The panoramic images can then be processed with close-range photogrammetry procedures to extract the exterior orientation parameters and generate 3D point clouds. In this research, VisualSFM, a structure-from-motion software package, is used to estimate the exterior orientation, and the CMVS toolkit is used to generate 3D point clouds. Next, the 3D point clouds are used as references to create building interior models. In this research, Trimble SketchUp was used to build the model, and the 3D point cloud was used to determine the locations of building objects via a plane-finding procedure. In the texturing process, the panorama images are used as the data source for creating model textures. This 3D indoor model was used as an augmented reality model replacing a guide map or a floor plan commonly used in an on-line touring guide system. The 3D indoor model generating procedure has been utilized in two research projects: a cultural heritage site at Kinmen, and the Taipei Main Station pedestrian zone guidance and navigation system.
    The results presented in this paper demonstrate the potential of using panoramic images to generate 3D point clouds and 3D models. However, it is currently a manual and labor-intensive process, and research is being carried out to increase the degree of automation of these procedures.
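
    The interior-orientation parameters mentioned above (focal length, principal point, radial lens distortion) enter the camera model roughly as follows; this is a standard one-coefficient sketch, not the calibration model of the specific software used in the paper:

```python
def project(X, Y, Z, f, cx, cy, k1):
    """Pinhole projection with one-term radial distortion.
    f: focal length (px); (cx, cy): principal point; k1: radial coefficient."""
    x, y = X / Z, Y / Z                 # normalized image coordinates
    r2 = x * x + y * y
    d = 1.0 + k1 * r2                   # radial distortion factor
    return f * x * d + cx, f * y * d + cy

# With k1 = 0 the model reduces to the ideal pinhole camera.
u0, v0 = project(1.0, 2.0, 10.0, 800.0, 320.0, 240.0, 0.0)
# Barrel distortion (k1 < 0) pulls the point toward the principal point.
u1, v1 = project(1.0, 2.0, 10.0, 800.0, 320.0, 240.0, -0.1)
```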

  16. Digital holography particle image velocimetry for the measurement of 3D t-3c flows

    NASA Astrophysics Data System (ADS)

    Shen, Gongxin; Wei, Runjie

    2005-10-01

    In this paper a digital in-line holographic recording and reconstruction system was set up and used for particle image velocimetry in 3D t-3c flow measurements (three-component (3c) velocity vector fields measured in a three-dimensional (3D) space with time history (t)), forming a new full-flow-field experimental technique: digital holographic particle image velocimetry (DHPIV). The traditional holographic film was replaced by a CCD chip that instantaneously records the interference fringes directly, without darkroom processing, and virtual image slices at different positions were reconstructed computationally from the digital holographic image using the Fresnel-Kirchhoff integral method. A complex-field signal filter (an analyzing image computed from the intensity and phase of the real and imaginary parts of the fast Fourier transform (FFT)) was also applied in image reconstruction to achieve a thin focal depth in the image field, which strongly affects the resolution of the vertical velocity component. Using frame-straddling CCD techniques, the 3c velocity vectors were computed by 3D cross-correlation via spatial interrogation-block matching across the reconstructed image slices with the digital complex-field signal filter. The 3D-3c velocity field (about 20,000 vectors), 3D streamline and 3D vorticity fields, and time-evolution movies (30 fields/s) for the 3D t-3c flows were then produced from experimental measurements using this DHPIV method.
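
    The numerical reconstruction of image slices from a digital hologram can be sketched with an angular-spectrum propagator, a common FFT-based implementation of Fresnel diffraction (illustrative parameters, not the authors' exact Fresnel-Kirchhoff code):

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, dx, z):
    """Reconstruct the complex field at distance z from the recorded hologram
    plane using the angular-spectrum transfer function."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)        # spatial frequencies for pixel pitch dx
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    kz = 2 * np.pi / wavelength * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * kz * z)             # unitary for propagating waves
    return np.fft.ifft2(np.fft.fft2(field) * H)

# Evaluating at successive z values yields the stack of reconstructed slices.
rng = np.random.default_rng(2)
hologram = rng.random((128, 128)).astype(complex)
slice_z = angular_spectrum_propagate(hologram, 0.5e-6, 5e-6, 1e-3)
```

    For these parameters every sampled frequency propagates (|H| = 1), so the reconstruction conserves energy between planes.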

  17. Space Radar Image of Missoula, Montana in 3-D

    NASA Technical Reports Server (NTRS)

    1994-01-01

    This is a three-dimensional perspective view of Missoula, Montana, created by combining two spaceborne radar images using a technique known as interferometry. Visualizations like this are useful because they show scientists the shapes of topographic features such as mountains and valleys. This technique helps to clarify the relationships of the different types of materials on the surface detected by the radar. The view is looking north-northeast. The blue circular area at the lower left corner is a bend of the Bitterroot River just before it joins the Clark Fork, which runs through the city. Crossing the Bitterroot River is the bridge of U.S. Highway 93. The highest mountains in this image are at elevations of 2,200 meters (7,200 feet). The city is about 975 meters (3,200 feet) above sea level. The bright yellow areas are urban and suburban zones, dark brown and blue-green areas are grasslands, bright green areas are farms, light brown and purple areas are scrub and forest, and bright white and blue areas are steep rocky slopes. The two radar images were taken on successive days by the Spaceborne Imaging Radar-C/X-band Synthetic Aperture Radar (SIR-C/X-SAR) onboard the space shuttle Endeavour in October 1994. The digital elevation map was produced using radar interferometry, a process in which radar data acquired on different passes of the space shuttle are compared to obtain elevation information. Radar image data are draped over the topography to provide the color, with the following assignments: red is L-band vertically transmitted, vertically received; green is C-band vertically transmitted, vertically received; and blue represents differences seen in the L-band data between the two days. This image is centered near 46.9 degrees north latitude and 114.1 degrees west longitude. No vertical exaggeration factor has been applied to the data.
SIR-C/X-SAR, a joint mission of the German, Italian and United States space agencies, is part of NASA's Mission to Planet Earth program.
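
    The interferometric principle, inferring elevation from the per-pixel phase difference between two complex radar acquisitions, reduces to a short sketch (a toy illustration; real SIR-C/X-SAR processing also requires coregistration, flat-earth phase removal, and phase unwrapping):

```python
import numpy as np

# Two complex SAR acquisitions of the same scene: identical backscatter
# amplitude, but the second pass carries an extra path-length phase that
# encodes topography.
rng = np.random.default_rng(3)
amplitude = rng.random((32, 32)) + 0.1
true_phase = np.linspace(0, np.pi / 2, 32)[None, :] * np.ones((32, 1))

pass1 = amplitude * np.exp(1j * 0.0)
pass2 = amplitude * np.exp(1j * true_phase)

# The interferogram is one image times the complex conjugate of the other;
# its argument is the phase difference (exact here because |phase| < pi,
# i.e. no unwrapping is needed).
interferogram = pass1 * np.conj(pass2)
recovered = -np.angle(interferogram)
```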

  18. Space Radar Image of Karakax Valley, China 3-D

    NASA Technical Reports Server (NTRS)

    1994-01-01

    This three-dimensional perspective of the remote Karakax Valley in the northern Tibetan Plateau of western China was created by combining two spaceborne radar images using a technique known as interferometry. Visualizations like this are helpful to scientists because they reveal where the slopes of the valley are cut by erosion, as well as the accumulations of gravel deposits at the base of the mountains. These gravel deposits, called alluvial fans, are a common landform in desert regions that scientists are mapping in order to learn more about Earth's past climate changes. Higher up the valley side, just below the ridge line, is a clear, straight break in the slope. This is the trace of the Altyn Tagh fault, which is much longer than California's San Andreas fault. Geophysicists are studying this fault for clues about the behavior of large faults. Elevations range from 4000 m (13,100 ft) in the valley to over 6000 m (19,700 ft) at the peaks of the glaciated Kun Lun mountains running from the front right towards the back. Scale varies in this perspective view, but the area is about 20 km (12 miles) wide in the middle of the image, and there is no vertical exaggeration. The two radar images were acquired on separate days during the second flight of the Spaceborne Imaging Radar-C/X-band Synthetic Aperture Radar (SIR-C/X-SAR) aboard the space shuttle Endeavour in October 1994. The interferometry technique provides elevation measurements of all points in the scene. The resulting digital topographic map was used to create this view, looking northwest from high over the valley. Variations in the colors can be related to gravel, sand and rock outcrops. This image is centered at 36.1 degrees north latitude, 79.2 degrees east longitude.
Radar image data are draped over the topography to provide the color with the following assignments: Red is L-band vertically transmitted, vertically received; green is the average of L-band vertically transmitted, vertically received and C-band vertically transmitted, vertically received; and blue is C-band vertically transmitted, vertically received. SIR-C/X-SAR, a joint mission of the German, Italian and United States space agencies, is part of NASA's Mission to Planet Earth.

  19. Image-based indoor localization system based on 3D SfM model

    NASA Astrophysics Data System (ADS)

    Lu, Guoyu; Kambhamettu, Chandra

    2013-12-01

    Indoor localization is an important research topic for both the robotics and signal processing communities. In recent years, image-based localization has also been employed in indoor environments because the necessary equipment is readily available. After an image is captured and matched against an image database, the best matching image is returned along with navigation information. By further allowing camera pose estimation, an image-based localization system that uses a Structure-from-Motion (SfM) reconstruction model can achieve higher accuracy than methods that search a 2D image database. However, this emerging technique has so far been applied only to outdoor environments. In this paper, we introduce 3D SfM model based image localization to the indoor localization task. We capture images of the indoor environment and reconstruct the 3D model. For the localization task, we simply match images captured by a mobile device against the 3D reconstructed model to localize the image. In this process, we use visual words and approximate nearest neighbor methods to accelerate the process of finding the query feature's correspondences. Within each visual word, we conduct a linear search to detect correspondences. From the experiments, we find that the image-based localization method based on the 3D SfM model gives good localization results in terms of both accuracy and speed.
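
    The visual-word acceleration described above, quantizing descriptors so a query is linearly searched only against database features sharing its word, can be sketched as follows (random toy descriptors; a real system would train the vocabulary with k-means on SIFT features):

```python
import numpy as np

rng = np.random.default_rng(4)
words = rng.random((8, 16))      # visual-word centers (the vocabulary)
db = rng.random((200, 16))       # descriptors from the 3D SfM model

# Offline: assign every database descriptor to its nearest visual word.
db_word = np.argmin(((db[:, None, :] - words[None, :, :]) ** 2).sum(-1), axis=1)

def match(query):
    """Quantize the query to its visual word, then linearly search only the
    database descriptors assigned to that word."""
    w = np.argmin(((words - query) ** 2).sum(-1))
    cand = np.where(db_word == w)[0]
    best = cand[np.argmin(((db[cand] - query) ** 2).sum(-1))]
    return int(best)

# Querying with a stored descriptor returns that descriptor itself.
```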

  20. Mixed reality orthognathic surgical simulation by entity model manipulation and 3D-image display

    NASA Astrophysics Data System (ADS)

    Shimonagayoshi, Tatsunari; Aoki, Yoshimitsu; Fushima, Kenji; Kobayashi, Masaru

    2005-12-01

    In orthognathic surgery, 3D surgical planning that considers the balance between the front and back positions and the symmetry of the jawbone, as well as the dental occlusion of the teeth, is essential. In this study, a support system for orthognathic surgery has been developed to visualize the changes in the mandible and the occlusal condition and to determine the optimum position in mandibular osteotomy. By integrating the operating portion, in which the optimum occlusal position is determined by manipulating the entity (physical) tooth model, with simultaneously displayed real-time 3D-CT skeletal images (the 3D image display portion), the mandibular position and posture can be determined while accounting for the improvement of skeletal morphology and occlusal condition. The realistic manipulation of the entity model combined with the virtual 3D image display enabled the construction of a surgical simulation system that involves augmented reality.

  1. 3D reconstruction of SEM images by use of optical photogrammetry software.

    PubMed

    Eulitz, Mona; Reiss, Gebhard

    2015-08-01

    Reconstruction of the three-dimensional (3D) surface of an object is widely used for structural analysis in science, and many biological questions require information about the true 3D structure of specimens. For scanning electron microscopy (SEM) there has been no efficient non-destructive solution for reconstructing the surface morphology to date. The well-known method of recording stereo pair images generates a stereoscopic 3D reconstruction of a section, but not of the complete sample surface. We present a simple and non-destructive method of 3D surface reconstruction from SEM samples based on the principles of optical close-range photogrammetry, in which a series of overlapping photos is used to generate a 3D model of an object's surface. We adapted this method to the special requirements of SEM: instead of moving a detector around the object, the object itself was rotated. A series of overlapping photos was stitched and converted into a 3D model using software commonly used for optical photogrammetry. A rabbit kidney glomerulus was used to demonstrate the workflow of this adaptation. The reconstruction produced a realistic and high-resolution 3D mesh model of the glomerular surface. The study showed that SEM micrographs are suitable for 3D reconstruction by optical photogrammetry. This new approach is a simple and useful method of 3D surface reconstruction suitable for various applications in research and teaching. PMID:26073969

  2. Mechanically assisted 3D prostate ultrasound imaging and biopsy needle-guidance system

    NASA Astrophysics Data System (ADS)

    Bax, Jeffrey; Williams, Jackie; Cool, Derek; Gardi, Lori; Montreuil, Jacques; Karnik, Vaishali; Sherebrin, Shi; Romagnoli, Cesare; Fenster, Aaron

    2010-02-01

    Prostate biopsy procedures are currently limited to using 2D transrectal ultrasound (TRUS) imaging to guide the biopsy needle. Being limited to 2D causes ambiguity in needle guidance and provides an insufficient record for guiding the needle back to suspicious locations, or for avoiding regions that were negative in previous biopsy sessions. We have developed a mechanically assisted 3D ultrasound imaging and needle tracking system, which supports a commercially available TRUS probe and an integrated needle guide for prostate biopsy. The mechanical device is fixed to a cart, and the mechanical tracking linkage allows its joints to be manually manipulated while fully supporting the weight of the ultrasound probe. A computer interface is provided to track the needle trajectory and display its path on a corresponding 3D TRUS image, allowing the physician to aim the needle guide at predefined targets within the prostate. The system has been designed for use with several end-fired transducers that can be rotated about the longitudinal axis of the probe to generate a 3D image for navigation. Using the system, 3D TRUS prostate images can be generated in approximately 10 seconds. The system removes most of the user variability of conventional hand-held probes, which makes them unsuitable for precision biopsy, while preserving user familiarity and procedural workflow. In this paper, we describe the 3D TRUS guided biopsy system and report on its initial clinical use for prostate biopsy.
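
    Reconstructing a 3D volume from 2D frames acquired while rotating an end-fired probe amounts to mapping each Cartesian voxel back to its nearest acquired slice. A simplified nearest-neighbor sketch (the actual system presumably uses calibrated geometry and interpolation):

```python
import numpy as np

def reconstruct_rotational(slices, size=64):
    """Nearest-slice reconstruction of a 3D volume from 2D frames acquired
    while rotating a probe about its long (z) axis.
    slices: array (n_angles, n_r, n_z); frame i was taken at angle i*pi/n_angles
    (a half-turn covers the volume because each frame spans the full diameter)."""
    n_angles, n_r, n_z = slices.shape
    vol = np.zeros((size, size, n_z))
    c = (size - 1) / 2.0
    for ix in range(size):
        for iy in range(size):
            x, y = ix - c, iy - c
            theta = np.arctan2(y, x) % np.pi          # fold onto the half-turn
            i = int(round(theta / (np.pi / n_angles))) % n_angles
            r = int(round(np.hypot(x, y)))
            if r < n_r:
                vol[ix, iy, :] = slices[i, r, :]
    return vol

# Sanity check: constant-valued frames must reconstruct a constant cylinder.
frames = np.full((90, 32, 16), 7.0)
volume = reconstruct_rotational(frames)
```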

  3. Medical image retrieval system using multiple features from 3D ROIs

    NASA Astrophysics Data System (ADS)

    Lu, Hongbing; Wang, Weiwei; Liao, Qimei; Zhang, Guopeng; Zhou, Zhiming

    2012-02-01

    Compared to retrieval using global image features, features extracted from regions of interest (ROIs) that reflect the distribution patterns of abnormalities would benefit content-based medical image retrieval (CBMIR) systems more. Currently, most CBMIR systems have been designed for 2D ROIs, which cannot comprehensively reflect 3D anatomical features and the regional distribution of lesions. To further improve retrieval accuracy, we proposed a retrieval method built on our previous 2D medical image retrieval system that uses 3D features extracted from 3D ROIs, including both geometric features, such as shape index (SI) and curvedness (CV), and texture features derived from the 3D gray-level co-occurrence matrix. The system was evaluated with 20 volumetric CT datasets for colon polyp detection. Preliminary experiments indicated that integrating morphological features with texture features greatly improved retrieval performance. Retrieval using features extracted from 3D ROIs accorded better with the diagnosis from optical colonoscopy than retrieval based on features from 2D ROIs. With the test database of images, the average accuracy rate for the 3D retrieval method was 76.6%, indicating its potential value in clinical application.
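
    The geometric features named above follow Koenderink's standard definitions; a sketch assuming the principal curvatures k1 >= k2 have already been estimated from the 3D ROI surface:

```python
import numpy as np

def shape_index(k1, k2):
    """Koenderink shape index in [-1, 1] from principal curvatures (k1 >= k2).
    Using arctan2 handles the umbilic case k1 == k2 cleanly."""
    return (2.0 / np.pi) * np.arctan2(k1 + k2, k1 - k2)

def curvedness(k1, k2):
    """Koenderink curvedness: overall magnitude of surface bending."""
    return np.sqrt((k1 ** 2 + k2 ** 2) / 2.0)

# Sphere cap (k1 = k2 = 1): SI = 1.  Symmetric saddle (k1 = 1, k2 = -1): SI = 0.
# A flat plane has curvedness 0; curvedness grows with how strongly the
# surface bends, independent of the bending's shape class.
```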

  4. Fully automatic and robust 3D registration of serial-section microscopic images

    PubMed Central

    Wang, Ching-Wei; Budiman Gosno, Eric; Li, Yen-Sheng

    2015-01-01

    Robust and fully automatic 3D registration of serial-section microscopic images is critical for detailed anatomical reconstruction of large biological specimens, such as reconstructions of dense neuronal tissues or 3D histology reconstructions that yield new structural insights. However, robust and fully automatic 3D image registration of biological data is difficult due to complex deformations, unbalanced staining, and variations in data appearance. This study presents a fully automatic and robust 3D registration technique for microscopic image reconstruction, and we demonstrate our method on two ssTEM datasets of Drosophila brain neural tissues, serial confocal laser scanning microscopic images of a Drosophila brain, serial histopathological images of renal cortical tissues, and a synthetic test case. The results show that the presented fully automatic method is promising for reassembling continuous volumes and minimizing artificial deformations for all data, and it outperforms four state-of-the-art 3D registration techniques, consistently producing solid 3D reconstructed anatomies with fewer discontinuities and deformations. PMID:26449756

  5. Reconstruction of pediatric 3D blood vessel images from biplane angiograms

    NASA Astrophysics Data System (ADS)

    Oishi, Satoru; Nishiki, Masayuki; Asahina, Hiroshi; Tanabe, Chiharu; Yasunaga, Kunihiro; Nakamura, Hiroharu

    1996-04-01

    In pediatric cardiac angiography, there are several peculiarities, such as limitations on both the x-ray dose and the amount of contrast medium, in comparison with conventional angiography. Due to these peculiarities, catheter examinations are accomplished in a short time with a biplane x-ray apparatus. Thus, it is often difficult to determine the 3D structures of blood vessels, especially those of pediatric anomalies. Therefore, a new 3D reconstruction method based on selective biplane angiography was developed in order to support diagnosis and surgical planning. The method is composed of two stages: particular reconstruction, in which each individual 3D image is reconstructed, and composition, in which all 3D images are merged into a standard coordinate system. The method was applied to phantom images and clinical images for evaluation. The 3D image of the clinical data was reconstructed accurately, as its structures matched the real structures described in the operative findings. The 3D visualization based on this method is helpful for the diagnosis and surgical planning of complicated anomalies in pediatric cardiology.

  6. Fully automatic and robust 3D registration of serial-section microscopic images.

    PubMed

    Wang, Ching-Wei; Budiman Gosno, Eric; Li, Yen-Sheng

    2015-01-01

    Robust and fully automatic 3D registration of serial-section microscopic images is critical for detailed anatomical reconstruction of large biological specimens, such as reconstructions of dense neuronal tissues or 3D histology reconstructions that yield new structural insights. However, robust and fully automatic 3D image registration of biological data is difficult due to complex deformations, unbalanced staining, and variations in data appearance. This study presents a fully automatic and robust 3D registration technique for microscopic image reconstruction, and we demonstrate our method on two ssTEM datasets of Drosophila brain neural tissues, serial confocal laser scanning microscopic images of a Drosophila brain, serial histopathological images of renal cortical tissues, and a synthetic test case. The results show that the presented fully automatic method is promising for reassembling continuous volumes and minimizing artificial deformations for all data, and it outperforms four state-of-the-art 3D registration techniques, consistently producing solid 3D reconstructed anatomies with fewer discontinuities and deformations. PMID:26449756

  7. Deformation analysis of 3D tagged cardiac images using an optical flow method

    PubMed Central

    2010-01-01

    Background This study proposes and validates a method of measuring 3D strain in myocardium using a 3D cardiovascular magnetic resonance (CMR) tissue-tagging sequence and a 3D optical flow method (OFM). Methods Initially, a 3D tag MR sequence was developed, and the parameters of the sequence and the 3D OFM were optimized using phantom images with simulated deformation. The method was then validated in vivo and utilized to quantify normal sheep left ventricular function. Results Optimizing imaging and OFM parameters in the phantom study produced sub-pixel root-mean-square (RMS) errors between the estimated and known displacements in the x (RMSx = 0.62 pixels (0.43 mm)), y (RMSy = 0.64 pixels (0.45 mm)) and z (RMSz = 0.68 pixels (1 mm)) directions, respectively. In vivo validation demonstrated excellent correlation between the displacement measured by manually tracking tag intersections and that generated by 3D OFM (R ≥ 0.98). Technique performance was maintained even with 20% Gaussian noise added to the phantom images. Furthermore, 3D tracking of 3D cardiac motion resulted in a 51% decrease in in-plane tracking error as compared to 2D tracking. The in vivo function studies showed that maximum wall thickening was greatest in the lateral wall and increased from both apex and base towards the mid-ventricular region. Regional deformation patterns are in agreement with previous studies on LV function. Conclusion A novel method was developed to measure 3D LV wall deformation rapidly, with high in-plane and through-plane resolution, from one 3D cine acquisition. PMID:20353600

  8. 3D fluoroscopic image estimation using patient-specific 4DCBCT-based motion models

    NASA Astrophysics Data System (ADS)

    Dhou, S.; Hurwitz, M.; Mishra, P.; Cai, W.; Rottmann, J.; Li, R.; Williams, C.; Wagar, M.; Berbeco, R.; Ionascu, D.; Lewis, J. H.

    2015-05-01

    3D fluoroscopic images represent volumetric patient anatomy during treatment with high spatial and temporal resolution. 3D fluoroscopic images estimated using motion models built from 4DCT images, taken days or weeks prior to treatment, do not reliably represent patient anatomy during treatment. In this study we developed and performed initial evaluation of techniques to build patient-specific motion models from 4D cone-beam CT (4DCBCT) images, taken immediately before treatment, and used these models to estimate 3D fluoroscopic images based on 2D kV projections captured during treatment. We evaluate the accuracy of the 3D fluoroscopic images by comparison to ground truth digital and physical phantom images. The performance of 4DCBCT-based and 4DCT-based motion models is compared in simulated clinical situations representing tumor baseline shift or initial patient positioning errors. The results of this study demonstrate the ability of 4DCBCT imaging to generate motion models that can account for changes that cannot be accounted for with 4DCT-based motion models. When simulating tumor baseline shifts and patient positioning errors of up to 5 mm, the average tumor localization error and the 95th percentile error in six datasets were 1.20 and 2.2 mm, respectively, for 4DCBCT-based motion models. 4DCT-based motion models applied to the same six datasets resulted in an average tumor localization error and 95th percentile error of 4.18 and 5.4 mm, respectively. Analysis of voxel-wise intensity differences was also conducted for all experiments. In summary, this study demonstrates the feasibility of 4DCBCT-based 3D fluoroscopic image generation in digital and physical phantoms and shows the potential advantage of 4DCBCT-based 3D fluoroscopic image estimation when there are changes in anatomy between the time of 4DCT imaging and the time of treatment delivery.
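
    A motion model of the kind described, in which principal components of the 4D displacement fields parameterize patient anatomy, can be sketched with PCA via SVD (synthetic low-rank displacement data; the published method additionally fits the component weights to the 2D kV projections, which is not shown here):

```python
import numpy as np

rng = np.random.default_rng(5)
n_phases, n_voxels = 10, 300
# Synthetic training data: each breathing phase's displacement field is a
# combination of two underlying motion modes.
basis_true = rng.standard_normal((2, n_voxels))
weights_true = rng.standard_normal((n_phases, 2))
fields = weights_true @ basis_true        # (n_phases, n_voxels) training fields

# Build the motion model: mean field plus top principal components.
mean = fields.mean(axis=0)
U, S, Vt = np.linalg.svd(fields - mean, full_matrices=False)
components = Vt[:2]                       # learned 2-mode motion model

def fit(field):
    """Project a displacement field onto the motion model and reconstruct it."""
    w = (field - mean) @ components.T     # component weights for this field
    return mean + w @ components

residual = np.abs(fit(fields[0]) - fields[0]).max()
```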

  9. A high resolution and high speed 3D imaging system and its application on ATR

    NASA Astrophysics Data System (ADS)

    Lu, Thomas T.; Chao, Tien-Hsin

    2006-04-01

    The paper presents an advanced 3D imaging system based on a combination of stereo vision and light projection methods. A single digital camera is used to take only one shot of the object and reconstruct its 3D model. The stereo vision is achieved by employing a prism and mirror setup to split the views and combine them side by side in the camera. The advantages of this setup are its simple system architecture, easy synchronization, fast 3D imaging speed and high accuracy. The 3D imaging algorithms and potential applications are discussed. For ATR applications, it is critically important to extract maximum information about potential targets and to separate the targets from background and clutter noise. The added dimension of a 3D model provides additional features: the surface profile and range information of the target. It is capable of removing false shadows from camouflage and revealing the 3D profile of the object. It also provides arbitrary viewing angles and distances for training the filter bank for invariant ATR. The system architecture can be scaled to accommodate large objects and to perform area 3D modeling onboard a UAV.

  10. 3D and 4D magnetic susceptibility tomography based on complex MR images

    DOEpatents

    Chen, Zikuan; Calhoun, Vince D

    2014-11-11

    Magnetic susceptibility is the physical property for T2*-weighted magnetic resonance imaging (T2*MRI). The invention relates to methods for reconstructing an internal distribution (3D map) of magnetic susceptibility values, χ(x,y,z), of an object, from 3D T2*MRI phase images, by using Computed Inverse Magnetic Resonance Imaging (CIMRI) tomography. The CIMRI technique solves the inverse problem of the 3D convolution by executing a 3D Total Variation (TV) regularized iterative convolution scheme, using a split Bregman iteration algorithm. The reconstruction of χ(x,y,z) can be designed for low-pass, band-pass, and high-pass features by using a convolution kernel that is modified from the standard dipole kernel. Multiple reconstructions can be implemented in parallel, and averaging the reconstructions can suppress noise. 4D dynamic magnetic susceptibility tomography can be implemented by reconstructing a 3D susceptibility volume from a 3D phase volume by performing 3D CIMRI magnetic susceptibility tomography at each snapshot time.
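
    The forward model that CIMRI inverts can be sketched directly from the standard k-space dipole kernel (a generic susceptibility-to-field convolution sketch; the patented TV-regularized split Bregman inversion itself is not reproduced):

```python
import numpy as np

def dipole_kernel(shape):
    """Standard k-space dipole kernel D(k) = 1/3 - kz^2 / |k|^2, which relates
    a susceptibility map to the measured T2* field (phase) map."""
    kx = np.fft.fftfreq(shape[0])
    ky = np.fft.fftfreq(shape[1])
    kz = np.fft.fftfreq(shape[2])
    KX, KY, KZ = np.meshgrid(kx, ky, kz, indexing="ij")
    k2 = KX ** 2 + KY ** 2 + KZ ** 2
    with np.errstate(invalid="ignore", divide="ignore"):
        D = 1.0 / 3.0 - KZ ** 2 / k2
    D[0, 0, 0] = 0.0                # DC term is undefined; set by convention
    return D

def forward_field(chi):
    """Forward model: field = IFFT( D * FFT(chi) ). CIMRI solves the inverse
    problem of this convolution with TV-regularized iterations."""
    D = dipole_kernel(chi.shape)
    return np.real(np.fft.ifftn(D * np.fft.fftn(chi)))

# A spatially uniform susceptibility produces no internal field perturbation.
field = forward_field(np.ones((8, 8, 8)))
```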

  11. Label free cell tracking in 3D tissue engineering constructs with high resolution imaging

    NASA Astrophysics Data System (ADS)

    Smith, W. A.; Lam, K.-P.; Dempsey, K. P.; Mazzocchi-Jones, D.; Richardson, J. B.; Yang, Y.

    2014-02-01

    Within the field of tissue engineering there is an emphasis on studying 3-D live tissue structures. Consequently, to investigate and identify cellular activities and phenotypes in a 3-D environment for all in vitro experiments, including shape, migration/proliferation, and axon projection, it is necessary to adopt an optical imaging system that enables monitoring of 3-D cellular activities and morphology through the thickness of the construct over an extended culture period without cell labeling. This paper describes a new 3-D tracking algorithm developed for Cell-IQ®, an automated cell imaging platform, which has been equipped with an environmental chamber optimized for capturing time-lapse sequences of live cell images over a long period without cell labeling. As an integral part of the algorithm, a novel auto-focusing procedure was developed for phase contrast microscopy with 20x and 40x objectives, providing a more accurate estimation of cell growth/trajectories by allowing 3-D voxels to be computed at high spatiotemporal resolution and cell density. A pilot study was carried out in a phantom system consisting of horizontally aligned nanofiber layers (with precise spacing between them) to mimic features typical of neuronal growth in a 3-D environment. This was followed by detailed investigations of axonal projections and dendritic circuitry formation in a 3-D tissue engineering construct. Preliminary work on primary animal neuronal cells responding to chemoattractant and topographic cues within the scaffolds has produced encouraging results.
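
    An auto-focusing procedure of the general kind described, picking the z-plane that maximizes a sharpness metric, can be sketched with a Laplacian-energy criterion over a simulated z-stack (the metric and blur model here are illustrative assumptions, not Cell-IQ's algorithm):

```python
import numpy as np

def laplacian_energy(img):
    """Auto-focus metric: energy of the discrete Laplacian (image sharpness)."""
    lap = (-4 * img
           + np.roll(img, 1, 0) + np.roll(img, -1, 0)
           + np.roll(img, 1, 1) + np.roll(img, -1, 1))
    return float((lap ** 2).mean())

def box_blur(img, n):
    """Apply n passes of a 3x3 mean filter (more passes = more defocus)."""
    for _ in range(n):
        img = sum(np.roll(np.roll(img, i, 0), j, 1)
                  for i in (-1, 0, 1) for j in (-1, 0, 1)) / 9.0
    return img

# Simulated z-stack: frame 2 is in focus, the others increasingly defocused.
rng = np.random.default_rng(6)
sharp = rng.random((64, 64))
stack = [box_blur(sharp, abs(z - 2)) for z in range(5)]
best_z = int(np.argmax([laplacian_energy(f) for f in stack]))
```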

  12. Small SWAP 3D imaging flash ladar for small tactical unmanned air systems

    NASA Astrophysics Data System (ADS)

    Bird, Alan; Anderson, Scott A.; Wojcik, Michael; Budge, Scott E.

    2015-05-01

    The Space Dynamics Laboratory (SDL), working with the Naval Research Laboratory (NRL) and industry leaders Advanced Scientific Concepts (ASC) and Hood Technology Corporation, has developed a small SWAP (size, weight, and power) 3D imaging flash ladar (LAser Detection And Ranging) sensor system concept design for small tactical unmanned air systems (STUAS). The design utilizes an ASC 3D flash ladar camera and laser in a Hood Technology gyro-stabilized gimbal system. The design is an autonomous, intelligent, geo-aware sensor system that supplies real-time 3D terrain and target images. Flash ladar and visible camera data are processed at the sensor using a custom digitizer/frame grabber with compression. Mounted in the aft housing are power, controls, processing computers, and GPS/INS. The onboard processor controls pointing and handles image data, detection algorithms and queuing. The small SWAP 3D imaging flash ladar sensor system generates georeferenced terrain and target images with a low probability of false return and <10 cm range accuracy through foliage in real time. The 3D imaging flash ladar is designed for a STUAS, with a complete system SWAP estimate of <9 kg, <0.2 m³ and <350 W power. The system is modeled using LadarSIM, a MATLAB®- and Simulink®-based ladar system simulator designed and developed by the Center for Advanced Imaging Ladar (CAIL) at Utah State University. We present the concept design and modeled performance predictions.

  13. In vivo 3D PIXE-micron-CT imaging of Drosophila melanogaster using a contrast agent

    NASA Astrophysics Data System (ADS)

    Matsuyama, Shigeo; Hamada, Naoki; Ishii, Keizo; Nozawa, Yuichiro; Ohkura, Satoru; Terakawa, Atsuki; Hatori, Yoshinobu; Fujiki, Kota; Fujiwara, Mitsuhiro; Toyama, Sho

    2015-04-01

    In this study, we developed a three-dimensional (3D) computed tomography (CT) in vivo imaging system for imaging small insects with micrometer resolution. The 3D CT imaging system, referred to as 3D PIXE-micron-CT (PIXE-μCT), uses characteristic X-rays produced by ion microbeam bombardment of a metal target. PIXE-μCT was used to observe the body organs and internal structure of a living Drosophila melanogaster. Although the organs of the thorax were clearly imaged, the digestive organs in the abdominal cavity could not initially be clearly discerned, with the exception of the rectum and the Malpighian tubule. To enhance the abdominal images, a barium sulfate powder radiocontrast agent was added. For the first time, 3D images of the ventriculus of a living D. melanogaster were obtained. Our results showed that PIXE-μCT can provide in vivo 3D-CT images that correctly reflect the structure of individual living organs, which is expected to be very useful in biological research.

  14. 3D city models completion by fusing lidar and image data

    NASA Astrophysics Data System (ADS)

    Grammatikopoulos, L.; Kalisperakis, I.; Petsa, E.; Stentoumis, C.

    2015-05-01

    A fundamental step in the generation of visually detailed 3D city models is the acquisition of high-fidelity 3D data. Typical approaches employ DSM representations usually derived from Lidar (Light Detection and Ranging) airborne scanning or image-based procedures. In this contribution, we focus on the fusion of data from both these methods in order to enhance or complete them. Particularly, we combine an existing Lidar and orthomosaic dataset (used as reference) with a new aerial image acquisition (including both vertical and oblique imagery) of higher resolution, which was carried out in the area of Kallithea, in Athens, Greece. In a preliminary step, a digital orthophoto and a DSM are generated from the aerial images in an arbitrary reference system, by employing a Structure from Motion and dense stereo matching framework. The image-to-Lidar registration is performed by 2D feature (SIFT and SURF) extraction and matching between the two orthophotos. The established point correspondences are assigned 3D coordinates through interpolation on the reference Lidar surface, are then backprojected onto the aerial images, and finally matched with 2D image features located in the vicinity of the backprojected 3D points. Consequently, these points serve as Ground Control Points with appropriate weights for final orientation and calibration of the images through a bundle adjustment solution. By these means, the aerial imagery is optimally aligned to the reference dataset and can be used for the generation of an enhanced and more accurately textured 3D city model.
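    The feature-correspondence step described above (SIFT/SURF extraction followed by matching between the two orthophotos) commonly relies on a distance-ratio test to reject ambiguous matches. The following is a minimal numpy sketch of that matching step only, assuming descriptors have already been extracted by a feature library; function and variable names are illustrative, not the authors' code:

```python
import numpy as np

def match_descriptors(desc_a, desc_b, ratio=0.75):
    """Lowe-style ratio-test matching between two descriptor sets.

    desc_a, desc_b: (N, D) arrays of local feature descriptors
    (e.g. SIFT vectors). Returns a list of (i, j) index pairs where
    the best match in desc_b is clearly better than the second best.
    """
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)   # distances to all candidates
        order = np.argsort(dists)
        best, second = order[0], order[1]
        if dists[best] < ratio * dists[second]:      # unambiguous match only
            matches.append((i, int(best)))
    return matches
```

In the pipeline above, each accepted pair would then be lifted to 3D by interpolating the Lidar surface at the matched orthophoto location.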

  15. Image guidance using 3D-ultrasound (3D-US) for daily positioning of lumpectomy cavity for boost irradiation

    PubMed Central

    2011-01-01

    Purpose The goal of this study was to evaluate the use of 3D ultrasound (3DUS) breast IGRT for electron and photon lumpectomy site boost treatments. Materials and methods 20 patients with a prescribed photon or electron boost were enrolled in this study. 3DUS images were acquired both at the time of simulation, to form a coregistered CT/3DUS dataset, and at the time of daily treatment delivery. Interfractional motion between the treatment and simulation 3DUS datasets was calculated to determine IGRT shifts. Photon shifts were evaluated isocentrically, while electron shifts were evaluated in the beam's-eye-view. Volume differences between simulation and the first boost fraction were calculated. Further, to control for the effect of change in seroma/cavity volume due to the time lapse between the 2 sets of images, interfraction IGRT shifts using the first boost fraction as the reference for all subsequent treatment fractions were also calculated. Results For photon boosts, IGRT shifts were 1.1 ± 0.5 cm and 50% of fractions required a shift >1.0 cm. The volume change between simulation and boost was 49 ± 31%. Shifts when using the first boost fraction as the reference were 0.8 ± 0.4 cm and 24% required a shift >1.0 cm. For electron boosts, shifts were 1.0 ± 0.5 cm and 52% fell outside the dosimetric penumbra. Interfraction analysis relative to the first fraction found the shifts to be 0.8 ± 0.4 cm, with 36% falling outside the penumbra. Conclusion The lumpectomy cavity can shift significantly during fractionated radiation therapy. 3DUS can be used to image the cavity and correct for interfractional motion. Further studies to better define the protocol for clinical application of IGRT in breast cancer are needed. PMID:21554697
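    The reported IGRT shifts are, in essence, displacements of the lumpectomy cavity between two co-registered 3DUS datasets. A simplified sketch of how such a shift could be computed from binary cavity segmentations follows; the centroid-based definition here is an illustrative assumption, not the study's actual shift calculation:

```python
import numpy as np

def cavity_shift(mask_sim, mask_tx, voxel_size):
    """Shift between lumpectomy-cavity centroids in two co-registered
    3D ultrasound segmentations (boolean volumes).

    voxel_size: (dz, dy, dx) spacing in cm.
    Returns the displacement vector (cm) and its magnitude.
    """
    c_sim = np.array(np.nonzero(mask_sim)).mean(axis=1)  # centroid, voxel units
    c_tx = np.array(np.nonzero(mask_tx)).mean(axis=1)
    shift = (c_tx - c_sim) * np.asarray(voxel_size, float)
    return shift, float(np.linalg.norm(shift))
```

A magnitude above 1.0 cm would correspond to the ">1.0 cm" threshold quoted in the results.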

  16. Automatic extraction of retinal features from colour retinal images for glaucoma diagnosis: a review.

    PubMed

    Haleem, Muhammad Salman; Han, Liangxiu; van Hemert, Jano; Li, Baihua

    2013-01-01

    Glaucoma is a group of eye diseases with common traits such as high eye pressure, damage to the optic nerve head, and gradual vision loss. It affects peripheral vision and eventually leads to blindness if left untreated. The current common methods of pre-diagnosis of glaucoma include measurement of Intra-Ocular Pressure (IOP) using tonometry, pachymetry, and gonioscopy, which are performed manually by clinicians. These tests are usually followed by an Optic Nerve Head (ONH) appearance examination for the confirmed diagnosis of glaucoma. The diagnoses require regular monitoring, which is costly and time consuming, and the accuracy and reliability of diagnosis are limited by the domain knowledge of different ophthalmologists. Automatic diagnosis of glaucoma has therefore attracted considerable attention. This paper surveys the state of the art in automatic extraction of anatomical features from retinal images to assist early diagnosis of glaucoma. We have conducted a critical evaluation of the existing automatic extraction methods based on features including Cup-to-Disc Ratio (CDR), Retinal Nerve Fibre Layer (RNFL), Peripapillary Atrophy (PPA), neuroretinal rim notching, vasculature shift, etc., highlighting efficient feature extraction relevant to glaucoma diagnosis. PMID:24139134
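    Among the surveyed features, the cup-to-disc ratio is the most direct to compute once the optic cup and optic disc have been segmented. A minimal sketch of the vertical CDR from binary masks (assuming the row axis is vertical; the names and the vertical-extent definition are illustrative, as the surveyed methods differ in detail):

```python
import numpy as np

def cup_to_disc_ratio(cup_mask, disc_mask):
    """Vertical cup-to-disc ratio (CDR) from binary segmentations of
    the optic cup and optic disc. Large CDR values are a commonly
    cited glaucoma risk indicator."""
    def vertical_extent(mask):
        rows = np.nonzero(mask.any(axis=1))[0]  # rows containing the region
        return rows[-1] - rows[0] + 1
    return vertical_extent(cup_mask) / vertical_extent(disc_mask)
```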

  17. Automated 3D ultrasound image segmentation to aid breast cancer image interpretation.

    PubMed

    Gu, Peng; Lee, Won-Mean; Roubidoux, Marilyn A; Yuan, Jie; Wang, Xueding; Carson, Paul L

    2016-02-01

    Segmentation of an ultrasound image into functional tissues is of great importance to the clinical diagnosis of breast cancer. However, many studies segment only the mass of interest rather than all major tissues. Differences and inconsistencies in ultrasound interpretation call for an automated segmentation method to make results operator-independent. Furthermore, manual segmentation of entire three-dimensional (3D) ultrasound volumes is time-consuming, resource-intensive, and clinically impractical. Here, we propose an automated algorithm to segment 3D ultrasound volumes into three major tissue types: cyst/mass, fatty tissue, and fibro-glandular tissue. To test its efficacy and consistency, the proposed automated method was employed on a database of 21 cases of whole breast ultrasound. Experimental results show that our proposed method not only distinguishes fat and non-fat tissues correctly, but also performs well in classifying cysts/masses. Comparison of density assessment between the automated method and manual segmentation demonstrates good consistency, with an accuracy of 85.7%. Quantitative comparison of corresponding tissue volumes using the overlap ratio gives an average similarity of 74.54%, consistent with values seen in MRI brain segmentations. Thus, our proposed method exhibits great potential as an automated approach to segment 3D whole breast ultrasound volumes into functionally distinct tissues that may help to correct ultrasound speed-of-sound aberrations and assist in density-based prognosis of breast cancer. PMID:26547117
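    The overlap-ratio comparison of corresponding tissue volumes can be illustrated with the Jaccard index; whether the authors used the Jaccard index or another overlap definition is an assumption made here for the sake of the example:

```python
import numpy as np

def overlap_ratio(auto_mask, manual_mask):
    """Volume overlap between automated and manual segmentations,
    computed as the Jaccard index |A ∩ B| / |A ∪ B| over boolean
    voxel masks of the same shape."""
    a = np.asarray(auto_mask, bool)
    b = np.asarray(manual_mask, bool)
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / union
```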

  18. Simulation of 3D MRI brain images for quantitative evaluation of image segmentation algorithms

    NASA Astrophysics Data System (ADS)

    Wagenknecht, Gudrun; Kaiser, Hans-Juergen; Obladen, Thorsten; Sabri, Osama; Buell, Udalrich

    2000-06-01

    To model the true shape of MRI brain images, automatically classified T1-weighted 3D MRI images (gray matter, white matter, cerebrospinal fluid, scalp/bone and background) are utilized for simulation of grayscale data and imaging artifacts. For each class, Gaussian distribution of grayscale values is assumed, and mean and variance are computed from grayscale images. A random generator fills up the class images with Gauss-distributed grayscale values. Since grayscale values of neighboring voxels are not correlated, a Gaussian low-pass filtering is done, preserving class region borders. To simulate anatomical variability, a Gaussian distribution in space with user-defined mean and variance can be added at any user-defined position. Several imaging artifacts can be added: (1) to simulate partial volume effects, every voxel is averaged with neighboring voxels if they have a different class label; (2) a linear or quadratic bias field can be added with user-defined strength and orientation; (3) additional background noise can be added; and (4) artifacts left over after spoiling can be simulated by adding a band with increasing/decreasing grayscale values. With this method, realistic-looking simulated MRI images can be produced to test classification and segmentation algorithms regarding accuracy and robustness even in the presence of artifacts.
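    The core of the simulation, filling each class region with Gaussian grayscale values and then low-pass filtering without blurring across class borders, can be sketched as follows. This is a simplified 2D version with a neighbour-averaging filter in place of the paper's Gaussian low-pass; `np.roll` wraps at image edges, which a real implementation would handle explicitly:

```python
import numpy as np

def simulate_mri(labels, stats, rng=None, smooth_passes=2):
    """Simulate an MR grayscale image from a class-label map.

    labels : 2D integer class map (e.g. GM, WM, CSF, scalp, background)
    stats  : {class_label: (mean, std)} grayscale statistics per class
    Fills each class with Gaussian-distributed values, then applies a
    border-preserving low pass: each pixel is averaged only with
    4-neighbours carrying the same class label.
    """
    rng = np.random.default_rng(rng)
    img = np.zeros(labels.shape, dtype=float)
    for cls, (mu, sigma) in stats.items():
        sel = labels == cls
        img[sel] = rng.normal(mu, sigma, sel.sum())
    for _ in range(smooth_passes):
        acc = img.copy()                  # start with the pixel itself
        cnt = np.ones_like(img)
        for shift in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nb_img = np.roll(img, shift, axis=(0, 1))
            same = np.roll(labels, shift, axis=(0, 1)) == labels
            acc[same] += nb_img[same]     # only same-class neighbours count
            cnt[same] += 1
        img = acc / cnt                   # class borders stay sharp
    return img
```

Partial-volume, bias-field, and noise artifacts would then be layered on top of this base image, as the abstract describes.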

  19. Registration of 3-D images using weighted geometrical features

    SciTech Connect

    Maurer, C.R. Jr.; Aboutanos, G.B.; Dawant, B.M.; Maciunas, R.J.; Fitzpatrick, J.M.

    1996-12-01

    In this paper, the authors present a weighted geometrical features (WGF) registration algorithm. Its efficacy is demonstrated by combining points and a surface. The technique is an extension of Besl and McKay's iterative closest point (ICP) algorithm. The authors use the WGF algorithm to register X-ray computed tomography (CT) and T2-weighted magnetic resonance (MR) volume head images acquired from eleven patients who underwent craniotomies in a neurosurgical clinical trial. Each patient had five external markers attached to transcutaneous posts screwed into the outer table of the skull. The authors define registration error as the distance between positions of corresponding markers that are not used for registration. The CT and MR images are registered using fiducial points (marker positions) only, a surface only, and various weighted combinations of points and a surface. The CT surface is derived from contours corresponding to the inner surface of the skull. The MR surface is derived from contours corresponding to the cerebrospinal fluid (CSF)-dura interface. Registration using points and a surface is found to be significantly more accurate than registration using only points or only a surface.
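    The inner step of such a weighted registration, a least-squares rigid fit to weighted point correspondences, can be sketched with the standard SVD-based (Kabsch-style) solution. This is a generic illustration of that step under equal or feature-dependent weights, not the authors' WGF implementation:

```python
import numpy as np

def weighted_rigid_fit(src, dst, w):
    """Weighted least-squares rigid transform (R, t) with dst ≈ R @ src + t.

    src, dst : (N, 3) corresponding points (e.g. fiducials, surface points)
    w        : (N,) non-negative weights reflecting feature reliability
    """
    src = np.asarray(src, float)
    dst = np.asarray(dst, float)
    w = np.asarray(w, float)[:, None]
    mu_s = (w * src).sum(0) / w.sum()           # weighted centroids
    mu_d = (w * dst).sum(0) / w.sum()
    H = ((src - mu_s) * w).T @ (dst - mu_d)     # weighted cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mu_d - R @ mu_s
    return R, t
```

An ICP-style outer loop would alternate this fit with re-computation of closest-point correspondences on the surface.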

  20. Sample preparation for 3D SIMS chemical imaging of cells.

    PubMed

    Winograd, Nicholas; Bloom, Anna

    2015-01-01

    Time-of-flight secondary ion mass spectrometry (ToF-SIMS) is an emerging technique for the characterization of biological systems. With the development of novel ion sources such as cluster ion beams, ionization efficiency has been increased, allowing for greater amounts of information to be obtained from the sample of interest. This enables the plotting of the distribution of chemical compounds against position with submicrometer resolution, yielding a chemical map of the material. In addition, by combining imaging with molecular depth profiling, a complete 3-dimensional rendering of the object is possible. The study of single biological cells presents significant challenges due to the fundamental complexity associated with any biological material. Sample preparation is of critical importance in controlling this complexity, owing to the fragile nature of biological cells and to the need to characterize them in their native state, free of chemical or physical changes. Here, we describe the four most widely used sample preparation methods for cellular imaging using ToF-SIMS, and provide guidance for data collection and analysis procedures. PMID:25361662

  1. Augmented reality navigation with automatic marker-free image registration using 3-D image overlay for dental surgery.

    PubMed

    Wang, Junchen; Suenaga, Hideyuki; Hoshi, Kazuto; Yang, Liangjing; Kobayashi, Etsuko; Sakuma, Ichiro; Liao, Hongen

    2014-04-01

    Computer-assisted oral and maxillofacial surgery (OMS) has been rapidly evolving over the last decade. State-of-the-art surgical navigation in OMS still suffers from bulky tracking sensors, troublesome image registration procedures, patient movement, loss of depth perception in visual guidance, and low navigation accuracy. We present an augmented reality navigation system with automatic marker-free image registration using 3-D image overlay and stereo tracking for dental surgery. A customized stereo camera is designed to track both the patient and the instrument. Image registration is performed by patient tracking and real-time 3-D contour matching, without requiring any fiducial or reference markers. Real-time autostereoscopic 3-D imaging is implemented with the help of a consumer-level graphics processing unit. The resulting 3-D image of the patient's anatomy is overlaid on the surgical site by a half-silvered mirror using image registration and IP-camera registration to guide the surgeon by exposing hidden critical structures. The 3-D image of the surgical instrument is also overlaid over the real one for an augmented display. The 3-D images present both stereo and motion parallax from which depth perception can be obtained. Experiments were performed to evaluate various aspects of the system; the overall image overlay error of the proposed system was 0.71 mm. PMID:24658253

  2. High-resolution digital 3D imaging of large structures

    NASA Astrophysics Data System (ADS)

    Rioux, Marc; Beraldin, J. A.; Godin, Guy; Blais, Francois; Cournoyer, Luc

    1997-03-01

    This talk summarizes the conclusions of a few laser scanning experiments on remote sites and the potential of the technology for imaging applications. Parameters to be considered for these types of activities are related to the design of a large-volume-of-view laser scanner, such as depth of field, ambient light interference (especially outdoors), and scanning strategies. The first case reviewed is an inspection application performed in a coal-burning power station located in Alberta, Canada. The second case is the digitizing of the ODS (Orbiter Docking System) at the Kennedy Space Center in Florida, and the third case is the digitizing of a large sculpture located outside the Canadian Museum of Civilisation in Ottawa-Hull, Canada.

  3. Terahertz Lasers Reveal Information for 3D Images

    NASA Technical Reports Server (NTRS)

    2013-01-01

    After taking off her shoes and jacket, she places them in a bin. She then takes her laptop out of its case and places it in a separate bin. As the items move through the x-ray machine, the woman waits for a sign from security personnel to pass through the metal detector. Today, she was lucky; she did not encounter any delays. The man behind her, however, was asked to step inside a large circular tube, raise his hands above his head, and have his whole body scanned. If you have ever witnessed a full-body scan at the airport, you may have witnessed terahertz imaging. Terahertz wavelengths are located between microwave and infrared on the electromagnetic spectrum. When exposed to these wavelengths, certain materials such as clothing, thin metal, sheet rock, and insulation become transparent. At airports, terahertz radiation can illuminate guns, knives, or explosives hidden underneath a passenger's clothing. At NASA's Kennedy Space Center, terahertz wavelengths have assisted in the inspection of materials like insulating foam on the external tanks of the now-retired space shuttle. "The foam we used on the external tank was a little denser than Styrofoam, but not much," says Robert Youngquist, a physicist at Kennedy. The problem, he explains, was that "we lost a space shuttle by having a chunk of foam fall off from the external fuel tank and hit the orbiter." To uncover any potential defects in the foam covering, such as voids or air pockets, that could keep the material from staying in place, NASA employed terahertz imaging to see through the foam. For many years, the technique ensured the integrity of the material on the external tanks.

  4. Real-time 3D surface-image-guided beam setup in radiotherapy of breast cancer

    SciTech Connect

    Djajaputra, David; Li Shidong

    2005-01-01

    We describe an approach for external beam radiotherapy of breast cancer that utilizes the three-dimensional (3D) surface information of the breast. The surface data of the breast are obtained from a 3D optical camera that is rigidly mounted on the ceiling of the treatment vault. This 3D camera utilizes light in the visible range; therefore, it introduces no ionizing radiation to the patient. In addition to the surface topographical information of the treated area, the camera also captures gray-scale information that is overlaid on the 3D surface image. This allows us to visualize the skin markers and automatically determine the isocenter position and the beam angles in the breast tangential fields. The field sizes and shapes of the tangential, supraclavicular, and internal mammary gland fields can all be determined according to the 3D surface image of the target. A least-squares method is first introduced for the tangential-field setup that is useful for compensation of target shape changes. The entire process of capturing the 3D surface data and subsequently calculating the beam parameters typically requires less than 1 min. Our tests on phantom experiments and patient images have achieved an accuracy of 1 mm in shift and 0.5 deg. in rotation. Importantly, target shape and position changes in each treatment session can both be corrected through this real-time image-guided system.

  5. Validation of Retinal Image Registration Algorithms by a Projective Imaging Distortion Model

    PubMed Central

    Lee, Sangyeol; Abràmoff, Michael D.; Reinhardt, Joseph M.

    2008-01-01

    Fundus camera imaging of the retina is widely used to document ophthalmologic disorders including diabetic retinopathy, glaucoma, and age-related macular degeneration. Retinal images typically have a limited field of view, due mainly to the curvature of the human retina, so multiple images must be joined together using image registration techniques to form a montage with a larger field of view. A variety of methods for retinal image registration have been proposed, but evaluating such methods objectively is difficult due to the lack of a reference standard for the true alignment of the individual images that make up the montage. A method of generating a simulated retinal image set by modeling geometric distortions due to the eye geometry and the image acquisition process is described in this paper. We also present a validation tool for any retinal image registration method that traces back the distortion path and assesses the geometric misalignment from the coordinate system of the reference standard. A quantitative comparison of different registration methods is given in the experiments, so registration performance is evaluated in an objective manner. PMID:18003507
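    The essential ingredient of such a validation is a known ground-truth transform: points are pushed through a reference projective distortion, a registration method estimates its own transform, and the residual misalignment is measured in the reference coordinate system. A minimal homography-based sketch of that idea (illustrative only; the paper's full eye-geometry model is richer than a single homography):

```python
import numpy as np

def apply_homography(H, pts):
    """Map 2D points through a projective (homography) distortion,
    as used to generate simulated images with a known ground truth."""
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])  # homogeneous coords
    out = pts_h @ H.T
    return out[:, :2] / out[:, 2:3]                   # back to Cartesian

def registration_error(H_true, H_est, pts):
    """Mean distance between points mapped by the reference transform
    and by an estimated transform: the misalignment measured in the
    reference-standard coordinate system."""
    d = apply_homography(H_true, pts) - apply_homography(H_est, pts)
    return float(np.linalg.norm(d, axis=1).mean())
```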

  6. 3D surface reconstruction of apples from 2D NIR images

    NASA Astrophysics Data System (ADS)

    Zhu, Bin; Jiang, Lu; Cheng, Xuemei; Tao, Yang

    2005-11-01

    Machine vision methods are widely used in apple defect detection and quality grading applications. Currently, 2D near-infrared (NIR) imaging of apples is often used to detect apple defects because the image intensity of defects is different from normal apple parts. However, a drawback of this method is that the apple calyx also exhibits similar image intensity to the apple defects. Since an apple calyx often appears in the NIR image, the false alarm rate is high with the 2D NIR imaging method. In this paper, a 2D NIR imaging method is extended to a 3D reconstruction so that the apple calyx can be differentiated from apple defects according to their different 3D depth information. The Lambertian model is used to evaluate the reflectance map of the apple surface, and then Pentland's Shape-From-Shading (SFS) method is applied to reconstruct the 3D surface information of the apple based on Fast Fourier Transform (FFT). Pentland's method is directly derived from human perception properties, making it close to the way human eyes recover 3D information from a 2D scene. In addition, the FFT reduces the computation time significantly. The reconstructed 3D apple surface maps are shown in the results, and different depths of apple calyx and defects are obtained correctly.
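    The Fourier-domain shape-from-shading step can be illustrated with the linearized Lambertian model: the image spectrum is approximately the depth spectrum multiplied by a known function of the illuminant tilt τ and slant σ, so dividing it back out recovers depth up to an unknown offset. The following is a simplified sketch in the spirit of Pentland's linear method, not the paper's exact formulation:

```python
import numpy as np

def linear_sfs(image, tau, sigma):
    """Recover a relative depth map from a single shaded image via the
    Fourier-domain linearization of the Lambertian reflectance map.

    tau: illuminant tilt, sigma: illuminant slant (radians).
    Frequencies where the transfer function vanishes (including DC)
    are zeroed, so depth is recovered only up to those components.
    """
    h, w = image.shape
    wx = 2j * np.pi * np.fft.fftfreq(w)[None, :]   # d/dx in Fourier domain
    wy = 2j * np.pi * np.fft.fftfreq(h)[:, None]   # d/dy in Fourier domain
    denom = (wx * np.cos(tau) + wy * np.sin(tau)) * np.sin(sigma)
    F = np.fft.fft2(image)
    Z = np.zeros_like(F)
    nz = denom != 0
    Z[nz] = F[nz] / denom[nz]                      # invert the transfer function
    return np.real(np.fft.ifft2(Z))                # depth up to unknown offset
```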

  7. Understanding 3D TSE Sequences: Advantages, Disadvantages, and Application in MSK Imaging.

    PubMed

    Glaser, Christian; D'Anastasi, Melvin; Theisen, Daniel; Notohamiprodjo, Mike; Horger, Wilhelm; Paul, Dominik; Horng, Annie

    2015-09-01

    Three-dimensional (3D) turbo-spin echo (TSE) sequences have outgrown the stage of mere sequence optimization and by now are clinically applicable. Image blurring and acquisition times have been reduced, and contrast for T1-, T2-, and moderately T2-weighted (or intermediate-weighted) fat-suppressed variants has been optimized. Data on signal-to-noise ratio efficiency and contrast are available for moderately T2-weighted fat-saturated sequence protocols. The 3-T MRI scanners help to better exploit isotropic spatial resolution and multiplanar reformatting. Imaging times range from 5 to 10 minutes, and they are shorter than the cumulative acquisition times of three separate orthogonal two-dimensional (2D) sequences. Recent suggestions go beyond secondary reformations by using online 3D rendering for image evaluation. Comparative clinical studies indicate that the diagnostic performance of 3D TSE for imaging of internal derangements of joints is at least comparable with conventional 2D TSE, with potential advantages of 3D TSE for small, highly curved structures. But such studies, especially those with direct arthroscopic correlation, are still sparse. Whether 3D TSE will succeed in entering clinical routine imaging on a broader scale will depend on further published clinical evidence, on further reduction of imaging time, and on improvement of its integration into daily practice. PMID:26583360

  8. F3D Image Processing and Analysis for Many - and Multi-core Platforms

    Energy Science and Technology Software Center (ESTSC)

    2014-10-01

    F3D is written in OpenCL, so it achieves platform-portable parallelism on modern multi-core CPUs and many-core GPUs. The interface and mechanisms to access the F3D core are written in Java as a plugin for Fiji/ImageJ to deliver several key image-processing algorithms necessary to remove artifacts from micro-tomography data. The algorithms consist of data-parallel-aware filters that can efficiently utilize resources, work on out-of-core datasets, and scale efficiently across multiple accelerators. Optimizing for data-parallel filters, streaming out-of-core datasets, and efficient resource, memory, and data management over complex execution sequences of filters greatly expedites any scientific workflow with image-processing requirements. F3D performs several different types of 3D image processing operations, such as non-linear filtering using bilateral filtering and/or median filtering and/or morphological operators (MM). F3D gray-level MM operators are one-pass, constant-time methods that can perform morphological transformations with a line structuring element oriented in discrete directions. Additionally, MM operators can be applied to gray-scale images, and consist of two parts: (a) a reference shape, or structuring element, which is translated over the image, and (b) a mechanism, or operation, that defines the comparisons to be performed between the image and the structuring element. This tool provides a critical component within many complex pipelines, such as those for performing automated segmentation of image stacks. F3D is also a "descendant" of Quant-CT, another software package we developed in the past. These two modules are to be integrated in a future version. Further details were reported in: D.M. Ushizima, T. Perciano, H. Krishnan, B. Loring, H. Bale, D. Parkinson, and J. Sethian. Structure recognition from high-resolution images of ceramic composites. IEEE International Conference on Big Data, October 2014.

  9. F3D Image Processing and Analysis for Many - and Multi-core Platforms

    SciTech Connect

    2014-10-01

    F3D is written in OpenCL, so it achieves platform-portable parallelism on modern multi-core CPUs and many-core GPUs. The interface and mechanisms to access the F3D core are written in Java as a plugin for Fiji/ImageJ to deliver several key image-processing algorithms necessary to remove artifacts from micro-tomography data. The algorithms consist of data-parallel-aware filters that can efficiently utilize resources, work on out-of-core datasets, and scale efficiently across multiple accelerators. Optimizing for data-parallel filters, streaming out-of-core datasets, and efficient resource, memory, and data management over complex execution sequences of filters greatly expedites any scientific workflow with image-processing requirements. F3D performs several different types of 3D image processing operations, such as non-linear filtering using bilateral filtering and/or median filtering and/or morphological operators (MM). F3D gray-level MM operators are one-pass, constant-time methods that can perform morphological transformations with a line structuring element oriented in discrete directions. Additionally, MM operators can be applied to gray-scale images, and consist of two parts: (a) a reference shape, or structuring element, which is translated over the image, and (b) a mechanism, or operation, that defines the comparisons to be performed between the image and the structuring element. This tool provides a critical component within many complex pipelines, such as those for performing automated segmentation of image stacks. F3D is also a "descendant" of Quant-CT, another software package we developed in the past. These two modules are to be integrated in a future version. Further details were reported in: D.M. Ushizima, T. Perciano, H. Krishnan, B. Loring, H. Bale, D. Parkinson, and J. Sethian. Structure recognition from high-resolution images of ceramic composites. IEEE International Conference on Big Data, October 2014.
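    The line-structuring-element morphology described above can be illustrated with a naive gray-level dilation (a sliding maximum along one axis). Note that this simple version costs O(length) per pixel; the one-pass, constant-time behavior the record mentions is what van Herk/Gil-Werman-style algorithms provide in production code:

```python
import numpy as np

def line_dilate(img, length, axis=1):
    """Gray-level morphological dilation by a line structuring element
    of the given length along `axis` (naive sliding maximum with
    edge-replicated padding)."""
    img = np.moveaxis(img, axis, -1)
    half = length // 2
    pad = np.pad(img, [(0, 0)] * (img.ndim - 1) + [(half, half)],
                 mode="edge")
    # stack all shifted windows and take the elementwise maximum
    out = np.stack([pad[..., s:s + img.shape[-1]]
                    for s in range(length)], axis=0).max(axis=0)
    return np.moveaxis(out, -1, axis)
```

Erosion is the dual (a sliding minimum), and openings/closings compose the two.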

  10. Space Radar Image of Death Valley in 3-D

    NASA Technical Reports Server (NTRS)

    1999-01-01

    This picture is a three-dimensional perspective view of Death Valley, California. This view was constructed by overlaying a SIR-C radar image on a U.S. Geological Survey digital elevation map. The SIR-C image is centered at 36.629 degrees north latitude and 117.069 degrees west longitude. We are looking at Stove Pipe Wells, which is the bright rectangle located in the center of the picture frame. Our vantage point is located atop a large alluvial fan centered at the mouth of Cottonwood Canyon. In the foreground on the left, we can see the sand dunes near Stove Pipe Wells. In the background on the left, the Valley floor gradually falls in elevation toward Badwater, the lowest spot in the United States. In the background on the right we can see Tucki Mountain. This SIR-C/X-SAR supersite is an area of extensive field investigations and has been visited by both Space Radar Lab astronaut crews. Elevations in the Valley range from 70 meters (230 feet) below sea level, the lowest in the United States, to more than 3,300 meters (10,800 feet) above sea level. Scientists are using SIR-C/X-SAR data from Death Valley to help answer a number of different questions about Earth's geology. One question concerns how alluvial fans are formed and change through time under the influence of climatic changes and earthquakes. Alluvial fans are gravel deposits that wash down from the mountains over time. They are visible in the image as circular, fan-shaped bright areas extending into the darker valley floor from the mountains. Information about the alluvial fans helps scientists study Earth's ancient climate. Scientists know the fans are built up through climatic and tectonic processes and they will use the SIR-C/X-SAR data to understand the nature and rates of weathering processes on the fans, soil formation and the transport of sand and dust by the wind. SIR-C/X-SAR's sensitivity to centimeter-scale (inch-scale) roughness provides detailed maps of surface texture. 
Such information can be used to study the occurrence and movement of dust storms and sand dunes. The goal of these studies is to gain a better understanding of the record of past climatic changes and the effects of those changes on a sensitive environment. This may lead to a better ability to predict future response of the land to different potential global climate-change scenarios. Vertical exaggeration is 1.87 times; exaggeration of relief is a common tool scientists use to detect relationships between structure (for example, faults and fractures) and topography. Death Valley is also one of the primary calibration sites for SIR-C/X-SAR. In the lower right quadrant of the picture frame two bright dots can be seen which form a line extending to Stove Pipe Wells. These dots are corner reflectors that have been set up to calibrate the radar as the shuttle passes overhead. Thirty triangular-shaped reflectors (they look like aluminum pyramids) have been deployed by the calibration team from JPL over a 40- by 40-kilometer (25- by 25-mile) area in and around Death Valley. The signatures of these reflectors were analyzed by JPL scientists to calibrate the image used in this picture. The calibration team here also deployed transponders (electronic reflectors) and receivers to measure the radar signals from SIR-C/X-SAR on the ground. SIR-C/X-SAR radars illuminate Earth with microwaves allowing detailed observations at any time, regardless of weather or sunlight conditions. SIR-C/X-SAR uses three microwave wavelengths: L-band (24 cm), C-band (6 cm) and X-band (3 cm). The multi-frequency data will be used by the international scientific community to better understand the global environment and how it is changing. The SIR-C/X-SAR data, in conjunction with aircraft and ground studies, will give scientists clearer insights into those environmental changes which are caused by nature and those changes which are induced by human activity. 
SIR-C was developed by NASA's Jet Propulsion Laboratory. X-SAR was developed by the Dornier and Alenia Spazio companies for the German space agency, Deutsche

  11. 2D imaging and 3D sensing data acquisition and mutual registration for painting conservation

    NASA Astrophysics Data System (ADS)

    Fontana, Raffaella; Gambino, Maria Chiara; Greco, Marinella; Marras, Luciano; Pampaloni, Enrico M.; Pelagotti, Anna; Pezzati, Luca; Poggi, Pasquale

    2004-12-01

    We describe the application of 2D and 3D data acquisition and mutual registration to the conservation of paintings. RGB color image acquisition, IR and UV fluorescence imaging, and the more recent hyperspectral imaging (32 bands) are among the most useful techniques in this field. They are generally meant to provide information on the painting materials, the techniques employed, and the object's state of conservation. However, only when the various images are precisely registered with each other and with the 3D model can ambiguities be resolved and safe conclusions drawn. We present the integration of 2D and 3D measurements carried out on two different paintings: "Madonna of the Yarnwinder" by Leonardo da Vinci, and "Portrait of Lionello d'Este" by Pisanello, both painted in the XV century.

  12. 2D imaging and 3D sensing data acquisition and mutual registration for painting conservation

    NASA Astrophysics Data System (ADS)

    Fontana, Raffaella; Gambino, Maria Chiara; Greco, Marinella; Marras, Luciano; Pampaloni, Enrico M.; Pelagotti, Anna; Pezzati, Luca; Poggi, Pasquale

    2005-01-01

    We describe the application of 2D and 3D data acquisition and mutual registration to the conservation of paintings. RGB color image acquisition, IR and UV fluorescence imaging, and the more recent hyperspectral imaging (32 bands) are among the most useful techniques in this field. They are generally meant to provide information on the painting materials, the techniques employed, and the object's state of conservation. However, only when the various images are precisely registered with each other and with the 3D model can ambiguities be resolved and safe conclusions drawn. We present the integration of 2D and 3D measurements carried out on two different paintings: "Madonna of the Yarnwinder" by Leonardo da Vinci, and "Portrait of Lionello d'Este" by Pisanello, both painted in the XV century.

  13. Synthesis of 3D Model of a Magnetic Field-Influenced Body from a Single Image

    NASA Technical Reports Server (NTRS)

    Wang, Cuilan; Newman, Timothy; Gallagher, Dennis

    2006-01-01

    A method for recovery of a 3D model of a cloud-like structure that is in motion and deforming but approximately governed by magnetic field properties is described. The method allows recovery of the model from a single intensity image in which the structure's silhouette can be observed. The method exploits envelope theory and a magnetic field model. Given one intensity image and the segmented silhouette in the image, the method proceeds without human intervention to produce the 3D model. In addition to allowing 3D model synthesis, the method's capability to yield a very compact description offers further utility. Application of the method to several real-world images is demonstrated.

  14. 3D image copyright protection based on cellular automata transform and direct smart pixel mapping

    NASA Astrophysics Data System (ADS)

    Li, Xiao-Wei; Kim, Seok-Tae; Lee, In-Kwon

    2014-10-01

    We propose a three-dimensional (3D) watermarking system with the direct smart pixel mapping algorithm to improve the resolution of the reconstructed 3D watermark plane images. The depth-converted elemental image array (EIA) is obtained through the computational pixel mapping method. In the watermark embedding process, the depth-converted EIA is first scrambled by using the Arnold transform, which is then embedded in the middle frequency of the cellular automata (CA) transform. Compared with conventional computational integral imaging reconstruction (CIIR) methods, this proposed scheme gives us a higher resolution of the reconstructed 3D plane images by using the quality-enhanced depth-converted EIA. The proposed method, which can obtain many transform planes for embedding watermark data, uses CA transforms with various gateway values. To prove the effectiveness of the proposed method, we present the results of our preliminary experiments.
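The Arnold transform used in the embedding step above is a simple, invertible pixel permutation. The following is a minimal sketch (the function name and the pure-Python image representation are my own; the CA transform itself is omitted) showing the scrambling and its periodicity, which is what allows the watermark to be descrambled exactly:

```python
def arnold_scramble(img, iterations=1):
    """Scramble a square image with the Arnold cat map.

    Pixel (x, y) moves to ((x + y) mod N, (x + 2y) mod N). The map is a
    permutation with a finite period, so iterating it enough times
    returns the original image -- the basis for exact descrambling.
    """
    n = len(img)
    out = img
    for _ in range(iterations):
        nxt = [[0] * n for _ in range(n)]
        for y in range(n):
            for x in range(n):
                nxt[(x + 2 * y) % n][(x + y) % n] = out[y][x]
        out = nxt
    return out
```

For a 2x2 image the period of the map is 3, so three applications restore the input; real elemental image arrays are larger and have correspondingly larger periods.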

  15. Quantitative Morphological and Biochemical Studies on Human Downy Hairs using 3-D Quantitative Phase Imaging

    E-print Network

    Lee, SangYun; Lee, Yuhyun; Park, Sungjin; Shin, Heejae; Yang, Jongwon; Ko, Kwanhong; Park, HyunJoo; Park, YongKeun

    2015-01-01

    This study presents the morphological and biochemical findings on human downy arm hairs using 3-D quantitative phase imaging techniques. 3-D refractive index tomograms and high-resolution 2-D synthetic aperture images of individual downy arm hairs were measured using a Mach-Zehnder laser interferometric microscope equipped with a two-axis galvanometer mirror. From the measured quantitative images, the biochemical and morphological parameters of downy hairs were non-invasively quantified, including the mean refractive index, volume, cylinder, and effective radius of individual hairs. In addition, the effects of hydrogen peroxide on individual downy hairs were investigated.

  16. Midsagittal plane extraction from brain images based on 3D SIFT.

    PubMed

    Wu, Huisi; Wang, Defeng; Shi, Lin; Wen, Zhenkun; Ming, Zhong

    2014-03-21

    Midsagittal plane (MSP) extraction from 3D brain images is considered as a promising technique for human brain symmetry analysis. In this paper, we present a fast and robust MSP extraction method based on 3D scale-invariant feature transform (SIFT). Unlike the existing brain MSP extraction methods, which mainly rely on the gray similarity, 3D edge registration or parameterized surface matching to determine the fissure plane, our proposed method is based on distinctive 3D SIFT features, in which the fissure plane is determined by parallel 3D SIFT matching and iterative least-median of squares plane regression. By considering the relative scales, orientations and flipped descriptors between two 3D SIFT features, we propose a novel metric to measure the symmetry magnitude for 3D SIFT features. By clustering and indexing the extracted SIFT features using a k-dimensional tree (KD-tree) implemented on graphics processing units, we can match multiple pairs of 3D SIFT features in parallel and solve the optimal MSP on-the-fly. The proposed method is evaluated by synthetic and in vivo datasets, of normal and pathological cases, and validated by comparisons with the state-of-the-art methods. Experimental results demonstrated that our method has achieved a real-time performance with better accuracy yielding an average yaw angle error below 0.91° and an average roll angle error no more than 0.89°. PMID:24583964

  17. Midsagittal plane extraction from brain images based on 3D SIFT

    NASA Astrophysics Data System (ADS)

    Wu, Huisi; Wang, Defeng; Shi, Lin; Wen, Zhenkun; Ming, Zhong

    2014-03-01

    Midsagittal plane (MSP) extraction from 3D brain images is considered as a promising technique for human brain symmetry analysis. In this paper, we present a fast and robust MSP extraction method based on 3D scale-invariant feature transform (SIFT). Unlike the existing brain MSP extraction methods, which mainly rely on the gray similarity, 3D edge registration or parameterized surface matching to determine the fissure plane, our proposed method is based on distinctive 3D SIFT features, in which the fissure plane is determined by parallel 3D SIFT matching and iterative least-median of squares plane regression. By considering the relative scales, orientations and flipped descriptors between two 3D SIFT features, we propose a novel metric to measure the symmetry magnitude for 3D SIFT features. By clustering and indexing the extracted SIFT features using a k-dimensional tree (KD-tree) implemented on graphics processing units, we can match multiple pairs of 3D SIFT features in parallel and solve the optimal MSP on-the-fly. The proposed method is evaluated by synthetic and in vivo datasets, of normal and pathological cases, and validated by comparisons with the state-of-the-art methods. Experimental results demonstrated that our method has achieved a real-time performance with better accuracy yielding an average yaw angle error below 0.91° and an average roll angle error no more than 0.89°.
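The least-median-of-squares plane regression step described above can be illustrated in isolation. The sketch below is an assumption-laden simplification (the function names and trial count are mine, and the 3D SIFT matching that produces the candidate midpoints is not shown): it fits a plane robustly by minimizing the median squared point-to-plane distance over random 3-point samples, so a minority of outliers cannot pull the plane away.

```python
import random

def plane_from_points(p1, p2, p3):
    """Plane through three points as (normal, d) with normal . x = d."""
    ux, uy, uz = (p2[i] - p1[i] for i in range(3))
    vx, vy, vz = (p3[i] - p1[i] for i in range(3))
    n = (uy * vz - uz * vy, uz * vx - ux * vz, ux * vy - uy * vx)
    d = sum(n[i] * p1[i] for i in range(3))
    return n, d

def lmeds_plane(points, trials=200, seed=0):
    """Least-median-of-squares plane fit over random 3-point samples."""
    rng = random.Random(seed)
    best, best_med = None, float("inf")
    for _ in range(trials):
        n, d = plane_from_points(*rng.sample(points, 3))
        norm2 = sum(c * c for c in n)
        if norm2 == 0:
            continue  # degenerate (collinear) sample
        res = sorted((sum(n[i] * p[i] for i in range(3)) - d) ** 2 / norm2
                     for p in points)
        med = res[len(res) // 2]
        if med < best_med:
            best_med, best = med, (n, d)
    return best, best_med
```

With more than half of the points lying on the true plane, the median residual of the correct model is essentially zero even in the presence of gross outliers.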

  18. Retinal layer segmentation of macular OCT images using boundary classification.

    PubMed

    Lang, Andrew; Carass, Aaron; Hauser, Matthew; Sotirchos, Elias S; Calabresi, Peter A; Ying, Howard S; Prince, Jerry L

    2013-07-01

    Optical coherence tomography (OCT) has proven to be an essential imaging modality for ophthalmology and is proving to be very important in neurology. OCT enables high resolution imaging of the retina, both at the optic nerve head and the macula. Macular retinal layer thicknesses provide useful diagnostic information and have been shown to correlate well with measures of disease severity in several diseases. Since manual segmentation of these layers is time consuming and prone to bias, automatic segmentation methods are critical for full utilization of this technology. In this work, we build a random forest classifier to segment eight retinal layers in macular cube images acquired by OCT. The random forest classifier learns the boundary pixels between layers, producing an accurate probability map for each boundary, which is then processed to finalize the boundaries. Using this algorithm, we can accurately segment the entire retina contained in the macular cube to an accuracy of at least 4.3 microns for any of the nine boundaries. Experiments were carried out on both healthy and multiple sclerosis subjects, with no difference in the accuracy of our algorithm found between the groups. PMID:23847738

  19. Automated seed detection and 3D reconstruction I: Seed Localization from Fluoroscopic Images or Radiographs.

    E-print Network

    Pouliot, Jean

    A method for the automated localization of radioactive seeds on fluoroscopic images or scanned radiographs is presented. From the extracted positions, 92% of the seeds are detected automatically, and the orientation is found with an error lower than 5

  20. Integer wavelet transformations with predictive coding improves 3-D similar image set compression

    E-print Network

    Qi, Xiaojun

    Medical imaging modalities such as magnetic resonance imaging (MRI), computed tomography (CT), positron emission tomography (PET) and single-photon emission computed tomography (SPECT) produce multiple slices, sometimes referred to as a 3-D image set. Compression improvement is demonstrated with 3-D sets of magnetic resonance (MR) brain images.

  1. Curvature histogram features for retrieval of images of smooth 3D objects

    NASA Astrophysics Data System (ADS)

    Zhdanov, I.; Scherbakov, O.; Potapov, A.; Peterson, M.

    2014-09-01

    We consider image features based on histograms of oriented gradients (HOG) with the addition of a contour curvature histogram (HOG-CH), and compare them with results of the well-known scale-invariant feature transform (SIFT) approach as applied to the retrieval of images of smooth 3D objects.
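A contour curvature histogram of the kind described can be sketched with turning angles along a closed polygonal contour. This is a hedged illustration: the binning scheme and function name are my own, and the HOG part and descriptor matching are omitted.

```python
import math

def curvature_histogram(contour, bins=8):
    """Turning-angle curvature at each vertex of a closed 2-D contour,
    binned into a normalized histogram over [-pi, pi)."""
    n = len(contour)
    hist = [0.0] * bins
    for i in range(n):
        x0, y0 = contour[i - 1]            # wraps to the last vertex at i=0
        x1, y1 = contour[i]
        x2, y2 = contour[(i + 1) % n]
        a1 = math.atan2(y1 - y0, x1 - x0)  # incoming edge direction
        a2 = math.atan2(y2 - y1, x2 - x1)  # outgoing edge direction
        turn = (a2 - a1 + math.pi) % (2 * math.pi) - math.pi
        b = min(int((turn + math.pi) / (2 * math.pi) * bins), bins - 1)
        hist[b] += 1.0
    total = sum(hist)
    return [h / total for h in hist]
```

For a counter-clockwise square every turn is +pi/2, so all of the mass lands in one bin; smooth contours spread mass across the low-curvature bins instead.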

  2. High-Resolution Isotropic 3D Diffusion Tensor Imaging of the Human Brain

    E-print Network

    Jiang, Hangyi

    Keywords: diffusion tensor imaging; high resolution; 3D isotropic imaging; white matter; brainstem. Diffusion tensor imaging (2,3) has proved to be very useful in the study of axonal structures in animals (4,5) and humans. Although successfully used on small animals (4,5,16), it has never been used on humans, which is a consequence

  3. Calibration and 3D Measurement from Martian Terrain Images Maarten Vergauwen, Marc Pollefeys

    E-print Network

    Pollefeys, Marc

    In this paper a new approach for calibration and 3D measurement from Martian terrain images is presented, building on recent developments in computer vision. The calibration is retrieved from the images of the Mars terrain themselves.

  4. Photon-counting compressive sensing laser radar for 3D imaging.

    PubMed

    Howland, G A; Dixon, P B; Howell, J C

    2011-11-01

    We experimentally demonstrate a photon-counting, single-pixel, laser radar camera for 3D imaging where transverse spatial resolution is obtained through compressive sensing without scanning. We use this technique to image through partially obscuring objects, such as camouflage netting. Our implementation improves upon pixel-array based designs with a compact, resource-efficient design and highly scalable resolution. PMID:22086015

  5. Video Compression with 3-D Pose Tracking, PDE-based Image Coding, and Electrostatic Halftoning

    E-print Network

    Recent video compression algorithms such as the members of the MPEG or H.26x family are used in applications such as security surveillance or video conferencing. Our video compression algorithm tracks moving foreground

  6. AN APPROACH FOR INTERSUBJECT ANALYSIS OF 3D BRAIN IMAGES BASED ON CONFORMAL GEOMETRY

    E-print Network

    Hua, Jing

    Imaging modalities such as Positron Emission Tomography (PET) and Diffusion Tensor Imaging (DTI) have accelerated brain research in many aspects. In order to better understand the synergy of the many processes involved in normal brain function

  7. Head Modeling from Pictures and Morphing in 3D with Image Metamorphosis based on triangulation

    E-print Network

    Lee, WonSook

    Head modeling from pictures and 3D morphing with texture metamorphosis are presented. There are various approaches to reconstructing a realistic person for animation, e.g. using a laser scanner. Other techniques for metamorphosis, or "morphing", involve the transformation between 2D images

  8. 3D Scanning Transmission Electron Microscopy for Catalysts: Imaging and Data Analysis

    E-print Network

    Abidi, Mongi A.

    Recent advances have revolutionized electron microscopy, for the first time allowing direct imaging of sub-angstrom atomic spacings.

  9. 3D gaze tracking method using Purkinje images on eye optical model and pupil

    NASA Astrophysics Data System (ADS)

    Lee, Ji Woo; Cho, Chul Woo; Shin, Kwang Yong; Lee, Eui Chul; Park, Kang Ryoung

    2012-05-01

    Gaze tracking detects the position a user is looking at. Most research on gaze estimation has focused on calculating the X, Y gaze position on a 2D plane. However, as the importance of stereoscopic displays and 3D applications has increased greatly, research into 3D gaze estimation of not only the X, Y gaze position, but also the Z gaze position has gained attention for the development of next-generation interfaces. In this paper, we propose a new method for estimating the 3D gaze position based on the illuminative reflections (Purkinje images) on the surface of the cornea and lens by considering the 3D optical structure of the human eye model. This research is novel in the following four ways compared with previous work. First, we theoretically analyze the generated models of Purkinje images based on the 3D human eye model for 3D gaze estimation. Second, the relative positions of the first and fourth Purkinje images to the pupil center, the inter-distance between these two Purkinje images, and the pupil size are used as the features for calculating the Z gaze position. The pupil size is used on the basis of the fact that pupil accommodation happens according to the gaze positions in the Z direction. Third, with these features as inputs, the final Z gaze position is calculated using a multi-layered perceptron (MLP). Fourth, the X, Y gaze position on the 2D plane is calculated by the position of the pupil center based on a geometric transform considering the calculated Z gaze position. Experimental results showed that the average errors of the 3D gaze estimation were about 0.96° (0.48 cm) on the X-axis, 1.60° (0.77 cm) on the Y-axis, and 4.59 cm along the Z-axis in 3D space.

  10. Automated segmentation of retinal pigment epithelium cells in fluorescence adaptive optics images

    E-print Network

    Fluorescence adaptive optics imaging enables individual retinal cells, such as photoreceptors and retinal pigment epithelium (RPE) cells, to be studied in vivo. Methods have been developed to detect the position of individual photoreceptor cells in the high-resolution images; however, most of these methods

  11. MO-C-18A-01: Advances in Model-Based 3D Image Reconstruction

    SciTech Connect

    Chen, G; Pan, X; Stayman, J; Samei, E

    2014-06-15

    Recent years have seen the emergence of CT image reconstruction techniques that exploit physical models of the imaging system, photon statistics, and even the patient to achieve improved 3D image quality and/or reduction of radiation dose. With numerous advantages in comparison to conventional 3D filtered backprojection, such techniques bring a variety of challenges as well, including: a demanding computational load associated with sophisticated forward models and iterative optimization methods; nonlinearity and nonstationarity in image quality characteristics; a complex dependency on multiple free parameters; and the need to understand how best to incorporate prior information (including patient-specific prior images) within the reconstruction process. The advantages, however, are even greater – for example: improved image quality; reduced dose; robustness to noise and artifacts; task-specific reconstruction protocols; suitability to novel CT imaging platforms and noncircular orbits; and incorporation of known characteristics of the imager and patient that are conventionally discarded. This symposium features experts in 3D image reconstruction, image quality assessment, and the translation of such methods to emerging clinical applications. Dr. Chen will address novel methods for the incorporation of prior information in 3D and 4D CT reconstruction techniques. Dr. Pan will show recent advances in optimization-based reconstruction that enable potential reduction of dose and sampling requirements. Dr. Stayman will describe a “task-based imaging” approach that leverages models of the imaging system and patient in combination with a specification of the imaging task to optimize both the acquisition and reconstruction process. Dr. 
Samei will describe the development of methods for image quality assessment in such nonlinear reconstruction techniques and the use of these methods to characterize and optimize image quality and dose in a spectrum of clinical applications. Learning Objectives: Learn the general methodologies associated with model-based 3D image reconstruction. Learn the potential advantages in image quality and dose associated with model-based image reconstruction. Learn the challenges associated with computational load and image quality assessment for such reconstruction methods. Learn how imaging task can be incorporated as a means to drive optimal image acquisition and reconstruction techniques. Learn how model-based reconstruction methods can incorporate prior information to improve image quality, ease sampling requirements, and reduce dose.

  12. Real Time Quantitative 3-D Imaging of Diffusion Flame Species

    NASA Technical Reports Server (NTRS)

    Kane, Daniel J.; Silver, Joel A.

    1997-01-01

    A low-gravity environment, in space or ground-based facilities such as drop towers, provides a unique setting for study of combustion mechanisms. Understanding the physical phenomena controlling the ignition and spread of flames in microgravity has importance for space safety as well as better characterization of dynamical and chemical combustion processes which are normally masked by buoyancy and other gravity-related effects. Even the use of so-called 'limiting cases' or the construction of 1-D or 2-D models and experiments fails to make the analysis of combustion simultaneously simple and accurate. Ideally, to bridge the gap between chemistry and fluid mechanics in microgravity combustion, species concentrations and temperature profiles are needed throughout the flame. However, restrictions associated with performing measurements in reduced gravity, especially size and weight considerations, have generally limited microgravity combustion studies to the capture of flame emissions on film or video, laser Schlieren imaging, and (intrusive) temperature measurements using thermocouples. Given the development of detailed theoretical models, more sophisticated studies are needed to provide the kind of quantitative data necessary to characterize the properties of microgravity combustion processes as well as provide accurate feedback to improve the predictive capabilities of the computational models. While there have been a myriad of fluid mechanical visualization studies in microgravity combustion, little experimental work has been completed to obtain reactant and product concentrations within a microgravity flame. This is largely due to the fact that traditional sampling methods (quenching microprobes using GC and/or mass spec analysis) are too heavy, slow, and cumbersome for microgravity experiments. Non-intrusive optical spectroscopic techniques have, up until now, also required excessively bulky, power-hungry equipment.
However, with the advent of near-IR diode lasers, the possibility now exists to obtain reactant and product concentrations and temperatures non-intrusively in microgravity combustion studies. Over the past ten years, Southwest Sciences has focused its research on the high sensitivity, quantitative detection of gas phase species using diode lasers. Our research approach combines three innovations in an experimental system resulting in a new capability for nonintrusive measurement of major combustion species. FM spectroscopy or high frequency Wavelength Modulation Spectroscopy (WMS) have recently been applied to sensitive absorption measurements at Southwest Sciences and in other laboratories using GaAlAs or InGaAsP diode lasers in the visible or near-infrared as well as lead-salt lasers in the mid-infrared spectral region. Because these lasers exhibit essentially no source noise at the high detection frequencies employed with this technique, the achievement of sensitivity approaching the detector shot noise limit is possible.

  13. Integration of Video Images and CAD Wireframes for 3d Object Localization

    NASA Astrophysics Data System (ADS)

    Persad, R. A.; Armenakis, C.; Sohn, G.

    2012-07-01

    The tracking of moving objects from single images has received widespread attention in photogrammetric computer vision and is considered to be at a state of maturity. This paper presents a model-driven solution for localizing moving objects detected from monocular, rotating and zooming video images in a 3D reference frame. To realize such a system, the recovery of 2D to 3D projection parameters is essential. Automatic estimation of these parameters is critical, particularly for pan-tilt-zoom (PTZ) surveillance cameras where parameters change spontaneously upon camera motion. In this work, an algorithm for automated parameter retrieval is proposed. This is achieved by matching linear features between incoming images from video sequences and simple geometric 3D CAD wireframe models of man-made structures. The feature matching scheme uses a hypothesis-verify optimization framework referred to as LR-RANSAC. This novel method improves the computational efficiency of the matching process in comparison to the standard RANSAC robust estimator. To demonstrate the applicability and performance of the method, experiments have been performed on indoor and outdoor image sequences under varying conditions with lighting changes and occlusions. Reliability of the matching algorithm has been analyzed by comparing the automatically determined camera parameters with ground truth (GT). Dependability of the retrieved parameters for 3D localization has also been assessed by comparing the difference between 3D positions of moving image objects estimated using the LR-RANSAC-derived parameters and those computed using GT parameters.
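The standard RANSAC estimator that LR-RANSAC is compared against can be sketched as follows. This is a baseline illustration under simplifying assumptions, not the paper's method: a 2D line model stands in for the paper's 2D-3D line-feature matching, and the function name and parameters are mine.

```python
import random

def ransac_line(points, iters=100, tol=0.1, seed=1):
    """Plain RANSAC: repeatedly fit a line to 2 random points and keep
    the model with the most inliers (distance to line within tol)."""
    rng = random.Random(seed)
    best_inliers = []
    for _ in range(iters):
        (x1, y1), (x2, y2) = rng.sample(points, 2)
        a, b = y2 - y1, x1 - x2            # line: a*x + b*y + c = 0
        c = -(a * x1 + b * y1)
        norm = (a * a + b * b) ** 0.5
        if norm == 0:
            continue                        # degenerate sample
        inliers = [p for p in points
                   if abs(a * p[0] + b * p[1] + c) / norm <= tol]
        if len(inliers) > len(best_inliers):
            best_inliers = inliers
    return best_inliers
```

The hypothesis-verify loop is what LR-RANSAC accelerates by ranking hypotheses before full verification, according to the abstract above.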

  14. 3D Texture Analysis in Renal Cell Carcinoma Tissue Image Grading

    PubMed Central

    Cho, Nam-Hoon; Choi, Heung-Kook

    2014-01-01

    One of the most significant processes in cancer cell and tissue image analysis is the efficient extraction of features for grading purposes. This research applied two types of three-dimensional texture analysis methods to the extraction of feature values from renal cell carcinoma tissue images, and then evaluated the validity of the methods statistically through grade classification. First, we used a confocal laser scanning microscope to obtain image slices of four grades of renal cell carcinoma, which were then reconstructed into 3D volumes. Next, we extracted quantitative values using a 3D gray level cooccurrence matrix (GLCM) and a 3D wavelet based on two types of basis functions. To evaluate their validity, we predefined 6 different statistical classifiers and applied these to the extracted feature sets. In the grade classification results, 3D Haar wavelet texture features combined with principal component analysis showed the best discrimination results. Classification using 3D wavelet texture features was significantly better than 3D GLCM, suggesting that the former has potential for use in a computer-based grading system. PMID:25371701
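The 3D GLCM feature extraction mentioned above can be sketched in minimal form. This is a hedged illustration: the volume is a nested list indexed [z][y][x], only one displacement vector and one Haralick-style statistic (contrast) are shown, and the names are my own.

```python
def glcm_3d(volume, levels, offset=(1, 0, 0)):
    """Gray-level co-occurrence counts for one 3-D displacement vector."""
    dz, dy, dx = offset
    nz, ny, nx = len(volume), len(volume[0]), len(volume[0][0])
    glcm = [[0] * levels for _ in range(levels)]
    for z in range(nz):
        for y in range(ny):
            for x in range(nx):
                z2, y2, x2 = z + dz, y + dy, x + dx
                if 0 <= z2 < nz and 0 <= y2 < ny and 0 <= x2 < nx:
                    glcm[volume[z][y][x]][volume[z2][y2][x2]] += 1
    return glcm

def contrast(glcm):
    """Haralick contrast: sum of (i-j)^2 weighted by pair frequency."""
    total = sum(sum(row) for row in glcm)
    return sum((i - j) ** 2 * v
               for i, row in enumerate(glcm)
               for j, v in enumerate(row)) / total
```

A homogeneous volume yields zero contrast, while gray levels alternating along the offset direction maximize it; a full feature set would repeat this over several offsets and statistics.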

  15. Digital holographic microscopy for imaging growth and treatment response in 3D tumor models

    NASA Astrophysics Data System (ADS)

    Li, Yuyu; Petrovic, Ljubica; Celli, Jonathan P.; Yelleswarapu, Chandra S.

    2014-03-01

    While three-dimensional tumor models have emerged as valuable tools in cancer research, the ability to longitudinally visualize the 3D tumor architecture restored by these systems is limited with microscopy techniques that provide only qualitative insight into sample depth, or which require terminal fixation for depth-resolved 3D imaging. Here we report the use of digital holographic microscopy (DHM) as a viable microscopy approach for quantitative, non-destructive longitudinal imaging of in vitro 3D tumor models. Following established methods we prepared 3D cultures of pancreatic cancer cells in overlay geometry on extracellular matrix beds and obtained digital holograms at multiple timepoints throughout the duration of growth. The holograms were digitally processed and the unwrapped phase images were obtained to quantify nodule thickness over time under normal growth, and in cultures subject to chemotherapy treatment. In this manner total nodule volumes are rapidly estimated and demonstrated here to show contrasting time dependent changes during growth and in response to treatment. This work suggests the utility of DHM to quantify changes in 3D structure over time and suggests the further development of this approach for time-lapse monitoring of 3D morphological changes during growth and in response to treatment that would otherwise be impractical to visualize.
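The phase-to-thickness conversion underlying the nodule volume estimates can be sketched as below. The wavelength, the refractive-index difference between nodule and medium, and the function names are illustrative assumptions of mine, not values from the paper.

```python
import math

def thickness_map(phase, wavelength_um, delta_n):
    """Unwrapped phase (radians) to thickness: t = lam * phi / (2*pi*dn)."""
    scale = wavelength_um / (2.0 * math.pi * delta_n)
    return [[p * scale for p in row] for row in phase]

def nodule_volume(thickness, pixel_area_um2):
    """Integrate the thickness map over the field of view."""
    return pixel_area_um2 * sum(sum(row) for row in thickness)
```

Tracking this volume estimate across timepoints is what gives the contrasting growth and treatment-response curves described in the abstract.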

  16. Analysis of scalability of high-performance 3D image processing platform for virtual colonoscopy.

    PubMed

    Yoshida, Hiroyuki; Wu, Yin; Cai, Wenli

    2014-03-19

    One of the key challenges in three-dimensional (3D) medical imaging is to enable the fast turn-around time, which is often required for interactive or real-time response. This inevitably requires not only high computational power but also high memory bandwidth due to the massive amount of data that need to be processed. For this purpose, we previously developed a software platform for high-performance 3D medical image processing, called HPC 3D-MIP platform, which employs increasingly available and affordable commodity computing systems such as the multicore, cluster, and cloud computing systems. To achieve scalable high-performance computing, the platform employed size-adaptive, distributable block volumes as a core data structure for efficient parallelization of a wide range of 3D-MIP algorithms, supported task scheduling for efficient load distribution and balancing, and consisted of layered parallel software libraries that allow image processing applications to share the common functionalities. We evaluated the performance of the HPC 3D-MIP platform by applying it to computationally intensive processes in virtual colonoscopy. Experimental results showed a 12-fold performance improvement on a workstation with 12-core CPUs over the original sequential implementation of the processes, indicating the efficiency of the platform. Analysis of performance scalability based on Amdahl's law for symmetric multicore chips showed the potential for high performance scalability of the HPC 3D-MIP platform when a larger number of cores is available. PMID:24910506
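The Amdahl's-law analysis mentioned at the end of the abstract rests on the standard formula S(n) = 1 / ((1 - p) + p/n), where p is the parallelizable fraction of the workload and n the core count. A minimal sketch (function name is mine):

```python
def amdahl_speedup(parallel_fraction, cores):
    """Amdahl's law for a fixed workload: S(n) = 1 / ((1-p) + p/n)."""
    p = parallel_fraction
    return 1.0 / ((1.0 - p) + p / cores)
```

A 12-fold gain on 12 cores implies the measured workload is almost entirely parallel; with p = 0.95 the bound on 12 cores would be only about 7.7x, and as the core count grows the speedup saturates at 1/(1-p).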

  17. Bio-medical imaging: Localization of main structures in retinal fundus images

    NASA Astrophysics Data System (ADS)

    Basit, A.; Egerton, S. J.

    2013-12-01

    Retinal fundus images have three main structures: the optic disk, the fovea and the blood vessels. By examining fundus images, an ophthalmologist can diagnose various clinical disorders of the eye and the body, typically indicated by changes in the diameter, area, branching angles and tortuosity of the three main retinal structures. Knowledge of the optic disk position is an important diagnostic index for many diseases related to the retina. In this paper, localization of the optic disk is discussed. Optic disk detection is based on morphological operations and smoothing filters. Blood vessels are extracted using the green component of a colour retinal image with the help of a median filter. Maximum intensity values are validated with blood vessels to localize the optic disk location. The proposed method has shown significant improvements in results.

  18. Subnuclear foci quantification using high-throughput 3D image cytometry

    NASA Astrophysics Data System (ADS)

    Wadduwage, Dushan N.; Parrish, Marcus; Choi, Heejin; Engelward, Bevin P.; Matsudaira, Paul; So, Peter T. C.

    2015-07-01

    Ionising radiation causes various types of DNA damage including double strand breaks (DSBs). DSBs are often recognized by the DNA repair protein ATM, which forms gamma-H2AX foci at the site of the DSBs that can be visualized using immunohistochemistry. However, most such experiments are of low throughput in terms of imaging and image analysis techniques. Most of the studies still use manual counting or classification. Hence they are limited to counting a low number of foci per cell (5 foci per nucleus) as the quantification process is extremely labour intensive. Therefore we have developed a high throughput instrumentation and computational pipeline specialized for gamma-H2AX foci quantification. A population of cells with highly clustered foci inside nuclei was imaged, in 3D with submicron resolution, using an in-house developed high throughput image cytometer. Imaging speeds as high as 800 cells/second in 3D were achieved by using HiLo wide-field depth-resolved imaging and a remote z-scanning technique. The number of foci per cell nucleus was then quantified using a 3D extended maxima transform based algorithm. Our results suggest that while most other 2D imaging and manual quantification studies can count only up to about 5 foci per nucleus, our method is capable of counting more than 100. Moreover we show that 3D analysis is significantly superior compared to the 2D techniques.
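The per-nucleus foci count can be approximated by connected-component labelling of bright voxels. The sketch below is a simplified stand-in for the extended maxima transform used in the paper (the threshold, connectivity choice and names are my own assumptions): it counts 6-connected components of voxels above a fixed threshold.

```python
from collections import deque

def count_foci(volume, threshold):
    """Count 6-connected components of above-threshold voxels in a 3-D
    volume (nested lists indexed [z][y][x])."""
    dim_z, dim_y, dim_x = len(volume), len(volume[0]), len(volume[0][0])
    seen = set()
    count = 0
    for z in range(dim_z):
        for y in range(dim_y):
            for x in range(dim_x):
                if volume[z][y][x] > threshold and (z, y, x) not in seen:
                    count += 1                      # new focus found
                    queue = deque([(z, y, x)])
                    seen.add((z, y, x))
                    while queue:                    # flood-fill the focus
                        cz, cy, cx = queue.popleft()
                        for dz, dy, dx in ((1, 0, 0), (-1, 0, 0), (0, 1, 0),
                                           (0, -1, 0), (0, 0, 1), (0, 0, -1)):
                            z2, y2, x2 = cz + dz, cy + dy, cx + dx
                            if (0 <= z2 < dim_z and 0 <= y2 < dim_y and
                                    0 <= x2 < dim_x and
                                    (z2, y2, x2) not in seen and
                                    volume[z2][y2][x2] > threshold):
                                seen.add((z2, y2, x2))
                                queue.append((z2, y2, x2))
    return count
```

The extended maxima transform additionally suppresses maxima shallower than a height parameter h before labelling, which is what lets densely clustered foci be separated; that step is omitted here.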

  19. Non-invasive single-shot 3D imaging through a scattering layer using speckle interferometry

    E-print Network

    Somkuwar, Atul S; R., Vinu; Park, Yongkeun; Singh, Rakesh Kumar

    2015-01-01

    Optical imaging through complex scattering media is one of the major technical challenges, with important applications in many research fields ranging from biomedical imaging and astronomical telescopy to spatially multiplexed optical communications. Although various approaches for imaging through a turbid layer have been recently proposed, they have been limited to two-dimensional imaging. Here we propose and experimentally demonstrate an approach for three-dimensional single-shot imaging of objects hidden behind an opaque scattering layer. We demonstrate that, under suitable conditions, it is possible to perform 3D imaging to reconstruct the complex amplitude of objects situated at different depths.

  20. Satellite-borne high-resolution 3D active imaging lidar

    NASA Astrophysics Data System (ADS)

    Zhang, Fangpei; Xue, Haizhong; Liu, Zhongjie; Zhang, Yubing; Xing, Yuhua; Dong, Guangyan; Wang, Shengguo; Wu, Xiafei; Song, Yingxiang

    2011-06-01

    Owing to notable advantages in range, resolution and accuracy, satellite-borne high-resolution 3D imaging lidar has found widespread applications in aerospace reconnaissance, deep-space detection, earth observation, disaster evaluation, and so on. Based on the principle of 3D laser imaging, typical satellite-borne high-resolution 3D active imaging lidar systems are reviewed and the development trend is analyzed. It can be concluded that the operating mechanism will shift from direct detection to coherent detection, and that diode-pumped solid-state lasers will give way to fiber lasers. In addition, advanced synthetic aperture and array detection technology should be adopted for higher range resolution.

  1. Modeling Images of Natural 3D Surfaces: Overview and Potential Applications

    NASA Technical Reports Server (NTRS)

    Jalobeanu, Andre; Kuehnel, Frank; Stutz, John

    2004-01-01

    Generative models of natural images have long been used in computer vision. However, since they only describe the appearance of 2D scenes, they fail to capture all the properties of the underlying 3D world. Even though such models are sufficient for many vision tasks, a 3D scene model is required when it comes to inferring a 3D object or its characteristics. In this paper, we present such a generative model, incorporating both a multiscale surface prior model for surface geometry and reflectance, and an image formation process model based on realistic rendering. We focus on the computation of the posterior model parameter densities and on the critical aspects of the rendering, and we also show how to efficiently invert the model within a Bayesian framework. We present a few potential applications, such as asteroid modeling and planetary topography recovery, illustrated by promising results on real images.

  2. Acquiring 3-D information about thick objects from differential interference contrast images using texture extraction.

    PubMed

    Sierra, Heidy; Brooks, Dana; DiMarzio, Charles

    2010-01-01

    The extraction of 3-D morphological information about thick objects is explored in this work. We extract this information from 3-D differential interference contrast (DIC) images by applying a texture detection method. Texture extraction methods have been successfully used in different applications to study biological samples. A 3-D texture image is obtained by applying a local entropy-based texture extraction method. The use of this method to detect regions of blastocyst mouse embryos that are used in assisted reproduction techniques such as in vitro fertilization is presented as an example. Results demonstrate the potential of using texture detection methods to improve morphological analysis of thick samples, which is relevant to many biomedical and biological studies. Fluorescence and optical quadrature microscope phase images are used for validation. PMID:20799823
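    The local entropy-based texture extraction described above can be sketched minimally in numpy; the window size and the use of Shannon entropy over raw gray-level counts are illustrative assumptions, not the authors' exact parameters:

    ```python
    import numpy as np

    def local_entropy(image, win=5):
        """Local Shannon entropy (bits) over a sliding window.

        `image` is a 2-D array of integer gray levels; `win` is the
        (odd) window side length. Edges are handled by reflection.
        """
        pad = win // 2
        padded = np.pad(image, pad, mode="reflect")
        h, w = image.shape
        out = np.zeros((h, w))
        for i in range(h):
            for j in range(w):
                patch = padded[i:i + win, j:j + win]
                _, counts = np.unique(patch, return_counts=True)
                p = counts / counts.sum()
                out[i, j] = -np.sum(p * np.log2(p))
        return out
    ```

    A flat region yields zero entropy while textured regions score high, which is the property a 3-D texture map of this kind exploits to delineate morphological regions.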

  3. An improved computer method to prepare 3D magnetic resonance images of thoracic structures.

    PubMed

    Uokawa, K; Nakano, Y; Urayama, S; Uyama, C; Kurokawa, H; Ikeda, K; Koito, H; Tanaka, Y

    1997-05-01

    The mediastinal and cardiovascular anatomy is complex. We have developed a three-dimensional (3D) reconstruction system for the major mediastinal structures using magnetic resonance imaging data on a NeXT workstation. The program uses a combination of automatic and manual procedures to determine the contours of the cardiac structures. The geometric centers of the contours are connected by a 3D space curve, and the central axis of each cardiac structure is determined. The contours are projected onto the plane perpendicular to the central axis and semiautomatically processed until one-pixel-wide contours are obtained. Then surface rendering with transparency is performed. Compositing combines two images so that both appear in the composite, superimposed on each other. Demonstration of the various mediastinal lines and cardiovascular diseases by composites of the partly transparent 3D images has promoted a better understanding of the complex mediastinal and cardiovascular anatomy and diseases. PMID:9165423

  4. Acquiring 3-D information about thick objects from differential interference contrast images using texture extraction

    NASA Astrophysics Data System (ADS)

    Sierra, Heidy; Brooks, Dana; Dimarzio, Charles

    2010-07-01

    The extraction of 3-D morphological information about thick objects is explored in this work. We extract this information from 3-D differential interference contrast (DIC) images by applying a texture detection method. Texture extraction methods have been successfully used in different applications to study biological samples. A 3-D texture image is obtained by applying a local entropy-based texture extraction method. The use of this method to detect regions of blastocyst mouse embryos that are used in assisted reproduction techniques such as in vitro fertilization is presented as an example. Results demonstrate the potential of using texture detection methods to improve morphological analysis of thick samples, which is relevant to many biomedical and biological studies. Fluorescence and optical quadrature microscope phase images are used for validation.

  5. 3D dense local point descriptors for mouse brain gene expression images.

    PubMed

    Le, Yen H; Kurkure, Uday; Kakadiaris, Ioannis A

    2014-07-01

    Anatomical landmarks play an important role in many biomedical image analysis applications (e.g., registration and segmentation). Landmark detection can be computationally very expensive, especially in 3D images, because every single voxel in a region of interest may need to be evaluated. In this paper, we introduce two 3D local image descriptors which can be computed simultaneously for every voxel in a volume. Both our proposed descriptors are extensions of the DAISY descriptor, a popular descriptor that is based on histograms of oriented gradients and was named after its daisy-flower-like configuration. Our experiments on mouse brain gene expression images indicate that our descriptors are discriminative and are able to reduce the detection errors of landmark points by more than 30% when compared with SIFT-3D, a 3D extension of SIFT (scale-invariant feature transform). We also demonstrate that our descriptors are more computationally efficient than SIFT-3D and n-SIFT (an extension of SIFT to n dimensions) for densely sampled points. Therefore, our descriptors can be used in applications that require computation of the descriptors at densely sampled points (e.g., landmark point detection or feature-based registration). PMID:24786719

  6. Photogrammetric calibration and colorization of the SwissRanger SR-3100 3-D range imaging sensor

    NASA Astrophysics Data System (ADS)

    Robbins, Scott; Murawski, Bryan; Schroeder, Brigit

    2009-05-01

    Many robotic and industrial systems require 3-D range-sensing capabilities for mapping, localization, navigation, and obstacle avoidance. Laser-scanning systems that mechanically trace a range-sensing beam over a raster or similar pattern can produce highly accurate models but tend to be bulky and slow when acquiring a significant field of view at useful resolutions. Stereo cameras can provide video-rate range images over significant fields of view but tend to have difficulty with scenes containing low or confusing textures. A new generation of active-light, time-of-flight range sensors uses a 2-D array of sensor elements to produce a 3-D range image at video rates. These sensors pose unique calibration challenges, requiring both the usual calibration of lens distortion (intrinsic calibration) and calibration of the time-of-flight range measurement (3-D calibration). We present our application of a photogrammetric calibration approach using inexpensive printed optical targets and off-the-shelf software to solve both intrinsic and range calibrations for the MESA Imaging SwissRanger SR-3100 range imaging sensor. Specific calibration issues stemming from this sensor's correlation of reflectivity with measured range are identified. We further present the integration of this otherwise grayscale 3-D sensor with an optical camera, providing a full-color, video-rate 3-D sensing solution.

  7. Automatic detection of endothelial cells in 3D angiogenic sprouts from experimental phase contrast images

    NASA Astrophysics Data System (ADS)

    Wang, MengMeng; Ong, Lee-Ling Sharon; Dauwels, Justin; Asada, H. Harry

    2015-03-01

    Cell migration studies in 3D environments are becoming more popular, as cell behaviors in 3D are more similar to the behaviors of cells in a living organism (in vivo). We focus on 3D angiogenic sprouting in microfluidic devices, where endothelial cells (ECs) burrow into the gel matrix and form solid lumen vessels. Phase contrast microscopy is used for long-term observation of the unlabeled ECs in the 3D microfluidic devices. Two template-matching-based approaches are proposed to automatically detect the unlabeled ECs in the angiogenic sprouts from the acquired experimental phase contrast images. Cell and non-cell templates are obtained from these phase contrast images as the training data. The first approach applies Partial Least Squares Regression (PLSR) to find the discriminative features and their corresponding weights to distinguish cells from non-cells, whereas the second approach relies on Principal Component Analysis (PCA) to reduce the template feature dimension and a Support Vector Machine (SVM) to find the corresponding weights. The cells in the test images are detected in a sliding-window manner. We then validate the detection accuracy by comparing the results with the same images acquired with a confocal microscope after the cells are fixed and their nuclei are stained. Approach I (PLSR) yields more accurate cell detection results than approach II (PCA & SVM). Automatic cell detection will aid the understanding of cell migration in 3D environments and in turn result in a better understanding of angiogenesis.
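    The second approach above (PCA for dimensionality reduction followed by a classifier applied in a sliding window) can be sketched as follows. This is a simplified stand-in: a nearest-prototype decision replaces the SVM, and the synthetic blob data and all names are hypothetical, not the authors' templates:

    ```python
    import numpy as np

    def pca_fit(patches, k=3):
        """Fit PCA via SVD on flattened training patches (one per row);
        return the mean patch and the top-k principal axes."""
        mean = patches.mean(axis=0)
        _, _, vt = np.linalg.svd(patches - mean, full_matrices=False)
        return mean, vt[:k]

    def detect(image, mean, axes, proto_cell, proto_noncell, win=5, step=1):
        """Slide a window over the image; project each patch onto the PCA
        axes and keep windows whose coordinates lie closer to the cell
        prototype than to the non-cell prototype."""
        hits = []
        h, w = image.shape
        for i in range(0, h - win + 1, step):
            for j in range(0, w - win + 1, step):
                z = axes @ (image[i:i + win, j:j + win].ravel() - mean)
                if np.linalg.norm(z - proto_cell) < np.linalg.norm(z - proto_noncell):
                    hits.append((i, j))
        return hits
    ```

    On synthetic data, training on bright-blob "cell" patches and flat "non-cell" patches and then scanning a test image recovers the blob location.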

  8. 3D-MSCT imaging of bullet trajectory in 3D crime scene reconstruction: two case reports.

    PubMed

    Colard, T; Delannoy, Y; Bresson, F; Marechal, C; Raul, J S; Hedouin, V

    2013-11-01

    Postmortem investigations are increasingly assisted by three-dimensional multi-slice computed tomography (3D-MSCT), which has become more available to forensic pathologists over the past 20 years. In cases of ballistic wounds, 3D-MSCT can provide an accurate description of the bullet location, bone fractures and, more interestingly, a clear visual of the intracorporeal trajectory (bullet track). These forensic medical examinations can be combined with tridimensional bullet trajectory reconstructions created by forensic ballistic experts. These case reports present the implementation of tridimensional methods and the results of 3D crime scene reconstruction in two cases. The authors highlight the value of collaboration between police forensic experts and forensic medicine institutes through the incorporation of 3D-MSCT data in a crime scene reconstruction, which is of great interest in forensic science as a clear visual communication tool between experts and the court. PMID:23931960

  9. 3D Image-Guided Automatic Pipette Positioning for Single Cell Experiments in vivo.

    PubMed

    Long, Brian; Li, Lu; Knoblich, Ulf; Zeng, Hongkui; Peng, Hanchuan

    2015-01-01

    We report a method to facilitate single cell, image-guided experiments including in vivo electrophysiology and electroporation. Our method combines 3D image data acquisition, visualization and on-line image analysis with precise control of physical probes such as electrophysiology microelectrodes in brain tissue in vivo. Adaptive pipette positioning provides a platform for future advances in automated, single cell in vivo experiments. PMID:26689553

  10. Simultaneous whole-animal 3D imaging of neuronal activity using light-field microscopy.

    PubMed

    Prevedel, Robert; Yoon, Young-Gyu; Hoffmann, Maximilian; Pak, Nikita; Wetzstein, Gordon; Kato, Saul; Schrödel, Tina; Raskar, Ramesh; Zimmer, Manuel; Boyden, Edward S; Vaziri, Alipasha

    2014-07-01

    High-speed, large-scale three-dimensional (3D) imaging of neuronal activity poses a major challenge in neuroscience. Here we demonstrate simultaneous functional imaging of neuronal activity at single-neuron resolution in an entire Caenorhabditis elegans and in larval zebrafish brain. Our technique captures the dynamics of spiking neurons in volumes of ∼700 μm × 700 μm × 200 μm at 20 Hz. Its simplicity makes it an attractive tool for high-speed volumetric calcium imaging. PMID:24836920

  11. 3D Image-Guided Automatic Pipette Positioning for Single Cell Experiments in vivo

    PubMed Central

    Long, Brian; Li, Lu; Knoblich, Ulf; Zeng, Hongkui; Peng, Hanchuan

    2015-01-01

    We report a method to facilitate single cell, image-guided experiments including in vivo electrophysiology and electroporation. Our method combines 3D image data acquisition, visualization and on-line image analysis with precise control of physical probes such as electrophysiology microelectrodes in brain tissue in vivo. Adaptive pipette positioning provides a platform for future advances in automated, single cell in vivo experiments. PMID:26689553

  12. 3D imaging of particle tracks in Solid State Nuclear Track Detectors

    NASA Astrophysics Data System (ADS)

    Wertheim, D.; Gillmore, G.; Brown, L.; Petford, N.

    2009-04-01

    Inhalation of radon gas (222Rn) and associated ionizing decay products is known to cause lung cancer in humans. In the U.K., it has been suggested that 3 to 5% of total lung cancer deaths can be linked to elevated radon concentrations in the home and/or workplace. Radon monitoring in buildings is therefore routinely undertaken in areas of known risk. Indeed, some organisations, such as the Radon Council in the UK and the Environmental Protection Agency in the USA, advocate a 'to test is best' policy. Radon gas occurs naturally, emanating from the decay of 238U in rock and soils. Its concentration can be measured using CR-39 plastic detectors, which conventionally are assessed by 2D image analysis of the surface; however, there can be some variation in outcomes / readings even in closely spaced detectors. A number of radon measurement methods are currently in use (for example, activated carbon and electrets) but the most widely used are CR-39 solid state nuclear track-etch detectors (SSNTDs). In this technique, heavily ionizing alpha particles leave tracks in the form of radiation damage (via interaction between alpha particles and the atoms making up the CR-39 polymer). 3D imaging of the tracks has the potential to provide information relating to the angle and energy of alpha particles, but this could be time consuming. Here we describe a new method for rapid high resolution 3D imaging of SSNTDs. A LEXT OLS3100 confocal laser scanning microscope was used in confocal mode to successfully obtain 3D image data on four CR-39 plastic detectors. 3D visualisation and image analysis enabled characterisation of track features. This method may provide a means of rapid and detailed 3D analysis of SSNTDs. Keywords: Radon; SSNTDs; confocal laser scanning microscope; 3D imaging; LEXT

  13. Using videogrammetry and 3D image reconstruction to identify crime suspects

    NASA Astrophysics Data System (ADS)

    Klasen, Lena M.; Fahlander, Olov

    1997-02-01

    The anthropometry and movements of every individual human being are unique. We identify persons we know by recognizing the way they look and move. By quantifying these measures with image processing methods, they can serve as a tool for police work, complementing the ability of the human eye. The idea is to use virtual 3-D parameterized models of the human body to measure the anthropometry and movements of a crime suspect. The Swedish National Laboratory of Forensic Science, in cooperation with SAAB Military Aircraft, has developed methods for measuring the lengths of persons from video sequences. However, there is much unused information in a digital image sequence from a crime scene. The main purpose of this paper is to give an overview of the current research project at Linkoping University's Image Coding Group, where methods to measure anthropometrical data and movements using virtual 3-D parameterized models of the person in the crime scene are being developed. The length of an individual might vary by up to plus or minus 10 cm depending on whether the person is in an upright position or not. When measuring under the best available conditions, the length still varies within plus or minus 1 cm. Using a full 3-D model provides a rich set of anthropometric measures describing the person in the crime scene. Once such a model is obtained, the movements can be quantified as well. The results depend strongly on the accuracy of the 3-D model, and the strategy for obtaining an accurate 3-D model is to make one estimate per image frame using 3-D scene reconstruction, with an averaged 3-D model as the final result from which the anthropometry and movements are calculated.

  14. Fuzzy zoning for feature matching technique in 3D reconstruction of nasal endoscopic images.

    PubMed

    Rattanalappaiboon, Surapong; Bhongmakapat, Thongchai; Ritthipravat, Panrasee

    2015-12-01

    3D reconstruction from nasal endoscopic images greatly supports an otolaryngologist in examining nasal passages, mucosa, polyps, sinuses, and the nasopharynx. In general, structure from motion is a popular technique. It consists of four main steps: (1) camera calibration, (2) feature extraction, (3) feature matching, and (4) 3D reconstruction. The Scale Invariant Feature Transform (SIFT) algorithm is normally used for both feature extraction and feature matching. However, the SIFT algorithm consumes considerable computational time, particularly in the feature matching process, because each feature in an image of interest is compared with all features in the subsequent image in order to find the best matched pair. A fuzzy zoning approach is developed for confining the feature matching area. Matching between two corresponding features from different images can then be performed efficiently, greatly reducing the matching time. The proposed technique is tested with endoscopic images created from phantoms and compared with the original SIFT technique in terms of matching time and the average errors of the reconstructed models. Finally, the original SIFT and the proposed fuzzy-based technique are applied to 3D model reconstruction of a real nasal cavity based on images taken from a rigid nasal endoscope. The results showed that the fuzzy-based approach was significantly faster than the traditional SIFT technique and provided similar quality of the 3D models. It could be used for creating a 3D model of a nasal cavity imaged by a rigid nasal endoscope. PMID:26498516
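    The zone-confined matching idea above can be sketched as follows. This is a simplified hard-zone variant: the paper's fuzzy membership weighting is omitted, the radius and descriptor dimension are arbitrary, and small inter-frame motion is assumed, as between consecutive endoscopic frames:

    ```python
    import numpy as np

    def zoned_match(feats_a, feats_b, radius=20.0):
        """Match each (position, descriptor) feature in image A to the
        nearest descriptor in image B, but only among candidates whose
        position lies inside a zone of the given radius around the
        feature's own position. Returns the matches and the number of
        descriptor comparisons actually performed."""
        matches, comparisons = [], 0
        for ia, (pa, da) in enumerate(feats_a):
            best, best_d = None, np.inf
            for ib, (pb, db) in enumerate(feats_b):
                if np.linalg.norm(pa - pb) > radius:
                    continue  # outside the matching zone: skip
                comparisons += 1
                d = np.linalg.norm(da - db)
                if d < best_d:
                    best, best_d = ib, d
            if best is not None:
                matches.append((ia, best))
        return matches, comparisons
    ```

    Compared with brute-force SIFT matching, which performs len(A) × len(B) comparisons, the zone restriction cuts the comparison count roughly in proportion to the zone area over the image area.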

  15. 3D nonrigid medical image registration using a new information theoretic measure

    NASA Astrophysics Data System (ADS)

    Li, Bicao; Yang, Guanyu; Coatrieux, Jean Louis; Li, Baosheng; Shu, Huazhong

    2015-11-01

    This work presents a novel method for the nonrigid registration of medical images based on the Arimoto entropy, a generalization of the Shannon entropy. The proposed method employed the Jensen–Arimoto divergence measure as a similarity metric to measure the statistical dependence between medical images. Free-form deformations were adopted as the transformation model and Parzen window estimation was applied to compute the probability distributions. A penalty term is incorporated into the objective function to smooth the nonrigid transformation. The goal of registration is to minimize an objective function consisting of a dissimilarity term and a penalty term, which is minimal when the two deformed images are perfectly aligned; the limited-memory BFGS method is used for the optimization, yielding the optimal geometric transformation. To validate the performance of the proposed method, experiments on both simulated 3D brain MR images and real 3D thoracic CT data sets were designed and performed using the open source elastix package. For the simulated experiments, the registration errors of 3D brain MR images with various magnitudes of known deformations and different levels of noise were measured. For the real data tests, four 4D thoracic CT data sets from four patients were selected to assess the registration performance, each 4D data set comprising ten 3D CT images covering an entire respiration cycle. The results were compared with the normalized cross correlation and mutual information methods and show a slight but genuine improvement in registration accuracy.
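    The Arimoto entropy and the Jensen–Arimoto divergence used as the similarity metric can be written compactly for discrete distributions. The normalization below is one common form of the Arimoto entropy and may differ from the authors' convention; the choice alpha = 2 is only for illustration:

    ```python
    import numpy as np

    def arimoto_entropy(p, alpha=2.0):
        """Arimoto entropy H_a(p) = a/(1-a) * ((sum_i p_i^a)^(1/a) - 1),
        one common normalization; it recovers the Shannon entropy
        (in nats) in the limit a -> 1."""
        p = np.asarray(p, float)
        return alpha / (1.0 - alpha) * (np.sum(p ** alpha) ** (1.0 / alpha) - 1.0)

    def jensen_arimoto(p, q, alpha=2.0):
        """Jensen-Arimoto divergence: the Arimoto entropy of the mixture
        minus the mean of the entropies; it vanishes when p equals q."""
        p, q = np.asarray(p, float), np.asarray(q, float)
        m = 0.5 * (p + q)
        return arimoto_entropy(m, alpha) - 0.5 * (
            arimoto_entropy(p, alpha) + arimoto_entropy(q, alpha))
    ```

    In registration, p and q would be (joint or marginal) intensity histograms estimated by Parzen windowing; the divergence drops as the images come into alignment.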

  16. Web tools for large-scale 3D biological images and atlases

    PubMed Central

    2012-01-01

    Background Large-scale volumetric biomedical image data of three or more dimensions are a significant challenge for distributed browsing and visualisation. Many images now exceed 10GB, which for most users is too large to handle in terms of computer RAM and network bandwidth. This is aggravated when users need to access tens or hundreds of such images from an archive. Here we solve the problem for 2D section views through archive data by delivering compressed tiled images, enabling users to browse through very large volume data in the context of a standard web browser. The system provides an interactive visualisation for grey-level and colour 3D images, including multiple image layers and spatial-data overlay. Results The standard Internet Imaging Protocol (IIP) has been extended to enable arbitrary 2D sectioning of 3D data as well as multi-layered images and indexed overlays. The extended protocol is termed IIP3D and we have implemented a matching server to deliver the protocol and a series of Ajax/Javascript client codes that will run in an Internet browser. We have tested the server software on a low-cost linux-based server for image volumes up to 135GB and 64 simultaneous users. The section views are delivered with response times independent of scale and orientation. The exemplar client provided multi-layer image views with user-controlled colour-filtering and overlays. Conclusions Interactive browsing of arbitrary sections through large biomedical-image volumes is made possible by use of an extended internet protocol and efficient server-based image tiling. The tools open the possibility of enabling fast access to large image archives without the requirement of whole image download and client computers with very large memory configurations. The system was demonstrated using a range of medical and biomedical image data extending up to 135GB for a single image volume. PMID:22676296
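    Serving a tile of a 2D section through a 3D volume, as the tiling server above does, can be sketched for the axis-aligned case. IIP3D also supports arbitrary oblique sections, which require resampling and are omitted here; the 256-pixel tile size and zero-padding of edge tiles are assumptions:

    ```python
    import numpy as np

    def section_tile(volume, axis, index, tile_row, tile_col, tile=256):
        """Return one fixed-size tile of an axis-aligned 2-D section
        through a 3-D volume, padding edge tiles with zeros so the
        client always receives a full tile."""
        plane = np.take(volume, index, axis=axis)   # the 2-D section
        r0, c0 = tile_row * tile, tile_col * tile
        out = np.zeros((tile, tile), dtype=plane.dtype)
        src = plane[r0:r0 + tile, c0:c0 + tile]
        out[:src.shape[0], :src.shape[1]] = src
        return out
    ```

    Because only the requested tile is read and sent, the client never needs the whole volume in memory, which is the core of the scalability argument above.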

  17. A 3D Multi-Aperture Image Sensor Architecture

    E-print Network

    El Gamal, Abbas

    A lens focuses the image above the sensor, creating overlapping fields of view between apertures. Multiple perspectives of the image in the focal plane facilitate the synthesis of a 3D image at a higher spatial

  18. Note: An improved 3D imaging system for electron-electron coincidence measurements

    NASA Astrophysics Data System (ADS)

    Lin, Yun Fei; Lee, Suk Kyoung; Adhikari, Pradip; Herath, Thushani; Lingenfelter, Steven; Winney, Alexander H.; Li, Wen

    2015-09-01

    We demonstrate an improved imaging system that can achieve highly efficient 3D detection of two electrons in coincidence. The imaging system is based on a fast frame complementary metal-oxide semiconductor camera and a high-speed waveform digitizer. We have shown previously that this detection system is capable of 3D detection of ions and electrons with good temporal and spatial resolution. Here, we show that with a new timing analysis algorithm, this system can achieve an unprecedented dead-time (<0.7 ns) and dead-space (<1 mm) when detecting two electrons. A true zero dead-time detection is also demonstrated.

  19. From pixel to voxel: a deeper view of biological tissue by 3D mass spectral imaging

    PubMed Central

    Ye, Hui; Greer, Tyler; Li, Lingjun

    2011-01-01

    Three-dimensional mass spectral imaging (3D MSI) is an exciting field that grants the ability to study a broad mass range of molecular species, ranging from small molecules to large proteins, by creating lateral and vertical distribution maps of select compounds. Although the general premise behind 3D MSI is simple, factors such as the choice of ionization method, sample handling, software considerations and many others must be taken into account for the successful design of a 3D MSI experiment. This review provides a brief overview of ionization methods, sample preparation, software types and technological advancements driving 3D MSI research of a wide range of low- to high-mass analytes. Future perspectives in this field are also provided, concluding that this powerful analytical tool promises ever-growing applications in the biomedical field as its development continues. PMID:21320052

  20. Nanoparticle imaging. 3D structure of individual nanocrystals in solution by electron microscopy.

    PubMed

    Park, Jungwon; Elmlund, Hans; Ercius, Peter; Yuk, Jong Min; Limmer, David T; Chen, Qian; Kim, Kwanpyo; Han, Sang Hoon; Weitz, David A; Zettl, A; Alivisatos, A Paul

    2015-07-17

    Knowledge about the synthesis, growth mechanisms, and physical properties of colloidal nanoparticles has been limited by technical impediments. We introduce a method for determining three-dimensional (3D) structures of individual nanoparticles in solution. We combine a graphene liquid cell, high-resolution transmission electron microscopy, a direct electron detector, and an algorithm for single-particle 3D reconstruction originally developed for analysis of biological molecules. This method yielded two 3D structures of individual platinum nanocrystals at near-atomic resolution. Because our method derives the 3D structure from images of individual nanoparticles rotating freely in solution, it enables the analysis of heterogeneous populations of potentially unordered nanoparticles that are synthesized in solution, thereby providing a means to understand the structure and stability of defects at the nanoscale. PMID:26185247

  1. Studying disagreements among retinal experts through image analysis.

    PubMed

    Quellec, Gwénolé; Lamard, Mathieu; Cochener, Béatrice; Droueche, Zakarya; Lay, Bruno; Chabouis, Agnès; Roux, Christian; Cazuguel, Guy

    2012-01-01

    In recent years, many image analysis algorithms have been presented to assist Diabetic Retinopathy (DR) screening. The goal was usually to detect healthy examination records automatically, in order to reduce the number of records that should be analyzed by retinal experts. In this paper, a novel application is presented: these algorithms are used to 1) discover image characteristics that sometimes cause an expert to disagree with his/her peers and 2) warn the expert whenever these characteristics are detected in an examination record. In a DR screening program, each examination record is only analyzed by one expert, therefore analyzing disagreements among experts is challenging. A statistical framework, based on Parzen-windowing and the Patrick-Fischer distance, is presented to solve this problem. Disagreements among eleven experts from the Ophdiat screening program were analyzed, using an archive of 25,702 examination records. PMID:23367286
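    The Parzen windowing and Patrick-Fischer distance underlying the framework above can be sketched in one dimension. The prior-weighted L2 form below is one common definition of the Patrick-Fischer distance, and the Gaussian kernel, equal priors, bandwidth, and grid are illustrative assumptions:

    ```python
    import numpy as np

    def parzen(samples, grid, h=0.5):
        """Parzen-window density estimate with a Gaussian kernel of
        bandwidth h, evaluated on a 1-D grid."""
        samples = np.asarray(samples, float)[:, None]
        k = np.exp(-0.5 * ((grid - samples) / h) ** 2) / (h * np.sqrt(2 * np.pi))
        return k.mean(axis=0)

    def patrick_fischer(x1, x2, grid, h=0.5):
        """One common form of the Patrick-Fischer distance between two
        classes with equal priors: the L2 norm of the difference of the
        prior-weighted class densities, integrated numerically."""
        d = 0.5 * parzen(x1, grid, h) - 0.5 * parzen(x2, grid, h)
        step = grid[1] - grid[0]
        return np.sqrt(np.sum(d ** 2) * step)
    ```

    Two nearly identical sample sets give a distance near zero, while well-separated sets give a large distance, which is what makes the measure usable for flagging image characteristics on which experts diverge.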

  2. Retinal vascular image analysis as a potential screening tool for cerebrovascular disease: a rationale based on homology between cerebral and retinal microvasculatures

    PubMed Central

    Patton, Niall; Aslam, Tariq; MacGillivray, Thomas; Pattie, Alison; Deary, Ian J; Dhillon, Baljean

    2005-01-01

    The retinal and cerebral microvasculatures share many morphological and physiological properties. Assessment of the cerebral microvasculature requires highly specialized and expensive techniques. The potential for using non-invasive clinical assessment of the retinal microvasculature as a marker of the state of the cerebrovasculature offers clear advantages, owing to the ease with which the retinal vasculature can be directly visualized in vivo and photographed due to its essential two-dimensional nature. The use of retinal digital image analysis is becoming increasingly common, and offers new techniques to analyse different aspects of retinal vascular topography, including retinal vascular widths, geometrical attributes at vessel bifurcations and vessel tracking. Being predominantly automated and objective, these techniques offer an exciting opportunity to study the potential to identify retinal microvascular abnormalities as markers of cerebrovascular pathology. In this review, we describe the anatomical and physiological homology between the retinal and cerebral microvasculatures. We review the evidence that retinal microvascular changes occur in cerebrovascular disease and review current retinal image analysis tools that may allow us to use different aspects of the retinal microvasculature as potential markers for the state of the cerebral microvasculature. PMID:15817102

  3. A web-based solution for 3D medical image visualization

    NASA Astrophysics Data System (ADS)

    Hou, Xiaoshuai; Sun, Jianyong; Zhang, Jianguo

    2015-03-01

    In this presentation, we present a web-based 3D medical image visualization solution which enables interactive large medical image data processing and visualization over the web platform. To improve the efficiency of our solution, we adopt GPU accelerated techniques to process images on the server side while rapidly transferring images to the HTML5 supported web browser on the client side. Compared to traditional local visualization solution, our solution doesn't require the users to install extra software or download the whole volume dataset from PACS server. By designing this web-based solution, it is feasible for users to access the 3D medical image visualization service wherever the internet is available.

  4. 3-d reconstruction of neurons from multichannel confocal laser scanning image series.

    PubMed

    Wouterlood, Floris G

    2014-01-01

    A confocal laser scanning microscope (CLSM) collects information from a thin, focal plane and ignores out-of-focus information. Scanning of a specimen, with stepwise axial (Z-) movement of the stage in between each scan, produces Z-series of confocal images of a tissue volume, which then can be used to 3-D reconstruct structures of interest. The operator first configures separate channels (e.g., laser, filters, and detector settings) for each applied fluorochrome and then acquires Z-series of confocal images: one series per channel. Channel signal separation is extremely important. Measures to avoid bleaching are vital. Post-acquisition deconvolution of the image series is often performed to increase resolution before 3-D reconstruction takes place. In the 3-D reconstruction programs described in this unit, reconstructions can be inspected in real time from any viewing angle. By altering viewing angles and by switching channels off and on, the spatial relationships of 3-D-reconstructed structures with respect to structures visualized in other channels can be studied. Since each brand of CLSM, computer program, and 3-D reconstruction package has its own proprietary set of procedures, a general approach is provided in this protocol wherever possible. Curr. Protoc. Neurosci 67:2.8.1-2.8.18. © 2014 by John Wiley & Sons, Inc. PMID:24723320

  5. Real-time volume rendering of 4D image using 3D texture mapping

    NASA Astrophysics Data System (ADS)

    Hwang, Jinwoo; Kim, June-Sic; Kim, Jae Seok; Kim, In Young; Kim, Sun Il

    2001-05-01

    A four-dimensional image is 3D volume data that varies with time. It is used to express deforming or moving objects in virtual surgery or 4D ultrasound. It is difficult to render 4D images with conventional ray-casting or shear-warp factorization methods because of their long rendering times or the pre-processing stage required whenever the volume data change. Even when 3D texture mapping is used, repeated volume loading is also time-consuming in 4D image rendering. In this study, we propose a method to reduce data loading time by using the coherence between the currently loaded volume and the previously loaded volume, in order to achieve real-time rendering based on 3D texture mapping. Volume data are divided into small bricks, and each brick being loaded is tested for similarity to the one already loaded in memory. If the brick passes the test, it is defined as a 3D texture by OpenGL functions. Later, the texture slices of the brick are mapped onto polygons and blended by OpenGL blending functions. All bricks undergo this test. Fifty continuously deforming volumes are rendered in interactive time on an SGI ONYX. Real-time volume rendering based on 3D texture mapping is currently available on PCs.
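    A brick-level coherence test of the kind described above can be sketched as follows; the brick size, the mean-absolute-difference criterion, its tolerance, and the array cache standing in for an OpenGL texture upload are all illustrative assumptions:

    ```python
    import numpy as np

    def reload_changed_bricks(prev, curr, cache, brick=16, tol=1e-3):
        """Split a volume into bricks and re-upload only the bricks that
        differ from the previously loaded frame by more than `tol` in
        mean absolute difference; similar bricks reuse the cached
        texture. Returns the number of bricks re-uploaded."""
        reloaded = 0
        d, h, w = curr.shape
        for z in range(0, d, brick):
            for y in range(0, h, brick):
                for x in range(0, w, brick):
                    sl = (slice(z, z + brick), slice(y, y + brick),
                          slice(x, x + brick))
                    if np.abs(curr[sl] - prev[sl]).mean() > tol:
                        cache[sl] = curr[sl]  # stand-in for a texture upload
                        reloaded += 1
        return reloaded
    ```

    For slowly deforming volumes, most bricks pass the similarity test and are skipped, which is where the loading-time saving comes from.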

  6. Analysis of scalability of high-performance 3D image processing platform for virtual colonoscopy

    NASA Astrophysics Data System (ADS)

    Yoshida, Hiroyuki; Wu, Yin; Cai, Wenli

    2014-03-01

    One of the key challenges in three-dimensional (3D) medical imaging is to enable a fast turn-around time, which is often required for interactive or real-time response. This inevitably requires not only high computational power but also high memory bandwidth, due to the massive amount of data that need to be processed. For this purpose, we previously developed a software platform for high-performance 3D medical image processing, called the HPC 3D-MIP platform, which employs increasingly available and affordable commodity computing systems such as multicore, cluster, and cloud computing systems. To achieve scalable high-performance computing, the platform employed size-adaptive, distributable block volumes as a core data structure for efficient parallelization of a wide range of 3D-MIP algorithms, supported task scheduling for efficient load distribution and balancing, and consisted of layered parallel software libraries that allow image processing applications to share common functionalities. We evaluated the performance of the HPC 3D-MIP platform by applying it to computationally intensive processes in virtual colonoscopy. Experimental results showed a 12-fold performance improvement on a workstation with 12-core CPUs over the original sequential implementation of the processes, indicating the efficiency of the platform. Analysis of performance scalability based on Amdahl's law for symmetric multicore chips showed the potential for high performance scalability of the HPC 3D-MIP platform when a larger number of cores is available.
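    The scalability analysis above rests on Amdahl's law, which bounds the speedup of a program whose parallelizable fraction is p when run on n cores:

    ```python
    def amdahl_speedup(parallel_fraction, cores):
        """Amdahl's law: S(n) = 1 / ((1 - p) + p / n), where p is the
        fraction of the work that can be parallelized."""
        return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)
    ```

    With p = 1 the speedup equals the core count, consistent with the 12-fold gain on 12 cores reported above for a nearly fully parallel workload; any serial fraction caps the asymptotic speedup at 1/(1 - p), no matter how many cores are available.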

  7. Effect of retinal ischemia on the non-image forming visual system.

    PubMed

    González Fleitas, María Florencia; Bordone, Melina; Rosenstein, Ruth E; Dorfman, Damián

    2015-03-01

    Retinal ischemic injury is an important cause of visual impairment. The loss of retinal ganglion cells (RGCs) is a key sign of retinal ischemic damage. A subset of RGCs expressing the photopigment melanopsin (mRGCs) regulates non-image-forming visual functions such as the pupillary light reflex (PLR) and circadian rhythms. We studied the effect of retinal ischemia on mRGCs and the non-image-forming visual system function. For this purpose, transient ischemia was induced by raising intraocular pressure to 120 mm Hg for 40 min, followed by retinal reperfusion by restoring normal pressure. At 4 weeks post-treatment, animals were subjected to electroretinography and histological analysis. Ischemia induced significant retinal dysfunction and histological alterations. At this time point, a significant decrease in the number of Brn3a(+) RGCs and in the anterograde transport from the retina to the superior colliculus and lateral geniculate nucleus was observed, whereas no differences in the number of mRGCs, melanopsin levels, and retinal projections to the suprachiasmatic nuclei and the olivary pretectal nucleus were detected. At low light intensity, a decrease in pupil constriction was observed in intact eyes contralateral to ischemic eyes, whereas at high light intensity, retinal ischemia did not affect the consensual PLR. Animals with ischemia in both eyes showed a conserved locomotor activity rhythm and a photoentrainment rate which did not differ from control animals. These results suggest that the non-image forming visual system was protected against retinal ischemic damage. PMID:25238585

  8. Quantitative Analysis of Mouse Retinal Layers Using Automated Segmentation of Spectral Domain Optical Coherence Tomography Images

    PubMed Central

    Dysli, Chantal; Enzmann, Volker; Sznitman, Raphael; Zinkernagel, Martin S.

    2015-01-01

    Purpose Quantification of retinal layers using automated segmentation of optical coherence tomography (OCT) images allows for longitudinal studies of retinal and neurological disorders in mice. The purpose of this study was to compare the performance of automated retinal layer segmentation algorithms with data from manual segmentation in mice using the Spectralis OCT. Methods Spectral domain OCT images from a total of 55 mice from three different strains were analyzed. The OCT scans from 22 C57Bl/6, 22 BALBc, and 11 C3A.Cg-Pde6b+Prph2Rd2/J mice were automatically segmented using three commercially available automated retinal segmentation algorithms and compared to manual segmentation. Results Fully automated segmentation performed well in mice, showing coefficients of variation (CV) below 5% for the total retinal volume. However, all three automated segmentation algorithms yielded significantly thicker total retinal thickness values than manual segmentation (P < 0.0001) due to segmentation errors at the basement membrane. Conclusions Whereas the automated retinal segmentation algorithms performed well for the inner layers, the retinal pigment epithelium (RPE) was delineated within the sclera, leading to consistently thicker measurements of the photoreceptor layer and the total retina. Translational Relevance The introduction of spectral domain OCT allows for accurate imaging of the mouse retina. Exact quantification of retinal layer thicknesses in mice is important for studying layers of interest under various pathological conditions. PMID:26336634
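    The coefficient of variation used above as the repeatability measure can be computed as follows; the sample volumes are hypothetical values, not data from the study.

```python
# Coefficient of variation (CV) as used above to report repeatability of the
# total retinal volume. The repeated measurements below are made up.
from statistics import mean, stdev

def cv_percent(values):
    """CV = sample standard deviation / mean, expressed in percent."""
    return stdev(values) / mean(values) * 100.0

volumes = [2.10, 2.14, 2.08, 2.12]  # hypothetical repeated volume scans, mm^3
print(cv_percent(volumes) < 5.0)    # -> True
```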

  9. Improved registration for 3D image creation using multiple texel images and incorporating low-cost GPS/INS measurements

    NASA Astrophysics Data System (ADS)

    Budge, Scott E.; Xie, Xuan

    2014-06-01

    The creation of 3D imagery is an important topic in remote sensing. Several methods have been developed to create 3D images from fused ladar and digital images, known as texel images. These methods have the advantage of using both the 3D ladar information and the 2D digital imagery directly, since texel images are fused during data acquisition. A weakness of these methods is that they are dependent on correlating feature points in the digital images. This can be difficult when image perspectives are significantly different, leading to low correlation values between matching feature points. This paper presents a method to improve the quality of 3D images created using existing approaches that register multiple texel images. The proposed method incorporates relatively low accuracy measurements of the position and attitude of the texel camera from a low-cost GPS/INS into the registration process. This information can improve the accuracy and robustness of the registered texel images over methods based on point-cloud merging or image registration alone. In addition, the dependence on feature point correlation is eliminated. Examples illustrate the value of this method for significant image perspective differences.

  10. A Novel Image Compression Algorithm for High Resolution 3D Reconstruction

    NASA Astrophysics Data System (ADS)

    Siddeq, M. M.; Rodrigues, M. A.

    2014-06-01

    This research presents a novel algorithm to compress high-resolution images for accurate structured-light 3D reconstruction. Structured-light images contain a pattern of light and shadows projected on the surface of the object, which is captured by the sensor at very high resolution. Our algorithm is concerned with compressing such images to a high degree, with minimum loss, without adversely affecting 3D reconstruction. The compression algorithm starts with a single-level discrete wavelet transform (DWT) that decomposes an image into four sub-bands. The LL sub-band is transformed by DCT, yielding a DC-matrix and an AC-matrix. The Minimize-Matrix-Size algorithm is used to compress the AC-matrix, while a DWT is applied again to the DC-matrix, resulting in LL2, HL2, LH2 and HH2 sub-bands. The LL2 sub-band is transformed by DCT, while the Minimize-Matrix-Size algorithm is applied to the other sub-bands. The proposed algorithm has been tested with images of different sizes within a 3D reconstruction scenario. The algorithm is demonstrated to be more effective than JPEG2000 and JPEG, achieving higher compression rates at equivalent perceived quality together with more accurate reconstruction of the 3D models.
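    The first stage of the pipeline, a single-level 2D DWT into LL/HL/LH/HH sub-bands, can be sketched with a Haar wavelet. The choice of Haar is an assumption for clarity; the abstract does not state which wavelet the authors use.

```python
# One-level 2D Haar DWT sketch illustrating the LL/HL/LH/HH decomposition
# that the compression algorithm above starts from. Haar is an assumption.

def haar_dwt2(img):
    """Split an even-sized 2D list into LL, HL, LH, HH sub-bands."""
    h, w = len(img), len(img[0])
    LL, HL, LH, HH = [], [], [], []
    for i in range(0, h, 2):
        ll, hl, lh, hh = [], [], [], []
        for j in range(0, w, 2):
            a, b = img[i][j], img[i][j + 1]
            c, d = img[i + 1][j], img[i + 1][j + 1]
            ll.append((a + b + c + d) / 4)  # coarse approximation
            hl.append((a - b + c - d) / 4)  # horizontal detail
            lh.append((a + b - c - d) / 4)  # vertical detail
            hh.append((a - b - c + d) / 4)  # diagonal detail
        LL.append(ll); HL.append(hl); LH.append(lh); HH.append(hh)
    return LL, HL, LH, HH

LL, HL, LH, HH = haar_dwt2([[1, 1], [1, 1]])
print(LL, HL, LH, HH)  # -> [[1.0]] [[0.0]] [[0.0]] [[0.0]]
```

    In the paper's scheme the LL band would then be passed to the DCT stage, while the detail bands go to the Minimize-Matrix-Size step.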

  11. Combining volumetric edge display and multiview display for expression of natural 3D images

    NASA Astrophysics Data System (ADS)

    Yasui, Ryota; Matsuda, Isamu; Kakeya, Hideki

    2006-02-01

    In the present paper the authors present a novel stereoscopic display method that combines volumetric edge display technology and multiview display technology to present natural 3D images in which viewers do not suffer from the contradiction between binocular convergence and focal accommodation of the eyes, which causes eyestrain and sickness. We adopt the volumetric display method only for edge drawing, and the stereoscopic approach for the flat areas of the image. Since the focal accommodation of our eyes is affected only by the edge parts of the image, natural focal accommodation can be induced if the edges of the 3D image are drawn at the proper depth. The conventional stereo-matching technique can give robust depth values for the pixels that constitute noticeable edges. Occlusion and gloss of the objects can also be roughly expressed with the proposed method, since the stereoscopic approach is used for the flat areas. The result is a system in which many users can simultaneously view natural 3D objects at a consistent position and posture. A simple optometric experiment using a refractometer suggests that the proposed method can produce 3-D images without contradiction between binocular convergence and focal accommodation.

  12. GPU-based block-wise nonlocal means denoising for 3D ultrasound images.

    PubMed

    Li, Liu; Hou, Wenguang; Zhang, Xuming; Ding, Mingyue

    2013-01-01

    Speckle suppression plays an important role in improving ultrasound (US) image quality. While many algorithms have been proposed for 2D US image denoising with remarkable filtering quality, relatively little work has been done on 3D ultrasound speckle suppression, where the whole volume rather than a single frame must be considered; the most crucial problem in 3D US denoising is that the computational complexity increases tremendously. The nonlocal means (NLM) algorithm provides an effective method for speckle suppression in US images. In this paper, a programmable graphics-processing-unit (GPU)-based fast NLM filter is proposed for 3D ultrasound speckle reduction. A Gamma-distribution noise model, which reliably captures the statistics of log-compressed ultrasound images, was used for the 3D block-wise NLM filter within a Bayesian framework. The most significant aspect of our method is its use of the GPU's powerful data-parallel computing capability to improve overall efficiency. Experimental results demonstrate that the proposed method greatly accelerates the algorithm. PMID:24348747
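    The core NLM idea, averaging weighted by patch similarity, can be sketched in 1D. The Gaussian weighting below stands in for the paper's Gamma-distribution Bayesian weighting, and all parameter values are illustrative assumptions.

```python
# Minimal 1D non-local means (NLM) sketch: each sample becomes a weighted
# mean over all samples, with weights from patch similarity. A Gaussian
# weight replaces the paper's Gamma/Bayesian weighting for brevity.
import math

def nlm_1d(signal, patch=1, h=10.0):
    """Denoise each sample as a weighted mean over similar patches."""
    n, out = len(signal), []
    for i in range(n):
        wsum, acc = 0.0, 0.0
        for j in range(n):
            # squared distance between the patches centred on i and j
            d2 = sum((signal[(i + k) % n] - signal[(j + k) % n]) ** 2
                     for k in range(-patch, patch + 1))
            w = math.exp(-d2 / (h * h))
            wsum += w
            acc += w * signal[j]
        out.append(acc / wsum)
    return out

noisy = [10, 10, 11, 10, 50, 10, 11, 10]
print([round(x, 1) for x in nlm_1d(noisy)])
```

    The GPU version in the paper parallelizes exactly this per-voxel weighted sum, since every output value can be computed independently.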

  14. Implementation of 3D prostrate ring-scanning mechanism for NIR diffuse optical imaging phantom validation

    NASA Astrophysics Data System (ADS)

    Yu, Jhao-Ming; Chen, Liang-Yu; Pan, Min-Cheng; Hsu, Ya-Fen; Pan, Min-Chun

    2015-03-01

    Diffuse optical imaging (DOI), which provides functional information about tissues, has drawn great attention over the last two decades. Near infrared (NIR) DOI systems, composed of a scanning bench, an opto-electrical measurement module, system control, and data processing and image reconstruction schemes, are developed for the screening and diagnosis of breast tumors. In most systems the scanning bench has a fixed source-and-detector configuration, which limits the computed image resolution. To cope with this issue, we propose, design and implement 3D prostrate ring-scanning equipment for NIR DOI that offers flexible combinations of illumination and detection and radial, circular and vertical movement, without the hard compression of breast tissue required by imaging systems that use or incorporate an X-ray mammographic bench. In particular, a rotation-sliding-and-moving mechanism was designed to guide the movement of the source and detection channels. Following our previous justification of synthesized image reconstruction, in this paper validation with varied phantoms is further conducted, and 3D reconstruction of their absorption and scattering coefficients is illustrated through the computation of our in-house coded schemes. The source and detection NIR data are acquired to reconstruct the 3D images as the scanning bench moves in the vertical, radial and circular directions. Unlike fixed configurations, the proposed screening/diagnosis equipment allows optical-channel expansion with a compromise among construction cost, operation time, and the spatial resolution of the reconstructed μa and μs′ images.

  15. 3D optical sectioning with a new hyperspectral confocal fluorescence imaging system.

    SciTech Connect

    Nieman, Linda T.; Sinclair, Michael B.; Davidson, George S.; Van Benthem, Mark Hilary; Haaland, David Michael; Timlin, Jerilyn Ann; Sasaki, Darryl Yoshio; Bachand, George David; Jones, Howland D. T.

    2007-02-01

    A novel hyperspectral fluorescence microscope for high-resolution 3D optical sectioning of cells and other structures has been designed, constructed, and used to investigate a number of different problems. We have significantly extended new multivariate curve resolution (MCR) data analysis methods to deconvolve the hyperspectral image data and to rapidly extract quantitative 3D concentration distribution maps of all emitting species. The imaging system has many advantages over current confocal imaging systems including simultaneous monitoring of numerous highly overlapped fluorophores, immunity to autofluorescence or impurity fluorescence, enhanced sensitivity, and dramatically improved accuracy, reliability, and dynamic range. Efficient data compression in the spectral dimension has allowed personal computers to perform quantitative analysis of hyperspectral images of large size without loss of image quality. We have also developed and tested software to perform analysis of time resolved hyperspectral images using trilinear multivariate analysis methods. The new imaging system is an enabling technology for numerous applications including (1) 3D composition mapping analysis of multicomponent processes occurring during host-pathogen interactions, (2) monitoring microfluidic processes, (3) imaging of molecular motors and (4) understanding photosynthetic processes in wild type and mutant Synechocystis cyanobacteria.

  16. Application of Medical Imaging Software to 3D Visualization of Astronomical Data

    E-print Network

    Michelle Borkin; Alyssa Goodman; Michael Halle; Douglas Alan

    2006-11-13

    The AstroMed project at Harvard University's Initiative in Innovative Computing (IIC) is working on improved visualization and data-sharing solutions applicable to the fields of both astronomy and medicine. The current focus is on the application of medical imaging visualization and analysis techniques to three-dimensional astronomical data. The 3D Slicer and OsiriX medical imaging tools have been used to make isosurface and volumetric models in RA-DEC-velocity space of the Perseus star-forming region from the COMPLETE Survey of Star Forming Regions' spectral line maps. 3D Slicer, a brain imaging and visualization application developed at Brigham and Women's Hospital's Surgical Planning Lab, is capable of displaying volumes (i.e., data cubes), displaying slices in any direction through the volume, generating 3D isosurface models from the volume which can be viewed and rotated in 3D space, and making 3D models of label maps (for example, CLUMPFIND output). OsiriX can generate volumetric models from data cubes and allows the user, in real time, to change the displayed intensity level, crop the models without losing the data, manipulate the model and viewing angle, and use a variety of projections. In applying 3D Slicer to 12CO and 13CO spectral line data cubes of Perseus, the visualization allowed for a rapid review of over 8 square degrees and 150,000 spectra, and the cataloging of 217 high-velocity points. These points were further investigated in half of Perseus: all known outflows were detected, and 20 points in these regions were identified as possibly being associated with undocumented outflows. All IIC-developed tools, as well as 3D Slicer and OsiriX, are freely available.

  17. 3D surface scan of biological samples with a Push-broom Imaging Spectrometer

    NASA Astrophysics Data System (ADS)

    Yao, Haibo; Kincaid, Russell; Hruska, Zuzana; Brown, Robert L.; Bhatnagar, Deepak; Cleveland, Thomas E.

    2013-08-01

    The food industry is always on the lookout for sensing technologies for rapid and nondestructive inspection of food products. Hyperspectral imaging technology integrates both imaging and spectroscopy into unique imaging sensors. Its application to food safety and quality inspection has made significant progress in recent years. Specifically, hyperspectral imaging has shown its potential for surface contamination detection in many food-related applications. Most existing hyperspectral imaging systems use pushbroom scanning, which is generally suited to flat-surface inspection. In some applications it is desirable to acquire hyperspectral images of circular objects such as corn ears, apples, and cucumbers. Past research describes inspection systems that examine all surfaces of individual objects, but most of these systems did not employ hyperspectral imaging. These systems typically utilized a roller to rotate an object, such as an apple; during rotation, the camera took multiple images in order to cover the complete surface. The acquired image data lacked the spectral component present in a hyperspectral image. This paper discusses the development of a hyperspectral imaging system for a 3-D surface scan of biological samples. The new instrument is based on a pushbroom hyperspectral line scanner with a rotational stage to turn the sample. The system is suitable for whole-surface hyperspectral imaging of circular objects. In addition to its value to the food industry, the system could be useful for other applications involving 3-D surface inspection.

  18. A neural network based 3D/3D image registration quality evaluator for the head-and-neck patient setup in the absence of a ground truth

    SciTech Connect

    Wu Jian; Murphy, Martin J.

    2010-11-15

    Purpose: To develop a neural network based registration quality evaluator (RQE) that can identify unsuccessful 3D/3D image registrations for the head-and-neck patient setup in radiotherapy. Methods: A two-layer feed-forward neural network was used as an RQE to classify 3D/3D rigid registration solutions as successful or unsuccessful based on the features of the similarity surface near the point of solution. The supervised training and test data sets were generated by rigidly registering daily cone-beam CTs to the treatment planning fan-beam CTs of six patients with head-and-neck tumors. Two different similarity metrics (mutual information and mean-squared intensity difference) and two different types of image content (entire image versus bony landmarks) were used. The best solution for each registration pair was selected from 50 optimizing attempts that differed only in their initial transformation parameters. The distance from each individual solution to the best solution in the normalized parameter space was compared to a user-defined error threshold to determine whether that solution was successful. The labeled data were then used for supervised training of the RQE. The performance of the RQE was evaluated using a test data set consisting of registration results that were not used in training. Results: The RQE constructed using mutual information performed very well on the test data sets, yielding sensitivity, specificity, positive predictive value, and negative predictive value in the ranges of 0.960-1.000, 0.993-1.000, 0.983-1.000, and 0.909-1.000, respectively. Adding an RQE to a conventional 3D/3D image registration system incurs only about a 10%-20% increase in overall processing time. Conclusions: The authors' patient study demonstrated very good performance of the proposed RQE, used with mutual information, in identifying unsuccessful 3D/3D registrations for daily patient setup. The classifier generalized well and needed to be trained only once for each implementation. When incorporated into an automated 3D/3D image registration system, the RQE can improve the system's robustness.
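    The labelling rule described above (distance to the best of the 50 solutions in normalized parameter space, compared against a user-defined threshold) might look like the sketch below. The parameter names, normalization scales and threshold are assumptions for illustration.

```python
# Sketch of the ground-truth labelling step: a registration attempt counts
# as "successful" when its parameters lie within a user-defined distance of
# the best solution in normalized parameter space. Values are illustrative.
import math

def label_solutions(solutions, best, scale, threshold=0.1):
    """Return True (successful) / False (unsuccessful) per solution.

    scale normalizes each parameter (e.g. translations in mm, rotations in
    degrees) so that the Euclidean distance is meaningful.
    """
    labels = []
    for s in solutions:
        d = math.sqrt(sum(((a - b) / c) ** 2
                          for a, b, c in zip(s, best, scale)))
        labels.append(d <= threshold)
    return labels

best = (1.0, -2.0, 0.5)            # hypothetical tx, ty, rz of best attempt
attempts = [(1.0, -2.0, 0.5), (1.1, -1.9, 0.6), (5.0, 3.0, 9.0)]
scale = (10.0, 10.0, 10.0)
print(label_solutions(attempts, best, scale))  # -> [True, True, False]
```

    These labels would then serve as the supervised training targets for the two-layer feed-forward classifier.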

  19. 3D imaging options and ultrasound contrast agents for the ultrasound assessment of pediatric rheumatic patients

    PubMed Central

    2013-01-01

    The application of 3D imaging in pediatric rheumatology helps to make the assessment of inflammatory changes more objective and to estimate accurately their volume and the actual response to treatment in the course of follow-up examinations. Additional opportunities are opened up by vascularity analysis with power Doppler and color Doppler in 3D imaging. Contrast-enhanced ultrasound examinations enable a more sensitive assessment of the vascularity of inflamed structures of the locomotor system and a more accurate analysis of the treatment's effect on vascularity, and thereby on the activity of the inflammatory process, than the classical power and color Doppler options. Equipment requirements and time limitations, as well as the high price in the case of contrast-enhanced ultrasound, mean that 3D analysis of inflammatory changes and contrast-enhanced ultrasound examinations are not routinely applied in pediatric patients.

  20. X-ray scattering in the elastic regime as source for 3D imaging reconstruction technique

    NASA Astrophysics Data System (ADS)

    Kocifaj, Miroslav; Mego, Michal

    2015-11-01

    X-ray beams propagate across a target object before they are projected onto a regularly spaced array of detectors to produce a routine X-ray image. A 3D attenuation coefficient distribution is obtained by tomographic reconstruction, in which scattering is usually regarded as a source of parasitic signals that raise the noise level and are difficult to eliminate. However, elastically scattered radiation can be a valuable source of information, because it provides the 3D topology of electron densities and thus contributes significantly to the optical characterization of the scanned object. The scattering and attenuation data form a complementary basis for the concurrent retrieval of both the electron density and the attenuation coefficient distributions. In this paper we develop a 3D reconstruction method that combines both data inputs and produces better image resolution than the traditional technique.
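    The attenuation signal each detector records follows the Beer-Lambert law, which the tomographic reconstruction inverts; a one-ray sketch with made-up coefficients:

```python
# Beer-Lambert attenuation along a single ray, the quantity a routine X-ray
# projection measures before tomographic reconstruction of the attenuation
# coefficient distribution discussed above. Values are illustrative.
import math

def transmitted_intensity(i0, mus, step):
    """I = I0 * exp(-sum(mu_i * step)), with attenuation coefficients mu_i
    sampled at equal intervals `step` along the ray."""
    return i0 * math.exp(-sum(mus) * step)

mus = [0.2, 0.5, 0.2]          # 1/cm, hypothetical voxels along the ray
print(round(transmitted_intensity(100.0, mus, 1.0), 2))  # -> 40.66
```

    The paper's point is that the photons removed from this ray by elastic scattering are not merely lost noise: measured separately, they constrain the electron density as well.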

  1. Imaging the behavior of molecules in biological systems: breaking the 3D speed barrier with 3D multi-resolution microscopy.

    PubMed

    Welsher, Kevin; Yang, Haw

    2015-12-12

    The overwhelming effort in the development of new microscopy methods has been focused on increasing the spatial and temporal resolution in all three dimensions to enable the measurement of the molecular-scale phenomena at the heart of biological processes. However, there exists a significant speed barrier to existing 3D imaging methods, associated with the overhead required to image large volumes. This overhead can be overcome, providing nearly unlimited temporal precision, by focusing on a single molecule or particle via real-time 3D single-particle tracking and the newly developed 3D Multi-resolution Microscopy (3D-MM). Here, we investigate the optical and mechanical limits of real-time 3D single-particle tracking in the context of other methods. In particular, we investigate the use of an optical cantilever for position-sensitive detection, finding that this method yields system magnifications of over 3000×. We also investigate the ideal PID control parameters and their effect on the power spectrum of simulated trajectories. Taken together, these data suggest that the speed limit in real-time 3D single-particle tracking is a result of slow piezoelectric stage response rather than optical sensitivity or PID control. PMID:26426758
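    A minimal discrete PID controller of the kind whose parameters the study investigates (the feedback loop that re-centres the tracked particle via the stage) can be sketched as follows; the gains, time step and error values are illustrative assumptions, not the paper's settings.

```python
# Minimal discrete PID controller sketch. The control output drives the
# piezo stage to cancel the measured tracking error; all values are made up.

class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = 0.0

    def update(self, error):
        """Return the control output for the current tracking error."""
        self.integral += error * self.dt            # accumulate I term
        deriv = (error - self.prev_err) / self.dt   # finite-difference D term
        self.prev_err = error
        return self.kp * error + self.ki * self.integral + self.kd * deriv

pid = PID(kp=0.5, ki=0.1, kd=0.05, dt=1.0)
print(round(pid.update(1.0), 3))  # -> 0.65  (P=0.5, I=0.1, D=0.05)
```

    The paper's conclusion is that, with such a loop well tuned, the limiting factor is the mechanical response of the piezo stage, not the controller.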

  2. Aperture-scanning Fourier ptychography for 3D refocusing and super-resolution macroscopic imaging.

    PubMed

    Dong, Siyuan; Horstmeyer, Roarke; Shiradkar, Radhika; Guo, Kaikai; Ou, Xiaoze; Bian, Zichao; Xin, Huolin; Zheng, Guoan

    2014-06-01

    We report an imaging scheme, termed aperture-scanning Fourier ptychography, for 3D refocusing and super-resolution macroscopic imaging. The reported scheme scans an aperture at the Fourier plane of an optical system and acquires the corresponding intensity images of the object. The acquired images are then synthesized in the frequency domain to recover a high-resolution complex sample wavefront; no phase information is needed in the recovery process. We demonstrate two applications of the reported scheme. In the first example, we use an aperture-scanning Fourier ptychography platform to recover the complex hologram of extended objects. The recovered hologram is then digitally propagated into different planes along the optical axis to examine the 3D structure of the object. We also demonstrate a reconstruction resolution better than the detector pixel limit (i.e., pixel super-resolution). In the second example, we develop a camera-scanning Fourier ptychography platform for super-resolution macroscopic imaging. By simply scanning the camera over different positions, we bypass the diffraction limit of the photographic lens and recover a super-resolution image of an object placed at the far field. This platform's maximum achievable resolution is ultimately determined by the camera's traveling range, not the aperture size of the lens. The FP scheme reported in this work may find applications in 3D object tracking, synthetic aperture imaging, remote sensing, and optical/electron/X-ray microscopy. PMID:24921553

  3. A new combined prior based reconstruction method for compressed sensing in 3D ultrasound imaging

    NASA Astrophysics Data System (ADS)

    Uddin, Muhammad S.; Islam, Rafiqul; Tahtali, Murat; Lambert, Andrew J.; Pickering, Mark R.

    2015-03-01

    Ultrasound (US) imaging is one of the most popular medical imaging modalities, with 3D US imaging gaining popularity recently due to its considerable advantages over 2D US imaging. However, because it is limited by long acquisition times and the huge amount of data processing it requires, methods for reducing these factors have attracted considerable research interest. Compressed sensing (CS) is one of the best candidates for accelerating the acquisition rate and reducing the data processing time without degrading image quality. However, CS is prone to introducing noise-like artefacts due to random under-sampling. To address this issue, we propose a combined prior-based reconstruction method for 3D US imaging. A Laplacian mixture model (LMM) constraint in the wavelet domain is combined with a total variation (TV) constraint to create a new regularization prior. An experimental evaluation conducted to validate our method using synthetic 3D US images shows that it performs better than other approaches in terms of both qualitative and quantitative measures.
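    A common building block of such wavelet-domain sparsity priors is the soft-thresholding (shrinkage) operator; the sketch below is an illustrative simplification, not the paper's actual LMM/TV update rule.

```python
# Soft-thresholding: the elementary shrinkage step used by many
# wavelet-domain sparsity-regularized CS reconstructions. This stands in
# for (and is much simpler than) the LMM/TV prior combined in the paper.

def soft_threshold(x, t):
    """Shrink coefficient x toward zero by t (promotes sparsity)."""
    if x > t:
        return x - t
    if x < -t:
        return x + t
    return 0.0

coeffs = [3.0, -0.2, 0.5, -2.5]
print([soft_threshold(c, 1.0) for c in coeffs])  # -> [2.0, 0.0, 0.0, -1.5]
```

    Small (noise-like) coefficients are zeroed while large (signal-bearing) ones survive, which is how sparsity priors suppress the random-undersampling artefacts mentioned above.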

  4. Fully automatic scheme for measuring liver volume in 3D MR images.

    PubMed

    Le, Trong-Ngoc; Bao, Pham The; Huynh, Hieu Trung

    2015-08-17

    In this paper, a fully automatic scheme for measuring liver volume in 3D MR images was developed. The proposed MRI liver volumetry scheme consists of four main stages. First, a preprocessing stage was applied to T1-weighted MR images of the liver in the portal-venous phase to reduce noise. The histogram of the 3D image was determined, and its second-to-last peak was calculated using a neural network. Thresholds determined from the second-to-last peak were used to generate a thresholded image, which was refined using a gradient magnitude image. Morphological and connected-component operations were applied to the refined image to generate the rough shape of the liver. A 3D geodesic-active-contour segmentation algorithm refined the rough shape to determine the liver boundaries more precisely. The liver volumes determined by the proposed automatic volumetry were compared to those manually traced by radiologists, which were used as a "gold standard." The two volumetric methods reached excellent agreement. The Dice overlap coefficient and the average accuracy were 91.0 ±2.8% and 99.0 ±0.4%, respectively. The mean processing time for the proposed automatic scheme was 1.02 ±0.08 min (Intel Core i7 CPU, 2.8 GHz), whereas that of the manual volumetry was 24.3 ±3.7 min (p < 0.001). PMID:26405897
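    The Dice overlap coefficient quoted above compares the automatic and manual segmentation masks; a toy sketch on made-up binary masks:

```python
# Dice overlap coefficient, the agreement measure reported above, computed
# on tiny invented binary masks (1 = liver voxel, 0 = background).

def dice(a, b):
    """Dice = 2*|A intersect B| / (|A| + |B|) for flat binary masks."""
    inter = sum(1 for x, y in zip(a, b) if x == 1 and y == 1)
    return 2.0 * inter / (sum(a) + sum(b))

auto   = [1, 1, 1, 0, 0, 1]   # hypothetical automatic mask
manual = [1, 1, 0, 0, 1, 1]   # hypothetical manual "gold standard"
print(round(dice(auto, manual), 2))  # -> 0.75
```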

  5. Spectral domain optical coherence tomography imaging in optic disk pit associated with outer retinal dehiscence

    PubMed Central

    Wong, Chee Wai; Wong, Doric; Mathur, Ranjana

    2014-01-01

    A 37-year-old Bangladeshi male presented with an inferotemporal optic disk pit and serous macular detachment in the left eye. Imaging with spectral domain optical coherence tomography (OCT) revealed a multilayer macular schisis pattern with a small subfoveal outer retinal dehiscence. This case illustrates a rare phenotype of optic disk maculopathy with macular schisis and a small outer retinal layer dehiscence. Spectral domain OCT was a useful adjunct in delineating the retinal layers in optic disk pit maculopathy, and revealed a small area of outer retinal layer dehiscence that could only have been detected on high-resolution OCT. PMID:25349471

  6. Measuring Femoral Torsion In Vivo Using Freehand 3-D Ultrasound Imaging.

    PubMed

    Passmore, Elyse; Pandy, Marcus G; Graham, H Kerr; Sangeux, Morgan

    2016-02-01

    Despite variation in bone geometry, muscle and joint function is often investigated using generic musculoskeletal models. Patient-specific bone geometry can be obtained from computed tomography, which involves ionising radiation, or magnetic resonance imaging (MRI), which is costly and time consuming. Freehand 3-D ultrasound provides an alternative means of obtaining bone geometry. The purpose of this study was to determine the accuracy and repeatability of 3-D ultrasound in measuring femoral torsion. Measurements of femoral torsion were performed on 10 healthy adults using MRI and 3-D ultrasound. Measurements from 3-D ultrasound were, on average, smaller than those from MRI (mean difference = 1.8°; 95% confidence interval: -3.9°, 7.5°). MRI and 3-D ultrasound had Bland-Altman repeatability coefficients of 3.1° and 3.7°, respectively. Accurate measurements of femoral torsion were obtained with 3-D ultrasound, offering the potential to acquire patient-specific bone geometry for musculoskeletal modelling. Three-dimensional ultrasound is non-invasive and relatively inexpensive, and can be integrated into gait analysis. PMID:26639301
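    The Bland-Altman quantities quoted above (the bias between methods and its 95% limits of agreement) can be computed as follows; the paired torsion values are invented for illustration, not the study's data.

```python
# Bland-Altman sketch: mean difference (bias) and 95% limits of agreement
# between two measurement methods, as used above to compare MRI with 3-D
# ultrasound. The paired femoral torsion values are made up.
from statistics import mean, stdev

def bland_altman(m1, m2):
    """Return (bias, lower limit, upper limit) for paired measurements."""
    diffs = [a - b for a, b in zip(m1, m2)]
    bias, sd = mean(diffs), stdev(diffs)
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

mri        = [15.0, 18.0, 12.0, 20.0]   # femoral torsion, degrees
ultrasound = [13.5, 16.0, 11.0, 18.5]
bias, lo, hi = bland_altman(mri, ultrasound)
print(round(bias, 2))  # -> 1.5
```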

  7. 3D surface reconstruction based on image stitching from gastric endoscopic video sequence

    NASA Astrophysics Data System (ADS)

    Duan, Mengyao; Xu, Rong; Ohya, Jun

    2013-09-01

    This paper proposes a method for reconstructing the detailed 3D structure of internal organs, such as the gastric wall, from endoscopic video sequences. The proposed method consists of four major steps: feature-point-based 3D reconstruction, 3D point cloud stitching, dense point cloud creation, and Poisson surface reconstruction. Before the first step, the video sequence is partitioned into groups, each consisting of two successive frames (an image pair); each pair contains an overlapping part that is used as a stitching region. First, the 3D point cloud of each group is reconstructed using structure from motion (SFM). Second, a scheme based on SIFT features registers and stitches the obtained 3D point clouds by estimating the transformation matrix of the overlapping part between different groups with high accuracy and efficiency. Third, the most robust SIFT feature points are selected as seed points, and a dense point cloud is obtained from the sparse point cloud via the depth testing method presented by Furukawa. Finally, Poisson surface reconstruction yields polygonal patches for the internal organs. Experimental results demonstrate that the proposed method achieves high accuracy and efficiency in the 3D reconstruction of the gastric surface from an endoscopic video sequence.

  8. Technical Note: RING ARRAY TRANSDUCERS FOR REAL-TIME 3-D IMAGING OF AN ATRIAL

    E-print Network

    Smith, Stephen

    into the occluder deployment kit might reduce the need for the other imaging modalities with the associated x-ray) real-time 3-D ultrasound scanner. Transducer performance yielded a -6 dB fractional bandwidth of 20 deployment kit. Figure 1 shows a schematic of an integrated ring array transducer, connected to the scanner

  9. Stochastic Tomography and its Applications in 3D Imaging of Mixing Fluids

    E-print Network

    Heidrich, Wolfgang

    of an acquisition rig for fluid phenomena, consisting of 5-16 strobe-synchronized consumer cameras. Middle: Example-render arbitrary 2D projections without the need to ever store a 3D volume grid. CR Categories: I.3.3 [COMPUTER GRAPHICS]: Picture/Image Generation--Digitizing and scanning; Keywords: Stochastic sampling, Tomography

  10. Protein structure similarity based on multi-view images generated from 3D molecular visualization

    E-print Network

    Fukui, Kazuhiro

    Protein structure similarity based on multi-view images generated from 3D molecular visualization the structures of proteins is one of the most challenging problems in structural biology. Root Mean Square Distance (RMSD) has become a standard measurement to calculate the similarity between two protein

  11. Integration of 3-D Stereographic Imaging Techniques with a Large-Chamber Scanning Electron Microscope

    E-print Network

    Abidi, Mongi A.

    Middle Drive, Knoxville, TN 37996 The scanning electron microscope (SEM) has long been used-chamber scanning electron microscope (LC-SEM) that has the largest chamber in the world at eight cubic metersIntegration of 3-D Stereographic Imaging Techniques with a Large-Chamber Scanning Electron

  12. Real-time Upper Body Detection and 3D Pose Estimation in Monoscopic Images

    E-print Network

    Bowden, Richard

    edge segment detector to locate body parts, and assemble them into a `body plan' using a pre detectors, and assemble detected parts into a body plan using pictorial structures. Ronfard et al.[10] useReal-time Upper Body Detection and 3D Pose Estimation in Monoscopic Images Antonio S. Micilotta

  13. 3D Reconstruction of Reflection Nebulae from a Single Image Andrei Lintu1

    E-print Network

    Magnor, Marcus

    3D Reconstruction of Reflection Nebulae from a Single Image Andrei Lintu1, Lars Hoffmann2 nebula, light is typically emitted from a central star and then scattered and partially absorbed by the nebula's dust particles. We model the light transport in this kind of nebulae by considering absorption

  14. Evaluation of 3D Structure in RELAX RFP with SXR Imaging Technique

    NASA Astrophysics Data System (ADS)

    Sanpei, Akio; Masamune, Sadao; Deguchi, Kazuaki; Nakaki, Seiya; Tanaka, Hiroyuki; Nishimura, Kanae; Himura, Haruhiko; Ohdachi, Satoshi; Mizuguchi, Naoki

    2012-10-01

    In a low-A RFP machine RELAX (R = 0.51 m/a = 0.25 m (A = 2)), a quasi-periodic transition to a quasi-single helicity (QSH) state has been observed. During the QSH state, the fluctuation power concentrates in the dominant m = 1/n = 4 mode, and a (toroidally rotating) 3-D helical structure has been observed with a radial array of magnetic probes [1]. We applied a high-speed (10-microsecond time resolution) soft x-ray (SXR) imaging diagnostic system to take SXR images during the QSH state, identifying characteristic helical SXR structures which suggest a hot or dense helical core [2]. The high-speed SXR imaging system has been extended to take images from tangential and vertical directions simultaneously to observe 3-D dynamic structures of the SXR emissivity. The time evolution of the 3-D helical structures associated with the QSH state will be reported, together with some discussion on 3-D reconstruction techniques. [1] Oki et al., Plasma Fusion Res. vol. 7, 1402028 (2012). [2] Sanpei et al., IEEE Transactions on Plasma Science, vol. 39, 2410 (2011).

  15. 3D imaging and mechanical modeling of helical buckling in Medicago truncatula plant roots

    E-print Network

    Cohen, Itai

    of plant root systems to secure water and nutrients from the heterogeneous terrestrial environment in which in Medicago plants, it may be supposed that this morphology is purely a biological process3D imaging and mechanical modeling of helical buckling in Medicago truncatula plant roots Jesse L

  16. CONFORMAL MAPPING OF NUCLEI IN 3D TOMOGRAPHIC CELL IMAGES TO ASSESS SHAPE HETEROGENEITY

    E-print Network

    Wang, Yalin

    CONFORMAL MAPPING OF NUCLEI IN 3D TOMOGRAPHIC CELL IMAGES TO ASSESS SHAPE HETEROGENEITY Vivek with isotropic, sub-micron spatial resolution. We used adaptive thresholding schemes to segment cells and nuclei of interest. Statistical analysis of shape coefficients revealed that cancer cell nuclei from both organs

  17. TOWARDS ROBUST 3D FACE RECOGNITION FROM NOISY RANGE IMAGES WITH LOW RESOLUTION

    E-print Network

    Nabben, Reinhard

    TOWARDS ROBUST 3D FACE RECOGNITION FROM NOISY RANGE IMAGES WITH LOW RESOLUTION O. EBERS, T. EBERS. Among these methods, face recognition has a number of advantages such as being non face recognition system are its low data capture duration and its low cost. However, the recent

  18. 3D Ultrasound Probe Calibration Using Robotic Arm and Image Registration

    E-print Network

    Promayon, Emmanuel

    . It can also help to efficiently follow up medical targets in space. But a calibration of the probe the reference space to each robot position (i.e., a virtual reference on the arm that holds the probe3D Ultrasound Probe Calibration Using Robotic Arm and Image Registration Johan Sarrazin 1

  19. SIMULTANEOUS CELL TRACKING AND IMAGE ALIGNMENT IN 3D CLSM IMAGERY OF GROWING ARABIDOPSIS THALIANA SEPALS

    E-print Network

    California at Santa Barbara, University of

    SIMULTANEOUS CELL TRACKING AND IMAGE ALIGNMENT IN 3D CLSM IMAGERY OF GROWING ARABIDOPSIS THALIANA Arabidopsis thaliana. The method is based on ge- ometric hashing and inherits its invariance to rotation cells throughout some stages of develop- ment. The process of development in Arabidopsis thaliana

  20. Lossless Bit-plane Compression of Microarray Images Using 3D Context Models

    E-print Network

    Paiva, António R. C.

    growth of interest in microarray technology and the improvement of the technology responsible The DNA microarray technology is a new and effective tool for biomedical research. It allows Lossless Bit-plane Compression of Microarray Images Using 3D Context Models António J. R. Neves

  1. IEEE TRANSACTIONS ON IMAGE PROCESSING 1 3D Discrete Shearlet Transform and Video

    E-print Network

    Labate, Demetrio

    IEEE TRANSACTIONS ON IMAGE PROCESSING 1 3D Discrete Shearlet Transform and Video Processing Pooran tool for tasks such as feature extraction and pattern recognition [6], [7]. Even though shearlets other state-of- the-art multiscale techniques, including curvelets and surfacelets. Index Terms

  2. Filters in 2D and 3D Cardiac SPECT Image Processing.

    PubMed

    Lyra, Maria; Ploussi, Agapi; Rouchota, Maritina; Synefia, Stella

    2014-01-01

    Nuclear cardiac imaging is a noninvasive, sensitive method providing information on cardiac structure and physiology. Single photon emission computed tomography (SPECT) evaluates myocardial perfusion, viability, and function and is widely used in routine clinical practice. The quality of the tomographic image is key to accurate diagnosis. Image filtering, a mathematical processing step, compensates for loss of detail in an image while reducing image noise; it can improve image resolution and limit image degradation. SPECT images are then reconstructed either by the filtered back projection (FBP) analytical technique or iteratively, by algebraic methods. The aim of this study is to review filters in cardiac 2D, 3D, and 4D SPECT applications and how these affect image quality and, in turn, the diagnostic accuracy of SPECT images. Several filters, including the Hanning, Butterworth, and Parzen filters, were evaluated in combination with the two reconstruction methods as well as with a specified MatLab program. Results showed that for both 3D and 4D cardiac SPECT the Butterworth filter, for different critical frequencies and orders, produced the best results. Between the two reconstruction methods, the iterative one may be more appropriate for cardiac SPECT, since it improves lesion detectability due to the significant improvement in image contrast. PMID:24804144
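
    The Butterworth filter named above is parameterized by a critical (cutoff) frequency and an order. A sketch of one common form of its low-pass magnitude response — conventions vary between SPECT packages, and some omit the square root:

```python
import numpy as np

def butterworth(f, fc, order):
    """Low-pass Butterworth magnitude response: H(f) = 1 / sqrt(1 + (f/fc)^(2*order)).

    Higher order gives a sharper roll-off around the critical frequency fc."""
    return 1.0 / np.sqrt(1.0 + (f / fc) ** (2 * order))

f = np.linspace(0.0, 0.5, 256)       # spatial frequency (cycles/pixel)
H = butterworth(f, fc=0.25, order=5)  # response drops to 1/sqrt(2) at f = fc
```
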

  3. A 3D In Vitro Cancer Model as a Platform for Nanoparticle Uptake and Imaging Investigations

    PubMed Central

    Ricketts, Kate P M; Cheema, Umber; Nyga, Agata; Castoldi, Andrea; Guazzoni, Chiara; Magdeldin, Tarig; Emberton, Mark; Gibson, Adam P; Royle, Gary J; Loizidou, Marilena

    2014-01-01

    In order to maximize the potential of nanoparticles (NPs) in cancer imaging and therapy, their mechanisms of interaction with host tissue need to be fully understood. NP uptake is known to be dramatically influenced by the tumor microenvironment, and an imaging platform that could replicate in vivo cellular conditions would make big strides in NP uptake studies. Here, a novel NP uptake platform consisting of a tissue-engineered 3D in vitro cancer model (tumoroid), which mimics the microarchitecture of a solid cancer mass and stroma, is presented. As the tumoroid exhibits fundamental characteristics of solid cancer tissue and its cellular and biochemical parameters are controllable, it provides a real alternative to animal models. Furthermore, an X-ray fluorescence imaging system is developed to demonstrate 3D imaging of gold nanoparticles (GNPs) and to determine uptake efficiency within the tumoroid. This platform has implications for optimizing the targeted delivery of NPs to cells to benefit cancer diagnostics and therapy. PMID:24990320

  4. A New Implicit Method for Surface Segmentation by Minimal Paths in 3D Images

    SciTech Connect

    Ardon, Roberto Cohen, Laurent D. Yezzi, Anthony

    2007-03-15

    We introduce a novel implicit approach for single-object segmentation in 3D images. The boundary surface of this object is assumed to contain two known curves (the constraining curves), given by an expert. The aim of our method is to find the wanted surface by exploiting as much as possible the information given in the supplied curves and in the image. As for active surfaces, we use a cost potential that penalizes image regions of low interest (most likely areas of low gradient or too far from the surface to be extracted). In order to avoid local minima, we introduce a new partial differential equation and use its solution for segmentation. We show that the zero level set of this solution contains the constraining curves as well as a set of paths joining them. We present a fast implementation that has been successfully applied to 3D medical and synthetic images.
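
    The idea of minimal paths over a cost potential that penalizes low-interest regions can be illustrated in discrete form with Dijkstra's algorithm on a pixel grid. This toy sketch is not the paper's PDE-based method — only the underlying minimal-path notion it builds on:

```python
import heapq
import numpy as np

def minimal_path_cost(potential, start, goal):
    """Dijkstra shortest-path cost on a 4-connected grid with per-pixel cost."""
    rows, cols = potential.shape
    dist = np.full((rows, cols), np.inf)
    dist[start] = potential[start]
    pq = [(dist[start], start)]
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if (r, c) == goal:
            return d
        if d > dist[r, c]:
            continue  # stale queue entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + potential[nr, nc]
                if nd < dist[nr, nc]:
                    dist[nr, nc] = nd
                    heapq.heappush(pq, (nd, (nr, nc)))
    return np.inf

# High-cost wall with one low-cost gap: the minimal path detours through the gap.
P = np.ones((5, 5))
P[:, 2] = 100.0
P[4, 2] = 1.0     # gap in the wall at the bottom row
cost = minimal_path_cost(P, (0, 0), (0, 4))   # 13 unit-cost pixels via the gap
```
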

  5. 3D image reconstruction on x-ray micro-computed tomography

    NASA Astrophysics Data System (ADS)

    Louk, Andreas C.

    2015-03-01

    A model for 3D image reconstruction on an x-ray micro-computed tomography scanner (micro-CTScan) has been developed. A small object was placed under inspection in the x-ray micro-CTScan. The object cross-section was assumed to lie in the x-y plane, while its height was along the z-axis. Using a radiography plane detector, a set of digital radiographs representing multiple angles of view from 0º to 360º at an interval of 1º was obtained. Then, a set of cross-sectional tomographic images was reconstructed slice by slice. Finally, all image slices were stacked together sequentially to obtain a 3D image model of the object under inspection. From this development, a better understanding of the internal structure of the object can be gained from the cross-sectional image slices and the surface skin.

  6. Multi-resolution Vessel Segmentation Using Normalized Cuts in Retinal Images

    E-print Network

    Chung, Albert C. S.

    Multi-resolution Vessel Segmentation Using Normalized Cuts in Retinal Images Wenchao Cai and Albert C. S. Chung Abstract. Retinal vessel segmentation is an essential step in the diagnosis of various eye diseases window where a vessel possibly exists. The normalized cut criterion, which measures both the similarity

  7. Magnetic Resonance Imaging Indicates Decreased Choroidal and Retinal Blood Flow in the DBA/2J Mouse

    E-print Network

    Duong, Timothy Q.

    Magnetic Resonance Imaging Indicates Decreased Choroidal and Retinal Blood Flow in the DBA/2J Mouse. This study tests the hypothesis that reduced retinal and choroidal blood flow (BF) occur in the DBA/2J mouse contributing factor in the optic neuropathy in the DBA/2J mouse model of glaucoma. (Invest Ophthalmol Vis Sci

  8. In Vivo Autofluorescence Imaging of the Human and Macaque Retinal Pigment Epithelial Cell Mosaic

    E-print Network

    In Vivo Autofluorescence Imaging of the Human and Macaque Retinal Pigment Epithelial Cell Mosaic. Retinal pigment epithelial (RPE) cells are critical for the health of the retina, especially by detecting autofluorescence with an adaptive optics scanning laser ophthalmoscope (AOSLO). The current study

  9. In-vivo imaging of the photoreceptor mosaic in retinal dystrophies and correlations with visual function

    SciTech Connect

    Choi, S; Doble, N; Hardy, J; Jones, S; Keltner, J; Olivier, S; Werner, J S

    2005-10-26

    To relate in-vivo microscopic retinal changes to visual function assessed with clinical tests in patients with various forms of retinal dystrophies. The UC Davis Adaptive Optics (AO) Fundus Camera was used to acquire in-vivo retinal images at the cellular level. Visual function tests, consisting of visual field analysis, multifocal electroretinography (mfERG), contrast sensitivity and color vision measures, were performed on all subjects. Five patients with different forms of retinal dystrophies and three control subjects were recruited. Cone densities were quantified for all retinal images. In all images of diseased retinas, there were extensive areas of dark space between groups of photoreceptors, where no cone photoreceptors were evident. These irregular features were not seen in healthy retinas, but were characteristic features in fundi with retinal dystrophies. There was a correlation between functional vision loss and the extent to which the irregularities occurred in retinal images. Cone densities were found to decrease with an associated decrease in retinal function. AO fundus photography is a reliable technique for assessing and quantifying the changes in the photoreceptor layer as disease progresses. Furthermore, this technique can be useful in cases where visual function tests give borderline or ambiguous results, as it allows visualization of individual photoreceptors.

  10. An Image Analysis System for the Assessment of Retinal Microcirculation in Hypertension and Its Clinical Evaluation

    E-print Network

    Zabulis, Xenophon

    An Image Analysis System for the Assessment of Retinal Microcirculation in Hypertension and Its, Greece Abstract-- A system for the assessment of hypertension through the measurement of retinal vessels worldwide [4]. Hypertension (high blood pressure) is one of the most important, highly prevalent

  11. 3D image fusion and guidance for computer-assisted bronchoscopy

    NASA Astrophysics Data System (ADS)

    Higgins, W. E.; Rai, L.; Merritt, S. A.; Lu, K.; Linger, N. T.; Yu, K. C.

    2005-11-01

    The standard procedure for diagnosing lung cancer involves two stages. First, the physician evaluates a high-resolution three-dimensional (3D) computed-tomography (CT) chest image to produce a procedure plan. Next, the physician performs bronchoscopy on the patient, which involves navigating the bronchoscope through the airways to planned biopsy sites. Unfortunately, the physician has no link between the 3D CT image data and the live video stream provided during bronchoscopy. In addition, these data sources differ greatly in the information they physically provide, and no true 3D tools exist for planning and guiding procedures. This makes it difficult for the physician to translate a CT-based procedure plan to the video domain of the bronchoscope. Thus, the physician must essentially perform the biopsy blindly, and skill levels differ greatly between physicians. We describe a system that enables direct 3D CT-based procedure planning and provides direct 3D guidance during bronchoscopy. 3D CT-based information on biopsy sites is provided interactively as the physician moves the bronchoscope. Moreover, graphical information from a live fusion of the 3D CT data and bronchoscopic video is provided during the procedure. This information is coupled with a series of computer-graphics tools to give the physician a greatly augmented reality of the patient's interior anatomy during a procedure. Through a series of controlled tests and studies with human lung-cancer patients, we have found that the system not only reduces the variation in skill level between different physicians, but also increases the biopsy success rate.

  12. Correction of a Depth-Dependent Lateral Distortion in 3D Super-Resolution Imaging

    PubMed Central

    Manley, Suliana

    2015-01-01

    Three-dimensional (3D) localization-based super-resolution microscopy (SR) requires correction of aberrations to accurately represent 3D structure. Here we show how a depth-dependent lateral shift in the apparent position of a fluorescent point source, which we term 'wobble', results in warped 3D SR images, and provide a software tool to correct this distortion. This system-specific lateral shift is typically > 80 nm across an axial range of ~1 µm. A theoretical analysis based on phase retrieval data from our microscope suggests that the wobble is caused by non-rotationally symmetric phase and amplitude aberrations in the microscope's pupil function. We then apply our correction to the bacterial cytoskeletal protein FtsZ in live bacteria and demonstrate that the corrected data more accurately represent the true shape of this vertically-oriented ring-like structure. We also include this correction method in a registration procedure for dual-color 3D SR data and show that it improves target registration error (TRE) at the axial limits over an imaging depth of 1 µm, yielding TRE values of < 20 nm. This work highlights the importance of correcting aberrations in 3D SR to achieve high fidelity between the measurements and the sample. PMID:26600467
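
    The correction described amounts to measuring the lateral shift of a point source as a function of depth (e.g., from a calibration bead scan) and subtracting the interpolated shift from each localization. A hedged sketch — the calibration values below are hypothetical, not the paper's measurements:

```python
import numpy as np

# Hypothetical calibration: lateral shift (nm) of a bead at known depths z (nm).
z_cal = np.array([-500.0, -250.0, 0.0, 250.0, 500.0])
x_shift_cal = np.array([-40.0, -15.0, 0.0, 20.0, 45.0])
y_shift_cal = np.array([10.0, 5.0, 0.0, -8.0, -18.0])

def correct_wobble(x, y, z):
    """Subtract the depth-dependent lateral shift ('wobble') from localizations,
    linearly interpolating the calibration curves at each localization's z."""
    xs = np.interp(z, z_cal, x_shift_cal)
    ys = np.interp(z, z_cal, y_shift_cal)
    return x - xs, y - ys

z = np.array([0.0, 250.0, 500.0])
x = np.array([100.0, 100.0, 100.0])
y = np.array([50.0, 50.0, 50.0])
xc, yc = correct_wobble(x, y, z)
```
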

  13. Generation of 3D image from computer data with dot array rainbow hologram

    NASA Astrophysics Data System (ADS)

    Cai, Luzhong; Wang, Yurong; Guo, Chengshan; Wang, Weitian

    1996-09-01

    A novel holographic method of making hard copies of a three-dimensional image from computer data, the dot array rainbow hologram (DARH), is introduced and analyzed. The DARH is recorded dot by dot using a 1D liquid crystal panel and our proposed projection algorithm, and can reconstruct a 3D image with horizontal parallax under white-light illumination. The principles of making the DARH and holo-animation are discussed. Preliminary experimental results showing the effectiveness of this method are also presented.

  14. 4DCBCT-based motion modeling and 3D fluoroscopic image generation for lung cancer radiotherapy

    NASA Astrophysics Data System (ADS)

    Dhou, Salam; Hurwitz, Martina; Mishra, Pankaj; Berbeco, Ross; Lewis, John

    2015-03-01

    A method is developed to build patient-specific motion models based on 4DCBCT images taken at treatment time and use them to generate 3D time-varying images (referred to as 3D fluoroscopic images). Motion models are built by applying principal component analysis (PCA) to the displacement vector fields (DVFs) estimated by performing deformable image registration on each phase of 4DCBCT relative to a reference phase. The resulting PCA coefficients are optimized iteratively by comparing 2D projections captured at treatment time with projections estimated using the motion model. The optimized coefficients are used to generate 3D fluoroscopic images. The method is evaluated using anthropomorphic physical and digital phantoms reproducing real patient trajectories. For physical phantom datasets, the average (95th percentile) tumor localization error (TLE) in two datasets was 0.95 (2.2) mm. For digital phantoms assuming superior image quality of 4DCT and no anatomic or positioning disparities between 4DCT and treatment time, the average TLE and the image intensity error (IIE) in six datasets were smaller using 4DCT-based motion models. When simulating positioning disparities and tumor baseline shifts at treatment time compared to planning 4DCT, the average TLE (95th percentile) and IIE were 4.2 (5.4) mm and 0.15 using 4DCT-based models, while they were 1.2 (2.2) mm and 0.10 using 4DCBCT-based ones, respectively. 4DCBCT-based models were shown to perform better when there are positioning and tumor baseline shift uncertainties at treatment time. Thus, generating 3D fluoroscopic images based on 4DCBCT-based motion models can capture both inter- and intra-fraction anatomical changes during treatment.
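
    The PCA motion model described above can be sketched with flattened DVFs (one row per respiratory phase): subtract the mean field, take the SVD, and represent each phase by a few coefficients on the principal motion modes. Toy synthetic data below, not the paper's registration output:

```python
import numpy as np

rng = np.random.default_rng(1)
n_phases, n_voxels = 10, 300   # e.g., 100 voxels x 3 vector components, flattened

# Synthetic DVFs lying (plus noise) in a 2-mode subspace, mimicking breathing motion.
modes = rng.normal(size=(2, n_voxels))
weights = rng.normal(size=(n_phases, 2))
dvfs = weights @ modes + 0.01 * rng.normal(size=(n_phases, n_voxels))

mean_dvf = dvfs.mean(axis=0)
U, S, Vt = np.linalg.svd(dvfs - mean_dvf, full_matrices=False)
components = Vt[:2]                          # top-2 principal motion modes
coeffs = (dvfs - mean_dvf) @ components.T    # per-phase model coefficients

# Any phase is approximated as mean field + coefficients times the modes.
recon = mean_dvf + coeffs @ components
err = np.abs(recon - dvfs).max()
```

    In the paper's setting, the coefficients are not taken from known phases but optimized so that simulated 2D projections of the deformed volume match the projections measured at treatment time.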

  15. Geometric uncertainty of 2D projection imaging in monitoring 3D tumor motion

    NASA Astrophysics Data System (ADS)

    Suh, Yelin; Dieterich, Sonja; Keall, Paul J.

    2007-07-01

    The purpose of this study was to investigate the accuracy of two-dimensional (2D) projection imaging methods in three-dimensional (3D) tumor motion monitoring. Many commercial linear accelerator types have projection imaging capabilities, and tumor motion monitoring is useful for motion-inclusive, respiratory-gated or tumor tracking strategies. Since 2D projection imaging is limited in its ability to resolve the motion along the imaging beam axis, there is unresolved motion when monitoring 3D tumor motion. From the 3D tumor motion data of 160 treatment fractions for 46 thoracic and abdominal cancer patients, the unresolved motion due to the geometric limitation of 2D projection imaging was calculated as displacement along the imaging beam axis for different beam angles and time intervals. The geometric uncertainty in monitoring 3D motion caused by the unresolved motion of 2D imaging was quantified using the root-mean-square (rms) metric. Geometric uncertainty showed interfractional and intrafractional variation. Patient-to-patient variation was much more significant than variation for different time intervals. For the patient cohort studied, as the time intervals increase, the rms, minimum and maximum values of the rms uncertainty show decreasing tendencies for the lung patients but increasing ones for the liver and retroperitoneal patients, which could be attributed to patient relaxation. Geometric uncertainty was smaller for coplanar treatments than non-coplanar treatments, as superior-inferior (SI) tumor motion, the predominant motion from patient respiration, could always be resolved for coplanar treatments. The overall rms of the rms uncertainty was 0.13 cm for all treatment fractions and 0.18 cm for the treatment fractions whose average breathing peak-trough ranges were more than 0.5 cm. The geometric uncertainty for 2D imaging varies depending on the tumor site, tumor motion range, time interval and beam angle as well as between patients, between fractions and within a fraction.
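
    The unresolved motion quantified above is the component of the 3D displacement along the imaging beam axis, summarized with an rms metric. A minimal sketch with a toy respiratory trajectory — axes and magnitudes are illustrative only:

```python
import numpy as np

def unresolved_rms(motion_xyz, beam_dir):
    """RMS of the 3-D motion component along the (unit) imaging beam axis,
    i.e., the displacement a single 2-D projection image cannot resolve."""
    beam = np.asarray(beam_dir, dtype=float)
    beam /= np.linalg.norm(beam)
    along_beam = motion_xyz @ beam     # signed displacement along the beam
    return np.sqrt(np.mean(along_beam ** 2))

# Toy trajectory (cm): 0.5 cm peak SI (z) motion, 0.2 cm AP (y) motion, no lateral (x).
t = np.linspace(0, 2 * np.pi, 200)
motion = np.stack([np.zeros_like(t), 0.2 * np.sin(t), 0.5 * np.sin(t)], axis=1)

rms_lat_beam = unresolved_rms(motion, [1, 0, 0])  # beam along x: nothing unresolved
rms_ap_beam = unresolved_rms(motion, [0, 1, 0])   # beam along y: AP motion unresolved
```
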

  16. Neutron radiographic image restoration using BM3D frames and nonlinear variance stabilization

    NASA Astrophysics Data System (ADS)

    Shuang, Qiao; Wei-jing, Zhao; Jia-ning, Sun

    2015-07-01

    Neutron radiography is a powerful tool for non-destructive investigations in industrial applications. However, the resulting images are inevitably degraded due to physical limitations. In this paper, we propose a new scheme for neutron image restoration, which utilizes BM3D frames and nonlinear variance stabilization, including the generalized Anscombe transformation and its exact unbiased inverse. Experimental results show that the proposed scheme outperforms existing restoration methods, improving restoration quality efficiently and exhibiting better visual results.
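
    Variance stabilization makes Poisson-dominated counting noise approximately Gaussian with unit variance, so that Gaussian denoisers such as BM3D apply. A sketch of the classical Anscombe transform with a simple algebraic inverse; the exact unbiased inverse used in such schemes adds bias-correction terms that are not shown here:

```python
import numpy as np

def anscombe(x):
    """Anscombe transform: maps Poisson counts to approximately unit variance."""
    return 2.0 * np.sqrt(np.asarray(x, dtype=float) + 3.0 / 8.0)

def inverse_anscombe(y):
    """Simple algebraic (asymptotic) inverse; an 'exact unbiased inverse'
    would add correction terms to remove the small-count bias."""
    return (np.asarray(y) / 2.0) ** 2 - 3.0 / 8.0

# Variance-stabilization check on synthetic Poisson counts (mean 20).
rng = np.random.default_rng(2)
samples = rng.poisson(20.0, 100000)
stab_std = anscombe(samples).std()     # close to 1 after stabilization
roundtrip = inverse_anscombe(anscombe(np.arange(10)))
```
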

  17. Edge-Weighted Centroid Voronoi Tessellation with Propagation of Consistency Constraint for 3D Grain Segmentation in Microscopic Superalloy Images

    E-print Network

    Wang, Song

    Segmentation in Microscopic Superalloy Images Youjie Zhou, Lili Ju, Yu Cao, Jarrell Waggoner, Yuewei Lin, Jeff propagation of the inter-slice consistency constraint. It can segment a 3D superalloy image, slice by slice on a 3D superalloy image consisting of 170 2D slices. Performance is evaluated against manually annotated

  18. A 7.5 MHz Dual-Layer Transducer Array for 3-D Rectilinear Imaging

    PubMed Central

    Chen, Yuling; Nguyen, Man; Yen, Jesse T.

    2011-01-01

    The difficulties associated with fabrication and interconnection have limited the development of 2-D ultrasound transducer arrays with a large number of elements (>5000). In previous work, we described a 5 MHz center frequency PZT-P[VDF-TrFE] dual-layer transducer, which used 2 perpendicular 1-D arrays for 3-D rectilinear imaging. This design substantially reduces the channel count as well as fabrication complexity, which makes 3-D imaging more realizable. Higher frequencies (>5 MHz) are more commonly used clinically for imaging targets near the transducer, such as the breast, carotid artery, and musculoskeletal structures. In this paper, we present a 7.5 MHz dual-layer transducer array for 3-D rectilinear imaging. A modified acoustic stack model was designed and fabricated. PZT elements were sub-diced to eliminate lateral coupling. This sub-dicing process made the PZT into a 2-2 composite material, which could help improve transducer sensitivity and bandwidth. Full synthetic aperture 3-D data sets were acquired by interfacing the transducer with a Verasonics data acquisition system (VDAS). Offline 3-D beamforming was then performed to obtain volumes of a multi-wire phantom and a cyst phantom. The generalized coherence factor (GCF) was applied to improve the contrast of the cyst images. The measured -6 dB fractional bandwidth of the transducer was 71% with a center frequency of 7.5 MHz. The measured lateral beamwidths were 0.521 mm and 0.482 mm in azimuth and elevation, respectively, compared with a simulated beamwidth of 0.43 mm. PMID:21842584
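
    The generalized coherence factor applied above weights each beamformed pixel by the fraction of aperture-domain spectral energy near zero spatial frequency, suppressing incoherent (off-axis) contributions. A minimal single-pixel sketch — the cutoff index m0 and the signal shapes are illustrative, not the paper's parameters:

```python
import numpy as np

def gcf(channel_data, m0=1):
    """Generalized coherence factor for one pixel: ratio of energy in the
    low spatial-frequency region (|k| <= m0) of the aperture spectrum
    to the total spectral energy."""
    energy = np.abs(np.fft.fft(channel_data)) ** 2
    low = energy[:m0 + 1].sum() + (energy[-m0:].sum() if m0 > 0 else 0.0)
    return low / energy.sum()

coherent = np.ones(32)              # aligned channel signals from a focused target
rng = np.random.default_rng(3)
incoherent = rng.normal(size=32)    # decorrelated signals (e.g., off-axis clutter)

g_coh = gcf(coherent)               # ~1: keep the pixel value
g_inc = gcf(incoherent)             # << 1: attenuate the pixel value
```
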

  19. Ultra wide band millimeter wave holographic ``3-D`` imaging of concealed targets on mannequins

    SciTech Connect

    Collins, H.D.; Hall, T.E.; Gribble, R.P.

    1994-08-01

    Ultra wide band (chirp frequency) millimeter wave "3-D" holography is a unique technique for imaging concealed targets on human subjects with extremely high lateral and depth resolution. Recent "3-D" holographic images of full-size mannequins with concealed weapons illustrate the efficacy of this technique for airport security. A chirp frequency (24 GHz to 40 GHz) holographic system was used to construct extremely high resolution images (optical quality) using polyrod antennas in a bi-static configuration with an x-y scanner. Millimeter wave chirp frequency holography can be simply described as a multi-frequency detection and imaging technique where the target's reflected signals are decomposed into discrete frequency holograms and reconstructed into a single composite "3-D" image. The implementation of this technology for security at airports, government installations, etc., will require real-time (video rate) data acquisition and computer image reconstruction of large volumetric data sets. This implies rapid scanning techniques or large, complex "2-D" arrays and high-speed computing for successful commercialization of this technology.

  20. Fast 3D Spatial EPR Imaging Using Spiral Magnetic Field Gradient

    PubMed Central

    Deng, Yuanmu; Petryakov, Sergy; He, Guanglong; Kesselring, Eric; Kuppusamy, Periannan; Zweier, Jay L.

    2007-01-01

    Electron paramagnetic resonance imaging (EPRI) provides direct detection and mapping of free radicals. The continuous wave (CW) EPRI technique, in particular, has been widely used in a variety of applications in the fields of biology and medicine due to its high sensitivity and applicability to a wide range of free radicals and paramagnetic species. However, the technique requires long image acquisition periods, and this limits its use for many in vivo applications where relatively rapid changes occur in the magnitude and distribution of spins. Therefore, there has been a great need to develop fast EPRI techniques. We report the development of a fast 3D CW EPRI technique using spiral magnetic field gradient. By spiraling the magnetic field gradient and stepping the main magnetic field, this approach acquires a 3D image in one sweep of the main magnetic field, enabling significant reduction of the imaging time. A direct one-stage 3D image reconstruction algorithm, modified for reconstruction of the EPR images from the projections acquired with the spiral magnetic field gradient, was used. We demonstrated using a home-built L-band EPR system that the spiral magnetic field gradient technique enabled a 4 to 7-fold accelerated acquisition of projections. This technique has great potential for in vivo studies of free radicals and their metabolism. PMID:17267252

  1. Automated segmentation and geometrical modeling of the tricuspid aortic valve in 3D echocardiographic images

    PubMed Central

    Pouch, Alison M.; Wang, Hongzhi; Takabe, Manabu; Jackson, Benjamin M.; Sehgal, Chandra M.; Gorman, Joseph H.; Gorman, Robert C.; Yushkevich, Paul A.

    2013-01-01

    The aortic valve has been described with variable anatomical definitions, and the consistency of 2D manual measurement of valve dimensions in medical image data has been questionable. Given the importance of image-based morphological assessment in the diagnosis and surgical treatment of aortic valve disease, there is considerable need to develop a standardized framework for 3D valve segmentation and shape representation. Towards this goal, this work integrates template-based medial modeling and multi-atlas label fusion techniques to automatically delineate and quantitatively describe aortic leaflet geometry in 3D echocardiographic (3DE) images, a challenging task that has been explored only to a limited extent. The method makes use of expert knowledge of aortic leaflet image appearance, generates segmentations with consistent topology, and establishes a shape-based coordinate system on the aortic leaflets that enables standardized automated measurements. In this study, the algorithm is evaluated on 11 3DE images of normal human aortic leaflets acquired at mid systole. The clinical relevance of the method is its ability to capture leaflet geometry in 3DE image data with minimal user interaction while producing consistent measurements of 3D aortic leaflet geometry. PMID:24505702
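
    Multi-atlas label fusion, one ingredient of the framework above, combines several registered atlas segmentations into a consensus labeling. The simplest variant, per-voxel majority voting, can be sketched as follows (the cited method uses weighted, intensity-similarity-based fusion rather than plain voting):

```python
import numpy as np

def majority_vote(atlas_labels):
    """Fuse registered atlas segmentations by per-voxel majority vote.
    atlas_labels has shape (n_atlases, *image_shape) with integer labels."""
    atlas_labels = np.asarray(atlas_labels)
    n_classes = atlas_labels.max() + 1
    # Count, for each class, how many atlases voted for it at each voxel.
    votes = np.stack([(atlas_labels == c).sum(axis=0) for c in range(n_classes)])
    return votes.argmax(axis=0)   # ties resolve to the lower label index

# Three toy atlas segmentations of a 1x5 'image' (0 = background, 1 = leaflet).
atlases = np.array([[0, 1, 1, 1, 0],
                    [0, 0, 1, 1, 0],
                    [1, 1, 1, 0, 0]])
fused = majority_vote(atlases)
```
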

  2. Automated localization of implanted seeds in 3D TRUS images used for prostate brachytherapy

    SciTech Connect

    Wei Zhouping; Gardi, Lori; Downey, Donal B.; Fenster, Aaron

    2006-07-15

    An algorithm is presented to localize implanted radioactive seeds in 3D ultrasound images for a dynamic intraoperative brachytherapy procedure. Segmentation of the seeds is difficult due to their small size and the relatively low quality of transrectal ultrasound (TRUS) images. In this paper, intraoperative seed segmentation in 3D TRUS images is achieved by subtracting the image acquired before the needle has been inserted from the image acquired after the seeds have been implanted. The seeds are then searched for within a 'local' space determined by the needle position and orientation, which are obtained from a needle segmentation algorithm. To test this approach, 3D TRUS images of agar and chicken tissue phantoms were obtained. Within these phantoms, dummy seeds were implanted. The seed locations determined by the seed segmentation algorithm were compared with those obtained from a volumetric cone-beam flat-panel micro-CT scanner and from human observers. Evaluation of the algorithm showed that the rms error in determining the seed locations was 0.98 mm in agar phantoms and 1.02 mm in chicken phantoms.
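
    The subtraction-based approach described above can be sketched as thresholding the post-implant minus pre-implant difference image inside a local region of interest derived from the segmented needle. A 2D toy sketch — the threshold, ROI and image values below are hypothetical:

```python
import numpy as np

def detect_seeds(pre, post, threshold, roi):
    """Locate seed candidates as bright spots in the (post - pre) difference
    image, searched only inside an ROI around the known needle track."""
    diff = post.astype(float) - pre.astype(float)
    mask = np.zeros_like(diff, dtype=bool)
    mask[roi] = True
    ys, xs = np.nonzero((diff > threshold) & mask)
    return list(zip(ys.tolist(), xs.tolist()))

pre = np.zeros((8, 8))
post = pre.copy()
post[3, 4] = 10.0                   # a seed appears after implantation
post[0, 0] = 10.0                   # bright artifact outside the needle ROI, ignored
roi = (slice(2, 6), slice(2, 6))    # 'local' search space from needle segmentation
seeds = detect_seeds(pre, post, threshold=5.0, roi=roi)
```
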

  3. Modeling and Measurement of 3D Deformation of Scoliotic Spine Using 2D X-ray Images

    E-print Network

    Leow, Wee Kheng

    Modeling and Measurement of 3D Deformation of Scoliotic Spine Using 2D X-ray Images. Hao Li, Wee Kheng Leow. To correct scoliotic deformation, the extents of 3D spinal deformation need to be measured. This paper studies the modeling and measurement of the scoliotic spine based on a 3D curve model.

  4. Automatic 3D Model Acquisition from Uncalibrated Image Sequences Reinhard Koch, Marc Pollefeys, and Luc Van Gool

    E-print Network

    Pollefeys, Marc

    Automatic 3D Model Acquisition from Uncalibrated Image Sequences. Reinhard Koch, Marc Pollefeys (esat.kuleuven.ac.be, Belgium). Abstract: In this paper the problem of obtaining 3D … range scanners and other 3D digitizing devices. These devices are often very expensive, require careful …

  5. SU-E-T-154: Establishment and Implement of 3D Image Guided Brachytherapy Planning System

    SciTech Connect

    Jiang, S; Zhao, S; Chen, Y; Li, Z; Li, P; Huang, Z; Yang, Z; Zhang, X

    2014-06-01

    Purpose: The inability to observe dose distributions intuitively is a limitation of existing 2D pre-implantation dose planning. Meanwhile, a navigation module is essential to improve the accuracy and efficiency of implantation. Hence, a 3D Image Guided Brachytherapy Planning System conducting dose planning and intra-operative navigation based on 3D multi-organ reconstruction was developed. Methods: Multiple organs, including the tumor, are reconstructed in one sweep of all the segmented images using the multi-organ reconstruction method. The reconstructed organ group establishes a three-dimensional visualized operative environment. The 3D dose maps of the three-dimensional conformal localized dose plan are calculated with the Monte Carlo method, while the corresponding isodose lines and isodose surfaces are displayed in a stereo view. Real-time intra-operative navigation is based on an electromagnetic tracking system (ETS) and the fusion of MRI and ultrasound images. Applying the least-squares method, coordinate registration between the 3D models and the patient is realized by the ETS, which is calibrated by a laser tracker. The system was validated on eight patients with prostate cancer; the navigation passed precision measurements in the laboratory. Results: The traditional marching cubes (MC) method reconstructs one organ at a time and assembles the results. Compared to MC, the presented multi-organ reconstruction method better preserves the integrity and connectivity of the reconstructed organs. The 3D conformal localized dose planning, realizing an 'exfoliation display' of different isodose surfaces, helps ensure that the dose distribution encompasses the nidus while avoiding injury to healthy tissue. During navigation, surgeons can observe instrument coordinates in real time using the ETS. After calibration, the error in needle position is less than 2.5 mm according to the experiments. Conclusion: The speed and quality of 3D reconstruction, the efficiency of dose planning, and the accuracy of navigation can all be improved simultaneously.
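    The least-squares coordinate registration between the 3D models and the patient can be sketched with the standard closed-form Kabsch/Procrustes solution over corresponding point pairs; this is a generic illustration under that assumption, not the system's actual implementation.

```python
import numpy as np

def rigid_register(src, dst):
    """Least-squares rigid transform (R, t) mapping src points onto dst,
    via SVD of the cross-covariance matrix (Kabsch/Procrustes)."""
    mu_s, mu_d = src.mean(0), dst.mean(0)
    H = (src - mu_s).T @ (dst - mu_d)       # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mu_d - R @ mu_s
    return R, t
```

    In practice the correspondences would come from fiducials tracked by the ETS and their counterparts in the 3D model.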

  6. Accurate 3D kinematic measurement of temporomandibular joint using X-ray fluoroscopic images

    NASA Astrophysics Data System (ADS)

    Yamazaki, Takaharu; Matsumoto, Akiko; Sugamoto, Kazuomi; Matsumoto, Ken; Kakimoto, Naoya; Yura, Yoshiaki

    2014-04-01

    Accurate measurement and analysis of the 3D kinematics of the temporomandibular joint (TMJ) is very important for assisting clinical diagnosis and treatment in prosthodontics, orthodontics, and oral surgery. This study presents a new 3D kinematic measurement technique for the TMJ using X-ray fluoroscopic images, which can easily obtain TMJ kinematic data during natural motion. In vivo kinematics of the TMJ (maxilla and mandible) is determined using feature-based 2D/3D registration, which matches bead silhouettes in fluoroscopic images to 3D surface bone models with beads. The 3D surface models of the maxilla and mandible with beads were created from CT scan data of the subject wearing a mouthpiece with seven strategically placed beads. To validate the accuracy of pose estimation for the maxilla and mandible, a computer simulation test was performed using five patterns of synthetic tantalum bead silhouette images. In the clinical application, dynamic movement during jaw opening and closing was recorded, and the relative pose of the mandible with respect to the maxilla was determined. The computer simulation test showed root mean square errors well below 1.0 mm and 1.0 degree. In the clinical application, during jaw opening from 0.0 to 36.8 degrees of rotation, the mandibular condyle exhibited 19.8 mm of anterior sliding relative to the maxillary articular fossa, and these values are clinically similar to previous reports. Consequently, the present technique appears suitable for 3D TMJ kinematic analysis.
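    A feature-based 2D/3D registration of the kind described above can be sketched as a reprojection-error minimization: find the rigid pose whose perspective projection of the known 3D bead positions best matches their detected 2D centroids. The function name and the simple pinhole model are assumptions for illustration, not the authors' implementation.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def estimate_pose(beads_3d, beads_2d, focal):
    """Estimate a rigid pose (rotation vector + translation) whose
    perspective projection best matches the 2D bead centroids."""
    def residual(p):
        R = Rotation.from_rotvec(p[:3]).as_matrix()
        cam = beads_3d @ R.T + p[3:6]            # model points in camera frame
        proj = focal * cam[:, :2] / cam[:, 2:3]  # pinhole projection
        return (proj - beads_2d).ravel()
    p0 = np.zeros(6)
    p0[5] = 100.0  # initial guess: model well in front of the camera
    return least_squares(residual, p0).x
```

    With seven beads this gives 14 residuals for 6 pose parameters, an over-determined and well-conditioned fit.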

  7. Using a wireless motion controller for 3D medical image catheter interactions

    NASA Astrophysics Data System (ADS)

    Vitanovski, Dime; Hahn, Dieter; Daum, Volker; Hornegger, Joachim

    2009-02-01

    State-of-the-art morphological imaging techniques usually provide high-resolution 3D images with a huge number of slices. In clinical practice, however, 2D slice-based examination is still the method of choice even for these large amounts of data. Providing intuitive interaction methods for specific 3D medical visualization applications is therefore a critical feature for clinical imaging applications. For the domain of catheter navigation and surgery planning, it is crucial to assist the physician with appropriate visualization techniques, such as 3D segmentation maps, fly-through cameras or virtual interaction approaches. There has been ongoing development and improvement of controllers that help users interact with 3D environments in the domain of computer games. These controllers are based on both motion and infrared sensors and are typically used to detect 3D position and orientation. We have investigated how a state-of-the-art wireless motion sensor controller (Wiimote), developed by Nintendo, can be used for catheter navigation and planning purposes. By default, the Wiimote controller measures only rough acceleration over a range of ±3g with 10% sensitivity, plus orientation. Therefore, a pose estimation algorithm was developed for computing accurate position and orientation in 3D space relative to 4 infrared LEDs. Current results show that for the translation it is possible to obtain a mean error of (0.38 cm, 0.41 cm, 4.94 cm) and for the rotation (0.16, 0.28), respectively. Within this paper we introduce a clinical prototype that allows steering of a virtual fly-through camera, attached to the catheter tip, with the Wii controller on the basis of a segmented vessel tree.

  8. Predicting the Incidence of Human Cataract through Retinal Imaging Technology.

    PubMed

    Horng, Chi-Ting; Sun, Han-Ying; Liu, Hsiang-Jui; Lue, Jiann-Hwa; Yeh, Shang-Min

    2015-01-01

    With the progress of science, technology and medicine, the proportion of elderly people in society has gradually increased over the years. Thus, the medical care and health issues of this population have drawn increasing attention. In particular, among the common medical problems of the elderly, the occurrence of cataracts has been widely observed. In this study, we developed retinal imaging technology by establishing a human eye module with ray tracing. Periodic hole arrays with different degrees were constructed on the anterior surface of the lens to emulate the eyesight decline caused by cataracts. Then, we successfully predicted the incidence of cataracts among people with myopia ranging from -3.0 D to -9.0 D. Results show that periodic hole arrays cause severe eyesight decline when they are centralized in the visual center. However, the wide distribution of these arrays on the anterior surface of the lens would not significantly affect one's eyesight. PMID:26610533

  9. Predicting the Incidence of Human Cataract through Retinal Imaging Technology

    PubMed Central

    Horng, Chi-Ting; Sun, Han-Ying; Liu, Hsiang-Jui; Lue, Jiann-Hwa; Yeh, Shang-Min

    2015-01-01

    With the progress of science, technology and medicine, the proportion of elderly people in society has gradually increased over the years. Thus, the medical care and health issues of this population have drawn increasing attention. In particular, among the common medical problems of the elderly, the occurrence of cataracts has been widely observed. In this study, we developed retinal imaging technology by establishing a human eye module with ray tracing. Periodic hole arrays with different degrees were constructed on the anterior surface of the lens to emulate the eyesight decline caused by cataracts. Then, we successfully predicted the incidence of cataracts among people with myopia ranging from -3.0 D to -9.0 D. Results show that periodic hole arrays cause severe eyesight decline when they are centralized in the visual center. However, the wide distribution of these arrays on the anterior surface of the lens would not significantly affect one's eyesight. PMID:26610533

  10. Imaging microscopic structures in pathological retinas using a flood-illumination adaptive optics retinal camera

    NASA Astrophysics Data System (ADS)

    Viard, Clément; Nakashima, Kiyoko; Lamory, Barbara; Pâques, Michel; Levecq, Xavier; Château, Nicolas

    2011-03-01

    This research is aimed at characterizing in vivo differences between healthy and pathological retinal tissues at the microscopic scale using a compact adaptive optics (AO) retinal camera. Tests were performed in 120 healthy eyes and 180 eyes suffering from 19 different pathological conditions, including age-related maculopathy (ARM), glaucoma and rare diseases such as inherited retinal dystrophies. Each patient was first examined using SD-OCT and infrared SLO. Retinal areas of 4°x4° were imaged using an AO flood-illumination retinal camera based on a large-stroke deformable mirror. Contrast was finally enhanced by registering and averaging raw images using classical algorithms. Cellular-resolution images could be obtained in most cases. In ARM, AO images revealed granular contents in drusen, which were invisible in SLO or OCT images, and allowed the observation of the cone mosaic between drusen. In glaucoma cases, visual field was correlated to changes in cone visibility. In inherited retinal dystrophies, AO helped to evaluate cone loss across the retina. Other microstructures, slightly larger in size than cones, were also visible in several retinas. AO provided potentially useful diagnostic and prognostic information in various diseases. In addition to cones, other microscopic structures revealed by AO images may also be of interest in monitoring retinal diseases.
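    The register-and-average contrast-enhancement step can be illustrated with integer-shift phase correlation: align each frame to the first, then average to boost SNR. A minimal sketch assuming numpy frames and circular (wrap-around) shifts, not the camera's actual registration algorithm.

```python
import numpy as np

def register_average(frames):
    """Align each frame to the first by integer-shift phase correlation,
    then average the aligned stack."""
    ref = frames[0].astype(float)
    F_ref = np.fft.fft2(ref)
    acc = ref.copy()
    for f in frames[1:]:
        F = np.fft.fft2(f)
        cross = F_ref * np.conj(F)
        # normalized cross-power spectrum -> delta at the shift
        corr = np.fft.ifft2(cross / (np.abs(cross) + 1e-12)).real
        dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
        acc += np.roll(f, (dy, dx), axis=(0, 1))
    return acc / len(frames)
```

    Subpixel registration (e.g. by interpolating the correlation peak) would be needed for true cellular-resolution averaging; this sketch stops at whole-pixel shifts.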

  11. Comparison of retinal image quality with spherical and customized aspheric intraocular lenses

    PubMed Central

    Guo, Huanqing; Goncharov, Alexander V.; Dainty, Chris

    2012-01-01

    We hypothesize that an intraocular lens (IOL) with higher-order aspheric surfaces customized for an individual eye provides improved retinal image quality, despite the misalignments that accompany cataract surgery. To test this hypothesis, ray-tracing eye models were used to investigate 10 designs of mono-focal single-lens IOLs with rotationally symmetric spherical, aspheric, and customized surfaces. Retinal image quality of pseudo-phakic eyes using these IOLs, together with individual variations in ocular and IOL parameters, is evaluated using a Monte Carlo analysis. We conclude that customized lenses should give improved retinal image quality despite the random errors resulting from IOL insertion. PMID:22574257
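    The structure of such a Monte Carlo analysis can be sketched as follows: sample IOL misalignments from assumed distributions and propagate them through an image-quality model. The misalignment spreads and the toy sensitivity coefficients below are illustrative assumptions, not values from the study; a real analysis would evaluate each sample with the ray-tracing eye model.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
# Illustrative surgical misalignment spreads (assumed)
decenter = rng.normal(0.0, 0.3, n)   # lens decentration, mm
tilt = rng.normal(0.0, 2.0, n)       # lens tilt, degrees
# Toy sensitivity model: RMS wavefront error grows with misalignment
rms_wfe = np.sqrt((0.05 * decenter) ** 2 + (0.02 * tilt) ** 2)  # microns
# Marechal approximation for the Strehl ratio at 550 nm
strehl = np.exp(-(2 * np.pi * rms_wfe / 0.55) ** 2)
print(f"median Strehl ~ {np.median(strehl):.2f}")
```

    Comparing the resulting Strehl distributions across the 10 IOL designs is what distinguishes a robust customized design from one that only performs well when perfectly aligned.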

  12. Dual array 3D electron cyclotron emission imaging at ASDEX Upgrade

    SciTech Connect

    Classen, I. G. J. Bogomolov, A. V.; Domier, C. W.; Luhmann, N. C.; Suttrop, W.; Boom, J. E.; Tobias, B. J.; Donné, A. J. H.

    2014-11-15

    In a major upgrade, the (2D) electron cyclotron emission imaging diagnostic (ECEI) at ASDEX Upgrade has been equipped with a second detector array, observing a different toroidal position in the plasma, to enable quasi-3D measurements of the electron temperature. The new system will measure a total of 288 channels, in two 2D arrays, toroidally separated by 40 cm. The two detector arrays observe the plasma through the same vacuum window, both under a slight toroidal angle. The majority of the field lines are observed by both arrays simultaneously, thereby enabling a direct measurement of the 3D properties of plasma instabilities like edge localized mode filaments.

  13. Audiovisual biofeedback improves image quality and reduces scan time for respiratory-gated 3D MRI

    NASA Astrophysics Data System (ADS)

    Lee, D.; Greer, P. B.; Arm, J.; Keall, P.; Kim, T.

    2014-03-01

    The purpose of this study was to test the hypothesis that audiovisual (AV) biofeedback can improve image quality and reduce scan time for respiratory-gated 3D thoracic MRI. For five healthy human subjects, respiratory motion guidance during MR scans was provided using an AV biofeedback system utilizing real-time respiratory motion signals. To investigate the improvement of respiratory-gated 3D MR images between free breathing (FB) and AV biofeedback (AV), each subject underwent two imaging sessions. Respiratory-related motion artifacts and imaging time were qualitatively evaluated, in addition to the reproducibility of external (abdominal) motion. In the results, 3D MR images acquired with AV biofeedback showed more anatomic information, such as a clearer distinction of the diaphragm and lung lobes and sharper organ boundaries. The scan time was reduced from 401±215 s in FB to 334±94 s in AV (p-value 0.36). The root mean square variation of the displacement and period of the abdominal motion was reduced from 0.4±0.22 cm and 2.8±2.5 s in FB to 0.1±0.15 cm and 0.9±1.3 s in AV (p-value of displacement <0.01; p-value of period 0.12). This study demonstrated that audiovisual biofeedback improves image quality and reduces scan time for respiratory-gated 3D MRI. These results suggest that AV biofeedback has the potential to be a useful motion management tool in medical imaging and radiation therapy procedures.
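    The root-mean-square variation used above to quantify breath-to-breath reproducibility can be computed as the RMS deviation of repeated per-cycle measurements (displacement or period) about their mean; a minimal sketch:

```python
import numpy as np

def rms_variation(values):
    """Root-mean-square deviation of repeated measurements about their
    mean, quantifying breath-to-breath reproducibility."""
    v = np.asarray(values, dtype=float)
    return float(np.sqrt(np.mean((v - v.mean()) ** 2)))
```

    A perfectly reproducible breathing trace gives 0; larger values indicate more cycle-to-cycle drift in depth or timing.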

  14. Simplified laser-speckle-imaging analysis method and its application to retinal blood flow

    E-print Network

    Duong, Timothy Q.

    Simplified laser-speckle-imaging analysis method and its application to retinal blood flow imaging. Laser speckle imaging (LSI) is widely used to study blood flow at high spatiotemporal resolution. Optical Society of America. OCIS codes: 170.6480, 170.3880, 170.4470. Laser speckle imaging (LSI) [1] can be used to image …

  15. Recovering 3D Shape with Absolute Size from Endoscope Images Using RBF Neural Network

    PubMed Central

    Tsuda, Seiya; Iwahori, Yuji; Bhuyan, M. K.; Woodham, Robert J.; Kasugai, Kunio

    2015-01-01

    Medical diagnosis judges the status of a polyp from its size and 3D shape as seen in a medical endoscope image. However, the physician makes this judgment empirically from the 2D endoscope image, and more accurate 3D shape recovery from the 2D image has been demanded to support it. As a fast recovery method, the VBW (Vogel-Breuß-Weickert) model has been proposed to recover 3D shape under the conditions of point-light-source illumination and perspective projection. However, the VBW model recovers only relative shape; the shape cannot be recovered at its exact size. Here, a shape-modification step is introduced to recover the exact shape from the VBW result. An RBF neural network (RBF-NN) provides the mapping between input and output: the input is the gradient parameters output by the VBW model for a generated sphere, and the output is the true gradient parameters of that sphere. Learning this mapping with the NN modifies the gradients, and the depth can then be recovered from the modified gradient parameters. The performance of the proposed approach is confirmed via computer simulation and a real experiment. PMID:25949235
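    The RBF-NN mapping step can be sketched with a Gaussian-kernel RBF network whose output weights are fit by least squares. The function names and the fixed-center scheme are illustrative assumptions; the paper does not specify this exact training procedure.

```python
import numpy as np

def _design(X, centers, sigma):
    """Gaussian RBF design matrix: Phi[i, j] = exp(-|x_i - c_j|^2 / (2 sigma^2))."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def rbf_fit(X, y, centers, sigma=0.5):
    """Fit the linear output weights of an RBF network with fixed centers
    by least squares (the input-to-output mapping described above)."""
    w, *_ = np.linalg.lstsq(_design(X, centers, sigma), y, rcond=None)
    return w

def rbf_predict(X, centers, w, sigma=0.5):
    return _design(X, centers, sigma) @ w
```

    Trained on (VBW-estimated, true) gradient-parameter pairs from a synthetic sphere, such a network would correct new VBW outputs before depth integration.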

  16. Defragmented image based autostereoscopic 3D displays with dynamic eye tracking

    NASA Astrophysics Data System (ADS)

    Kim, Sung-Kyu; Yoon, Ki-Hyuk; Yoon, Seon Kyu; Ju, Heongkyu

    2015-12-01

    We studied defragmented-image-based autostereoscopic 3D displays with dynamic eye tracking. Specifically, we examined the impact of parallax barrier (PB) angular orientation on their image quality. The 3D display system required fine adjustment of the PB angular orientation with respect to the display panel. This was critical both for image color balancing and for minimizing the image resolution mismatch between the horizontal and vertical directions. For evaluating the uniformity of image brightness, we applied optical ray-tracing simulations. The simulations took the effects of PB orientation misalignment into account. The simulation results were then compared with recorded experimental data. Our optimal simulated system produced significantly enhanced image uniformity around the sweet spots in the viewing zones. However, this was contradicted by the real experimental results. We offer a quantitative treatment of the illuminance uniformity of view images to estimate the misalignment of PB orientation, which could account for the brightness non-uniformity observed experimentally. Our study also shows that slight imperfection in the adjustment of the PB orientation, due to practical restrictions on adjustment accuracy, can induce substantial non-uniformity of view-image brightness. We find that image brightness non-uniformity depends critically on the misalignment of the PB angular orientation, as slight as ~0.01° in our system. This reveals that reducing the misalignment of the PB angular orientation from the order of 10^-2 to 10^-3 degrees can greatly improve the brightness uniformity.

  17. 3D image of protein visualization in a whole rice grain using an automatic precision microtome system

    NASA Astrophysics Data System (ADS)

    Ogawa, Yukiharu; Ohtani, Toshio; Sugiyama, Junichi; Hagiwara, Shoji; Tanaka, Kunisuke; Kudoh, Ken-ichi; Higuchi, Toshiro

    2000-05-01

    The 3D image formation technique using confocal microscopy allows visualization of the 3D chemical structure in small parts of a biological body. However, large-scale 3D structure, such as the distribution of chemical components throughout the whole body, has not been shown. To allow such large-scale visualization, a 3D internal analysis technique for biological bodies has been developed.

  18. 3D surface imaging for guidance in breast cancer radiotherapy: organs at risk

    NASA Astrophysics Data System (ADS)

    Alderliesten, Tanja; Betgen, Anja; van Vliet-Vroegindeweij, Corine; Remeijer, Peter

    2013-03-01

    Purpose: To evaluate the variability in heart position in deep-inspiration breath-hold (DIBH) radiotherapy for breast cancer when 3D surface imaging is used to monitor the depth of the breath hold during treatment. Materials and Methods: Ten patients who received DIBH radiotherapy after breast-conserving surgery (BCS) were included. Retrospectively, heart-based registrations were performed for cone-beam computed tomography (CBCT) to planning CT, and breast-surface registrations were performed for a 3D surface (two different regions of interest [ROIs]), captured concurrently with CBCT, to planning CT. The resulting setup errors were compared with linear regression analysis, and receiver operating characteristic (ROC) analysis was performed to investigate how well 3D surface imaging predicts 3D heart displacement. Further, the residual setup errors (systematic [Σ] and random [σ]) of the heart were estimated relative to the surface registrations. Results: When surface imaging [ROIleft-side; ROIboth-sides] is used for monitoring, the residual errors of the heart position are, in left-right: Σ=[0.36;0.12], σ=[0.16;0.14]; cranio-caudal: Σ=[0.54;0.54], σ=[0.28;0.31]; and anterior-posterior: Σ=[0.18;0.14], σ=[0.20;0.19] cm. Correlations between setup errors were R^2 = [0.23;0.73], [0.67;0.65], [0.65;0.73] in the left-right, cranio-caudal, and anterior-posterior directions, respectively. ROC analysis resulted in an area under the ROC curve of [0.82;0.78]. Conclusion: The use of ROIboth-sides provided promising results. However, considerable variability in heart position, particularly in the CC direction, is observed when 3D surface imaging is used for guidance in DIBH radiotherapy after BCS. Planning organ-at-risk volume margins should be used to account for this heart-position variability.
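    The systematic and random residual setup errors quoted above follow the usual population definitions in radiotherapy (standard deviation of per-patient mean errors for the systematic component; root mean square of per-patient standard deviations for the random component), which the abstract does not spell out; a minimal sketch under that assumption:

```python
import numpy as np

def setup_errors(per_patient_errors):
    """Group systematic error (SD of per-patient mean errors) and random
    error (RMS of per-patient SDs) from per-patient setup-error series."""
    means = np.array([np.mean(e) for e in per_patient_errors])
    sds = np.array([np.std(e, ddof=1) for e in per_patient_errors])
    systematic = np.std(means, ddof=1)
    random_err = np.sqrt(np.mean(sds ** 2))
    return systematic, random_err
```

    Applied per axis (left-right, cranio-caudal, anterior-posterior), this yields one Σ and one σ value per direction.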

  19. In vivo validation of a 3D ultrasound system for imaging the lateral ventricles of neonates

    NASA Astrophysics Data System (ADS)

    Kishimoto, J.; Fenster, A.; Chen, N.; Lee, D.; de Ribaupierre, S.

    2014-03-01

    Dilated lateral ventricles in neonates can be due to many different causes, such as brain loss or congenital malformation; however, the main cause is hydrocephalus, the accumulation of fluid within the ventricular system. Hydrocephalus can raise intracranial pressure, resulting in secondary brain damage, and up to 25% of patients with severely enlarged ventricles have epilepsy in later life. Ventricle enlargement is clinically monitored using 2D US through the fontanels. The sensitivity of 2D US to dilation is poor because it cannot provide accurate measurements of irregular volumes such as the ventricles, so most clinical evaluations are of a qualitative nature. We developed a 3D US system to image the cerebral ventricles of neonates within the confines of incubators that can be easily translated to more open environments. Ventricle volumes can be segmented from these images, giving a quantitative volumetric measurement of ventricle enlargement without moving the patient into an imaging facility. In this paper, we report on in vivo validation studies: 1) comparing 3D US ventricle volumes before and after clinically necessary interventions removing CSF, and 2) comparing 3D US ventricle volumes to those from MRI. Post-intervention ventricle volumes were less than pre-intervention measurements for all patients and all interventions. We found a high correlation (R = 0.97) between the difference in ventricle volume and the reported removed CSF, with the slope not significantly different from 1 (p < 0.05). Comparison between ventricle volumes from 3D US and MR images taken within 4 (±3.8) days of each other did not show a significant difference (p = 0.44, paired t-test).
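    The two validation analyses above (regression of volume difference against removed CSF, and a paired t-test of US versus MRI volumes) can be reproduced in outline with scipy; the numbers below are hypothetical placeholders, not the study's data.

```python
import numpy as np
from scipy import stats

# Hypothetical values for illustration only (not the study's measurements)
vol_diff = np.array([2.1, 3.4, 1.8, 4.0, 2.7])     # pre-minus-post US ventricle volume, cm^3
csf_removed = np.array([2.0, 3.5, 1.7, 4.2, 2.6])  # reported CSF removed, cm^3

# Linear regression: slope near 1 means volume change tracks removed CSF
fit = stats.linregress(csf_removed, vol_diff)
print(f"R = {fit.rvalue:.2f}, slope = {fit.slope:.2f}")

# Paired t-test comparing the two modalities on the same patients
us = np.array([10.2, 14.1, 8.9])    # hypothetical 3D US volumes, cm^3
mri = np.array([10.5, 13.8, 9.3])   # hypothetical MRI volumes, cm^3
t_stat, p_paired = stats.ttest_rel(us, mri)
```

    A paired test is the right choice here because each US volume has a matched MRI volume from the same patient.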

  20. Total 3D imaging of phase objects using defocusing microscopy: application to red blood cells

    E-print Network

    Roma, P M S; Amaral, F T; Agero, U; Mesquita, O N

    2014-01-01

    We present Defocusing Microscopy (DM), a bright-field optical microscopy technique able to perform total 3D imaging of transparent objects. By total 3D imaging we mean the determination of the actual shapes of the upper and lower surfaces of a phase object. We propose a new methodology using DM and apply it to red blood cells subject to different osmolality conditions: hypotonic, isotonic and hypertonic solutions. For each situation the shapes of the upper and lower cell surface-membranes (lipid bilayer/cytoskeleton) are completely recovered, displaying the deformation of RBC surfaces due to adhesion to the glass substrate. The axial resolution of our technique allowed us to image surface-membranes separated by distances as small as 300 nm. Finally, we determine the volume, surface area, sphericity index and refractive index of the RBCs for each osmotic condition.