Sample records for image registration framework

  1. Scalable High Performance Image Registration Framework by Unsupervised Deep Feature Representations Learning

    PubMed Central

    Wu, Guorong; Kim, Minjeong; Wang, Qian; Munsell, Brent C.

    2015-01-01

Feature selection is a critical step in deformable image registration. In particular, selecting the most discriminative features that accurately and concisely describe complex morphological patterns in image patches improves correspondence detection, which in turn improves image registration accuracy. Furthermore, since more and more imaging modalities are being invented to better identify morphological changes in medical imaging data, the development of deformable image registration methods that scale well to new image modalities or new image applications with little to no human intervention would have a significant impact on the medical image analysis community. To address these concerns, a learning-based image registration framework is proposed that uses deep learning to discover compact and highly discriminative features upon observed imaging data. Specifically, the proposed feature selection method uses a convolutional stacked auto-encoder to identify intrinsic deep feature representations in image patches. Since deep learning is an unsupervised learning method, no ground truth label knowledge is required. This makes the proposed feature selection method more flexible to new imaging modalities since feature representations can be directly learned from the observed imaging data in a very short amount of time. Using the LONI and ADNI imaging datasets, image registration performance was compared to two existing state-of-the-art deformable image registration methods that use handcrafted features. To demonstrate the scalability of the proposed image registration framework, image registration experiments were conducted on 7.0-tesla brain MR images. In all experiments, the results showed the new image registration framework consistently demonstrated more accurate registration results when compared to the state of the art. PMID:26552069

  2. Scalable High-Performance Image Registration Framework by Unsupervised Deep Feature Representations Learning.

    PubMed

    Wu, Guorong; Kim, Minjeong; Wang, Qian; Munsell, Brent C; Shen, Dinggang

    2016-07-01

Feature selection is a critical step in deformable image registration. In particular, selecting the most discriminative features that accurately and concisely describe complex morphological patterns in image patches improves correspondence detection, which in turn improves image registration accuracy. Furthermore, since more and more imaging modalities are being invented to better identify morphological changes in medical imaging data, the development of deformable image registration methods that scale well to new image modalities or new image applications with little to no human intervention would have a significant impact on the medical image analysis community. To address these concerns, a learning-based image registration framework is proposed that uses deep learning to discover compact and highly discriminative features upon observed imaging data. Specifically, the proposed feature selection method uses a convolutional stacked autoencoder to identify intrinsic deep feature representations in image patches. Since deep learning is an unsupervised learning method, no ground truth label knowledge is required. This makes the proposed feature selection method more flexible to new imaging modalities since feature representations can be directly learned from the observed imaging data in a very short amount of time. Using the LONI and ADNI imaging datasets, image registration performance was compared to two existing state-of-the-art deformable image registration methods that use handcrafted features. To demonstrate the scalability of the proposed image registration framework, image registration experiments were conducted on 7.0-T brain MR images. In all experiments, the results showed that the new image registration framework consistently demonstrated more accurate registration results when compared to the state of the art.
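As an illustrative aside (not the authors' code): the unsupervised patch-feature idea can be sketched with a single-layer, tied-weight autoencoder in NumPy. The paper's convolutional stacked autoencoder is a deeper variant of the same principle; all function names and hyperparameters below are assumptions.

```python
import numpy as np

def train_patch_autoencoder(patches, n_hidden=16, lr=0.1, epochs=200, seed=0):
    """Train a single-layer tied-weight autoencoder on flattened image patches.

    patches : (n_samples, n_features) array, values scaled to [0, 1].
    Returns encoder weights W and biases b; features = sigmoid(X @ W + b).
    """
    rng = np.random.default_rng(seed)
    n, d = patches.shape
    W = rng.normal(0, 0.1, (d, n_hidden))       # tied weights: decoder uses W.T
    b = np.zeros(n_hidden)                      # encoder bias
    c = np.zeros(d)                             # decoder bias
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    for _ in range(epochs):
        H = sigmoid(patches @ W + b)            # encode
        X_hat = sigmoid(H @ W.T + c)            # decode
        err = X_hat - patches                   # reconstruction error
        dZ2 = err * X_hat * (1 - X_hat)         # backprop through decoder sigmoid
        dZ1 = (dZ2 @ W) * H * (1 - H)           # backprop through encoder sigmoid
        gW = patches.T @ dZ1 + dZ2.T @ H        # gradient shared by tied weights
        W -= lr * gW / n
        b -= lr * dZ1.sum(0) / n
        c -= lr * dZ2.sum(0) / n
    return W, b

def encode(patches, W, b):
    """Deep-feature representation of patches under the learned encoder."""
    return 1.0 / (1.0 + np.exp(-(patches @ W + b)))
```

No labels are used anywhere, which is the point the abstract makes: the representation adapts to a new modality simply by retraining on its patches.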

  3. An object-oriented framework for medical image registration, fusion, and visualization.

    PubMed

    Zhu, Yang-Ming; Cochoff, Steven M

    2006-06-01

An object-oriented framework for image registration, fusion, and visualization was developed based on the classic model-view-controller paradigm. The framework employs many design patterns to facilitate legacy code reuse, manage software complexity, and enhance the maintainability and portability of the framework. Three sample applications built atop this framework illustrate its effectiveness: the first is for volume image grouping and re-sampling, the second for 2D registration and fusion, and the last for visualization of single images as well as registered volume images.
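A minimal model-view-controller sketch (illustrative Python only; the record does not specify a language, and these class names are invented for the example):

```python
class ImageModel:
    """Model: holds registration state and notifies attached views on change."""
    def __init__(self):
        self._views = []
        self.transform = (0.0, 0.0)     # e.g. a 2D translation

    def attach(self, view):
        self._views.append(view)

    def set_transform(self, tx, ty):
        self.transform = (tx, ty)
        for v in self._views:
            v.update(self)              # observer pattern: push change to views

class FusionView:
    """View: renders the current registration/fusion state."""
    def __init__(self):
        self.last_rendered = None

    def update(self, model):
        self.last_rendered = model.transform

class RegistrationController:
    """Controller: translates user input into model updates."""
    def __init__(self, model):
        self.model = model

    def nudge(self, dx, dy):
        tx, ty = self.model.transform
        self.model.set_transform(tx + dx, ty + dy)
```

The separation lets several views (fused overlay, checkerboard, 3D render) track one model, which is the maintainability argument the abstract makes.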

  4. Registration of 4D time-series of cardiac images with multichannel Diffeomorphic Demons.

    PubMed

    Peyrat, Jean-Marc; Delingette, Hervé; Sermesant, Maxime; Pennec, Xavier; Xu, Chenyang; Ayache, Nicholas

    2008-01-01

In this paper, we propose a generic framework for inter-subject non-linear registration of 4D time-series images. In this framework, spatio-temporal registration is defined by mapping trajectories of physical points, as opposed to spatial registration, which solely aims at mapping homologous points. First, we determine the trajectories we want to register in each sequence using a motion tracking algorithm based on the Diffeomorphic Demons algorithm. Then, we perform simultaneous pairwise registrations of corresponding time-points under the constraint of mapping the same physical points over time. We show that this trajectory registration can be formulated as a multichannel registration of 3D images. We solve it using the Diffeomorphic Demons algorithm extended to vector-valued 3D images. This framework is applied to the inter-subject non-linear registration of 4D cardiac CT sequences.
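The vector-valued extension can be sketched by summing the classic demons force over channels (a simplified, assumed form; not the authors' implementation, which also includes diffeomorphic smoothing and composition):

```python
import numpy as np

def multichannel_demons_force(fixed, moving, eps=1e-8):
    """One demons force-field update for vector-valued (multichannel) 2D images.

    fixed, moving : (C, H, W) arrays, one channel per time-point.
    Returns a (2, H, W) displacement update: the classic demons force
    diff * grad(F) / (|grad F|^2 + diff^2), accumulated over channels.
    """
    force = np.zeros((2,) + fixed.shape[1:])
    for f, m in zip(fixed, moving):
        diff = m - f
        gy, gx = np.gradient(f)                  # image gradient of the fixed channel
        denom = gx**2 + gy**2 + diff**2 + eps    # demons normalization
        force[0] += diff * gy / denom
        force[1] += diff * gx / denom
    return force
```

In the full algorithm this force is smoothed with a Gaussian and composed with the current field at each iteration, so that the same field is constrained by every time-point simultaneously.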

  5. Cross contrast multi-channel image registration using image synthesis for MR brain images.

    PubMed

    Chen, Min; Carass, Aaron; Jog, Amod; Lee, Junghoon; Roy, Snehashis; Prince, Jerry L

    2017-02-01

Multi-modal deformable registration is important for many medical image analysis tasks such as atlas alignment, image fusion, and distortion correction. Whereas a conventional method would register images with different modalities using modality independent features or information theoretic metrics such as mutual information, this paper presents a new framework that addresses the problem using a two-channel registration algorithm capable of using mono-modal similarity measures such as sum of squared differences or cross-correlation. To make it possible to use these same-modality measures, image synthesis is used to create proxy images for the opposite modality as well as intensity-normalized images from each of the two available images. The new deformable registration framework was evaluated by performing intra-subject deformation recovery, intra-subject boundary alignment, and inter-subject label transfer experiments using multi-contrast magnetic resonance brain imaging data. Three different multi-channel registration algorithms were evaluated, revealing that the framework is robust to the multi-channel deformable registration algorithm that is used. With a single exception, all results demonstrated improvements when compared against single channel registrations using the same algorithm with mutual information.

  6. dPIRPLE: a joint estimation framework for deformable registration and penalized-likelihood CT image reconstruction using prior images

    NASA Astrophysics Data System (ADS)

    Dang, H.; Wang, A. S.; Sussman, Marc S.; Siewerdsen, J. H.; Stayman, J. W.

    2014-09-01

Sequential imaging studies are conducted in many clinical scenarios. Prior images from previous studies contain a great deal of patient-specific anatomical information and can be used in conjunction with subsequent imaging acquisitions to maintain image quality while enabling radiation dose reduction (e.g., through sparse angular sampling, reduction in fluence, etc.). However, patient motion between images in such sequences results in misregistration between the prior image and current anatomy. Existing prior-image-based approaches often include only a simple rigid registration step that can be insufficient for capturing complex anatomical motion, introducing detrimental effects in subsequent image reconstruction. In this work, we propose a joint framework that estimates the 3D deformation between an unregistered prior image and the current anatomy (based on a subsequent data acquisition) and reconstructs the current anatomical image using a model-based reconstruction approach that includes regularization based on the deformed prior image. This framework is referred to as deformable prior image registration, penalized-likelihood estimation (dPIRPLE). Central to this framework is the inclusion of a 3D B-spline-based free-form-deformation model into the joint registration-reconstruction objective function. The proposed framework is solved using a maximization strategy whereby alternating updates to the registration parameters and image estimates are applied, allowing for improvements in both the registration and reconstruction throughout the optimization process. Cadaver experiments were conducted on a cone-beam CT testbench emulating a lung nodule surveillance scenario.
Superior reconstruction accuracy and image quality were demonstrated using the dPIRPLE algorithm as compared to more traditional reconstruction methods including filtered backprojection, penalized-likelihood estimation (PLE), prior image penalized-likelihood estimation (PIPLE) without registration, and prior image penalized-likelihood estimation with rigid registration of a prior image (PIRPLE) over a wide range of sampling sparsity and exposure levels.

  7. Learning intervention-induced deformations for non-rigid MR-CT registration and electrode localization in epilepsy patients

    PubMed Central

    Onofrey, John A.; Staib, Lawrence H.; Papademetris, Xenophon

    2015-01-01

    This paper describes a framework for learning a statistical model of non-rigid deformations induced by interventional procedures. We make use of this learned model to perform constrained non-rigid registration of pre-procedural and post-procedural imaging. We demonstrate results applying this framework to non-rigidly register post-surgical computed tomography (CT) brain images to pre-surgical magnetic resonance images (MRIs) of epilepsy patients who had intra-cranial electroencephalography electrodes surgically implanted. Deformations caused by this surgical procedure, imaging artifacts caused by the electrodes, and the use of multi-modal imaging data make non-rigid registration challenging. Our results show that the use of our proposed framework to constrain the non-rigid registration process results in significantly improved and more robust registration performance compared to using standard rigid and non-rigid registration methods. PMID:26900569
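A statistical deformation model of this kind is commonly built with PCA over training deformation fields; the sketch below shows the generic pattern (an assumed approach for illustration, not the paper's exact formulation): learn a low-dimensional subspace, then constrain a new deformation by projecting onto it.

```python
import numpy as np

def learn_deformation_model(deformations, n_modes=3):
    """Learn a PCA model of intervention-induced deformations.

    deformations : (n_samples, n_features) array, each row a flattened
    displacement field from a training case.
    Returns (mean, modes) with modes shaped (n_modes, n_features).
    """
    mean = deformations.mean(axis=0)
    # Right singular vectors of the centered data are the principal modes.
    _, _, Vt = np.linalg.svd(deformations - mean, full_matrices=False)
    return mean, Vt[:n_modes]

def project_deformation(field, mean, modes):
    """Constrain a candidate field to the learned subspace (statistical prior)."""
    coeffs = modes @ (field - mean)
    return mean + coeffs @ modes
```

During registration, the optimizer's current field would be projected (or softly penalized by its distance to the subspace) so that only deformations plausible under the learned surgical model are allowed.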

  8. Low-rank Atlas Image Analyses in the Presence of Pathologies

    PubMed Central

    Liu, Xiaoxiao; Niethammer, Marc; Kwitt, Roland; Singh, Nikhil; McCormick, Matt; Aylward, Stephen

    2015-01-01

We present a common framework, for registering images to an atlas and for forming an unbiased atlas, that tolerates the presence of pathologies such as tumors and traumatic brain injury lesions. This common framework is particularly useful when a sufficient number of protocol-matched scans from healthy subjects cannot be easily acquired for atlas formation and when the pathologies in a patient cause large appearance changes. Our framework combines a low-rank-plus-sparse image decomposition technique with an iterative, diffeomorphic, group-wise image registration method. At each iteration of image registration, the decomposition technique estimates a “healthy” version of each image as its low-rank component and estimates the pathologies in each image as its sparse component. The healthy version of each image is used for the next iteration of image registration. The low-rank and sparse estimates are refined as the image registrations iteratively improve. When that framework is applied to image-to-atlas registration, the low-rank image is registered to a pre-defined atlas, to establish correspondence that is independent of the pathologies in the sparse component of each image. Ultimately, image-to-atlas registrations can be used to define spatial priors for tissue segmentation and to map information across subjects. When that framework is applied to unbiased atlas formation, at each iteration, the average of the low-rank images from the patients is used as the atlas image for the next iteration, until convergence. Since each iteration’s atlas comprises low-rank components, it provides a population-consistent, pathology-free appearance. Evaluations of the proposed methodology are presented using synthetic data as well as simulated and clinical tumor MRI images from the brain tumor segmentation (BRATS) challenge from MICCAI 2012. PMID:26111390
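The decomposition step can be sketched with alternating singular-value and entrywise soft-thresholding - a simplified stand-in for a full RPCA solver; the thresholds `tau` and `lam` here are illustrative, not the paper's values:

```python
import numpy as np

def soft_threshold(x, t):
    """Entrywise shrinkage toward zero by t."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def low_rank_plus_sparse(D, tau=1.0, lam=0.1, n_iter=50):
    """Split a data matrix D (one vectorized image per row) into a low-rank
    part L (the "healthy" appearance shared across subjects) and a sparse
    part S (pathologies), by alternating minimization.
    """
    L = np.zeros_like(D)
    S = np.zeros_like(D)
    for _ in range(n_iter):
        U, sig, Vt = np.linalg.svd(D - S, full_matrices=False)
        L = (U * soft_threshold(sig, tau)) @ Vt   # singular value thresholding
        S = soft_threshold(D - L, lam)            # sparse residual
    return L, S
```

In the full framework, each iteration of this decomposition is interleaved with a group-wise diffeomorphic registration of the low-rank images.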

  9. Intraoperative Image-based Multiview 2D/3D Registration for Image-Guided Orthopaedic Surgery: Incorporation of Fiducial-Based C-Arm Tracking and GPU-Acceleration

    PubMed Central

    Armand, Mehran; Armiger, Robert S.; Kutzer, Michael D.; Basafa, Ehsan; Kazanzides, Peter; Taylor, Russell H.

    2012-01-01

    Intraoperative patient registration may significantly affect the outcome of image-guided surgery (IGS). Image-based registration approaches have several advantages over the currently dominant point-based direct contact methods and are used in some industry solutions in image-guided radiation therapy with fixed X-ray gantries. However, technical challenges including geometric calibration and computational cost have precluded their use with mobile C-arms for IGS. We propose a 2D/3D registration framework for intraoperative patient registration using a conventional mobile X-ray imager combining fiducial-based C-arm tracking and graphics processing unit (GPU)-acceleration. The two-stage framework 1) acquires X-ray images and estimates relative pose between the images using a custom-made in-image fiducial, and 2) estimates the patient pose using intensity-based 2D/3D registration. Experimental validations using a publicly available gold standard dataset, a plastic bone phantom and cadaveric specimens have been conducted. The mean target registration error (mTRE) was 0.34 ± 0.04 mm (success rate: 100%, registration time: 14.2 s) for the phantom with two images 90° apart, and 0.99 ± 0.41 mm (81%, 16.3 s) for the cadaveric specimen with images 58.5° apart. The experimental results showed the feasibility of the proposed registration framework as a practical alternative for IGS routines. PMID:22113773
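The reported mean target registration error (mTRE) metric can be computed as follows (a hypothetical helper for illustration; 4x4 homogeneous transforms and matched 3D target points are assumed):

```python
import numpy as np

def mean_target_registration_error(targets, T_est, T_ref):
    """mTRE: mean distance between target points mapped by an estimated
    transform and by the gold-standard transform.

    targets : (N, 3) target points in the source frame.
    T_est, T_ref : 4x4 homogeneous transform matrices.
    """
    P = np.hstack([targets, np.ones((len(targets), 1))])   # homogeneous coords
    diff = (P @ T_est.T - P @ T_ref.T)[:, :3]
    return float(np.mean(np.linalg.norm(diff, axis=1)))
```

Unlike a fiducial error, the targets here are chosen at clinically relevant locations away from the markers, which is why mTRE is the standard figure of merit for 2D/3D registration studies.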

  10. A two-step framework for the registration of HE stained and FTIR images

    NASA Astrophysics Data System (ADS)

Peñaranda, Francisco; Naranjo, Valery; Verdú, Rafael; Lloyd, Gavin R.; Nallala, Jayakrupakar; Stone, Nick

    2016-03-01

FTIR spectroscopy is an emerging technology with high potential for cancer diagnosis but with particular physical phenomena that require special processing. Little work has been done in the field with the aim of registering hyperspectral Fourier-Transform Infrared (FTIR) spectroscopic images and Hematoxylin and Eosin (HE) stained histological images of contiguous slices of tissue. This registration is necessary to transfer the location of relevant structures that the pathologist may identify in the gold standard HE images. A two-step registration framework is presented where a representative gray image extracted from the FTIR hypercube is used as an input. This representative image, which must have a spatial contrast as similar as possible to a gray image obtained from the HE image, is calculated through the spectrum variation in the fingerprint region. In the first step of the registration algorithm a similarity transformation is estimated from interest points, which are automatically detected by the popular SURF algorithm. In the second stage, a variational registration framework defined in the frequency domain compensates for local anatomical variations between both images. After proper tuning of a few parameters, the proposed registration framework works in an automated way. The method was tested on 7 samples of colon tissue in different stages of cancer. Very promising qualitative and quantitative results were obtained (a mean correlation ratio of 92.16% with a standard deviation of 3.10%).
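The first-step similarity transformation has a closed-form least-squares (Umeyama/Procrustes) solution once matched interest points are available; the sketch below assumes the SURF correspondences are already given (detection and matching are omitted):

```python
import numpy as np

def estimate_similarity_transform(src, dst):
    """Least-squares similarity transform (scale s, rotation R, translation t)
    mapping 2D points src -> dst, i.e. dst ≈ s * R @ src + t.
    """
    mu_s, mu_d = src.mean(0), dst.mean(0)
    A, B = src - mu_s, dst - mu_d
    # Cross-covariance between the centered point sets.
    U, sig, Vt = np.linalg.svd(B.T @ A / len(src))
    d = np.sign(np.linalg.det(U @ Vt))          # guard against reflections
    D = np.diag([1.0, d])
    R = U @ D @ Vt
    s = np.trace(np.diag(sig) @ D) / A.var(axis=0).sum()
    t = mu_d - s * R @ mu_s
    return s, R, t
```

In practice this closed-form estimate would be wrapped in RANSAC to reject SURF mismatches before the second, variational stage refines local deformations.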

  11. A Log-Euclidean polyaffine registration for articulated structures in medical images.

    PubMed

    Martín-Fernández, Miguel Angel; Martín-Fernández, Marcos; Alberola-López, Carlos

    2009-01-01

In this paper we generalize the Log-Euclidean polyaffine registration framework of Arsigny et al. to deal with articulated structures. This framework has very useful properties, as it guarantees the invertibility of smooth geometric transformations. In articulated registration a skeleton model is defined for rigid structures such as bones. The final transformation is affine for the bones and elastic for other tissues in the image. We extend Arsigny et al.'s method to deal with locally-affine registration of pairs of wires. This makes it possible to apply the registration framework to articulated structures. In this context, the design of the weighting functions, which merge the affine transformations defined for each pair of wires, has a great impact not only on the final result of the registration algorithm, but also on the invertibility of the global elastic transformation. Several experiments, using both synthetic images and hand radiographs, are also presented.
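The Log-Euclidean fusion at the heart of the framework blends the matrix logarithms of the component affine transforms, which is what preserves invertibility; a sketch for 2D homogeneous matrices (illustrative, requires SciPy):

```python
import numpy as np
from scipy.linalg import expm, logm

def polyaffine_fuse(affines, weights):
    """Log-Euclidean fusion of 2D affine transforms given as 3x3 homogeneous
    matrices: exp of the weighted sum of matrix logarithms.

    weights : nonnegative values summing to 1 (in the full framework they
    vary spatially, one weighting function per bone/wire).
    """
    L = sum(w * logm(A) for A, w in zip(affines, weights))
    return expm(np.real(L))
```

Because the fusion happens in the log domain, the blended transform stays smooth and invertible even where the per-bone affines disagree, unlike a direct weighted average of the matrices.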

  12. Deep Adaptive Log-Demons: Diffeomorphic Image Registration with Very Large Deformations

    PubMed Central

Zhao, Liya; Jia, Kebin

    2015-01-01

This paper proposes a new framework for capturing large and complex deformations in image registration. Traditionally, this challenging problem relies first on a preregistration, usually an affine matrix containing rotation, scale, and translation, and afterwards on a nonrigid transformation. In the preregistration, the directly calculated affine matrix, obtained from limited pixel information, may misregister when large biases exist, severely misleading the subsequent registration. To address this problem, for two-dimensional (2D) images, the two-layer deep adaptive registration framework proposed in this paper first accurately classifies the rotation parameter through multilayer convolutional neural networks (CNNs) and then identifies the scale and translation parameters separately. For three-dimensional (3D) images, the affine matrix is located through feature correspondences by triplanar 2D CNNs. Then deformation removal is done iteratively through preregistration and demons registration. By comparison with state-of-the-art registration frameworks, our method gains more accurate registration results on both synthetic and real datasets. Besides, principal component analysis (PCA) is combined with correlation measures such as Pearson and Spearman to form new similarity standards in 2D and 3D registration. Experimental results also show faster convergence speed. PMID:26120356

  13. Deep Adaptive Log-Demons: Diffeomorphic Image Registration with Very Large Deformations.

    PubMed

    Zhao, Liya; Jia, Kebin

    2015-01-01

This paper proposes a new framework for capturing large and complex deformations in image registration. Traditionally, this challenging problem relies first on a preregistration, usually an affine matrix containing rotation, scale, and translation, and afterwards on a nonrigid transformation. In the preregistration, the directly calculated affine matrix, obtained from limited pixel information, may misregister when large biases exist, severely misleading the subsequent registration. To address this problem, for two-dimensional (2D) images, the two-layer deep adaptive registration framework proposed in this paper first accurately classifies the rotation parameter through multilayer convolutional neural networks (CNNs) and then identifies the scale and translation parameters separately. For three-dimensional (3D) images, the affine matrix is located through feature correspondences by triplanar 2D CNNs. Then deformation removal is done iteratively through preregistration and demons registration. By comparison with state-of-the-art registration frameworks, our method gains more accurate registration results on both synthetic and real datasets. Besides, principal component analysis (PCA) is combined with correlation measures such as Pearson and Spearman to form new similarity standards in 2D and 3D registration. Experimental results also show faster convergence speed.
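One plausible reading of the PCA-plus-correlation similarity standard (an assumed formulation for illustration; the patch size and component count below are arbitrary, not the paper's): project patches of both images onto the principal components of the fixed image's patches, then correlate the coefficient vectors with Pearson and Spearman.

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr

def extract_patches(img, size=4):
    """Non-overlapping size x size patches, flattened one per row."""
    H, W = img.shape
    return np.array([img[i:i + size, j:j + size].ravel()
                     for i in range(0, H - size + 1, size)
                     for j in range(0, W - size + 1, size)])

def pca_correlation_similarity(img_a, img_b, n_components=4):
    """Average of Pearson and Spearman correlation between PCA-projected
    patch coefficients of the two images (1.0 = identical)."""
    Pa, Pb = extract_patches(img_a), extract_patches(img_b)
    mean = Pa.mean(axis=0)
    _, _, Vt = np.linalg.svd(Pa - mean, full_matrices=False)
    basis = Vt[:n_components]                 # principal directions of img_a patches
    Fa = (Pa - mean) @ basis.T
    Fb = (Pb - mean) @ basis.T
    pear = pearsonr(Fa.ravel(), Fb.ravel())[0]
    spear = spearmanr(Fa.ravel(), Fb.ravel())[0]
    return 0.5 * (pear + spear)
```

The PCA projection discards intensity noise before correlating, and the rank-based Spearman term adds robustness to monotonic intensity differences between the images.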

  14. An ITK framework for deterministic global optimization for medical image registration

    NASA Astrophysics Data System (ADS)

    Dru, Florence; Wachowiak, Mark P.; Peters, Terry M.

    2006-03-01

    Similarity metric optimization is an essential step in intensity-based rigid and nonrigid medical image registration. For clinical applications, such as image guidance of minimally invasive procedures, registration accuracy and efficiency are prime considerations. In addition, clinical utility is enhanced when registration is integrated into image analysis and visualization frameworks, such as the popular Insight Toolkit (ITK). ITK is an open source software environment increasingly used to aid the development, testing, and integration of new imaging algorithms. In this paper, we present a new ITK-based implementation of the DIRECT (Dividing Rectangles) deterministic global optimization algorithm for medical image registration. Previously, it has been shown that DIRECT improves the capture range and accuracy for rigid registration. Our ITK class also contains enhancements over the original DIRECT algorithm by improving stopping criteria, adaptively adjusting a locality parameter, and by incorporating Powell's method for local refinement. 3D-3D registration experiments with ground-truth brain volumes and clinical cardiac volumes show that combining DIRECT with Powell's method improves registration accuracy over Powell's method used alone, is less sensitive to initial misorientation errors, and, with the new stopping criteria, facilitates adequate exploration of the search space without expending expensive iterations on non-improving function evaluations. Finally, in this framework, a new parallel implementation for computing mutual information is presented, resulting in near-linear speedup with two processors.
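The Powell local-refinement stage can be sketched with SciPy (illustrative only, not the ITK class described in the paper; in the full framework DIRECT's global search would supply the starting point `x0`):

```python
import numpy as np
from scipy.optimize import minimize
from scipy.ndimage import shift as nd_shift

def register_translation_powell(fixed, moving, x0=(0.0, 0.0)):
    """Refine a 2D translation with Powell's derivative-free method,
    minimizing the sum of squared differences (SSD) between images.
    Returns the translation to apply to `moving` to match `fixed`.
    """
    def ssd(t):
        warped = nd_shift(moving, t, order=1, mode='nearest')
        return float(np.sum((warped - fixed) ** 2))
    res = minimize(ssd, np.asarray(x0, dtype=float), method='Powell')
    return res.x
```

Powell needs no metric gradients, which is why it pairs well with sampled similarity measures like mutual information; DIRECT's role in the paper is to place `x0` inside the correct capture basin before this refinement runs.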

  15. Intra-operative fiducial-based CT/fluoroscope image registration framework for image-guided robot-assisted joint fracture surgery.

    PubMed

    Dagnino, Giulio; Georgilas, Ioannis; Morad, Samir; Gibbons, Peter; Tarassoli, Payam; Atkins, Roger; Dogramadzi, Sanja

    2017-08-01

Joint fractures must be accurately reduced while minimising soft tissue damage to avoid negative surgical outcomes. To this end, we have developed the RAFS surgical system, which allows the percutaneous reduction of intra-articular fractures and provides intra-operative real-time 3D image guidance to the surgeon. Earlier experiments showed the effectiveness of the RAFS system on phantoms, but also key issues that precluded its use in a clinical application. This work proposes a redesign of the RAFS's navigation system overcoming the earlier version's issues, aiming to move the RAFS system into a surgical environment. The navigation system is improved through an image registration framework allowing the intra-operative registration between pre-operative CT images and intra-operative fluoroscopic images of a fractured bone using a custom-made fiducial marker. The objective of the registration is to estimate the relative pose between a bone fragment and an orthopaedic manipulation pin inserted into it intra-operatively. The actual pose of the bone fragment can be updated in real time using an optical tracker, enabling the image guidance. Experiments on phantoms and cadavers demonstrated the accuracy and reliability of the registration framework, showing a reduction accuracy (sTRE) of about [Formula: see text] (phantom) and [Formula: see text] (cadavers). Four distal femur fractures were successfully reduced in cadaveric specimens using the improved navigation system and the RAFS system following the new clinical workflow (reduction error [Formula: see text], [Formula: see text]). Experiments showed the feasibility of the image registration framework. It was successfully integrated into the navigation system, allowing the use of the RAFS system in a realistic surgical application.

  16. SU-E-J-08: A Hybrid Three Dimensional Registration Framework for Image-Guided Accurate Radiotherapy System ARTS-IGRT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, Q; School of Nuclear Science and Technology, Hefei, Anhui; Anhui Medical University, Hefei, Anhui

Purpose: The purpose of this work was to develop a registration framework, based on the software platform of ARTS-IGRT and implemented in C++ with ITK libraries, to register CT images and CBCT images. ARTS-IGRT is part of our self-developed accurate radiation planning system ARTS. Methods: Mutual information (MI) registration treats each voxel equally; in practice, different voxels, even those with the same intensity, should be treated differently during registration. Based on importance values calculated from self-information, a similarity measure was proposed that combines the spatial importance of a voxel with MI (S-MI). For lung registration, first, a global alignment method was adopted to minimize the margin error and align the two images as a whole. The result obtained at the low resolution level was then interpolated to become the initial condition for the higher resolution computation. Second, the new similarity measure S-MI was used to quantify how close the two input image volumes were to each other. Finally, the Demons model was applied to compute the deformable map. Results: The registration tools were tested on head-neck and lung images; the average region was 128×128×49. The rigid registration took approximately 2 min and converged 10% faster than the traditional MI algorithm; the accuracy reached 1 mm for head-neck images. For lung images, the improved symmetric Demons registration completed in an average of 5 min using a 2.4 GHz dual-core CPU. Conclusion: A registration framework was developed to correct patient setup by registering the planning CT volume data to the daily reconstructed 3D CBCT data. The experiments showed that the spatial MI algorithm can be adopted for head-neck images. The improved Demons deformable registration was more suitable for lung images, and rigid alignment should be applied before deformable registration to obtain a more accurate result.
Supported by National Natural Science Foundation of China (No. 81101132) and Natural Science Foundation of Anhui Province (No. 11040606Q55).
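Plain mutual information from a joint intensity histogram, the baseline that the abstract's S-MI measure extends with per-voxel spatial weights, can be sketched as:

```python
import numpy as np

def mutual_information(img_a, img_b, bins=32):
    """Mutual information between two images from their joint histogram.

    MI = sum_xy p(x, y) * log( p(x, y) / (p(x) * p(y)) ), in nats.
    """
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = joint / joint.sum()                 # joint intensity distribution
    px = pxy.sum(axis=1, keepdims=True)       # marginal of img_a
    py = pxy.sum(axis=0, keepdims=True)       # marginal of img_b
    nz = pxy > 0                              # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))
```

S-MI would additionally weight each voxel's histogram contribution by an importance value derived from its self-information, so that rare, informative intensities dominate the measure rather than large homogeneous regions.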

  17. Geodesic active fields--a geometric framework for image registration.

    PubMed

    Zosso, Dominique; Bresson, Xavier; Thiran, Jean-Philippe

    2011-05-01

In this paper we present a novel geometric framework called geodesic active fields for general image registration. In image registration, one looks for the underlying deformation field that best maps one image onto another. This is a classic ill-posed inverse problem, which is usually solved by adding a regularization term. Here, we propose a multiplicative coupling between the registration term and the regularization term, which turns out to be equivalent to embedding the deformation field in a weighted minimal surface problem. Then, the deformation field is driven by a minimization flow toward a harmonic map corresponding to the solution of the registration problem. This proposed approach for registration shares close similarities with the well-known geodesic active contours model in image segmentation, where the segmentation term (the edge detector function) is coupled with the regularization term (the length functional) via multiplication as well. As a matter of fact, our proposed geometric model is the exact mathematical generalization to vector fields of the weighted length problem for curves and surfaces introduced by Caselles-Kimmel-Sapiro. The energy of the deformation field is measured with the Polyakov energy weighted by a suitable image distance, borrowed from standard registration models. We investigate three different weighting functions: the squared error and the approximated absolute error for monomodal images, and the local joint entropy for multimodal images. Compared to specialized state-of-the-art methods tailored for specific applications, our geometric framework makes several important contributions. First, our general formulation works on any parametrizable, smooth and differentiable surface, including nonflat and multiscale images; in the latter case, multiscale images are registered at all scales simultaneously, and the relations between space and scale are intrinsically accounted for. Second, this method is, to the best of our knowledge, the first reparametrization-invariant registration method introduced in the literature. Third, the multiplicative coupling between the registration term, i.e., the local image discrepancy, and the regularization term naturally results in a data-dependent tuning of the regularization strength. Finally, by choosing the metric on the deformation field one can freely interpolate between classic Gaussian and more interesting anisotropic, TV-like regularization.
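The multiplicatively coupled energy described above can be written schematically as follows (the symbols here are assumptions chosen for illustration, not the paper's exact notation):

```latex
% Geodesic active fields: the image discrepancy f >= 0 multiplies the area
% element of the embedded deformation field (a weighted minimal surface).
% I_f, I_m : fixed and moving images; u : deformation field; g : induced metric.
E[u] \;=\; \int_{\Omega} f\bigl(I_f(x),\, I_m(x + u(x))\bigr)\,\sqrt{\det g(x)}\;\mathrm{d}x
```

Setting f to a constant recovers the pure Polyakov/minimal-surface regularizer, while the data term f locally scales the smoothing, which is exactly the data-dependent regularization strength claimed as the third contribution.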

  18. A framework for automatic creation of gold-standard rigid 3D-2D registration datasets.

    PubMed

    Madan, Hennadii; Pernuš, Franjo; Likar, Boštjan; Špiclin, Žiga

    2017-02-01

Advanced image-guided medical procedures incorporate 2D intra-interventional information into pre-interventional 3D image and plan of the procedure through 3D/2D image registration (32R). To enter clinical use, and even for publication purposes, novel and existing 32R methods have to be rigorously validated. The performance of a 32R method can be estimated by comparing it to an accurate reference or gold standard method (usually based on fiducial markers) on the same set of images (gold standard dataset). Objective validation and comparison of methods are possible only if evaluation methodology is standardized, and the gold standard dataset is made publicly available. Currently, very few such datasets exist and only one contains images of multiple patients acquired during a procedure. To encourage the creation of gold standard 32R datasets, we propose an automatic framework. The framework is based on rigid registration of fiducial markers. The main novelty is spatial grouping of fiducial markers on the carrier device, which enables automatic marker localization and identification across the 3D and 2D images. The proposed framework was demonstrated on clinical angiograms of 20 patients. Rigid 32R computed by the framework was more accurate than that obtained manually, with the respective target registration error below 0.027 mm compared to 0.040 mm. The framework is applicable for gold standard setup on any rigid anatomy, provided that the acquired images contain spatially grouped fiducial markers. The gold standard datasets and software will be made publicly available.
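The rigid registration of matched fiducial markers has the closed-form Kabsch solution sketched below (illustrative; the paper's main novelty, automatic marker localization and identification, is assumed already done):

```python
import numpy as np

def rigid_register_fiducials(src, dst):
    """Least-squares rigid transform (R, t) aligning matched 3D fiducials,
    so that dst ≈ src @ R.T + t (Kabsch algorithm)."""
    mu_s, mu_d = src.mean(0), dst.mean(0)
    H = (src - mu_s).T @ (dst - mu_d)          # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # prevent an improper (reflected) R
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mu_d - R @ mu_s
    return R, t
```

Because the fit is exact up to localization noise, the resulting transform can serve as the gold standard against which intensity-based 32R methods are scored.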

  19. Multiscale multimodal fusion of histological and MRI volumes for characterization of lung inflammation

    NASA Astrophysics Data System (ADS)

    Rusu, Mirabela; Wang, Haibo; Golden, Thea; Gow, Andrew; Madabhushi, Anant

    2013-03-01

Mouse lung models facilitate the investigation of conditions such as chronic inflammation that are associated with common lung diseases. The multi-scale manifestation of lung inflammation prompted us to use multi-scale imaging - both in vivo and ex vivo MRI along with ex vivo histology - to study it in a new, quantitative way. Some imaging modalities, such as MRI, are non-invasive and capture macroscopic features of the pathology, while others, e.g. ex vivo histology, depict detailed structures. Registering such multi-modal data into the same spatial coordinates allows the construction of a comprehensive 3D model to enable the multi-scale study of diseases. Moreover, it may facilitate the identification and definition of quantitative in vivo imaging signatures for diseases and pathologic processes. We introduce a quantitative image-analytic framework to integrate in vivo MR images of the entire mouse with ex vivo histology of the lung alone, using ex vivo MRI of the lung as a conduit to facilitate their co-registration. In our framework, we first align the MR images by registering the in vivo and ex vivo MRI of the lung using an interactive rigid registration approach. Then we reconstruct the 3D volume of the ex vivo histological specimen by efficient groupwise registration of the 2D slices. The resulting 3D histologic volume is subsequently registered to the MRI volumes by interactive rigid registration - directly to the ex vivo MRI, and implicitly to the in vivo MRI. Qualitative evaluation of the registration framework was performed by comparing airway tree structures in ex vivo MRI and ex vivo histology, where airways are visible and may be annotated. We present a use case for evaluation of our co-registration framework in the context of studying chronic inflammation in a diseased mouse.

  20. A gaussian mixture + demons deformable registration method for cone-beam CT-guided robotic transoral base-of-tongue surgery

    NASA Astrophysics Data System (ADS)

    Reaungamornrat, S.; Liu, W. P.; Schafer, S.; Otake, Y.; Nithiananthan, S.; Uneri, A.; Richmon, J.; Sorger, J.; Siewerdsen, J. H.; Taylor, R. H.

    2013-03-01

Purpose: An increasingly popular minimally invasive approach to resection of oropharyngeal / base-of-tongue cancer is made possible by a transoral technique conducted with the assistance of a surgical robot. However, the highly deformed surgical setup (neck flexed, mouth open, and tongue retracted) compared to the typical patient orientation in preoperative images poses a challenge to guidance and localization of the tumor target and adjacent critical anatomy. Intraoperative cone-beam CT (CBCT) can account for such deformation, but due to the low soft-tissue contrast of CBCT images, direct localization of the target and critical tissues in CBCT images can be difficult. Such structures may be more readily delineated in preoperative CT or MR images, so a method to deformably register such information to intraoperative CBCT could offer significant value. This paper details the initial implementation of a deformable registration framework to align preoperative images with the deformed intraoperative scene and gives a preliminary evaluation of the geometric accuracy of registration in CBCT-guided TORS. Method: The deformable registration aligns preoperative CT or MR to intraoperative CBCT by integrating two established approaches. The volume of interest is first segmented (specifically, the region of the tongue from the tip to the hyoid), and a Gaussian mixture (GM) model of surface point clouds is used for rigid initialization (GMRigid) as well as an initial deformation (GMNonRigid). Next, refinement of the registration is performed using the Demons algorithm applied to distance transformations of the GM-registered and CBCT volumes. The registration accuracy of the framework was quantified in preliminary studies using a cadaver emulating preoperative and intraoperative setups. Geometric accuracy of registration was quantified in terms of target registration error (TRE) and surface distance error.
Result: With each step of the registration process, the framework demonstrated improved registration, achieving mean TRE of 3.0 mm following the GM rigid step, 1.9 mm following the GM nonrigid step, and 1.5 mm at the output of the registration process. Analysis of surface distance demonstrated a corresponding improvement of 2.2, 0.4, and 0.3 mm, respectively. The evaluation of registration error revealed accurate alignment in the region of interest for base-of-tongue robotic surgery, owing to point-set selection in the GM steps and refinement in the deep aspect of the tongue in the Demons step. Conclusions: A promising framework has been developed for CBCT-guided TORS in which intraoperative CBCT provides a basis for registration of preoperative images to the highly deformed intraoperative setup. The registration framework is invariant to imaging modality (accommodating preoperative CT or MR) and is robust against CBCT intensity variations and artifacts, provided a corresponding segmentation of the volume of interest. The approach could facilitate overlay of preoperative planning data directly in stereo-endoscopic video in support of CBCT-guided TORS.
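
The refinement stage named above is the classic demons scheme. For illustration, here is Thirion's original per-voxel demons force in 2D (a generic sketch, not the authors' distance-transform variant; in practice the field is smoothed between iterations and composed over many steps):

```python
import numpy as np

def demons_step(fixed, moving, eps=1e-9):
    """One Thirion demons update: a displacement field pushing `moving`
    toward `fixed`, driven by the intensity difference and the fixed-image
    gradient. Images are 2D float arrays; returns (dy, dx) fields."""
    diff = moving - fixed
    gy, gx = np.gradient(fixed)                 # fixed-image gradient
    denom = gx**2 + gy**2 + diff**2 + eps       # stabilized demons denominator
    dy = diff * gy / denom
    dx = diff * gx / denom
    return dy, dx
```

On a unit-slope intensity ramp shifted by one pixel, this step yields a uniform half-pixel displacement along the ramp direction, which is why demons is iterated rather than applied once.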

  1. Scalable Joint Segmentation and Registration Framework for Infant Brain Images.

    PubMed

    Dong, Pei; Wang, Li; Lin, Weili; Shen, Dinggang; Wu, Guorong

    2017-03-15

The first year of life is the most dynamic and perhaps the most critical phase of postnatal brain development. The ability to accurately measure structural changes is critical in early brain development studies, which rely heavily on the performance of image segmentation and registration techniques. However, either infant image segmentation or registration, if deployed independently, encounters far more challenges than segmentation/registration of adult brains, due to the dynamic appearance changes that accompany rapid brain development. In fact, image segmentation and registration of infant images can assist each other to overcome the above challenges by using the growth trajectories (i.e., temporal correspondences) learned from a large set of training subjects with complete longitudinal data. Specifically, a one-year-old image with ground-truth tissue segmentation can first be set as the reference domain. Then, to register the infant image of a new subject at an earlier age, we can estimate its tissue probability maps, i.e., with a sparse patch-based multi-atlas label fusion technique, where only the training images at the respective age are considered as atlases since they have similar image appearance. Next, these probability maps can be fused as a good initialization to guide the level set segmentation. Thus, image registration between the new infant image and the reference image avoids the difficulty of appearance changes by establishing correspondences upon the reasonably segmented images. Importantly, the segmentation of the new infant image can be further enhanced by propagating the much more reliable label fusion heuristics at the reference domain to the corresponding locations of the new infant image via the learned growth trajectories, which brings image segmentation and registration to assist each other.
It is worth noting that our joint segmentation and registration framework is also flexible enough to handle the registration of any two infant images, even with a significant age gap in the first year of life, by linking their joint segmentation and registration through the reference domain. Thus, our proposed joint segmentation and registration method is scalable to various registration tasks in early brain development studies. Promising segmentation and registration results have been achieved for infant brain MR images aged from 2 weeks to 1 year, indicating the applicability of our method in early brain development studies.
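
The patch-based multi-atlas label fusion mentioned above can be sketched in its simplest weighted-voting form, where each atlas patch votes for its center label with an intensity-similarity weight (a generic illustration with hypothetical names, not the authors' sparse formulation):

```python
import numpy as np

def patch_label_fusion(target_patch, atlas_patches, atlas_labels, sigma=1.0):
    """Weighted-vote patch-based label fusion: each atlas patch votes for
    its center label, weighted by a Gaussian of its intensity distance to
    the target patch. Returns the winning label and the label probabilities."""
    w = np.array([np.exp(-np.sum((target_patch - p) ** 2) / (2 * sigma ** 2))
                  for p in atlas_patches])
    labels = np.unique(atlas_labels)
    scores = {lab: w[np.array(atlas_labels) == lab].sum() for lab in labels}
    probs = {lab: s / w.sum() for lab, s in scores.items()}
    return max(probs, key=probs.get), probs
```

The per-label probabilities are exactly the kind of tissue probability map the abstract describes fusing into a level set initialization.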

  2. 2D to 3D fusion of echocardiography and cardiac CT for TAVR and TAVI image guidance.

    PubMed

    Khalil, Azira; Faisal, Amir; Lai, Khin Wee; Ng, Siew Cheok; Liew, Yih Miin

    2017-08-01

This study proposed a registration framework to fuse 2D echocardiography images of the aortic valve with preoperative cardiac CT volume. The registration facilitates the fusion of CT and echocardiography to aid the diagnosis of aortic valve diseases and provide surgical guidance during transcatheter aortic valve replacement and implantation. The image registration framework consists of two major steps: temporal synchronization and spatial registration. Temporal synchronization allows time stamping of echocardiography time series data to identify frames that are at a similar cardiac phase as the CT volume. Spatial registration is an intensity-based normalized mutual information method applied with a pattern search optimization algorithm to produce an interpolated cardiac CT image that matches the echocardiography image. Our proposed registration method has been applied on the short-axis "Mercedes Benz" sign view of the aortic valve and long-axis parasternal view of echocardiography images from ten patients. The accuracy of our fully automated registration method was 0.81 ± 0.08 and 1.30 ± 0.13 mm in terms of Dice coefficient and Hausdorff distance for short-axis aortic valve view registration, whereas for long-axis parasternal view registration it was 0.79 ± 0.02 and 1.19 ± 0.11 mm, respectively. This accuracy is comparable to gold standard manual registration by an expert. There was no significant difference in aortic annulus diameter measurement between the automatically and manually registered CT images. Without the use of optical tracking, we have shown the applicability of this technique for effective fusion of echocardiography with preoperative CT volume to potentially facilitate catheter-based surgery.
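
The spatial step above optimizes normalized mutual information. One common histogram-based NMI estimate (one of several NMI definitions in the literature; a generic sketch, not this paper's implementation) is:

```python
import numpy as np

def normalized_mutual_information(a, b, bins=32):
    """NMI = (H(A) + H(B)) / H(A, B), estimated from a joint histogram.
    Equals 2 for identical images (up to binning) and approaches 1 as the
    images become statistically independent."""
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = hist / hist.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0
    h_xy = -np.sum(pxy[nz] * np.log(pxy[nz]))      # joint entropy
    h_x = -np.sum(px[px > 0] * np.log(px[px > 0]))  # marginal entropies
    h_y = -np.sum(py[py > 0] * np.log(py[py > 0]))
    return (h_x + h_y) / h_xy
```

An optimizer such as pattern search would evaluate this metric over candidate CT interpolations and keep the pose with the highest NMI.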

  3. SU-E-J-110: A Novel Level Set Active Contour Algorithm for Multimodality Joint Segmentation/Registration Using the Jensen-Rényi Divergence.

    PubMed

    Markel, D; Naqa, I El; Freeman, C; Vallières, M

    2012-06-01

To present a novel joint segmentation/registration algorithm for multimodality image-guided and adaptive radiotherapy. A major challenge to this framework is the sensitivity of many segmentation or registration algorithms to noise. Presented is a level set active contour based on the Jensen-Renyi (JR) divergence to achieve improved noise robustness in a multi-modality imaging space. It was found that the JR divergence, when used for segmentation, has improved robustness to noise compared to mutual information or other entropy-based metrics. The MI metric failed at around 2/3 of the noise power at which the JR divergence failed. The JR divergence metric is useful for the task of joint segmentation/registration of multimodality images and shows improved results compared to entropy-based metrics. The algorithm can be easily modified to incorporate non-intensity-based images, which would allow applications in multi-modality and texture analysis. © 2012 American Association of Physicists in Medicine.
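
The JR divergence underlying this record is the Renyi entropy of a mixture minus the weighted mean of the component entropies. A direct sketch for discrete probability vectors (illustrative only, not the paper's level set implementation):

```python
import numpy as np

def renyi_entropy(p, alpha):
    """Renyi entropy H_a(p) = log(sum p^a) / (1 - a), for alpha != 1."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return np.log(np.sum(p ** alpha)) / (1.0 - alpha)

def jensen_renyi(dists, alpha=0.5, weights=None):
    """Jensen-Renyi divergence among probability vectors: entropy of the
    (weighted) mixture minus the weighted mean of individual entropies.
    Non-negative for alpha in (0, 1), and zero iff all inputs coincide."""
    dists = np.asarray(dists, dtype=float)
    n = len(dists)
    w = np.full(n, 1.0 / n) if weights is None else np.asarray(weights)
    mixture = w @ dists
    return renyi_entropy(mixture, alpha) - sum(
        wi * renyi_entropy(p, alpha) for wi, p in zip(w, dists))
```

In a segmentation/registration energy, the distributions would be intensity histograms of the regions or modalities being compared, and the divergence is maximized (for discrimination) or minimized (for alignment) as appropriate.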

  4. Propagation of registration uncertainty during multi-fraction cervical cancer brachytherapy

    NASA Astrophysics Data System (ADS)

    Amir-Khalili, A.; Hamarneh, G.; Zakariaee, R.; Spadinger, I.; Abugharbieh, R.

    2017-10-01

    Multi-fraction cervical cancer brachytherapy is a form of image-guided radiotherapy that heavily relies on 3D imaging during treatment planning, delivery, and quality control. In this context, deformable image registration can increase the accuracy of dosimetric evaluations, provided that one can account for the uncertainties associated with the registration process. To enable such capability, we propose a mathematical framework that first estimates the registration uncertainty and subsequently propagates the effects of the computed uncertainties from the registration stage through to the visualizations, organ segmentations, and dosimetric evaluations. To ensure the practicality of our proposed framework in real world image-guided radiotherapy contexts, we implemented our technique via a computationally efficient and generalizable algorithm that is compatible with existing deformable image registration software. In our clinical context of fractionated cervical cancer brachytherapy, we perform a retrospective analysis on 37 patients and present evidence that our proposed methodology for computing and propagating registration uncertainties may be beneficial during therapy planning and quality control. Specifically, we quantify and visualize the influence of registration uncertainty on dosimetric analysis during the computation of the total accumulated radiation dose on the bladder wall. We further show how registration uncertainty may be leveraged into enhanced visualizations that depict the quality of the registration and highlight potential deviations from the treatment plan prior to the delivery of radiation treatment. Finally, we show that we can improve the transfer of delineated volumetric organ segmentation labels from one fraction to the next by encoding the computed registration uncertainties into the segmentation labels.
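
The propagation idea in this record can be illustrated with a deliberately simplified Monte Carlo sketch in 1D (hypothetical names and setup, not the authors' framework): jitter each fraction's mapped point by the registration's estimated uncertainty and observe the spread of the accumulated dose.

```python
import numpy as np

def propagate_registration_uncertainty(dose_profiles, x, point, sigma,
                                       n_samples=2000, seed=0):
    """Toy 1D illustration of propagating registration uncertainty into an
    accumulated-dose estimate: the point's mapped position in each fraction
    is perturbed by the registration's standard deviation `sigma`, the
    per-fraction doses (linear interpolation on grid `x`) are summed per
    sample, and the mean and spread of the total are returned."""
    rng = np.random.default_rng(seed)
    totals = np.zeros(n_samples)
    for dose in dose_profiles:
        jitter = rng.normal(0.0, sigma, size=n_samples)
        totals += np.interp(point + jitter, x, dose)
    return totals.mean(), totals.std()
```

In a flat dose region the spread is zero regardless of registration error, while in a steep gradient the same spatial uncertainty translates into a large dosimetric uncertainty, which is the effect the authors visualize on the bladder wall.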

  5. Improving alignment in Tract-based spatial statistics: evaluation and optimization of image registration.

    PubMed

    de Groot, Marius; Vernooij, Meike W; Klein, Stefan; Ikram, M Arfan; Vos, Frans M; Smith, Stephen M; Niessen, Wiro J; Andersson, Jesper L R

    2013-08-01

Anatomical alignment in neuroimaging studies is of such importance that considerable effort is put into improving the registration used to establish spatial correspondence. Tract-based spatial statistics (TBSS) is a popular method for comparing diffusion characteristics across subjects. TBSS establishes spatial correspondence using a combination of nonlinear registration and a "skeleton projection" that may break topological consistency of the transformed brain images. We therefore investigated the feasibility of replacing the two-stage registration-projection procedure in TBSS with a single, regularized, high-dimensional registration. To optimize registration parameters and to evaluate registration performance in diffusion MRI, we designed an evaluation framework that uses native space probabilistic tractography for 23 white matter tracts, and quantifies tract similarity across subjects in standard space. We optimized parameters for two registration algorithms on two diffusion datasets of different quality. We investigated the reproducibility of the evaluation framework, and of the optimized registration algorithms. Next, we compared the registration performance of the regularized registration methods and TBSS. Finally, the feasibility and effect of incorporating the improved registration in TBSS were evaluated in an example study. The evaluation framework was highly reproducible for both algorithms (R² = 0.993 and 0.931). The optimal registration parameters depended on the quality of the dataset in a graded and predictable manner. At optimal parameters, both algorithms outperformed the registration of TBSS, showing the feasibility of adopting such approaches in TBSS. This was further confirmed in the example experiment. Copyright © 2013 Elsevier Inc. All rights reserved.

  6. Image Segmentation, Registration, Compression, and Matching

    NASA Technical Reports Server (NTRS)

    Yadegar, Jacob; Wei, Hai; Yadegar, Joseph; Ray, Nilanjan; Zabuawala, Sakina

    2011-01-01

A novel computational framework was developed for 2D affine-invariant matching exploiting a parameter space. Named the affine invariant parameter space (AIPS), the technique can be applied to many image-processing and computer-vision problems, including image registration, template matching, and object tracking from image sequences. The AIPS is formed by the parameters in an affine combination of a set of feature points in the image plane. In cases where the entire image can be assumed to have undergone a single affine transformation, the new AIPS match metric and matching framework become very effective (compared with the state-of-the-art methods at the time of this reporting). No knowledge of scaling or any other transformation parameters is needed a priori to apply the AIPS framework. An automated suite of software tools has been created to provide accurate image segmentation (for data cleaning) and high-quality 2D image and 3D surface registration (for fusing multi-resolution terrain, image, and map data). These tools are capable of supporting existing GIS toolkits already in the marketplace, and will also be usable in a stand-alone fashion. The toolkit applies novel algorithmic approaches for image segmentation, feature extraction, and registration of 2D imagery and 3D surface data, which support first-pass, batched, fully automatic feature extraction (for segmentation) and registration. A hierarchical and adaptive approach is taken to achieve automatic feature extraction, segmentation, and registration. Surface registration is the process of aligning two (or more) data sets to a common coordinate system, during which the transformation between their different coordinate systems is determined.
Also developed here is a novel volumetric surface modeling and compression technique that provides both quality-guaranteed mesh surface approximations and compaction of the model sizes by efficiently coding the geometry and connectivity/topology components of the generated models. The highly efficient triangular mesh compression compacts the connectivity information at a rate of 1.5-4 bits per vertex (on average for triangle meshes), while reducing the 3D geometry by 40-50 percent. Finally, taking into consideration the characteristics of 3D terrain data, and using the innovative, regularized binary decomposition mesh modeling, a multistage, pattern-driven modeling and compression technique has been developed to provide an effective framework for compressing digital elevation model (DEM) surfaces, high-resolution aerial imagery, and other types of NASA data.

  7. MR to CT registration of brains using image synthesis

    NASA Astrophysics Data System (ADS)

    Roy, Snehashis; Carass, Aaron; Jog, Amod; Prince, Jerry L.; Lee, Junghoon

    2014-03-01

Computed tomography (CT) is the preferred imaging modality for patient dose calculation for radiation therapy. Magnetic resonance (MR) imaging (MRI) is used along with CT to identify brain structures due to its superior soft tissue contrast. Registration of MR and CT is necessary for accurate delineation of the tumor and other structures, and is critical in radiotherapy planning. Mutual information (MI) or its variants are typically used as a similarity metric to register MRI to CT. However, unlike CT, MRI intensity does not have an accepted calibrated intensity scale. Therefore, MI-based MR-CT registration may vary from scan to scan as MI depends on the joint histogram of the images. In this paper, we propose a fully automatic framework for MR-CT registration by synthesizing a CT image from MRI using a co-registered pair of MR and CT images as an atlas. Patches of the subject MRI are matched to the atlas and the synthetic CT patches are estimated in a probabilistic framework. The synthetic CT is registered to the original CT using a deformable registration and the computed deformation is applied to the MRI. In contrast to most existing methods, we do not need any manual intervention such as picking landmarks or regions of interests. The proposed method was validated on ten brain cancer patient cases, showing 25% improvement in MI and correlation between MR and CT images after registration compared to state-of-the-art registration methods.
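
The atlas patch matching at the core of the synthesis can be sketched with a plain nearest-neighbor lookup (the paper uses a probabilistic estimate; this greedy version with illustrative names only conveys the data flow):

```python
import numpy as np

def synthesize_ct(mr_patches, atlas_mr_patches, atlas_ct_values):
    """Nearest-neighbor sketch of atlas patch matching: each subject MR
    patch is matched to the most similar atlas MR patch (sum of squared
    differences), and the co-registered atlas CT value at that patch's
    center is taken as the synthetic CT value."""
    atlas = np.asarray(atlas_mr_patches, dtype=float)
    out = []
    for p in mr_patches:
        # squared distance to every atlas patch, summed over patch voxels
        d = np.sum((atlas - np.asarray(p, dtype=float)) ** 2,
                   axis=tuple(range(1, atlas.ndim)))
        out.append(atlas_ct_values[int(np.argmin(d))])
    return np.array(out)
```

The resulting synthetic CT is mono-modal with the real CT, so a standard intensity-based deformable registration can then replace the less stable MI-based MR-CT registration.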

  8. Multimodal image registration based on binary gradient angle descriptor.

    PubMed

    Jiang, Dongsheng; Shi, Yonghong; Yao, Demin; Fan, Yifeng; Wang, Manning; Song, Zhijian

    2017-12-01

    Multimodal image registration plays an important role in image-guided interventions/therapy and atlas building, and it is still a challenging task due to the complex intensity variations in different modalities. The paper addresses the problem and proposes a simple, compact, fast and generally applicable modality-independent binary gradient angle descriptor (BGA) based on the rationale of gradient orientation alignment. The BGA can be easily calculated at each voxel by coding the quadrant in which a local gradient vector falls, and it has an extremely low computational complexity, requiring only three convolutions, two multiplication operations and two comparison operations. Meanwhile, the binarized encoding of the gradient orientation makes the BGA more resistant to image degradations compared with conventional gradient orientation methods. The BGA can extract similar feature descriptors for different modalities and enable the use of simple similarity measures, which makes it applicable within a wide range of optimization frameworks. The results for pairwise multimodal and monomodal registrations between various images (T1, T2, PD, T1c, Flair) consistently show that the BGA significantly outperforms localized mutual information. The experimental results also confirm that the BGA can be a reliable alternative to the sum of absolute difference in monomodal image registration. The BGA can also achieve an accuracy of [Formula: see text], similar to that of the SSC, for the deformable registration of inhale and exhale CT scans. Specifically, for the highly challenging deformable registration of preoperative MRI and 3D intraoperative ultrasound images, the BGA achieves a similar registration accuracy of [Formula: see text] compared with state-of-the-art approaches, with a computation time of 18.3 s per case. The BGA improves the registration performance in terms of both accuracy and time efficiency. 
With further acceleration, the framework has the potential for application in time-sensitive clinical environments, such as for preoperative MRI and intraoperative US image registration for image-guided intervention.
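
The quadrant-coding rationale of the BGA can be sketched in 2D (the actual descriptor is 3D, coding octants of the gradient with three convolutions; here two sign bits per pixel suffice to show the idea, and the helper names are illustrative):

```python
import numpy as np

def binary_gradient_angle(image):
    """2D sketch of the BGA idea: encode, per pixel, the quadrant of the
    local gradient vector as two sign bits (bit 0: gx >= 0, bit 1: gy >= 0).
    The binarization discards gradient magnitude, which is what makes the
    code comparable across modalities."""
    gy, gx = np.gradient(image.astype(float))
    return (gx >= 0).astype(np.uint8) | ((gy >= 0).astype(np.uint8) << 1)

def hamming_dissimilarity(a, b):
    """Fraction of pixels whose quadrant codes differ -- a cheap similarity
    measure usable in place of mutual information."""
    return float(np.mean(a != b))
```

Because only gradient signs are kept, the code is unchanged under any positive affine intensity remap, a crude stand-in for a modality change.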

  9. A multi-resolution strategy for a multi-objective deformable image registration framework that accommodates large anatomical differences

    NASA Astrophysics Data System (ADS)

    Alderliesten, Tanja; Bosman, Peter A. N.; Sonke, Jan-Jakob; Bel, Arjan

    2014-03-01

Currently, two major challenges dominate the field of deformable image registration. The first challenge is related to the tuning of the developed methods to specific problems (i.e. how to best combine different objectives such as similarity measure and transformation effort). This is one of the reasons why, despite significant progress, clinical implementation of such techniques has proven to be difficult. The second challenge is to account for large anatomical differences (e.g. large deformations, (dis)appearing structures) that occurred between image acquisitions. In this paper, we study a framework based on multi-objective optimization to improve registration robustness and to simplify tuning for specific applications. Within this framework we specifically consider the use of an advanced model-based evolutionary algorithm for optimization and a dual-dynamic transformation model (i.e. two "non-fixed" grids: one for the source and one for the target image) to accommodate large anatomical differences. The framework computes and presents multiple outcomes that represent efficient trade-offs between the different objectives (a so-called Pareto front). In image processing it is common practice, for reasons of robustness and accuracy, to use a multi-resolution strategy. This is, however, only well-established for single-objective registration methods. Here we describe how such a strategy can be realized for our multi-objective approach and compare its results with a single-resolution strategy. For this study we selected the case of prone-supine breast MRI registration. Results show that the well-known advantages of a multi-resolution strategy are successfully transferred to our multi-objective approach, resulting in superior (i.e. Pareto-dominating) outcomes.
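
The Pareto front the framework presents is simply the set of non-dominated objective vectors. A minimal filter, with both objectives minimized (e.g. image dissimilarity and deformation effort), can be sketched as:

```python
import numpy as np

def pareto_front(points):
    """Return the indices of non-dominated objective vectors (all objectives
    minimized). A point is dominated if some other point is no worse in
    every objective and strictly better in at least one."""
    pts = np.asarray(points, dtype=float)
    keep = []
    for i, p in enumerate(pts):
        dominated = np.any(np.all(pts <= p, axis=1) & np.any(pts < p, axis=1))
        if not dominated:
            keep.append(i)
    return keep
```

The user then chooses one trade-off from this front rather than committing to a fixed objective weighting before registration, which is the tuning problem the abstract highlights.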

  10. Automatic parameter selection for feature-based multi-sensor image registration

    NASA Astrophysics Data System (ADS)

    DelMarco, Stephen; Tom, Victor; Webb, Helen; Chao, Alan

    2006-05-01

    Accurate image registration is critical for applications such as precision targeting, geo-location, change-detection, surveillance, and remote sensing. However, the increasing volume of image data is exceeding the current capacity of human analysts to perform manual registration. This image data glut necessitates the development of automated approaches to image registration, including algorithm parameter value selection. Proper parameter value selection is crucial to the success of registration techniques. The appropriate algorithm parameters can be highly scene and sensor dependent. Therefore, robust algorithm parameter value selection approaches are a critical component of an end-to-end image registration algorithm. In previous work, we developed a general framework for multisensor image registration which includes feature-based registration approaches. In this work we examine the problem of automated parameter selection. We apply the automated parameter selection approach of Yitzhaky and Peli to select parameters for feature-based registration of multisensor image data. The approach consists of generating multiple feature-detected images by sweeping over parameter combinations and using these images to generate estimated ground truth. The feature-detected images are compared to the estimated ground truth images to generate ROC points associated with each parameter combination. We develop a strategy for selecting the optimal parameter set by choosing the parameter combination corresponding to the optimal ROC point. We present numerical results showing the effectiveness of the approach using registration of collected SAR data to reference EO data.
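
Turning the swept ROC points into a single operating choice can be done by, for example, picking the point closest to the ideal (0, 1) corner. The sketch below uses an illustrative dict-of-parameters interface; the paper's exact optimality criterion may differ:

```python
def best_parameters(roc_points):
    """Pick the parameter combination whose ROC point
    (false-positive rate, true-positive rate) lies closest to the
    ideal corner (0, 1). `roc_points` maps a parameter id to (fpr, tpr)."""
    def dist(item):
        fpr, tpr = item[1]
        return (fpr ** 2 + (1.0 - tpr) ** 2) ** 0.5
    return min(roc_points.items(), key=dist)[0]
```

Each candidate parameter combination contributes one ROC point (computed against the estimated ground truth), and the combination whose point minimizes this distance is selected automatically.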

  11. Cortical Surface Registration for Image-Guided Neurosurgery Using Laser-Range Scanning

    PubMed Central

    Sinha, Tuhin K.; Cash, David M.; Galloway, Robert L.; Weil, Robert J.

    2013-01-01

In this paper, a method of acquiring intraoperative data using a laser range scanner (LRS) is presented within the context of model-updated image-guided surgery. Registering textured point clouds generated by the LRS to tomographic data is explored using established point-based and surface techniques as well as a novel method that incorporates geometry and intensity information via mutual information (SurfaceMI). Phantom registration studies were performed to examine accuracy and robustness for each framework. In addition, an in vivo registration is performed to demonstrate feasibility of the data acquisition system in the operating room. Results indicate that SurfaceMI performed better in many cases than point-based (PBR) and iterative closest point (ICP) methods for registration of textured point clouds. Mean target registration error (TRE) for simulated deep tissue targets in a phantom was 1.0 ± 0.2, 2.0 ± 0.3, and 1.2 ± 0.3 mm for PBR, ICP, and SurfaceMI, respectively. With regard to in vivo registration, the mean TRE of vessel contour points for each framework was 1.9 ± 1.0, 0.9 ± 0.6, and 1.3 ± 0.5 mm for PBR, ICP, and SurfaceMI, respectively. The methods discussed in this paper, in conjunction with the quantitative data, provide impetus for using LRS technology within the model-updated image-guided surgery framework. PMID:12906252

  12. On the nature of data collection for soft-tissue image-to-physical organ registration: a noise characterization study

    NASA Astrophysics Data System (ADS)

    Collins, Jarrod A.; Heiselman, Jon S.; Weis, Jared A.; Clements, Logan W.; Simpson, Amber L.; Jarnagin, William R.; Miga, Michael I.

    2017-03-01

In image-guided liver surgery (IGLS), sparse representations of the anterior organ surface may be collected intraoperatively to drive image-to-physical space registration. Soft tissue deformation represents a significant source of error for IGLS techniques. This work investigates the impact of surface data quality on current surface-based IGLS registration methods. We characterize the robustness of our IGLS registration methods to noise in organ surface digitization. We study this within a novel human-to-phantom data framework that allows a rapid evaluation of clinically realistic data and noise patterns on a fully characterized hepatic deformation phantom. Additionally, we implement a surface data resampling strategy that is designed to decrease the impact of differences in surface acquisition. For this analysis, n=5 cases of clinical intraoperative data consisting of organ surface and salient feature digitizations from open liver resection were collected and analyzed within our human-to-phantom validation framework. As expected, results indicate that increasing levels of noise in surface acquisition cause registration fidelity to deteriorate. With respect to rigid registration using the raw and resampled data at clinically realistic levels of noise (i.e. a magnitude of 1.5 mm), resampling improved TRE by 21%. In terms of nonrigid registration, registrations using resampled data outperformed the raw-data result by 14% at clinically realistic noise levels and were less susceptible to noise across the range investigated. These results demonstrate the types of analyses our novel human-to-phantom validation framework can provide and indicate the considerable benefits of resampling strategies.
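
One simple instance of the surface-resampling strategy described above is uniform-grid downsampling of the digitized point cloud (a generic sketch, not necessarily the authors' scheme): points are bucketed into cubic cells and replaced by per-cell centroids, which evens out sweep density and averages down digitization noise.

```python
import numpy as np

def grid_resample(points, cell):
    """Uniform-grid resampling of a digitized surface point cloud: points
    are bucketed into cubic cells of side `cell` and each occupied cell is
    replaced by the centroid of its points."""
    pts = np.asarray(points, dtype=float)
    keys = np.floor(pts / cell).astype(int)       # integer cell coordinates
    buckets = {}
    for k, p in zip(map(tuple, keys), pts):
        buckets.setdefault(k, []).append(p)
    return np.array([np.mean(b, axis=0) for b in buckets.values()])
```

The registration (rigid or nonrigid) is then driven by the resampled cloud, so densely swept regions no longer dominate the objective function.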

  13. Deformable image registration using convolutional neural networks

    NASA Astrophysics Data System (ADS)

    Eppenhof, Koen A. J.; Lafarge, Maxime W.; Moeskops, Pim; Veta, Mitko; Pluim, Josien P. W.

    2018-03-01

    Deformable image registration can be time-consuming and often needs extensive parameterization to perform well on a specific application. We present a step towards a registration framework based on a three-dimensional convolutional neural network. The network directly learns transformations between pairs of three-dimensional images. The outputs of the network are three maps for the x, y, and z components of a thin plate spline transformation grid. The network is trained on synthetic random transformations, which are applied to a small set of representative images for the desired application. Training therefore does not require manually annotated ground truth deformation information. The methodology is demonstrated on public data sets of inspiration-expiration lung CT image pairs, which come with annotated corresponding landmarks for evaluation of the registration accuracy. Advantages of this methodology are its fast registration times and its minimal parameterization.

  14. Pydpiper: a flexible toolkit for constructing novel registration pipelines.

    PubMed

    Friedel, Miriam; van Eede, Matthijs C; Pipitone, Jon; Chakravarty, M Mallar; Lerch, Jason P

    2014-01-01

    Using neuroimaging technologies to elucidate the relationship between genotype and phenotype and brain and behavior will be a key contribution to biomedical research in the twenty-first century. Among the many methods for analyzing neuroimaging data, image registration deserves particular attention due to its wide range of applications. Finding strategies to register together many images and analyze the differences between them can be a challenge, particularly given that different experimental designs require different registration strategies. Moreover, writing software that can handle different types of image registration pipelines in a flexible, reusable and extensible way can be challenging. In response to this challenge, we have created Pydpiper, a neuroimaging registration toolkit written in Python. Pydpiper is an open-source, freely available software package that provides multiple modules for various image registration applications. Pydpiper offers five key innovations. Specifically: (1) a robust file handling class that allows access to outputs from all stages of registration at any point in the pipeline; (2) the ability of the framework to eliminate duplicate stages; (3) reusable, easy to subclass modules; (4) a development toolkit written for non-developers; (5) four complete applications that run complex image registration pipelines "out-of-the-box." In this paper, we will discuss both the general Pydpiper framework and the various ways in which component modules can be pieced together to easily create new registration pipelines. This will include a discussion of the core principles motivating code development and a comparison of Pydpiper with other available toolkits. We also provide a comprehensive, line-by-line example to orient users with limited programming knowledge and highlight some of the most useful features of Pydpiper. In addition, we will present the four current applications of the code.

  15. Pydpiper: a flexible toolkit for constructing novel registration pipelines

    PubMed Central

    Friedel, Miriam; van Eede, Matthijs C.; Pipitone, Jon; Chakravarty, M. Mallar; Lerch, Jason P.

    2014-01-01

    Using neuroimaging technologies to elucidate the relationships between genotype and phenotype, and between brain and behavior, will be a key contribution to biomedical research in the twenty-first century. Among the many methods for analyzing neuroimaging data, image registration deserves particular attention due to its wide range of applications. Finding strategies to register many images together and analyze the differences between them can be a challenge, particularly given that different experimental designs require different registration strategies. Moreover, writing software that can handle different types of image registration pipelines in a flexible, reusable and extensible way can be challenging. In response to this challenge, we have created Pydpiper, a neuroimaging registration toolkit written in Python. Pydpiper is an open-source, freely available software package that provides multiple modules for various image registration applications. Pydpiper offers five key innovations. Specifically: (1) a robust file handling class that allows access to outputs from all stages of registration at any point in the pipeline; (2) the ability of the framework to eliminate duplicate stages; (3) reusable, easy-to-subclass modules; (4) a development toolkit written for non-developers; (5) four complete applications that run complex image registration pipelines “out-of-the-box.” In this paper, we discuss both the general Pydpiper framework and the various ways in which component modules can be pieced together to easily create new registration pipelines. This includes a discussion of the core principles motivating code development and a comparison of Pydpiper with other available toolkits. We also provide a comprehensive, line-by-line example to orient users with limited programming knowledge and highlight some of the most useful features of Pydpiper. In addition, we present the four current applications of the code. PMID:25126069

  16. Invited review--Image registration in veterinary radiation oncology: indications, implications, and future advances.

    PubMed

    Feng, Yang; Lawrence, Jessica; Cheng, Kun; Montgomery, Dean; Forrest, Lisa; McLaren, Duncan B; McLaughlin, Stephen; Argyle, David J; Nailon, William H

    2016-01-01

    The field of veterinary radiation therapy (RT) has gained substantial momentum in recent decades with significant advances in conformal treatment planning, image-guided radiation therapy (IGRT), and intensity-modulated (IMRT) techniques. At the root of these advancements lie improvements in tumor imaging, image alignment (registration), target volume delineation, and identification of critical structures. Image registration has been widely used to combine information from multimodality images such as computerized tomography (CT), magnetic resonance imaging (MRI), and positron emission tomography (PET) to improve the accuracy of radiation delivery and reliably identify tumor-bearing areas. Many different techniques have been applied in image registration. This review provides an overview of medical image registration in RT and its applications in veterinary oncology. A summary of the most commonly used approaches in human and veterinary medicine is presented along with their current use in IGRT and adaptive radiation therapy (ART). It is important to realize that registration does not guarantee that target volumes, such as the gross tumor volume (GTV), are correctly identified on the image being registered, as limitations unique to registration algorithms exist. Research involving novel registration frameworks for automatic segmentation of tumor volumes is ongoing and comparative oncology programs offer a unique opportunity to test the efficacy of proposed algorithms. © 2016 American College of Veterinary Radiology.

  17. Registration of 4D cardiac CT sequences under trajectory constraints with multichannel diffeomorphic demons.

    PubMed

    Peyrat, Jean-Marc; Delingette, Hervé; Sermesant, Maxime; Xu, Chenyang; Ayache, Nicholas

    2010-07-01

    We propose a framework for the nonlinear spatiotemporal registration of 4D time-series of images based on the Diffeomorphic Demons (DD) algorithm. In this framework, the 4D spatiotemporal registration is decoupled into a 4D temporal registration, defined as mapping physiological states, and a 4D spatial registration, defined as mapping trajectories of physical points. Our contribution focuses more specifically on the 4D spatial registration, which should be consistent over time, as opposed to 3D registration, which solely aims at mapping homologous points at a given time-point. First, we estimate in each sequence the motion displacement field, which is a dense representation of the point trajectories we want to register. Then, we perform simultaneous 3D registrations of corresponding time-points under the constraint that the same physical points are mapped over time, called the trajectory constraints. Under these constraints, we show that the 4D spatial registration can be formulated as a multichannel registration of 3D images. To solve it, we propose a novel version of the Diffeomorphic Demons algorithm extended to vector-valued 3D images, the Multichannel Diffeomorphic Demons (MDD). For evaluation, this framework is applied to the registration of 4D cardiac computed tomography (CT) sequences and compared to other standard methods with real patient data and synthetic data simulated from a physiologically realistic electromechanical cardiac model. Results show that the trajectory constraints act as a temporal regularization consistent with motion, whereas the multichannel registration acts as a spatial regularization. Finally, using these trajectory constraints with multichannel registration yields the best compromise between registration accuracy, temporal and spatial smoothness, and computation times. A prospective example application is also presented: the spatiotemporal registration of 4D cardiac CT sequences of the same patient before and after radiofrequency ablation (RFA) for atrial fibrillation (AF). The intersequence spatial transformations over a cardiac cycle allow analysis and quantification of the regression of left ventricular hypertrophy and its impact on cardiac function.
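    The core computational idea, registering vector-valued images with one shared displacement field, can be sketched by summing the per-channel demons forces. The following is a minimal illustration only (plain additive demons forces; the actual MDD algorithm additionally composes diffeomorphic updates and smooths the field):

```python
import numpy as np

def multichannel_demons_force(fixed, warped):
    """Demons update force for vector-valued (multichannel) 2D images.

    fixed, warped: arrays of shape (C, H, W). All channels share the same
    displacement field, so per-channel forces are accumulated and averaged.
    """
    force = np.zeros((2,) + fixed.shape[1:])
    for f, m in zip(fixed, warped):
        diff = m - f
        gy, gx = np.gradient(f)                 # fixed-image gradient
        denom = gy ** 2 + gx ** 2 + diff ** 2
        denom[denom == 0] = 1.0                 # avoid division by zero
        force[0] += -diff * gy / denom
        force[1] += -diff * gx / denom
    return force / len(fixed)
```

    In a full registration loop this force would be smoothed and composed with the current deformation; identical channel stacks produce a zero force, as expected.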

  18. A generalized framework unifying image registration and respiratory motion models and incorporating image reconstruction, for partial image data or full images

    NASA Astrophysics Data System (ADS)

    McClelland, Jamie R.; Modat, Marc; Arridge, Simon; Grimes, Helen; D'Souza, Derek; Thomas, David; O'Connell, Dylan; Low, Daniel A.; Kaza, Evangelia; Collins, David J.; Leach, Martin O.; Hawkes, David J.

    2017-06-01

    Surrogate-driven respiratory motion models relate the motion of the internal anatomy to easily acquired respiratory surrogate signals, such as the motion of the skin surface. They are usually built by first using image registration to determine the motion from a number of dynamic images, and then fitting a correspondence model relating the motion to the surrogate signals. In this paper we present a generalized framework that unifies the image registration and correspondence model fitting into a single optimization. This allows the use of ‘partial’ imaging data, such as individual slices, projections, or k-space data, where it would not be possible to determine the motion from an individual frame of data. Motion compensated image reconstruction can also be incorporated using an iterative approach, so that both the motion and a motion-free image can be estimated from the partial image data. The framework has been applied to real 4DCT, Cine CT, multi-slice CT, and multi-slice MR data, as well as simulated datasets from a computer phantom. This includes the use of a super-resolution reconstruction method for the multi-slice MR data. Good results were obtained for all datasets, including quantitative results for the 4DCT and phantom datasets where the ground truth motion was known or could be estimated.
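    For orientation, the classic two-step pipeline that this framework generalizes can be sketched as a least-squares fit of a linear correspondence model to motion estimates; all shapes, signals, and coefficients below are synthetic stand-ins, not data from the paper:

```python
import numpy as np

rng = np.random.default_rng(2)
T, P = 30, 50                           # time frames, motion control points

# surrogate signal (e.g. skin-surface height) plus a constant term
s = np.stack([np.sin(np.linspace(0, 4 * np.pi, T)), np.ones(T)], axis=1)

# synthetic "registration output": per-point motion driven linearly by s
true_coef = rng.normal(size=(2, P))
motion = s @ true_coef + 0.01 * rng.normal(size=(T, P))

# step 2 of the classic pipeline: fit the correspondence model by least squares
coef, *_ = np.linalg.lstsq(s, motion, rcond=None)
```

    The paper's contribution is to fold this fitting step and the registration itself into a single optimization, so the model can be estimated even from partial data (slices, projections, k-space) where per-frame registration is impossible.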

  19. A generalized framework unifying image registration and respiratory motion models and incorporating image reconstruction, for partial image data or full images.

    PubMed

    McClelland, Jamie R; Modat, Marc; Arridge, Simon; Grimes, Helen; D'Souza, Derek; Thomas, David; O'Connell, Dylan; Low, Daniel A; Kaza, Evangelia; Collins, David J; Leach, Martin O; Hawkes, David J

    2017-06-07

    Surrogate-driven respiratory motion models relate the motion of the internal anatomy to easily acquired respiratory surrogate signals, such as the motion of the skin surface. They are usually built by first using image registration to determine the motion from a number of dynamic images, and then fitting a correspondence model relating the motion to the surrogate signals. In this paper we present a generalized framework that unifies the image registration and correspondence model fitting into a single optimization. This allows the use of 'partial' imaging data, such as individual slices, projections, or k-space data, where it would not be possible to determine the motion from an individual frame of data. Motion compensated image reconstruction can also be incorporated using an iterative approach, so that both the motion and a motion-free image can be estimated from the partial image data. The framework has been applied to real 4DCT, Cine CT, multi-slice CT, and multi-slice MR data, as well as simulated datasets from a computer phantom. This includes the use of a super-resolution reconstruction method for the multi-slice MR data. Good results were obtained for all datasets, including quantitative results for the 4DCT and phantom datasets where the ground truth motion was known or could be estimated.

  20. A generalized framework unifying image registration and respiratory motion models and incorporating image reconstruction, for partial image data or full images

    PubMed Central

    McClelland, Jamie R; Modat, Marc; Arridge, Simon; Grimes, Helen; D’Souza, Derek; Thomas, David; O’Connell, Dylan; Low, Daniel A; Kaza, Evangelia; Collins, David J; Leach, Martin O; Hawkes, David J

    2017-01-01

    Abstract Surrogate-driven respiratory motion models relate the motion of the internal anatomy to easily acquired respiratory surrogate signals, such as the motion of the skin surface. They are usually built by first using image registration to determine the motion from a number of dynamic images, and then fitting a correspondence model relating the motion to the surrogate signals. In this paper we present a generalized framework that unifies the image registration and correspondence model fitting into a single optimization. This allows the use of ‘partial’ imaging data, such as individual slices, projections, or k-space data, where it would not be possible to determine the motion from an individual frame of data. Motion compensated image reconstruction can also be incorporated using an iterative approach, so that both the motion and a motion-free image can be estimated from the partial image data. The framework has been applied to real 4DCT, Cine CT, multi-slice CT, and multi-slice MR data, as well as simulated datasets from a computer phantom. This includes the use of a super-resolution reconstruction method for the multi-slice MR data. Good results were obtained for all datasets, including quantitative results for the 4DCT and phantom datasets where the ground truth motion was known or could be estimated. PMID:28195833

  1. Groupwise registration of MR brain images with tumors.

    PubMed

    Tang, Zhenyu; Wu, Yihong; Fan, Yong

    2017-08-04

    A novel groupwise image registration framework is developed for registering MR brain images with tumors. Our method iteratively estimates a normal-appearance counterpart for each tumor image to be registered and constructs a directed graph (digraph) of normal-appearance images to guide the groupwise image registration. Particularly, our method maps each tumor image to its normal-appearance counterpart by identifying and inpainting brain tumor regions with intensity information estimated using a low-rank plus sparse matrix decomposition based image representation technique. The estimated normal-appearance images are registered groupwise to a group center image, guided by a digraph of images so that the total length of 'image registration paths' is minimized, and the original tumor images are then warped to the group center image using the resulting deformation fields. We have evaluated our method on both simulated and real MR brain tumor images. The registration results were evaluated with overlap measures of corresponding brain regions and the average entropy of image intensity information, and Wilcoxon signed rank tests were adopted to compare different methods with respect to their regional overlap measures. Compared with a groupwise image registration method applied to normal-appearance images estimated using the traditional low-rank plus sparse matrix decomposition based image inpainting, our method achieved higher image registration accuracy with statistical significance (p = 7.02 × 10⁻⁹).

  2. Effective 2D-3D medical image registration using Support Vector Machine.

    PubMed

    Qi, Wenyuan; Gu, Lixu; Zhao, Qiang

    2008-01-01

    Registration of a pre-operative 3D volume dataset with intra-operative 2D images is gradually becoming an important technique for assisting radiologists in diagnosing complicated diseases easily and quickly. In this paper, we propose a novel 2D/3D registration framework based on the Support Vector Machine (SVM) to avoid the disadvantage of generating a large number of DRR images intra-operatively. An estimated similarity-metric distribution is built up from the relationship between transform parameters and prior sparse target metric values by means of support vector regression (SVR). Based on this distribution, globally optimal transform parameters are then searched out by an optimizer in order to align the 3D volume dataset with the intra-operative 2D image. Experiments reveal that our proposed registration method improves performance compared to the conventional registration method and also provides a precise registration result efficiently.
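    The idea of replacing expensive DRR-based metric evaluations with a learned regression surrogate can be sketched as follows; the quadratic similarity surface and all parameter values are hypothetical stand-ins, not the paper's actual metric:

```python
import numpy as np
from sklearn.svm import SVR
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# hypothetical similarity metric over 2 transform parameters, peaking at (2, -1);
# in the real setting each evaluation would require rendering a DRR
def true_similarity(p):
    return -((p[0] - 2.0) ** 2 + (p[1] + 1.0) ** 2)

# sparse set of sampled transforms with their metric values (the "prior" samples)
params = rng.uniform(-5, 5, size=(60, 2))
metrics = np.array([true_similarity(p) for p in params])

# learn the similarity-metric distribution with support vector regression
svr = SVR(kernel="rbf", C=100.0).fit(params, metrics)

# the optimizer now searches the cheap learned surrogate, not new DRR renderings
res = minimize(lambda p: -svr.predict(p.reshape(1, -1))[0], x0=np.zeros(2))
```

    The recovered optimum lands near the true peak despite only 60 metric evaluations, which is the intended saving over dense DRR generation.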

  3. Automated Geo/Co-Registration of Multi-Temporal Very-High-Resolution Imagery.

    PubMed

    Han, Youkyung; Oh, Jaehong

    2018-05-17

    For time-series analysis using very-high-resolution (VHR) multi-temporal satellite images, both accurate georegistration to the map coordinates and subpixel-level co-registration among the images should be conducted. However, applying well-known matching methods, such as scale-invariant feature transform and speeded up robust features, to VHR multi-temporal images has limitations. First, they cannot be used for matching an optical image to heterogeneous non-optical data for georegistration. Second, they produce a local misalignment induced by differences in acquisition conditions, such as acquisition platform stability, the sensor's off-nadir angle, and relief displacement of the considered scene. Therefore, this study addresses the problem by proposing an automated geo/co-registration framework for full-scene multi-temporal images acquired from a VHR optical satellite sensor. The proposed method comprises two primary steps: (1) a global georegistration process, followed by (2) a fine co-registration process. During the first step, two-dimensional multi-temporal satellite images are matched to three-dimensional topographic maps to assign the map coordinates. During the second step, a local analysis of registration noise pixels extracted between the multi-temporal images that have been mapped to the map coordinates is conducted to extract a large number of well-distributed corresponding points (CPs). The CPs are finally used to construct a non-rigid transformation function that enables minimization of the local misalignment existing among the images. Experiments conducted on five Kompsat-3 full scenes confirmed the effectiveness of the proposed framework, showing that the georegistration performance resulted in approximately pixel-level accuracy for most of the scenes, and the co-registration performance further improved the results among all combinations of the georegistered Kompsat-3 image pairs by increasing the calculated cross-correlation values.

  4. Adaptive Diffeomorphic Multiresolution Demons and Their Application to Same Modality Medical Image Registration with Large Deformation

    PubMed Central

    Wang, Chang; Ren, Qiongqiong; Qin, Xin

    2018-01-01

    Diffeomorphic demons can guarantee smooth and reversible deformation and avoid unreasonable deformation. However, the number of iterations needs to be set manually, and this greatly influences the registration result. In order to solve this problem, we proposed adaptive diffeomorphic multiresolution demons in this paper. We used an optimized framework with a nonrigid registration and diffeomorphism strategy, designed a similarity energy function based on grey value, and stopped iterations adaptively. The method was tested on synthetic images and same-modality medical images. Large deformation was simulated by rotational distortion and extrusion transforms, medical image registration with large deformation was performed, and quantitative analyses were conducted using registration evaluation indexes; the influence of different driving forces and parameters on the registration result was also analyzed. The registration results for same-modality medical images were compared with those obtained using active demons, additive demons, and diffeomorphic demons. Quantitative analyses showed that the proposed method's normalized cross-correlation coefficient and structural similarity were the highest and its mean square error was the lowest. Medical image registration with large deformation could be performed successfully, and evaluation indexes remained stable with an increase in deformation strength. The proposed method is effective and robust, and it can be applied to nonrigid registration of same-modality medical images with large deformation.
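    The adaptive stopping idea, halting when the grey-value similarity energy plateaus instead of using a fixed iteration count, can be sketched with a basic (non-diffeomorphic) demons loop; this is an illustrative simplification of the paper's method:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates

def warp(img, disp):
    ys, xs = np.meshgrid(np.arange(img.shape[0]), np.arange(img.shape[1]),
                         indexing="ij")
    return map_coordinates(img, [ys + disp[0], xs + disp[1]],
                           order=1, mode="nearest")

def demons_adaptive(fixed, moving, max_iter=200, sigma=1.5, tol=1e-4):
    """Basic demons loop that stops adaptively when the grey-value
    similarity energy (mean squared difference) stops improving."""
    disp = np.zeros((2,) + fixed.shape)
    gy, gx = np.gradient(fixed)
    energy_prev = np.inf
    for _ in range(max_iter):
        warped = warp(moving, disp)
        diff = warped - fixed
        energy = float(np.mean(diff ** 2))
        if energy_prev - energy < tol * energy_prev:   # adaptive stop
            break
        energy_prev = energy
        denom = gy ** 2 + gx ** 2 + diff ** 2
        denom[denom == 0] = 1.0
        # demons force, regularized by Gaussian smoothing of the field
        disp[0] = gaussian_filter(disp[0] - diff * gy / denom, sigma)
        disp[1] = gaussian_filter(disp[1] - diff * gx / denom, sigma)
    return disp, energy

# toy same-modality pair: a Gaussian blob and a translated copy of it
ys, xs = np.meshgrid(np.arange(64), np.arange(64), indexing="ij")
fixed = np.exp(-((ys - 32) ** 2 + (xs - 32) ** 2) / 50.0)
moving = np.exp(-((ys - 35) ** 2 + (xs - 30) ** 2) / 50.0)

disp, final_energy = demons_adaptive(fixed, moving)
```

    On this toy pair the loop stops on its own once the energy plateaus, with the similarity energy substantially below its starting value.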

  5. Adaptive Diffeomorphic Multiresolution Demons and Their Application to Same Modality Medical Image Registration with Large Deformation.

    PubMed

    Wang, Chang; Ren, Qiongqiong; Qin, Xin; Yu, Yi

    2018-01-01

    Diffeomorphic demons can guarantee smooth and reversible deformation and avoid unreasonable deformation. However, the number of iterations needs to be set manually, and this greatly influences the registration result. In order to solve this problem, we proposed adaptive diffeomorphic multiresolution demons in this paper. We used an optimized framework with a nonrigid registration and diffeomorphism strategy, designed a similarity energy function based on grey value, and stopped iterations adaptively. The method was tested on synthetic images and same-modality medical images. Large deformation was simulated by rotational distortion and extrusion transforms, medical image registration with large deformation was performed, and quantitative analyses were conducted using registration evaluation indexes; the influence of different driving forces and parameters on the registration result was also analyzed. The registration results for same-modality medical images were compared with those obtained using active demons, additive demons, and diffeomorphic demons. Quantitative analyses showed that the proposed method's normalized cross-correlation coefficient and structural similarity were the highest and its mean square error was the lowest. Medical image registration with large deformation could be performed successfully, and evaluation indexes remained stable with an increase in deformation strength. The proposed method is effective and robust, and it can be applied to nonrigid registration of same-modality medical images with large deformation.

  6. Fundamental limits of image registration performance: Effects of image noise and resolution in CT-guided interventions.

    PubMed

    Ketcha, M D; de Silva, T; Han, R; Uneri, A; Goerres, J; Jacobson, M; Vogt, S; Kleinszig, G; Siewerdsen, J H

    2017-02-11

    In image-guided procedures, image acquisition is often performed primarily for the task of geometrically registering information from another image dataset, rather than for detection or visualization of a particular feature. While the ability to detect a particular feature in an image has been studied extensively with respect to image quality characteristics (noise, resolution) and remains an active area of research, comparatively little has been done to relate such image quality characteristics to registration performance. To establish such a framework, we derived Cramer-Rao lower bounds (CRLB) for registration accuracy, revealing the underlying dependencies on image variance and gradient strength. The CRLB was analyzed as a function of image quality factors (in particular, dose) for various similarity metrics and compared to registration accuracy using CT images of an anthropomorphic head phantom at various simulated dose levels. Performance was evaluated in terms of the root mean square error (RMSE) of the registration parameters. Analysis of the CRLB shows two primary dependencies: (1) noise variance (related to dose); and (2) the sum of squared image gradients (related to spatial resolution and image content). Comparison of the measured RMSE to the CRLB showed that, for the best registration method, the RMSE achieved the CRLB to within an efficiency factor of 0.21, and optimal estimators followed the predicted inverse proportionality between registration performance and radiation dose. Analysis of the CRLB for image registration is an important step toward understanding and evaluating an intraoperative imaging system with respect to a registration task. While the CRLB is optimistic in absolute performance, it reveals a basis for relating the performance of registration estimators as a function of noise content and may be used to guide acquisition parameter selection (e.g., dose) for purposes of intraoperative registration.
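    The key dependency, a bound proportional to noise variance divided by the summed squared image gradient, can be illustrated for 1D translation estimation; the profile and noise values below are toy stand-ins:

```python
import numpy as np

# 1D toy profile standing in for an image row; translation is the parameter
x = np.linspace(-5.0, 5.0, 512)
profile = np.exp(-x ** 2)
grad = np.gradient(profile, x)

def crlb_translation(noise_var):
    # Cramer-Rao lower bound on the variance of the translation estimate:
    # noise variance divided by the summed squared image gradient
    return noise_var / np.sum(grad ** 2)

# noise variance scales inversely with dose, so halving it ("doubling dose")
# halves the achievable registration variance
low_dose, high_dose = crlb_translation(0.02), crlb_translation(0.01)
```

    This makes the two dependencies in the abstract explicit: the bound rises linearly with noise variance and falls as image gradients (sharpness, content) strengthen.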

  7. Fully-integrated framework for the segmentation and registration of the spinal cord white and gray matter.

    PubMed

    Dupont, Sara M; De Leener, Benjamin; Taso, Manuel; Le Troter, Arnaud; Nadeau, Sylvie; Stikov, Nikola; Callot, Virginie; Cohen-Adad, Julien

    2017-04-15

    The spinal cord white and gray matter can be affected by various pathologies such as multiple sclerosis, amyotrophic lateral sclerosis or trauma. Being able to precisely segment the white and gray matter could help with MR image analysis and hence be useful in further understanding these pathologies, helping with diagnosis/prognosis, and aiding drug development. To date, white/gray matter segmentation has mostly been done manually, which is time consuming, induces a rater-related bias and prevents large-scale multi-center studies. Recently, a few methods have been proposed to automatically segment the spinal cord white and gray matter. However, no single method exists that combines the following criteria: (i) fully automatic, (ii) works on various MRI contrasts, (iii) robust towards pathology and (iv) freely available and open source. In this study we propose a multi-atlas based method for the segmentation of the spinal cord white and gray matter that addresses the previous limitations. Moreover, to study the spinal cord morphology, atlas-based approaches are increasingly used. These approaches rely on the registration of a spinal cord template to an MR image; however, the registration usually does not take into account the spinal cord internal structure and thus lacks accuracy. In this study, we propose a new template registration framework that integrates the white and gray matter segmentation to account for the specific gray matter shape of each individual subject. Validation of the segmentation was performed in 24 healthy subjects using T2*-weighted images, in 8 healthy subjects using diffusion-weighted images (exhibiting inverted white-to-gray matter contrast compared to T2*-weighted), and in 5 patients with spinal cord injury. The template registration was validated in 24 subjects using T2*-weighted data. Results of automatic segmentation on T2*-weighted images were in close correspondence with the manual segmentation (Dice coefficients in the white/gray matter of 0.91/0.71, respectively). Similarly, good results were obtained in data with inverted contrast (diffusion-weighted images) and in patients. When compared to the classical template registration framework, the proposed framework that accounts for gray matter shape significantly improved the quality of the registration (comparing Dice coefficients in gray matter: p = 9.5 × 10⁻⁶). While further validation is needed to show the benefits of the new registration framework in large cohorts and in a variety of patients, this study provides a fully-integrated tool for quantitative assessment of white/gray matter morphometry and template-based analysis. All the proposed methods are implemented in the Spinal Cord Toolbox (SCT), an open-source software package for processing spinal cord multi-parametric MRI data. Copyright © 2017 Elsevier Inc. All rights reserved.
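    The Dice coefficient used in this validation is straightforward to compute from two binary masks; a minimal sketch:

```python
import numpy as np

def dice(seg_a, seg_b):
    """Dice overlap between two binary segmentation masks:
    twice the intersection divided by the sum of the two mask sizes."""
    a, b = seg_a.astype(bool), seg_b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())
```

    Identical masks score 1.0, disjoint masks 0.0; the 0.91/0.71 white/gray matter figures above sit on this scale.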

  8. Object-constrained meshless deformable algorithm for high speed 3D nonrigid registration between CT and CBCT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen Ting; Kim, Sung; Goyal, Sharad

    2010-01-15

    Purpose: High-speed nonrigid registration between the planning CT and the treatment CBCT data is critical for real-time image guided radiotherapy (IGRT) to improve the dose distribution and to reduce the toxicity to adjacent organs. The authors propose a new fully automatic 3D registration framework that integrates object-based global and seed constraints with the grayscale-based 'demons' algorithm. Methods: Clinical objects were segmented on the planning CT images and were utilized as meshless deformable models during the nonrigid registration process. The meshless models reinforced a global constraint, in addition to the grayscale difference between CT and CBCT, in order to maintain the shape and the volume of geometrically complex 3D objects during the registration. To expedite the registration process, the framework was stratified into hierarchies, and the authors used a frequency domain formulation to diffuse the displacement between the reference and the target in each hierarchy. Also, during the registration of pelvis images, they replaced the air region inside the rectum with estimated pixel values from the surrounding rectal wall and introduced an additional seed constraint to robustly track and match the seeds implanted into the prostate. The proposed registration framework and algorithm were evaluated on 15 real prostate cancer patients. For each patient, the prostate gland, seminal vesicles, bladder, and rectum were first segmented by a radiation oncologist on planning CT images for radiotherapy planning purposes. The same radiation oncologist also manually delineated the tumor volumes and critical anatomical structures in the corresponding CBCT images acquired at treatment. These delineated structures on the CBCT were used only as the ground truth for the quantitative validation, while structures on the planning CT were used both as the input to the registration method and as the ground truth in validation. By registering the planning CT to the CBCT, a displacement map was generated. Segmented volumes in the CT images, deformed using the displacement field, were compared against the manual segmentations in the CBCT images to quantitatively measure the convergence of the shape and the volume. Other image features were also used to evaluate the overall performance of the registration. Results: The algorithm was able to complete the segmentation and registration process within 1 min, and the superimposed clinical objects achieved a volumetric similarity measure of over 90% between the reference and the registered data. Validation results also showed that the proposed registration could accurately trace the deformation inside the target volume with average errors of less than 1 mm. The method had a solid performance in registering the simulated images with up to 20 Hounsfield units of white noise added. Also, a side-by-side comparison with the original demons algorithm demonstrated its improved registration performance over local pixel-based registration approaches. Conclusions: Given the strength and efficiency of the algorithm, the proposed method has significant clinical potential to accelerate and improve CBCT delineation and target tracking in online IGRT applications.
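    The frequency-domain diffusion of the displacement field mentioned above amounts to multiplying the field's spectrum by a Gaussian low-pass filter; a minimal sketch (the sigma value and periodic boundary handling are illustrative assumptions):

```python
import numpy as np

def diffuse_fft(field, sigma):
    """Diffuse (Gaussian-smooth) one displacement component by
    multiplying its 2D spectrum with a Gaussian low-pass filter."""
    ky = np.fft.fftfreq(field.shape[0])
    kx = np.fft.fftfreq(field.shape[1])
    KY, KX = np.meshgrid(ky, kx, indexing="ij")
    # Fourier transform of a Gaussian of std sigma: exp(-2 pi^2 sigma^2 k^2)
    lowpass = np.exp(-2.0 * (np.pi * sigma) ** 2 * (KY ** 2 + KX ** 2))
    return np.fft.ifft2(np.fft.fft2(field) * lowpass).real
```

    Because the filter equals 1 at zero frequency, the mean displacement is preserved while high-frequency (non-smooth) components are attenuated, which is the regularizing effect the demons update relies on.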

  9. Intermediate Templates Guided Groupwise Registration of Diffusion Tensor Images

    PubMed Central

    Jia, Hongjun; Yap, Pew-Thian; Wu, Guorong; Wang, Qian; Shen, Dinggang

    2010-01-01

    Registration of a population of diffusion tensor images (DTIs) is one of the key steps in medical image analysis, and it plays an important role in the statistical analysis of white matter related neurological diseases. However, pairwise registration with respect to a pre-selected template may not give precise results if the selected template deviates significantly from the distribution of images. To cater for more accurate and consistent registration, a novel framework is proposed for groupwise registration with the guidance from one or more intermediate templates determined from the population of images. Specifically, we first use a Euclidean distance, defined as a combinative measure based on the FA map and ADC map, for gauging the similarity of each pair of DTIs. A fully connected graph is then built with each node denoting an image and each edge denoting the distance between a pair of images. The root template image is determined automatically as the image with the overall shortest path length to all other images on the minimum spanning tree (MST) of the graph. Finally, a sequence of registration steps is applied to progressively warping each image towards the root template image with the help of intermediate templates distributed along its path to the root node on the MST. Extensive experimental results using diffusion tensor images of real subjects indicate that registration accuracy and fiber tract alignment are significantly improved, compared with the direct registration from each image to the root template image. PMID:20851197
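    The template-selection step, building a fully connected graph of pairwise distances, taking its minimum spanning tree, and choosing the root with the shortest total path length, can be sketched with SciPy; the random feature vectors below stand in for FA/ADC-based distance computations:

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree, shortest_path

rng = np.random.default_rng(1)
feats = rng.normal(size=(6, 16))           # stand-ins for per-image FA/ADC features
dist = np.linalg.norm(feats[:, None] - feats[None, :], axis=-1)

mst = minimum_spanning_tree(dist)          # sparse representation of the MST
# path lengths along the tree (edges treated as undirected)
lengths, predecessors = shortest_path(mst, directed=False,
                                      return_predecessors=True)

# root template: image with the shortest total path length to all others
root = int(np.argmin(lengths.sum(axis=1)))

def path_to_root(i):
    """Intermediate templates visited when warping image i toward the root."""
    path = [i]
    while path[-1] != root:
        path.append(int(predecessors[root, path[-1]]))
    return path
```

    Each image is then progressively warped through the intermediate templates on its MST path rather than directly to the root, which is what the abstract reports improves accuracy and fiber-tract alignment.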

  10. Performance of U-net based pyramidal lucas-kanade registration on free-breathing multi-b-value diffusion MRI of the kidney.

    PubMed

    Lv, Jun; Huang, Wenjian; Zhang, Jue; Wang, Xiaoying

    2018-06-01

    In free-breathing multi-b-value diffusion-weighted imaging (DWI), a series of images typically requires several minutes to collect. During respiration the kidney is routinely displaced and may also undergo deformation. These respiratory motion effects generate artifacts and these are the main sources of error in the quantification of intravoxel incoherent motion (IVIM) derived parameters. This work proposes a fully automated framework that combines a kidney segmentation to improve the registration accuracy. 10 healthy subjects were recruited to participate in this experiment. For the segmentation, U-net was adopted to acquire the kidney's contour. The segmented kidney then served as a region of interest (ROI) for the registration method, known as pyramidal Lucas-Kanade. Our proposed framework confines the kidney's solution range, thus increasing the pyramidal Lucas-Kanade's accuracy. To demonstrate the feasibility of our presented framework, eight regions of interest were selected in the cortex and medulla, and data stability was estimated by comparing the normalized root-mean-square error (NRMSE) values of the fitted data from the bi-exponential intravoxel incoherent motion model pre- and post- registration. The results show that the NRMSE was significantly lower after registration both in the cortex (p < 0.05) and medulla (p < 0.01) during free-breathing measurements. In addition, expert visual scoring of the derived apparent diffusion coefficient (ADC), f, D and D* maps indicated there were significant improvements in the alignment of the kidney in the post-registered image. The proposed framework can effectively reduce the motion artifacts of misaligned multi-b-value DWIs and the inaccuracies of the ADC, f, D and D* estimations. 
Advances in knowledge: This study demonstrates the feasibility of the proposed fully automated framework, combining U-net based segmentation with pyramidal Lucas-Kanade registration, for improving the alignment of multi-b-value diffusion-weighted MRIs and reducing parameter estimation inaccuracy during free breathing.
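    The core of the registration step can be illustrated with a single-level Lucas-Kanade estimate of a global translation, written in plain NumPy. This is a simplified sketch; the paper's method is pyramidal and confined to the segmented kidney ROI:

```python
import numpy as np

def lk_translation(im1, im2):
    """Single-level Lucas-Kanade: solve  [Ix Iy] d = -(im2 - im1)
    in the least-squares sense for one global displacement d = (dx, dy)."""
    Iy, Ix = np.gradient(im1)                  # np.gradient: axis0 (y) first
    It = im2 - im1
    A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)
    b = -It.ravel()
    d, *_ = np.linalg.lstsq(A, b, rcond=None)
    return d                                   # (dx, dy)

# Synthetic smooth image and a copy whose content is shifted +0.4 px in x.
y, x = np.mgrid[0:64, 0:64]
blob = lambda x0: np.exp(-((x - x0)**2 + (y - 32)**2) / 50.0)
dx, dy = lk_translation(blob(32.0), blob(32.4))
```

A pyramidal variant would repeat this estimate coarse-to-fine, warping `im2` by the running estimate at each level.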

  11. Active edge maps for medical image registration

    NASA Astrophysics Data System (ADS)

    Kerwin, William; Yuan, Chun

    2001-07-01

    Applying edge detection prior to performing image registration yields several advantages over raw intensity-based registration. Advantages include the ability to register multicontrast or multimodality images, immunity to intensity variations, and the potential for computationally efficient algorithms. In this work, a common framework for edge-based image registration is formulated as an adaptation of the snakes used in boundary detection. Called active edge maps, the new formulation finds a one-to-one transformation T(x) that maps points in a source image to corresponding locations in a target image using an energy minimization approach. The energy consists of an image component that is small when edge features are well matched in the two images, and an internal term that restricts T(x) to allowable configurations. The active edge map formulation is illustrated here with a specific example developed for affine registration of carotid artery magnetic resonance images. In this example, edges are identified using a gradient-magnitude operator, image energy is determined using a Gaussian-weighted distance function, and the internal energy includes separate, adjustable components that control volume preservation and rigidity.
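    The two-term energy can be sketched on edge point sets. This is a hypothetical simplification: the paper's internal energy has separate volume-preservation and rigidity components, which the single `alpha` deviation-from-identity penalty below only approximates:

```python
import numpy as np

def edge_map_energy(src_edges, tgt_edges, A, t, sigma=2.0, alpha=0.1):
    """Image term: Gaussian-weighted distance from each transformed source
    edge point to its nearest target edge point; internal term: penalty on
    deviation of the affine map (A, t) from the identity (rigidity proxy)."""
    warped = src_edges @ A.T + t
    d2 = ((warped[:, None, :] - tgt_edges[None, :, :])**2).sum(-1)
    nearest = d2.min(axis=1)                       # squared nearest distances
    image_term = np.mean(1.0 - np.exp(-nearest / (2 * sigma**2)))
    internal = alpha * (np.linalg.norm(A - np.eye(2))**2 + t @ t)
    return image_term + internal

pts = np.array([[0., 0.], [1., 0.], [1., 1.], [0., 1.]])
e_id = edge_map_energy(pts, pts, np.eye(2), np.zeros(2))       # zero energy
e_shift = edge_map_energy(pts, pts, np.eye(2), np.array([3., 0.]))
```

Minimizing this energy over (A, t), e.g. with a generic optimizer, recovers the affine alignment.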

  12. Hierarchical and symmetric infant image registration by robust longitudinal-example-guided correspondence detection

    PubMed Central

    Wu, Yao; Wu, Guorong; Wang, Li; Munsell, Brent C.; Wang, Qian; Lin, Weili; Feng, Qianjin; Chen, Wufan; Shen, Dinggang

    2015-01-01

    Purpose: To investigate anatomical differences across individual subjects, or longitudinal changes in early brain development, it is important to perform accurate image registration. However, due to fast brain development and dynamic tissue appearance changes, it is very difficult to align infant brain images acquired between birth and one year of age. Methods: To solve this challenging problem, a novel image registration method is proposed to align two infant brain images, regardless of age at acquisition. The main idea is to utilize the growth trajectories, or spatial-temporal correspondences, learned from a set of longitudinal training images, to guide the registration of two different time-point images with different image appearances. Specifically, in the training stage, an intrinsic growth trajectory is first estimated for each training subject using the longitudinal images. To register two new infant images with a potentially large age gap, the corresponding image patches between each new image and its respective training images of similar age are identified. Finally, the registration between the two new images can be assisted by the learned growth trajectories from one time point to another that were established in the training stage. To further improve registration accuracy, the proposed method is combined with a hierarchical and symmetric registration framework that can iteratively add new key points in both images to steer the estimation of the deformation between the two infant brain images under registration. Results: To evaluate image registration accuracy, the proposed method was used to align 24 infant subjects at five different time points (2-week-old, 3-month-old, 6-month-old, 9-month-old, and 12-month-old). Compared to the state-of-the-art methods, the proposed method demonstrated superior registration performance.
Conclusions: The proposed method addresses the difficulties of infant brain registration and produces better results than existing state-of-the-art registration methods. PMID:26133617

  13. Global image registration using a symmetric block-matching approach

    PubMed Central

    Modat, Marc; Cash, David M.; Daga, Pankaj; Winston, Gavin P.; Duncan, John S.; Ourselin, Sébastien

    2014-01-01

    Most medical image registration algorithms suffer from a directionality bias that has been shown to substantially affect subsequent analyses. Several approaches have been proposed in the literature to address this bias in the context of nonlinear registration, but little work has been done for global registration. We propose a symmetric approach based on a block-matching technique and least-trimmed square regression. The proposed method is suitable for multimodal registration and is robust to outliers in the input images. The symmetric framework is compared with the original asymmetric block-matching technique and is shown to outperform it in terms of accuracy and robustness. The methodology presented in this article has been made available to the community as part of the NiftyReg open-source package. PMID:26158035
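    The least-trimmed-squares idea can be illustrated for the simplest global transformation, a translation estimated from matched block centres. This is a sketch of the robust-regression principle, not the NiftyReg implementation:

```python
import numpy as np

def lts_translation(src, dst, trim=0.5, iters=10):
    """Least-trimmed-squares estimate of a global translation from matched
    block centres: repeatedly refit on the h pairs with smallest residuals,
    so gross outlier matches are ignored."""
    disp = dst - src
    h = max(2, int(trim * len(disp)))
    t = np.median(disp, axis=0)                  # robust starting point
    for _ in range(iters):
        r = np.linalg.norm(disp - t, axis=1)
        keep = np.argsort(r)[:h]                 # trim the worst residuals
        t = disp[keep].mean(axis=0)
    return t

rng = np.random.default_rng(0)
src = rng.uniform(0, 100, size=(40, 2))
dst = src + np.array([5.0, -3.0]) + rng.normal(0, 0.1, size=(40, 2))
dst[:10] += rng.uniform(20, 40, size=(10, 2))    # 25% gross outlier matches
t_hat = lts_translation(src, dst)
```

A symmetric variant would estimate the transform in both directions and compose the two half-transforms, removing the directionality bias discussed above.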

  14. Accurate quantification of local changes for carotid arteries in 3D ultrasound images using convex optimization-based deformable registration

    NASA Astrophysics Data System (ADS)

    Cheng, Jieyu; Qiu, Wu; Yuan, Jing; Fenster, Aaron; Chiu, Bernard

    2016-03-01

    Registration of longitudinally acquired 3D ultrasound (US) images plays an important role in monitoring and quantifying progression/regression of carotid atherosclerosis. We introduce an image-based non-rigid registration algorithm to align the baseline 3D carotid US with longitudinal images acquired over several follow-up time points. This algorithm minimizes the sum of absolute intensity differences (SAD) under a variational optical-flow perspective within a multi-scale optimization framework to capture local and global deformations. The outer wall and lumen were segmented manually on each image, and the performance of the registration algorithm was quantified by the Dice similarity coefficient (DSC) and mean absolute distance (MAD) of the outer wall and lumen surfaces after registration. In this study, images for 5 subjects were registered initially by rigid registration, followed by the proposed algorithm. Mean DSC generated by the proposed algorithm was 79.3±3.8% for the lumen and 85.9±4.0% for the outer wall, compared to 73.9±3.4% and 84.7±3.2% generated by rigid registration. Mean MADs of 0.46±0.08 mm and 0.52±0.13 mm were generated for the lumen and outer wall, respectively, by the proposed algorithm, compared to 0.55±0.08 mm and 0.54±0.11 mm generated by rigid registration. The mean registration time of our method per image pair was 143±23 s.
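    The DSC evaluation metric used above is standard; a minimal implementation for binary masks looks like this:

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks:
    2 * |A and B| / (|A| + |B|)."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

a = np.zeros((10, 10), bool); a[2:8, 2:8] = True   # 36 px
b = np.zeros((10, 10), bool); b[4:8, 2:8] = True   # 24 px, all inside a
d = dice(a, b)   # 2*24 / (36+24) = 0.8
```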

  15. Investigation of 3D histograms of oriented gradients for image-based registration of CT with interventional CBCT

    NASA Astrophysics Data System (ADS)

    Trimborn, Barbara; Wolf, Ivo; Abu-Sammour, Denis; Henzler, Thomas; Schad, Lothar R.; Zöllner, Frank G.

    2017-03-01

    Image registration of preprocedural contrast-enhanced CTs to intraprocedural cone-beam computed tomography (CBCT) can provide additional information for interventional liver oncology procedures such as transcatheter arterial chemoembolisation (TACE). In this paper, a novel similarity metric for gradient-based image registration is proposed. The metric relies on the patch-based computation of histograms of oriented gradients (HOG), forming the basis of a feature descriptor. The metric was implemented in a framework for rigid 3D-3D registration of pre-interventional CT with intra-interventional CBCT data obtained during the workflow of a TACE. To evaluate the performance of the new metric, the capture range was estimated based on the calculation of the mean target registration error and compared to the results obtained with a normalized cross-correlation metric. The results show that 3D HOG feature descriptors are suitable as an image-similarity metric and that the novel metric can compete with established methods in terms of registration accuracy.
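    A patch-level 3D HOG descriptor can be sketched as a magnitude-weighted 2-D histogram over gradient azimuth and elevation. This is an illustrative simplification of the paper's feature descriptor; the bin counts and normalisation are assumptions:

```python
import numpy as np

def hog3d(patch, n_az=8, n_el=4):
    """Magnitude-weighted histogram of 3-D gradient orientations
    (azimuth x elevation bins) for one image patch."""
    gz, gy, gx = np.gradient(patch.astype(float))
    mag = np.sqrt(gx**2 + gy**2 + gz**2)
    az = np.arctan2(gy, gx)                                     # [-pi, pi]
    el = np.arccos(np.clip(gz / np.maximum(mag, 1e-12), -1, 1)) # [0, pi]
    hist, *_ = np.histogram2d(az.ravel(), el.ravel(),
                              bins=[n_az, n_el],
                              range=[[-np.pi, np.pi], [0, np.pi]],
                              weights=mag.ravel())
    h = hist.ravel()
    return h / max(np.linalg.norm(h), 1e-12)    # L2-normalised descriptor

desc = hog3d(np.random.default_rng(1).normal(size=(9, 9, 9)))
```

A similarity metric along the lines described above could then compare descriptors of corresponding patches, e.g. by summed dot products.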

  16. Explicit B-spline regularization in diffeomorphic image registration

    PubMed Central

    Tustison, Nicholas J.; Avants, Brian B.

    2013-01-01

    Diffeomorphic mappings are central to image registration due largely to their topological properties and success in providing biologically plausible solutions to deformation and morphological estimation problems. Popular diffeomorphic image registration algorithms include those characterized by time-varying and constant velocity fields, as well as symmetric formulations. Prior information in the form of regularization is used to enforce transform plausibility, taking the form of physics-based constraints or some approximation thereof, e.g., Gaussian smoothing of the vector fields [a la Thirion's Demons (Thirion, 1998)]. In the context of the original Demons framework, the so-called directly manipulated free-form deformation (DMFFD) (Tustison et al., 2009) can be viewed as a smoothing alternative in which explicit regularization is achieved through fast B-spline approximation. This characterization can be used to provide B-spline “flavored” diffeomorphic image registration solutions with several advantages. Implementation is open source and available through the Insight Toolkit and our Advanced Normalization Tools (ANTs) repository. A thorough comparative evaluation with the well-known SyN algorithm (Avants et al., 2008), implemented within the same framework, and its B-spline analog is performed using open labeled brain data and open source evaluation tools. PMID:24409140
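    The Gaussian-smoothing regularization mentioned above, the Demons-style alternative to which DMFFD is compared, amounts to filtering each component of the displacement (or velocity) field:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def regularize_field(field, sigma=2.0):
    """Demons-style regularization: smooth each component of the
    displacement field with a Gaussian kernel."""
    return np.stack([gaussian_filter(c, sigma) for c in field])

rng = np.random.default_rng(0)
noisy = rng.normal(size=(2, 32, 32))    # 2-D field, components (uy, ux)
smooth = regularize_field(noisy)
```

The B-spline alternative replaces the Gaussian filter with a fast B-spline approximation of the field, which is what gives the explicit, tunable regularization described in the abstract.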

  17. Cortical surface registration using spherical thin-plate spline with sulcal lines and mean curvature as features.

    PubMed

    Park, Hyunjin; Park, Jun-Sung; Seong, Joon-Kyung; Na, Duk L; Lee, Jong-Min

    2012-04-30

    Analysis of cortical patterns requires accurate cortical surface registration. Many researchers map the cortical surface onto a unit sphere and perform registration of two images defined on the unit sphere. Here we have developed a novel registration framework for the cortical surface based on spherical thin-plate splines. Small-scale composition of spherical thin-plate splines was used as the geometric interpolant to avoid folding in the geometric transform. Using an automatic algorithm based on anisotropic skeletons, we extracted seven sulcal lines, which we then incorporated as landmark information. Mean curvature was chosen as an additional feature for matching between spherical maps. We employed a two-term cost function to encourage matching of both sulcal lines and the mean curvature between the spherical maps. Application of our registration framework to fifty pairwise registrations of T1-weighted MRI scans resulted in improved registration accuracy, which was computed from sulcal lines. Our registration approach was tested as an additional procedure to improve an existing surface registration algorithm. Our registration framework maintained an accurate registration over the sulcal lines while significantly increasing the cross-correlation of mean curvature between the spherical maps being registered. Copyright © 2012 Elsevier B.V. All rights reserved.

  18. Pairwise domain adaptation module for CNN-based 2-D/3-D registration.

    PubMed

    Zheng, Jiannan; Miao, Shun; Jane Wang, Z; Liao, Rui

    2018-04-01

    Accurate two-dimensional to three-dimensional (2-D/3-D) registration of preoperative 3-D data and intraoperative 2-D x-ray images is a key enabler for image-guided therapy. Recent advances in 2-D/3-D registration formulate the problem as a learning-based approach and exploit the modeling power of convolutional neural networks (CNN) to significantly improve the accuracy and efficiency of 2-D/3-D registration. However, for surgery-related applications, collecting a large clinical dataset with accurate annotations for training can be very challenging or impractical. Therefore, deep learning-based 2-D/3-D registration methods are often trained with synthetically generated data, and a performance gap is often observed when testing the trained model on clinical data. We propose a pairwise domain adaptation (PDA) module to adapt the model trained on the source domain (i.e., synthetic data) to the target domain (i.e., clinical data) by learning domain-invariant features from only a few paired real and synthetic examples. The PDA module is designed to be flexible for different deep learning-based 2-D/3-D registration frameworks, and it can be plugged into any pretrained CNN model as simply as a Batch-Norm layer. The proposed PDA module has been quantitatively evaluated on two clinical applications using different frameworks of deep networks, demonstrating its significant advantages of generalizability and flexibility for 2-D/3-D medical image registration when a small number of paired real-synthetic data can be obtained.

  19. On the usefulness of gradient information in multi-objective deformable image registration using a B-spline-based dual-dynamic transformation model: comparison of three optimization algorithms

    NASA Astrophysics Data System (ADS)

    Pirpinia, Kleopatra; Bosman, Peter A. N.; Sonke, Jan-Jakob; van Herk, Marcel; Alderliesten, Tanja

    2015-03-01

    The use of gradient information is well-known to be highly useful in single-objective optimization-based image registration methods. However, its usefulness has not yet been investigated for deformable image registration from a multi-objective optimization perspective. To this end, within a previously introduced multi-objective optimization framework, we use a smooth B-spline-based dual-dynamic transformation model that allows us to derive gradient information analytically, while still being able to account for large deformations. Within the multi-objective framework, we previously employed a powerful evolutionary algorithm (EA) that computes and advances multiple outcomes at once, resulting in a set of solutions (a so-called Pareto front) that represents efficient trade-offs between the objectives. With the addition of the B-spline-based transformation model, we studied the usefulness of gradient information in multi-objective deformable image registration using three different optimization algorithms: the (gradient-less) EA, a gradient-only algorithm, and a hybridization of these two. We evaluated the algorithms to register highly deformed images: 2D MRI slices of the breast in prone and supine positions. Results demonstrate that gradient-based multi-objective optimization significantly speeds up convergence in the initial stages of optimization. Given sufficient computational resources, however, better results could still be obtained with the EA. Ultimately, the hybrid EA found the best overall approximation of the optimal Pareto front, further indicating that adding gradient-based optimization to multi-objective optimization-based deformable image registration can indeed be beneficial.
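    Extracting the Pareto front from a set of candidate solutions is straightforward; a brute-force sketch for a minimization problem (an illustration of the concept, not the EA itself):

```python
import numpy as np

def pareto_front(costs):
    """Indices of non-dominated points (minimisation in every objective).
    A point is dominated if some other point is no worse in all objectives
    and strictly better in at least one."""
    costs = np.asarray(costs, float)
    keep = []
    for i in range(len(costs)):
        dominated = np.any(np.all(costs <= costs[i], axis=1) &
                           np.any(costs < costs[i], axis=1))
        if not dominated:
            keep.append(i)
    return keep

# Two objectives, e.g. image dissimilarity vs deformation magnitude.
pts = np.array([[1.0, 5.0], [2.0, 2.0], [5.0, 1.0], [4.0, 4.0]])
front = pareto_front(pts)   # [0, 1, 2]; point 3 is dominated by point 1
```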

  20. Temporal subtraction contrast-enhanced dedicated breast CT

    NASA Astrophysics Data System (ADS)

    Gazi, Peymon M.; Aminololama-Shakeri, Shadi; Yang, Kai; Boone, John M.

    2016-09-01

    The development of a framework for deformable image registration and segmentation for temporal subtraction contrast-enhanced breast CT is described. An iterative histogram-based two-means clustering method was used for the segmentation. Dedicated breast CT images were segmented into background (air), adipose, fibroglandular and skin components. Fibroglandular tissue was classified as either normal or contrast-enhanced, then divided into tiers to categorize degrees of contrast enhancement. A variant of the Demons deformable registration algorithm, intensity difference adaptive Demons (IDAD), was developed to correct for the large deformation forces that stemmed from contrast enhancement. In this application, the accuracy of the proposed method was evaluated in both mathematically simulated and physically acquired phantom images. Clinical usage and accuracy of the temporal subtraction framework were demonstrated using contrast-enhanced breast CT datasets from five patients. Registration performance was quantified using normalized cross correlation (NCC), symmetric uncertainty coefficient, normalized mutual information (NMI), mean square error (MSE) and target registration error (TRE). The proposed method outperformed conventional affine and other Demons variations in contrast-enhanced breast CT image registration. In simulation studies, IDAD exhibited improvement in MSE (0-16%), NCC (0-6%), NMI (0-13%) and TRE (0-34%) compared to the conventional Demons approaches, depending on the size and intensity of the enhancing lesion. As lesion size and contrast enhancement levels increased, so did the improvement. The drop in the correlation between the pre- and post-contrast images for the largest enhancement levels in phantom studies is less than 1.2% (150 Hounsfield units). Registration error, measured by TRE, shows only submillimeter mismatches between the concordant anatomical target points in all patient studies.
The algorithm was implemented using a parallel processing architecture resulting in rapid execution time for the iterative segmentation and intensity-adaptive registration techniques. Characterization of contrast-enhanced lesions is improved using temporal subtraction contrast-enhanced dedicated breast CT. Adaptation of Demons registration forces as a function of contrast-enhancement levels provided a means to accurately align breast tissue in pre- and post-contrast image acquisitions, improving subtraction results. Spatial subtraction of the aligned images yields useful diagnostic information with respect to enhanced lesion morphology and uptake.
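    The classic per-voxel Demons force that IDAD builds on can be written as follows. The intensity-difference adaptation itself is specific to the paper and not reproduced here; this sketch shows only the conventional update the variants share:

```python
import numpy as np

def demons_force(fixed, moving, eps=1e-9):
    """Classic Demons update at each voxel (2-D case):
        u = (m - f) * grad(f) / (|grad f|^2 + (m - f)^2)
    An intensity-adaptive variant would rescale this force where large
    differences stem from contrast enhancement rather than motion."""
    gy, gx = np.gradient(fixed)
    diff = moving - fixed
    denom = gx**2 + gy**2 + diff**2 + eps       # eps avoids 0/0
    return np.stack([diff * gy / denom, diff * gx / denom])

f = np.random.default_rng(2).normal(size=(16, 16))
u_zero = demons_force(f, f)     # identical images -> zero force everywhere
```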

  1. Joint tumor segmentation and dense deformable registration of brain MR images.

    PubMed

    Parisot, Sarah; Duffau, Hugues; Chemouny, Stéphane; Paragios, Nikos

    2012-01-01

    In this paper we propose a novel graph-based concurrent registration and segmentation framework. Registration is modeled with a pairwise graphical model formulation that is modular with respect to the data and regularization terms. Segmentation is addressed by adopting a similar graphical model, using image-based classification techniques while producing a smooth solution. The two problems are coupled via a relaxation of the registration criterion in the presence of tumors, as well as a segmentation-through-registration term aiming at separating healthy from diseased tissue. Efficient linear programming is used to solve both problems simultaneously. State-of-the-art results demonstrate the potential of our method on a large and challenging low-grade glioma data set.

  2. Elastic registration of prostate MR images based on state estimation of dynamical systems

    NASA Astrophysics Data System (ADS)

    Marami, Bahram; Ghoul, Suha; Sirouspour, Shahin; Capson, David W.; Davidson, Sean R. H.; Trachtenberg, John; Fenster, Aaron

    2014-03-01

    Magnetic resonance imaging (MRI) is being increasingly used for image-guided biopsy and focal therapy of prostate cancer. A combined rigid and deformable registration technique is proposed to register pre-treatment diagnostic 3T magnetic resonance (MR) images, with the identified target tumor(s), to the intra-treatment 1.5T MR images. The pre-treatment 3T images are acquired with patients in a strictly supine position using an endorectal coil, while the 1.5T images are obtained intra-operatively just before insertion of the ablation needle with patients in the lithotomy position. An intensity-based registration routine rigidly aligns the two images, with the transformation parameters initialized using three pairs of manually selected approximate corresponding points. The rigid registration is followed by a deformable registration algorithm employing a generic dynamic linear elastic deformation model discretized by the finite element method (FEM). The model is used in a classical state estimation framework to estimate the deformation of the prostate based on a similarity metric between pre- and intra-treatment images. Registration results using 10 sets of prostate MR images showed that the proposed method can significantly improve registration accuracy in terms of target registration error (TRE) for all prostate substructures. The root mean square (RMS) TRE of 46 manually identified fiducial points was found to be 2.40±1.20 mm, 2.51±1.20 mm, and 2.28±1.22 mm for the whole gland (WG), central gland (CG), and peripheral zone (PZ), respectively, after deformable registration. These values improved from 3.15±1.60 mm, 3.09±1.50 mm, and 3.20±1.73 mm in the WG, CG and PZ, respectively, obtained with rigid registration. Registration results are also evaluated based on the Dice similarity coefficient (DSC), mean absolute surface distance (MAD) and maximum absolute surface distance (MAXD) of the WG and CG in the prostate images.
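    The RMS TRE reported above is a simple statistic over fiducial pairs:

```python
import numpy as np

def rms_tre(fixed_pts, warped_pts):
    """Root-mean-square target registration error over fiducial pairs."""
    d2 = ((fixed_pts - warped_pts)**2).sum(axis=1)
    return np.sqrt(d2.mean())

fixed = np.array([[0., 0., 0.], [10., 0., 0.], [0., 10., 0.]])
warped = fixed + np.array([1., 0., 0.])   # uniform 1 mm residual shift
tre = rms_tre(fixed, warped)              # 1.0 mm
```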

  3. Slice-to-Volume Nonrigid Registration of Histological Sections to MR Images of the Human Brain

    PubMed Central

    Osechinskiy, Sergey; Kruggel, Frithjof

    2011-01-01

    Registration of histological images to three-dimensional imaging modalities is an important step in quantitative analysis of brain structure, in architectonic mapping of the brain, and in investigation of the pathology of a brain disease. Reconstruction of histology volume from serial sections is a well-established procedure, but it does not address registration of individual slices from sparse sections, which is the aim of the slice-to-volume approach. This study presents a flexible framework for intensity-based slice-to-volume nonrigid registration algorithms with a geometric transformation deformation field parametrized by various classes of spline functions: thin-plate splines (TPS), Gaussian elastic body splines (GEBS), or cubic B-splines. Algorithms are applied to cross-modality registration of histological and magnetic resonance images of the human brain. Registration performance is evaluated across a range of optimization algorithms and intensity-based cost functions. For a particular case of histological data, best results are obtained with a TPS three-dimensional (3D) warp, a new unconstrained optimization algorithm (NEWUOA), and a correlation-coefficient-based cost function. PMID:22567290

  4. SU-E-J-92: CERR: New Tools to Analyze Image Registration Precision.

    PubMed

    Apte, A; Wang, Y; Oh, J; Saleh, Z; Deasy, J

    2012-06-01

    To present new tools in CERR (the Computational Environment for Radiotherapy Research) to analyze image registration, and other software updates/additions. CERR continues to be a key environment (cited more than 129 times to date) for numerous RT-research studies involving outcomes modeling, prototyping algorithms for segmentation and registration, experiments with phantom dosimetry, IMRT research, etc. Image registration is one of the key technologies required in many research studies. CERR has been interfaced with popular image registration frameworks like Plastimatch and ITK. Once the images have been auto-registered, CERR provides tools to analyze the accuracy of registration using the following innovative approaches: (1) Distance Discordance Histograms (DDH), described in detail in a separate paper, and (2) 'MirrorScope', explained as follows: for any view plane, the 2D image is broken up into a 2D grid of medium-sized squares. Each square contains a right half, which is the reference image, and a left half, which is the mirror-flipped version of the overlay image. The user can increase or decrease the size of this grid to control the resolution of the analysis. Other updates to CERR include tools to extract image and dosimetric features programmatically, with storage in a central database, and tools to interface with statistical analysis software like SPSS and the Matlab Statistics toolbox. MirrorScope was evaluated on various examples, including 'perfect' registrations and 'artificially translated' registrations. For 'perfect' registration, the patterns obtained within each square are symmetric and easily, visually recognized as aligned. For registrations that are off, the patterns obtained in the squares located in the regions of imperfection are asymmetric and easily recognized. The new updates to CERR further increase its utility for RT-research.
MirrorScope is a visually intuitive method of monitoring the accuracy of image registration that improves on the visual confusion of standard methods. © 2012 American Association of Physicists in Medicine.
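    The MirrorScope compositing rule can be sketched directly; the cell size here is an assumption (the tool lets the user vary the grid resolution):

```python
import numpy as np

def mirrorscope(reference, overlay, cell=16):
    """Tile the view into cells; in each cell keep the right half of the
    reference and replace the left half with the mirror-flipped overlay.
    Perfectly registered images yield left-right symmetric cells."""
    out = reference.copy()
    h, w = reference.shape
    for r in range(0, h - cell + 1, cell):
        for c in range(0, w - cell + 1, cell):
            flipped = overlay[r:r + cell, c:c + cell][:, ::-1]
            out[r:r + cell, c:c + cell // 2] = flipped[:, :cell // 2]
    return out

img = np.random.default_rng(3).normal(size=(64, 64))
view = mirrorscope(img, img)    # identical images -> every cell symmetric
```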

  5. Fusion of cone-beam CT and 3D photographic images for soft tissue simulation in maxillofacial surgery

    NASA Astrophysics Data System (ADS)

    Chung, Soyoung; Kim, Joojin; Hong, Helen

    2016-03-01

    During maxillofacial surgery, prediction of the facial outcome after surgery is a main concern for both surgeons and patients. However, registration of facial CBCT images and 3D photographic images is difficult: regions around the eyes and mouth are affected by facial expressions, and registration is slow due to the dense point clouds on the surfaces. Therefore, we propose a framework for the fusion of facial CBCT images and 3D photos with skin segmentation and two-stage surface registration. Our method is composed of three major steps. First, to obtain a CBCT skin surface for registration with the 3D photographic surface, skin is automatically segmented from the CBCT images and the skin surface is generated by surface modeling. Second, to roughly align the scale and orientation of the CBCT skin surface and the 3D photographic surface, point-based registration is performed with four corresponding landmarks located around the mouth. Finally, to merge the CBCT skin surface and the 3D photographic surface, Gaussian-weight-based surface registration is performed within a narrow band of the 3D photographic surface.

  6. Mid-space-independent deformable image registration.

    PubMed

    Aganj, Iman; Iglesias, Juan Eugenio; Reuter, Martin; Sabuncu, Mert Rory; Fischl, Bruce

    2017-05-15

    Aligning images in a mid-space is a common approach to ensuring that deformable image registration is symmetric - that it does not depend on the arbitrary ordering of the input images. The results are, however, generally dependent on the mathematical definition of the mid-space. In particular, the set of possible solutions is typically restricted by the constraints that are enforced on the transformations to prevent the mid-space from drifting too far from the native image spaces. The use of an implicit atlas has been proposed as an approach to mid-space image registration. In this work, we show that when the atlas is aligned to each image in the native image space, the data term of implicit-atlas-based deformable registration is inherently independent of the mid-space. In addition, we show that the regularization term can be reformulated independently of the mid-space as well. We derive a new symmetric cost function that only depends on the transformation morphing the images to each other, rather than to the atlas. This eliminates the need for anti-drift constraints, thereby expanding the space of allowable deformations. We provide an implementation scheme for the proposed framework, and validate it through diffeomorphic registration experiments on brain magnetic resonance images. Copyright © 2017 Elsevier Inc. All rights reserved.
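    One way to write a symmetric, mid-space-free cost of the kind described is the following sketch, consistent with the description above but not necessarily the paper's exact formulation (D denotes an image dissimilarity, R a smoothness penalty, and T the transformation morphing one image to the other):

```latex
E(T) \;=\; D\!\left(I_1 \circ T,\; I_2\right)
      \;+\; D\!\left(I_1,\; I_2 \circ T^{-1}\right)
      \;+\; R(T) \;+\; R\!\left(T^{-1}\right)
```

    Because no atlas-to-image or mid-space transformation appears, the cost is manifestly independent of any mid-space choice, and no anti-drift constraint is needed.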

  7. Mid-Space-Independent Deformable Image Registration

    PubMed Central

    Aganj, Iman; Iglesias, Juan Eugenio; Reuter, Martin; Sabuncu, Mert Rory; Fischl, Bruce

    2017-01-01

    Aligning images in a mid-space is a common approach to ensuring that deformable image registration is symmetric – that it does not depend on the arbitrary ordering of the input images. The results are, however, generally dependent on the mathematical definition of the mid-space. In particular, the set of possible solutions is typically restricted by the constraints that are enforced on the transformations to prevent the mid-space from drifting too far from the native image spaces. The use of an implicit atlas has been proposed as an approach to mid-space image registration. In this work, we show that when the atlas is aligned to each image in the native image space, the data term of implicit-atlas-based deformable registration is inherently independent of the mid-space. In addition, we show that the regularization term can be reformulated independently of the mid-space as well. We derive a new symmetric cost function that only depends on the transformation morphing the images to each other, rather than to the atlas. This eliminates the need for anti-drift constraints, thereby expanding the space of allowable deformations. We provide an implementation scheme for the proposed framework, and validate it through diffeomorphic registration experiments on brain magnetic resonance images. PMID:28242316

  8. Liver DCE-MRI Registration in Manifold Space Based on Robust Principal Component Analysis.

    PubMed

    Feng, Qianjin; Zhou, Yujia; Li, Xueli; Mei, Yingjie; Lu, Zhentai; Zhang, Yu; Feng, Yanqiu; Liu, Yaqin; Yang, Wei; Chen, Wufan

    2016-09-29

    A technical challenge in the registration of dynamic contrast-enhanced magnetic resonance (DCE-MR) imaging in the liver is intensity variation caused by contrast agents. Such variations lead to the failure of traditional intensity-based registration methods. To address this problem, a manifold-based registration framework for liver DCE-MR time series is proposed. We assume that liver DCE-MR time series are located on a low-dimensional manifold and determine intrinsic similarities between frames. Based on the obtained manifold, the large deformation between two dissimilar images can be decomposed into a series of small deformations between adjacent images on the manifold by gradually deforming each frame to the template image along the geodesic path. Furthermore, manifold construction is important in automating the selection of the template image, which is an approximation of the geodesic mean. Robust principal component analysis is performed to separate motion components from intensity changes induced by contrast agents; the components caused by motion are used to guide registration, eliminating the effect of contrast enhancement. Visual inspection and quantitative assessment are further performed on the registration of clinical datasets. Experiments show that the proposed method effectively reduces motion while preserving the topology of contrast-enhancing structures and provides improved registration performance.
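    Robust PCA itself can be implemented compactly with the standard inexact augmented-Lagrange-multiplier scheme. This is a generic sketch of the low-rank-plus-sparse decomposition step, not the full registration pipeline; parameter choices follow common defaults:

```python
import numpy as np

def rpca(M, max_iter=200, tol=1e-7):
    """Robust PCA by inexact ALM: decompose M into low-rank L (here, the
    contrast-enhancement component shared across frames) plus sparse S
    (here, frame-specific motion/outliers)."""
    m, n = M.shape
    lam = 1.0 / np.sqrt(max(m, n))
    norm2 = np.linalg.norm(M, 2)
    Y = M / max(norm2, np.abs(M).max() / lam)
    mu, rho = 1.25 / norm2, 1.5
    S = np.zeros_like(M)
    for _ in range(max_iter):
        # Low-rank update: singular-value thresholding.
        U, s, Vt = np.linalg.svd(M - S + Y / mu, full_matrices=False)
        L = (U * np.maximum(s - 1.0 / mu, 0)) @ Vt
        # Sparse update: elementwise soft thresholding.
        G = M - L + Y / mu
        S = np.sign(G) * np.maximum(np.abs(G) - lam / mu, 0)
        R = M - L - S
        Y += mu * R
        mu *= rho
        if np.linalg.norm(R) / np.linalg.norm(M) < tol:
            break
    return L, S

rng = np.random.default_rng(0)
low = rng.normal(size=(30, 2)) @ rng.normal(size=(2, 30))   # rank-2 part
sparse = np.zeros((30, 30))
sparse.flat[rng.choice(900, 40, replace=False)] = rng.normal(0, 10, 40)
L, S = rpca(low + sparse)
```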

  9. Nonlinear image registration with bidirectional metric and reciprocal regularization

    PubMed Central

    Ying, Shihui; Li, Dan; Xiao, Bin; Peng, Yaxin; Du, Shaoyi; Xu, Meifeng

    2017-01-01

    Nonlinear registration is an important technique for aligning two different images and is widely applied in medical image analysis. In this paper, we develop a novel nonlinear registration framework based on the diffeomorphic Demons, where a reciprocal regularizer is introduced under the assumption that the deformation between two images is an exact diffeomorphism. In detail, we first adopt a bidirectional metric to improve the symmetry of the energy functional, whose variables are two reciprocal deformations. Second, we relax these two deformations into two independent variables and introduce a reciprocal regularizer to ensure that the deformations form an exact diffeomorphism. Then, we use an alternating iterative strategy to decouple the model into two minimization subproblems, where a new closed form for the approximate velocity of the deformation is calculated. Finally, we compare our proposed algorithm with two related conventional methods on two data sets of real brain MR images. The results validate that our proposed method improves the accuracy and robustness of registration, and that the obtained bidirectional deformations are indeed reciprocal. PMID:28231342

  10. Image-guided radiotherapy quality control: Statistical process control using image similarity metrics.

    PubMed

    Shiraishi, Satomi; Grams, Michael P; Fong de Los Santos, Luis E

    2018-05-01

    The purpose of this study was to demonstrate an objective quality control framework for the image review process. A total of 927 cone-beam computed tomography (CBCT) registrations were retrospectively analyzed for 33 bilateral head and neck cancer patients who received definitive radiotherapy. Two registration tracking volumes (RTVs) - cervical spine (C-spine) and mandible - were defined, within which a similarity metric was calculated and used as a registration quality tracking metric over the course of treatment. First, sensitivity to large misregistrations was analyzed for normalized cross-correlation (NCC) and mutual information (MI) in the context of statistical analysis. The distribution of metrics was obtained for displacements that varied according to a normal distribution with standard deviation of σ = 2 mm, and the detectability of displacements greater than 5 mm was investigated. Then, similarity metric control charts were created using a statistical process control (SPC) framework to objectively monitor the image registration and review process. Patient-specific control charts were created using NCC values from the first five fractions to set a patient-specific process capability limit. Population control charts were created using the average of the first five NCC values for all patients in the study. For each patient, the similarity metrics were calculated as a function of unidirectional translation, referred to as the effective displacement. Patient-specific action limits corresponding to 5 mm effective displacements were defined. Furthermore, effective displacements of the ten registrations with the lowest similarity metrics were compared with a three degrees-of-freedom (3DoF) couch displacement required to align the anatomical landmarks. Normalized cross-correlation identified suboptimal registrations more effectively than MI within the framework of SPC. 
Deviations greater than 5 mm were detected at 2.8σ and 2.1σ from the mean for NCC and MI, respectively. Patient-specific control charts using NCC evaluated daily variation and identified statistically significant deviations. This study also showed that subjective evaluations of the images were not always consistent. Population control charts identified a patient whose tracking metrics were significantly lower than those of other patients. The patient-specific action limits identified registrations that warranted immediate evaluation by an expert. When effective displacements in the anterior-posterior direction were compared to 3DoF couch displacements, the agreement was within ±1 mm for seven of ten patients for both C-spine and mandible RTVs. Qualitative review alone of IGRT images can result in inconsistent feedback to the IGRT process. Registration tracking using NCC objectively identifies statistically significant deviations. When used in conjunction with the current image review process, this tool can assist in improving the safety and consistency of the IGRT process. © 2018 American Association of Physicists in Medicine.
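The tracking metric and the patient-specific limit construction can be sketched as follows. This is an illustrative reduction (the study's charts also include population limits and 5 mm action limits); function names are ours.

```python
import numpy as np

def ncc(a, b):
    """Normalised cross-correlation between two equally shaped ROIs."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def control_limits(baseline, n_sigma=3.0):
    """SPC limits (mean +/- n_sigma * sd) from baseline NCC values."""
    m, s = np.mean(baseline), np.std(baseline, ddof=1)
    return m - n_sigma * s, m + n_sigma * s

def flag_registrations(ncc_series, baseline_n=5):
    """Indices of fractions whose NCC falls below the lower control limit
    set from the first baseline_n fractions."""
    lo, hi = control_limits(ncc_series[:baseline_n])
    return [i for i, v in enumerate(ncc_series) if v < lo]
```

A fraction flagged by the lower limit would, in the study's workflow, trigger immediate expert review of that day's registration.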

  11. A Bayesian nonrigid registration method to enhance intraoperative target definition in image-guided prostate procedures through uncertainty characterization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pursley, Jennifer; Risholm, Petter; Fedorov, Andriy

    2012-11-15

    Purpose: This study introduces a probabilistic nonrigid registration method for use in image-guided prostate brachytherapy. Intraoperative imaging for prostate procedures, usually transrectal ultrasound (TRUS), is typically inferior to diagnostic-quality imaging of the pelvis such as endorectal magnetic resonance imaging (MRI). MR images contain superior detail of the prostate boundaries and provide substructure features not otherwise visible. Previous efforts to register diagnostic prostate images with the intraoperative coordinate system have been deterministic and did not offer a measure of the registration uncertainty. The authors developed a Bayesian registration method to estimate the posterior distribution on deformations and provide a case-specific measure of the associated registration uncertainty. Methods: The authors adapted a biomechanical-based probabilistic nonrigid method to register diagnostic to intraoperative images by aligning a physician's segmentations of the prostate in the two images. The posterior distribution was characterized with a Markov Chain Monte Carlo method; the maximum a posteriori deformation and the associated uncertainty were estimated from the collection of deformation samples drawn from the posterior distribution. The authors validated the registration method using a dataset created from ten patients with MRI-guided prostate biopsies who had both diagnostic and intraprocedural 3 Tesla MRI scans. The accuracy and precision of the estimated posterior distribution on deformations were evaluated from two predictive distance distributions: between the deformed central zone-peripheral zone (CZ-PZ) interface and the physician-labeled interface, and based on physician-defined landmarks. Geometric margins on the registration of the prostate's peripheral zone were determined from the posterior predictive distance to the CZ-PZ interface separately for the base, mid-gland, and apical regions of the prostate. 
Results: The authors observed variation in the shape and volume of the segmented prostate in diagnostic and intraprocedural images. The probabilistic method allowed us to convey registration results in terms of posterior distributions, with the dispersion providing a patient-specific estimate of the registration uncertainty. The median of the predictive distance distribution between the deformed prostate boundary and the segmented boundary was ⩽3 mm (95th percentiles within ±4 mm) for all ten patients. The accuracy and precision of the internal deformation was evaluated by comparing the posterior predictive distance distribution for the CZ-PZ interface for each patient, with the median distance ranging from −0.6 to 2.4 mm. Posterior predictive distances between naturally occurring landmarks showed registration errors of ⩽5 mm in any direction. The uncertainty was not a global measure, but instead was local and varied throughout the registration region. Registration uncertainties were largest in the apical region of the prostate. Conclusions: Using a Bayesian nonrigid registration method, the authors determined the posterior distribution on deformations between diagnostic and intraprocedural MR images and quantified the uncertainty in the registration results. The feasibility of this approach was tested and results were positive. The probabilistic framework allows us to evaluate both patient-specific and location-specific estimates of the uncertainty in the registration result. Although the framework was tested on MR-guided procedures, the preliminary results suggest that it may be applied to TRUS-guided procedures as well, where the addition of diagnostic MR information may have a larger impact on target definition and clinical guidance.

  12. A Bayesian nonrigid registration method to enhance intraoperative target definition in image-guided prostate procedures through uncertainty characterization

    PubMed Central

    Pursley, Jennifer; Risholm, Petter; Fedorov, Andriy; Tuncali, Kemal; Fennessy, Fiona M.; Wells, William M.; Tempany, Clare M.; Cormack, Robert A.

    2012-01-01

    Purpose: This study introduces a probabilistic nonrigid registration method for use in image-guided prostate brachytherapy. Intraoperative imaging for prostate procedures, usually transrectal ultrasound (TRUS), is typically inferior to diagnostic-quality imaging of the pelvis such as endorectal magnetic resonance imaging (MRI). MR images contain superior detail of the prostate boundaries and provide substructure features not otherwise visible. Previous efforts to register diagnostic prostate images with the intraoperative coordinate system have been deterministic and did not offer a measure of the registration uncertainty. The authors developed a Bayesian registration method to estimate the posterior distribution on deformations and provide a case-specific measure of the associated registration uncertainty. Methods: The authors adapted a biomechanical-based probabilistic nonrigid method to register diagnostic to intraoperative images by aligning a physician's segmentations of the prostate in the two images. The posterior distribution was characterized with a Markov Chain Monte Carlo method; the maximum a posteriori deformation and the associated uncertainty were estimated from the collection of deformation samples drawn from the posterior distribution. The authors validated the registration method using a dataset created from ten patients with MRI-guided prostate biopsies who had both diagnostic and intraprocedural 3 Tesla MRI scans. The accuracy and precision of the estimated posterior distribution on deformations were evaluated from two predictive distance distributions: between the deformed central zone-peripheral zone (CZ-PZ) interface and the physician-labeled interface, and based on physician-defined landmarks. Geometric margins on the registration of the prostate's peripheral zone were determined from the posterior predictive distance to the CZ-PZ interface separately for the base, mid-gland, and apical regions of the prostate. 
Results: The authors observed variation in the shape and volume of the segmented prostate in diagnostic and intraprocedural images. The probabilistic method allowed us to convey registration results in terms of posterior distributions, with the dispersion providing a patient-specific estimate of the registration uncertainty. The median of the predictive distance distribution between the deformed prostate boundary and the segmented boundary was ⩽3 mm (95th percentiles within ±4 mm) for all ten patients. The accuracy and precision of the internal deformation was evaluated by comparing the posterior predictive distance distribution for the CZ-PZ interface for each patient, with the median distance ranging from −0.6 to 2.4 mm. Posterior predictive distances between naturally occurring landmarks showed registration errors of ⩽5 mm in any direction. The uncertainty was not a global measure, but instead was local and varied throughout the registration region. Registration uncertainties were largest in the apical region of the prostate. Conclusions: Using a Bayesian nonrigid registration method, the authors determined the posterior distribution on deformations between diagnostic and intraprocedural MR images and quantified the uncertainty in the registration results. The feasibility of this approach was tested and results were positive. The probabilistic framework allows us to evaluate both patient-specific and location-specific estimates of the uncertainty in the registration result. Although the framework was tested on MR-guided procedures, the preliminary results suggest that it may be applied to TRUS-guided procedures as well, where the addition of diagnostic MR information may have a larger impact on target definition and clinical guidance. PMID:23127078
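The posterior characterization described above can be illustrated on a toy problem: a single translation parameter sampled with random-walk Metropolis (a simple MCMC variant; the paper's biomechanical model and deformation parameterization are far richer). The Gaussian likelihood, prior, and all names below are our assumptions.

```python
import numpy as np

def log_posterior(t, fixed, moving, sigma=0.05, prior_sigma=10.0):
    """Log-posterior of a 1-D translation t: Gaussian (SSD) likelihood
    plus a zero-mean Gaussian prior on the deformation."""
    x = np.arange(fixed.size, dtype=float)
    warped = np.interp(x + t, x, moving)
    ssd = np.sum((fixed - warped) ** 2)
    return -ssd / (2 * sigma ** 2) - t ** 2 / (2 * prior_sigma ** 2)

def metropolis(fixed, moving, n_samples=4000, step=0.5, seed=0):
    """Random-walk Metropolis sampling of the translation posterior.
    Returns a point estimate and its dispersion (the 'uncertainty')."""
    rng = np.random.default_rng(seed)
    t = 0.0
    lp = log_posterior(t, fixed, moving)
    samples = []
    for _ in range(n_samples):
        t_new = t + step * rng.standard_normal()
        lp_new = log_posterior(t_new, fixed, moving)
        if np.log(rng.random()) < lp_new - lp:   # accept/reject
            t, lp = t_new, lp_new
        samples.append(t)
    s = np.array(samples[n_samples // 2:])       # discard burn-in
    return s.mean(), s.std()
```

The returned standard deviation plays the role of the paper's case-specific registration uncertainty, and in the full method it varies locally across the deformation field.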

  13. Multiresolution image registration in digital x-ray angiography with intensity variation modeling.

    PubMed

    Nejati, Mansour; Pourghassem, Hossein

    2014-02-01

    Digital subtraction angiography (DSA) is a widely used technique for visualization of vessel anatomy in diagnosis and treatment. However, due to unavoidable patient motions, both external and internal, the subtracted angiography images often suffer from motion artifacts that adversely affect the quality of the medical diagnosis. To cope with this problem and improve the quality of DSA images, registration algorithms are often employed before subtraction. In this paper, a novel elastic registration algorithm for registration of digital X-ray angiography images, particularly for the coronary region, is proposed. This algorithm includes a multiresolution search strategy in which a global transformation is calculated iteratively based on local search in coarse and fine sub-image blocks. The local searches are accomplished in a differential multiscale framework which allows us to capture both large and small scale transformations. The local registration transformation also explicitly accounts for local variations in the image intensities, which are incorporated into our model as changes of local contrast and brightness. These local transformations are then smoothly interpolated using a thin-plate spline interpolation function to obtain the global model. Experimental results with several clinical datasets demonstrate the effectiveness of our algorithm in motion artifact reduction.
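A single local search with the contrast/brightness model can be sketched in one dimension: find the best shift by correlation, then fit the intensity model fixed = c * moving(shifted) + b by least squares. This is an illustrative reduction (the paper does this per block, across scales, and interpolates the block results with thin-plate splines); names are ours.

```python
import numpy as np

def register_block(fixed, moving, max_shift=5):
    """Find an integer shift plus local contrast c and brightness b so
    that fixed ~ c * moving(shifted) + b, for one 1-D block."""
    best_s, best_score = 0, -np.inf
    f = fixed - fixed.mean()
    for s in range(-max_shift, max_shift + 1):
        m = np.roll(moving, s)
        mc = m - m.mean()
        denom = np.sqrt((f * f).sum() * (mc * mc).sum())
        score = (f * mc).sum() / denom if denom > 0 else -np.inf
        if score > best_score:                 # NCC is invariant to c, b
            best_s, best_score = s, score
    m = np.roll(moving, best_s)
    # least-squares fit of the intensity model fixed = c * m + b
    A = np.stack([m, np.ones_like(m)], axis=1)
    c, b = np.linalg.lstsq(A, fixed, rcond=None)[0]
    return best_s, c, b
```

Using a correlation coefficient for the shift search makes the geometric estimate insensitive to the very contrast/brightness changes that the (c, b) fit then recovers.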

  14. A Rigid Image Registration Based on the Nonsubsampled Contourlet Transform and Genetic Algorithms

    PubMed Central

    Meskine, Fatiha; Chikr El Mezouar, Miloud; Taleb, Nasreddine

    2010-01-01

    Image registration is a fundamental task used in image processing to match two or more images taken at different times, from different sensors or from different viewpoints. The objective is to find, in a huge search space of geometric transformations, an acceptably accurate solution in a reasonable time to provide better registered images. Exhaustive search is computationally expensive and the computational cost increases exponentially with the number of transformation parameters and the size of the data set. In this work, we present an efficient image registration algorithm that uses genetic algorithms within a multi-resolution framework based on the Non-Subsampled Contourlet Transform (NSCT). An adaptable genetic algorithm for registration is adopted in order to minimize the search space. This approach is used within a hybrid scheme applying two techniques, fitness sharing and elitism. Two NSCT based methods are proposed for registration. A comparative study is established between these methods and a wavelet based one. Because the NSCT is a shift-invariant multidirectional transform, the second method is adopted for its ability to speed up the search. Simulation results clearly show that both proposed techniques are promising methods for image registration compared to the wavelet approach, while the second technique yielded the best performance of all. Moreover, to demonstrate the effectiveness of these methods, these registration techniques have been successfully applied to register SPOT, IKONOS and Synthetic Aperture Radar (SAR) images. The algorithm has been shown to work well for multi-temporal satellite images as well, even in the presence of noise. PMID:22163672
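The genetic search can be illustrated with a toy elitist GA over integer shifts (no NSCT features, no fitness sharing; population size, mutation scheme, and names are ours).

```python
import numpy as np

def ga_register(fixed, moving, span=10, pop_size=30, n_gen=80, seed=0):
    """Toy genetic algorithm searching an integer (dx, dy) that aligns
    moving to fixed by minimising SSD (fitness = -SSD)."""
    rng = np.random.default_rng(seed)

    def fitness(ind):
        dx, dy = ind
        shifted = np.roll(np.roll(moving, dx, axis=0), dy, axis=1)
        return -np.sum((fixed - shifted) ** 2)

    pop = rng.integers(-span, span + 1, size=(pop_size, 2))
    for _ in range(n_gen):
        scores = np.array([fitness(ind) for ind in pop])
        order = np.argsort(scores)[::-1]
        elite = pop[order[: pop_size // 2]]            # elitist selection
        children = elite[rng.integers(0, len(elite), pop_size - len(elite))]
        mutation = rng.integers(-1, 2, size=children.shape)  # per-gene +/-1
        children = np.clip(children + mutation, -span, span)
        pop = np.vstack([elite, children])
    scores = np.array([fitness(ind) for ind in pop])
    return tuple(pop[np.argmax(scores)])
```

Elitism guarantees the best individual is never lost between generations; the paper additionally uses fitness sharing to keep the population diverse, which this sketch omits.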

  15. A rigid image registration based on the nonsubsampled contourlet transform and genetic algorithms.

    PubMed

    Meskine, Fatiha; Chikr El Mezouar, Miloud; Taleb, Nasreddine

    2010-01-01

    Image registration is a fundamental task used in image processing to match two or more images taken at different times, from different sensors or from different viewpoints. The objective is to find, in a huge search space of geometric transformations, an acceptably accurate solution in a reasonable time to provide better registered images. Exhaustive search is computationally expensive and the computational cost increases exponentially with the number of transformation parameters and the size of the data set. In this work, we present an efficient image registration algorithm that uses genetic algorithms within a multi-resolution framework based on the Non-Subsampled Contourlet Transform (NSCT). An adaptable genetic algorithm for registration is adopted in order to minimize the search space. This approach is used within a hybrid scheme applying two techniques, fitness sharing and elitism. Two NSCT based methods are proposed for registration. A comparative study is established between these methods and a wavelet based one. Because the NSCT is a shift-invariant multidirectional transform, the second method is adopted for its ability to speed up the search. Simulation results clearly show that both proposed techniques are promising methods for image registration compared to the wavelet approach, while the second technique yielded the best performance of all. Moreover, to demonstrate the effectiveness of these methods, these registration techniques have been successfully applied to register SPOT, IKONOS and Synthetic Aperture Radar (SAR) images. The algorithm has been shown to work well for multi-temporal satellite images as well, even in the presence of noise.

  16. Joint estimation of subject motion and tracer kinetic parameters of dynamic PET data in an EM framework

    NASA Astrophysics Data System (ADS)

    Jiao, Jieqing; Salinas, Cristian A.; Searle, Graham E.; Gunn, Roger N.; Schnabel, Julia A.

    2012-02-01

    Dynamic Positron Emission Tomography is a powerful tool for quantitative imaging of in vivo biological processes. The long scan durations necessitate motion correction, to maintain the validity of the dynamic measurements, which can be particularly challenging due to the low signal-to-noise ratio (SNR) and spatial resolution, as well as the complex tracer behaviour in the dynamic PET data. In this paper we develop a novel automated expectation-maximisation image registration framework that incorporates temporal tracer kinetic information to correct for inter-frame subject motion during dynamic PET scans. We employ the Zubal human brain phantom to simulate dynamic PET data using SORTEO (a Monte Carlo-based simulator), in order to validate the proposed method for its ability to recover imposed rigid motion. We have conducted a range of simulations using different noise levels, and corrupted the data with a range of rigid motion artefacts. The performance of our motion correction method is compared with pairwise registration using normalised mutual information as a voxel similarity measure (an approach conventionally used to correct for dynamic PET inter-frame motion based solely on intensity information). To quantify registration accuracy, we calculate the target registration error across the images. The results show that our new dynamic image registration method based on tracer kinetics yields better realignment of the simulated datasets, halving the target registration error when compared to the conventional method at small motion levels, as well as yielding smaller residuals in translation and rotation parameters. We also show that our new method is less affected by the low signal in the first few frames, which the conventional method based on normalised mutual information fails to realign.
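The core idea of registering frames to kinetic-model predictions rather than to other noisy frames can be sketched with a toy linear-uptake model. The paper uses full tracer-kinetic modelling inside an expectation-maximisation loop; everything below is a deliberate simplification and all names are ours.

```python
import numpy as np

def fit_linear_kinetics(frames, times):
    """Per-voxel slope of a linear uptake model A(x, t) = k(x) * t,
    fitted by least squares through the origin."""
    t = np.asarray(times, dtype=float)
    return (frames * t[:, None]).sum(axis=0) / (t * t).sum()

def correct_frame(frame, k, t, max_shift=5):
    """Realign one frame to its kinetic-model prediction k * t by an
    exhaustive integer-shift search (a stand-in for real registration)."""
    pred = k * t
    best_s, best_err = 0, np.inf
    for s in range(-max_shift, max_shift + 1):
        err = np.sum((np.roll(frame, s) - pred) ** 2)
        if err < best_err:
            best_s, best_err = s, err
    return np.roll(frame, best_s)
```

Because the registration target is the model prediction for that frame's time point, the low SNR of early frames matters less than when frames are registered pairwise against each other.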

  17. Temporal subtraction contrast-enhanced dedicated breast CT

    PubMed Central

    Gazi, Peymon M.; Aminololama-Shakeri, Shadi; Yang, Kai; Boone, John M.

    2016-01-01

    Purpose A framework of deformable image registration and segmentation for temporal subtraction contrast-enhanced breast CT is described. Methods An iterative histogram-based two-means clustering method was used for the segmentation. Dedicated breast CT images were segmented into background (air), adipose, fibroglandular and skin components. Fibroglandular tissue was classified as either normal or contrast-enhanced, then divided into tiers for the purpose of categorizing degrees of contrast enhancement. A variant of the Demons deformable registration algorithm, Intensity Difference Adaptive Demons (IDAD), was developed to correct for the large deformation forces that stemmed from contrast enhancement. In this application, the accuracy of the proposed method was evaluated in both mathematically-simulated and physically-acquired phantom images. Clinical usage and accuracy of the temporal subtraction framework were demonstrated using contrast-enhanced breast CT datasets from five patients. Registration performance was quantified using Normalized Cross Correlation (NCC), Symmetric Uncertainty Coefficient (SUC), Normalized Mutual Information (NMI), Mean Square Error (MSE) and Target Registration Error (TRE). Results The proposed method outperformed conventional affine and other Demons variations in contrast enhanced breast CT image registration. In simulation studies, IDAD exhibited improvement in MSE (0–16%), NCC (0–6%), NMI (0–13%) and TRE (0–34%) compared to the conventional Demons approaches, depending on the size and intensity of the enhancing lesion. As lesion size and contrast enhancement levels increased, so did the improvement. The drop in the correlation between the pre- and post-contrast images for the largest enhancement levels in phantom studies is less than 1.2% (150 Hounsfield units). Registration error, measured by TRE, shows only submillimeter mismatches between the concordant anatomical target points in all patient studies. 
The algorithm was implemented using a parallel processing architecture resulting in rapid execution time for the iterative segmentation and intensity-adaptive registration techniques. Conclusion Characterization of contrast-enhanced lesions is improved using temporal subtraction contrast-enhanced dedicated breast CT. Adaptation of Demons registration forces as a function of contrast-enhancement levels provided a means to accurately align breast tissue in pre- and post-contrast image acquisitions, improving subtraction results. Spatial subtraction of the aligned images yields useful diagnostic information with respect to enhanced lesion morphology and uptake. PMID:27494376
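One of the figures of merit above, normalised mutual information, can be computed from a joint histogram. A minimal sketch using Studholme's symmetric form; the bin count is an arbitrary choice.

```python
import numpy as np

def nmi(a, b, bins=32):
    """Normalised mutual information (H(A) + H(B)) / H(A, B) from a
    joint intensity histogram."""
    h, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p = h / h.sum()
    px = p.sum(axis=1)
    py = p.sum(axis=0)
    nz = p > 0
    hxy = -(p[nz] * np.log(p[nz])).sum()        # joint entropy
    hx = -(px[px > 0] * np.log(px[px > 0])).sum()
    hy = -(py[py > 0] * np.log(py[py > 0])).sum()
    return (hx + hy) / hxy
```

This form reaches 2 for identical images and tends toward 1 as the two images become statistically independent, which is what makes it usable as an alignment score across contrast changes.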

  18. Non-rigid image registration using a statistical spline deformation model.

    PubMed

    Loeckx, Dirk; Maes, Frederik; Vandermeulen, Dirk; Suetens, Paul

    2003-07-01

    We propose a statistical spline deformation model (SSDM) as a method to solve non-rigid image registration. Within this model, the deformation is expressed using a statistically trained B-spline deformation mesh. The model is trained by principal component analysis of a training set. This approach allows us to reduce the number of degrees of freedom needed for non-rigid registration by only retaining the most significant modes of variation observed in the training set. User-defined transformation components, like affine modes, are merged with the principal components into a unified framework. Optimization proceeds along the transformation components rather than along the individual spline coefficients. The concept of SSDMs is applied to the temporal registration of thorax CR-images using pattern intensity as the registration measure. Our results show that, using 30 training pairs, a reduction of 33% is possible in the number of degrees of freedom without deterioration of the result. The same accuracy as without SSDMs is still achieved after a reduction of up to 66% of the degrees of freedom.
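The statistical training step can be sketched as plain PCA on flattened deformation fields: keep only the top modes, then represent (and optimise) any new deformation by its few mode coefficients. The paper trains on B-spline coefficient meshes and merges affine modes; this toy version, with names of our choosing, shows only the dimensionality reduction.

```python
import numpy as np

def train_ssdm(training_fields, n_modes):
    """PCA of flattened training deformation fields: mean + top modes."""
    X = np.stack([f.ravel() for f in training_fields])
    mean = X.mean(axis=0)
    # rows of Vt are orthonormal principal directions
    U, s, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:n_modes]

def project(field, mean, modes):
    """Represent a new field by its mode coefficients and reconstruct it
    from the reduced model."""
    coeffs = modes @ (field.ravel() - mean)
    recon = (mean + coeffs @ modes).reshape(field.shape)
    return coeffs, recon
```

Optimising over the handful of `coeffs` instead of every spline coefficient is exactly the degree-of-freedom reduction the abstract quantifies (33-66%).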

  19. Automatic Mrf-Based Registration of High Resolution Satellite Video Data

    NASA Astrophysics Data System (ADS)

    Platias, C.; Vakalopoulou, M.; Karantzalos, K.

    2016-06-01

    In this paper we propose a deformable registration framework for high resolution satellite video data able to automatically and accurately co-register satellite video frames and/or register them to a reference map/image. The proposed approach performs non-rigid registration, formulates a Markov Random Fields (MRF) model, while efficient linear programming is employed for reaching the lowest potential of the cost function. The developed approach has been applied and validated on satellite video sequences from Skybox Imaging and compared with a rigid, descriptor-based registration method. Regarding the computational performance, both the MRF-based and the descriptor-based methods were quite efficient, with the former converging in a few minutes and the latter in a few seconds. Regarding the registration accuracy, the proposed MRF-based method significantly outperformed the descriptor-based one in all the performed experiments.
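On a 1-D chain of blocks, an MRF energy of this kind (a block-matching data term plus a smoothness prior over discrete displacement labels) can be minimised exactly by dynamic programming; the paper's 2-D grid requires LP/graph techniques instead. A sketch with invented names:

```python
import numpy as np

def mrf_chain_register(fixed, moving, labels, block=8, smooth_w=0.5):
    """Discrete MRF registration on a 1-D chain of blocks, solved exactly
    with dynamic programming (Viterbi)."""
    n_blocks = fixed.size // block
    L = len(labels)
    # unary terms: SSD of each block under each candidate displacement
    unary = np.empty((n_blocks, L))
    for i in range(n_blocks):
        sl = slice(i * block, (i + 1) * block)
        for j, d in enumerate(labels):
            unary[i, j] = np.sum((fixed[sl] - np.roll(moving, d)[sl]) ** 2)
    # pairwise smoothness: penalise label differences between neighbours
    pair = smooth_w * np.abs(np.subtract.outer(labels, labels))
    cost = unary[0].copy()
    back = np.zeros((n_blocks, L), dtype=int)
    for i in range(1, n_blocks):                  # forward pass
        total = cost[:, None] + pair
        back[i] = np.argmin(total, axis=0)
        cost = unary[i] + np.min(total, axis=0)
    path = [int(np.argmin(cost))]                 # backtrack
    for i in range(n_blocks - 1, 0, -1):
        path.append(int(back[i, path[-1]]))
    return [labels[j] for j in path[::-1]]
```

The smoothness term is what makes the result a coherent deformation rather than independent per-block matches, mirroring the regularisation role it plays in the paper's 2-D MRF.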

  20. Fast parallel image registration on CPU and GPU for diagnostic classification of Alzheimer's disease

    PubMed Central

    Shamonin, Denis P.; Bron, Esther E.; Lelieveldt, Boudewijn P. F.; Smits, Marion; Klein, Stefan; Staring, Marius

    2013-01-01

    Nonrigid image registration is an important, but time-consuming task in medical image analysis. In typical neuroimaging studies, multiple image registrations are performed, e.g., for atlas-based segmentation or template construction. Faster image registration routines would therefore be beneficial. In this paper we explore acceleration of the image registration package elastix by a combination of several techniques: (i) parallelization on the CPU, to speed up the cost function derivative calculation; (ii) parallelization on the GPU building on and extending the OpenCL framework from ITKv4, to speed up the Gaussian pyramid computation and the image resampling step; (iii) exploitation of certain properties of the B-spline transformation model; (iv) further software optimizations. The accelerated registration tool is employed in a study on diagnostic classification of Alzheimer's disease and cognitively normal controls based on T1-weighted MRI. We selected 299 participants from the publicly available Alzheimer's Disease Neuroimaging Initiative database. Classification is performed with a support vector machine based on gray matter volumes as a marker for atrophy. We evaluated two types of strategies (voxel-wise and region-wise) that heavily rely on nonrigid image registration. Parallelization and optimization resulted in an acceleration factor of 4–5x on an 8-core machine. Using OpenCL a speedup factor of 2 was realized for computation of the Gaussian pyramids, and 15–60 for the resampling step, for larger images. The voxel-wise and the region-wise classification methods had an area under the receiver operating characteristic curve of 88 and 90%, respectively, both for standard and accelerated registration. We conclude that the image registration package elastix was substantially accelerated, with nearly identical results to the non-optimized version. The new functionality will become available in the next release of elastix as open source under the BSD license. 
PMID:24474917
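One B-spline property that such accelerations commonly exploit is compact support: a cubic B-spline displacement at a point depends on only four coefficients per dimension, so per-voxel evaluation can skip the rest of the control grid. A 1-D sketch (our own names; not elastix code) comparing the naive and the local evaluation:

```python
import numpy as np

def b3(t):
    """Cubic B-spline kernel; zero outside |t| < 2."""
    t = abs(t)
    if t < 1:
        return (4 - 6 * t * t + 3 * t ** 3) / 6
    if t < 2:
        return (2 - t) ** 3 / 6
    return 0.0

def disp_full(x, coeffs, h):
    """Naive evaluation: sum over every control-point coefficient."""
    return np.array([sum(c * b3(xi / h - k) for k, c in enumerate(coeffs))
                     for xi in x])

def disp_local(x, coeffs, h):
    """Equivalent evaluation using only the 4 supporting coefficients."""
    u = []
    for xi in x:
        k0 = int(np.floor(xi / h)) - 1
        u.append(sum(coeffs[k] * b3(xi / h - k)
                     for k in range(max(k0, 0), min(k0 + 4, len(coeffs)))))
    return np.array(u)
```

The same locality applies to cost-function derivatives: a control-point perturbation only affects voxels inside its support, which keeps the per-iteration work proportional to the kernel size rather than the grid size.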

  1. A Framework for Linear and Non-Linear Registration of Diffusion-Weighted MRIs Using Angular Interpolation

    PubMed Central

    Duarte-Carvajalino, Julio M.; Sapiro, Guillermo; Harel, Noam; Lenglet, Christophe

    2013-01-01

    Registration of diffusion-weighted magnetic resonance images (DW-MRIs) is a key step for population studies, or construction of brain atlases, among other important tasks. Given the high dimensionality of the data, registration is usually performed by relying on scalar representative images, such as the fractional anisotropy (FA) and non-diffusion-weighted (b0) images, thereby ignoring much of the directional information conveyed by the DW-MR datasets themselves. Alternatively, model-based registration algorithms have been proposed to exploit information on the preferred fiber orientation(s) at each voxel. Models such as the diffusion tensor or orientation distribution function (ODF) have been used for this purpose. Tensor-based registration methods rely on a model that does not completely capture the information contained in DW-MRIs, and largely depend on the accurate estimation of tensors. ODF-based approaches are more recent and computationally challenging, but also better describe complex fiber configurations, thereby potentially improving the accuracy of DW-MRI registration. A new algorithm based on angular interpolation of the diffusion-weighted volumes was proposed for affine registration, and does not rely on any specific local diffusion model. In this work, we first extensively compare the performance of registration algorithms based on (i) angular interpolation, (ii) non-diffusion-weighted scalar volume (b0), and (iii) diffusion tensor image (DTI). Moreover, we generalize the concept of angular interpolation (AI) to non-linear image registration, and implement it in the FMRIB Software Library (FSL). We demonstrate that AI registration of DW-MRIs is a powerful alternative to volume and tensor-based approaches. In particular, we show that AI improves the registration accuracy in many cases over existing state-of-the-art algorithms, while providing registered raw DW-MRI data, which can be used for any subsequent analysis. PMID:23596381
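The angular-interpolation idea, resampling DW signal values at new gradient directions directly from neighbouring acquired directions with no diffusion model, can be sketched with inverse-angular-distance weighting. The published method uses a more principled spherical interpolation; the weighting scheme and names below are our assumptions.

```python
import numpy as np

def angular_interpolate(target_dirs, source_dirs, signals, k=4, eps=1e-9):
    """Model-free resampling of DWI values at new gradient directions:
    inverse-angular-distance weighting of the k nearest acquired
    directions. Antipodal symmetry: q and -q are equivalent."""
    out = np.empty(len(target_dirs))
    for i, g in enumerate(target_dirs):
        # angular distance on the half-sphere
        cosang = np.clip(np.abs(source_dirs @ g), 0.0, 1.0)
        ang = np.arccos(cosang)
        idx = np.argsort(ang)[:k]
        w = 1.0 / (ang[idx] + eps)
        out[i] = np.sum(w * signals[idx]) / np.sum(w)
    return out
```

This is what lets a deformation (which locally rotates gradient directions) be applied to the raw DW volumes themselves, so the registered output remains raw data usable by any downstream model.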

  2. A Framework for Linear and Non-Linear Registration of Diffusion-Weighted MRIs Using Angular Interpolation.

    PubMed

    Duarte-Carvajalino, Julio M; Sapiro, Guillermo; Harel, Noam; Lenglet, Christophe

    2013-01-01

    Registration of diffusion-weighted magnetic resonance images (DW-MRIs) is a key step for population studies, or construction of brain atlases, among other important tasks. Given the high dimensionality of the data, registration is usually performed by relying on scalar representative images, such as the fractional anisotropy (FA) and non-diffusion-weighted (b0) images, thereby ignoring much of the directional information conveyed by the DW-MR datasets themselves. Alternatively, model-based registration algorithms have been proposed to exploit information on the preferred fiber orientation(s) at each voxel. Models such as the diffusion tensor or orientation distribution function (ODF) have been used for this purpose. Tensor-based registration methods rely on a model that does not completely capture the information contained in DW-MRIs, and largely depend on the accurate estimation of tensors. ODF-based approaches are more recent and computationally challenging, but also better describe complex fiber configurations, thereby potentially improving the accuracy of DW-MRI registration. A new algorithm based on angular interpolation of the diffusion-weighted volumes was proposed for affine registration, and does not rely on any specific local diffusion model. In this work, we first extensively compare the performance of registration algorithms based on (i) angular interpolation, (ii) non-diffusion-weighted scalar volume (b0), and (iii) diffusion tensor image (DTI). Moreover, we generalize the concept of angular interpolation (AI) to non-linear image registration, and implement it in the FMRIB Software Library (FSL). We demonstrate that AI registration of DW-MRIs is a powerful alternative to volume and tensor-based approaches. In particular, we show that AI improves the registration accuracy in many cases over existing state-of-the-art algorithms, while providing registered raw DW-MRI data, which can be used for any subsequent analysis.

  3. In vivo spatial correlation between (18)F-BPA and (18)F-FDG uptakes in head and neck cancer.

    PubMed

    Kobayashi, Kazuma; Kurihara, Hiroaki; Watanabe, Yoshiaki; Murakami, Naoya; Inaba, Koji; Nakamura, Satoshi; Wakita, Akihisa; Okamoto, Hiroyuki; Umezawa, Rei; Takahashi, Kana; Igaki, Hiroshi; Ito, Yoshinori; Yoshimoto, Seiichi; Shigematsu, Naoyuki; Itami, Jun

    2016-09-01

    Borono-2-(18)F-fluoro-phenylalanine ((18)F-BPA) has been used to estimate the therapeutic effects of boron neutron capture therapy (BNCT), while (18)F-fluorodeoxyglucose ((18)F-FDG) is the positron emission tomography (PET) radiopharmaceutical most commonly used in routine clinical practice. The aim of the present study was to evaluate spatial correlation between (18)F-BPA and (18)F-FDG uptakes using a deformable image registration-based technique. Ten patients with head and neck cancer were recruited from January 2014 to December 2014. All patients underwent whole-body (18)F-BPA PET/computed tomography (CT) and (18)F-FDG PET/CT within a 2-week period. For each patient, (18)F-BPA PET/CT and (18)F-FDG PET/CT images were aligned based on a deformable image registration framework. The voxel-by-voxel spatial correlation of standardized uptake value (SUV) within the tumor was analyzed. Our image processing framework achieved accurate and validated registration results for each PET/CT image. In 9/10 patients, the spatial distribution of SUVs between (18)F-BPA and (18)F-FDG showed a significant, positive correlation in the tumor volume. Deformable image registration-based voxel-wise analysis demonstrated a spatial correlation between (18)F-BPA and (18)F-FDG uptakes in head and neck cancer. A tumor sub-volume with a high (18)F-FDG uptake may predict high accumulation of (18)F-BPA. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.
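Once the two PET volumes are deformably aligned, the voxel-by-voxel analysis reduces to a Pearson correlation of SUVs over the tumour mask. A minimal sketch (names are ours):

```python
import numpy as np

def tumor_voxel_correlation(pet_a, pet_b, mask):
    """Voxel-by-voxel Pearson correlation of SUVs inside a tumour mask,
    assuming the two PET volumes are already deformably aligned."""
    a = pet_a[mask].astype(float)
    b = pet_b[mask].astype(float)
    a -= a.mean()
    b -= b.mean()
    return float((a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum()))
```

A strongly positive value over the tumour, as the study found for 9/10 patients, is what supports using the FDG distribution as a surrogate for expected BPA accumulation.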

  4. Planetary Crater Detection and Registration Using Marked Point Processes, Multiple Birth and Death Algorithms, and Region-Based Analysis

    NASA Technical Reports Server (NTRS)

    Solarna, David; Moser, Gabriele; Le Moigne-Stewart, Jacqueline; Serpico, Sebastiano B.

    2017-01-01

    Because of the large variety of sensors and spacecraft collecting data, planetary science needs to integrate various multi-sensor and multi-temporal images. These multiple data represent a precious asset, as they allow the study of targets' spectral responses and of changes in surface structure; because of their variety, they also require accurate and robust registration. A new crater detection algorithm, used to extract features that will be integrated in an image registration framework, is presented. A marked point process-based method has been developed to model the spatial distribution of elliptical objects (i.e., the craters), and a birth-death Markov chain Monte Carlo method, coupled with a region-based scheme aiming at computational efficiency, is used to find the optimal configuration fitting the image. The extracted features are exploited, together with a newly defined fitness function based on a modified Hausdorff distance, by an image registration algorithm whose architecture has been designed to minimize the computational time.

  5. Video see-through augmented reality for oral and maxillofacial surgery.

    PubMed

    Wang, Junchen; Suenaga, Hideyuki; Yang, Liangjing; Kobayashi, Etsuko; Sakuma, Ichiro

    2017-06-01

    Oral and maxillofacial surgery has not benefited from image-guidance techniques, owing to limitations in image registration. A real-time markerless image registration method is proposed by integrating a shape matching method into a 2D tracking framework. The image registration is performed by matching the patient's teeth model with intraoperative video to obtain its pose. The resulting pose is used to overlay relevant models from the same CT space on the camera video for augmented reality. The proposed system was evaluated on mandible/maxilla phantoms, a volunteer and clinical data. Experimental results show that the target overlay error is about 1 mm, and the frame rate of registration update yields 3-5 frames per second with a 4 K camera. The significance of this work lies in its simplicity in clinical setting and the seamless integration into the current medical procedure with satisfactory response time and overlay accuracy. Copyright © 2016 John Wiley & Sons, Ltd.

  6. RAMTaB: Robust Alignment of Multi-Tag Bioimages

    PubMed Central

    Raza, Shan-e-Ahmed; Humayun, Ahmad; Abouna, Sylvie; Nattkemper, Tim W.; Epstein, David B. A.; Khan, Michael; Rajpoot, Nasir M.

    2012-01-01

    Background In recent years, new microscopic imaging techniques have evolved to allow us to visualize several different proteins (or other biomolecules) in a visual field. Analysis of protein co-localization becomes viable because molecules can interact only when they are located close to each other. We present a novel approach to align images in a multi-tag fluorescence image stack. The proposed approach is applicable to multi-tag bioimaging systems which (a) acquire fluorescence images by sequential staining and (b) simultaneously capture a phase contrast image corresponding to each of the fluorescence images. To the best of our knowledge, no existing method in the literature addresses simultaneous registration of multi-tag bioimages and selection of the reference image in order to maximize the overall overlap between the images. Methodology/Principal Findings We employ a block-based method for registration, which yields a confidence measure to indicate the accuracy of our registration results. We derive a shift metric in order to select the Reference Image with Maximal Overlap (RIMO), in turn minimizing the total amount of non-overlapping signal for a given number of tags. Experimental results show that the Robust Alignment of Multi-Tag Bioimages (RAMTaB) framework is robust to variations in contrast and illumination, yields sub-pixel accuracy, and successfully selects the reference image resulting in maximum overlap. The registration results are also shown to significantly improve any follow-up protein co-localization studies. Conclusions For the discovery of protein complexes and of functional protein networks within a cell, alignment of the tag images in a multi-tag fluorescence image stack is a key pre-processing step. The proposed framework is shown to produce accurate alignment results on both real and synthetic data.
Our future work will use the aligned multi-channel fluorescence image data for normal and diseased tissue specimens to analyze molecular co-expression patterns and functional protein networks. PMID:22363510
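
The reference-selection step described above can be sketched as follows: given estimated pairwise shifts between images, choose the candidate reference that minimises the total displacement needed to align all other images to it, a proxy for the amount of non-overlapping signal. The shift matrix layout and the L1 magnitude are illustrative assumptions, not the paper's exact shift metric.

```python
def total_shift(ref_idx, shifts):
    """Total displacement magnitude when ref_idx is the reference.
    shifts[i][j] is the (dx, dy) translation aligning image j to image i."""
    return sum(abs(dx) + abs(dy) for dx, dy in shifts[ref_idx])

def select_rimo(shifts):
    """Pick the Reference Image with Maximal Overlap: the candidate that
    minimises the total shift needed to bring all other images onto it."""
    return min(range(len(shifts)), key=lambda i: total_shift(i, shifts))
```

For images offset along one axis, the most central image wins, which matches the intuition of maximising overlap.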

  7. LCC-Demons: a robust and accurate symmetric diffeomorphic registration algorithm.

    PubMed

    Lorenzi, M; Ayache, N; Frisoni, G B; Pennec, X

    2013-11-01

    Non-linear registration is a key instrument for computational anatomy to study the morphology of organs and tissues. However, in order to be an effective instrument for clinical practice, registration algorithms must be computationally efficient, accurate and, most importantly, robust to the multiple biases affecting medical images. In this work we propose a fast and robust registration framework based on the log-Demons diffeomorphic registration algorithm. The transformation is parameterized by stationary velocity fields (SVFs), and the similarity metric implements a symmetric local correlation coefficient (LCC). Moreover, we show how the SVF setting provides a stable and consistent numerical scheme for the computation of the Jacobian determinant and the flux of the deformation across the boundaries of a given region. Thus, it provides a robust evaluation of spatial changes. We tested the LCC-Demons in the inter-subject registration setting, by comparing with state-of-the-art registration algorithms on publicly available datasets, and in the intra-subject longitudinal registration problem, for the statistically powered measurement of longitudinal atrophy in Alzheimer's disease. Experimental results show that LCC-Demons is a generic, flexible, efficient and robust algorithm for the accurate non-linear registration of images, which can find several applications in the field of medical imaging. Without any additional optimization, it solves intra- and inter-subject registration problems equally well, and compares favorably to state-of-the-art methods. Copyright © 2013 Elsevier Inc. All rights reserved.
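
A minimal 1-D sketch of a local correlation coefficient similarity of the kind LCC-Demons uses: the correlation between the two images is computed within a sliding local neighbourhood and averaged. Box smoothing stands in here for the Gaussian convolution of the actual algorithm, and the radius is an assumption for the example.

```python
import math

def box_mean(x, r=1):
    """Mean of x over a sliding window of radius r (the local neighbourhood)."""
    n = len(x)
    return [sum(x[max(0, i - r):min(n, i + r + 1)]) / (min(n, i + r + 1) - max(0, i - r))
            for i in range(n)]

def lcc(f, g, r=1, eps=1e-12):
    """Mean local correlation coefficient between two 1-D 'images':
    rho_i = (<fg> - <f><g>) / sqrt(var_f * var_g), averaged over positions i."""
    mf, mg = box_mean(f, r), box_mean(g, r)
    mff = box_mean([a * a for a in f], r)
    mgg = box_mean([a * a for a in g], r)
    mfg = box_mean([a * b for a, b in zip(f, g)], r)
    rhos = []
    for i in range(len(f)):
        cov = mfg[i] - mf[i] * mg[i]
        var_f = mff[i] - mf[i] ** 2
        var_g = mgg[i] - mg[i] ** 2
        rhos.append(cov / math.sqrt(max(var_f * var_g, eps)))
    return sum(rhos) / len(rhos)
```

Because the statistic is local, it is insensitive to smooth intensity bias fields, which is precisely why the LCC is preferred over a global correlation in this setting.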

  8. Calculation of the confidence intervals for transformation parameters in the registration of medical images

    PubMed Central

    Bansal, Ravi; Staib, Lawrence H.; Laine, Andrew F.; Xu, Dongrong; Liu, Jun; Posecion, Lainie F.; Peterson, Bradley S.

    2010-01-01

    Images from different individuals typically cannot be registered precisely because anatomical features within the images differ across the people imaged and because the current methods for image registration have inherent technological limitations that interfere with perfect registration. Quantifying the inevitable error in image registration is therefore of crucial importance in assessing the effects that image misregistration may have on subsequent analyses in an imaging study. We have developed a mathematical framework for quantifying errors in registration by computing the confidence intervals of the estimated parameters (3 translations, 3 rotations, and 1 global scale) for the similarity transformation. The presence of noise in images and the variability in anatomy across individuals ensure that estimated registration parameters are always random variables. We assume a functional relation among intensities across voxels in the images, and we use the theory of nonlinear, least-squares estimation to show that the parameters are multivariate Gaussian distributed. We then use the covariance matrix of this distribution to compute the confidence intervals of the transformation parameters. These confidence intervals provide a quantitative assessment of the registration error across the images. Because transformation parameters are nonlinearly related to the coordinates of landmark points in the brain, we subsequently show that the coordinates of those landmark points are also multivariate Gaussian distributed. Using these distributions, we then compute the confidence intervals of the coordinates for landmark points in the image. Each of these confidence intervals in turn provides a quantitative assessment of the registration error at a particular landmark point. Because our method is computationally intensive, however, its current implementation is limited to assessing the error of the parameters in the similarity transformation across images.
We assessed the performance of our method in computing the error in estimated similarity parameters by applying that method to a real-world dataset. Our results showed that the size of the confidence intervals computed using our method decreased – i.e., our confidence in the registration of images from different individuals increased – for increasing amounts of blur in the images. Moreover, the size of the confidence intervals increased for increasing amounts of noise, misregistration, and differing anatomy. Thus, our method precisely quantified confidence in the registration of images that contain varying amounts of misregistration and varying anatomy across individuals. PMID:19138877
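
The link between least-squares fitting and confidence intervals can be illustrated with a one-parameter analogue (a global scale only). The full method derives a multivariate Gaussian over all seven similarity parameters from the covariance of the nonlinear fit; this sketch reproduces only the basic mechanism, and the 95% Gaussian quantile is an assumption for the example.

```python
import math

def fit_scale_with_ci(xs, ys, z=1.96):
    """Least-squares estimate of s in the model y ~ s*x, plus a Gaussian
    confidence interval derived from the residual variance: a 1-parameter
    analogue of computing CIs for transformation parameters from the
    covariance matrix of a (nonlinear) least-squares fit."""
    sxx = sum(x * x for x in xs)
    s_hat = sum(x * y for x, y in zip(xs, ys)) / sxx
    residuals = [y - s_hat * x for x, y in zip(xs, ys)]
    sigma2 = sum(r * r for r in residuals) / (len(xs) - 1)  # residual variance
    half = z * math.sqrt(sigma2 / sxx)                      # z * std-err of s_hat
    return s_hat, (s_hat - half, s_hat + half)
```

Noisier data widen the interval, mirroring the paper's observation that noise and differing anatomy enlarge the computed confidence intervals.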

  9. Fast algorithm for probabilistic bone edge detection (FAPBED)

    NASA Astrophysics Data System (ADS)

    Scepanovic, Danilo; Kirshtein, Joshua; Jain, Ameet K.; Taylor, Russell H.

    2005-04-01

    The registration of preoperative CT to intra-operative reality systems is a crucial step in Computer Assisted Orthopedic Surgery (CAOS). The intra-operative sensors include 3D digitizers, fiducials, X-rays and Ultrasound (US). FAPBED is designed to process CT volumes for registration to tracked US data. Tracked US is advantageous because it is real time, noninvasive, and non-ionizing, but it is also known to have inherent inaccuracies which create the need to develop a framework that is robust to various uncertainties, and can be useful in US-CT registration. Furthermore, conventional registration methods depend on accurate and absolute segmentation. Our proposed probabilistic framework addresses the segmentation-registration duality, wherein exact segmentation is not a prerequisite to achieve accurate registration. In this paper, we develop a method for fast and automatic probabilistic bone surface (edge) detection in CT images. Various features that influence the likelihood of the surface at each spatial coordinate are combined using a simple probabilistic framework, which strikes a fair balance between a high-level understanding of features in an image and the low-level number crunching of standard image processing techniques. The algorithm evaluates different features for detecting the probability of a bone surface at each voxel, and compounds the results of these methods to yield a final, low-noise, probability map of bone surfaces in the volume. Such a probability map can then be used in conjunction with a similar map from tracked intra-operative US to achieve accurate registration. Eight sample pelvic CT scans were used to extract feature parameters and validate the final probability maps. An un-optimized fully automatic Matlab code runs in five minutes per CT volume on average, and was validated by comparison against hand-segmented gold standards. 
The mean probability assigned to nonzero surface points was 0.8, while nonzero non-surface points had a mean value of 0.38, indicating clear identification of surface points on average. The segmentation was also sufficiently crisp, with a full width at half maximum (FWHM) value of 1.51 voxels.
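
The compounding of per-feature likelihoods into a single probability map can be sketched as a per-voxel product, assuming independent features; the paper's actual combination rule and feature set may differ, so this is only an illustration of the mechanism.

```python
def combine_feature_probs(feature_maps):
    """Per-voxel product of independent feature likelihoods,
    rescaled so the strongest response is 1.0."""
    combined = [1.0] * len(feature_maps[0])
    for fmap in feature_maps:
        combined = [c * p for c, p in zip(combined, fmap)]
    peak = max(combined)
    return [c / peak for c in combined] if peak > 0 else combined
```

Voxels supported by all features keep a high probability, while a single weak feature suppresses the response, which is what yields the low-noise map described above.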

  10. Method for accurate registration of tissue autofluorescence imaging data with corresponding histology: a means for enhanced tumor margin assessment

    NASA Astrophysics Data System (ADS)

    Unger, Jakob; Sun, Tianchen; Chen, Yi-Ling; Phipps, Jennifer E.; Bold, Richard J.; Darrow, Morgan A.; Ma, Kwan-Liu; Marcu, Laura

    2018-01-01

    An important step in establishing the diagnostic potential for emerging optical imaging techniques is accurate registration between imaging data and the corresponding tissue histopathology typically used as gold standard in clinical diagnostics. We present a method to precisely register data acquired with a point-scanning spectroscopic imaging technique from fresh surgical tissue specimen blocks with corresponding histological sections. Using a visible aiming beam to augment point-scanning multispectral time-resolved fluorescence spectroscopy on video images, we evaluate two different markers for the registration with histology: fiducial markers using a 405-nm CW laser and the tissue block's outer shape characteristics. We compare the registration performance of benchmark methods using either the fiducial markers or the outer shape characteristics alone to a hybrid method using both feature types. The hybrid method was found to perform best, reaching an average error of 0.78±0.67 mm. This method provides a sound framework for validating the diagnostic abilities of optical fiber-based techniques and furthermore enables the application of supervised machine learning techniques to automate tissue characterization.

  11. [Medical Image Registration Method Based on a Semantic Model with Directional Visual Words].

    PubMed

    Jin, Yufei; Ma, Meng; Yang, Xin

    2016-04-01

    Medical image registration is very challenging due to the variety of imaging modalities, differing image quality, wide inter-patient variability, and intra-patient variability as disease progresses, together with strict requirements for robustness. Inspired by semantic models, especially the recent tremendous progress in computer vision tasks under the bag-of-visual-words framework, we set up a novel semantic model to match medical images. Since most medical images have poor contrast and a small dynamic range, and involve only intensities, traditional visual-word models do not perform very well on them. To benefit from the advantages of related work, we propose a novel visual-word model named directional visual words, which performs better on medical images. We then applied this model to medical image registration. In our experiments, the critical anatomical structures were first manually specified by experts. We then adopted the directional visual words, a coarse-to-fine spatial pyramid search strategy, and the k-means algorithm to locate the positions of the key structures accurately. Subsequently, corresponding images were registered using the areas around these positions. The results of experiments performed on real cardiac images showed that our method can achieve high registration accuracy in specific areas.

  12. A multimodal 3D framework for fire characteristics estimation

    NASA Astrophysics Data System (ADS)

    Toulouse, T.; Rossi, L.; Akhloufi, M. A.; Pieri, A.; Maldague, X.

    2018-02-01

    In the last decade we have witnessed an increasing interest in using computer vision and image processing in forest fire research. Image processing techniques have been successfully used in different fire analysis areas such as early detection, monitoring, modeling and fire front characteristics estimation. While the majority of the work deals with the use of 2D visible spectrum images, recent work has introduced the use of 3D vision in this field. This work proposes a new multimodal vision framework permitting the extraction of the three-dimensional geometrical characteristics of fires captured by multiple 3D vision systems. The 3D system is a multispectral stereo system operating in both the visible and near-infrared (NIR) spectral bands. The framework supports the use of multiple stereo pairs positioned so as to capture complementary views of the fire front during its propagation. Multimodal registration is conducted using the captured views in order to build a complete 3D model of the fire front. The registration process is achieved using multisensory fusion based on visual data (2D and NIR images), GPS positions and IMU inertial data. Experiments were conducted outdoors in order to show the performance of the proposed framework. The obtained results are promising and show the potential of using the proposed framework in operational scenarios for wildland fire research and as a decision-management system in fire fighting.

  13. Registration of multiple video images to preoperative CT for image-guided surgery

    NASA Astrophysics Data System (ADS)

    Clarkson, Matthew J.; Rueckert, Daniel; Hill, Derek L.; Hawkes, David J.

    1999-05-01

    In this paper we propose a method which uses multiple video images to establish the pose of a CT volume with respect to video camera coordinates for use in image guided surgery. The majority of neurosurgical procedures require the neurosurgeon to relate the pre-operative MR/CT data to the intra-operative scene. Registration of 2D video images to the pre-operative 3D image enables a perspective projection of the pre-operative data to be overlaid onto the video image. Our registration method is based on image intensity and uses a simple iterative optimization scheme to maximize the mutual information between a video image and a rendering from the pre-operative data. Video images are obtained from a stereo operating microscope, with a field of view of approximately 110 × 80 mm. We have extended an existing information theoretical framework for 2D-3D registration, so that multiple video images can be registered simultaneously to the pre-operative data. Experiments were performed on video and CT images of a skull phantom. We took three video images, and our algorithm registered these individually to the 3D image. The mean projection error varied between 4.33 and 9.81 millimeters (mm), and the mean 3D error varied between 4.47 and 11.92 mm. Using our novel techniques we then registered five video views simultaneously to the 3D model. This produced an accurate and robust registration with a mean projection error of 0.68 mm and a mean 3D error of 1.05 mm.
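
The mutual-information similarity maximised above can be estimated from a joint intensity histogram between the video image and the rendering. A minimal sketch follows; the uniform binning and intensity range are assumptions made for the example, not the paper's settings.

```python
import math
from collections import Counter

def mutual_information(a, b, bins=8, lo=0, hi=256):
    """Mutual information between two equal-length intensity lists,
    estimated from a joint histogram with uniform bins."""
    def quantize(v):
        return min(bins - 1, (v - lo) * bins // (hi - lo))
    n = len(a)
    joint = Counter((quantize(x), quantize(y)) for x, y in zip(a, b))
    pa, pb = Counter(), Counter()
    for (i, j), c in joint.items():
        pa[i] += c
        pb[j] += c
    mi = 0.0
    for (i, j), c in joint.items():
        p_ij = c / n  # joint probability; p_ij / (p_i * p_j) = p_ij * n^2 / (c_i * c_j)
        mi += p_ij * math.log(p_ij * n * n / (pa[i] * pb[j]))
    return mi
```

An optimizer would perturb the pose, re-render, and keep the pose that increases this score; MI is maximal when one image predicts the other and zero when they are independent.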

  14. Brain Atlas Fusion from High-Thickness Diagnostic Magnetic Resonance Images by Learning-Based Super-Resolution

    PubMed Central

    Zhang, Jinpeng; Zhang, Lichi; Xiang, Lei; Shao, Yeqin; Wu, Guorong; Zhou, Xiaodong; Shen, Dinggang; Wang, Qian

    2017-01-01

    It is fundamentally important to fuse the brain atlas from magnetic resonance (MR) images for many imaging-based studies. Most existing works focus on fusing the atlases from high-quality MR images. However, for low-quality diagnostic images (i.e., with high inter-slice thickness), the problem of atlas fusion has not been addressed yet. In this paper, we intend to fuse the brain atlas from the high-thickness diagnostic MR images that are prevalent for clinical routines. The main idea of our works is to extend the conventional groupwise registration by incorporating a novel super-resolution strategy. The contribution of the proposed super-resolution framework is two-fold. First, each high-thickness subject image is reconstructed to be isotropic by the patch-based sparsity learning. Then, the reconstructed isotropic image is enhanced for better quality through the random-forest-based regression model. In this way, the images obtained by the super-resolution strategy can be fused together by applying the groupwise registration method to construct the required atlas. Our experiments have shown that the proposed framework can effectively solve the problem of atlas fusion from the low-quality brain MR images. PMID:29062159

  15. Brain Atlas Fusion from High-Thickness Diagnostic Magnetic Resonance Images by Learning-Based Super-Resolution.

    PubMed

    Zhang, Jinpeng; Zhang, Lichi; Xiang, Lei; Shao, Yeqin; Wu, Guorong; Zhou, Xiaodong; Shen, Dinggang; Wang, Qian

    2017-03-01

    It is fundamentally important to fuse the brain atlas from magnetic resonance (MR) images for many imaging-based studies. Most existing works focus on fusing the atlases from high-quality MR images. However, for low-quality diagnostic images (i.e., with high inter-slice thickness), the problem of atlas fusion has not been addressed yet. In this paper, we intend to fuse the brain atlas from the high-thickness diagnostic MR images that are prevalent for clinical routines. The main idea of our works is to extend the conventional groupwise registration by incorporating a novel super-resolution strategy. The contribution of the proposed super-resolution framework is two-fold. First, each high-thickness subject image is reconstructed to be isotropic by the patch-based sparsity learning. Then, the reconstructed isotropic image is enhanced for better quality through the random-forest-based regression model. In this way, the images obtained by the super-resolution strategy can be fused together by applying the groupwise registration method to construct the required atlas. Our experiments have shown that the proposed framework can effectively solve the problem of atlas fusion from the low-quality brain MR images.

  16. A finite element method to correct deformable image registration errors in low-contrast regions

    NASA Astrophysics Data System (ADS)

    Zhong, Hualiang; Kim, Jinkoo; Li, Haisen; Nurushev, Teamour; Movsas, Benjamin; Chetty, Indrin J.

    2012-06-01

    Image-guided adaptive radiotherapy requires deformable image registration to map radiation dose back and forth between images. The purpose of this study is to develop a novel method to improve the accuracy of an intensity-based image registration algorithm in low-contrast regions. A computational framework has been developed in this study to improve the quality of the ‘demons’ registration. For each voxel in the registration's target image, the standard deviation of image intensity in a neighborhood of this voxel was calculated. A mask for high-contrast regions was generated based on their standard deviations. In the masked regions, a tetrahedral mesh was refined recursively so that a sufficient number of tetrahedral nodes in these regions can be selected as driving nodes. An elastic system driven by the displacements of the selected nodes was formulated using a finite element method (FEM) and implemented on the refined mesh. The displacements of these driving nodes were generated with the ‘demons’ algorithm. The solution of the system was derived using a conjugated gradient method, and interpolated to generate a displacement vector field for the registered images. The FEM correction method was compared with the ‘demons’ algorithm on the computed tomography (CT) images of lung and prostate patients. The performance of the FEM correction relating to the ‘demons’ registration was analyzed based on the physical property of their deformation maps, and quantitatively evaluated through a benchmark model developed specifically for this study. Compared to the benchmark model, the ‘demons’ registration has the maximum error of 1.2 cm, which can be corrected by the FEM to 0.4 cm, and the average error of the ‘demons’ registration is reduced from 0.17 to 0.11 cm. For the CT images of lung and prostate patients, the deformation maps generated by the ‘demons’ algorithm were found unrealistic at several places. 
In these places, the displacement differences between the ‘demons’ registrations and their FEM corrections were in the range of 0.4 to 1.1 cm. The mesh refinement and FEM simulation were implemented as a single-threaded application, which requires about 45 min of computation time on a 2.6 GHz computer. This study has demonstrated that the FEM can be integrated with intensity-based image registration algorithms to improve their registration accuracy, especially in low-contrast regions.
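
The high-contrast mask built from neighbourhood standard deviations can be sketched in 1-D: a voxel is flagged when the intensity spread in its window exceeds a threshold, and only flagged voxels would supply driving nodes for the FEM system. The radius and threshold here are illustrative, not the paper's values.

```python
import math

def contrast_mask(img, r=1, thresh=5.0):
    """Flag voxels whose neighbourhood intensity std-dev exceeds thresh:
    a 1-D analogue of the high-contrast mask used to select driving nodes."""
    n = len(img)
    mask = []
    for i in range(n):
        win = img[max(0, i - r):min(n, i + r + 1)]
        mean = sum(win) / len(win)
        sd = math.sqrt(sum((v - mean) ** 2 for v in win) / len(win))
        mask.append(sd > thresh)
    return mask
```

On a step edge, only the transition voxels are flagged; the flat (low-contrast) regions are left to be interpolated by the elastic model rather than driven by unreliable ‘demons’ displacements.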

  17. Cellular neural network-based hybrid approach toward automatic image registration

    NASA Astrophysics Data System (ADS)

    Arun, Pattathal VijayaKumar; Katiyar, Sunil Kumar

    2013-01-01

    Image registration is a key component of various image processing operations that involve the analysis of different image data sets. Automatic image registration domains have witnessed the application of many intelligent methodologies over the past decade; however, the inability to properly model object shape as well as contextual information has limited the attainable accuracy. A framework for accurate feature shape modeling and adaptive resampling using advanced techniques such as vector machines, cellular neural network (CNN), scale invariant feature transform (SIFT), coreset, and cellular automata is proposed. CNN has been found to be effective in improving the feature matching as well as resampling stages of registration, and the complexity of the approach has been considerably reduced using coreset optimization. The salient features of this work are cellular neural network approach-based SIFT feature point optimization, adaptive resampling, and intelligent object modelling. The developed methodology has been compared with contemporary methods using different statistical measures. Investigations over various satellite images revealed that considerable success was achieved with the approach. This system has dynamically used spectral and spatial information for representing contextual knowledge using a CNN-Prolog approach. This methodology is also illustrated to be effective in providing intelligent interpretation and adaptive resampling.

  18. MIND: modality independent neighbourhood descriptor for multi-modal deformable registration.

    PubMed

    Heinrich, Mattias P; Jenkinson, Mark; Bhushan, Manav; Matin, Tahreema; Gleeson, Fergus V; Brady, Sir Michael; Schnabel, Julia A

    2012-10-01

    Deformable registration of images obtained from different modalities remains a challenging task in medical image analysis. This paper addresses this important problem and proposes a modality independent neighbourhood descriptor (MIND) for both linear and deformable multi-modal registration. Based on the similarity of small image patches within one image, it aims to extract the distinctive structure in a local neighbourhood, which is preserved across modalities. The descriptor is based on the concept of image self-similarity, which has been introduced for non-local means filtering for image denoising. It is able to distinguish between different types of features such as corners, edges and homogeneously textured regions. MIND is robust to the most considerable differences between modalities: non-functional intensity relations, image noise and non-uniform bias fields. The multi-dimensional descriptor can be efficiently computed in a dense fashion across the whole image and provides point-wise local similarity across modalities based on the absolute or squared difference between descriptors, making it applicable for a wide range of transformation models and optimisation algorithms. We use the sum of squared differences of the MIND representations of the images as a similarity metric within a symmetric non-parametric Gauss-Newton registration framework. In principle, MIND would be applicable to the registration of arbitrary modalities. In this work, we apply and validate it for the registration of clinical 3D thoracic CT scans between inhale and exhale as well as the alignment of 3D CT and MRI scans. Experimental results show the advantages of MIND over state-of-the-art techniques such as conditional mutual information and entropy images, with respect to clinically annotated landmark locations. Copyright © 2012 Elsevier B.V. All rights reserved.
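
The self-similarity idea behind MIND can be sketched in 1-D: patch distances within a small search region around a point are turned into exp(-D/V) responses, and the point-wise multi-modal similarity is the sum of squared differences between descriptors. The circular boundary handling and the offset set are simplifications made for this example, not the paper's configuration.

```python
import math

def mind_descriptor(img, i, offsets=(-2, -1, 1, 2), patch=1):
    """1-D MIND sketch: self-similarity responses exp(-D/V) between the
    patch at position i and patches at small offsets around it."""
    n = len(img)
    def patch_dist(a, b):
        return sum((img[(a + k) % n] - img[(b + k) % n]) ** 2
                   for k in range(-patch, patch + 1))
    dists = [patch_dist(i, i + o) for o in offsets]
    v = sum(dists) / len(dists) or 1.0  # local variance estimate; guard zero
    desc = [math.exp(-d / v) for d in dists]
    peak = max(desc)
    return [d / peak for d in desc]

def mind_ssd(img_a, i, img_b, j):
    """Point-wise multi-modal similarity: SSD between MIND descriptors."""
    da, db = mind_descriptor(img_a, i), mind_descriptor(img_b, j)
    return sum((x - y) ** 2 for x, y in zip(da, db))
```

Because patch distances and the variance estimate rescale together, a monotone intensity remapping of one image leaves its descriptor unchanged, which is the property that makes the SSD of MIND representations usable across modalities.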

  19. Evaluation of MRI and cannabinoid type 1 receptor PET templates constructed using DARTEL for spatial normalization of rat brains

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kronfeld, Andrea; Müller-Forell, Wibke; Buchholz, Hans-Georg

    Purpose: Image registration is one prerequisite for the analysis of brain regions in magnetic-resonance-imaging (MRI) or positron-emission-tomography (PET) studies. Diffeomorphic anatomical registration through exponentiated Lie algebra (DARTEL) is a nonlinear, diffeomorphic algorithm for image registration and construction of image templates. The goal of this small animal study was (1) the construction and evaluation of an MRI template and of several cannabinoid type 1 (CB1) receptor PET templates using DARTEL and (2) the analysis of the image registration accuracy of MR and PET images to their DARTEL templates with reference to analytical and iterative PET reconstruction algorithms. Methods: Five male Sprague Dawley rats were investigated for template construction using MRI and [18F]MK-9470 PET for CB1 receptor representation. PET images were reconstructed using the algorithms filtered back-projection, ordered subset expectation maximization in 2D, and maximum a posteriori in 3D. Landmarks were defined on each MR image, and templates were constructed under different settings, i.e., based on different tissue class images [gray matter (GM), white matter (WM), and GM + WM] and regularization forms (“linear elastic energy,” “membrane energy,” and “bending energy”). Registration accuracy for MRI and PET templates was evaluated by means of the distance between landmark coordinates. Results: The best MRI template was constructed based on gray and white matter images and the regularization form linear elastic energy. In this case, most distances between landmark coordinates were <1 mm. Accordingly, MRI-based spatial normalization was most accurate, but results of the PET-based spatial normalization were quite comparable. Conclusions: Image registration using DARTEL provides a standardized and automatic framework for small animal brain data analysis. The authors were able to show that this method works with high reliability and validity.
Using DARTEL templates together with nonlinear registration algorithms allows for accurate spatial normalization of combined MRI/PET or PET-only studies.

  20. Framework for quantitative evaluation of 3D vessel segmentation approaches using vascular phantoms in conjunction with 3D landmark localization and registration

    NASA Astrophysics Data System (ADS)

    Wörz, Stefan; Hoegen, Philipp; Liao, Wei; Müller-Eschner, Matthias; Kauczor, Hans-Ulrich; von Tengg-Kobligk, Hendrik; Rohr, Karl

    2016-03-01

    We introduce a framework for quantitative evaluation of 3D vessel segmentation approaches using vascular phantoms. Phantoms are designed using a CAD system and created with a 3D printer, and comprise realistic shapes including branches and pathologies such as abdominal aortic aneurysms (AAA). To transfer ground truth information to the 3D image coordinate system, we use a landmark-based registration scheme utilizing fiducial markers integrated in the phantom design. For accurate 3D localization of the markers we developed a novel 3D parametric intensity model that is directly fitted to the markers in the images. We also performed a quantitative evaluation of different vessel segmentation approaches for a phantom of an AAA.
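
The landmark-based registration step above, fitting a transform to matched fiducial markers, can be sketched in 2-D with a complex-number least-squares fit of a similarity transform (the framework works in 3-D; this lower-dimensional illustration only shows the idea of transferring ground truth via fiducials).

```python
def fit_similarity(src, dst):
    """Least-squares 2-D similarity transform (scale + rotation + translation)
    from matched landmark pairs, using complex arithmetic: z -> a*z + t,
    where |a| is the scale and arg(a) the rotation angle."""
    p = [complex(x, y) for x, y in src]
    q = [complex(x, y) for x, y in dst]
    cp, cq = sum(p) / len(p), sum(q) / len(q)
    p0 = [z - cp for z in p]  # centre both point sets
    q0 = [z - cq for z in q]
    a = sum(zq * zp.conjugate() for zp, zq in zip(p0, q0)) / sum(abs(z) ** 2 for z in p0)
    t = cq - a * cp
    return a, t

def transform(a, t, point):
    """Apply z -> a*z + t to a 2-D point."""
    z = a * complex(*point) + t
    return (z.real, z.imag)
```

Once the transform is known, ground-truth phantom coordinates can be mapped into the 3D image coordinate system and compared against the segmentation result.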

  1. Facial recognition techniques applied to the automated registration of patients in the emergency treatment of head injuries.

    PubMed

    Gooroochurn, M; Kerr, D; Bouazza-Marouf, K; Ovinis, M

    2011-02-01

    This paper describes the development of a registration framework for image-guided solutions to the automation of certain routine neurosurgical procedures. The registration process aligns the pose of the patient in the preoperative space to that of the intraoperative space. Computerized tomography images are used in the preoperative (planning) stage, whilst white light (TV camera) images are used to capture the intraoperative pose. Craniofacial landmarks, rather than artificial markers, are used as the registration basis for the alignment. To create further synergy between the user and the image-guided system, automated methods for extraction of these landmarks have been developed. The results obtained from the application of a polynomial neural network classifier based on Gabor features for the detection and localization of the selected craniofacial landmarks, namely the ear tragus and eye corners, in the white-light modality are presented. The robustness of the classifier to variations in intensity and noise is analysed. The results show that such a classifier gives good performance for the extraction of craniofacial landmarks.

  2. A novel image registration approach via combining local features and geometric invariants

    PubMed Central

    Lu, Yan; Gao, Kun; Zhang, Tinghua; Xu, Tingfa

    2018-01-01

    Image registration is widely used in many fields, but the adaptability of existing methods is limited. This work proposes a novel image registration method with high precision for various complex applications. In this framework, the registration problem is divided into two stages. First, we detect and describe scale-invariant feature points using a modified oriented FAST and rotated BRIEF (ORB) algorithm, and propose a simple method to improve the performance of feature-point matching. Second, we develop a new local constraint for rough selection according to the feature distances. Evidence shows that existing matching techniques based on image features are insufficient for images with sparse image details. We therefore propose a novel matching algorithm via geometric constraints and establish local feature descriptions based on geometric invariants for the selected feature points. Subsequently, a new cost function is constructed to evaluate the similarities between points and obtain exact matching pairs. Finally, we employ the progressive sample consensus method to remove wrong matches and calculate the spatial transform parameters. Experimental results on various complex image datasets verify that the proposed method is more robust and significantly reduces the rate of false matches while retaining more high-quality feature points. PMID:29293595
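
The first stage pairs binary ORB descriptors by Hamming distance. A minimal numpy sketch of that matching step, using Lowe's ratio test as a generic acceptance rule (the paper's own matching improvement is not specified in this record, so the ratio test here is a stand-in):

```python
import numpy as np

def hamming_matches(desc_a, desc_b, ratio=0.8):
    """Match binary descriptors (rows of uint8 bytes) by Hamming distance,
    keeping only matches that pass the ratio test against the second-best."""
    # Pairwise Hamming distance via XOR + bit counting.
    xor = desc_a[:, None, :] ^ desc_b[None, :, :]
    dist = np.unpackbits(xor, axis=2).sum(axis=2)
    matches = []
    for i, row in enumerate(dist):
        j1, j2 = np.argsort(row)[:2]
        if row[j1] < ratio * row[j2]:       # best clearly better than runner-up
            matches.append((i, j1))
    return matches
```

Ambiguous candidates are rejected before the geometric-constraint stage removes the remaining outliers.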

  3. Joint multi-object registration and segmentation of left and right cardiac ventricles in 4D cine MRI

    NASA Astrophysics Data System (ADS)

    Ehrhardt, Jan; Kepp, Timo; Schmidt-Richberg, Alexander; Handels, Heinz

    2014-03-01

    The diagnosis of cardiac function based on cine MRI requires the segmentation of cardiac structures in the images, but the problem of automatic cardiac segmentation is still open, due to the imaging characteristics of cardiac MR images and the anatomical variability of the heart. In this paper, we present a variational framework for joint segmentation and registration of multiple structures of the heart. To enable the simultaneous segmentation and registration of multiple objects, a shape prior term is introduced into a region competition approach for multi-object level set segmentation. The proposed algorithm is applied for simultaneous segmentation of the myocardium as well as the left and right ventricular blood pools in short-axis cine MR images. Two experiments are performed: first, intra-patient 4D segmentation with a given initial segmentation for one time-point in a 4D sequence; and second, a multi-atlas segmentation strategy applied to unseen patient data. Segmentation accuracy is evaluated by overlap coefficients and surface distances. An evaluation based on clinical 4D cine MR images of 25 patients shows the benefit of the combined approach compared to sole registration and sole segmentation.
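
The overlap evaluation mentioned here is usually the Dice coefficient computed per structure. A minimal sketch (label values for myocardium and blood pools are illustrative, not from the paper):

```python
import numpy as np

def dice(seg_a, seg_b, label):
    """Dice overlap coefficient for one label in two segmentation maps."""
    a = (seg_a == label)
    b = (seg_b == label)
    denom = a.sum() + b.sum()
    # Convention: both empty -> perfect agreement.
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

# Example label convention: 1 = myocardium, 2 = LV blood pool, 3 = RV blood pool.
```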

  4. Robust Nonrigid Multimodal Image Registration using Local Frequency Maps*

    PubMed Central

    Jian, Bing; Vemuri, Baba C.; Marroquin, José L.

    2008-01-01

    Automatic multi-modal image registration is central to numerous tasks in medical imaging today and has a vast range of applications, e.g., image guidance, atlas construction, etc. In this paper, we present a novel multi-modal 3D non-rigid registration algorithm wherein the 3D images to be registered are represented by their corresponding local frequency maps, efficiently computed using the Riesz transform as opposed to the popularly used Gabor filters. The non-rigid registration between these local frequency maps is formulated in a statistically robust framework involving the minimization of the integral squared error, a.k.a. L2E (L2 error). This error is expressed as the squared difference between the true density of the residual (the difference between the non-rigidly transformed reference and the target local frequency representations) and a Gaussian or mixture-of-Gaussians density approximation of the same. The non-rigid transformation is expressed in a B-spline basis to achieve the desired smoothness in the transformation as well as computational efficiency. The key contributions of this work are (i) the use of the Riesz transform to achieve better efficiency in computing the local frequency representation in comparison to Gabor filter-based approaches, (ii) a new mathematical model for local-frequency-based non-rigid registration, and (iii) analytic computation of the gradient of the robust non-rigid registration cost function to achieve efficient and accurate registration. The proposed non-rigid L2E-based registration is a significant extension of research reported in the literature to date. We present experimental results for registering several real data sets with synthetic and real non-rigid misalignments. PMID:17354721
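
The L2E criterion measures the integrated squared difference between an assumed residual density and the empirical one; for a zero-mean Gaussian model it has a simple closed form. A sketch of just the criterion (the paper minimizes it over B-spline transform parameters, which is omitted here):

```python
import numpy as np

def l2e_gaussian(residuals, sigma):
    """L2E criterion for a zero-mean Gaussian residual model:
    integral of the squared model density (1 / (2*sigma*sqrt(pi)))
    minus twice the empirical mean of the density at the residuals.
    Robust: large outlier residuals contribute almost nothing."""
    phi = np.exp(-residuals**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))
    return 1.0 / (2 * sigma * np.sqrt(np.pi)) - 2.0 * phi.mean()
```

Well-aligned local frequency maps give small residuals and hence a lower (better) L2E value than misaligned ones.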

  5. Interactive CT-Video Registration for the Continuous Guidance of Bronchoscopy

    PubMed Central

    Merritt, Scott A.; Khare, Rahul; Bascom, Rebecca

    2014-01-01

    Bronchoscopy is a major step in lung cancer staging. To perform bronchoscopy, the physician uses a procedure plan, derived from a patient’s 3D computed-tomography (CT) chest scan, to navigate the bronchoscope through the lung airways. Unfortunately, physicians vary greatly in their ability to perform bronchoscopy. As a result, image-guided bronchoscopy systems, drawing upon the concept of CT-based virtual bronchoscopy (VB), have been proposed. These systems attempt to register the bronchoscope’s live position within the chest to a CT-based virtual chest space. Recent methods, which register the bronchoscopic video to CT-based endoluminal airway renderings, show promise but do not enable continuous real-time guidance. We present a CT-video registration method inspired by computer-vision innovations in the fields of image alignment and image-based rendering. In particular, motivated by the Lucas–Kanade algorithm, we propose an inverse-compositional framework built around a gradient-based optimization procedure. We next propose an implementation of the framework suitable for image-guided bronchoscopy. Laboratory tests, involving both single frames and continuous video sequences, demonstrate the robustness and accuracy of the method. Benchmark timing tests indicate that the method can run continuously at 300 frames/s, well beyond the real-time bronchoscopic video rate of 30 frames/s. This compares extremely favorably to the ≥1 s/frame speeds of other methods and indicates the method’s potential for real-time continuous registration. A human phantom study confirms the method’s efficacy for real-time guidance in a controlled setting, and, hence, points the way toward the first interactive CT-video registration approach for image-guided bronchoscopy. Along this line, we demonstrate the method’s efficacy in a complete guidance system by presenting a clinical study involving lung cancer patients. PMID:23508260

  6. Robust 3D-2D image registration: application to spine interventions and vertebral labeling in the presence of anatomical deformation

    NASA Astrophysics Data System (ADS)

    Otake, Yoshito; Wang, Adam S.; Webster Stayman, J.; Uneri, Ali; Kleinszig, Gerhard; Vogt, Sebastian; Khanna, A. Jay; Gokaslan, Ziya L.; Siewerdsen, Jeffrey H.

    2013-12-01

    We present a framework for robustly estimating registration between a 3D volume image and a 2D projection image and evaluate its precision and robustness in spine interventions for vertebral localization in the presence of anatomical deformation. The framework employs a normalized gradient information similarity metric and multi-start covariance matrix adaptation evolution strategy (CMA-ES) optimization with local restarts, which provided improved robustness against deformation and content mismatch. The parallelized implementation allowed orders-of-magnitude acceleration in computation time and improved the robustness of registration via multi-start global optimization. Experiments involved a cadaver specimen, two CT datasets (supine and prone), and 36 C-arm fluoroscopy images acquired with the specimen in four positions (supine, prone, supine with lordosis, prone with kyphosis), three regions (thoracic, abdominal, and lumbar), and three levels of geometric magnification (1.7, 2.0, 2.4). Registration accuracy was evaluated in terms of projection distance error (PDE) between the estimated and true target points in the projection image, including 14,400 random trials (200 trials on the 72 registration scenarios) with initialization error up to ±200 mm and ±10°. The resulting median PDE was better than 0.1 mm in all cases, depending somewhat on the resolution of input CT and fluoroscopy images. The cadaver experiments illustrated the tradeoff between robustness and computation time, yielding a success rate of 99.993% in vertebral labeling (with ‘success’ defined as PDE <5 mm) using 1,718,664 ± 96,582 function evaluations computed in 54.0 ± 3.5 s on a mid-range GPU (NVIDIA GeForce GTX 690). Parameters yielding a faster search (e.g., fewer multi-starts) reduced robustness under conditions of large deformation and poor initialization (99.535% success for the same data registered in 13.1 s), but given good initialization (e.g., ±5 mm, assuming a robust initial run) the same registration could be solved with 99.993% success in 6.3 s. The ability to register CT to fluoroscopy in a manner robust to patient deformation could be valuable in applications such as radiation therapy, interventional radiology, and target localization (e.g., vertebral labeling) in image-guided spine surgery.
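
The projection distance error (PDE) used throughout these experiments is simply the 2D distance between a projected 3D target point and its true location in the projection image. A sketch, assuming a 3x4 projection matrix acting on homogeneous coordinates:

```python
import numpy as np

def projection_distance_error(P, x3d, u_true):
    """PDE: project a 3D target point with a 3x4 projection matrix P,
    dehomogenize, and measure the 2D distance to its true location."""
    xh = P @ np.append(x3d, 1.0)      # homogeneous projection
    u_est = xh[:2] / xh[2]            # perspective divide
    return np.linalg.norm(u_est - u_true)
```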

  7. Hierarchical patch-based co-registration of differently stained histopathology slides

    NASA Astrophysics Data System (ADS)

    Yigitsoy, Mehmet; Schmidt, Günter

    2017-03-01

    Over the past decades, digital pathology has emerged as an alternative way of looking at tissue at the subcellular level. It enables multiplexed analysis of different cell types at the micron level. Information about cell types can be extracted by staining sections of a tissue block with different markers. However, robust fusion of structural and functional information from different stains is necessary for reproducible multiplexed analysis. Such a fusion can be obtained via image co-registration by establishing spatial correspondences between tissue sections. Spatial correspondences can then be used to transfer various statistics about cell types between sections. However, the multi-modal nature of the images and the sparse distribution of interesting cell types pose several challenges for the registration of differently stained tissue sections. In this work, we propose a co-registration framework that efficiently addresses these challenges. We present a hierarchical patch-based registration of intensity-normalized tissue sections. Preliminary experiments demonstrate the potential of the proposed technique for the fusion of multi-modal information from differently stained digital histopathology sections.

  8. WE-AB-BRA-01: 3D-2D Image Registration for Target Localization in Spine Surgery: Comparison of Similarity Metrics Against Robustness to Content Mismatch

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    De Silva, T; Ketcha, M; Siewerdsen, J H

    Purpose: In image-guided spine surgery, mapping 3D preoperative images to 2D intraoperative images via 3D-2D registration can provide valuable assistance in target localization. However, the presence of surgical instrumentation, hardware implants, and soft-tissue resection/displacement causes mismatches in image content, confounding existing registration methods. Manual/semi-automatic methods to mask such extraneous content are time consuming, user-dependent, error prone, and disruptive to clinical workflow. We developed and evaluated two novel similarity metrics within a robust registration framework to overcome such challenges in target localization. Methods: An IRB-approved retrospective study in 19 spine surgery patients included 19 preoperative 3D CT images and 50 intraoperative mobile radiographs in cervical, thoracic, and lumbar spine regions. A neuroradiologist provided truth definition of vertebral positions in CT and radiography. 3D-2D registration was performed using the CMA-ES optimizer with 4 gradient-based image similarity metrics: (1) gradient information (GI); (2) gradient correlation (GC); (3) a novel variant referred to as gradient orientation (GO); and (4) a second variant referred to as truncated gradient correlation (TGC). Registration accuracy was evaluated in terms of the projection distance error (PDE) of the vertebral levels. Results: Conventional similarity metrics were susceptible to gross registration error and failure modes associated with the presence of surgical instrumentation: for GI, the median PDE and interquartile range was 33.0±43.6 mm; similarly for GC, PDE = 23.0±92.6 mm. The robust metrics GO and TGC, on the other hand, demonstrated major improvement in PDE (7.6±9.4 mm and 8.1±18.1 mm, respectively) and elimination of gross failure modes. Conclusion: The proposed GO and TGC similarity measures improve registration accuracy and robustness to gross failure in the presence of strong image content mismatch. Such registration capability could offer valuable assistance in target localization without disruption of clinical workflow. G. Kleinszig and S. Vogt are employees of Siemens Healthcare.
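
Of the four metrics compared, gradient correlation (GC) is commonly defined as the average normalized cross-correlation of the image gradients; a minimal 2D sketch of that baseline (the GO and TGC variants proposed in the abstract are not reproduced here):

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation of two arrays."""
    a = a - a.mean()
    b = b - b.mean()
    return (a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)

def gradient_correlation(fixed, moving):
    """Gradient correlation: average NCC of the row- and column-wise
    intensity gradients of the two images (one common formulation)."""
    gr_f, gc_f = np.gradient(fixed)
    gr_m, gc_m = np.gradient(moving)
    return 0.5 * (ncc(gr_f, gr_m) + ncc(gc_f, gc_m))
```

Comparing gradients rather than raw intensities makes the metric insensitive to global intensity offsets between DRR and radiograph.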

  9. Spectral embedding-based registration (SERg) for multimodal fusion of prostate histology and MRI

    NASA Astrophysics Data System (ADS)

    Hwuang, Eileen; Rusu, Mirabela; Karthigeyan, Sudha; Agner, Shannon C.; Sparks, Rachel; Shih, Natalie; Tomaszewski, John E.; Rosen, Mark; Feldman, Michael; Madabhushi, Anant

    2014-03-01

    Multi-modal image registration is needed to align medical images collected from different protocols or imaging sources, thereby allowing the mapping of complementary information between images. One challenge of multimodal image registration is that typical similarity measures rely on statistical correlations between image intensities to determine anatomical alignment. The use of alternate image representations could allow intensities to be mapped into a space or representation in which the multimodal images appear more similar, thus facilitating their co-registration. In this work, we present a spectral embedding-based registration (SERg) method that uses non-linearly embedded representations obtained from independent components of statistical texture maps of the original images to facilitate multimodal image registration. Our methodology comprises the following main steps: 1) image-derived textural representation of the original images, 2) dimensionality reduction using independent component analysis (ICA), 3) spectral embedding to generate the alternate representations, and 4) image registration. The rationale behind our approach is that SERg yields embedded representations that can allow very different-looking images to appear more similar, thereby facilitating improved co-registration. Statistical texture features are derived from the image intensities and then reduced to a smaller set by using independent component analysis to remove redundant information. Spectral embedding generates a new representation by eigendecomposition, from which only the most important eigenvectors are selected. This helps to accentuate areas of salience based on modality-invariant structural information and therefore better identifies corresponding regions in both the template and target images. The spirit behind SERg is that image registration driven by these areas of salience and correspondence should improve alignment accuracy. In this work, SERg is implemented using the Demons algorithm to more effectively register multimodal images. SERg is also tested within the free-form deformation framework driven by mutual information. Nine pairs of synthetic T1-weighted and T2-weighted brain MRI were registered under the following conditions: five levels of noise (0%, 1%, 3%, 5%, and 7%) and two levels of bias field (20% and 40%), each with and without noise. We demonstrate that across all of these conditions, SERg yields a mean squared error that is 81.51% lower than that of Demons driven by MRI intensity alone. We also spatially align twenty-six ex vivo histology sections and in vivo prostate MRI in order to map the spatial extent of prostate cancer onto corresponding radiologic imaging. SERg performs better than intensity registration by decreasing the root mean squared distance of annotated landmarks in the prostate gland via both the Demons algorithm and mutual information-driven free-form deformation. In both synthetic and clinical experiments, the observed improvement in alignment of the template and target images suggests the utility of parametric eigenvector representations and hence SERg for multimodal image registration.

  10. The Insight ToolKit image registration framework

    PubMed Central

    Avants, Brian B.; Tustison, Nicholas J.; Stauffer, Michael; Song, Gang; Wu, Baohua; Gee, James C.

    2014-01-01

    Publicly available scientific resources help establish evaluation standards, provide a platform for teaching and improve reproducibility. Version 4 of the Insight ToolKit (ITK4) seeks to establish new standards in publicly available image registration methodology. ITK4 makes several advances in comparison to previous versions of ITK. ITK4 supports both multivariate images and objective functions; it also unifies high-dimensional (deformation field) and low-dimensional (affine) transformations with metrics that are reusable across transform types and with composite transforms that allow arbitrary series of geometric mappings to be chained together seamlessly. Metrics and optimizers take advantage of multi-core resources, when available. Furthermore, ITK4 reduces the parameter optimization burden via principled heuristics that automatically set scaling across disparate parameter types (rotations vs. translations). A related approach also constrains step sizes for gradient-based optimizers. The result is that tuning for different metrics and/or image pairs is rarely necessary, allowing the researcher to focus more easily on the design and comparison of registration strategies. In total, the ITK4 contribution is intended as a structure that supports reproducible research practices, provides a more extensive foundation against which to evaluate new work in image registration, and gives application-level programmers a broad suite of tools on which to build. Finally, we contextualize this work with a reference registration evaluation study with application to pediatric brain labeling. PMID:24817849
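
The composite-transform idea — chaining an arbitrary series of geometric mappings and applying them as one — can be illustrated with plain homogeneous matrices (a conceptual sketch only, not the ITK4 API):

```python
import numpy as np

def affine(matrix, translation):
    """Build a 3x3 homogeneous transform from a 2x2 linear part and a translation."""
    H = np.eye(3)
    H[:2, :2] = matrix
    H[:2, 2] = translation
    return H

# Chain a rotation followed by a translation; the composite maps points
# through the whole sequence with a single matrix multiplication.
theta = np.pi / 2
rot = affine([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]], [0, 0])
shift = affine(np.eye(2), [5, 0])
composite = shift @ rot           # rot is applied first, then shift

p = np.array([1.0, 0.0, 1.0])     # homogeneous 2D point
q = composite @ p                 # rotate (1,0) -> (0,1), then shift -> (5,1)
```

ITK4 generalizes this to deformation fields and B-spline transforms in the same chain, which plain matrices cannot represent.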

  11. Directly manipulated free-form deformation image registration.

    PubMed

    Tustison, Nicholas J; Avants, Brian B; Gee, James C

    2009-03-01

    Previous contributions to both the research and open source software communities detailed a generalization of a fast scalar field fitting technique for cubic B-splines based on the work originally proposed by Lee. One advantage of our proposed generalized B-spline fitting approach is its immediate application to a class of nonrigid registration techniques frequently employed in medical image analysis. Specifically, these registration techniques fall under the rubric of free-form deformation (FFD) approaches, in which the object to be registered is embedded within a B-spline object. The deformation of the B-spline object describes the transformation of the image registration solution. Representative of this class of techniques, and often cited within the relevant community, is the formulation of Rueckert, who employed cubic splines with normalized mutual information to study breast deformation. Similar techniques from various groups provided incremental novelty in the form of disparate explicit regularization terms, as well as the employment of various image metrics and tailored optimization methods. For several algorithms, the underlying gradient-based optimization retained the essential characteristics of Rueckert's original contribution. The contribution we provide in this paper is two-fold: 1) the observation that the generic FFD framework is intrinsically susceptible to problematic energy topographies, and 2) that the standard gradient used in FFD image registration can be modified to a well-understood preconditioned form which substantially improves performance. This is demonstrated with theoretical discussion and comparative evaluation experiments.
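
In the FFD model, the deformation at a point is a cubic B-spline combination of nearby control-point displacements. A 1D sketch of that evaluation (Rueckert-style; the grid spacing and control values are illustrative, and the preconditioned gradient proposed in the paper is not shown):

```python
import numpy as np

def bspline_basis(u):
    """The four cubic B-spline basis weights for fractional position u in [0, 1)."""
    return np.array([
        (1 - u) ** 3 / 6.0,
        (3 * u**3 - 6 * u**2 + 4) / 6.0,
        (-3 * u**3 + 3 * u**2 + 3 * u + 1) / 6.0,
        u**3 / 6.0,
    ])

def ffd_displacement(x, phi, spacing):
    """1D free-form deformation: displacement at x as a cubic B-spline
    combination of the four supporting control-point displacements phi."""
    t = x / spacing
    i = int(np.floor(t)) - 1          # index of the first supporting control point
    u = t - np.floor(t)               # fractional position within the cell
    w = bspline_basis(u)
    return sum(w[l] * phi[i + l] for l in range(4))
```

Because the basis weights sum to one, a uniform control grid reproduces a uniform displacement exactly.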

  12. A study on the theoretical and practical accuracy of conoscopic holography-based surface measurements: toward image registration in minimally invasive surgery†

    PubMed Central

    Burgner, J.; Simpson, A. L.; Fitzpatrick, J. M.; Lathrop, R. A.; Herrell, S. D.; Miga, M. I.; Webster, R. J.

    2013-01-01

    Background Registered medical images can assist with surgical navigation and enable image-guided therapy delivery. In soft tissues, surface-based registration is often used and can be facilitated by laser surface scanning. Tracked conoscopic holography (which provides distance measurements) has recently been proposed as a minimally invasive way to obtain surface scans. Moving this technique from concept to clinical use requires a rigorous accuracy evaluation, which is the purpose of our paper. Methods We adapt recent non-homogeneous and anisotropic point-based registration results to provide a theoretical framework for predicting the accuracy of tracked distance measurement systems. Experiments are conducted on complex objects of defined geometry, an anthropomorphic kidney phantom, and a human cadaver kidney. Results Experiments agree with model predictions, producing point RMS errors consistently < 1 mm, surface-based registration with mean closest point error < 1 mm in the phantom, and an RMS target registration error of 0.8 mm in the human cadaver kidney. Conclusions Tracked conoscopic holography is clinically viable; it enables minimally invasive surface scan accuracy comparable to current clinical methods that require open surgery. PMID:22761086

  13. WE-AB-BRA-08: Correction of Patient Motion in C-Arm Cone-Beam CT Using 3D-2D Registration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ouadah, S; Jacobson, M; Stayman, JW

    2016-06-15

    Purpose: Intraoperative C-arm cone-beam CT (CBCT) is subject to artifacts arising from patient motion during the fairly long (∼5–20 s) scan times. We present a fiducial-free method to mitigate motion artifacts using 3D-2D image registration that simultaneously corrects residual errors in geometric calibration. Methods: A 3D-2D registration process was used to register each projection to DRRs computed from the 3D image by maximizing gradient orientation (GO) using the CMA-ES optimizer. The resulting rigid 6-DOF transforms were applied to the system projection matrices, and a 3D image was reconstructed via model-based image reconstruction (MBIR, which accommodates the resulting noncircular orbit). Experiments were conducted using a Zeego robotic C-arm (20 s, 200°, 496 projections) to image a head phantom undergoing various types of motion: 1) 5° lateral motion; 2) 15° lateral motion; and 3) 5° lateral motion with 10 mm periodic inferior-superior motion. Images were reconstructed using a penalized likelihood (PL) objective function, and structural similarity (SSIM) was measured for axial slices of the reconstructed images. A motion-free image was acquired using the same protocol for comparison. Results: There was significant improvement (p < 0.001) in the SSIM of the motion-corrected (MC) images compared to uncorrected images. The SSIM in MC-PL images was >0.99, indicating near identity to the motion-free reference. The point spread function (PSF) measured from a wire in the phantom was restored to that of the reference in each case. Conclusion: The 3D-2D registration method provides a robust framework for mitigation of motion artifacts and is expected to hold for applications in the head, pelvis, and extremities with reasonably constrained operative setup. Further improvement can be achieved by incorporating multiple rigid components and non-rigid deformation within the framework. The method is highly parallelizable and could in principle be run with every acquisition. Research supported by National Institutes of Health Grant No. R01-EB-017226 and an academic-industry partnership with Siemens Healthcare (AX Division, Forchheim, Germany).

  14. Registration of segmented histological images using thin plate splines and belief propagation

    NASA Astrophysics Data System (ADS)

    Kybic, Jan

    2014-03-01

    We register images based on their multiclass segmentations, for cases when correspondence of local features cannot be established. A discrete mutual information is used as a similarity criterion. It is evaluated at a sparse set of locations on the interfaces between classes. A thin-plate spline regularization is approximated by pairwise interactions. The problem is cast into a discrete setting and solved efficiently by belief propagation. Further speedup and robustness are provided by a multiresolution framework. Preliminary experiments suggest that our method can provide registration quality similar to standard methods at a fraction of the computational cost.
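
The thin-plate spline that this method approximates has a closed-form interpolant: solve a small linear system over the landmark kernel matrix, then evaluate. A 2D sketch of exact TPS landmark interpolation (the paper's pairwise-interaction approximation is not reproduced here):

```python
import numpy as np

def tps_fit(src, dst):
    """Fit a 2D thin-plate spline mapping src landmarks onto dst; returns
    the interpolating transform. Kernel: U(r) = r^2 log r."""
    n = len(src)
    d = np.linalg.norm(src[:, None, :] - src[None, :, :], axis=2)
    K = np.where(d > 0, d**2 * np.log(d + 1e-12), 0.0)
    P = np.hstack([np.ones((n, 1)), src])          # affine part basis [1, x, y]
    A = np.zeros((n + 3, n + 3))
    A[:n, :n] = K
    A[:n, n:] = P
    A[n:, :n] = P.T
    b = np.zeros((n + 3, 2))
    b[:n] = dst
    coef = np.linalg.solve(A, b)                   # kernel weights + affine part

    def transform(pts):
        dd = np.linalg.norm(pts[:, None, :] - src[None, :, :], axis=2)
        KK = np.where(dd > 0, dd**2 * np.log(dd + 1e-12), 0.0)
        PP = np.hstack([np.ones((len(pts), 1)), pts])
        return KK @ coef[:n] + PP @ coef[n:]

    return transform
```

By construction the fitted spline reproduces the destination landmarks exactly while minimizing bending energy elsewhere.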

  15. A practical salient region feature based 3D multi-modality registration method for medical images

    NASA Astrophysics Data System (ADS)

    Hahn, Dieter A.; Wolz, Gabriele; Sun, Yiyong; Hornegger, Joachim; Sauer, Frank; Kuwert, Torsten; Xu, Chenyang

    2006-03-01

    We present a novel representation of 3D salient region features and its integration into a hybrid rigid-body registration framework. We adopt the scale, translation and rotation invariance properties of these intrinsic 3D features to estimate a transform between the underlying mono- or multi-modal 3D medical images. Our method combines advantageous aspects of both feature- and intensity-based approaches and consists of three steps: automatic extraction of a set of 3D salient region features on each image, robust estimation of correspondences, and their sub-pixel accurate refinement with outlier elimination. We propose a region-growing based approach for the extraction of 3D salient region features, a solution to the problem of feature clustering, and a reduction of the correspondence search space complexity. Results of the developed algorithm are presented for both mono- and multi-modal intra-patient 3D image pairs (CT, PET and SPECT) that have been acquired for change detection, tumor localization, and time-based intra-person studies. The accuracy of the method is clinically evaluated by a medical expert with an approach that measures the distance between a set of selected corresponding points consisting of both anatomical and functional structures or lesion sites. This demonstrates the robustness of the proposed method to image overlap, missing information and artefacts. We conclude by discussing potential medical applications and possibilities for integration into a non-rigid registration framework.

  16. A framework for incorporating DTI Atlas Builder registration into Tract-Based Spatial Statistics and a simulated comparison to standard TBSS.

    PubMed

    Leming, Matthew; Steiner, Rachel; Styner, Martin

    2016-02-27

    Tract-based spatial statistics (TBSS) is a software pipeline widely employed in comparative analysis of white matter integrity from diffusion tensor imaging (DTI) datasets. In this study, we seek to evaluate the relationship between different methods of atlas registration for use with TBSS and different measurements of DTI (fractional anisotropy, FA; axial diffusivity, AD; radial diffusivity, RD; and mean diffusivity, MD). To do so, we have developed a novel tool that builds on existing diffusion atlas building software, integrating it into an adapted version of TBSS called DAB-TBSS (DTI Atlas Builder-Tract-Based Spatial Statistics) by using the advanced registration offered in DTI Atlas Builder. To compare the effectiveness of these two versions of TBSS, we also propose a framework for simulating population differences in diffusion tensor imaging data, providing a more substantive means of empirically comparing DTI group analysis programs such as TBSS. In this study, we used 33 diffusion tensor imaging datasets and simulated group-wise changes in the data by increasing, in three different simulations, the principal eigenvalue (directly altering AD), the second and third eigenvalues (RD), and all three eigenvalues (MD) in the genu, the right uncinate fasciculus, and the left IFO. Additionally, we assessed the benefits of comparing the tensors directly using a functional analysis of diffusion tensor tract statistics (FADTTS). Our results indicate comparable levels of FA-based detection between DAB-TBSS and TBSS, with standard TBSS registration reporting a higher rate of false positives in other measurements of DTI. Within the simulated changes investigated here, this study suggests that the use of DTI Atlas Builder's registration enhances TBSS group-based studies.
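
The four DTI measures compared here are simple functions of the diffusion tensor's three eigenvalues; for reference:

```python
import numpy as np

def dti_measures(evals):
    """FA, AD, RD, MD from the three tensor eigenvalues (l1 >= l2 >= l3)."""
    l1, l2, l3 = evals
    md = (l1 + l2 + l3) / 3.0                     # mean diffusivity
    ad = l1                                       # axial diffusivity
    rd = (l2 + l3) / 2.0                          # radial diffusivity
    num = (l1 - md) ** 2 + (l2 - md) ** 2 + (l3 - md) ** 2
    den = l1**2 + l2**2 + l3**2
    fa = np.sqrt(1.5 * num / den)                 # fractional anisotropy
    return fa, ad, rd, md
```

This is why the simulations map cleanly onto the measures: raising only the principal eigenvalue changes AD, raising the second and third changes RD, and raising all three changes MD.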

  17. Biological fiducial point based registration for multiple brain tissues reconstructed from different imaging modalities

    NASA Astrophysics Data System (ADS)

    Wu, Huiqun; Zhou, Gangping; Geng, Xingyun; Zhang, Xiaofeng; Jiang, Kui; Tang, Lemin; Zhou, Guomin; Dong, Jiancheng

    2013-10-01

    With the development of computer-aided navigation systems, more and more tissues must be reconstructed to provide useful information for surgical pathway planning. In this study, we propose a registration framework for tissues reconstructed from multiple modalities, based on fiducial points on the lateral ventricles. A male patient with a brain lesion was admitted, and his brain was scanned with several modalities. The different brain tissues were then segmented in each modality with suitable algorithms. Marching cubes was used for three-dimensional reconstruction, and the rendered tissues were imported into a common coordinate system for registration. Four pairs of fiducial markers were selected to calculate the rotation and translation matrix using a least-squares method. The registration results were satisfactory for planning a glioblastoma surgery, as they provide the spatial relationship between the tumor and the surrounding fibers and vessels. Hence, our framework is of potential value for clinicians in surgical planning.
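
The least-squares rotation and translation from paired fiducial points has the classic SVD-based (Kabsch/Umeyama) solution; a sketch of that step (this is the standard method, not necessarily the authors' exact implementation):

```python
import numpy as np

def rigid_fit(src, dst):
    """Least-squares rigid transform (R, t) mapping src fiducial points
    onto dst, via the SVD-based Kabsch method: dst ~ R @ src + t."""
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
    d = np.sign(np.linalg.det(Vt.T @ U.T))        # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t
```

Four non-coplanar marker pairs, as used here, are more than enough to determine the six rigid-body parameters.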

  18. Computed tomography lung iodine contrast mapping by image registration and subtraction

    NASA Astrophysics Data System (ADS)

    Goatman, Keith; Plakas, Costas; Schuijf, Joanne; Beveridge, Erin; Prokop, Mathias

    2014-03-01

    Pulmonary embolism (PE) is a relatively common and potentially life threatening disease, affecting around 600,000 people annually in the United States alone. Prompt treatment using anticoagulants is effective and saves lives, but unnecessary treatment risks life threatening haemorrhage. The specificity of any diagnostic test for PE is therefore as important as its sensitivity. Computed tomography (CT) angiography is routinely used to diagnose PE. However, there are concerns it may over-report the condition. Additional information about the severity of an occlusion can be obtained from an iodine contrast map that represents tissue perfusion. Such maps tend to be derived from dual-energy CT acquisitions. However, they may also be calculated by subtracting pre- and post-contrast CT scans. Indeed, there are technical advantages to such a subtraction approach, including better contrast-to-noise ratio for the same radiation dose, and bone suppression. However, subtraction relies on accurate image registration. This paper presents a framework for the automatic alignment of pre- and post-contrast lung volumes prior to subtraction. The registration accuracy is evaluated for seven subjects for whom pre- and post-contrast helical CT scans were acquired using a Toshiba Aquilion ONE scanner. One hundred corresponding points were annotated on the pre- and post-contrast scans, distributed throughout the lung volume. Surface-to-surface error distances were also calculated from lung segmentations. Prior to registration the mean Euclidean landmark alignment error was 2.57mm (range 1.43-4.34 mm), and following registration the mean error was 0.54mm (range 0.44-0.64 mm). The mean surface error distance was 1.89mm before registration and 0.47mm after registration. There was a commensurate reduction in visual artefacts following registration. 
In conclusion, a framework for pre- and post-contrast lung registration has been developed that is sufficiently accurate for lung subtraction iodine mapping.
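The landmark-based accuracy figures above amount to mean Euclidean distances over corresponding point pairs. A minimal sketch of that evaluation (array contents and names are illustrative, not from the paper):

```python
import numpy as np

def mean_landmark_error(fixed_pts, moving_pts):
    """Mean, min, and max Euclidean distance between corresponding
    landmarks, given two N x 3 arrays of point coordinates in mm."""
    fixed_pts = np.asarray(fixed_pts, dtype=float)
    moving_pts = np.asarray(moving_pts, dtype=float)
    dists = np.linalg.norm(fixed_pts - moving_pts, axis=1)
    return dists.mean(), dists.min(), dists.max()

# Toy example: three landmark pairs offset by 1 mm along x
pre = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0], [0.0, 10.0, 0.0]])
post = pre + np.array([1.0, 0.0, 0.0])
mean_e, min_e, max_e = mean_landmark_error(pre, post)
```

The same statistic over one hundred annotated pairs, before and after registration, yields the figures reported above.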

  19. A Finite Element Method to Correct Deformable Image Registration Errors in Low-Contrast Regions

    PubMed Central

    Zhong, Hualiang; Kim, Jinkoo; Li, Haisen; Nurushev, Teamour; Movsas, Benjamin; Chetty, Indrin J.

    2012-01-01

Image-guided adaptive radiotherapy requires deformable image registration to map radiation dose back and forth between images. The purpose of this study is to develop a novel method to improve the accuracy of an intensity-based image registration algorithm in low-contrast regions. A computational framework has been developed in this study to improve the quality of the “demons” registration. For each voxel in the registration’s target image, the standard deviation of image intensity in a neighborhood of this voxel was calculated. A mask for high-contrast regions was generated based on these standard deviations. In the masked regions, a tetrahedral mesh was refined recursively so that a sufficient number of tetrahedral nodes in these regions could be selected as driving nodes. An elastic system driven by the displacements of the selected nodes was formulated using a finite element method (FEM) and implemented on the refined mesh. The displacements of these driving nodes were generated with the “demons” algorithm. The solution of the system was derived using a conjugate gradient method and interpolated to generate a displacement vector field for the registered images. The FEM correction method was compared with the “demons” algorithm on the CT images of lung and prostate patients. The performance of the FEM correction relative to the “demons” registration was analyzed based on the physical properties of their deformation maps, and quantitatively evaluated through a benchmark model developed specifically for this study. Compared to the benchmark model, the “demons” registration had a maximum error of 1.2 cm, which the FEM method reduced to 0.4 cm, and the average error of the “demons” registration was reduced from 0.17 cm to 0.11 cm. For the CT images of lung and prostate patients, the deformation maps generated by the “demons” algorithm were found to be unrealistic in several places.
In these places, the displacement differences between the “demons” registrations and their FEM corrections were in the range of 0.4 cm to 1.1 cm. The mesh refinement and FEM simulation were implemented in a single-threaded application requiring about 45 minutes of computation time on a 2.6 GHz computer. This study has demonstrated that the finite element method can be integrated with intensity-based image registration algorithms to improve their registration accuracy, especially in low-contrast regions. PMID:22581269
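The neighborhood standard-deviation mask used to select high-contrast driving regions can be sketched with a moving-window statistic; window size and threshold here are hypothetical, not values from the paper:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def high_contrast_mask(image, size=5, threshold=10.0):
    """Mask voxels whose local intensity standard deviation exceeds a
    threshold.  Local variance is computed as E[x^2] - E[x]^2 over a
    moving window, mirroring the per-voxel neighborhood statistic
    described in the abstract."""
    img = np.asarray(image, dtype=float)
    mean = uniform_filter(img, size)
    mean_sq = uniform_filter(img * img, size)
    local_std = np.sqrt(np.maximum(mean_sq - mean * mean, 0.0))
    return local_std > threshold

# Toy 2D example: flat background with one bright square;
# only the square's border has high local contrast
img = np.zeros((32, 32))
img[8:24, 8:24] = 100.0
mask = high_contrast_mask(img, size=5, threshold=10.0)
```

Driving nodes for the FEM system would then be restricted to the masked (high-contrast) voxels, where intensity-based matching is reliable.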

  20. GPU accelerated generation of digitally reconstructed radiographs for 2-D/3-D image registration.

    PubMed

    Dorgham, Osama M; Laycock, Stephen D; Fisher, Mark H

    2012-09-01

Recent advances in programming languages for graphics processing units (GPUs) provide developers with a convenient way of implementing applications which can be executed on the CPU and GPU interchangeably. GPUs are becoming relatively cheap, powerful, and widely available hardware components, which can be used to perform intensive calculations. The last decade of hardware performance developments shows that GPU-based computation is progressing significantly faster than CPU-based computation, particularly if one considers the execution of highly parallelisable algorithms. Future predictions illustrate that this trend is likely to continue. In this paper, we introduce a way of accelerating 2-D/3-D image registration by developing a hybrid system which executes on the CPU and utilizes the GPU for parallelizing the generation of digitally reconstructed radiographs (DRRs). Based on the advancements of the GPU over the CPU, it is timely to exploit the benefits of many-core GPU technology by developing algorithms for DRR generation. Although some previous work has investigated the rendering of DRRs using the GPU, this paper investigates approximations which reduce the computational overhead while still maintaining a quality consistent with that needed for 2-D/3-D registration with sufficient accuracy to be clinically acceptable in certain applications of radiation oncology. Furthermore, by comparing implementations of 2-D/3-D registration on the CPU and GPU, we investigate current performance and propose an optimal framework for PC implementations addressing the rigid registration problem. Using this framework, we are able to render DRR images from a 256×256×133 CT volume in ~24 ms using an NVIDIA GeForce 8800 GTX and in ~2 ms using an NVIDIA GeForce GTX 580.
In addition to applications requiring fast automatic patient setup, these levels of performance suggest image-guided radiation therapy at video frame rates is technically feasible using relatively low cost PC architecture.
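A DRR is, at its core, a set of simulated X-ray line integrals through the CT volume. The sketch below shows only the crudest parallel-projection case on the CPU, not the perspective ray casting or GPU approximations of the paper; the HU-to-attenuation scaling is a simplified assumption:

```python
import numpy as np

def drr_parallel(ct_volume, axis=0, mu_water=0.02):
    """Crude digitally reconstructed radiograph via parallel-ray line
    integrals: convert Hounsfield units to linear attenuation, sum along
    one axis, and apply Beer-Lambert attenuation."""
    hu = np.asarray(ct_volume, dtype=float)
    mu = mu_water * (1.0 + hu / 1000.0)   # HU -> linear attenuation (per voxel)
    line_integrals = mu.sum(axis=axis)    # integrate along parallel rays
    return np.exp(-line_integrals)        # transmitted fraction per ray

# Toy volume: water-equivalent background with a dense block inside
vol = np.zeros((16, 32, 32))
vol[:, 12:20, 12:20] = 500.0
drr = drr_parallel(vol, axis=0)
```

Real 2-D/3-D registration pipelines trace divergent rays from an X-ray source through the volume; the GPU work cited above parallelizes exactly that per-ray computation.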

  1. Multigrid optimal mass transport for image registration and morphing

    NASA Astrophysics Data System (ADS)

    Rehman, Tauseef ur; Tannenbaum, Allen

    2007-02-01

In this paper we present a computationally efficient Optimal Mass Transport algorithm. This method is based on the Monge-Kantorovich theory and is used for computing elastic registration and warping maps in image registration and morphing applications. This is a parameter-free method which utilizes all of the grayscale data in an image pair in a symmetric fashion, and no landmarks need to be specified for correspondence. In our work, we demonstrate a significant improvement in computation time over the method originally proposed by Haker et al. [1]. The original algorithm was based on a gradient descent method for removing the curl from an initial mass-preserving map regarded as a 2D vector field. This involves inverting the Laplacian in each iteration, which is now computed using a full multigrid technique, resulting in an improvement in computational time by a factor of two. Greater improvement is achieved by decimating the curl in a multi-resolutional framework. The algorithm was applied to 2D short-axis cardiac MRI images and brain MRI images for testing and comparison.
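Each iteration of the curl-removal scheme inverts a Laplacian. As a stand-in for the multigrid solver, the spectral (FFT-based) Poisson solve below illustrates that operation on a periodic grid; the paper's actual solver, boundary conditions, and discretization differ:

```python
import numpy as np

def poisson_fft(f):
    """Solve the discrete Poisson equation Laplacian(u) = f on a periodic
    2D grid via the FFT, using the eigenvalues 2*cos(k) - 2 of the
    standard five-point Laplacian along each axis."""
    n, m = f.shape
    kx = 2 * np.pi * np.fft.fftfreq(n)
    ky = 2 * np.pi * np.fft.fftfreq(m)
    lam = (2 * np.cos(kx)[:, None] - 2) + (2 * np.cos(ky)[None, :] - 2)
    lam[0, 0] = 1.0                    # avoid dividing the zero (mean) mode
    u_hat = np.fft.fft2(f) / lam
    u_hat[0, 0] = 0.0                  # fix the free constant: zero-mean solution
    return np.real(np.fft.ifft2(u_hat))

# Verify on a known eigenfunction of the discrete Laplacian
n = 64
x = np.arange(n)
u_true = np.cos(2 * np.pi * x / n)[:, None] * np.ones((1, n))
lam1 = 2 * np.cos(2 * np.pi / n) - 2   # its Laplacian eigenvalue
f = lam1 * u_true
u = poisson_fft(f)
```

Multigrid achieves the same O(N)-class cost without periodicity assumptions, which is why it suits the mass-transport iteration.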

  2. [Accurate 3D free-form registration between fan-beam CT and cone-beam CT].

    PubMed

    Liang, Yueqiang; Xu, Hongbing; Li, Baosheng; Li, Hongsheng; Yang, Fujun

    2012-06-01

Because of X-ray scatter, the CT numbers in cone-beam CT do not correspond exactly to electron densities. This results in registration errors when an intensity-based registration algorithm is used to register planning fan-beam CT and cone-beam CT. In order to reduce these errors, we have developed an accurate gradient-based registration algorithm. The gradient-based deformable registration problem is described as the minimization of an energy functional. Through the calculus of variations and a Gauss-Seidel finite difference method, we derived the iterative formula of the deformable registration. The algorithm was implemented on the GPU through the OpenCL framework, which greatly reduced the registration time. Our experimental results showed that the proposed gradient-based registration algorithm registers clinical cone-beam CT and fan-beam CT images more accurately than the intensity-based algorithm. The GPU-accelerated algorithm meets the real-time requirement of online adaptive radiotherapy.
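The Gauss-Seidel finite-difference relaxation at the heart of such an energy-minimizing scheme can be illustrated on a 1-D Poisson model problem; the registration functional itself is far more involved, so this shows only the in-place sweep pattern:

```python
import numpy as np

def gauss_seidel_poisson(rhs, u0, sweeps=3000):
    """In-place Gauss-Seidel sweeps for the 1-D Poisson equation
    u'' = f with zero Dirichlet boundaries, where `rhs` already holds
    h^2 * f at each grid point."""
    u = np.asarray(u0, dtype=float).copy()
    n = len(u)
    for _ in range(sweeps):
        for i in range(1, n - 1):
            # each update immediately uses the freshest neighbor values
            u[i] = 0.5 * (u[i - 1] + u[i + 1] - rhs[i])
    return u

# u'' = -pi^2 sin(pi x) on [0, 1] has exact solution u = sin(pi x)
n = 33
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]
rhs = -np.pi ** 2 * np.sin(np.pi * x) * h ** 2
u = gauss_seidel_poisson(rhs, np.zeros(n))
```

The sequential dependence between updates is what a GPU port (as in the OpenCL implementation above) must restructure, e.g. with red-black ordering.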

  3. Segmentation of radiographic images under topological constraints: application to the femur.

    PubMed

    Gamage, Pavan; Xie, Sheng Quan; Delmas, Patrice; Xu, Wei Liang

    2010-09-01

A framework for radiographic image segmentation under topological control based on two-dimensional (2D) image analysis was developed. The system is intended for use in common radiological tasks including fracture treatment analysis, osteoarthritis diagnostics and osteotomy management planning. The segmentation framework utilizes a generic three-dimensional (3D) model of the bone of interest to define the anatomical topology. Non-rigid registration is performed between the projected contours of the generic 3D model and extracted edges of the X-ray image to achieve the segmentation. For fractured bones, the segmentation requires an additional step in which a region-based active contour evolution is performed with a level-set Mumford-Shah method to obtain the fracture surface edge. The application of the segmentation framework to the analysis of human femur radiographs was evaluated. The proposed system has two major innovations. First, definition of the topological constraints does not require a statistical learning process, so the method is generally applicable to a variety of bony anatomy segmentation problems. Second, the methodology is able to handle both intact and fractured bone segmentation. Testing on clinical X-ray images yielded an average root mean squared distance (between the automatically segmented femur contour and the manually segmented ground truth) of 1.10 mm with a standard deviation of 0.13 mm. The proposed point correspondence estimation algorithm was benchmarked against three state-of-the-art point matching algorithms, demonstrating successful non-rigid registration for the cases of interest. A topologically constrained automatic bone contour segmentation framework was developed and tested, providing robustness to noise, outliers, deformations and occlusions.
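The reported root-mean-squared contour distance can be computed as follows, assuming point-sampled contours; the toy circles are hypothetical, not the paper's data:

```python
import numpy as np
from scipy.spatial import cKDTree

def rms_contour_distance(auto_pts, manual_pts):
    """Root-mean-square distance from each automatic contour point to
    the nearest manually segmented ground-truth point."""
    tree = cKDTree(np.asarray(manual_pts, dtype=float))
    d, _ = tree.query(np.asarray(auto_pts, dtype=float))
    return float(np.sqrt(np.mean(d ** 2)))

# Toy contours: a unit circle vs. the same circle shifted by 0.1
t = np.linspace(0.0, 2 * np.pi, 200, endpoint=False)
circle = np.c_[np.cos(t), np.sin(t)]
rmsd = rms_contour_distance(circle + [0.1, 0.0], circle)
```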

  4. Understanding bone responses in B-mode ultrasound images and automatic bone surface extraction using a Bayesian probabilistic framework

    NASA Astrophysics Data System (ADS)

    Jain, Ameet K.; Taylor, Russell H.

    2004-04-01

The registration of preoperative CT to intra-operative reality systems is a crucial step in Computer Assisted Orthopedic Surgery (CAOS). The intra-operative sensors include 3D digitizers, fiducials, X-rays and Ultrasound (US). Although US has many advantages over the others, tracked US for Orthopedic Surgery has been researched by only a few authors. An important factor limiting the accuracy of tracked US to CT registration (1-3 mm) has been the difficulty in determining the exact location of the bone surfaces in the US images (the response could range from 2-4 mm). Thus it is crucial to localize the bone surface accurately from these images. Moreover, conventional US imaging systems are known to have certain inherent inaccuracies, mainly due to the fact that the imaging model is assumed planar. This creates the need to develop a bone segmentation framework that can couple information from various post-processed, spatially separated US images (of the bone) to enhance the localization of the bone surface. In this paper we discuss the various reasons that cause inherent uncertainties in the bone surface localization (in B-mode US images) and suggest methods to account for these. We also develop a method for automatic bone surface detection. To do so, we account objectively for the high-level understanding of the various bone surface features visible in typical US images; a combination of these features finally decides the surface position. We use a Bayesian probabilistic framework, which strikes a fair balance between high-level understanding from features in an image and the low-level number crunching of standard image processing techniques. It also provides us with a mathematical approach that facilitates combining multiple images to augment the bone surface estimate.

  5. A spline-based non-linear diffeomorphism for multimodal prostate registration.

    PubMed

    Mitra, Jhimli; Kato, Zoltan; Martí, Robert; Oliver, Arnau; Lladó, Xavier; Sidibé, Désiré; Ghose, Soumya; Vilanova, Joan C; Comet, Josep; Meriaudeau, Fabrice

    2012-08-01

    This paper presents a novel method for non-rigid registration of transrectal ultrasound and magnetic resonance prostate images based on a non-linear regularized framework of point correspondences obtained from a statistical measure of shape-contexts. The segmented prostate shapes are represented by shape-contexts and the Bhattacharyya distance between the shape representations is used to find the point correspondences between the 2D fixed and moving images. The registration method involves parametric estimation of the non-linear diffeomorphism between the multimodal images and has its basis in solving a set of non-linear equations of thin-plate splines. The solution is obtained as the least-squares solution of an over-determined system of non-linear equations constructed by integrating a set of non-linear functions over the fixed and moving images. However, this may not result in clinically acceptable transformations of the anatomical targets. Therefore, the regularized bending energy of the thin-plate splines along with the localization error of established correspondences should be included in the system of equations. The registration accuracies of the proposed method are evaluated in 20 pairs of prostate mid-gland ultrasound and magnetic resonance images. The results obtained in terms of Dice similarity coefficient show an average of 0.980±0.004, average 95% Hausdorff distance of 1.63±0.48 mm and mean target registration and target localization errors of 1.60±1.17 mm and 0.15±0.12 mm respectively. Copyright © 2012 Elsevier B.V. All rights reserved.
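The Dice similarity coefficient and Hausdorff distance used in this evaluation are standard metrics and straightforward to reproduce; a sketch with toy masks (not the paper's prostate data):

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice_coefficient(a, b):
    """Dice similarity coefficient between two binary masks:
    2|A ∩ B| / (|A| + |B|)."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def hausdorff(pts_a, pts_b):
    """Symmetric Hausdorff distance between two point sets."""
    return max(directed_hausdorff(pts_a, pts_b)[0],
               directed_hausdorff(pts_b, pts_a)[0])

# Toy example: two overlapping 10x10 squares on a 2D grid
a = np.zeros((20, 20), bool); a[2:12, 2:12] = True
b = np.zeros((20, 20), bool); b[4:14, 4:14] = True
dsc = dice_coefficient(a, b)
hd = hausdorff(np.argwhere(a), np.argwhere(b))
```

Clinical studies usually report the 95th-percentile Hausdorff distance, as above, rather than the maximum, to reduce sensitivity to single outlier points.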

  6. Robust group-wise rigid registration of point sets using t-mixture model

    NASA Astrophysics Data System (ADS)

    Ravikumar, Nishant; Gooya, Ali; Frangi, Alejandro F.; Taylor, Zeike A.

    2016-03-01

A probabilistic framework is presented for robust, group-wise rigid alignment of point sets using a mixture of Student's t-distributions, designed especially for cases where the point sets are of varying lengths, are corrupted by an unknown degree of outliers, or suffer from missing data. Medical images (in particular magnetic resonance (MR) images), their segmentations, and consequently the point sets generated from these are highly susceptible to corruption by outliers. This poses a problem for robust correspondence estimation and accurate alignment of shapes, necessary for training statistical shape models (SSMs). To address these issues, this study proposes to use a t-mixture model (TMM) to approximate the underlying joint probability density of a group of similar shapes and align them to a common reference frame. The heavy-tailed nature of t-distributions provides a more robust registration framework than state-of-the-art algorithms. A significant reduction in alignment errors is achieved in the presence of outliers using the proposed TMM-based group-wise rigid registration method, in comparison to its Gaussian mixture model (GMM) counterparts. The proposed TMM framework is compared with a group-wise variant of the well-known Coherent Point Drift (CPD) algorithm and two other group-wise methods using GMMs, on both synthetic and real data sets. Rigid alignment errors for groups of shapes are quantified using the Hausdorff distance (HD) and quadratic surface distance (QSD) metrics.
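The robustness of a t-mixture comes from its EM precision weights, which shrink toward zero for gross outliers, whereas the corresponding Gaussian-mixture weights are identically one. A minimal illustration (the degrees-of-freedom value is arbitrary):

```python
def t_weights(residual_sq, nu, dim):
    """EM precision weight for a Student's t component:
    w = (nu + d) / (nu + delta^2), where delta^2 is the squared
    Mahalanobis residual of a point under the component."""
    return (nu + dim) / (nu + residual_sq)

# An inlier (residual^2 = 1) vs. a gross outlier (residual^2 = 100),
# for 3-D points and nu = 3 degrees of freedom
w_in = t_weights(1.0, nu=3.0, dim=3)
w_out = t_weights(100.0, nu=3.0, dim=3)
```

In each M-step the rigid transform is fit to weighted points, so outliers with small weights barely influence the alignment; this is the mechanism behind the heavy-tail robustness claimed above.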

  7. DR-TAMAS: Diffeomorphic Registration for Tensor Accurate alignMent of Anatomical Structures

    PubMed Central

    Irfanoglu, M. Okan; Nayak, Amritha; Jenkins, Jeffrey; Hutchinson, Elizabeth B.; Sadeghi, Neda; Thomas, Cibu P.; Pierpaoli, Carlo

    2016-01-01

    In this work, we propose DR-TAMAS (Diffeomorphic Registration for Tensor Accurate alignMent of Anatomical Structures), a novel framework for intersubject registration of Diffusion Tensor Imaging (DTI) data sets. This framework is optimized for brain data and its main goal is to achieve an accurate alignment of all brain structures, including white matter (WM), gray matter (GM), and spaces containing cerebrospinal fluid (CSF). Currently most DTI-based spatial normalization algorithms emphasize alignment of anisotropic structures. While some diffusion-derived metrics, such as diffusion anisotropy and tensor eigenvector orientation, are highly informative for proper alignment of WM, other tensor metrics such as the trace or mean diffusivity (MD) are fundamental for a proper alignment of GM and CSF boundaries. Moreover, it is desirable to include information from structural MRI data, e.g., T1-weighted or T2-weighted images, which are usually available together with the diffusion data. The fundamental property of DR-TAMAS is to achieve global anatomical accuracy by incorporating in its cost function the most informative metrics locally. Another important feature of DR-TAMAS is a symmetric time-varying velocity-based transformation model, which enables it to account for potentially large anatomical variability in healthy subjects and patients. The performance of DR-TAMAS is evaluated with several data sets and compared with other widely-used diffeomorphic image registration techniques employing both full tensor information and/or DTI-derived scalar maps. Our results show that the proposed method has excellent overall performance in the entire brain, while being equivalent to the best existing methods in WM. PMID:26931817

  8. DR-TAMAS: Diffeomorphic Registration for Tensor Accurate Alignment of Anatomical Structures.

    PubMed

    Irfanoglu, M Okan; Nayak, Amritha; Jenkins, Jeffrey; Hutchinson, Elizabeth B; Sadeghi, Neda; Thomas, Cibu P; Pierpaoli, Carlo

    2016-05-15

    In this work, we propose DR-TAMAS (Diffeomorphic Registration for Tensor Accurate alignMent of Anatomical Structures), a novel framework for intersubject registration of Diffusion Tensor Imaging (DTI) data sets. This framework is optimized for brain data and its main goal is to achieve an accurate alignment of all brain structures, including white matter (WM), gray matter (GM), and spaces containing cerebrospinal fluid (CSF). Currently most DTI-based spatial normalization algorithms emphasize alignment of anisotropic structures. While some diffusion-derived metrics, such as diffusion anisotropy and tensor eigenvector orientation, are highly informative for proper alignment of WM, other tensor metrics such as the trace or mean diffusivity (MD) are fundamental for a proper alignment of GM and CSF boundaries. Moreover, it is desirable to include information from structural MRI data, e.g., T1-weighted or T2-weighted images, which are usually available together with the diffusion data. The fundamental property of DR-TAMAS is to achieve global anatomical accuracy by incorporating in its cost function the most informative metrics locally. Another important feature of DR-TAMAS is a symmetric time-varying velocity-based transformation model, which enables it to account for potentially large anatomical variability in healthy subjects and patients. The performance of DR-TAMAS is evaluated with several data sets and compared with other widely-used diffeomorphic image registration techniques employing both full tensor information and/or DTI-derived scalar maps. Our results show that the proposed method has excellent overall performance in the entire brain, while being equivalent to the best existing methods in WM. Copyright © 2016 Elsevier Inc. All rights reserved.

  9. A Statistically Representative Atlas for Mapping Neuronal Circuits in the Drosophila Adult Brain.

    PubMed

    Arganda-Carreras, Ignacio; Manoliu, Tudor; Mazuras, Nicolas; Schulze, Florian; Iglesias, Juan E; Bühler, Katja; Jenett, Arnim; Rouyer, François; Andrey, Philippe

    2018-01-01

Imaging the expression patterns of reporter constructs is a powerful tool to dissect the neuronal circuits of perception and behavior in the adult brain of Drosophila, one of the major models for studying brain functions. To date, several Drosophila brain templates and digital atlases have been built to automatically analyze and compare collections of expression pattern images. However, there has been no systematic comparison of performance between alternative atlasing strategies and registration algorithms. Here, we objectively evaluated the performance of different strategies for building adult Drosophila brain templates and atlases. In addition, we used state-of-the-art registration algorithms to generate a new group-wise inter-sex atlas. Our results highlight the benefit of statistical atlases over individual ones and show that the newly proposed inter-sex atlas outperformed existing solutions for automated registration and annotation of expression patterns. Over 3,000 images from the Janelia Farm FlyLight collection were registered using the proposed strategy. These registered expression patterns can be searched and compared with a new version of the BrainBaseWeb system and BrainGazer software. We illustrate the validity of our methodology and brain atlas with registration-based predictions of expression patterns in a subset of clock neurons. The described registration framework should benefit brain studies in Drosophila and other insect species.

  10. A LAGRANGIAN GAUSS-NEWTON-KRYLOV SOLVER FOR MASS- AND INTENSITY-PRESERVING DIFFEOMORPHIC IMAGE REGISTRATION.

    PubMed

    Mang, Andreas; Ruthotto, Lars

    2017-01-01

    We present an efficient solver for diffeomorphic image registration problems in the framework of Large Deformations Diffeomorphic Metric Mappings (LDDMM). We use an optimal control formulation, in which the velocity field of a hyperbolic PDE needs to be found such that the distance between the final state of the system (the transformed/transported template image) and the observation (the reference image) is minimized. Our solver supports both stationary and non-stationary (i.e., transient or time-dependent) velocity fields. As transformation models, we consider both the transport equation (assuming intensities are preserved during the deformation) and the continuity equation (assuming mass-preservation). We consider the reduced form of the optimal control problem and solve the resulting unconstrained optimization problem using a discretize-then-optimize approach. A key contribution is the elimination of the PDE constraint using a Lagrangian hyperbolic PDE solver. Lagrangian methods rely on the concept of characteristic curves. We approximate these curves using a fourth-order Runge-Kutta method. We also present an efficient algorithm for computing the derivatives of the final state of the system with respect to the velocity field. This allows us to use fast Gauss-Newton based methods. We present quickly converging iterative linear solvers using spectral preconditioners that render the overall optimization efficient and scalable. Our method is embedded into the image registration framework FAIR and, thus, supports the most commonly used similarity measures and regularization functionals. We demonstrate the potential of our new approach using several synthetic and real world test problems with up to 14.7 million degrees of freedom.
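The characteristic curves satisfy the ODE x'(t) = v(x(t), t), approximated in the solver with classical fourth-order Runge-Kutta. A generic sketch of that integration (not the FAIR implementation; the rotation field is a toy example):

```python
import numpy as np

def rk4_characteristic(x0, velocity, t0, t1, steps=200):
    """Trace a characteristic curve x'(t) = v(x, t) with classical RK4,
    the step used to eliminate the hyperbolic PDE constraint in a
    Lagrangian solver."""
    x = np.asarray(x0, dtype=float)
    h = (t1 - t0) / steps
    t = t0
    for _ in range(steps):
        k1 = velocity(x, t)
        k2 = velocity(x + 0.5 * h * k1, t + 0.5 * h)
        k3 = velocity(x + 0.5 * h * k2, t + 0.5 * h)
        k4 = velocity(x + h * k3, t + h)
        x = x + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
    return x

# Rotation field v(x) = (-y, x): points move along circles, so a quarter
# period carries (1, 0) to approximately (0, 1)
v = lambda x, t: np.array([-x[1], x[0]])
end = rk4_characteristic([1.0, 0.0], v, 0.0, np.pi / 2)
```

Because RK4 is fourth-order accurate, few time steps are needed per characteristic, which is central to the solver's efficiency on transport and continuity equations.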

  11. A Greedy Algorithm for Brain MRI's Registration.

    PubMed

    Chesseboeuf, Clément

    2016-12-01

    This document presents a non-rigid registration algorithm for the use of brain magnetic resonance (MR) images comparison. More precisely, we want to compare pre-operative and post-operative MR images in order to assess the deformation due to a surgical removal. The proposed algorithm has been studied in Chesseboeuf et al. ((Non-rigid registration of magnetic resonance imaging of brain. IEEE, 385-390. doi: 10.1109/IPTA.2015.7367172 , 2015), following ideas of Trouvé (An infinite dimensional group approach for physics based models in patterns recognition. Technical Report DMI Ecole Normale Supérieure, Cachan, 1995), in which the author introduces the algorithm within a very general framework. Here we recalled this theory from a practical point of view. The emphasis is on illustrations and description of the numerical procedure. Our version of the algorithm is associated with a particular matching criterion. Then, a section is devoted to the description of this object. In the last section we focus on the construction of a statistical method of evaluation.

  12. GIFTed Demons: deformable image registration with local structure-preserving regularization using supervoxels for liver applications

    PubMed Central

    Gleeson, Fergus V.; Brady, Michael; Schnabel, Julia A.

    2018-01-01

Deformable image registration, a key component of motion correction in medical imaging, needs to be efficient and provide plausible spatial transformations that reliably approximate biological aspects of complex human organ motion. Standard approaches, such as Demons registration, mostly use Gaussian regularization for organ motion, which, though computationally efficient, rules out their application to intrinsically more complex organ motions, such as sliding interfaces. We propose regularization of motion based on supervoxels, which provides an integrated discontinuity-preserving prior for motions, such as sliding. More precisely, we replace Gaussian smoothing by fast, structure-preserving, guided filtering to provide efficient, locally adaptive regularization of the estimated displacement field. We illustrate the approach by applying it to estimate sliding motions at lung and liver interfaces on challenging four-dimensional computed tomography (CT) and dynamic contrast-enhanced magnetic resonance imaging datasets. The results show that guided filter-based regularization improves the accuracy of lung and liver motion correction as compared to Gaussian smoothing. Furthermore, our framework achieves state-of-the-art results on a publicly available CT liver dataset. PMID:29662918

  13. GIFTed Demons: deformable image registration with local structure-preserving regularization using supervoxels for liver applications.

    PubMed

    Papież, Bartłomiej W; Franklin, James M; Heinrich, Mattias P; Gleeson, Fergus V; Brady, Michael; Schnabel, Julia A

    2018-04-01

Deformable image registration, a key component of motion correction in medical imaging, needs to be efficient and provide plausible spatial transformations that reliably approximate biological aspects of complex human organ motion. Standard approaches, such as Demons registration, mostly use Gaussian regularization for organ motion, which, though computationally efficient, rules out their application to intrinsically more complex organ motions, such as sliding interfaces. We propose regularization of motion based on supervoxels, which provides an integrated discontinuity-preserving prior for motions, such as sliding. More precisely, we replace Gaussian smoothing by fast, structure-preserving, guided filtering to provide efficient, locally adaptive regularization of the estimated displacement field. We illustrate the approach by applying it to estimate sliding motions at lung and liver interfaces on challenging four-dimensional computed tomography (CT) and dynamic contrast-enhanced magnetic resonance imaging datasets. The results show that guided filter-based regularization improves the accuracy of lung and liver motion correction as compared to Gaussian smoothing. Furthermore, our framework achieves state-of-the-art results on a publicly available CT liver dataset.
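Replacing Gaussian smoothing with guided filtering can be sketched with He et al.'s grayscale guided filter, applied here to one displacement-field component with the intensity image as guide; the radius and eps values are illustrative, not from the paper:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(guide, src, radius=4, eps=1e-2):
    """Edge-preserving smoothing of `src` steered by `guide` (the
    grayscale guided filter of He et al.): within each window, src is
    approximated as a * guide + b, so smoothing respects guide edges."""
    size = 2 * radius + 1
    box = lambda x: uniform_filter(x, size)
    mean_I, mean_p = box(guide), box(src)
    cov_Ip = box(guide * src) - mean_I * mean_p
    var_I = box(guide * guide) - mean_I * mean_I
    a = cov_Ip / (var_I + eps)
    b = mean_p - a * mean_I
    return box(a) * guide + box(b)

# Displacement component with a jump at a "sliding interface" (x = 32);
# the guide image has an edge at the same place, so filtering denoises
# each side while preserving the discontinuity
rng = np.random.default_rng(0)
guide = np.zeros((64, 64)); guide[:, 32:] = 1.0
disp = np.where(guide > 0.5, 1.0, -1.0) + 0.01 * rng.standard_normal((64, 64))
smoothed = guided_filter(guide, disp)
```

A Gaussian filter of comparable support would blur the jump across the interface, which is precisely the failure mode the supervoxel/guided-filter regularization avoids.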

  14. A high-level 3D visualization API for Java and ImageJ.

    PubMed

    Schmid, Benjamin; Schindelin, Johannes; Cardona, Albert; Longair, Mark; Heisenberg, Martin

    2010-05-21

    Current imaging methods such as Magnetic Resonance Imaging (MRI), Confocal microscopy, Electron Microscopy (EM) or Selective Plane Illumination Microscopy (SPIM) yield three-dimensional (3D) data sets in need of appropriate computational methods for their analysis. The reconstruction, segmentation and registration are best approached from the 3D representation of the data set. Here we present a platform-independent framework based on Java and Java 3D for accelerated rendering of biological images. Our framework is seamlessly integrated into ImageJ, a free image processing package with a vast collection of community-developed biological image analysis tools. Our framework enriches the ImageJ software libraries with methods that greatly reduce the complexity of developing image analysis tools in an interactive 3D visualization environment. In particular, we provide high-level access to volume rendering, volume editing, surface extraction, and image annotation. The ability to rely on a library that removes the low-level details enables concentrating software development efforts on the algorithm implementation parts. Our framework enables biomedical image software development to be built with 3D visualization capabilities with very little effort. We offer the source code and convenient binary packages along with extensive documentation at http://3dviewer.neurofly.de.

  15. The use of atlas registration and graph cuts for prostate segmentation in magnetic resonance images

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Korsager, Anne Sofie, E-mail: asko@hst.aau.dk; Østergaard, Lasse Riis; Fortunati, Valerio

    2015-04-15

Purpose: An automatic method for 3D prostate segmentation in magnetic resonance (MR) images is presented for planning image-guided radiotherapy treatment of prostate cancer. Methods: A spatial prior based on intersubject atlas registration is combined with organ-specific intensity information in a graph cut segmentation framework. The segmentation is tested on 67 axial T2-weighted MR images in a leave-one-out cross validation experiment and compared with both manual reference segmentations and with multiatlas-based segmentations using majority voting atlas fusion. The impact of atlas selection is investigated in both the traditional atlas-based segmentation and the new graph cut method that combines atlas and intensity information in order to improve the segmentation accuracy. Best results were achieved using the method that combines intensity information, shape information, and atlas selection in the graph cut framework. Results: A mean Dice similarity coefficient (DSC) of 0.88 and a mean surface distance (MSD) of 1.45 mm with respect to the manual delineation were achieved. Conclusions: This approaches the interobserver DSC of 0.90 and interobserver MSD of 1.15 mm and is comparable to other studies performing prostate segmentation in MR.
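The majority-voting atlas fusion baseline mentioned above reduces to a per-voxel vote over propagated labels; a minimal sketch with toy label maps:

```python
import numpy as np

def majority_vote(label_maps):
    """Fuse propagated binary atlas segmentations by per-voxel strict
    majority voting, the standard multi-atlas fusion baseline."""
    stack = np.stack([np.asarray(l, bool) for l in label_maps])
    # a voxel is foreground when more than half the atlases say so
    return stack.sum(axis=0) * 2 > len(label_maps)

# Three registered atlas labels on the same (toy) target voxel grid
a1 = np.array([[1, 1, 0], [0, 1, 0]], bool)
a2 = np.array([[1, 0, 0], [0, 1, 1]], bool)
a3 = np.array([[1, 1, 1], [0, 0, 0]], bool)
fused = majority_vote([a1, a2, a3])
```

The graph cut method above goes further by treating the fused atlas prior and the target's intensities as complementary terms in one energy, rather than trusting the vote alone.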

  16. Cone beam CT imaging with limited angle of projections and prior knowledge for volumetric verification of non-coplanar beam radiation therapy: a proof of concept study

    NASA Astrophysics Data System (ADS)

    Meng, Bowen; Xing, Lei; Han, Bin; Koong, Albert; Chang, Daniel; Cheng, Jason; Li, Ruijiang

    2013-11-01

    Non-coplanar beams are important for treatment of both cranial and noncranial tumors. Treatment verification of such beams with couch rotation/kicks, however, is challenging, particularly for the application of cone beam CT (CBCT). In this situation, only limited and unconventional imaging angles are feasible to avoid collision between the gantry, couch, patient, and on-board imaging system. The purpose of this work is to develop a CBCT verification strategy for patients undergoing non-coplanar radiation therapy. We propose an image reconstruction scheme that integrates a prior image constrained compressed sensing (PICCS) technique with image registration. Planning CT or CBCT acquired at the neutral position is rotated and translated according to the nominal couch rotation/translation to serve as the initial prior image. Here, the nominal couch movement is chosen to have a rotational error of 5° and translational error of 8 mm from the ground truth in one or more axes or directions. The proposed reconstruction scheme alternates between two major steps. First, an image is reconstructed using the PICCS technique implemented with total-variation minimization and simultaneous algebraic reconstruction. Second, the rotational/translational setup errors are corrected and the prior image is updated by applying rigid image registration between the reconstructed image and the previous prior image. The PICCS algorithm and rigid image registration are alternated iteratively until the registration results fall below a predetermined threshold. The proposed reconstruction algorithm is evaluated with an anthropomorphic digital phantom and physical head phantom. The proposed algorithm provides useful volumetric images for patient setup using projections with an angular range as small as 60°. It reduced the translational setup errors from 8 mm to generally <1 mm and the rotational setup errors from 5° to <1°. 
Compared with the PICCS algorithm alone, the integration of rigid registration significantly improved the reconstructed image quality, typically reducing the root mean square image error 2-3 fold (and up to 100-fold). The proposed algorithm provides a remedy for the problem of non-coplanar CBCT reconstruction from a limited angular range of projections by combining the PICCS technique and rigid image registration in an iterative framework. In this proof of concept study, non-coplanar beams with couch rotations of 45° can be effectively verified with the CBCT technique.
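
    The alternation described above (a reconstruction step followed by rigid re-registration of the prior) can be sketched in a few lines. This is a toy illustration, not the authors' implementation: the PICCS TV-minimization/SART update is replaced by a trivial data-dominated stand-in, and the rigid correction is reduced to integer translations estimated by phase correlation.

```python
import numpy as np

def estimate_shift(img, ref):
    # Phase correlation: the peak of the normalized cross-power spectrum
    # gives the integer translation of `img` relative to `ref`.
    F = np.fft.fft2(img) * np.conj(np.fft.fft2(ref))
    corr = np.fft.ifft2(F / (np.abs(F) + 1e-12)).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    return np.array([p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape)])

def alternate_recon_register(measured, prior, n_iter=5):
    """Alternate a reconstruction step with rigid correction of the prior."""
    recon = measured
    for _ in range(n_iter):
        recon = measured.copy()          # stand-in for the PICCS TV/SART update
        shift = estimate_shift(recon, prior)
        if np.all(shift == 0):           # registration below threshold: stop
            break
        prior = np.roll(prior, tuple(shift), axis=(0, 1))  # rigid prior update
    return recon, prior
```

    In the real algorithm the reconstruction step would only partially trust the data, which is why the alternation (rather than a single registration pass) is needed.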

  17. Prostatome: A combined anatomical and disease based MRI atlas of the prostate

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rusu, Mirabela; Madabhushi, Anant, E-mail: anant.madabhushi@case.edu; Bloch, B. Nicolas

    Purpose: In this work, the authors introduce a novel framework, the anatomically constrained registration (AnCoR) scheme, and apply it to create a fused anatomic-disease atlas of the prostate, which the authors refer to as the prostatome. The prostatome combines an MRI based anatomic and a histology based disease atlas. Statistical imaging atlases allow for the integration of information across multiple scales and imaging modalities into a single canonical representation, in turn enabling a fused anatomical-disease representation which may facilitate the characterization of disease appearance relative to anatomic structures. While statistical atlases have been extensively developed and studied for the brain, approaches that have attempted to combine pathology and imaging data for study of prostate pathology are not extant. This work seeks to address this gap. Methods: The AnCoR framework optimizes a scoring function composed of two surface (prostate and central gland) misalignment measures and one intensity-based similarity term. This ensures the correct mapping of anatomic regions into the atlas, even when regional MRI intensities are inconsistent or highly variable between subjects. The framework allows for creation of an anatomic imaging and a disease atlas, while enabling their fusion into the anatomic imaging-disease atlas. The atlas presented here was constructed using 83 subjects with biopsy confirmed cancer who had pre-operative MRI (collected at two institutions) followed by radical prostatectomy. The imaging atlas results from mapping the in vivo MRI into the canonical space, while the anatomic regions serve as domain constraints. Elastic co-registration of MRI and corresponding ex vivo histology provides “ground truth” mapping of cancer extent on in vivo imaging for 23 subjects. Results: AnCoR was evaluated relative to alternative construction strategies that use either MRI intensities or the prostate surface alone for registration.
The AnCoR framework yielded a central gland Dice similarity coefficient (DSC) of 90% and a prostate DSC of 88%, while the misalignment of the urethra and verumontanum was 3.45 mm and 4.73 mm, respectively; these errors were significantly smaller than for the alternative strategies. As might have been anticipated from our limited cohort of biopsy confirmed cancers, the disease atlas showed that most of the tumor extent was limited to the peripheral zone. Moreover, central gland tumors were typically larger in size, possibly because they are only discernible at a much later stage. Conclusions: The authors presented the AnCoR framework to explicitly model anatomic constraints for the construction of a fused anatomic imaging-disease atlas. The framework was applied to constructing a preliminary version of an anatomic-disease atlas of the prostate, the prostatome. The prostatome could facilitate the quantitative characterization of gland morphology and imaging features of prostate cancer. These techniques may be applied to a larger data set to create a fully developed prostatome that could serve as a spatial prior for targeted biopsies by urologists. Additionally, the AnCoR framework could allow for incorporation of complementary imaging and molecular data, thereby enabling their careful correlation for population based radio-omics studies.

  18. Automatic C-arm pose estimation via 2D/3D hybrid registration of a radiographic fiducial

    NASA Astrophysics Data System (ADS)

    Moult, E.; Burdette, E. C.; Song, D. Y.; Abolmaesumi, P.; Fichtinger, G.; Fallavollita, P.

    2011-03-01

    Motivation: In prostate brachytherapy, real-time dosimetry would be ideal to allow for rapid evaluation of the implant quality intra-operatively. However, such a mechanism requires an imaging system that is both real-time and which provides, via multiple C-arm fluoroscopy images, clear information describing the three-dimensional position of the seeds deposited within the prostate. Thus, accurate tracking of the C-arm poses proves to be of critical importance to the process. Methodology: We compute the pose of the C-arm relative to a stationary radiographic fiducial of known geometry by employing a hybrid registration framework. Firstly, by means of an ellipse segmentation algorithm and a 2D/3D feature based registration, we exploit known FTRAC geometry to recover an initial estimate of the C-arm pose. Using this estimate, we then initialize the intensity-based registration which serves to recover a refined and accurate estimation of the C-arm pose. Results: Ground-truth pose was established for each C-arm image through a published and clinically tested segmentation-based method. Using 169 clinical C-arm images and a +/-10° and +/-10 mm random perturbation of the ground-truth pose, the average rotation and translation errors were 0.68° (std = 0.06°) and 0.64 mm (std = 0.24 mm). Conclusion: Fully automated C-arm pose estimation using a 2D/3D hybrid registration scheme was found to be clinically robust based on human patient data.
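
    The feature-based initialization stage of such a hybrid pipeline amounts to recovering a rigid pose from matched fiducial features. Below is a minimal 2D sketch using the Kabsch/Procrustes SVD solution; the actual method works with segmented ellipses and full 2D/3D correspondences, and `estimate_pose_kabsch` is an illustrative name, not the authors' API.

```python
import numpy as np

def estimate_pose_kabsch(model_pts, image_pts):
    """Rigid 2D pose (R, t) mapping model_pts onto image_pts via SVD (Kabsch)."""
    mc, ic = model_pts.mean(axis=0), image_pts.mean(axis=0)
    H = (model_pts - mc).T @ (image_pts - ic)   # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against reflections
    R = Vt.T @ np.diag([1.0, d]) @ U.T
    t = ic - R @ mc
    return R, t
```

    In the full pipeline, this closed-form estimate would then seed an intensity-based optimizer that refines the pose against the fluoroscopy image.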

  19. Investigations of image fusion

    NASA Astrophysics Data System (ADS)

    Zhang, Zhong

    1999-12-01

    The objective of image fusion is to combine information from multiple images of the same scene. The result of image fusion is a single image which is more suitable for the purpose of human visual perception or further image processing tasks. In this thesis, a region-based fusion algorithm using the wavelet transform is proposed. The identification of important features in each image, such as edges and regions of interest, is used to guide the fusion process. The idea of multiscale grouping is also introduced and a generic image fusion framework based on multiscale decomposition is studied. The framework includes all of the existing multiscale-decomposition-based fusion approaches we found in the literature which did not assume a statistical model for the source images. Comparisons indicate that our framework includes some new approaches which outperform the existing approaches for the cases we consider. Registration must precede our fusion algorithms, so we propose a hybrid scheme that uses both feature-based and intensity-based methods. The idea of robust estimation of optical flow from time-varying images is employed with a coarse-to-fine multi-resolution approach and feature-based registration to overcome some of the limitations of the intensity-based schemes. Experiments show that this approach is robust and efficient. Assessing image fusion performance in a real application is a complicated issue. In this dissertation, a mixture probability density function model is used in conjunction with the Expectation-Maximization algorithm to model histograms of edge intensity. Some new techniques are proposed for estimating the quality of a noisy image of a natural scene. Such quality measures can be used to guide the fusion. Finally, we study fusion of images obtained from several copies of a new type of camera developed for video surveillance.
Our techniques increase the capability and reliability of the surveillance system and provide an easy way to obtain 3-D information of objects in the space monitored by the system.
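
    A multiscale-decomposition fusion rule of the kind surveyed in this thesis can be illustrated with a two-scale version: split each source image into a coarse band and a detail band, average the coarse bands, and keep the stronger detail coefficient. This is a generic sketch (a box-filter decomposition standing in for the wavelet transform), not the thesis algorithm itself.

```python
import numpy as np

def box_blur(img, k=3):
    # Separable box filter, 'same' size, edge padding.
    pad = k // 2
    p = np.pad(img, pad, mode='edge')
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def fuse_two_scale(a, b):
    """Two-scale fusion: average the coarse bands, keep the stronger detail."""
    base_a, base_b = box_blur(a), box_blur(b)
    det_a, det_b = a - base_a, b - base_b
    detail = np.where(np.abs(det_a) >= np.abs(det_b), det_a, det_b)  # max-abs rule
    return 0.5 * (base_a + base_b) + detail
```

    Real multiscale frameworks iterate this over a full pyramid or wavelet decomposition and may use region-level (rather than coefficient-level) selection.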

  20. Heterogeneous Optimization Framework: Reproducible Preprocessing of Multi-Spectral Clinical MRI for Neuro-Oncology Imaging Research.

    PubMed

    Milchenko, Mikhail; Snyder, Abraham Z; LaMontagne, Pamela; Shimony, Joshua S; Benzinger, Tammie L; Fouke, Sarah Jost; Marcus, Daniel S

    2016-07-01

    Neuroimaging research often relies on clinically acquired magnetic resonance imaging (MRI) datasets that can originate from multiple institutions. Such datasets are characterized by high heterogeneity of modalities and variability of sequence parameters. This heterogeneity complicates the automation of image processing tasks such as spatial co-registration and physiological or functional image analysis. Given this heterogeneity, conventional processing workflows developed for research purposes are not optimal for clinical data. In this work, we describe an approach called Heterogeneous Optimization Framework (HOF) for developing image analysis pipelines that can handle the high degree of clinical data non-uniformity. HOF provides a set of guidelines for configuration, algorithm development, deployment, interpretation of results and quality control for such pipelines. At each step, we illustrate the HOF approach using the implementation of an automated pipeline for Multimodal Glioma Analysis (MGA) as an example. The MGA pipeline computes tissue diffusion characteristics of diffusion tensor imaging (DTI) acquisitions, hemodynamic characteristics using a perfusion model of susceptibility contrast (DSC) MRI, and spatial cross-modal co-registration of available anatomical, physiological and derived patient images. Developing MGA within HOF enabled the processing of neuro-oncology MR imaging studies to be fully automated. MGA has been successfully used to analyze over 160 clinical tumor studies to date within several research projects. Introduction of the MGA pipeline improved image processing throughput and, most importantly, effectively produced co-registered datasets that were suitable for advanced analysis despite high heterogeneity in acquisition protocols.

  1. Voxel-based modeling and quantification of the proximal femur using inter-subject registration of quantitative CT images.

    PubMed

    Li, Wenjun; Kezele, Irina; Collins, D Louis; Zijdenbos, Alex; Keyak, Joyce; Kornak, John; Koyama, Alain; Saeed, Isra; Leblanc, Adrian; Harris, Tamara; Lu, Ying; Lang, Thomas

    2007-11-01

    We have developed a general framework which employs quantitative computed tomography (QCT) imaging and inter-subject image registration to model the three-dimensional structure of the hip, with the goal of quantifying changes in the spatial distribution of bone as it is affected by aging, drug treatment or mechanical unloading. We have adapted rigid and non-rigid inter-subject registration techniques to transform groups of hip QCT scans into a common reference space and to construct composite proximal femoral models. We have applied this technique to a longitudinal study of 16 astronauts who, on average, incurred high losses of hip bone density during spaceflights of 4-6 months on the International Space Station (ISS). We compared the pre-flight and post-flight composite hip models, and observed the gradients of the bone loss distribution. We performed paired t-tests on a voxel-by-voxel basis, corrected for multiple comparisons using the false discovery rate (FDR), and observed regions inside the proximal femur that showed the most significant bone loss. To validate our registration algorithm, we selected the 16 pre-flight scans and manually marked 4 landmarks for each scan. After registration, the average distance between the mapped landmarks and the corresponding landmarks in the target scan was 2.56 mm. The average error due to manual landmark identification was 1.70 mm.
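
    The voxel-wise statistical step described above (paired t-tests with FDR correction) can be sketched directly with SciPy. The Benjamini-Hochberg procedure below is a common FDR implementation and is an assumption here, since the abstract does not specify which FDR variant was used.

```python
import numpy as np
from scipy import stats

def voxelwise_paired_fdr(pre, post, q=0.05):
    """Paired t-test per voxel across subjects, plus a Benjamini-Hochberg mask.
    pre, post: arrays of shape (n_subjects, n_voxels)."""
    t, p = stats.ttest_rel(post, pre, axis=0)
    m = p.size
    order = np.argsort(p)
    thresh = q * np.arange(1, m + 1) / m          # BH step-up thresholds
    passed = p[order] <= thresh
    k = np.max(np.nonzero(passed)[0]) + 1 if passed.any() else 0
    mask = np.zeros(m, dtype=bool)
    mask[order[:k]] = True                        # voxels significant at FDR q
    return t, p, mask
```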

  2. A Statistically Representative Atlas for Mapping Neuronal Circuits in the Drosophila Adult Brain

    PubMed Central

    Arganda-Carreras, Ignacio; Manoliu, Tudor; Mazuras, Nicolas; Schulze, Florian; Iglesias, Juan E.; Bühler, Katja; Jenett, Arnim; Rouyer, François; Andrey, Philippe

    2018-01-01

    Imaging the expression patterns of reporter constructs is a powerful tool to dissect the neuronal circuits of perception and behavior in the adult brain of Drosophila, one of the major models for studying brain functions. To date, several Drosophila brain templates and digital atlases have been built to automatically analyze and compare collections of expression pattern images. However, there has been no systematic comparison of performance between alternative atlasing strategies and registration algorithms. Here, we objectively evaluated the performance of different strategies for building adult Drosophila brain templates and atlases. In addition, we used state-of-the-art registration algorithms to generate a new group-wise inter-sex atlas. Our results highlight the benefit of statistical atlases over individual ones and show that the newly proposed inter-sex atlas outperformed existing solutions for automated registration and annotation of expression patterns. Over 3,000 images from the Janelia Farm FlyLight collection were registered using the proposed strategy. These registered expression patterns can be searched and compared with a new version of the BrainBaseWeb system and BrainGazer software. We illustrate the validity of our methodology and brain atlas with registration-based predictions of expression patterns in a subset of clock neurons. The described registration framework should benefit brain studies in Drosophila and other insect species. PMID:29628885

  3. Framework for 2D-3D image fusion of infrared thermography with preoperative MRI.

    PubMed

    Hoffmann, Nico; Weidner, Florian; Urban, Peter; Meyer, Tobias; Schnabel, Christian; Radev, Yordan; Schackert, Gabriele; Petersohn, Uwe; Koch, Edmund; Gumhold, Stefan; Steiner, Gerald; Kirsch, Matthias

    2017-11-27

    Multimodal medical image fusion combines information from multiple images in order to improve the diagnostic value. While previous applications mainly focus on merging images from computed tomography, magnetic resonance imaging (MRI), ultrasonic and single-photon emission computed tomography, we propose a novel approach for the registration and fusion of preoperative 3D MRI with intraoperative 2D infrared thermography. Image-guided neurosurgeries are based on neuronavigation systems, which further allow us to track the position and orientation of arbitrary cameras. Hereby, we are able to relate the 2D coordinate system of the infrared camera with the 3D MRI coordinate system. The registered image data are now combined by calibration-based image fusion in order to map our intraoperative 2D thermographic images onto the respective brain surface recovered from preoperative MRI. In extensive accuracy measurements, we found that the proposed framework achieves a mean accuracy of 2.46 mm.

  4. Fast multi-core based multimodal registration of 2D cross-sections and 3D datasets.

    PubMed

    Scharfe, Michael; Pielot, Rainer; Schreiber, Falk

    2010-01-11

    Solving bioinformatics tasks often requires extensive computational power. Recent trends in processor architecture combine multiple cores into a single chip to improve overall performance. The Cell Broadband Engine (CBE), a heterogeneous multi-core processor, provides power-efficient and cost-effective high-performance computing. One application area is image analysis and visualisation, in particular registration of 2D cross-sections into 3D image datasets. Such techniques can be used to put different image modalities into spatial correspondence, for example, 2D images of histological cuts into morphological 3D frameworks. We evaluate the CBE-driven PlayStation 3 as a high performance, cost-effective computing platform by adapting a multimodal alignment procedure to several characteristic hardware properties. The optimisations are based on partitioning, vectorisation, branch reducing and loop unrolling techniques with special attention to 32-bit multiplies and limited local storage on the computing units. We show how a typical image analysis and visualisation problem, the multimodal registration of 2D cross-sections and 3D datasets, benefits from the multi-core based implementation of the alignment algorithm. We discuss several CBE-based optimisation methods and compare our results to standard solutions. More information and the source code are available from http://cbe.ipk-gatersleben.de. The results demonstrate that the CBE processor in a PlayStation 3 accelerates computationally intensive multimodal registration, which is of great importance in biological/medical image processing. The PlayStation 3 as a low cost CBE-based platform offers an efficient alternative to conventional hardware to solve computational problems in image processing and bioinformatics.

  5. Predict Brain MR Image Registration via Sparse Learning of Appearance and Transformation

    PubMed Central

    Wang, Qian; Kim, Minjeong; Shi, Yonghong; Wu, Guorong; Shen, Dinggang

    2014-01-01

    We propose a new approach to register the subject image with the template by leveraging a set of intermediate images that are pre-aligned to the template. We argue that, if points in the subject and the intermediate images share similar local appearances, they may have common correspondence in the template. In this way, we learn the sparse representation of a certain subject point to reveal several similar candidate points in the intermediate images. Each selected intermediate candidate can bridge the correspondence from the subject point to the template space, thus predicting the transformation associated with the subject point at a confidence level related to the learned sparse coefficient. Following this strategy, we first predict transformations at selected key points, and retain multiple predictions on each key point, instead of allowing only a single correspondence. Then, by utilizing all key points and their predictions with varying confidences, we adaptively reconstruct the dense transformation field that warps the subject to the template. We further embed the prediction-reconstruction protocol above into a multi-resolution hierarchy. Finally, we refine the estimated transformation field via an existing registration method. We apply our method to registering brain MR images, and conclude that the proposed framework substantially improves registration performance. PMID:25476412
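
    The prediction step can be illustrated as follows: code the subject patch over a dictionary of intermediate patches, then blend the intermediates' known transformations by the resulting coefficients. The sketch below substitutes non-negative least squares for the paper's sparse (L1-regularized) coding, so it is an approximation of the idea rather than the authors' method; all names are illustrative.

```python
import numpy as np
from scipy.optimize import nnls

def predict_transform(subject_patch, inter_patches, inter_transforms):
    """Blend known transforms of intermediate patches by coding coefficients.
    NNLS stands in for sparse coding; it also tends to select few atoms."""
    D = np.stack([p.ravel() for p in inter_patches], axis=1)  # dictionary matrix
    w, _ = nnls(D, subject_patch.ravel())                     # coefficients >= 0
    if w.sum() == 0:
        return np.zeros_like(inter_transforms[0])
    w = w / w.sum()                                           # confidence weights
    return (w[:, None] * inter_transforms).sum(axis=0)
```

    In the full framework, the per-coefficient confidences would be kept (multiple candidate predictions per key point) rather than collapsed into one weighted average.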

  6. Multi-Sensor Fusion of Infrared and Electro-Optic Signals for High Resolution Night Images

    PubMed Central

    Huang, Xiaopeng; Netravali, Ravi; Man, Hong; Lawrence, Victor

    2012-01-01

    Electro-optic (EO) image sensors exhibit the properties of high resolution and low noise level at daytime, but they do not work in dark environments. Infrared (IR) image sensors exhibit poor resolution and cannot separate objects with similar temperature. Therefore, we propose a novel framework of IR image enhancement based on the information (e.g., edge) from EO images, which improves the resolution of IR images and helps us distinguish objects at night. Our framework superimposes/blends the edges of the EO image onto the corresponding transformed IR image to improve its resolution. In this framework, we adopt the theoretical point spread function (PSF) proposed by Hardie et al. for the IR image, which has the modulation transfer function (MTF) of a uniform detector array and the incoherent optical transfer function (OTF) of diffraction-limited optics. In addition, we design an inverse filter for the proposed PSF and use it for the IR image transformation. The framework requires four main steps: (1) inverse filter-based IR image transformation; (2) EO image edge detection; (3) registration; and (4) blending/superimposing of the obtained image pair. Simulation results show both blended and superimposed IR images, and demonstrate that blended IR images have better quality than the superimposed images. Additionally, based on the same steps, simulation results show a blended IR image of better quality when only the original IR image is available. PMID:23112602
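
    Step (4), blending EO edges into the registered IR image, can be sketched with a Sobel edge map and a fixed blending weight. The scaling to the IR dynamic range and the weight `alpha` are illustrative assumptions, not values from the paper.

```python
import numpy as np
from scipy import ndimage

def blend_eo_edges_onto_ir(ir, eo, alpha=0.3):
    """Blend EO edge magnitude into an (already registered) IR image."""
    gx = ndimage.sobel(eo, axis=0)
    gy = ndimage.sobel(eo, axis=1)
    edges = np.hypot(gx, gy)                      # EO edge magnitude
    if edges.max() > 0:
        edges = edges / edges.max() * ir.max()    # scale to IR dynamic range
    return (1.0 - alpha) * ir + alpha * edges     # convex blend
```

    Superimposing (as opposed to blending) would instead overwrite IR pixels wherever the edge map exceeds a threshold.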

  8. Accurate CT-MR image registration for deep brain stimulation: a multi-observer evaluation study

    NASA Astrophysics Data System (ADS)

    Rühaak, Jan; Derksen, Alexander; Heldmann, Stefan; Hallmann, Marc; Meine, Hans

    2015-03-01

    Since the first clinical interventions in the late 1980s, Deep Brain Stimulation (DBS) of the subthalamic nucleus has evolved into a very effective treatment option for patients with severe Parkinson's disease. DBS entails the implantation of an electrode that delivers high frequency stimulation to a target area deep inside the brain. A very accurate placement of the electrode is a prerequisite for positive therapy outcome. The assessment of the intervention result is of central importance in DBS treatment and involves the registration of pre- and postinterventional scans. In this paper, we present an image processing pipeline for highly accurate registration of postoperative CT to preoperative MR. Our method consists of two steps: a fully automatic pre-alignment using a detection of the skull tip in the CT based on fuzzy connectedness, and an intensity-based rigid registration. The registration uses the Normalized Gradient Fields distance measure in a multilevel Gauss-Newton optimization framework and focuses on a region around the subthalamic nucleus in the MR. The accuracy of our method was extensively evaluated on 20 DBS datasets from clinical routine and compared with manual expert registrations. For each dataset, three independent registrations were available, thus allowing algorithmic performance to be related to that of experts. Our method achieved an average registration error of 0.95 mm in the target region around the subthalamic nucleus, as compared to an inter-observer variability of 1.12 mm. Together with the short registration time of about five seconds on average, our method forms a very attractive package that can be considered ready for clinical use.
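
    The Normalized Gradient Fields (NGF) distance used here rewards locally parallel (or anti-parallel) gradients regardless of intensity scale, which is what makes it suitable for CT-MR registration. A minimal sketch of the distance itself, with the edge parameter `eps` controlling which gradients count as noise:

```python
import numpy as np

def ngf_distance(a, b, eps=1e-2):
    """Normalized Gradient Fields distance: mean of 1 - cos^2 between the
    regularized gradient directions of images a and b."""
    ga = np.stack(np.gradient(a))
    gb = np.stack(np.gradient(b))
    na = np.sqrt((ga ** 2).sum(axis=0) + eps ** 2)   # regularized magnitudes
    nb = np.sqrt((gb ** 2).sum(axis=0) + eps ** 2)
    dot = (ga * gb).sum(axis=0)
    return (1.0 - (dot / (na * nb)) ** 2).mean()     # 0 = aligned gradients
```

    Because only gradient directions matter, multiplying one image by a constant leaves the distance essentially unchanged, whereas rotating its structure increases it.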

  9. A fast alignment method for breast MRI follow-up studies using automated breast segmentation and current-prior registration

    NASA Astrophysics Data System (ADS)

    Wang, Lei; Strehlow, Jan; Rühaak, Jan; Weiler, Florian; Diez, Yago; Gubern-Merida, Albert; Diekmann, Susanne; Laue, Hendrik; Hahn, Horst K.

    2015-03-01

    In breast cancer screening for high-risk women, follow-up magnetic resonance images (MRI) are acquired with a time interval ranging from several months up to a few years. Prior MRI studies may provide additional clinical value when examining the current one and thus have the potential to increase the sensitivity and specificity of screening. To build a spatial correlation between suspicious findings in both current and prior studies, a reliable alignment method between follow-up studies is desirable. However, the long time interval, different scanners and imaging protocols, and varying breast compression can result in large deformations, which challenge the registration process. In this work, we present a fast and robust spatial alignment framework, which combines automated breast segmentation and current-prior registration techniques in a multi-level fashion. First, fully automatic breast segmentation is applied to extract the breast masks that are used to obtain an initial affine transform. Then, a non-rigid registration algorithm using normalized gradient fields as the similarity measure together with curvature regularization is applied. A total of 29 subjects and 58 breast MR images were collected for performance assessment. To evaluate the global registration accuracy, volume overlap and boundary surface distance metrics are calculated, resulting in an average Dice Similarity Coefficient (DSC) of 0.96 and a root mean square distance (RMSD) of 1.64 mm. In addition, to measure local registration accuracy, for each subject a radiologist annotated 10 pairs of markers in the current and prior studies representing corresponding anatomical locations. The average distance error of marker pairs dropped from 67.37 mm to 10.86 mm after applying registration.
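
    The two evaluation metrics reported above are straightforward to compute. A short sketch of the Dice similarity coefficient for binary masks and the RMS distance for annotated marker pairs:

```python
import numpy as np

def dice(mask_a, mask_b):
    """Dice similarity coefficient of two boolean masks."""
    inter = np.logical_and(mask_a, mask_b).sum()
    return 2.0 * inter / (mask_a.sum() + mask_b.sum())

def rmsd(points_a, points_b):
    """Root mean square distance between corresponding point pairs (rows)."""
    d = np.linalg.norm(points_a - points_b, axis=1)
    return np.sqrt((d ** 2).mean())
```

    Note that the paper's RMSD is computed over boundary surfaces, while the marker-pair evaluation reports a mean distance; the sketch shows the generic point-pair form.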

  10. 3D medical volume reconstruction using web services.

    PubMed

    Kooper, Rob; Shirk, Andrew; Lee, Sang-Chul; Lin, Amy; Folberg, Robert; Bajcsy, Peter

    2008-04-01

    We address the problem of 3D medical volume reconstruction using web services. The use of web services is motivated by the fact that the problem of 3D medical volume reconstruction requires significant computer resources and human expertise in medical and computer science areas. Web services are implemented as an additional layer to a dataflow framework called Data to Knowledge. In the collaboration between UIC and NCSA, pre-processed input images at NCSA are made accessible to medical collaborators for registration. Every time UIC medical collaborators inspect images and select corresponding features for registration, the web service at NCSA is contacted and the registration processing query is executed using the Image to Knowledge library of registration methods. Co-registered frames are returned for verification by medical collaborators in a new window. In this paper, we present 3D volume reconstruction problem requirements and the architecture of the developed prototype system at http://isda.ncsa.uiuc.edu/MedVolume. We also explain the tradeoffs of our system design and provide experimental data to support our system implementation. The prototype system has been used for multiple 3D volume reconstructions of blood vessels and vasculogenic mimicry patterns in histological sections of uveal melanoma studied by fluorescent confocal laser scanning microscope.

  11. Automatic registration of iPhone images to LASER point clouds of urban structures using shape features

    NASA Astrophysics Data System (ADS)

    Sirmacek, B.; Lindenbergh, R. C.; Menenti, M.

    2013-10-01

    Fusion of 3D airborne laser (LIDAR) data and terrestrial optical imagery can be applied in 3D urban modeling and model updating. The most challenging aspect of the fusion procedure is registering the terrestrial optical images on the LIDAR point clouds. In this article, we propose an approach for registering these two kinds of data from different sensor sources. We use iPhone camera images, taken in front of the urban structure of interest by the application user, and high resolution LIDAR point clouds acquired by an airborne laser sensor. After finding the photo capturing position and orientation from the iPhone photograph metafile, we automatically select the area of interest in the point cloud and transform it into a range image which has only grayscale intensity levels according to the distance from the image acquisition position. We benefit from local features for registering the iPhone image to the generated range image. In this article, we have applied the registration process based on local feature extraction and graph matching. Finally, the registration result is used for facade texture mapping on the 3D building surface mesh which is generated from the LIDAR point cloud. Our experimental results indicate possible usage of the proposed algorithm framework for 3D urban map updating and enhancing purposes.
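
    The point-cloud-to-range-image step described above can be sketched by binning each point's viewing direction (azimuth/elevation as seen from the acquisition position) into a pixel grid and storing its distance. The grid size and the nearest-return rule below are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

def point_cloud_to_range_image(points, origin, width=64, height=64):
    """Project 3D points onto an azimuth/elevation grid; pixel = distance."""
    v = points - origin
    r = np.linalg.norm(v, axis=1)
    az = np.arctan2(v[:, 1], v[:, 0])                         # azimuth, [-pi, pi]
    el = np.arcsin(np.clip(v[:, 2] / np.maximum(r, 1e-12), -1.0, 1.0))
    col = ((az + np.pi) / (2 * np.pi) * (width - 1)).astype(int)
    row = ((el + np.pi / 2) / np.pi * (height - 1)).astype(int)
    img = np.full((height, width), np.inf)                    # inf = no return
    for rr, cc, dd in zip(row, col, r):
        img[rr, cc] = min(img[rr, cc], dd)                    # keep nearest point
    return img
```

    The resulting grayscale range image can then be matched against the camera image with ordinary 2D local-feature methods, which is the key trick that reduces a 2D-3D problem to 2D-2D registration.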

  12. An approach for quantitative image quality analysis for CT

    NASA Astrophysics Data System (ADS)

    Rahimi, Amir; Cochran, Joe; Mooney, Doug; Regensburger, Joe

    2016-03-01

    An objective and standardized approach to assess image quality of Computed Tomography (CT) systems is required in a wide variety of imaging processes to identify CT systems appropriate for a given application. We present an overview of the framework we have developed to help standardize and to objectively assess CT image quality for different models of CT scanners used for security applications. Within this framework, we have developed methods to quantitatively measure metrics that should correlate with feature identification, detection accuracy and precision, and image registration capabilities of CT machines, and to identify strengths and weaknesses in different CT imaging technologies in transportation security. To that end, we have designed, developed and constructed phantoms that allow for systematic and repeatable measurements of roughly 88 image quality metrics, representing modulation transfer function, noise equivalent quanta, noise power spectra, slice sensitivity profiles, streak artifacts, CT number uniformity, CT number consistency, object length accuracy, CT number path length consistency, and object registration. Furthermore, we have developed a sophisticated MATLAB based image analysis tool kit to analyze CT generated images of phantoms and report these metrics in a format that is standardized across the considered models of CT scanners, allowing for comparative image quality analysis within a CT model or between different CT models. In addition, we have developed a modified sparse principal component analysis (SPCA) method to generate a modified set of PCA components, as compared to the standard principal component analysis (PCA) with sparse loadings, in conjunction with the Hotelling T2 statistical analysis method to compare, qualify, and detect faults in the tested systems.
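
    One of the listed metrics, CT number uniformity, is commonly computed as the largest deviation of peripheral-ROI means from a central-ROI mean. The sketch below uses that common convention; ROI size and placement are assumptions, not the phantom specification from this framework.

```python
import numpy as np

def ct_number_uniformity(slice_hu, roi=8):
    """Largest |edge-ROI mean - center-ROI mean| over four edge ROIs (in HU)."""
    h, w = slice_hu.shape

    def mean_roi(cy, cx):
        # Mean HU over a (2*roi x 2*roi) square centered at (cy, cx).
        return slice_hu[cy - roi:cy + roi, cx - roi:cx + roi].mean()

    center = mean_roi(h // 2, w // 2)
    edges = [mean_roi(roi, w // 2), mean_roi(h - roi, w // 2),
             mean_roi(h // 2, roi), mean_roi(h // 2, w - roi)]
    return max(abs(e - center) for e in edges)
```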

  13. MO-C-17A-11: A Segmentation and Point Matching Enhanced Deformable Image Registration Method for Dose Accumulation Between HDR CT Images

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhen, X; Chen, H; Zhou, L

    2014-06-15

    Purpose: To propose and validate a novel and accurate deformable image registration (DIR) scheme to facilitate dose accumulation among treatment fractions of high-dose-rate (HDR) gynecological brachytherapy. Method: We have developed a method to adapt DIR algorithms to gynecologic anatomies with HDR applicators by incorporating a segmentation step and a point-matching step into an existing DIR framework. In the segmentation step, the random walks algorithm is used to accurately segment and remove the applicator region (AR) in the HDR CT image. A semi-automatic seed point generation approach is developed to obtain the incremented foreground and background point sets to feed the random walks algorithm. In the subsequent point-matching step, a feature-based thin-plate spline-robust point matching (TPS-RPM) algorithm is employed for AR surface point matching. With the resulting mapping, a DVF characteristic of the deformation between the two AR surfaces is generated by B-spline approximation, which serves as the initial DVF for the following Demons DIR between the two AR-free HDR CT images. Finally, the calculated DVF via Demons combined with the initial one serves as the final DVF to map doses between HDR fractions. Results: The segmentation and registration accuracy are quantitatively assessed on nine clinical HDR cases from three gynecological cancer patients. The quantitative results as well as visual inspection of the DIR indicate that our proposed method can suppress the interference of the applicator with the DIR algorithm, and accurately register HDR CT images as well as deform and accumulate interfractional HDR doses. Conclusions: We have developed a novel and robust DIR scheme that can perform registration between HDR gynecological CT images and yield accurate registration results. This new DIR scheme has potential for accurate interfractional HDR dose accumulation.
    This work is supported in part by the National Natural Science Foundation of China (no. 30970866 and no. 81301940).
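
    The scheme chains an initial B-spline DVF (from applicator-surface point matching) with a Demons refinement, which amounts to composing two displacement fields. A minimal numpy sketch of that composition, assuming both fields live on the same grid and using nearest-neighbour lookup (the function and its conventions are illustrative, not the authors' implementation):

```python
import numpy as np

def compose_dvf(d_outer, d_inner):
    """Compose two displacement fields on the same 2D grid:
    total(x) = d_inner(x) + d_outer(x + d_inner(x)),
    i.e. apply d_inner first, then d_outer (nearest-neighbour lookup)."""
    h, w = d_inner.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    # positions after the inner (initial) deformation, clipped to the grid
    yq = np.clip(np.rint(ys + d_inner[..., 0]).astype(int), 0, h - 1)
    xq = np.clip(np.rint(xs + d_inner[..., 1]).astype(int), 0, w - 1)
    return d_inner + d_outer[yq, xq]

# toy check: pure translations compose additively
d1 = np.full((8, 8, 2), 1.0)   # initial DVF: shift (+1, +1)
d2 = np.full((8, 8, 2), 2.0)   # Demons refinement: shift (+2, +2)
total = compose_dvf(d2, d1)    # displacement of 3.0 everywhere
```

    Real implementations interpolate the outer field rather than rounding, but the additive composition structure is the same.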

  14. Fast multi-core based multimodal registration of 2D cross-sections and 3D datasets

    PubMed Central

    2010-01-01

    Background Solving bioinformatics tasks often requires extensive computational power. Recent trends in processor architecture combine multiple cores into a single chip to improve overall performance. The Cell Broadband Engine (CBE), a heterogeneous multi-core processor, provides power-efficient and cost-effective high-performance computing. One application area is image analysis and visualisation, in particular registration of 2D cross-sections into 3D image datasets. Such techniques can be used to put different image modalities into spatial correspondence, for example, 2D images of histological cuts into morphological 3D frameworks. Results We evaluate the CBE-driven PlayStation 3 as a high-performance, cost-effective computing platform by adapting a multimodal alignment procedure to several characteristic hardware properties. The optimisations are based on partitioning, vectorisation, branch reduction, and loop unrolling techniques with special attention to 32-bit multiplies and limited local storage on the computing units. We show how a typical image analysis and visualisation problem, the multimodal registration of 2D cross-sections and 3D datasets, benefits from the multi-core based implementation of the alignment algorithm. We discuss several CBE-based optimisation methods and compare our results to standard solutions. More information and the source code are available from http://cbe.ipk-gatersleben.de. Conclusions The results demonstrate that the CBE processor in a PlayStation 3 accelerates computationally intensive multimodal registration, which is of great importance in biological/medical image processing. The PlayStation 3 as a low cost CBE-based platform offers an efficient option to conventional hardware to solve computational problems in image processing and bioinformatics. PMID:20064262
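
    The multimodal alignment above needs an intensity similarity that works across modalities; mutual information computed from a joint histogram is the standard choice and vectorises well, which is exactly the kind of kernel such hardware ports optimise. A small numpy sketch (the bin count and toy images are arbitrary choices, not the paper's settings):

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Mutual information of two equally shaped images via a joint histogram."""
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p = hist / hist.sum()                      # joint probability
    px = p.sum(axis=1, keepdims=True)          # marginal of a
    py = p.sum(axis=0, keepdims=True)          # marginal of b
    nz = p > 0
    return float((p[nz] * np.log(p[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(0)
img = rng.random((64, 64))
mi_self = mutual_information(img, img)                    # high: identical images
mi_rand = mutual_information(img, rng.random((64, 64)))   # near zero: independent
```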

  15. Modeling 4D Pathological Changes by Leveraging Normative Models

    PubMed Central

    Wang, Bo; Prastawa, Marcel; Irimia, Andrei; Saha, Avishek; Liu, Wei; Goh, S.Y. Matthew; Vespa, Paul M.; Van Horn, John D.; Gerig, Guido

    2016-01-01

    With the increasing use of efficient multimodal 3D imaging, clinicians are able to access longitudinal imaging to stage pathological diseases, to monitor the efficacy of therapeutic interventions, or to assess and quantify rehabilitation efforts. Analysis of such four-dimensional (4D) image data presenting pathologies, including disappearing and newly appearing lesions, represents a significant challenge due to the presence of complex spatio-temporal changes. Image analysis methods for such 4D image data have to include not only a concept for joint segmentation of 3D datasets to account for inherent correlations of subject-specific repeated scans but also a mechanism to account for large deformations and the destruction and formation of lesions (e.g., edema, bleeding) due to underlying physiological processes associated with damage, intervention, and recovery. In this paper, we propose a novel framework that provides a joint segmentation-registration framework to tackle the inherent problem of image registration in the presence of objects not present in all images of the time series. Our methodology models 4D changes in pathological anatomy across time and also provides an explicit mapping of a healthy normative template to a subject’s image data with pathologies. Since atlas-moderated segmentation methods cannot explain the appearance and location of pathological structures that are not represented in the template atlas, the new framework provides different options for initialization via a supervised learning approach, iterative semisupervised active learning, and also transfer learning, which results in a fully automatic 4D segmentation method. We demonstrate the effectiveness of our novel approach with synthetic experiments and a 4D multimodal MRI dataset of severe traumatic brain injury (TBI), including validation via comparison to expert segmentations. 
However, the proposed methodology is generic in regard to different clinical applications requiring quantitative analysis of 4D imaging representing spatio-temporal changes of pathologies. PMID:27818606

  16. Incorporating the whole-mount prostate histology reconstruction program Histostitcher into the extensible imaging platform (XIP) framework

    NASA Astrophysics Data System (ADS)

    Toth, Robert; Chappelow, Jonathan; Vetter, Christoph; Kutter, Oliver; Russ, Christoph; Feldman, Michael; Tomaszewski, John; Shih, Natalie; Madabhushi, Anant

    2012-03-01

    There is a need for identifying quantitative imaging (e.g. MRI) signatures for prostate cancer (CaP), so that computer-aided diagnostic methods can be trained to detect disease extent in vivo. Determining CaP extent on in vivo MRI is difficult; however, with the availability of ex vivo surgical whole mount histological sections (WMHS) for CaP patients undergoing radical prostatectomy, co-registration methods can be applied to align and map disease extent onto pre-operative MR imaging from the post-operative histology. Yet obtaining digitized images of WMHS for co-registration with the pre-operative MRI is cumbersome since (a) most digital slide scanners are unable to accommodate the entire section, and (b) significant technical expertise is required for whole mount slide preparation. Consequently, most centers opt to construct quartered sections of each histology slice. Prior to co-registration with MRI, however, these quartered sections need to be digitally stitched together to reconstitute a digital, pseudo WMHS. Histostitcher is an interactive software program that uses semi-automatic registration tools to digitally stitch quartered sections into pseudo WMHS. Histostitcher was originally developed using the GUI tools provided by the Matlab programming interface, but the clinical use was limited due to the inefficiency of the interface. The limitations of the Matlab-based GUI include (a) an inability to edit the fiducials, (b) the rendering being extremely slow, and (c) lack of interactive and rapid visualization tools. In this work, Histostitcher has been integrated into the eXtensible Imaging Platform (XIP™) framework (a set of libraries containing functionalities for analyzing and visualizing medical image data). 
    XIP™ lends the stitching tool much greater flexibility and functionality by (a) allowing interactive and seamless navigation through the full-resolution histology images, and (b) allowing fiducials and annotations to be easily added, edited, or removed in order to register the quadrants and map the disease extent. In this work, we showcase examples of digital stitching of quartered histological sections into pseudo-WMHS using Histostitcher via the new XIP™ interface. This tool will be particularly useful in clinical trials and large cohort studies where a quick, interactive way of digitally reconstructing pseudo WMHS is required.
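
    Stitching quartered sections into a pseudo WMHS ultimately means placing each registered quadrant into a common canvas. A deliberately simplified, translation-only sketch (Histostitcher itself uses fiducial-driven registration; `stitch_quadrants` and the offsets here are hypothetical):

```python
import numpy as np

def stitch_quadrants(quadrants, offsets, canvas_shape):
    """Paste quadrant images into a common canvas at given (row, col) offsets.
    A translation-only stand-in for fiducial-driven quadrant alignment."""
    canvas = np.zeros(canvas_shape, dtype=float)
    for q, (r, c) in zip(quadrants, offsets):
        h, w = q.shape
        canvas[r:r + h, c:c + w] = q
    return canvas

# four toy 4x4 "quadrants" reassembled into one 8x8 pseudo section
q = [np.full((4, 4), v) for v in (1.0, 2.0, 3.0, 4.0)]
pseudo_wmhs = stitch_quadrants(q, [(0, 0), (0, 4), (4, 0), (4, 4)], (8, 8))
```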

  17. 2D-3D registration for brain radiation therapy using a 3D CBCT and a single limited field-of-view 2D kV radiograph

    NASA Astrophysics Data System (ADS)

    Munbodh, R.; Moseley, D. J.

    2014-03-01

    We report results of an intensity-based 2D-3D rigid registration framework for patient positioning and monitoring during brain radiotherapy. We evaluated two intensity-based similarity measures, the Pearson Correlation Coefficient (ICC) and Maximum Likelihood with Gaussian noise (MLG) derived from the statistics of transmission images. A useful image frequency band was identified from the bone-to-no-bone ratio. Validation was performed on gold-standard data consisting of 3D kV CBCT scans and 2D kV radiographs of an anthropomorphic head phantom acquired at 23 different poses with parameter variations along six degrees of freedom. At each pose, a single limited field of view kV radiograph was registered to the reference CBCT. The ground truth was determined from markers affixed to the phantom and visible in the CBCT images. The mean (and standard deviation) of the absolute errors in recovering each of the six transformation parameters along the x, y and z axes for ICC were φx: 0.08(0.04)°, φy: 0.10(0.09)°, φz: 0.03(0.03)°, tx: 0.13(0.11) mm, ty: 0.08(0.06) mm and tz: 0.44(0.23) mm. For MLG, the corresponding results were φx: 0.10(0.04)°, φy: 0.10(0.09)°, φz: 0.05(0.07)°, tx: 0.11(0.13) mm, ty: 0.05(0.05) mm and tz: 0.44(0.31) mm. It is feasible to accurately estimate all six transformation parameters from a 3D CBCT of the head and a single 2D kV radiograph within an intensity-based registration framework that incorporates the physics of transmission images.
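
    The ICC similarity described here is a Pearson correlation computed after restricting both images to a useful frequency band. A toy numpy sketch of that idea (the radial band-pass and band limits are illustrative assumptions, not the paper's filter):

```python
import numpy as np

def bandpass(img, lo, hi):
    """Keep spatial frequencies with radial index in [lo, hi) (cycles/image)."""
    F = np.fft.fftshift(np.fft.fft2(img))
    h, w = img.shape
    yy, xx = np.mgrid[-h // 2:h - h // 2, -w // 2:w - w // 2]
    r = np.hypot(yy, xx)
    F[(r < lo) | (r >= hi)] = 0
    return np.real(np.fft.ifft2(np.fft.ifftshift(F)))

def icc_similarity(drr, radiograph, lo=2, hi=16):
    """Pearson correlation of the band-limited images as a pose score."""
    a = bandpass(drr, lo, hi).ravel()
    b = bandpass(radiograph, lo, hi).ravel()
    return float(np.corrcoef(a, b)[0, 1])

rng = np.random.default_rng(1)
fixed = rng.random((32, 32))
score_same = icc_similarity(fixed, fixed)                 # ~1.0 at the true pose
score_diff = icc_similarity(fixed, rng.random((32, 32)))  # near zero off-pose
```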

  18. Development of a piecewise linear omnidirectional 3D image registration method

    NASA Astrophysics Data System (ADS)

    Bae, Hyunsoo; Kang, Wonjin; Lee, SukGyu; Kim, Youngwoo

    2016-12-01

    This paper proposes a new piecewise linear omnidirectional image registration method. The proposed method segments an image captured by multiple cameras into 2D segments defined by feature points of the image and then stitches each segment geometrically by considering the inclination of the segment in 3D space. Depending on the intended use of image registration, the proposed method can be used to improve image registration accuracy or reduce the computation time in image registration because the trade-off between computation time and image registration accuracy can be controlled. In general, nonlinear image registration methods have been used in 3D omnidirectional image registration processes to reduce image distortion by camera lenses. The proposed method depends on a linear transformation process for omnidirectional image registration, and therefore it can enhance the effectiveness of the geometry recognition process, increase image registration accuracy by increasing the number of cameras or feature points of each image, increase the image registration speed by reducing the number of cameras or feature points of each image, and provide simultaneous information on shapes and colors of captured objects.
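
    Piecewise-linear registration assigns each image segment its own linear (affine) map instead of one global nonlinear warp. A toy sketch of applying per-segment affines to segment vertices (the shapes and maps below are hypothetical, not the paper's geometry):

```python
import numpy as np

def warp_segments(segments, affines):
    """Apply a per-segment 2D affine map (2x3 matrix) to each segment's
    vertex list; a toy stand-in for stitching planar segments that each
    carry their own inclination-derived transform."""
    return [pts @ A[:, :2].T + A[:, 2] for pts, A in zip(segments, affines)]

tri = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
identity = np.hstack([np.eye(2), np.zeros((2, 1))])    # leave segment as-is
scale_shift = np.hstack([2.0 * np.eye(2), np.ones((2, 1))])  # scale 2x, shift +1
warped = warp_segments([tri, tri], [identity, scale_shift])
```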

  19. Robust multi-site MR data processing: iterative optimization of bias correction, tissue classification, and registration.

    PubMed

    Young Kim, Eun; Johnson, Hans J

    2013-01-01

    A robust multi-modal tool for automated registration, bias correction, and tissue classification has been implemented for large-scale heterogeneous multi-site longitudinal MR data analysis. This work focused on improving an iterative optimization framework between bias correction, registration, and tissue classification, inspired by previous work. The primary contributions are robustness improvements from the incorporation of the following four elements: (1) utilizing multi-modal and repeated scans, (2) incorporating highly deformable registration, (3) using an extended set of tissue definitions, and (4) using multi-modal-aware intensity-context priors. The benefits of these enhancements were investigated by a series of experiments with both a simulated brain dataset (BrainWeb) and highly heterogeneous data from a 32-site imaging study, with quality assessed through expert visual inspection. The implementation of this tool is tailored for, but not limited to, large-scale data processing with great data variation, and provides a flexible interface. In this paper, we describe enhancements to joint registration, bias correction, and tissue classification that improve the generalizability and robustness for processing multi-modal longitudinal MR scans collected at multiple sites. The tool was evaluated using both simulated and human subject MRI images. With these enhancements, the results showed improved robustness for large-scale heterogeneous MRI processing.

  20. SU-E-J-127: Real-Time Dosimetric Assessment for Adaptive Head-And-Neck Treatment Via A GPU-Based Deformable Image Registration Framework

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Qi, S; Neylon, J; Chen, A

    2014-06-01

    Purpose: To systematically monitor anatomic variations and their dosimetric consequences during head-and-neck (H&N) radiation therapy using a GPU-based deformable image registration (DIR) framework. Methods: Eleven H&N IMRT patients comprised the subject population. Daily megavoltage CT and weekly kVCT scans were acquired for each patient. The pre-treatment CTs were automatically registered with their corresponding planning CT through an in-house GPU-based DIR framework. The deformation of each contoured structure was computed to account for non-rigid change in the patient setup. The Jacobian determinant for the PTVs and critical structures was used to quantify anatomical volume changes. Dose accumulation was performed to determine the actual delivered dose. A landmark tool was developed to determine the uncertainty in the dose distribution due to registration error. Results: Dramatic interfraction anatomic changes leading to dosimetric variations were observed. During the treatment courses of 6–7 weeks, the parotid gland volumes changed by up to 34.7%, and the center-of-mass displacement of the two parotids varied in the range of 0.9–8.8 mm. Mean doses were within 5% and 3% of the planned mean doses for all PTVs and CTVs, respectively. The cumulative minimum/mean/EUD doses were lower than the planned doses by 18%, 2%, and 7%, respectively, for PTV1. The ratio of the averaged cumulative cord maximum doses to the plan was 1.06±0.15. The cumulative mean doses assessed by the weekly kVCTs were significantly higher than the planned dose for the left parotid (p=0.03) and right parotid gland (p=0.006). The computation time was nearly real-time (∼45 seconds) for registering each pre-treatment CT to the planning CT and performing dose accumulation, with registration accuracy (for kVCT) at the sub-voxel level (<1.5 mm). Conclusions: Real-time assessment of anatomic and dosimetric variations is feasible using the GPU-based DIR framework. 
    Clinical implementation of this technology may enable timely plan adaptation and potentially lead to improved outcomes.
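
    The per-structure volume change above is read off the Jacobian determinant of the displacement field: values above 1 indicate local expansion, below 1 local shrinkage (e.g. a parotid gland losing volume). A numpy sketch for a dense 3D DVF (the axis ordering is an assumption, not the authors' convention):

```python
import numpy as np

def jacobian_determinant(dvf):
    """Pointwise Jacobian determinant of a 3D displacement field.
    dvf has shape (z, y, x, 3); det(I + du) > 1 means local expansion."""
    grads = [np.gradient(dvf[..., i]) for i in range(3)]  # d u_i / d axis_j
    J = np.empty(dvf.shape[:3] + (3, 3))
    for i in range(3):
        for j in range(3):
            J[..., i, j] = grads[i][j] + (1.0 if i == j else 0.0)
    return np.linalg.det(J)

# toy check: uniform 10% expansion along every axis
zz, yy, xx = np.mgrid[0:8, 0:8, 0:8].astype(float)
dvf = np.stack([0.1 * zz, 0.1 * yy, 0.1 * xx], axis=-1)
jac = jacobian_determinant(dvf)   # ≈ 1.1**3 ≈ 1.331 everywhere
```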

  1. Scene-based nonuniformity correction and enhancement: pixel statistics and subpixel motion.

    PubMed

    Zhao, Wenyi; Zhang, Chao

    2008-07-01

    We propose a framework for scene-based nonuniformity correction (NUC) and nonuniformity correction and enhancement (NUCE) that is required for focal-plane array-like sensors to obtain clean and enhanced-quality images. The core of the proposed framework is a novel registration-based nonuniformity correction super-resolution (NUCSR) method that is bootstrapped by statistical scene-based NUC methods. Based on a comprehensive imaging model and an accurate parametric motion estimation, we are able to remove severe/structured nonuniformity and, in the presence of subpixel motion, simultaneously improve image resolution. One important feature of our NUCSR method is the adoption of a parametric motion model that allows us to (1) handle many practical scenarios where parametric motions are present and (2) carry out perfect super-resolution in principle by exploiting available subpixel motions. Experiments with real data demonstrate the efficiency of the proposed NUCE framework and the effectiveness of the NUCSR method.
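
    Statistical scene-based NUC methods of the kind used to bootstrap the NUCSR step often rest on the constant-statistics assumption: over enough frames, every detector element sees the same temporal mean and deviation, so per-pixel gain and offset reduce to temporal normalisation. A minimal sketch under that assumption (not the paper's exact estimator):

```python
import numpy as np

def constant_statistics_nuc(frames):
    """Scene-based nonuniformity correction under the constant-statistics
    assumption: normalise each pixel by its own temporal mean and std,
    which cancels a fixed-pattern gain/offset."""
    mu = frames.mean(axis=0)
    sigma = frames.std(axis=0)
    sigma[sigma == 0] = 1.0          # guard dead pixels
    return (frames - mu) / sigma

# toy check: a per-pixel gain/offset applied to a moving scene is removed
rng = np.random.default_rng(2)
scene = rng.random((50, 16, 16))          # "true" irradiance frames
gain = 1.0 + 0.5 * rng.random((16, 16))   # fixed-pattern gain
offset = 10.0 * rng.random((16, 16))      # fixed-pattern offset
raw = gain * scene + offset
corrected = constant_statistics_nuc(raw)
ideal = constant_statistics_nuc(scene)    # same normalisation of the clean scene
```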

  2. Fully automated motion correction in first-pass myocardial perfusion MR image sequences.

    PubMed

    Milles, Julien; van der Geest, Rob J; Jerosch-Herold, Michael; Reiber, Johan H C; Lelieveldt, Boudewijn P F

    2008-11-01

    This paper presents a novel method for registration of cardiac perfusion magnetic resonance imaging (MRI). The presented method is capable of automatically registering perfusion data, using independent component analysis (ICA) to extract physiologically relevant features together with their time-intensity behavior. A time-varying reference image mimicking intensity changes in the data of interest is computed based on the results of that ICA. This reference image is used in a two-pass registration framework. Qualitative and quantitative validation of the method is carried out using 46 clinical-quality, short-axis, perfusion MR datasets comprising 100 images each. Despite varying image quality and motion patterns in the evaluation set, validation of the method showed a reduction of the average left ventricle (LV) motion from 1.26+/-0.87 to 0.64+/-0.46 pixels. Time-intensity curves are also improved after registration, with the average error between registered data and the manual gold standard reduced from 2.65+/-7.89% to 0.87+/-3.88%. Comparison of clinically relevant parameters computed using registered data and the manual gold standard shows good agreement. Additional tests with a simulated free-breathing protocol showed robustness against considerable deviations from a standard breathing protocol. We conclude that this fully automatic ICA-based method shows accuracy, robustness, and computation speed adequate for use in a clinical environment.
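
    The registration step aligns each perfusion frame to a reference image; a common intensity-based way to recover a translation is phase correlation. A toy numpy stand-in (the paper's reference comes from ICA and its registration is two-pass; this sketch only estimates integer shifts):

```python
import numpy as np

def phase_correlation_shift(reference, frame):
    """Estimate the integer (dy, dx) translation of `frame` relative to
    `reference` via phase correlation on the cross-power spectrum."""
    R = np.conj(np.fft.fft2(reference)) * np.fft.fft2(frame)
    corr = np.real(np.fft.ifft2(R / (np.abs(R) + 1e-12)))
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = reference.shape
    dy = dy - h if dy > h // 2 else dy   # unwrap to signed shifts
    dx = dx - w if dx > w // 2 else dx
    return (int(dy), int(dx))

rng = np.random.default_rng(3)
ref = rng.random((64, 64))
moved = np.roll(ref, shift=(3, -5), axis=(0, 1))  # simulated breathing motion
shift = phase_correlation_shift(ref, moved)        # → (3, -5)
```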

  3. Discriminative confidence estimation for probabilistic multi-atlas label fusion.

    PubMed

    Benkarim, Oualid M; Piella, Gemma; González Ballester, Miguel Angel; Sanroma, Gerard

    2017-12-01

    Quantitative neuroimaging analyses often rely on the accurate segmentation of anatomical brain structures. In contrast to manual segmentation, automatic methods offer reproducible outputs and provide scalability to study large databases. Among existing approaches, multi-atlas segmentation has recently shown to yield state-of-the-art performance in automatic segmentation of brain images. It consists in propagating the labelmaps from a set of atlases to the anatomy of a target image using image registration, and then fusing these multiple warped labelmaps into a consensus segmentation on the target image. Accurately estimating the contribution of each atlas labelmap to the final segmentation is a critical step for the success of multi-atlas segmentation. Common approaches to label fusion either rely on local patch similarity, probabilistic statistical frameworks or a combination of both. In this work, we propose a probabilistic label fusion framework based on atlas label confidences computed at each voxel of the structure of interest. Maximum likelihood atlas confidences are estimated using a supervised approach, explicitly modeling the relationship between local image appearances and segmentation errors produced by each of the atlases. We evaluate different spatial pooling strategies for modeling local segmentation errors. We also present a novel type of label-dependent appearance features based on atlas labelmaps that are used during confidence estimation to increase the accuracy of our label fusion. Our approach is evaluated on the segmentation of seven subcortical brain structures from the MICCAI 2013 SATA Challenge dataset and the hippocampi from the ADNI dataset. Overall, our results indicate that the proposed label fusion framework achieves superior performance to state-of-the-art approaches in the majority of the evaluated brain structures and shows more robustness to registration errors. Copyright © 2017 Elsevier B.V. All rights reserved.
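
    A common baseline that the learned-confidence approach above improves on is patch-similarity-weighted voting: each atlas's label at a voxel is weighted by how well its local patch matches the target. A numpy sketch (the array shapes and Gaussian weighting are illustrative assumptions):

```python
import numpy as np

def patch_weighted_label_fusion(target_patches, atlas_patches, atlas_labels, beta=1.0):
    """Fuse candidate labels per voxel, weighting each atlas by local patch
    similarity (the paper learns voxelwise confidences instead).
    target_patches: (V, P), atlas_patches: (A, V, P), atlas_labels: (A, V)."""
    d2 = ((atlas_patches - target_patches[None]) ** 2).sum(axis=-1)  # (A, V)
    w = np.exp(-beta * d2)                                           # similarity weights
    labels = np.unique(atlas_labels)
    votes = np.stack([(w * (atlas_labels == c)).sum(axis=0) for c in labels])
    return labels[np.argmax(votes, axis=0)]

# toy check: the atlas whose patches match the target dominates the vote
rng = np.random.default_rng(4)
target = rng.random((10, 27))                   # 10 voxels, 3x3x3 patches
good = target + 0.01 * rng.random((10, 27))     # near-identical atlas, label 1
bad = rng.random((2, 10, 27))                   # two dissimilar atlases, label 0
atlas_patches = np.concatenate([good[None], bad])
atlas_labels = np.array([[1] * 10, [0] * 10, [0] * 10])
fused = patch_weighted_label_fusion(target, atlas_patches, atlas_labels)
```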

  4. FIRE: an open-software suite for real-time 2D/3D image registration for image guided radiotherapy research

    NASA Astrophysics Data System (ADS)

    Furtado, H.; Gendrin, C.; Spoerk, J.; Steiner, E.; Underwood, T.; Kuenzler, T.; Georg, D.; Birkfellner, W.

    2016-03-01

    Radiotherapy treatments have changed at a tremendously rapid pace. Dose delivered to the tumor has escalated while organs at risk (OARs) are better spared. The impact of moving tumors during dose delivery has become higher due to very steep dose gradients. Intra-fractional tumor motion has to be managed adequately to reduce errors in dose delivery. For tumors with large motion such as tumors in the lung, tracking is an approach that can reduce position uncertainty. Tumor tracking approaches range from purely image-intensity-based techniques to motion estimation based on surrogate tracking. Research efforts are often based on custom designed software platforms which take too much time and effort to develop. To address this challenge we have developed an open software platform especially focusing on tumor motion management. FIRE is a freely available open-source software platform. The core method for tumor tracking is purely intensity-based 2D/3D registration. The platform is written in C++ using the Qt framework for the user interface. The performance critical methods are implemented on the graphics processor using the CUDA extension. One registration can be as fast as 90 ms (11 Hz). This is suitable to track tumors moving due to respiration (~0.3 Hz) or heartbeat (~1 Hz). Apart from focusing on high performance, the platform is designed to be flexible and easy to use. Current use cases include tracking feasibility studies, patient positioning, and method validation. Such a framework has the potential of enabling the research community to rapidly perform patient studies or try new methods.

  5. PORTR: Pre-Operative and Post-Recurrence Brain Tumor Registration

    PubMed Central

    Niethammer, Marc; Akbari, Hamed; Bilello, Michel; Davatzikos, Christos; Pohl, Kilian M.

    2014-01-01

    We propose a new method for deformable registration of pre-operative and post-recurrence brain MR scans of glioma patients. Performing this type of intra-subject registration is challenging as tumor, resection, recurrence, and edema cause large deformations, missing correspondences, and inconsistent intensity profiles between the scans. To address this challenging task, our method, called PORTR, explicitly accounts for pathological information. It segments tumor, resection cavity, and recurrence based on models specific to each scan. PORTR then uses the resulting maps to exclude pathological regions from the image-based correspondence term while simultaneously measuring the overlap between the aligned tumor and resection cavity. Embedded into a symmetric registration framework, we determine the optimal solution by taking advantage of both discrete and continuous search methods. We apply our method to scans of 24 glioma patients. Both quantitative and qualitative analysis of the results clearly show that our method is superior to other state-of-the-art approaches. PMID:24595340
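
    Excluding pathological regions from the image-based correspondence term can be sketched as a masked dissimilarity: voxels flagged as tumor, cavity, or recurrence simply do not contribute. A toy numpy version (SSD is used here for brevity; PORTR's actual correspondence term and segmentation models are richer):

```python
import numpy as np

def masked_ssd(moving, fixed, pathology_mask):
    """Mean squared intensity difference with pathological voxels excluded,
    so tumor, resection cavity, and recurrence do not drive the match."""
    valid = ~pathology_mask
    d = moving[valid] - fixed[valid]
    return float((d ** 2).mean())

rng = np.random.default_rng(7)
fixed = rng.random((16, 16))
moving = fixed.copy()
mask = np.zeros((16, 16), dtype=bool)
mask[4:8, 4:8] = True
moving[mask] += 5.0                         # simulated resection/recurrence
score = masked_ssd(moving, fixed, mask)     # 0.0: outside the mask the images agree
```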

  6. Medical imaging and registration in computer assisted surgery.

    PubMed

    Simon, D A; Lavallée, S

    1998-09-01

    Imaging, sensing, and computing technologies that are being introduced to aid in the planning and execution of surgical procedures are providing orthopaedic surgeons with a powerful new set of tools for improving clinical accuracy, reliability, and patient outcomes while reducing costs and operating times. Current computer assisted surgery systems typically include a measurement process for collecting patient specific medical data, a decision making process for generating a surgical plan, a registration process for aligning the surgical plan to the patient, and an action process for accurately achieving the goals specified in the plan. Some of the key concepts in computer assisted surgery applied to orthopaedics are outlined, with a focus on the basic framework and underlying technologies. In addition, technical challenges and future trends in the field are discussed.

  7. Registration of 3D spectral OCT volumes using 3D SIFT feature point matching

    NASA Astrophysics Data System (ADS)

    Niemeijer, Meindert; Garvin, Mona K.; Lee, Kyungmoo; van Ginneken, Bram; Abràmoff, Michael D.; Sonka, Milan

    2009-02-01

    The recent introduction of next generation spectral OCT scanners has enabled routine acquisition of high resolution, 3D cross-sectional volumetric images of the retina. 3D OCT is used in the detection and management of serious eye diseases such as glaucoma and age-related macular degeneration. For follow-up studies, image registration is a vital tool to enable more precise, quantitative comparison of disease states. This work presents a registration method based on a recently introduced extension of the 2D Scale-Invariant Feature Transform (SIFT) framework to 3D. The SIFT feature extractor locates minima and maxima in the difference-of-Gaussian scale space to find salient feature points. It then uses histograms of the local gradient directions around each found extremum in 3D to characterize them in a 4096-element feature vector. Matching points are found by comparing the distance between feature vectors. We apply this method to the rigid registration of optic nerve head- (ONH) and macula-centered 3D OCT scans of the same patient that have only limited overlap. Three OCT data set pairs with known deformation were used for quantitative assessment of the method's robustness and accuracy when deformations of rotation and scaling were considered. Three-dimensional registration accuracy of 2.0+/-3.3 voxels was observed. The accuracy was assessed as average voxel distance error in N=1572 matched locations. The registration method was applied to 12 3D OCT scans (200 x 200 x 1024 voxels) of 6 normal eyes imaged in vivo to demonstrate the clinical utility and robustness of the method in a real-world environment.
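
    Matching SIFT descriptors between scans reduces to nearest-neighbour search over feature vectors, typically guarded by Lowe's ratio test so ambiguous matches are discarded. A numpy sketch (the descriptor length and threshold here are arbitrary; the paper compares 4096-element vectors):

```python
import numpy as np

def match_descriptors(desc_a, desc_b, ratio=0.8):
    """Match feature vectors between two scans with Lowe's ratio test:
    accept a match only if the nearest neighbour is clearly closer than
    the second nearest."""
    matches = []
    for i, d in enumerate(desc_a):
        dist = np.linalg.norm(desc_b - d, axis=1)
        j, k = np.argsort(dist)[:2]
        if dist[j] < ratio * dist[k]:
            matches.append((i, int(j)))
    return matches

rng = np.random.default_rng(5)
desc_b = rng.random((20, 64))
desc_a = desc_b[[3, 7, 11]] + 0.001 * rng.random((3, 64))  # perturbed copies
matches = match_descriptors(desc_a, desc_b)                 # [(0, 3), (1, 7), (2, 11)]
```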

  8. Evaluation of low-dose limits in 3D-2D rigid registration for surgical guidance

    NASA Astrophysics Data System (ADS)

    Uneri, A.; Wang, A. S.; Otake, Y.; Kleinszig, G.; Vogt, S.; Khanna, A. J.; Gallia, G. L.; Gokaslan, Z. L.; Siewerdsen, J. H.

    2014-09-01

    An algorithm for intensity-based 3D-2D registration of CT and C-arm fluoroscopy is evaluated for use in surgical guidance, specifically considering the low-dose limits of the fluoroscopic x-ray projections. The registration method is based on a framework using the covariance matrix adaptation evolution strategy (CMA-ES) to identify the 3D patient pose that maximizes the gradient information similarity metric. Registration performance was evaluated in an anthropomorphic head phantom emulating intracranial neurosurgery, using target registration error (TRE) to characterize accuracy and robustness in terms of 95% confidence upper bound in comparison to that of an infrared surgical tracking system. Three clinical scenarios were considered: (1) single-view image + guidance, wherein a single x-ray projection is used for visualization and 3D-2D guidance; (2) dual-view image + guidance, wherein one projection is acquired for visualization, combined with a second (lower-dose) projection acquired at a different C-arm angle for 3D-2D guidance; and (3) dual-view guidance, wherein both projections are acquired at low dose for the purpose of 3D-2D guidance alone (not visualization). In each case, registration accuracy was evaluated as a function of the entrance surface dose associated with the projection view(s). Results indicate that images acquired at a dose as low as 4 μGy (approximately one-tenth the dose of a typical fluoroscopic frame) were sufficient to provide TRE comparable or superior to that of conventional surgical tracking, allowing 3D-2D guidance at a level of dose that is at most 10% greater than conventional fluoroscopy (scenario #2) and potentially reducing the dose to approximately 20% of the level in a conventional fluoroscopically guided procedure (scenario #3).
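
    Target registration error compares where anatomical target points land under the estimated pose versus the ground-truth pose. A numpy sketch with 4x4 homogeneous rigid transforms (the points and transforms below are toy values, not the study's phantom data):

```python
import numpy as np

def target_registration_error(targets, T_est, T_true):
    """TRE: per-point distance between targets mapped by the estimated
    vs. ground-truth rigid transforms (4x4 homogeneous matrices)."""
    pts = np.c_[targets, np.ones(len(targets))]       # (N, 4) homogeneous
    diff = (pts @ T_est.T - pts @ T_true.T)[:, :3]
    return np.linalg.norm(diff, axis=1)

# toy check: a 1 mm translation error along x gives TRE = 1 mm everywhere
T_true = np.eye(4)
T_est = np.eye(4)
T_est[0, 3] = 1.0
targets = np.array([[0.0, 0.0, 0.0], [10.0, 20.0, 30.0], [-5.0, 5.0, 50.0]])
tre = target_registration_error(targets, T_est, T_true)   # [1.0, 1.0, 1.0]
```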

  9. Concurrent Tumor Segmentation and Registration with Uncertainty-based Sparse non-Uniform Graphs

    PubMed Central

    Parisot, Sarah; Wells, William; Chemouny, Stéphane; Duffau, Hugues; Paragios, Nikos

    2014-01-01

    In this paper, we present a graph-based concurrent brain tumor segmentation and atlas-to-diseased-patient registration framework. Both segmentation and registration problems are modeled using a unified pairwise discrete Markov Random Field model on a sparse grid superimposed on the image domain. Segmentation is addressed based on pattern classification techniques, while registration is performed by maximizing the similarity between volumes and is modular with respect to the matching criterion. The two problems are coupled by relaxing the registration term in the tumor area, corresponding to areas of high classification score and high dissimilarity between volumes. In order to overcome the main shortcomings of discrete approaches regarding appropriate sampling of the solution space as well as important memory requirements, content driven samplings of the discrete displacement set and the sparse grid are considered, based on the local segmentation and registration uncertainties recovered by the min-marginal energies. State-of-the-art results on a substantial low-grade glioma database demonstrate the potential of our method, while our proposed approach shows maintained performance and strongly reduced complexity of the model. PMID:24717540
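
    The coupling idea, relaxing the registration term where the tumor classification score is high, can be sketched as a per-voxel down-weighting of the matching cost. A toy numpy version (the actual model is a pairwise MRF solved with discrete optimization; this only illustrates the weighting):

```python
import numpy as np

def coupled_matching_term(dissimilarity, tumor_prob):
    """Down-weight the registration (matching) term where the tumor
    classification score is high, so the atlas is not forced to match
    inside pathological regions."""
    return float(((1.0 - tumor_prob) * dissimilarity).sum())

dissim = np.ones((4, 4))                   # uniform matching cost
p_tumor = np.zeros((4, 4))
p_tumor[1:3, 1:3] = 1.0                    # 4 voxels classified as tumor
energy = coupled_matching_term(dissim, p_tumor)   # 16 - 4 = 12
```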

  10. Contour-Driven Atlas-Based Segmentation

    PubMed Central

    Wachinger, Christian; Fritscher, Karl; Sharp, Greg; Golland, Polina

    2016-01-01

    We propose new methods for automatic segmentation of images based on an atlas of manually labeled scans and contours in the image. First, we introduce a Bayesian framework for creating initial label maps from manually annotated training images. Within this framework, we model various registration- and patch-based segmentation techniques by changing the deformation field prior. Second, we perform contour-driven regression on the created label maps to refine the segmentation. Image contours and image parcellations give rise to non-stationary kernel functions that model the relationship between image locations. Setting the kernel to the covariance function in a Gaussian process establishes a distribution over label maps supported by image structures. Maximum a posteriori estimation of the distribution over label maps conditioned on the outcome of the atlas-based segmentation yields the refined segmentation. We evaluate the segmentation in two clinical applications: the segmentation of parotid glands in head and neck CT scans and the segmentation of the left atrium in cardiac MR angiography images. PMID:26068202
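
    Setting the kernel as the covariance of a Gaussian process and conditioning on the atlas-based labels yields a posterior mean that smooths labels along image structures. A minimal GP-regression sketch, with a toy block kernel standing in for the contour-driven kernel (the kernel, noise level, and label vector are illustrative):

```python
import numpy as np

def gp_refine(K, y, noise=1e-2):
    """Posterior mean of a Gaussian process over label values:
    m = K (K + noise*I)^{-1} y. With an image-structure-driven kernel K,
    labels are smoothed along, not across, image contours."""
    n = len(y)
    return K @ np.linalg.solve(K + noise * np.eye(n), y)

# toy kernel: two blocks of mutually similar locations (two image regions)
K = np.kron(np.eye(2), np.ones((3, 3)))
y = np.array([1.0, 0.0, 1.0, 0.0, 0.0, 0.0])   # noisy initial labels
refined = gp_refine(K, y)                       # block means, slightly shrunk
```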

  11. Image Quality Improvement in Adaptive Optics Scanning Laser Ophthalmoscopy Assisted Capillary Visualization Using B-spline-based Elastic Image Registration

    PubMed Central

    Uji, Akihito; Ooto, Sotaro; Hangai, Masanori; Arichika, Shigeta; Yoshimura, Nagahisa

    2013-01-01

    Purpose To investigate the effect of B-spline-based elastic image registration on adaptive optics scanning laser ophthalmoscopy (AO-SLO)-assisted capillary visualization. Methods AO-SLO videos were acquired from parafoveal areas in the eyes of healthy subjects and patients with various diseases. After nonlinear image registration, the image quality of capillary images constructed from AO-SLO videos using motion contrast enhancement was compared before and after B-spline-based elastic (nonlinear) image registration performed using ImageJ. For objective comparison of image quality, contrast-to-noise ratios (CNRs) for vessel images were calculated. For subjective comparison, experienced ophthalmologists ranked images on a 5-point scale. Results All AO-SLO videos were successfully stabilized by elastic image registration. CNR was significantly higher in capillary images stabilized by elastic image registration than in those stabilized without registration. The average ratio of CNR in images with elastic image registration to CNR in images without elastic image registration was 2.10 ± 1.73, with no significant difference in the ratio between patients and healthy subjects. Improvement of image quality was also supported by expert comparison. Conclusions Use of B-spline-based elastic image registration in AO-SLO-assisted capillary visualization was effective for enhancing image quality both objectively and subjectively. PMID:24265796
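
    The objective comparison uses contrast-to-noise ratios between vessel and background regions. A numpy sketch of one common CNR definition (the record does not give the exact formula, so treat this variant as an assumption):

```python
import numpy as np

def contrast_to_noise_ratio(image, vessel_mask, background_mask):
    """CNR between vessel and background regions:
    |mean_vessel - mean_background| / std_background."""
    v = image[vessel_mask]
    b = image[background_mask]
    return float(abs(v.mean() - b.mean()) / b.std())

# toy image: a bright horizontal vessel over a noisy background
img = np.zeros((16, 16))
vessel = np.zeros((16, 16), dtype=bool)
vessel[8, :] = True
img[vessel] = 4.0
rng = np.random.default_rng(6)
img += 0.5 * rng.standard_normal((16, 16))   # noise, std 0.5 → CNR near 8
cnr = contrast_to_noise_ratio(img, vessel, ~vessel)
```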

  12. 2D-3D registration for cranial radiation therapy using a 3D kV CBCT and a single limited field-of-view 2D kV radiograph.

    PubMed

    Munbodh, Reshma; Knisely, Jonathan PS; Jaffray, David A; Moseley, Douglas J

    2018-05-01

    We present and evaluate a fully automated 2D-3D intensity-based registration framework using a single limited field-of-view (FOV) 2D kV radiograph and a 3D kV CBCT for 3D estimation of patient setup errors during brain radiotherapy. We evaluated two similarity measures, the Pearson correlation coefficient on image intensity values (ICC) and maximum likelihood measure with Gaussian noise (MLG), derived from the statistics of transmission images. Pose determination experiments were conducted on 2D kV radiographs in the anterior-posterior (AP) and left lateral (LL) views and 3D kV CBCTs of an anthropomorphic head phantom. In order to minimize radiation exposure and exclude nonrigid structures from the registration, limited FOV 2D kV radiographs were employed. A spatial frequency band useful for the 2D-3D registration was identified from the bone-to-no-bone spectral ratio (BNBSR) of digitally reconstructed radiographs (DRRs) computed from the 3D kV planning CT of the phantom. The images being registered were filtered accordingly prior to computation of the similarity measures. We evaluated the registration accuracy achievable with a single 2D kV radiograph and with the registration results from the AP and LL views combined. We also compared the performance of the 2D-3D registration solutions proposed to that of a commercial 3D-3D registration algorithm, which used the entire skull for the registration. The ground truth was determined from markers affixed to the phantom and visible in the CBCT images. The accuracy of the 2D-3D registration solutions, as quantified by the root mean squared value of the target registration error (TRE) calculated over a radius of 3 cm for all poses tested, was ICC-AP: 0.56 mm, MLG-AP: 0.74 mm, ICC-LL: 0.57 mm, MLG-LL: 0.54 mm, ICC (AP and LL combined): 0.19 mm, and MLG (AP and LL combined): 0.21 mm. The accuracy of the 3D-3D registration algorithm was 0.27 mm.
There was no significant difference in mean TRE between the 2D-3D registration algorithms using a single 2D kV radiograph, regardless of similarity measure and image view. There was no significant difference in mean TRE between ICC-LL, MLG-LL, ICC (AP and LL combined), MLG (AP and LL combined), and the 3D-3D registration algorithm, despite the smaller FOV used for the 2D-3D registration. While submillimeter registration accuracy was obtained with both ICC and MLG using a single 2D kV radiograph, combining the results from the two projection views resulted in a significantly smaller (P≤0.05) mean TRE. Our results indicate that it is possible to achieve submillimeter registration accuracy with both ICC and MLG using either single or dual limited FOV 2D kV radiographs of the head in the AP and LL views. The registration accuracy suggests that the 2D-3D registration solutions presented are suitable for the estimation of patient setup errors not only during conventional brain radiation therapy, but also during stereotactic procedures and proton radiation therapy where tighter setup margins are required. © 2018 American Association of Physicists in Medicine.
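The TRE figures reported above are root-mean-squared values over target points mapped by an estimated versus a ground-truth transform. A hedged sketch of that computation for rigid transforms in homogeneous 4×4 form (names are illustrative, not the study's code):

```python
import numpy as np

def rms_target_registration_error(points, T_est, T_true):
    """RMS target registration error over a set of 3D target points, given an
    estimated and a ground-truth transform as 4x4 homogeneous matrices."""
    pts_h = np.hstack([points, np.ones((len(points), 1))])
    mapped_est = (pts_h @ T_est.T)[:, :3]
    mapped_true = (pts_h @ T_true.T)[:, :3]
    errors = np.linalg.norm(mapped_est - mapped_true, axis=1)
    return np.sqrt(np.mean(errors ** 2))
```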

  13. Computing global minimizers to a constrained B-spline image registration problem from optimal l1 perturbations to block match data

    PubMed Central

    Castillo, Edward; Castillo, Richard; Fuentes, David; Guerrero, Thomas

    2014-01-01

    Purpose: Block matching is a well-known strategy for estimating corresponding voxel locations between a pair of images according to an image similarity metric. Though robust to issues such as image noise and large magnitude voxel displacements, the estimated point matches are not guaranteed to be spatially accurate. However, the underlying optimization problem solved by the block matching procedure is similar in structure to the class of optimization problem associated with B-spline based registration methods. By exploiting this relationship, the authors derive a numerical method for computing a global minimizer to a constrained B-spline registration problem that incorporates the robustness of block matching with the global smoothness properties inherent to B-spline parameterization. Methods: The method reformulates the traditional B-spline registration problem as a basis pursuit problem describing the minimal l1-perturbation to block match pairs required to produce a B-spline fitting error within a given tolerance. The sparsity pattern of the optimal perturbation then defines a voxel point cloud subset on which the B-spline fit is a global minimizer to a constrained variant of the B-spline registration problem. As opposed to traditional B-spline algorithms, the optimization step involving the actual image data is addressed by block matching. Results: The performance of the method is measured in terms of spatial accuracy using ten inhale/exhale thoracic CT image pairs (available for download at www.dir-lab.com) obtained from the COPDgene dataset and corresponding sets of expert-determined landmark point pairs. The results of the validation procedure demonstrate that the method can achieve a high spatial accuracy on a significantly complex image set. 
Conclusions: The proposed methodology is demonstrated to achieve a high spatial accuracy and is generalizable in that it can employ any displacement field parameterization described as a least squares fit to block match generated estimates. Thus, the framework allows for a wide range of block match similarity metric and physical modeling combinations. PMID:24694135
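The block matching step that anchors this framework can be sketched as an exhaustive normalized-cross-correlation search over integer offsets (a simplified 2D toy version with hypothetical names, not the authors' implementation, which operates on 3D CT and feeds an l1 basis pursuit):

```python
import numpy as np

def block_match(fixed, moving, center, block=5, search=3):
    """Find the integer displacement (dy, dx) of a block around `center` that
    maximizes normalized cross-correlation within a search window."""
    r = block // 2
    y, x = center
    ref = fixed[y - r:y + r + 1, x - r:x + r + 1].astype(float)
    ref = (ref - ref.mean()) / (ref.std() + 1e-12)
    best, best_d = -np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            cand = moving[y + dy - r:y + dy + r + 1,
                          x + dx - r:x + dx + r + 1].astype(float)
            cand = (cand - cand.mean()) / (cand.std() + 1e-12)
            score = (ref * cand).mean()  # NCC of the two normalized blocks
            if score > best:
                best, best_d = score, (dy, dx)
    return best_d
```

Repeating this at many voxel locations yields the point-match cloud to which a B-spline displacement field is then fit.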

  14. Accelerated gradient-based free form deformable registration for online adaptive radiotherapy

    NASA Astrophysics Data System (ADS)

    Yu, Gang; Liang, Yueqiang; Yang, Guanyu; Shu, Huazhong; Li, Baosheng; Yin, Yong; Li, Dengwang

    2015-04-01

    The registration of planning fan-beam computed tomography (FBCT) and daily cone-beam CT (CBCT) is a crucial step in adaptive radiation therapy. The current intensity-based registration algorithms, such as Demons, may fail when they are used to register FBCT and CBCT, because the CT numbers in CBCT cannot exactly correspond to the electron densities. In this paper, we investigated the effects of CBCT intensity inaccuracy on the registration accuracy and developed an accurate gradient-based free form deformation algorithm (GFFD). GFFD distinguishes itself from other free form deformable registration algorithms by (a) measuring the similarity using the 3D gradient vector fields to avoid the effect of inconsistent intensities between the two modalities; (b) accommodating image sampling anisotropy using the local polynomial approximation-intersection of confidence intervals (LPA-ICI) algorithm to ensure a smooth and continuous displacement field; and (c) introducing a ‘bi-directional’ force along with an adaptive force strength adjustment to accelerate the convergence process. It is expected that such a strategy can decrease the effect of the inconsistent intensities between the two modalities, thus improving the registration accuracy and robustness. Moreover, for clinical application, the algorithm was implemented on graphics processing units (GPUs) through the OpenCL framework. The registration time of the GFFD algorithm for each set of CT data ranges from 8 to 13 s. Applications in on-line adaptive image-guided radiation therapy, including auto-propagation of contours, aperture optimization and dose volume histogram (DVH) analysis in the course of radiation therapy, were also studied with in-house-developed software.
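Measuring similarity with gradient vector fields, as GFFD does to sidestep inconsistent FBCT/CBCT intensities, can be illustrated with a simple orientation-based score: gradient directions are preserved under intensity scaling and offset even when absolute CT numbers disagree. A 2D toy version (the paper's 3D formulation and force terms differ in detail):

```python
import numpy as np

def gradient_field_similarity(a, b):
    """Mean squared cosine between the gradient directions of two images.
    Insensitive to a global intensity scaling/offset between modalities."""
    ga = np.stack(np.gradient(a.astype(float)))   # (2, H, W)
    gb = np.stack(np.gradient(b.astype(float)))
    na = np.linalg.norm(ga, axis=0) + 1e-12
    nb = np.linalg.norm(gb, axis=0) + 1e-12
    cos = (ga * gb).sum(axis=0) / (na * nb)       # per-pixel direction agreement
    return (cos ** 2).mean()
```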

  15. An Integrated Approach to Segmentation and Nonrigid Registration for Application in Image-Guided Pelvic Radiotherapy

    PubMed Central

    Lu, Chao; Chelikani, Sudhakar; Papademetris, Xenophon; Knisely, Jonathan P.; Milosevic, Michael F.; Chen, Zhe; Jaffray, David A.; Staib, Lawrence H.; Duncan, James S.

    2011-01-01

    External beam radiotherapy (EBRT) has become the preferred option for non-surgical treatment of prostate cancer and cervix cancer. In order to deliver higher doses to cancerous regions within these pelvic structures (i.e. prostate or cervix) while maintaining or lowering the doses to surrounding non-cancerous regions, it is critical to account for setup variation, organ motion, anatomical changes due to treatment and intra-fraction motion. In previous work, the soft tissues were segmented manually and the images then registered based on that manual segmentation. In this paper, we present an integrated automatic approach to multiple organ segmentation and nonrigid constrained registration, which can achieve these two aims simultaneously. The segmentation and registration steps are both formulated using a Bayesian framework, and they constrain each other using an iterative conditional model strategy. We also propose a new strategy to assess cumulative actual dose for this novel integrated algorithm, in order to both determine whether the intended treatment is being delivered and, potentially, whether or not a plan should be adjusted for future treatment fractions. Quantitative results show that the automatic segmentation produced results with an accuracy comparable to manual segmentation, while the registration part significantly outperforms both rigid and non-rigid registration. Clinical application and evaluation of dose delivery show the superiority of the proposed method over the procedure currently used in clinical practice, i.e. manual segmentation followed by rigid registration. PMID:21646038

  16. Cost-effective surgical registration using consumer depth cameras

    NASA Astrophysics Data System (ADS)

    Potter, Michael; Yaniv, Ziv

    2016-03-01

    The high costs associated with technological innovation have been previously identified as both a major contributor to the rise of health care expenses, and as a limitation for widespread adoption of new technologies. In this work we evaluate the use of two consumer grade depth cameras, the Microsoft Kinect v1 and 3DSystems Sense, as a means for acquiring point clouds for registration. These devices have the potential to replace professional grade laser range scanning devices in medical interventions that do not require sub-millimetric registration accuracy, and may do so at a significantly reduced cost. To facilitate the use of these devices we have developed a near real-time (1-4 sec/frame) rigid registration framework combining several alignment heuristics with the Iterative Closest Point (ICP) algorithm. Using nearest neighbor registration error as our evaluation criterion we found the optimal scanning distances for the Sense and Kinect to be 50-60cm and 70-80cm respectively. When imaging a skull phantom at these distances, RMS error values of 1.35mm and 1.14mm were obtained. The registration framework was then evaluated using cranial MR scans of two subjects. For the first subject, the RMS error using the Sense was 1.28 +/- 0.01 mm. Using the Kinect this error was 1.24 +/- 0.03 mm. For the second subject, whose MR scan was significantly corrupted by metal implants, the errors increased to 1.44 +/- 0.03 mm and 1.74 +/- 0.06 mm but the system nonetheless performed within acceptable bounds.
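The nearest-neighbor registration error used above as the evaluation criterion can be sketched directly: for each source point, find its closest target point and take the RMS of those distances. A brute-force variant suitable for small clouds (names are illustrative; a k-d tree would be used at scale):

```python
import numpy as np

def nn_rms_error(source, target):
    """Nearest-neighbour RMS distance from each source point to a target
    point cloud. Brute force: fine for small clouds."""
    # (n_source, n_target) squared distance matrix
    d2 = ((source[:, None, :] - target[None, :, :]) ** 2).sum(-1)
    return np.sqrt(d2.min(axis=1).mean())
```

This is also the quantity ICP minimizes iteratively when aligning a depth-camera scan to a surface extracted from MR.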

  17. Supervised local error estimation for nonlinear image registration using convolutional neural networks

    NASA Astrophysics Data System (ADS)

    Eppenhof, Koen A. J.; Pluim, Josien P. W.

    2017-02-01

    Error estimation in medical image registration is valuable when validating, comparing, or combining registration methods. To validate a nonlinear image registration method, ideally the registration error should be known for the entire image domain. We propose a supervised method for the estimation of a registration error map for nonlinear image registration. The method is based on a convolutional neural network that estimates the norm of the residual deformation from patches around each pixel in two registered images. This norm is interpreted as the registration error, and is defined for every pixel in the image domain. The network is trained using a set of artificially deformed images. Each training example is a pair of images: the original image, and a random deformation of that image. No manually labeled ground truth error is required. At test time, only the two registered images are required as input. We train and validate the network on registrations in a set of 2D digital subtraction angiography sequences, such that errors up to eight pixels can be estimated. We show that for this range of errors the convolutional network is able to learn the registration error in pairs of 2D registered images at subpixel precision. Finally, we present a proof of principle for the extension to 3D registration problems in chest CTs, showing that the method has the potential to estimate errors in 3D registration problems.

  18. Estimation of the uncertainty of elastic image registration with the demons algorithm.

    PubMed

    Hub, M; Karger, C P

    2013-05-07

    The accuracy of elastic image registration is limited. We propose an approach to detect voxels where registration based on the demons algorithm is likely to perform inaccurately, compared to other locations of the same image. The approach is based on the assumption that the local reproducibility of the registration can be regarded as a measure of uncertainty of the image registration. The reproducibility is determined as the standard deviation of the displacement vector components obtained from multiple registrations. These registrations differ in predefined initial deformations. The proposed approach was tested with artificially deformed lung images, where the ground truth on the deformation is known. In voxels where the result of the registration was less reproducible, the registration turned out to have larger average registration errors as compared to locations of the same image, where the registration was more reproducible. The proposed method can show a clinician in which area of the image the elastic registration with the demons algorithm cannot be expected to be accurate.
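The reproducibility measure described above reduces to a per-voxel standard deviation of displacement vector components across repeated registrations started from different initial deformations. A minimal sketch under that reading (function name and the summed-component form are assumptions):

```python
import numpy as np

def registration_uncertainty(displacement_fields):
    """Voxel-wise uncertainty as the standard deviation of displacement
    components across repeated registrations, summed over components.
    Each field has shape (..., n_components)."""
    fields = np.stack(displacement_fields)   # (n_runs, ..., n_components)
    return fields.std(axis=0).sum(axis=-1)   # per-voxel summed component std
```

Voxels with high values flag regions where the demons result should not be trusted.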

  19. MUSE: MUlti-atlas region Segmentation utilizing Ensembles of registration algorithms and parameters, and locally optimal atlas selection

    PubMed Central

    Ou, Yangming; Resnick, Susan M.; Gur, Ruben C.; Gur, Raquel E.; Satterthwaite, Theodore D.; Furth, Susan; Davatzikos, Christos

    2016-01-01

    Atlas-based automated anatomical labeling is a fundamental tool in medical image segmentation, as it defines regions of interest for subsequent analysis of structural and functional image data. The extensive investigation of multi-atlas warping and fusion techniques over the past 5 or more years has clearly demonstrated the advantages of consensus-based segmentation. However, the common approach is to use multiple atlases with a single registration method and parameter set, which is not necessarily optimal for every individual scan, anatomical region, and problem/data-type. Different registration criteria and parameter sets yield different solutions, each providing complementary information. Herein, we present a consensus labeling framework that generates a broad ensemble of labeled atlases in target image space via the use of several warping algorithms, regularization parameters, and atlases. The label fusion integrates two complementary sources of information: a local similarity ranking to select locally optimal atlases and a boundary modulation term to refine the segmentation consistently with the target image's intensity profile. The ensemble approach consistently outperforms segmentations using individual warping methods alone, achieving high accuracy on several benchmark datasets. The MUSE methodology has been used for processing thousands of scans from various datasets, producing robust and consistent results. MUSE is publicly available both as a downloadable software package, and as an application that can be run on the CBICA Image Processing Portal (https://ipp.cbica.upenn.edu), a web based platform for remote processing of medical images. PMID:26679328
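The local similarity ranking at the heart of the label fusion can be illustrated with a much-simplified patch-based vote: rank candidate atlases by how well their local appearance matches the target, then let the best few vote. This is a hypothetical stand-in, not the MUSE implementation (which also includes boundary modulation):

```python
import numpy as np

def local_weighted_fusion(atlas_labels, atlas_patches, target_patch, k=3):
    """Fuse candidate labels at one location: rank atlases by local patch
    similarity (negative mean absolute difference) and majority-vote among
    the k most similar."""
    sims = np.array([-np.abs(p - target_patch).mean() for p in atlas_patches])
    best = np.argsort(sims)[::-1][:k]            # k locally best atlases
    votes = np.bincount(np.asarray(atlas_labels)[best])
    return votes.argmax()
```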

  20. Thermal imaging as a biometrics approach to facial signature authentication.

    PubMed

    Guzman, A M; Goryawala, M; Wang, Jin; Barreto, A; Andrian, J; Rishe, N; Adjouadi, M

    2013-01-01

    A new thermal imaging framework with unique feature extraction and similarity measurements for face recognition is presented. The research premise is to design specialized algorithms that would extract vasculature information, create a thermal facial signature and identify the individual. The proposed algorithm is fully integrated and consolidates the critical steps of feature extraction through the use of morphological operators, registration using the Linear Image Registration Tool and matching through unique similarity measures designed for this task. The novel approach at developing a thermal signature template using four images taken at various instants of time ensured that unforeseen changes in the vasculature over time did not affect the biometric matching process as the authentication process relied only on consistent thermal features. Thirteen subjects were used for testing the developed technique on an in-house thermal imaging system. The matching using the similarity measures showed an average accuracy of 88.46% for skeletonized signatures and 90.39% for anisotropically diffused signatures. The highly accurate results obtained in the matching process clearly demonstrate the ability of the thermal infrared system to extend in application to other thermal imaging based systems. Empirical results applying this approach to an existing database of thermal images proves this assertion.

  1. An atlas-based multimodal registration method for 2D images with discrepancy structures.

    PubMed

    Lv, Wenchao; Chen, Houjin; Peng, Yahui; Li, Yanfeng; Li, Jupeng

    2018-06-04

    An atlas-based multimodal registration method for 2-dimensional (2D) images with discrepancy structures is proposed in this paper. An atlas was utilized for complementing the discrepancy structure information in multimodal medical images. The scheme includes three steps: floating image to atlas registration, atlas to reference image registration, and field-based deformation. To evaluate the performance, a frame model, a brain model, and clinical images were employed in registration experiments. We measured the registration performance by the squared sum of intensity differences. Results indicate that this method is robust and performs better than direct registration for multimodal images with discrepancy structures. We conclude that the proposed method is suitable for multimodal images with discrepancy structures. Graphical Abstract: A schematic diagram of the atlas-based multimodal registration method.

  2. Influence of magnetic field strength and image registration strategy on voxel-based morphometry in a study of Alzheimer's disease.

    PubMed

    Marchewka, Artur; Kherif, Ferath; Krueger, Gunnar; Grabowska, Anna; Frackowiak, Richard; Draganski, Bogdan

    2014-05-01

    Multi-centre data repositories like the Alzheimer's Disease Neuroimaging Initiative (ADNI) offer a unique research platform, but pose questions concerning comparability of results when using a range of imaging protocols and data processing algorithms. The variability is mainly due to the non-quantitative character of the widely used structural T1-weighted magnetic resonance (MR) images. Although the stability of the main effect of Alzheimer's disease (AD) on brain structure across platforms and field strength has been addressed in previous studies using multi-site MR images, there are only sparse empirically-based recommendations for processing and analysis of pooled multi-centre structural MR data acquired at different magnetic field strengths (MFS). Aiming to minimise potential systematic bias when using ADNI data we investigate the specific contributions of spatial registration strategies and the impact of MFS on voxel-based morphometry in AD. We perform a whole-brain analysis within the framework of Statistical Parametric Mapping, testing for main effects of various diffeomorphic spatial registration strategies, of MFS and their interaction with disease status. Beyond the confirmation of medial temporal lobe volume loss in AD, we detect a significant impact of spatial registration strategy on estimation of AD related atrophy. Additionally, we report a significant effect of MFS on the assessment of brain anatomy (i) in the cerebellum, (ii) the precentral gyrus and (iii) the thalamus bilaterally, showing no interaction with the disease status. We provide empirical evidence in support of pooling data in multi-centre VBM studies irrespective of disease status or MFS. Copyright © 2013 Wiley Periodicals, Inc.

  3. Multimodal Image Registration through Simultaneous Segmentation.

    PubMed

    Aganj, Iman; Fischl, Bruce

    2017-11-01

    Multimodal image registration facilitates the combination of complementary information from images acquired with different modalities. Most existing methods require computation of the joint histogram of the images, while some perform joint segmentation and registration in alternate iterations. In this work, we introduce a new non-information-theoretical method for pairwise multimodal image registration, in which the error of segmentation - using both images - is considered as the registration cost function. We empirically evaluate our method via rigid registration of multi-contrast brain magnetic resonance images, and demonstrate an often higher registration accuracy in the results produced by the proposed technique, compared to those by several existing methods.

  4. Image registration with uncertainty analysis

    DOEpatents

    Simonson, Katherine M [Cedar Crest, NM]

    2011-03-22

    In an image registration method, edges are detected in a first image and a second image. A percentage of edge pixels in a subset of the second image that are also edges in the first image shifted by a translation is calculated. A best registration point is calculated based on a maximum percentage of edges matched. In a predefined search region, all registration points other than the best registration point are identified that are not significantly worse than the best registration point according to a predetermined statistical criterion.
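The edge-matching criterion described above can be sketched as an exhaustive search over translations for the one that maximizes the fraction of moving-image edge pixels landing on fixed-image edges (an illustrative 2D version; the patent's statistical criterion for near-equivalent registration points is omitted):

```python
import numpy as np

def best_translation_by_edge_match(edges_fixed, edges_moving, search=2):
    """Return the translation (dy, dx) maximizing the fraction of moving-image
    edge pixels that coincide with fixed-image edge pixels, and that fraction.
    Inputs are boolean edge maps of equal shape."""
    best_frac, best_t = -1.0, (0, 0)
    n_edges = edges_moving.sum()
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            shifted = np.roll(edges_moving, (dy, dx), axis=(0, 1))
            frac = (shifted & edges_fixed).sum() / n_edges
            if frac > best_frac:
                best_frac, best_t = frac, (dy, dx)
    return best_t, best_frac
```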

  5. Partial volume correction and image analysis methods for intersubject comparison of FDG-PET studies

    NASA Astrophysics Data System (ADS)

    Yang, Jun

    2000-12-01

    Partial volume effect is an artifact mainly due to limited imaging sensor resolution. It creates bias in the measured activity in small structures and around tissue boundaries. In brain FDG-PET studies, especially studies of Alzheimer's disease where there is serious gray matter atrophy, accurately estimating the cerebral metabolic rate of glucose is even more problematic due to the large partial volume effect. In this dissertation, we developed a framework enabling inter-subject comparison of partial-volume-corrected brain FDG-PET studies. The framework is composed of the following image processing steps: (1) MRI segmentation, (2) MR-PET registration, (3) MR-based PVE correction, (4) MR 3D inter-subject elastic mapping. Through simulation studies, we showed that the newly developed partial volume correction methods, either pixel based or ROI based, performed better than previous methods. By applying this framework to a real Alzheimer's disease study, we demonstrated that the partial-volume-corrected glucose metabolic rates vary significantly among the control, at-risk and diseased patient groups, and that this framework is a promising tool for assisting early identification of Alzheimer's patients.

  6. A Review on Medical Image Registration as an Optimization Problem

    PubMed Central

    Song, Guoli; Han, Jianda; Zhao, Yiwen; Wang, Zheng; Du, Huibin

    2017-01-01

    Objective: In the course of clinical treatment, several medical media are required by a physician in order to provide accurate and complete information about a patient. Medical image registration techniques can provide richer diagnosis and treatment information to doctors, and this review aims to provide a comprehensive reference source for researchers who study image registration as an optimization problem. Methods: The essence of image registration is establishing the spatial association between two or more images and recovering the transformation that relates them. For medical image registration, the process is not absolute; its core purpose is finding the conversion relationship between different images. Result: The major steps of image registration include the change of geometrical dimensions, image combination, image similarity measurement, iterative optimization and interpolation. Conclusion: The contribution of this review is a sorting of related image registration research methods, which can provide a brief reference for researchers about image registration. PMID:28845149

  7. Robust image registration for multiple exposure high dynamic range image synthesis

    NASA Astrophysics Data System (ADS)

    Yao, Susu

    2011-03-01

    Image registration is an important preprocessing technique in high dynamic range (HDR) image synthesis. This paper proposes a robust image registration method for aligning a group of low dynamic range (LDR) images that are captured with different exposure times. Illumination change and photometric distortion between two images would result in inaccurate registration. We propose to transform intensity image data into phase congruency to eliminate the effect of changes in image brightness, and to use phase cross correlation in the Fourier transform domain to perform image registration. Considering the presence of non-overlapped regions due to photometric distortion, evolutionary programming is applied to search for accurate translation parameters, so that registration accuracy at the level of a hundredth of a pixel can be achieved. The proposed algorithm works well for under- and over-exposed image registration. It has been applied to align LDR images for synthesizing high quality HDR images.
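The phase cross correlation step can be sketched with the classic normalized cross-power spectrum: for a pure translation, its inverse FFT is a delta at the shift. An integer-shift sketch (the paper refines this to sub-pixel accuracy with evolutionary programming, and operates on phase congruency rather than raw intensities):

```python
import numpy as np

def phase_correlation_shift(a, b):
    """Estimate the integer translation d such that b is approximately
    np.roll(a, d), from the peak of the inverse FFT of the normalized
    cross-power spectrum."""
    R = np.fft.fft2(b) * np.conj(np.fft.fft2(a))
    R /= np.abs(R) + 1e-12                      # keep phase, discard magnitude
    corr = np.fft.ifft2(R).real
    peak = np.unravel_index(corr.argmax(), corr.shape)
    # wrap peaks in the upper half of each axis back to negative shifts
    return tuple(int(p) if p <= s // 2 else int(p) - s
                 for p, s in zip(peak, corr.shape))
```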

  8. Tools and Methods for the Registration and Fusion of Remotely Sensed Data

    NASA Technical Reports Server (NTRS)

    Goshtasby, Arthur Ardeshir; LeMoigne, Jacqueline

    2010-01-01

    Tools and methods for image registration were reviewed. Methods for the registration of remotely sensed data at NASA were discussed. Image fusion techniques were reviewed. Challenges in registration of remotely sensed data were discussed. Examples of image registration and image fusion were given.

  9. Combined magnetic resonance, fluorescence, and histology imaging strategy in a human breast tumor xenograft model

    PubMed Central

    Jiang, Lu; Greenwood, Tiffany R.; Amstalden van Hove, Erika R.; Chughtai, Kamila; Raman, Venu; Winnard, Paul T.; Heeren, Ron; Artemov, Dmitri; Glunde, Kristine

    2014-01-01

    Applications of molecular imaging in cancer and other diseases frequently require combining in vivo imaging modalities, such as magnetic resonance and optical imaging, with ex vivo optical, fluorescence, histology, and immunohistochemical (IHC) imaging, to investigate and relate molecular and biological processes to imaging parameters within the same region of interest. We have developed a multimodal image reconstruction and fusion framework that accurately combines in vivo magnetic resonance imaging (MRI) and magnetic resonance spectroscopic imaging (MRSI), ex vivo brightfield and fluorescence microscopic imaging, and ex vivo histology imaging. Ex vivo brightfield microscopic imaging was used as an intermediate modality to facilitate the ultimate link between ex vivo histology and in vivo MRI/MRSI. Tissue sectioning necessary for optical and histology imaging required generation of a three-dimensional (3D) reconstruction module for 2D ex vivo optical and histology imaging data. We developed an external fiducial marker based 3D reconstruction method, which was able to fuse optical brightfield and fluorescence with histology imaging data. Registration of 3D tumor shape was pursued to combine in vivo MRI/MRSI and ex vivo optical brightfield and fluorescence imaging data. This registration strategy was applied to in vivo MRI/MRSI, ex vivo optical brightfield/fluorescence, as well as histology imaging data sets obtained from human breast tumor models. 3D human breast tumor data sets were successfully reconstructed and fused with this platform. PMID:22945331

  10. Hardware implementation of hierarchical volume subdivision-based elastic registration.

    PubMed

    Dandekar, Omkar; Walimbe, Vivek; Shekhar, Raj

    2006-01-01

    Real-time, elastic and fully automated 3D image registration is critical to the efficiency and effectiveness of many image-guided diagnostic and treatment procedures relying on multimodality image fusion or serial image comparison. True real-time performance will make many 3D image registration-based techniques clinically viable. Hierarchical volume subdivision-based image registration techniques are inherently faster than most elastic registration techniques, e.g. free-form deformation (FFD)-based techniques, and are more amenable to achieving real-time performance through hardware acceleration. Our group has previously reported an FPGA-based architecture for accelerating FFD-based image registration. In this article we show how our existing architecture can be adapted to support hierarchical volume subdivision-based image registration. A proof-of-concept implementation of the architecture achieved a speedup of 100× for elastic registration against an optimized software implementation on a 3.2 GHz Pentium III Xeon workstation. Due to the inherently parallel nature of hierarchical volume subdivision-based image registration techniques, further speedup can be achieved by using several computing modules in parallel.

  11. Registering and Analyzing Rat fMRI Data in the Stereotaxic Framework by Exploiting Intrinsic Anatomical Features

    PubMed Central

    Lu, Hanbing; Scholl, Clara A.; Zuo, Yantao; Demny, Steven; Rea, William; Stein, Elliot A.; Yang, Yihong

    2009-01-01

    The value of analyzing neuroimaging data on a group level has been well established in human studies. However, there is no standard procedure for registering and analyzing fMRI data in common space in rodent functional magnetic resonance imaging (fMRI) studies. An approach for performing rat imaging data analysis in the stereotaxic framework is presented. This method is rooted in the biological observation that the skull shape and size of the rat brain are essentially the same as long as the animals' weights are within a certain range. Registration is performed using rigid-body transformations without scaling or shearing, preserving the unique properties of the stable shape and size inherent in rat brain structure. Also, it does not require brain tissue masking, and is not biased towards the surface coil sensitivity profile. A standard rat brain atlas is used to facilitate the identification of activated areas in common space, allowing accurate region-of-interest (ROI) analysis. This technique is evaluated on a group of rats (n = 11) undergoing routine MRI scans; the registration accuracy is estimated to be within 400 μm. The analysis of fMRI data acquired with an electrical forepaw stimulation model demonstrates the utility of this technique. The method is implemented within the AFNI framework and can be readily extended to other studies. PMID:19608368

  12. Concurrent tumor segmentation and registration with uncertainty-based sparse non-uniform graphs.

    PubMed

    Parisot, Sarah; Wells, William; Chemouny, Stéphane; Duffau, Hugues; Paragios, Nikos

    2014-05-01

    In this paper, we present a graph-based framework for concurrent brain tumor segmentation and atlas-to-diseased-patient registration. Both problems are modeled using a unified pairwise discrete Markov Random Field on a sparse grid superimposed on the image domain. Segmentation is addressed using pattern classification techniques, while registration is performed by maximizing the similarity between volumes and is modular with respect to the matching criterion. The two problems are coupled by relaxing the registration term in the tumor area, corresponding to areas of high classification score and high dissimilarity between volumes. To overcome the main shortcomings of discrete approaches, namely appropriate sampling of the solution space and large memory requirements, content-driven samplings of the discrete displacement set and the sparse grid are considered, based on the local segmentation and registration uncertainties recovered from the min-marginal energies. State-of-the-art results on a substantial low-grade glioma database demonstrate the potential of our method, which maintains performance while strongly reducing model complexity. Copyright © 2014 Elsevier B.V. All rights reserved.

  13. A Variational Approach to Video Registration with Subspace Constraints.

    PubMed

    Garg, Ravi; Roussos, Anastasios; Agapito, Lourdes

    2013-01-01

    This paper addresses the problem of non-rigid video registration, i.e. the computation of optical flow from a reference frame to each of the subsequent images in a sequence, when the camera views deformable objects. We exploit the high correlation between the 2D trajectories of different points on the same non-rigid surface by assuming that the displacement of any point throughout the sequence can be expressed compactly as a linear combination of a low-rank motion basis. This subspace constraint effectively acts as a trajectory regularization term, leading to temporally consistent optical flow. We formulate it as a robust soft constraint within a variational framework by penalizing flow fields that lie outside the low-rank manifold. The resulting energy functional can be decoupled into the optimization of the brightness constancy and spatial regularization terms, leading to an efficient optimization scheme. Additionally, we propose a novel optimization scheme for the case of vector-valued images, based on the dualization of the data term. This allows us to extend our approach to colour images, which yields significant improvements in registration results. Finally, we provide a new benchmark dataset, based on motion capture data of a flag waving in the wind, with dense ground-truth optical flow for the evaluation of multi-frame optical flow algorithms for non-rigid surfaces. Our experiments show that our proposed approach outperforms state-of-the-art optical flow and dense non-rigid registration algorithms.
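    The subspace constraint above can be sketched numerically: if every point's trajectory is (approximately) a linear combination of a few basis trajectories, then stacking trajectories as columns of a frames-by-points matrix makes that matrix low-rank, and noisy trajectories can be regularized by projecting them back onto the basis. This toy sketch uses synthetic data and least squares; it is not the paper's variational solver, and the sizes are arbitrary assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    F, P, R = 30, 50, 3        # frames, tracked points, basis rank (assumed)

    # hypothetical low-rank motion basis (F x R) and per-point coefficients (R x P)
    basis = rng.standard_normal((F, R))
    coeff = rng.standard_normal((R, P))
    trajectories = basis @ coeff       # each column: one point's motion over time

    # project noisy trajectories back onto the basis (least-squares regularization)
    noisy = trajectories + 0.01 * rng.standard_normal((F, P))
    recovered, *_ = np.linalg.lstsq(basis, noisy, rcond=None)
    residual = np.linalg.norm(basis @ recovered - trajectories)
    ```

    Projection removes the noise component orthogonal to the motion subspace, which is the intuition behind penalizing flow fields that leave the low-rank manifold.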

  14. Joint T1 and brain fiber log-demons registration using currents to model geometry.

    PubMed

    Siless, Viviana; Glaunès, Joan; Guevara, Pamela; Mangin, Jean-François; Poupon, Cyril; Le Bihan, Denis; Thirion, Bertrand; Fillard, Pierre

    2012-01-01

    We present an extension of the diffeomorphic Geometric Demons algorithm which combines iconic registration with geometric constraints. Our algorithm works in the log-domain space, so that the deformation field of the geometry can be computed efficiently. We represent the shape of objects of interest in the space of currents, which is sensitive to both the location and the geometric structure of objects. Currents provide a distance between geometric structures that can be defined without specifying explicit point-to-point correspondences. We demonstrate this framework by simultaneously registering T1 images and 65 fiber bundles consistently extracted in 12 subjects, and compare it against non-linear T1, tensor, and multi-modal T1 + Fractional Anisotropy (FA) registration algorithms. Results show the superiority of the Log-domain Geometric Demons over their purely iconic counterparts.

  15. A survey of medical image registration - under review.

    PubMed

    Viergever, Max A; Maintz, J B Antoine; Klein, Stefan; Murphy, Keelin; Staring, Marius; Pluim, Josien P W

    2016-10-01

    A retrospective view on the past two decades of the field of medical image registration is presented, guided by the article "A survey of medical image registration" (Maintz and Viergever, 1998). It shows that the classification of the field introduced in that article is still usable, although some modifications would be needed to do justice to advances in the field. The main changes over the last twenty years are the shift from extrinsic to intrinsic registration, the primacy of intensity-based registration, the breakthrough of nonlinear registration, the progress of inter-subject registration, and the availability of generic image registration software packages. Two problems that were already called urgent 20 years ago are even more urgent nowadays: validation of registration methods, and translation of the results of image registration research to clinical practice. It may be concluded that the field of medical image registration has evolved, but is still in need of further development in various aspects. Copyright © 2016 Elsevier B.V. All rights reserved.

  16. Automatic image fusion of real-time ultrasound with computed tomography images: a prospective comparison between two auto-registration methods.

    PubMed

    Cha, Dong Ik; Lee, Min Woo; Kim, Ah Yeong; Kang, Tae Wook; Oh, Young-Taek; Jeong, Ja-Yeon; Chang, Jung-Woo; Ryu, Jiwon; Lee, Kyong Joon; Kim, Jaeil; Bang, Won-Chul; Shin, Dong Kuk; Choi, Sung Jin; Koh, Dalkwon; Seo, Bong Koo; Kim, Kyunga

    2017-11-01

    Background A major drawback of conventional manual image fusion is that the process may be complex, especially for less-experienced operators. Recently, two automatic image fusion techniques called Positioning and Sweeping auto-registration have been developed. Purpose To compare the accuracy and required time for image fusion of real-time ultrasonography (US) and computed tomography (CT) images between Positioning and Sweeping auto-registration. Material and Methods Eighteen consecutive patients referred for planning US for radiofrequency ablation or biopsy for focal hepatic lesions were enrolled. Image fusion using both auto-registration methods was performed for each patient. Registration error, time required for image fusion, and number of point locks used were compared using the Wilcoxon signed rank test. Results Image fusion was successful in all patients. Positioning auto-registration was significantly faster than Sweeping auto-registration for both initial (median, 11 s [range, 3-16 s] vs. 32 s [range, 21-38 s]; P < 0.001) and complete (median, 34.0 s [range, 26-66 s] vs. 47.5 s [range, 32-90 s]; P = 0.001) image fusion. Registration error of Positioning auto-registration was significantly higher for initial image fusion (median, 38.8 mm [range, 16.0-84.6 mm] vs. 18.2 mm [range, 6.7-73.4 mm]; P = 0.029), but not for complete image fusion (median, 4.75 mm [range, 1.7-9.9 mm] vs. 5.8 mm [range, 2.0-13.0 mm]; P = 0.338). The number of point locks required to refine the initially fused images was significantly higher with Positioning auto-registration (median, 2 [range, 2-3] vs. 1 [range, 1-2]; P = 0.012). Conclusion Positioning auto-registration offers faster image fusion between real-time US and pre-procedural CT images than Sweeping auto-registration. The final registration error is similar between the two methods.

  17. Registration of Panoramic/Fish-Eye Image Sequence and LiDAR Points Using Skyline Features

    PubMed Central

    Zhu, Ningning; Jia, Yonghong; Ji, Shunping

    2018-01-01

    We propose utilizing a rigorous registration model and a skyline-based method for automatic registration of LiDAR points and a sequence of panoramic/fish-eye images in a mobile mapping system (MMS). This method can automatically optimize original registration parameters and avoid the use of manual interventions in control point-based registration methods. First, the rigorous registration model between the LiDAR points and the panoramic/fish-eye image was built. Second, skyline pixels from panoramic/fish-eye images and skyline points from the MMS’s LiDAR points were extracted, relying on the difference in the pixel values and the registration model, respectively. Third, a brute force optimization method was used to search for optimal matching parameters between skyline pixels and skyline points. In the experiments, the original registration method and the control point registration method were used to compare the accuracy of our method with a sequence of panoramic/fish-eye images. The result showed: (1) the panoramic/fish-eye image registration model is effective and can achieve high-precision registration of the image and the MMS’s LiDAR points; (2) the skyline-based registration method can automatically optimize the initial attitude parameters, realizing a high-precision registration of a panoramic/fish-eye image and the MMS’s LiDAR points; and (3) the attitude correction values of the sequences of panoramic/fish-eye images are different, and the values must be solved one by one. PMID:29883431
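    The brute-force matching step can be illustrated in one dimension: treat the skyline as an elevation profile over azimuth, and search a grid of candidate heading corrections for the one that best aligns the image-derived profile with the LiDAR-derived one. The profile model, search range, and grid spacing below are assumptions for illustration, not the authors' full 3D registration model.

    ```python
    import numpy as np

    azimuth = np.linspace(0, 2 * np.pi, 180, endpoint=False)
    skyline_lidar = 1.0 + 0.2 * np.sin(3 * azimuth)   # elevation profile from LiDAR

    true_offset = 0.14                                # unknown heading error (rad)
    skyline_image = np.interp((azimuth + true_offset) % (2 * np.pi),
                              azimuth, skyline_lidar, period=2 * np.pi)

    # brute-force search over candidate heading corrections
    candidates = np.linspace(-0.5, 0.5, 1001)
    errors = [np.mean((np.interp((azimuth + c) % (2 * np.pi), azimuth,
                                 skyline_lidar, period=2 * np.pi)
                       - skyline_image) ** 2)
              for c in candidates]
    best = candidates[int(np.argmin(errors))]
    ```

    In the paper the same exhaustive strategy runs over several attitude parameters at once, trading computation for robustness against local minima.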

  18. A Framework for a WAP-Based Course Registration System

    ERIC Educational Resources Information Center

    AL-Bastaki, Yousif; Al-Ajeeli, Abid

    2005-01-01

    This paper describes a WAP-based course registration system designed and implemented to facilitate the process of student registration at Bahrain University. The framework will support many opportunities for applying WAP-based technology to services such as wireless commerce, cashless payment... and location-based services. The paper…

  19. 3D-2D registration in endovascular image-guided surgery: evaluation of state-of-the-art methods on cerebral angiograms.

    PubMed

    Mitrović, Uroš; Likar, Boštjan; Pernuš, Franjo; Špiclin, Žiga

    2018-02-01

    Image guidance for minimally invasive surgery is based on spatial co-registration and fusion of 3D pre-interventional images and treatment plans with the 2D live intra-interventional images. The spatial co-registration, or 3D-2D registration, is the key enabling technology; however, the performance of state-of-the-art automated methods is rather unclear, as they have not been assessed under the same test conditions. Herein we perform a quantitative and comparative evaluation of ten state-of-the-art methods for 3D-2D registration on a public dataset of clinical angiograms. The image database consisted of 3D and 2D angiograms of 25 patients undergoing treatment for cerebral aneurysms or arteriovenous malformations. On each of the datasets, highly accurate "gold-standard" registrations of the 3D and 2D images were established based on patient-attached fiducial markers. The database was used to rigorously evaluate the ten state-of-the-art 3D-2D registration methods, namely two intensity-, two gradient-, three feature-based, and three hybrid methods, both for registration of the 3D pre-interventional image to monoplane and to biplane 2D images. Intensity-based methods were the most accurate in all tests (0.3 mm). One of the hybrid methods was the most robust, with 98.75% successful registrations (SR) and a capture range of 18 mm for registration of 3D to biplane 2D angiograms. In general, registration accuracy was similar whether the 3D image was registered to mono- or biplane 2D images; however, the SR was substantially lower in the case of 3D to monoplane 2D registration. Two feature-based and two hybrid methods had clinically feasible execution times on the order of a second. The performance of the methods seems to fall below expectations in terms of robustness in the case of registration of 3D to monoplane 2D images, while translation into clinical image guidance systems seems readily feasible for methods that register the 3D pre-interventional image to biplane intra-interventional 2D images.
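    The evaluation quantities used above, mean target registration error (mTRE) and success rate, can be computed from estimated and gold-standard transforms. A minimal sketch, assuming rigid transforms given as 4x4 homogeneous matrices and the 3.6 mm success threshold; the capture range would then be derived by binning success rate against initial displacement:

    ```python
    import numpy as np

    def mtre(points, T_est, T_gold):
        """Mean target registration error: mean displacement of target points
        between the estimated and gold-standard transforms (4x4 matrices)."""
        ph = np.hstack([points, np.ones((len(points), 1))])  # homogeneous coords
        return np.mean(np.linalg.norm((ph @ T_est.T - ph @ T_gold.T)[:, :3],
                                      axis=1))

    def success_rate(mtres, threshold=3.6):
        """Fraction of registrations whose mTRE falls below the threshold (mm)."""
        return float((np.asarray(mtres) < threshold).mean())

    # toy check: a pure 1 mm translation error displaces every target point
    # by exactly 1 mm, so the mTRE is exactly 1 mm
    pts = np.random.default_rng(2).uniform(-30, 30, (100, 3))
    T_gold = np.eye(4)
    T_est = np.eye(4); T_est[0, 3] = 1.0
    err = mtre(pts, T_est, T_gold)
    ```

    The function names and the choice of target points are illustrative; published protocols typically place targets on the vessel tree itself.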

  20. Hierarchical and successive approximate registration of the non-rigid medical image based on thin-plate splines

    NASA Astrophysics Data System (ADS)

    Hu, Jinyan; Li, Li; Yang, Yunfeng

    2017-06-01

    A hierarchical and successive approximate registration method for non-rigid medical images based on thin-plate splines is proposed in this paper. The proposed method has two major novelties. First, hierarchical registration based on the wavelet transform is used: the approximation image of the wavelet transform is selected as the object to be registered. Second, a successive approximation strategy is used to accomplish the non-rigid registration, i.e., local regions of the paired images are first registered coarsely using thin-plate splines, and the current coarse registration result is then selected as the object to be registered in the following registration step. Experiments show that the proposed method is effective for the registration of non-rigid medical images.
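    A thin-plate-spline warp of the kind used for the coarse local registration can be sketched with SciPy's `RBFInterpolator` (kernel `'thin_plate_spline'`): given matched control points, it interpolates a smooth displacement field that is exact at the control points. This is a generic 2D TPS sketch, not the authors' hierarchical implementation; the control-point coordinates are made up.

    ```python
    import numpy as np
    from scipy.interpolate import RBFInterpolator

    # control points in the moving image and their matched positions in the fixed image
    src = np.array([[0, 0], [0, 1], [1, 0], [1, 1], [0.5, 0.5]], dtype=float)
    dst = src + np.array([[0, 0], [0, 0], [0, 0], [0, 0], [0.1, 0.1]])

    # thin-plate-spline interpolant of the displacement field (exact at src points)
    tps = RBFInterpolator(src, dst - src, kernel='thin_plate_spline')

    query = np.array([[0.5, 0.5], [0.0, 0.0]])
    warped = query + tps(query)
    ```

    In the hierarchical scheme described above, such a warp would first be fitted on the wavelet approximation image and then refined region by region on the successively less-smoothed results.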

  1. Influence of image registration on apparent diffusion coefficient images computed from free-breathing diffusion MR images of the abdomen.

    PubMed

    Guyader, Jean-Marie; Bernardin, Livia; Douglas, Naomi H M; Poot, Dirk H J; Niessen, Wiro J; Klein, Stefan

    2015-08-01

    To evaluate the influence of image registration on apparent diffusion coefficient (ADC) images obtained from abdominal free-breathing diffusion-weighted MR images (DW-MRIs). A comprehensive pipeline based on automatic three-dimensional nonrigid image registrations is developed to compensate for misalignments in DW-MRI datasets obtained from five healthy subjects scanned twice. Motion is corrected both within each image and between images in a time series. ADC distributions are compared with and without registration in two abdominal volumes of interest (VOIs). The effects of interpolations and Gaussian blurring as alternative strategies to reduce motion artifacts are also investigated. Among the four considered scenarios (no processing, interpolation, blurring and registration), registration yields the best alignment scores. Median ADCs vary according to the chosen scenario: for the considered datasets, ADCs obtained without processing are 30% higher than with registration. Registration improves voxelwise reproducibility at least by a factor of 2 and decreases uncertainty (Fréchet-Cramér-Rao lower bound). Registration provides similar improvements in reproducibility and uncertainty as acquiring four times more data. Patient motion during image acquisition leads to misaligned DW-MRIs and inaccurate ADCs, which can be addressed using automatic registration. © 2014 Wiley Periodicals, Inc.
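    For reference, the ADC values discussed above come from the mono-exponential diffusion model S_b = S_0 exp(-b·ADC), fit per voxel across b-values; with two b-values the fit reduces to a logarithm. A minimal per-voxel sketch (the function name and the two-point formula are illustrative simplifications; the study fit full free-breathing DW-MRI series after registration):

    ```python
    import numpy as np

    def adc_map(s0, sb, b):
        """Per-voxel apparent diffusion coefficient from two diffusion weightings:
        S_b = S_0 * exp(-b * ADC)  =>  ADC = ln(S_0 / S_b) / b."""
        s0 = np.asarray(s0, dtype=float)
        sb = np.asarray(sb, dtype=float)
        with np.errstate(divide='ignore', invalid='ignore'):
            return np.log(s0 / sb) / b

    # toy voxel: ADC of 1.0e-3 mm^2/s at b = 800 s/mm^2
    s0 = 1000.0
    sb = s0 * np.exp(-800 * 1.0e-3)
    adc = adc_map(s0, sb, 800)
    ```

    Misalignment between the b = 0 and diffusion-weighted images mixes signal from different tissue into the ratio S_0/S_b, which is exactly how the motion described above biases the ADC.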

  2. Error estimation of deformable image registration of pulmonary CT scans using convolutional neural networks.

    PubMed

    Eppenhof, Koen A J; Pluim, Josien P W

    2018-04-01

    Error estimation in nonlinear medical image registration is a nontrivial problem that is important for validation of registration methods. We propose a supervised method for estimation of registration errors in nonlinear registration of three-dimensional (3-D) images. The method is based on a 3-D convolutional neural network that learns to estimate registration errors from a pair of image patches. By applying the network to patches centered around every voxel, we construct registration error maps. The network is trained using a set of representative images that have been synthetically transformed to construct a set of image pairs with known deformations. The method is evaluated on deformable registrations of inhale-exhale pairs of thoracic CT scans. Using ground truth target registration errors on manually annotated landmarks, we evaluate the method's ability to estimate local registration errors. Estimation of full domain error maps is evaluated using a gold standard approach. The two evaluation approaches show that we can train the network to robustly estimate registration errors in a predetermined range, with subvoxel accuracy. We achieved a root-mean-square deviation of 0.51 mm from gold standard registration errors and of 0.66 mm from ground truth landmark registration errors.
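    The training-data construction described above, synthetically deforming images so the deformation (and hence the registration error of any estimate) is known exactly, can be sketched by warping an image with a smoothed random displacement field. This is a toy 2D version of the data-generation step only, not the 3-D CNN itself; the field sizes and smoothing parameters are arbitrary assumptions.

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter, map_coordinates

    rng = np.random.default_rng(3)
    shape = (64, 64)
    image = gaussian_filter(rng.standard_normal(shape), 3)   # stand-in "CT slice"

    # known synthetic deformation: smoothed random displacement field (in voxels)
    disp = np.stack([gaussian_filter(rng.standard_normal(shape), 8) * 20
                     for _ in range(2)])
    grid = np.mgrid[0:shape[0], 0:shape[1]].astype(float)
    warped = map_coordinates(image, grid + disp, order=1, mode='nearest')

    # ground-truth registration-error magnitude for a zero-displacement estimate
    error_map = np.linalg.norm(disp, axis=0)
    ```

    Pairs of patches from `image` and `warped`, labeled with the corresponding values of `error_map`, are the kind of supervised examples the network is trained on.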

  3. Intrasubject multimodal groupwise registration with the conditional template entropy.

    PubMed

    Polfliet, Mathias; Klein, Stefan; Huizinga, Wyke; Paulides, Margarethus M; Niessen, Wiro J; Vandemeulebroucke, Jef

    2018-05-01

    Image registration is an important task in medical image analysis. Whereas most methods are designed for the registration of two images (pairwise registration), there is an increasing interest in simultaneously aligning more than two images using groupwise registration. Multimodal registration in a groupwise setting remains difficult, due to the lack of generally applicable similarity metrics. In this work, a novel similarity metric for such groupwise registration problems is proposed. The metric calculates the sum of the conditional entropy between each image in the group and a representative template image constructed iteratively using principal component analysis. The proposed metric is validated in extensive experiments on synthetic and intrasubject clinical image data. These experiments showed equivalent or improved registration accuracy compared to other state-of-the-art (dis)similarity metrics and improved transformation consistency compared to pairwise mutual information. Copyright © 2018 The Authors. Published by Elsevier B.V. All rights reserved.
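    The building block of the proposed metric, the conditional entropy H(image | template) estimated from a joint intensity histogram, can be sketched as follows. The bin count is an arbitrary choice, and the iterative PCA template construction and groupwise summation are omitted.

    ```python
    import numpy as np

    def conditional_entropy(image, template, bins=32):
        """H(image | template) = H(image, template) - H(template),
        estimated from a joint intensity histogram."""
        joint, _, _ = np.histogram2d(image.ravel(), template.ravel(), bins=bins)
        p_joint = joint / joint.sum()
        p_t = p_joint.sum(axis=0)                    # marginal of the template
        nz = p_joint > 0
        h_joint = -np.sum(p_joint[nz] * np.log(p_joint[nz]))
        h_t = -np.sum(p_t[p_t > 0] * np.log(p_t[p_t > 0]))
        return h_joint - h_t

    rng = np.random.default_rng(4)
    a = rng.standard_normal((128, 128))
    # an image that is a deterministic function of the template adds ~no entropy,
    # whereas an independent image adds a lot
    low = conditional_entropy(2 * a + 1, a)
    high = conditional_entropy(rng.standard_normal((128, 128)), a)
    ```

    Summing this quantity over all images in the group against the common template gives a groupwise dissimilarity that, unlike pairwise mutual information, uses a single reference.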

  4. Automatic image registration performance for two different CBCT systems; variation with imaging dose

    NASA Astrophysics Data System (ADS)

    Barber, J.; Sykes, J. R.; Holloway, L.; Thwaites, D. I.

    2014-03-01

    The performance of an automatic image registration algorithm was compared on image sets collected with two commercial CBCT systems, and its relationship with imaging dose was explored. CBCT images of a CIRS Virtually Human Male Pelvis phantom (VHMP) were collected on Varian TrueBeam/OBI and Elekta Synergy/XVI linear accelerators across a range of mAs settings. Each CBCT image was registered 100 times, with random initial offsets introduced. Image registration was performed using the grey-value correlation ratio algorithm in the Elekta XVI software, to a mask of the prostate volume with a 5 mm expansion. Residual registration errors were calculated after correcting for the initial introduced phantom set-up error. Registration performance with the OBI images was similar to that with XVI. There was a clear dependence on imaging dose for the XVI images, with residual errors increasing below 4 mGy. It was not possible to acquire images with doses lower than ~5 mGy on the OBI system, and no evidence of reduced performance was observed at this dose. Registration failures (maximum target registration error > 3.6 mm on the surface of a 30 mm sphere) occurred in 5% to 9% of registrations, except for the lowest-dose XVI scan (31%). The uncertainty in automatic image registration with both OBI and XVI images was found to be adequate for clinical use within a normal range of acquisition settings.

  5. A 3D global-to-local deformable mesh model based registration and anatomy-constrained segmentation method for image guided prostate radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhou Jinghao; Kim, Sung; Jabbour, Salma

    2010-03-15

    Purpose: In the external beam radiation treatment of prostate cancers, successful implementation of adaptive radiotherapy and conformal radiation dose delivery is highly dependent on precise and expeditious segmentation and registration of the prostate volume between the simulation and the treatment images. The purpose of this study is to develop a novel, fast, and accurate segmentation and registration method that increases computational efficiency to meet the restricted clinical treatment time requirement in image guided radiotherapy. Methods: The method developed in this study used soft tissues to capture the transformation between the 3D planning CT (pCT) images and 3D cone-beam CT (CBCT) treatment images. The method incorporated a global-to-local deformable mesh model based registration framework as well as an automatic anatomy-constrained robust active shape model (ACRASM) based segmentation algorithm in the 3D CBCT images. The global registration was based on the mutual information method, and the local registration minimized the Euclidean distance of the corresponding nodal points from the global transformation of the deformable mesh models, which implicitly used the information of the segmented target volume. The method was applied to six data sets of prostate cancer patients. Target volumes delineated by the same radiation oncologist on the pCT and CBCT were chosen as the benchmarks and were compared to the segmented and registered results. Distance-based and volume-based estimators were used to quantitatively evaluate the results of segmentation and registration. Results: The ACRASM segmentation algorithm was compared to the original active shape model (ASM) algorithm by evaluating the values of the distance-based estimators. With respect to the corresponding benchmarks, the mean distance ranged from -0.85 to 0.84 mm for ACRASM and from -1.44 to 1.17 mm for ASM. The mean absolute distance ranged from 1.77 to 3.07 mm for ACRASM and from 2.45 to 6.54 mm for ASM. The volume overlap ratio ranged from 79% to 91% for ACRASM and from 44% to 80% for ASM. These data demonstrate that the segmentation results of ACRASM were in better agreement with the corresponding benchmarks than those of ASM. The developed registration algorithm was quantitatively evaluated by comparing the registered target volumes from the pCT to the benchmarks on the CBCT. The mean distance and the root mean square error ranged from 0.38 to 2.2 mm and from 0.45 to 2.36 mm, respectively, between the CBCT images and the registered pCT. The mean overlap ratio of the prostate volumes ranged from 85.2% to 95% after registration. The average time of the ACRASM-based segmentation was under 1 min. The average time of the global transformation was 2 to 4 min on two 3D volumes, and the average time of the local transformation was 20 to 34 s on two deformable superquadric mesh models. Conclusions: A novel and fast segmentation and deformable registration method was developed to capture the transformation between the planning and treatment images for external beam radiotherapy of prostate cancers. This method increases computational efficiency and may provide a foundation for achieving real-time adaptive radiotherapy.
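    One of the volume-based estimators quoted above, the volume overlap ratio, can be sketched on binary masks. The Dice form 2|A∩B|/(|A|+|B|) is assumed here for illustration; the paper may define the ratio differently (e.g. intersection over union).

    ```python
    import numpy as np

    def overlap_ratio(mask_a, mask_b):
        """Dice volume overlap: 2|A∩B| / (|A| + |B|), one common definition
        of the volume overlap ratio used to score a segmentation against
        a benchmark contour."""
        a = np.asarray(mask_a, dtype=bool)
        b = np.asarray(mask_b, dtype=bool)
        return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

    # toy volumes: two equal-size slabs overlapping over 4 of their 6 slices
    vol = np.zeros((10, 10, 10), dtype=bool); vol[:, :, :6] = True
    ref = np.zeros((10, 10, 10), dtype=bool); ref[:, :, 2:8] = True
    ratio = overlap_ratio(vol, ref)
    ```

    Such a voxel-wise overlap score complements the surface-distance estimators, since a segmentation can have small mean surface distance but still miss volume.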

  6. TU-B-19A-01: Image Registration II: TG132-Quality Assurance for Image Registration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brock, K; Mutic, S

    2014-06-15

    AAPM Task Group 132 was charged with a review of the current approaches and solutions for image registration in radiotherapy and with providing recommendations for quality assurance and quality control of these clinical processes. As the results of image registration are always used as the input of another process for planning or delivery, it is important for the user to understand and document the uncertainty associated with the algorithm in general and with the result of a specific registration. The recommendations of this task group, which at the time of abstract submission are being reviewed by the AAPM, include the following components. The user should understand the basic image registration techniques and methods of visualizing image fusion. The disclosure of the basic components of the image registration by commercial vendors is critical in this respect. The physicist should perform end-to-end tests of imaging, registration, and planning/treatment systems if image registration is performed on a stand-alone system. A comprehensive commissioning process should be performed and documented by the physicist prior to clinical use of the system. As documentation is important to the safe implementation of this process, a request and report system should be integrated into the clinical workflow. Finally, a patient-specific QA practice should be established for efficient evaluation of image registration results. The implementation of these recommendations will be described and illustrated during this educational session. Learning Objectives: Highlight the importance of understanding the image registration techniques used in the clinic. Describe the end-to-end tests needed for stand-alone registration systems. Illustrate a comprehensive commissioning program using both phantom data and clinical images. Describe a request and report system to ensure communication and documentation. Demonstrate a clinically efficient patient QA practice for the evaluation of image registration.

  7. SU-C-18A-02: Image-Based Camera Tracking: Towards Registration of Endoscopic Video to CT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ingram, S; Rao, A; Wendt, R

    Purpose: Endoscopic examinations are routinely performed on head and neck and esophageal cancer patients. However, these images are underutilized for radiation therapy because there is currently no way to register them to a CT of the patient. The purpose of this work is to develop a method to track the motion of an endoscope within a structure using images from standard clinical equipment. This method will be incorporated into a broader endoscopy/CT registration framework. Methods: We developed a software algorithm to track the motion of an endoscope within an arbitrary structure. We computed frame-to-frame rotation and translation of the camera by tracking surface points across the video sequence and utilizing two-camera epipolar geometry. The resulting 3D camera path was used to recover the surrounding structure via triangulation methods. We tested this algorithm on a rigid cylindrical phantom with a pattern spray-painted on the inside. We did not constrain the motion of the endoscope while recording, and we did not constrain our measurements using the known structure of the phantom. Results: Our software algorithm can successfully track the general motion of the endoscope as it moves through the phantom. However, our preliminary data do not show a high degree of accuracy in the triangulation of 3D point locations. More rigorous data will be presented at the annual meeting. Conclusion: Image-based camera tracking is a promising method for endoscopy/CT image registration, and it requires only standard clinical equipment. It is one of two major components needed to achieve endoscopy/CT registration, the second of which is tying the camera path to absolute patient geometry. In addition to this second component, future work will focus on validating our camera tracking algorithm in the presence of clinical imaging features such as patient motion, erratic camera motion, and dynamic scene illumination.

  8. Fractal analysis of INSAR and correlation with graph-cut based image registration for coastline deformation analysis: post seismic hazard assessment of the 2011 Tohoku earthquake region

    NASA Astrophysics Data System (ADS)

    Dutta, P. K.; Mishra, O. P.

    2012-04-01

    Satellite imagery of the 2011 earthquake off the Pacific coast of Tohoku has provided an opportunity to conduct image transformation analyses employing multi-temporal image retrieval techniques. In this study, we used a new image segmentation algorithm to image coastline deformation by adopting a graph-cut energy minimization framework. Comprehensive analysis of available InSAR images using coastline deformation analysis helped extract disaster information for the affected region of the 2011 Tohoku tsunamigenic earthquake source zone. We attempted to correlate fractal analysis of seismic clustering behavior with image processing analogies; our observations suggest that an increase in the fractal dimension distribution is associated with clustering of events, which may determine the level of devastation of the region. The implementation of the graph-cut based image registration technique allows us to detect the devastation along the coastline of Tohoku through changes in pixel intensity, performing regional segmentation of the change in the coastal boundary after the tsunami. The study applies transformation parameters to remotely sensed images by manually segmenting the image, recovering the translation parameters from two images that differ by a rotation. Based on the satellite image analysis through image segmentation, an area of 0.997 sq km in the Honshu region was found to be the maximum damage zone, localized in the coastal belt of the NE Japan forearc region. The analysis, carried out in MATLAB, suggests that the proposed graph-cut algorithm is robust and more accurate than other image registration methods, and that the method can give a realistic estimate of the recovered deformation fields, in pixels, corresponding to coastline change. This may help formulate strategies for post-disaster needs assessment of coastal belts damaged by strong shaking and tsunamis under disaster risk mitigation programs.
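    The fractal-dimension estimate mentioned above is commonly computed by box counting: cover the binary pattern with boxes of decreasing size s and fit the slope of log N(s) against log(1/s), where N(s) counts occupied boxes. The sketch below is a generic 2D box counter, not the authors' code; the box sizes are arbitrary.

    ```python
    import numpy as np

    def box_count_dimension(mask, sizes=(1, 2, 4, 8, 16)):
        """Estimate the box-counting (fractal) dimension of a binary 2D pattern
        as the slope of log N(s) versus log(1/s)."""
        counts = []
        h, w = mask.shape
        for s in sizes:
            trimmed = mask[:h - h % s, :w - w % s]
            # group pixels into s x s boxes and mark boxes containing any pixel
            boxes = trimmed.reshape(h // s, s, -1, s).any(axis=(1, 3))
            counts.append(boxes.sum())
        slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
        return slope

    # sanity check: a completely filled region should have dimension close to 2
    filled = np.ones((64, 64), dtype=bool)
    dim = box_count_dimension(filled)
    ```

    Applied to a binary map of seismic event locations, a rising dimension indicates the events filling the plane more densely, the clustering behavior the study correlates with devastation.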

  9. Image registration with auto-mapped control volumes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schreibmann, Eduard; Xing Lei

    2006-04-15

    Many image registration algorithms rely on the use of homologous control points on the two input image sets to be registered. In reality, the interactive identification of the control points on both images is tedious, difficult, and often a source of error. We propose a two-step algorithm to automatically identify homologous regions that are used as a priori information during the image registration procedure. First, a number of small control volumes having distinct anatomical features are identified on the model image in a somewhat arbitrary fashion. Instead of attempting to find their correspondences in the reference image through user interaction, in the proposed method each of the control regions is mapped to the corresponding part of the reference image by using an automated image registration algorithm. A normalized cross-correlation (NCC) function or mutual information was used as the auto-mapping metric, and a limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) algorithm was employed to optimize the metric and find the optimal mapping. For rigid registration, the transformation parameters of the system are obtained by averaging those derived from the individual control volumes. In the deformable calculation, the mapped control volumes are treated as nodes or control points with known positions on the two images. If the number of control volumes is not sufficient to cover the whole image to be registered, additional nodes are placed on the model image and then located on the reference image in a manner similar to the conventional B-spline deformable calculation. For deformable registration, the correspondence established by the auto-mapped control volumes provides valuable guidance for the registration calculation and greatly reduces the dimensionality of the problem. The two-step registration was applied to three rigid registration cases (two PET-CT registrations and a brain MRI-CT registration) and one deformable registration of the inhale and exhale phases of a lung 4D CT. Algorithm convergence was confirmed by starting the registration calculations from a large number of initial transformation parameters. An accuracy of ~2 mm was achieved for both deformable and rigid registration. The proposed image registration method greatly reduces the complexity involved in the determination of homologous control points and allows us to minimize the subjectivity and uncertainty associated with the current manual interactive approach. Patient studies have indicated that the two-step registration technique is fast, reliable, and provides a valuable tool to facilitate both rigid and nonrigid image registrations.
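    The core auto-mapping step, optimizing an NCC metric with a limited-memory BFGS optimizer, can be sketched for a single control volume and a pure 2D translation. SciPy's `L-BFGS-B` stands in for the paper's L-BFGS, and the synthetic images and known offset are assumptions for illustration.

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter, shift
    from scipy.optimize import minimize

    rng = np.random.default_rng(5)
    model = gaussian_filter(rng.standard_normal((64, 64)), 4)   # "control volume"
    reference = shift(model, (2.5, -1.5), order=3, mode='nearest')  # known offset

    def neg_ncc(t):
        """Negative normalized cross-correlation between the shifted model
        region and the reference; minimizing it maximizes NCC."""
        moved = shift(model, t, order=3, mode='nearest')
        a = moved - moved.mean()
        b = reference - reference.mean()
        return -np.sum(a * b) / (np.linalg.norm(a) * np.linalg.norm(b))

    res = minimize(neg_ncc, x0=np.zeros(2), method='L-BFGS-B')
    ```

    In the full method this optimization runs once per control volume; the resulting per-volume transforms are then averaged (rigid case) or used as known node positions (deformable case).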

  10. Image Registration for Stability Testing of MEMS

    NASA Technical Reports Server (NTRS)

    Memarsadeghi, Nargess; LeMoigne, Jacqueline; Blake, Peter N.; Morey, Peter A.; Landsman, Wayne B.; Chambers, Victor J.; Moseley, Samuel H.

    2011-01-01

    Image registration, or alignment of two or more images covering the same scenes or objects, is of great interest in many disciplines such as remote sensing, medical imaging, astronomy, and computer vision. In this paper, we introduce a new application of image registration algorithms. We demonstrate how, through a wavelet-based image registration algorithm, engineers can evaluate the stability of Micro-Electro-Mechanical Systems (MEMS). In particular, we applied image registration algorithms to assess the alignment stability of the MicroShutters Subsystem (MSS) of the Near Infrared Spectrograph (NIRSpec) instrument of the James Webb Space Telescope (JWST). This work introduces to engineers a new methodology for evaluating the stability of MEMS devices, and to computer scientists a new application of image registration algorithms.

  11. Simultaneous 3D–2D image registration and C-arm calibration: Application to endovascular image-guided interventions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mitrović, Uroš; Pernuš, Franjo; Likar, Boštjan

    Purpose: Three-dimensional to two-dimensional (3D–2D) image registration is a key to fusion and simultaneous visualization of valuable information contained in 3D pre-interventional and 2D intra-interventional images with the final goal of image guidance of a procedure. In this paper, the authors focus on 3D–2D image registration within the context of intracranial endovascular image-guided interventions (EIGIs), where the 3D and 2D images are generally acquired with the same C-arm system. The accuracy and robustness of any 3D–2D registration method, to be used in a clinical setting, is influenced by (1) the method itself, (2) uncertainty of initial pose of the 3D image from which registration starts, (3) uncertainty of C-arm’s geometry and pose, and (4) the number of 2D intra-interventional images used for registration, which is generally one and at most two. The study of these influences requires rigorous and objective validation of any 3D–2D registration method against a highly accurate reference or “gold standard” registration, performed on clinical image datasets acquired in the context of the intervention. Methods: The registration process is split into two sequential, i.e., initial and final, registration stages. The initial stage is either machine-based or template matching. The latter aims to reduce possibly large in-plane translation errors by matching a projection of the 3D vessel model and 2D image. In the final registration stage, four state-of-the-art intrinsic image-based 3D–2D registration methods, which involve simultaneous refinement of rigid-body and C-arm parameters, are evaluated. For objective validation, the authors acquired an image database of 15 patients undergoing cerebral EIGI, for which accurate gold standard registrations were established by fiducial marker coregistration. 
Results: Based on target registration error, the success rates of 3D to single 2D image registration after initial machine-based matching, after initial template matching, and after final registration involving C-arm calibration were 36%, 73%, and 93%, respectively, with a best registration accuracy of 0.59 mm after the final stage. By compensating in-plane translation errors through initial template matching, the success rates achieved after the final stage improved consistently for all methods, especially if C-arm calibration was performed simultaneously with the 3D–2D image registration. Conclusions: Because the tested methods perform simultaneous C-arm calibration and 3D–2D registration based solely on anatomical information, they have a high potential for automation and thus for immediate integration into the current interventional workflow. One of the authors’ main contributions is also the comprehensive and representative validation performed under realistic conditions as encountered during cerebral EIGI.

  12. Statistical modeling of 4D respiratory lung motion using diffeomorphic image registration.

    PubMed

    Ehrhardt, Jan; Werner, René; Schmidt-Richberg, Alexander; Handels, Heinz

    2011-02-01

    Modeling of respiratory motion has become increasingly important in various applications of medical imaging (e.g., radiation therapy of lung cancer). Current modeling approaches are usually confined to intra-patient registration of 3D image data representing the individual patient's anatomy at different breathing phases. We propose an approach to generate a mean motion model of the lung based on thoracic 4D computed tomography (CT) data of different patients to extend the motion modeling capabilities. Our modeling process consists of three steps: an intra-subject registration to generate subject-specific motion models, the generation of an average shape and intensity atlas of the lung as anatomical reference frame, and the registration of the subject-specific motion models to the atlas in order to build a statistical 4D mean motion model (4D-MMM). Furthermore, we present methods to adapt the 4D mean motion model to a patient-specific lung geometry. In all steps, a symmetric diffeomorphic nonlinear intensity-based registration method was employed. The Log-Euclidean framework was used to compute statistics on the diffeomorphic transformations. The presented methods are then used to build a mean motion model of respiratory lung motion using thoracic 4D CT data sets of 17 patients. We evaluate the model by applying it for estimating respiratory motion of ten lung cancer patients. The prediction is evaluated with respect to landmark and tumor motion, and the quantitative analysis results in a mean target registration error (TRE) of 3.3 ±1.6 mm if lung dynamics are not impaired by large lung tumors or other lung disorders (e.g., emphysema). With regard to lung tumor motion, we show that prediction accuracy is independent of tumor size and tumor motion amplitude in the considered data set. However, tumors adhering to non-lung structures degrade local lung dynamics significantly and the model-based prediction accuracy is lower in these cases. 
The statistical respiratory motion model is capable of providing valuable prior knowledge in many fields of applications. We present two examples of possible applications in radiation therapy and image guided diagnosis.
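    The Log-Euclidean framework used above computes statistics in the log domain, where transformations can be averaged linearly. The paper applies this to diffeomorphisms via their velocity-field logarithms; purely as an illustration of the idea, the same recipe for plain matrix transforms can be sketched with SciPy's matrix logarithm and exponential (function names are ours):

```python
import numpy as np
from scipy.linalg import expm, logm

def rot2(theta):
    """2x2 rotation matrix, used here as a stand-in transform."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

def log_euclidean_mean(transforms):
    """Mean transform: average the matrix logarithms, then exponentiate."""
    logs = [logm(T) for T in transforms]
    return np.real(expm(np.mean(logs, axis=0)))
```

Averaging rotations by +0.2 and -0.2 radians this way yields the identity, whereas a naive element-wise average of the matrices would not be a rotation at all.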

  13. An Automatic Multi-Target Independent Analysis Framework for Non-Planar Infrared-Visible Registration.

    PubMed

    Sun, Xinglong; Xu, Tingfa; Zhang, Jizhou; Zhao, Zishu; Li, Yuankun

    2017-07-26

    In this paper, we propose a novel automatic multi-target registration framework for non-planar infrared-visible videos. Previous approaches usually analyzed multiple targets together and then estimated a global homography for the whole scene; however, such approaches cannot achieve precise multi-target registration when the scene is non-planar. Our framework is devoted to solving the problem using feature matching and multi-target tracking. The key idea is to analyze and register each target independently. We present a fast and robust feature matching strategy, where only the features on the corresponding foreground pairs are matched. In addition, new reservoirs based on the Gaussian criterion are created for all targets, and a multi-target tracking method is adopted to determine the relationships between the reservoirs and foreground blobs. With the matches in the corresponding reservoir, the homography of each target is computed according to its moving state. We tested our framework on both public near-planar and non-planar datasets. The results demonstrate that the proposed framework outperforms the state-of-the-art global registration method and the manual global registration matrix in all tested datasets.
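    Per-target homography estimation from matched features, the core operation of the framework above, can be sketched with a plain direct linear transform (DLT) solve. This is the textbook method, not the authors' implementation; robust estimation such as RANSAC over each target's reservoir would be layered on top:

```python
import numpy as np

def estimate_homography(src, dst):
    """DLT: 3x3 homography H with dst ~ H @ src, from >= 4 point matches."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # The homography is the right null vector of the stacked constraint matrix.
    _, _, vt = np.linalg.svd(np.asarray(rows, dtype=float))
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]

def apply_homography(H, pts):
    """Map 2D points through H using homogeneous coordinates."""
    pts_h = np.c_[pts, np.ones(len(pts))]
    mapped = pts_h @ H.T
    return mapped[:, :2] / mapped[:, 2:3]
```

With each target's matches kept in its own reservoir, this solve is run once per target rather than once for the whole scene.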

  14. SU-E-J-29: Automatic Image Registration Performance of Three IGRT Systems for Prostate Radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barber, J; University of Sydney, Sydney, NSW; Sykes, J

    Purpose: To compare the performance of an automatic image registration algorithm on image sets collected on three commercial image guidance systems, and explore its relationship with imaging parameters such as dose and sharpness. Methods: Images of a CIRS Virtually Human Male Pelvis phantom (VHMP) were collected on the CBCT systems of Varian TrueBeam/OBI and Elekta Synergy/XVI linear accelerators, across a range of mAs settings; and MVCT on a Tomotherapy Hi-ART accelerator with a range of pitch. Using the 6D correlation ratio algorithm of XVI, each image was registered to a mask of the prostate volume with a 5 mm expansion. Registrations were repeated 100 times, with random initial offsets introduced to simulate daily matching. Residual registration errors were calculated by correcting for the initial phantom set-up error. Automatic registration was also repeated after reconstructing images with different sharpness filters. Results: All three systems showed good registration performance, with residual translations <0.5 mm (1σ) for typical clinical dose and reconstruction settings. Residual rotational error had a larger range, with 0.8°, 1.2°, and 1.9° for 1σ in XVI, OBI, and Tomotherapy, respectively. The registration accuracy of XVI images showed a strong dependence on imaging dose, particularly below 4 mGy. No evidence of reduced performance was observed at the lowest dose settings for OBI and Tomotherapy, but these were above 4 mGy. Registration failures (maximum target registration error > 3.6 mm on the surface of a 30 mm sphere) occurred in 5% to 10% of registrations. Changing the sharpness of image reconstruction had no significant effect on registration performance. Conclusions: Using the present automatic image registration algorithm, all IGRT systems tested provided satisfactory registrations for clinical use, within a normal range of acquisition settings.

  15. Comparison of subpixel image registration algorithms

    NASA Astrophysics Data System (ADS)

    Boye, R. R.; Nelson, C. L.

    2009-02-01

    Research into the use of multiframe superresolution has led to the development of algorithms for providing images with enhanced resolution using several lower resolution copies. An integral component of these algorithms is the determination of the registration of each of the low resolution images to a reference image. Without this information, no resolution enhancement can be attained. We have endeavored to find a suitable method for registering severely undersampled images by comparing several approaches. To test the algorithms, an ideal image is input to a simulated image formation program, creating several undersampled images with known geometric transformations. The registration algorithms are then applied to the set of low resolution images and the estimated registration parameters compared to the actual values. This investigation is limited to monochromatic images (extension to color images is not difficult) and only considers global geometric transformations. Each registration approach will be reviewed and evaluated with respect to the accuracy of the estimated registration parameters as well as the computational complexity required. In addition, the effects of image content, specifically spatial frequency content, as well as the immunity of the registration algorithms to noise will be discussed.
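    One widely used subpixel registration approach of the kind compared here is phase correlation with a parabolic peak fit. The sketch below is a generic, self-contained version, not any specific algorithm from the paper:

```python
import numpy as np

def phase_correlation_shift(ref, mov):
    """Estimate (dy, dx) such that mov ~ np.roll(ref, (dy, dx), axis=(0, 1)),
    refined to subpixel precision with a 3-point parabolic fit at the peak."""
    F = np.conj(np.fft.fft2(ref)) * np.fft.fft2(mov)
    corr = np.fft.ifft2(F / (np.abs(F) + 1e-12)).real  # normalized cross-power
    py, px = np.unravel_index(np.argmax(corr), corr.shape)

    def refine(p, n, cm, c0, cp):
        denom = cm - 2.0 * c0 + cp
        offset = 0.0 if denom == 0 else 0.5 * (cm - cp) / denom
        s = p + offset
        return s - n if s > n / 2 else s  # map wrapped index to signed shift

    ny, nx = corr.shape
    dy = refine(py, ny, corr[(py - 1) % ny, px], corr[py, px], corr[(py + 1) % ny, px])
    dx = refine(px, nx, corr[py, (px - 1) % nx], corr[py, px], corr[py, (px + 1) % nx])
    return dy, dx
```

For severely undersampled imagery the peak broadens and aliases, which is exactly the regime this comparison study probes.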

  16. Image Registration: A Necessary Evil

    NASA Technical Reports Server (NTRS)

    Bell, James; McLachlan, Blair; Hermstad, Dexter; Trosin, Jeff; George, Michael W. (Technical Monitor)

    1995-01-01

    Registration of test and reference images is a key component of nearly all PSP data reduction techniques. This is done to ensure that a test image pixel viewing a particular point on the model is ratioed by the reference image pixel which views the same point. Typically registration is needed to account for model motion due to differing airloads when the wind-off and wind-on images are taken. Registration is also necessary when two cameras are used for simultaneous acquisition of data from a dual-frequency paint. This presentation will discuss the advantages and disadvantages of several different image registration techniques. In order to do so, it is necessary to propose both an accuracy requirement for image registration and a means for measuring the accuracy of a particular technique. High contrast regions in the unregistered images are most sensitive to registration errors, and it is proposed that these regions be used to establish the error limits for registration. Once this is done, the actual registration error can be determined by locating corresponding points on the test and reference images, and determining how well a particular registration technique matches them. An example of this procedure is shown for three transforms used to register images of a semispan model. Thirty control points were located on the model. A subset of the points were used to determine the coefficients of each registration transform, and the error with which each transform aligned the remaining points was determined. The results indicate the general superiority of a third-order polynomial over other candidate transforms, as well as showing how registration accuracy varies with number of control points. Finally, it is proposed that image registration may eventually be done away with completely. As more accurate image resection techniques and more detailed model surface grids become available, it will be possible to map raw image data onto the model surface accurately. 
Intensity ratio data can then be obtained by a "model surface ratio," rather than an image ratio. The problems and advantages of this technique will be discussed.
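    The third-order polynomial transform favored by the results above can be fit to control points by linear least squares over the ten cubic monomials. A generic sketch (function names are ours):

```python
import numpy as np

def poly3_terms(x, y):
    """All monomials x**i * y**j with i + j <= 3 (10 terms per point)."""
    return np.stack([x**i * y**j
                     for i in range(4) for j in range(4 - i)], axis=-1)

def fit_poly3_transform(src, dst):
    """Least-squares third-order polynomial mapping src -> dst control points.
    Needs at least 10 well-spread control points; returns a (10, 2) array."""
    A = poly3_terms(src[:, 0], src[:, 1])
    coeffs, *_ = np.linalg.lstsq(A, dst, rcond=None)
    return coeffs

def apply_poly3(coeffs, pts):
    return poly3_terms(pts[:, 0], pts[:, 1]) @ coeffs
```

Fitting on a subset of control points and measuring the residual on the held-out points, as done in the presentation, gives the registration-error estimate directly.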

  17. Patient-Specific Simulation of Cardiac Blood Flow From High-Resolution Computed Tomography.

    PubMed

    Lantz, Jonas; Henriksson, Lilian; Persson, Anders; Karlsson, Matts; Ebbers, Tino

    2016-12-01

    Cardiac hemodynamics can be computed from medical imaging data, and results could potentially aid in cardiac diagnosis and treatment optimization. However, simulations are often based on simplified geometries, ignoring features such as papillary muscles and trabeculae due to their complex shape, limitations in image acquisitions, and challenges in computational modeling. This severely hampers the use of computational fluid dynamics in clinical practice. The overall aim of this study was to develop a novel numerical framework that incorporated these geometrical features. The model included the left atrium, ventricle, ascending aorta, and heart valves. The framework used image registration to obtain patient-specific wall motion, automatic remeshing to handle topological changes due to the complex trabeculae motion, and a fast interpolation routine to obtain intermediate meshes during the simulations. Velocity fields and residence time were evaluated, and they indicated that papillary muscles and trabeculae strongly interacted with the blood, which could not be observed in a simplified model. The framework resulted in a model with outstanding geometrical detail, demonstrating the feasibility as well as the importance of a framework that is capable of simulating blood flow in physiologically realistic hearts.

  18. Registration of T2-weighted and diffusion-weighted MR images of the prostate: comparison between manual and landmark-based methods

    NASA Astrophysics Data System (ADS)

    Peng, Yahui; Jiang, Yulei; Soylu, Fatma N.; Tomek, Mark; Sensakovic, William; Oto, Aytekin

    2012-02-01

    Quantitative analysis of multi-parametric magnetic resonance (MR) images of the prostate, including T2-weighted (T2w) and diffusion-weighted (DW) images, requires accurate image registration. We compared two registration methods between T2w and DW images. We collected pre-operative MR images of 124 prostate cancer patients (68 patients scanned with a GE scanner and 56 with Philips scanners). A landmark-based rigid registration was performed using six prostate landmarks identified by a radiologist in both the T2w and DW images. Independently, a researcher manually registered the same images. A radiologist visually evaluated the registration results using a 5-point ordinal scale of 1 (worst) to 5 (best). The Wilcoxon signed-rank test was used to determine whether the radiologist's ratings of the results of the two registration methods were significantly different. Results demonstrated that both methods were accurate: the average ratings were 4.2, 3.3, and 3.8 for GE, Philips, and all images, respectively, for the landmark-based method; and 4.6, 3.7, and 4.2, respectively, for the manual method. The manual registration results were more accurate than the landmark-based registration results (p < 0.0001 for GE, Philips, and all images). Therefore, the manual method produces more accurate registration between T2w and DW images than the landmark-based method.
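    The Wilcoxon signed-rank comparison of paired ratings can be reproduced with SciPy. The rating vectors below are made-up stand-ins for illustration, not the study's data:

```python
import numpy as np
from scipy.stats import wilcoxon

# Hypothetical paired 5-point ratings for the two registration methods
landmark = np.array([4, 3, 5, 4, 3, 3, 2, 5, 3, 4])
manual = np.array([5, 4, 5, 4, 4, 5, 3, 5, 4, 4])

# Paired, non-parametric test on the rating differences; tied (zero)
# differences are discarded by the default zero_method.
stat, p = wilcoxon(landmark, manual)
```

A small p-value, as in the study, indicates the paired rating differences are systematically in one method's favor rather than noise.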

  19. Range image registration based on hash map and moth-flame optimization

    NASA Astrophysics Data System (ADS)

    Zou, Li; Ge, Baozhen; Chen, Lei

    2018-03-01

    Over the past decade, evolutionary algorithms (EAs) have been introduced to solve range image registration problems because of their robustness and high precision. However, EA-based range image registration algorithms are time-consuming. To reduce the computational time, an EA-based range image registration algorithm using hash map and moth-flame optimization is proposed. In this registration algorithm, a hash map is used to avoid over-exploitation in registration process. Additionally, we present a search equation that is better at exploration and a restart mechanism to avoid being trapped in local minima. We compare the proposed registration algorithm with the registration algorithms using moth-flame optimization and several state-of-the-art EA-based registration algorithms. The experimental results show that the proposed algorithm has a lower computational cost than other algorithms and achieves similar registration precision.

  20. Backward Registration Based Aspect Ratio Similarity (ARS) for Image Retargeting Quality Assessment.

    PubMed

    Zhang, Yabin; Fang, Yuming; Lin, Weisi; Zhang, Xinfeng; Li, Leida

    2016-06-28

    During the past few years, various kinds of content-aware image retargeting operators have been proposed for image resizing. However, the lack of effective objective retargeting quality assessment metrics limits the further development of image retargeting techniques. Different from traditional Image Quality Assessment (IQA) metrics, the quality degradation during image retargeting is caused by artificial retargeting modifications, and the difficulty for Image Retargeting Quality Assessment (IRQA) lies in the alteration of the image resolution and content, which makes it impossible to directly evaluate the quality degradation as in traditional IQA. In this paper, we interpret image retargeting in a unified framework of resampling grid generation and forward resampling. We show that geometric change estimation is an efficient way to clarify the relationship between the images. We formulate the geometric change estimation as a Backward Registration problem with a Markov Random Field (MRF) and provide an effective solution. The geometric change aims to provide evidence about how the original image is resized into the target image. Under the guidance of the geometric change, we develop a novel Aspect Ratio Similarity metric (ARS) to evaluate the visual quality of retargeted images by exploiting the local block changes with a visual importance pooling strategy. Experimental results on the publicly available MIT RetargetMe and CUHK datasets demonstrate that the proposed ARS can predict more accurate visual quality of retargeted images compared with state-of-the-art IRQA metrics.

  1. Multi-atlas learner fusion: An efficient segmentation approach for large-scale data.

    PubMed

    Asman, Andrew J; Huo, Yuankai; Plassard, Andrew J; Landman, Bennett A

    2015-12-01

    We propose multi-atlas learner fusion (MLF), a framework for rapidly and accurately replicating the highly accurate, yet computationally expensive, multi-atlas segmentation framework based on fusing local learners. In the largest whole-brain multi-atlas study yet reported, multi-atlas segmentations are estimated for a training set of 3464 MR brain images. Using these multi-atlas estimates we (1) estimate a low-dimensional representation for selecting locally appropriate example images, and (2) build AdaBoost learners that map a weak initial segmentation to the multi-atlas segmentation result. Thus, to segment a new target image we project the image into the low-dimensional space, construct a weak initial segmentation, and fuse the trained, locally selected, learners. The MLF framework cuts the runtime on a modern computer from 36 h down to 3-8 min - a 270× speedup - by completely bypassing the need for deformable atlas-target registrations. Additionally, we (1) describe a technique for optimizing the weak initial segmentation and the AdaBoost learning parameters, (2) quantify the ability to replicate the multi-atlas result with mean accuracies approaching the multi-atlas intra-subject reproducibility on a testing set of 380 images, (3) demonstrate significant increases in the reproducibility of intra-subject segmentations when compared to a state-of-the-art multi-atlas framework on a separate reproducibility dataset, (4) show that, under the MLF framework, the large-scale data model significantly improves segmentation over the small-scale model, and (5) indicate that the MLF framework has comparable performance to state-of-the-art multi-atlas segmentation algorithms without using non-local information. Copyright © 2015 Elsevier B.V. All rights reserved.

  2. Image Registration of High-Resolution Uav Data: the New Hypare Algorithm

    NASA Astrophysics Data System (ADS)

    Bahr, T.; Jin, X.; Lasica, R.; Giessel, D.

    2013-08-01

    Unmanned aerial vehicles play an important role in present-day civilian and military intelligence. Equipped with a variety of sensors, such as SAR imaging modes and E/O and IR sensor technology, their agility makes them suitable for many applications. Hence the necessity arises to use fusion technologies and to develop them continuously. Here an exact image-to-image registration is essential. It serves as the basis for important image processing operations such as georeferencing, change detection, and data fusion. Therefore we developed the Hybrid Powered Auto-Registration Engine (HyPARE). HyPARE combines all available spatial reference information with a number of image registration approaches to improve the accuracy, performance, and automation of tie point generation and image registration. We demonstrate this approach by the registration of 39 still images from a high-resolution image stream, acquired with an Aeryon Photo3S™ camera on an Aeryon Scout micro-UAV™.

  3. A prospective comparison between auto-registration and manual registration of real-time ultrasound with MR images for percutaneous ablation or biopsy of hepatic lesions.

    PubMed

    Cha, Dong Ik; Lee, Min Woo; Song, Kyoung Doo; Oh, Young-Taek; Jeong, Ja-Yeon; Chang, Jung-Woo; Ryu, Jiwon; Lee, Kyong Joon; Kim, Jaeil; Bang, Won-Chul; Shin, Dong Kuk; Choi, Sung Jin; Koh, Dalkwon; Seo, Bong Koo; Kim, Kyunga

    2017-06-01

    To compare the accuracy and required time for image fusion of real-time ultrasound (US) with pre-procedural magnetic resonance (MR) images between positioning auto-registration and manual registration for percutaneous radiofrequency ablation or biopsy of hepatic lesions. This prospective study was approved by the institutional review board, and all patients gave written informed consent. Twenty-two patients (male/female, n = 18/n = 4; age, 61.0 ± 7.7 years) who were referred for planning US to assess the feasibility of radiofrequency ablation (n = 21) or biopsy (n = 1) for focal hepatic lesions were included. One experienced radiologist performed the two types of image fusion methods in each patient. The performance of auto-registration and manual registration was evaluated. The accuracy of the two methods, based on measuring registration error, and the time required for image fusion for both methods were recorded using in-house software and respectively compared using the Wilcoxon signed rank test. Image fusion was successful in all patients. The registration error was not significantly different between the two methods (auto-registration: median, 3.75 mm; range, 1.0-15.8 mm vs. manual registration: median, 2.95 mm; range, 1.2-12.5 mm, p = 0.242). The time required for image fusion was significantly shorter with auto-registration than with manual registration (median, 28.5 s; range, 18-47 s, vs. median, 36.5 s; range, 14-105 s, p = 0.026). Positioning auto-registration showed promising results compared with manual registration, with similar accuracy and even shorter registration time.

  4. Automatic selection of landmarks in T1-weighted head MRI with regression forests for image registration initialization

    NASA Astrophysics Data System (ADS)

    Wang, Jianing; Liu, Yuan; Noble, Jack H.; Dawant, Benoit M.

    2017-02-01

    Medical image registration establishes a correspondence between images of biological structures and it is at the core of many applications. Commonly used deformable image registration methods are dependent on a good preregistration initialization. The initialization can be performed by localizing homologous landmarks and calculating a point-based transformation between the images. The selection of landmarks is however important. In this work, we present a learning-based method to automatically find a set of robust landmarks in 3D MR image volumes of the head to initialize non-rigid transformations. To validate our method, these selected landmarks are localized in unknown image volumes and they are used to compute a smoothing thin-plate splines transformation that registers the atlas to the volumes. The transformed atlas image is then used as the preregistration initialization of an intensity-based non-rigid registration algorithm. We show that the registration accuracy of this algorithm is statistically significantly improved when using the presented registration initialization over a standard intensity-based affine registration.
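    A smoothing thin-plate-spline transform of the kind used for initialization here is available through SciPy's RBFInterpolator. A minimal sketch (the wrapper name is ours):

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

def tps_transform(landmarks_src, landmarks_dst, smoothing=0.0):
    """Thin-plate-spline map taking source landmarks to target landmarks.
    With smoothing=0 the spline interpolates the landmarks exactly; a
    positive value trades exactness for smoothness, as in the paper."""
    return RBFInterpolator(landmarks_src, landmarks_dst,
                           kernel='thin_plate_spline', smoothing=smoothing)
```

The returned callable maps arbitrary 3D points, so it can warp the whole atlas volume's coordinate grid to produce the preregistration initialization.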

  5. Medical image registration based on normalized multidimensional mutual information

    NASA Astrophysics Data System (ADS)

    Li, Qi; Ji, Hongbing; Tong, Ming

    2009-10-01

    Registration of medical images is an essential research topic in medical image processing and applications, and especially a preliminary and key step for multimodality image fusion. This paper offers a solution to medical image registration based on normalized multi-dimensional mutual information. Firstly, affine transformation with translational and rotational parameters is applied to the floating image. Then ordinal features are extracted by ordinal filters with different orientations to represent spatial information in medical images. Integrating ordinal features with pixel intensities, the normalized multi-dimensional mutual information is defined as similarity criterion to register multimodality images. Finally the immune algorithm is used to search registration parameters. The experimental results demonstrate the effectiveness of the proposed registration scheme.
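    The standard intensity-only normalized mutual information, which the paper extends with ordinal-feature dimensions, can be computed from a joint histogram. A generic sketch:

```python
import numpy as np

def normalized_mutual_information(a, b, bins=32):
    """Studholme's NMI, (H(A) + H(B)) / H(A, B), from a joint histogram.
    Ranges from 1 (independent) to 2 (identical up to binning)."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)

    def entropy(p):
        p = p[p > 0]
        return -(p * np.log(p)).sum()

    return (entropy(px) + entropy(py)) / entropy(pxy.ravel())
```

A registration optimizer, such as the immune algorithm used in the paper, would search transformation parameters that maximize this score.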

  6. Assessing the intrinsic precision of 3D/3D rigid image registration results for patient setup in the absence of a ground truth.

    PubMed

    Wu, Jian; Murphy, Martin J

    2010-06-01

    To assess the precision and robustness of patient setup corrections computed from 3D/3D rigid registration methods using image intensity, when no ground truth validation is possible. Fifteen pairs of male pelvic CTs were rigidly registered using four different in-house registration methods. Registration results were compared for different resolutions and image content by varying the image down-sampling ratio and by thresholding out soft tissue to isolate bony landmarks. Intrinsic registration precision was investigated by comparing the different methods and by reversing the source and the target roles of the two images being registered. The translational reversibility errors for successful registrations ranged from 0.0 to 1.69 mm. Rotations were less than 1 degree. Mutual information failed in most registrations that used only bony landmarks. The magnitude of the reversibility error was strongly correlated with the success/failure of each algorithm to find the global minimum. Rigid image registrations have an intrinsic uncertainty and robustness that depends on the imaging modality, the registration algorithm, the image resolution, and the image content. In the absence of an absolute ground truth, the variation in the shifts calculated by several different methods provides a useful estimate of that uncertainty. The difference observed by reversing the source and target images can be used as an indication of robust convergence.
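    The source-target reversal check used above can be expressed directly: composing the forward and reverse transforms should return points to their starting positions, and the residual motion is the reversibility error. A sketch for 4x4 homogeneous rigid transforms (names and conventions are ours):

```python
import numpy as np

def reversibility_error(T_fwd, T_rev, points):
    """Mean distance by which T_rev composed with T_fwd fails to return the
    given 3D points to their start; zero iff T_rev inverts T_fwd exactly."""
    pts_h = np.c_[points, np.ones(len(points))]
    round_trip = pts_h @ T_fwd.T @ T_rev.T  # apply T_fwd, then T_rev
    return float(np.linalg.norm(round_trip[:, :3] - points, axis=1).mean())
```

Evaluated over landmark points in the patient volume, this gives a ground-truth-free precision estimate in millimetres, directly comparable to the 0.0 to 1.69 mm range reported.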

  7. Automatic three-dimensional registration of intra-vascular optical coherence tomography images for the clinical evaluation of stent implantation over time

    NASA Astrophysics Data System (ADS)

    Ughi, Giovanni J.; Adriaenssens, Tom; Larsson, Matilda; Dubois, Christophe; Sinnaeve, Peter; Coosemans, Mark; Desmet, Walter; D'hooghe, Jan

    2012-01-01

    In the last decade a large number of new intracoronary devices (i.e. drug-eluting stents, DES) have been developed to reduce the risks related to bare metal stent (BMS) implantation. The use of this new generation of DES has been shown to substantially reduce, compared with BMS, the occurrence of restenosis and recurrent ischemia that would necessitate a second revascularization procedure. Nevertheless, safety issues on the use of DES persist and full understanding of the mechanisms of adverse clinical events is still a matter of concern and debate. Intravascular Optical Coherence Tomography (IV-OCT) is an imaging technique able to visualize the microstructure of blood vessels with an axial resolution <20 μm. Due to its very high spatial resolution, it enables detailed in-vivo assessment of implanted devices and the vessel wall. Currently, the aim of several major clinical trials is to observe and quantify the vessel response to DES implantation over time. However, image analysis is currently performed manually, and corresponding images belonging to different IV-OCT acquisitions can only be matched through a very labor-intensive and subjective procedure. The aim of this study is to develop and validate a new methodology for the automatic registration of IV-OCT datasets on an image level. Hereto, we propose a landmark-based rigid registration method exploiting the metallic stent framework as a feature. Such a tool would provide a better understanding of the behavior of different intracoronary devices in vivo, giving unique insights into vessel pathophysiology and the performance of new generations of intracoronary devices and different drugs.

  8. Image Registration Workshop Proceedings

    NASA Technical Reports Server (NTRS)

    LeMoigne, Jacqueline (Editor)

    1997-01-01

    Automatic image registration has often been considered a preliminary step for higher-level processing, such as object recognition or data fusion. But with the unprecedented amounts of data which are being and will continue to be generated by newly developed sensors, automatic image registration has become an important research topic in its own right. This workshop presents a collection of very high quality work which has been grouped in four main areas: (1) theoretical aspects of image registration; (2) applications to satellite imagery; (3) applications to medical imagery; and (4) image registration for computer vision research.

  9. Local-search based prediction of medical image registration error

    NASA Astrophysics Data System (ADS)

    Saygili, Görkem

    2018-03-01

    Medical image registration is a crucial task in many different medical imaging applications. Hence, a considerable amount of work has been published recently that aims to predict the error in a registration without any human effort. If provided, these error predictions can be used as feedback to the registration algorithm to further improve its performance. Recent methods generally start by extracting image-based and deformation-based features, then apply feature pooling and finally train a Random Forest (RF) regressor to predict the real registration error. Image-based features can be calculated after applying a single registration but provide limited accuracy, whereas deformation-based features, such as the variation of the deformation vector field, may require up to 20 registrations, which is considerably time-consuming. This paper proposes to use features extracted by a local search algorithm as image-based features to estimate the error of a registration. The proposed method comprises a local search algorithm to find corresponding voxels between registered image pairs and, based on the amount of shift and stereo confidence measures, it densely predicts the amount of registration error in millimetres using an RF regressor. Compared to other algorithms in the literature, the proposed algorithm does not require multiple registrations, can be efficiently implemented on a Graphics Processing Unit (GPU) and can still provide highly accurate error predictions in the presence of large registration errors. Experimental results with real registrations on a public dataset indicate substantially high accuracy achieved by using features from the local search algorithm.
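    The local-search → Random Forest pipeline can be sketched as follows; the two features (shift magnitude and a stereo-style confidence), the synthetic error model, and all parameter values are illustrative assumptions, not the paper's actual data:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Hypothetical per-voxel features from a local search: shift magnitude (mm)
# and a stereo-style confidence measure in [0, 1].
n_voxels = 2000
shift = rng.uniform(0.0, 10.0, n_voxels)
confidence = rng.uniform(0.0, 1.0, n_voxels)
X = np.column_stack([shift, confidence])

# Synthetic "true" registration error: larger shifts and lower confidence
# mean larger error, plus noise (purely illustrative).
y = 0.8 * shift + 2.0 * (1.0 - confidence) + rng.normal(0.0, 0.3, n_voxels)

# Dense per-voxel error prediction in millimetres with an RF regressor.
rf = RandomForestRegressor(n_estimators=100, random_state=0)
rf.fit(X, y)
pred = rf.predict(X)
mae = np.mean(np.abs(pred - y))
print(f"training MAE: {mae:.2f} mm")
```

    In practice the features would come from the per-voxel search shifts and confidence maps, and the regressor would be trained against registration errors measured on ground-truth deformations.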

  10. Patient-Specific Modeling of Intraventricular Hemodynamics

    NASA Astrophysics Data System (ADS)

    Vedula, Vijay; Marsden, Alison

    2017-11-01

    Heart disease is one of the leading causes of death in the world. Apart from malfunctions in electrophysiology and myocardial mechanics, abnormal hemodynamics is a major factor attributed to heart disease across all ages. Computer simulations offer an efficient means to accurately reproduce in vivo flow conditions and also make predictions of post-operative outcomes and disease progression. We present an experimentally validated computational framework for performing patient-specific modeling of intraventricular hemodynamics. Our modeling framework employs the SimVascular open source software to build an anatomic model and employs robust image registration methods to extract ventricular motion from the image data. We then employ a stabilized finite element solver to simulate blood flow in the ventricles, solving the Navier-Stokes equations in arbitrary Lagrangian-Eulerian (ALE) coordinates by prescribing the wall motion extracted during registration. We model the fluid-structure interaction effects of the cardiac valves using an immersed boundary method and discuss the potential application of this methodology in single ventricle physiology and trans-catheter aortic valve replacement (TAVR). This research is supported in part by the Stanford Child Health Research Institute and the Stanford NIH-NCATS-CTSA through Grant UL1 TR001085 and partly through NIH NHLBI R01 Grant 5R01HL129727-02.

  11. Framework for 3D histologic reconstruction and fusion with in vivo MRI: Preliminary results of characterizing pulmonary inflammation in a mouse model.

    PubMed

    Rusu, Mirabela; Golden, Thea; Wang, Haibo; Gow, Andrew; Madabhushi, Anant

    2015-08-01

    Pulmonary inflammation is associated with a variety of diseases. Assessing pulmonary inflammation on in vivo imaging may facilitate the early detection and treatment of lung diseases. Although routinely used in thoracic imaging, computed tomography has thus far not been compellingly shown to characterize inflammation in vivo. Alternatively, magnetic resonance imaging (MRI) is a nonionizing radiation technique to better visualize and characterize pulmonary tissue. Prior to routine adoption of MRI for early characterization of inflammation in humans, a rigorous and quantitative characterization of the utility of MRI to identify inflammation is required. Such characterization may be achieved by considering ex vivo histology as the ground truth, since it enables the definitive spatial assessment of inflammation. In this study, the authors introduce a novel framework to integrate 2D histology, ex vivo and in vivo imaging to enable the mapping of the extent of disease from ex vivo histology onto in vivo imaging, with the goal of facilitating computerized feature analysis and interrogation of disease appearance on in vivo imaging. The authors' framework was evaluated in a preclinical preliminary study aimed to identify computer extracted features on in vivo MRI associated with chronic pulmonary inflammation. The authors' image analytics framework first involves reconstructing the histologic volume in 3D from individual histology slices. Second, the authors map the disease ground truth onto in vivo MRI via coregistration with 3D histology using the ex vivo lung MRI as a conduit. Finally, computerized feature analysis of the disease extent is performed to identify candidate in vivo imaging signatures of disease presence and extent. The authors evaluated the framework by assessing the quality of the 3D histology reconstruction and the histology-MRI fusion, in the context of an initial use case involving characterization of chronic inflammation in a mouse model. 
The authors' evaluation considered three mice, two with an inflammation phenotype and one control. The authors' iterative 3D histology reconstruction yielded a 70.1% ± 2.7% overlap with the ex vivo MRI volume. Across a total of 17 anatomic landmarks manually delineated at the division of airways, the target registration error between the ex vivo MRI and 3D histology reconstruction was 0.85 ± 0.44 mm, suggesting that a good alignment of the ex vivo 3D histology and ex vivo MRI had been achieved. The 3D histology-in vivo MRI coregistered volumes resulted in an overlap of 73.7% ± 0.9%. Preliminary computerized feature analysis was performed on an additional four control mice, for a total of seven mice considered in this study. Gabor texture filters appeared to best capture differences between the inflamed and noninflamed regions on MRI. The authors' 3D histology reconstruction and multimodal registration framework were successfully employed to reconstruct the histology volume of the lung and fuse it with in vivo MRI to create a ground truth map for inflammation on in vivo MRI. The analytic platform presented here lays the framework for a rigorous validation of the identified imaging features for chronic lung inflammation on MRI in a large prospective cohort.

  12. Hybrid registration of PET/CT in thoracic region with pre-filtering PET sinogram

    NASA Astrophysics Data System (ADS)

    Mokri, S. S.; Saripan, M. I.; Marhaban, M. H.; Nordin, A. J.; Hashim, S.

    2015-11-01

    The integration of physiological (PET) and anatomical (CT) images in cancer delineation requires an accurate spatial registration technique. Although the hybrid PET/CT scanner is used to co-register these images, significant misregistrations exist due to patient and respiratory/cardiac motions. This paper proposes a hybrid feature-intensity based registration technique for the hybrid PET/CT scanner. First, the simulated PET sinogram was filtered with a 3D hybrid mean-median filter before reconstructing the image. Features were then derived from the structures (lung, heart and tumor) segmented from both images. The registration was performed using a modified multi-modality demon registration with a multiresolution scheme. In addition to visible qualitative improvements, the proposed registration technique increased the normalized mutual information (NMI) index between the PET/CT images after registration. All nine tested datasets showed greater improvement in the mutual information (MI) index than the free-form deformation (FFD) registration technique, with the highest MI increase being 25%.
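    The NMI index used to score the alignment can be computed from a joint histogram; this is the standard definition NMI = (H(A) + H(B)) / H(A, B), not necessarily the authors' exact implementation:

```python
import numpy as np

def normalized_mutual_information(a, b, bins=32):
    """NMI(A, B) = (H(A) + H(B)) / H(A, B), from a joint histogram.
    Ranges from 1 (independent images) to 2 (identical up to a bijection)."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)

    def entropy(p):
        p = p[p > 0]
        return -np.sum(p * np.log(p))

    return (entropy(px) + entropy(py)) / entropy(pxy.ravel())

rng = np.random.default_rng(0)
img = rng.random((64, 64))
noise = rng.random((64, 64))
nmi_same = normalized_mutual_information(img, img)      # identical -> 2.0
nmi_diff = normalized_mutual_information(img, noise)    # unrelated -> near 1
print(nmi_same, nmi_diff)
```

    A registration that improves the alignment of two images drives their NMI upward, which is why it serves as both a similarity measure and an evaluation index here.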

  13. SU-F-J-34: Automatic Target-Based Patient Positioning Framework for Image-Guided Radiotherapy in Prostate Cancer Treatment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sasahara, M; Arimura, H; Hirose, T

    Purpose: The current image-guided radiotherapy (IGRT) procedure is bone-based patient positioning, followed by subjective manual correction using cone beam computed tomography (CBCT). This procedure might cause misalignment in patient positioning. Automatic target-based patient positioning systems could achieve better reproducibility of patient setup. The aim of this study was to develop an automatic target-based patient positioning framework for IGRT with CBCT images in prostate cancer treatment. Methods: Seventy-three CBCT images of 10 patients and 24 planning CT images with digital imaging and communications in medicine for radiotherapy (DICOM-RT) structures were used for this study. Our proposed framework starts with the generation of probabilistic atlases of bone and prostate from the 24 planning CT images and the prostate contours made during treatment planning. Next, the gray-scale histograms of CBCT values within CTV regions in the planning CT images were obtained as the occurrence probability of the CBCT values. Then, CBCT images were registered to the atlases using a rigid registration with mutual information. Finally, prostate regions were estimated by applying Bayesian inference to CBCT images with the probabilistic atlases and CBCT value occurrence probability. The proposed framework was evaluated by calculating the Euclidean distance between the two centroids of prostate regions determined by our method and ground truths of manual delineations by a radiation oncologist and a medical physicist on CBCT images for 10 patients. Results: The average Euclidean distance between the centroids of extracted prostate regions determined by our proposed method and ground truths was 4.4 mm. The average errors for each direction were 1.8 mm in the anteroposterior direction, 0.6 mm in the lateral direction and 2.1 mm in the craniocaudal direction.
    Conclusion: Our proposed framework based on probabilistic atlases and Bayesian inference may be feasible for automatically determining prostate regions on CBCT images.

  14. Deformable and rigid registration of MRI and microPET images for photodynamic therapy of cancer in mice

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fei Baowei; Wang Hesheng; Muzic, Raymond F. Jr.

    2006-03-15

    We are investigating imaging techniques to study the tumor response to photodynamic therapy (PDT). Positron emission tomography (PET) can provide physiological and functional information. High-resolution magnetic resonance imaging (MRI) can provide anatomical and morphological changes. Image registration can combine MRI and PET images for improved tumor monitoring. In this study, we acquired high-resolution MRI and microPET ¹⁸F-fluorodeoxyglucose (FDG) images from C3H mice with RIF-1 tumors that were treated with Pc 4-based PDT. We developed two registration methods for this application. For registration of the whole mouse body, we used an automatic three-dimensional, normalized mutual information algorithm. For tumor registration, we developed a finite element model (FEM)-based deformable registration scheme. To assess the quality of whole body registration, we performed slice-by-slice review of both image volumes; manually segmented feature organs, such as the left and right kidneys and the bladder, in each slice; and computed the distance between corresponding centroids. Over 40 volume registration experiments were performed with MRI and microPET images. The distance between corresponding centroids of organs was 1.5±0.4 mm, which is about 2 pixels of the microPET images. The mean volume overlap ratios for tumors were 94.7% and 86.3% for the deformable and rigid registration methods, respectively. Registration of high-resolution MRI and microPET images combines anatomical and functional information of the tumors and provides a useful tool for evaluating photodynamic therapy.

  15. Open-source image registration for MRI-TRUS fusion-guided prostate interventions.

    PubMed

    Fedorov, Andriy; Khallaghi, Siavash; Sánchez, C Antonio; Lasso, Andras; Fels, Sidney; Tuncali, Kemal; Sugar, Emily Neubauer; Kapur, Tina; Zhang, Chenxi; Wells, William; Nguyen, Paul L; Abolmaesumi, Purang; Tempany, Clare

    2015-06-01

    We propose two software tools for non-rigid registration of MRI and transrectal ultrasound (TRUS) images of the prostate. Our ultimate goal is to develop an open-source solution to support MRI-TRUS fusion image guidance of prostate interventions, such as targeted biopsy for prostate cancer detection and focal therapy. It is widely hypothesized that image registration is an essential component in such systems. The two non-rigid registration methods are: (1) a deformable registration of the prostate segmentation distance maps with B-spline regularization and (2) a finite element-based deformable registration of the segmentation surfaces in the presence of partial data. We evaluate the methods retrospectively using clinical patient image data collected during standard clinical procedures. Computation time and Target Registration Error (TRE) calculated at the expert-identified anatomical landmarks were used as quantitative measures for the evaluation. The presented image registration tools were capable of completing deformable registration computation within 5 min. Average TRE was approximately 3 mm for both methods, which is comparable with the slice thickness in our MRI data. Both tools are available under nonrestrictive open-source license. We release open-source tools that may be used for registration during MRI-TRUS-guided prostate interventions. Our tools implement novel registration approaches and produce acceptable registration results. We believe these tools will lower the barriers in development and deployment of interventional research solutions and facilitate comparison with similar tools.
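    The Target Registration Error used in this evaluation reduces to Euclidean distances between expert landmarks after applying the estimated transform; a minimal sketch with synthetic landmarks (the transform, noise level, and landmark count are invented for illustration):

```python
import numpy as np

def tre(landmarks_fixed, landmarks_moving, transform):
    """Target Registration Error: per-landmark Euclidean distance (mm)
    between fixed landmarks and transformed moving landmarks."""
    moved = transform(landmarks_moving)
    return np.linalg.norm(moved - landmarks_fixed, axis=1)

rng = np.random.default_rng(0)
fixed = rng.uniform(0.0, 50.0, (6, 3))                 # mm, expert picks
t = np.array([1.0, -2.0, 0.5])                         # true translation
moving = fixed + t + rng.normal(0.0, 0.5, (6, 3))      # noisy counterpart

estimated = lambda p: p - t                            # recovered transform
errors = tre(fixed, moving, estimated)
print(errors.mean())   # residual reflects the landmark-picking noise
```

    Averaging such per-landmark distances over patients yields summary figures like the ~3 mm TRE reported above.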

  16. Comparison of an adaptive local thresholding method on CBCT and µCT endodontic images

    NASA Astrophysics Data System (ADS)

    Michetti, Jérôme; Basarab, Adrian; Diemer, Franck; Kouame, Denis

    2018-01-01

    Root canal segmentation on cone beam computed tomography (CBCT) images is difficult because of the noise level, resolution limitations, beam hardening and dental morphological variations. An image processing framework, based on an adaptive local threshold method, was evaluated on CBCT images acquired from extracted teeth. A comparison with high-quality segmented endodontic images from micro computed tomography (µCT) acquisitions of the same teeth was carried out using a dedicated registration process. Each segmented tooth was evaluated in terms of volume and of root canal sections through the area and the Feret's diameter. The proposed method is shown to overcome the limitations of CBCT and to provide an automated and adaptive complete endodontic segmentation. Despite a slight underestimation (−4.08%), the local threshold segmentation method based on edge detection was shown to be fast and accurate. Strong correlations between CBCT and µCT segmentations were found for both the root canal area and diameter (0.98 and 0.88, respectively). Our findings suggest that combining CBCT imaging with this image processing framework may benefit experimental endodontology and teaching, and could represent a first development step towards the clinical use of endodontic CBCT segmentation during pulp cavity treatment.
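    The general idea of an adaptive local threshold (each voxel compared against a statistic of its own neighborhood, so no single global cutoff is needed) can be sketched as below; the window size, offset, toy image, and mean-based criterion are assumptions, and the authors' actual method is edge-detection based:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def adaptive_local_threshold(img, window=15, offset=0.0):
    """Mark voxels brighter than their local neighborhood mean plus offset."""
    local_mean = uniform_filter(img.astype(float), size=window)
    return img > local_mean + offset

# Toy image: a bright canal-like band on a shaded background whose
# intensity gradient would defeat any single global threshold.
y, x = np.mgrid[0:100, 0:100]
img = 0.5 * x / 100.0            # left-to-right shading
img[:, 48:52] += 0.3             # canal-like bright band
mask = adaptive_local_threshold(img, window=21, offset=0.05)
print(mask[:, 48:52].mean(), mask.mean())
```

    Because the threshold adapts to each neighborhood, the band is detected across the whole gradient, which is the property that makes such methods attractive for noisy, shading-prone CBCT data.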

  17. Infrared thermal facial image sequence registration analysis and verification

    NASA Astrophysics Data System (ADS)

    Chen, Chieh-Li; Jian, Bo-Lin

    2015-03-01

    To study the emotional responses of subjects to the International Affective Picture System (IAPS), infrared thermal facial image sequences are preprocessed for registration before further analysis, such that the variance caused by minor and irregular subject movements is reduced. Without affecting the comfort level and inducing minimal harm, this study proposes an infrared thermal facial image sequence registration process that reduces the deviations caused by the unconscious head shaking of the subjects. A fixed image for registration is produced through the localization of the centroid of the eye region as well as image translation and rotation processes. The thermal image sequence is then automatically registered using the proposed two-stage genetic algorithm. The deviation before and after image registration is quantified by image quality indices. The results show that the infrared thermal image sequence registration process proposed in this study is effective in localizing facial images accurately, which will be beneficial to the correlation analysis of psychological information related to the facial area.

  18. Microscopic neural image registration based on the structure of mitochondria

    NASA Astrophysics Data System (ADS)

    Cao, Huiwen; Han, Hua; Rao, Qiang; Xiao, Chi; Chen, Xi

    2017-02-01

    Microscopic image registration is a key component of neural structure reconstruction from serial sections of neural tissue. The goal of microscopic neural image registration is to recover the 3D continuity and geometrical properties of the specimen. During image registration, various distortions need to be corrected, including image rotation, translation, and tissue deformation, which arise from the procedures of sample cutting, staining and imaging. Furthermore, there is only a certain degree of similarity between adjacent sections, and the degree of similarity depends on the local structure of the tissue and the thickness of the sections. These factors make microscopic neural image registration a challenging problem. To tackle the difficulty of extracting corresponding landmarks, we introduce a novel image registration method for Scanning Electron Microscopy (SEM) images of serial neural tissue sections based on the structure of mitochondria. The ellipsoidal shape of mitochondria ensures that the same mitochondrion has a similar shape in adjacent sections, and their broad distribution in neural tissue guarantees that landmarks based on mitochondria are distributed widely across the image. The proposed image registration method contains three parts: landmark extraction between adjacent sections, corresponding landmark matching and image deformation based on the correspondences. We demonstrate the performance of our method on SEM images of the drosophila brain.

  19. Optimizing image registration and infarct definition in stroke research.

    PubMed

    Harston, George W J; Minks, David; Sheerin, Fintan; Payne, Stephen J; Chappell, Michael; Jezzard, Peter; Jenkinson, Mark; Kennedy, James

    2017-03-01

    Accurate representation of final infarct volume is essential for assessing the efficacy of stroke interventions in imaging-based studies. This study defines the impact of image registration methods used at different timepoints following stroke, and the implications for infarct definition in stroke research. Patients presenting with acute ischemic stroke were imaged serially using magnetic resonance imaging. Infarct volume was defined manually using four metrics: 24-h b1000 imaging; 1-week and 1-month T2-weighted FLAIR; and automatically using predefined thresholds of ADC at 24 h. Infarct overlap statistics and volumes were compared across timepoints following both rigid body and nonlinear image registration to the presenting MRI. The effect of nonlinear registration on a hypothetical trial sample size was calculated. Thirty-seven patients were included. Nonlinear registration improved infarct overlap statistics and consistency of total infarct volumes across timepoints, and reduced infarct volumes by 4.0 mL (13.1%) and 7.1 mL (18.2%) at 24 h and 1 week, respectively, compared to rigid body registration. Infarct volume at 24 h, defined using a predetermined ADC threshold, was less sensitive to infarction than b1000 imaging. 1-week T2-weighted FLAIR imaging was the most accurate representation of final infarct volume. Nonlinear registration reduced hypothetical trial sample size, independent of infarct volume, by an average of 13%. Nonlinear image registration may offer the opportunity of improving the accuracy of infarct definition in serial imaging studies compared to rigid body registration, helping to overcome the challenges of anatomical distortions at subacute timepoints, and reducing sample size for imaging-based clinical trials.

  20. Transformation diffusion reconstruction of three-dimensional histology volumes from two-dimensional image stacks.

    PubMed

    Casero, Ramón; Siedlecka, Urszula; Jones, Elizabeth S; Gruscheski, Lena; Gibb, Matthew; Schneider, Jürgen E; Kohl, Peter; Grau, Vicente

    2017-05-01

    Traditional histology is the gold standard for tissue studies, but it is intrinsically reliant on two-dimensional (2D) images. Study of volumetric tissue samples such as whole hearts produces a stack of misaligned and distorted 2D images that need to be reconstructed to recover a congruent volume with the original sample's shape. In this paper, we develop a mathematical framework called Transformation Diffusion (TD) for stack alignment refinement as a solution to the heat diffusion equation. This general framework does not require contour segmentation, is independent of the registration method used, and is trivially parallelizable. After the first stack sweep, we also replace registration operations by operations in the space of transformations, several orders of magnitude faster and less memory-consuming. Implementing TD with operations in the space of transformations produces our Transformation Diffusion Reconstruction (TDR) algorithm, applicable to general transformations that are closed under inversion and composition. In particular, we provide formulas for translation and affine transformations. We also propose an Approximated TDR (ATDR) algorithm that extends the same principles to tensor-product B-spline transformations. Using TDR and ATDR, we reconstruct a full mouse heart at pixel size 0.92µm×0.92µm, cut 10µm thick, spaced 20µm (84G). Our algorithms employ only local information from transformations between neighboring slices, but the TD framework allows theoretical analysis of the refinement as applying a global Gaussian low-pass filter to the unknown stack misalignments. We also show that reconstruction without an external reference produces large shape artifacts in a cardiac specimen while still optimizing slice-to-slice alignment. 
To overcome this problem, we use a pre-cutting blockface imaging process previously developed by our group that takes advantage of Brewster's angle and a polarizer to capture the outline of only the topmost layer of wax in the block containing embedded tissue for histological sectioning. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
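    For pure translations, the TD refinement can be sketched as an iterative neighbor-averaging (discrete heat-diffusion) update on the per-slice transforms, which low-pass filters slice-to-slice jitter while preserving smooth trends; the parameters, jitter model, and end-slice boundary condition below are illustrative, and the paper covers general invertible transforms:

```python
import numpy as np

def diffuse_translations(t, n_iter=200, alpha=0.5):
    """Heat-diffusion refinement of per-slice 2-D translations."""
    t = t.astype(float).copy()
    for _ in range(n_iter):
        neighbor_mean = 0.5 * (np.roll(t, 1, axis=0) + np.roll(t, -1, axis=0))
        update = alpha * (neighbor_mean - t)
        update[0] = update[-1] = 0.0   # keep end slices as the reference
        t += update
    return t

rng = np.random.default_rng(0)
smooth = np.linspace(0, 5, 50)[:, None] * np.array([1.0, 0.5])  # true drift
jitter = rng.normal(0.0, 1.0, (50, 2))                          # cutting noise
refined = diffuse_translations(smooth + jitter)

err_refined = np.abs(refined - smooth).mean()
err_raw = np.abs(jitter).mean()
print(err_refined, "<", err_raw)
```

    Operating on the transforms themselves, rather than re-running registrations, is what makes each diffusion sweep orders of magnitude cheaper than a registration sweep.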

  1. Geometric registration of remotely sensed data with SAMIR

    NASA Astrophysics Data System (ADS)

    Gianinetto, Marco; Barazzetti, Luigi; Dini, Luigi; Fusiello, Andrea; Toldo, Roberto

    2015-06-01

    The commercial market offers several software packages for the registration of remotely sensed data through standard one-to-one image matching. Although very rapid and simple, this strategy does not take into consideration all the interconnections among the images of a multi-temporal data set. This paper presents a new scientific software package, called Satellite Automatic Multi-Image Registration (SAMIR), able to extend the traditional registration approach towards multi-image global processing. Tests carried out with high-resolution optical (IKONOS) and high-resolution radar (COSMO-SkyMed) data showed that SAMIR can improve the registration phase with a more rigorous and robust workflow without initial approximations, user interaction or limitations on spatial/spectral data size. The validation highlighted sub-pixel accuracy in image co-registration for the considered imaging technologies, including optical and radar imagery.

  2. Groupwise Image Registration Guided by a Dynamic Digraph of Images.

    PubMed

    Tang, Zhenyu; Fan, Yong

    2016-04-01

    For groupwise image registration, graph theoretic methods have been adopted for discovering the manifold of images to be registered so that accurate registration of images to a group center image can be achieved by aligning similar images that are linked by the shortest graph paths. However, the image similarity measures adopted to build a graph of images in the extant methods are essentially pairwise measures, not effective for capturing the groupwise similarity among multiple images. To overcome this problem, we present a groupwise image similarity measure that is built on sparse coding for characterizing image similarity among all input images and build a directed graph (digraph) of images so that similar images are connected by the shortest paths of the digraph. Following the shortest paths determined according to the digraph, images are registered to a group center image in an iterative manner by decomposing a large anatomical deformation field required to register an image to the group center image into a series of small ones between similar images. During the iterative image registration, the digraph of images evolves dynamically at each iteration step to pursue an accurate estimation of the image manifold. Moreover, an adaptive dictionary strategy is adopted in the groupwise image similarity measure to ensure fast convergence of the iterative registration procedure. The proposed method has been validated on both simulated and real brain images, and experimental results have demonstrated that our method was more effective for learning the manifold of input images and achieved higher registration accuracy than state-of-the-art groupwise image registration methods.
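    The shortest-path routing at the heart of this scheme can be sketched with a plain Dijkstra search on a toy digraph of images; the edge weights below are made-up dissimilarities standing in for the sparse-coding-based groupwise similarity measure:

```python
import heapq

def dijkstra_to_center(graph, center):
    """graph: {node: {neighbor: weight}} with directed edges.
    Returns, for each node, its next hop on the shortest path to `center`
    (found by searching backwards from the center over reversed edges)."""
    reverse = {}
    for u, nbrs in graph.items():
        for v, w in nbrs.items():
            reverse.setdefault(v, {})[u] = w
    dist, next_hop = {center: 0.0}, {}
    heap = [(0.0, center)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in reverse.get(u, {}).items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], next_hop[v] = nd, u
                heapq.heappush(heap, (nd, v))
    return next_hop

# Toy digraph: registering image A straight to the group center C is costly,
# so A is routed through the more similar image B (small deformations only).
g = {"A": {"C": 10.0, "B": 1.0}, "B": {"C": 1.0}, "C": {}}
hops = dijkstra_to_center(g, "C")
print(hops)   # A is registered to B first, then B to C
```

    In the actual method the hops correspond to composing small deformation fields between similar images, and the digraph (and hence these paths) is rebuilt at every iteration.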

  3. Learning-based deformable image registration for infant MR images in the first year of life.

    PubMed

    Hu, Shunbo; Wei, Lifang; Gao, Yaozong; Guo, Yanrong; Wu, Guorong; Shen, Dinggang

    2017-01-01

    Many brain development studies have been devoted to investigate dynamic structural and functional changes in the first year of life. To quantitatively measure brain development in such a dynamic period, accurate image registration for different infant subjects with possible large age gap is of high demand. Although many state-of-the-art image registration methods have been proposed for young and elderly brain images, very few registration methods work for infant brain images acquired in the first year of life, because of (a) large anatomical changes due to fast brain development and (b) dynamic appearance changes due to white-matter myelination. To address these two difficulties, we propose a learning-based registration method to not only align the anatomical structures but also alleviate the appearance differences between two arbitrary infant MR images (with large age gap) by leveraging the regression forest to predict both the initial displacement vector and appearance changes. Specifically, in the training stage, two regression models are trained separately, with (a) one model learning the relationship between local image appearance (of one development phase) and its displacement toward the template (of another development phase) and (b) another model learning the local appearance changes between the two brain development phases. Then, in the testing stage, to register a new infant image to the template, we first predict both its voxel-wise displacement and appearance changes by the two learned regression models. Since such initializations can alleviate significant appearance and shape differences between new infant image and the template, it is easy to just use a conventional registration method to refine the remaining registration. 
We apply our proposed registration method to align 24 infant subjects at five different time points (i.e., 2-week-old, 3-month-old, 6-month-old, 9-month-old, and 12-month-old), and achieve more accurate and robust registration results, compared to the state-of-the-art registration methods. The proposed learning-based registration method addresses the challenging task of registering infant brain images and achieves higher registration accuracy compared with other counterpart registration methods. © 2016 American Association of Physicists in Medicine.

  4. Edge-based correlation image registration for multispectral imaging

    DOEpatents

    Nandy, Prabal [Albuquerque, NM

    2009-11-17

    Registration information for images of a common target obtained from a plurality of different spectral bands can be obtained by combining edge detection and phase correlation. The images are edge-filtered, and pairs of the edge-filtered images are then phase correlated to produce phase correlation images. The registration information can be determined based on these phase correlation images.
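    The edge-filter-then-phase-correlate combination can be sketched as below; the Sobel edge magnitude and a synthetic circular shift stand in for the patent's actual edge filters and multispectral band pairs:

```python
import numpy as np
from scipy import ndimage

def phase_correlate(a, b):
    """Estimate the (row, col) shift taking a to b by phase correlation."""
    fa, fb = np.fft.fft2(a), np.fft.fft2(b)
    cross = np.conj(fa) * fb
    cross /= np.abs(cross) + 1e-12          # keep phase only
    corr = np.abs(np.fft.ifft2(cross))
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # map the correlation peak to a signed shift
    return tuple(p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape))

rng = np.random.default_rng(0)
band_a = ndimage.gaussian_filter(rng.random((128, 128)), 3)
band_b = np.roll(band_a, (5, -7), axis=(0, 1))   # misregistered copy

# Edge filtering before correlation reduces sensitivity to band-to-band
# intensity differences ('wrap' keeps the toy circular shift exact).
edge_a = ndimage.sobel(band_a, 0, mode="wrap")**2 + ndimage.sobel(band_a, 1, mode="wrap")**2
edge_b = ndimage.sobel(band_b, 0, mode="wrap")**2 + ndimage.sobel(band_b, 1, mode="wrap")**2

print(phase_correlate(edge_a, edge_b))   # recovers (5, -7)
```

    Correlating edge maps rather than raw intensities is what lets bands with very different radiometry still share a common, registerable structure.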

  5. Parallel image registration with a thin client interface

    NASA Astrophysics Data System (ADS)

    Saiprasad, Ganesh; Lo, Yi-Jung; Plishker, William; Lei, Peng; Ahmad, Tabassum; Shekhar, Raj

    2010-03-01

    Despite its high significance, the clinical utilization of image registration remains limited because of its lengthy execution time and a lack of easy access. The focus of this work was twofold. First, we accelerated our coarse-to-fine, volume subdivision-based image registration algorithm by a novel parallel implementation that maintains the accuracy of our uniprocessor implementation. Second, we developed a thin-client computing model with a user-friendly interface to perform rigid and nonrigid image registration. Our novel parallel computing model uses the message passing interface model on a 32-core cluster. The results show that, compared with the uniprocessor implementation, the parallel implementation of our image registration algorithm is approximately 5 times faster for rigid image registration and approximately 9 times faster for nonrigid registration for the images used. To test the viability of such systems for clinical use, we developed a thin client in the form of a plug-in in OsiriX, a well-known open source PACS workstation and DICOM viewer, and used it for two applications. The first application registered the baseline and follow-up MR brain images, whose subtraction was used to track progression of multiple sclerosis. The second application registered pretreatment PET and intratreatment CT of radiofrequency ablation patients to demonstrate a new capability of multimodality imaging guidance. The registration acceleration coupled with the remote implementation using a thin client should ultimately increase accuracy, speed, and access of image registration-based interpretations in a number of diagnostic and interventional applications.

  6. Three-dimensional nonrigid landmark-based magnetic resonance to transrectal ultrasound registration for image-guided prostate biopsy.

    PubMed

    Sun, Yue; Qiu, Wu; Yuan, Jing; Romagnoli, Cesare; Fenster, Aaron

    2015-04-01

    Registration of three-dimensional (3-D) magnetic resonance (MR) to 3-D transrectal ultrasound (TRUS) prostate images is an important step in the planning and guidance of 3-D TRUS guided prostate biopsy. In order to accurately and efficiently perform the registration, a nonrigid landmark-based registration method is required to account for the different deformations of the prostate when using these two modalities. We describe a nonrigid landmark-based method for registration of 3-D TRUS to MR prostate images. The landmark-based registration method first makes use of an initial rigid registration of 3-D MR to 3-D TRUS images using six manually placed approximately corresponding landmarks in each image. Following manual initialization, the two prostate surfaces are segmented from 3-D MR and TRUS images and then nonrigidly registered using the following steps: (1) rotationally reslicing corresponding segmented prostate surfaces from both 3-D MR and TRUS images around a specified axis, (2) finding point correspondences on the segmented surfaces, and (3) deforming the surface and interior of the prostate in the MR image to match the prostate surface in the 3-D TRUS image using a thin-plate spline algorithm. The registration accuracy was evaluated using 17 patient prostate MR and 3-D TRUS images by measuring the target registration error (TRE). Experimental results showed that the proposed method yielded an overall mean TRE of [Formula: see text] for the rigid registration and [Formula: see text] for the nonrigid registration, which is favorably comparable to a clinical requirement for an error of less than 2.5 mm. A landmark-based nonrigid 3-D MR-TRUS registration approach is proposed, which takes into account the correspondences on the prostate surface, inside the prostate, as well as the centroid of the prostate. Experimental results indicate that the proposed method yields clinically sufficient accuracy.
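    Step (3), a thin-plate spline that interpolates exactly at the surface correspondences and smoothly deforms everything in between, can be sketched in 2-D with SciPy's RBF interpolator; the landmark sets and deformation below are synthetic:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(0)

# Hypothetical corresponding points on the MR (source) and TRUS (target)
# prostate surfaces; a 2-D toy stands in for the real 3-D surfaces.
src = rng.uniform(-1.0, 1.0, (30, 2))
dst = src + 0.1 * np.sin(3.0 * src)        # smooth synthetic deformation

# Thin-plate spline: exact at the correspondences, smooth elsewhere.
tps = RBFInterpolator(src, dst, kernel="thin_plate_spline")

interior = rng.uniform(-0.8, 0.8, (5, 2))  # e.g. points inside the gland
warped = tps(interior)
print(np.abs(tps(src) - dst).max())        # ~0: interpolates the landmarks
```

    Because the spline is defined everywhere, the same fitted map that matches the surface correspondences also carries interior points (such as biopsy targets) into TRUS space.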

  7. The role of image registration in brain mapping

    PubMed Central

    Toga, A.W.; Thompson, P.M.

    2008-01-01

    Image registration is a key step in a great variety of biomedical imaging applications. It provides the ability to geometrically align one dataset with another, and is a prerequisite for all imaging applications that compare datasets across subjects, imaging modalities, or across time. Registration algorithms also enable the pooling and comparison of experimental findings across laboratories, the construction of population-based brain atlases, and the creation of systems to detect group patterns in structural and functional imaging data. We review the major types of registration approaches used in brain imaging today. We focus on their conceptual basis, the underlying mathematics, and their strengths and weaknesses in different contexts. We describe the major goals of registration, including data fusion, quantification of change, automated image segmentation and labeling, shape measurement, and pathology detection. We indicate that registration algorithms have great potential when used in conjunction with a digital brain atlas, which acts as a reference system in which brain images can be compared for statistical analysis. The resulting armory of registration approaches is fundamental to medical image analysis, and in a brain mapping context provides a means to elucidate clinical, demographic, or functional trends in the anatomy or physiology of the brain. PMID:19890483

  8. Multi-modal Registration for Correlative Microscopy using Image Analogies

    PubMed Central

    Cao, Tian; Zach, Christopher; Modla, Shannon; Powell, Debbie; Czymmek, Kirk; Niethammer, Marc

    2014-01-01

    Correlative microscopy is a methodology combining the functionality of light microscopy with the high resolution of electron microscopy and other microscopy technologies for the same biological specimen. In this paper, we propose an image registration method for correlative microscopy, which is challenging due to the distinct appearance of biological structures when imaged with different modalities. Our method is based on image analogies and allows images of a given modality to be transformed into the appearance space of another modality. Hence, the registration between two different types of microscopy images reduces to a mono-modality image registration. We use a sparse representation model to obtain image analogies. The method makes use of corresponding image training patches of two different imaging modalities to learn a dictionary capturing appearance relations. We test our approach on backscattered electron (BSE) scanning electron microscopy (SEM)/confocal and transmission electron microscopy (TEM)/confocal images. We perform rigid, affine, and deformable registration via B-splines and show improvements over direct registration using both mutual information and sum of squared differences similarity measures to account for differences in image appearance. PMID:24387943

  9. Optimal atlas construction through hierarchical image registration

    NASA Astrophysics Data System (ADS)

    Grevera, George J.; Udupa, Jayaram K.; Odhner, Dewey; Torigian, Drew A.

    2016-03-01

    Atlases (digital or otherwise) are common in medicine. However, there is no standard framework for creating them from medical images. One traditional approach is to pick a representative subject and then proceed to label structures/regions of interest in this image. Another is to create a "mean" or average subject. Atlases may also contain more than a single representative (e.g., the Visible Human contains both a male and a female data set). Other criteria besides gender may be used as well, and the atlas may contain many examples for a given criterion. In this work, we propose that atlases be created in an optimal manner using a well-established graph-theoretic approach based on a minimum spanning tree (or, more generally, a collection of them). The resulting atlases may contain many examples for a given criterion. In fact, our framework allows for the addition of new subjects to the atlas to allow it to evolve over time. Furthermore, one can apply segmentation methods to the graph (e.g., graph-cut, fuzzy connectedness, or cluster analysis) which allow it to be separated into "sub-atlases" as it evolves. We demonstrate our method by applying it to 50 3D CT data sets of the chest region, and by comparing it to a number of traditional methods using measures such as Mean Squared Difference, Mattes Mutual Information, and Correlation for rigid registration. Our results demonstrate that optimal atlases can be constructed in this manner and outperform other methods of construction using freely available software.
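    The core graph step (linking subjects by a minimum spanning tree over a pairwise dissimilarity such as Mean Squared Difference) can be illustrated with Prim's algorithm on toy data; this is a generic stand-in, not the authors' code:

```python
import numpy as np

def msd_matrix(images):
    """Pairwise Mean Squared Difference between same-sized images."""
    n = len(images)
    D = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            D[i, j] = D[j, i] = np.mean((images[i] - images[j]) ** 2)
    return D

def prim_mst(D):
    """Return the edges (i, j) of a minimum spanning tree via Prim's algorithm."""
    n = D.shape[0]
    in_tree = {0}
    edges = []
    while len(in_tree) < n:
        # Pick the cheapest edge leaving the current tree.
        best = min(((D[i, j], i, j) for i in in_tree for j in range(n)
                    if j not in in_tree), key=lambda t: t[0])
        edges.append((best[1], best[2]))
        in_tree.add(best[2])
    return edges

# Five synthetic "subjects": small perturbations of a shared base image.
rng = np.random.default_rng(1)
base = rng.random((8, 8))
images = [base + 0.01 * k + 0.001 * rng.random((8, 8)) for k in range(5)]
edges = prim_mst(msd_matrix(images))
```

The resulting tree links each subject to its most similar neighbor, so registrations can be chained along short, low-dissimilarity edges instead of forcing every subject onto a single distant reference.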

  10. DIRBoost-an algorithm for boosting deformable image registration: application to lung CT intra-subject registration.

    PubMed

    Muenzing, Sascha E A; van Ginneken, Bram; Viergever, Max A; Pluim, Josien P W

    2014-04-01

    We introduce a boosting algorithm to improve on existing methods for deformable image registration (DIR). The proposed DIRBoost algorithm is inspired by the theory on hypothesis boosting, well known in the field of machine learning. DIRBoost utilizes a method for automatic registration error detection to obtain estimates of local registration quality. All areas detected as erroneously registered are subjected to boosting, i.e. undergo iterative registrations by employing boosting masks on both the fixed and moving image. We validated the DIRBoost algorithm on three different DIR methods (ANTS gSyn, NiftyReg, and DROP) on three independent reference datasets of pulmonary image scan pairs. DIRBoost reduced registration errors significantly and consistently on all reference datasets for each DIR algorithm, yielding an improvement of the registration accuracy by 5-34% depending on the dataset and the registration algorithm employed. Copyright © 2014 Elsevier B.V. All rights reserved.

  11. Real-time automatic registration in optical surgical navigation

    NASA Astrophysics Data System (ADS)

    Lin, Qinyong; Yang, Rongqian; Cai, Ken; Si, Xuan; Chen, Xiuwen; Wu, Xiaoming

    2016-05-01

    An image-guided surgical navigation system requires the improvement of the patient-to-image registration time to enhance the convenience of the registration procedure. A critical step in achieving this aim is performing a fully automatic patient-to-image registration. This study reports on a design of custom fiducial markers and the performance of a real-time automatic patient-to-image registration method using these markers on the basis of an optical tracking system for rigid anatomy. The custom fiducial markers are designed to be automatically localized in both patient and image spaces. An automatic localization method is performed by registering a point cloud sampled from the three dimensional (3D) pedestal model surface of a fiducial marker to each pedestal of fiducial markers searched in image space. A head phantom is constructed to estimate the performance of the real-time automatic registration method under four fiducial configurations. The head phantom experimental results demonstrate that the real-time automatic registration method is more convenient, rapid, and accurate than the manual method. The time required for each registration is approximately 0.1 s. The automatic localization method precisely localizes the fiducial markers in image space. The averaged target registration error for the four configurations is approximately 0.7 mm. The automatic registration performance is independent of the positions relative to the tracking system and the movement of the patient during the operation.
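    A common closed-form choice for the rigid point-correspondence step (aligning the sampled pedestal point cloud to fiducial positions found in image space) is the Kabsch/SVD solution sketched below on synthetic points; the system's actual solver may differ:

```python
import numpy as np

def rigid_fit(P, Q):
    """Least-squares rotation R and translation t with R @ P_i + t ~= Q_i."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)                 # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cq - R @ cp

# Synthetic check: recover a known pose from ten corresponding points.
rng = np.random.default_rng(2)
P = rng.random((10, 3))
angle = 0.3
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([1.0, -2.0, 0.5])
Q = P @ R_true.T + t_true
R_est, t_est = rigid_fit(P, Q)
```

With exact correspondences the pose is recovered to machine precision; with noisy fiducial localizations the same formula gives the least-squares optimum, whose residual is what the target registration error quantifies.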

  12. Demons deformable registration for CBCT-guided procedures in the head and neck: Convergence and accuracy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nithiananthan, S.; Brock, K. K.; Daly, M. J.

    2009-10-15

    Purpose: The accuracy and convergence behavior of a variant of the Demons deformable registration algorithm were investigated for use in cone-beam CT (CBCT)-guided procedures of the head and neck. Online use of deformable registration for guidance of therapeutic procedures such as image-guided surgery or radiation therapy places trade-offs on accuracy and computational expense. This work describes a convergence criterion for Demons registration developed to balance these demands; the accuracy of a multiscale Demons implementation using this convergence criterion is quantified in CBCT images of the head and neck. Methods: Using an open-source "symmetric" Demons registration algorithm, a convergence criterion based on the change in the deformation field between iterations was developed to advance among multiple levels of a multiscale image pyramid in a manner that optimized accuracy and computation time. The convergence criterion was optimized in cadaver studies involving CBCT images acquired using a surgical C-arm prototype modified for 3D intraoperative imaging. CBCT-to-CBCT registration was performed and accuracy was quantified in terms of the normalized cross-correlation (NCC) and target registration error (TRE). The accuracy and robustness of the algorithm were then tested in clinical CBCT images of ten patients undergoing radiation therapy of the head and neck. Results: The cadaver model allowed optimization of the convergence factor and initial measurements of registration accuracy: Demons registration exhibited TRE=(0.8±0.3) mm and NCC=0.99 in the cadaveric head compared to TRE=(2.6±1.0) mm and NCC=0.93 with rigid registration. Similarly for the patient data, Demons registration gave mean TRE=(1.6±0.9) mm compared to rigid registration TRE=(3.6±1.9) mm, suggesting registration accuracy at or near the voxel size of the patient images (1×1×2 mm³). The multiscale implementation based on optimal convergence criteria completed registration in 52 s for the cadaveric head and in an average time of 270 s for the larger FOV patient images. Conclusions: Appropriate selection of convergence and multiscale parameters in Demons registration was shown to reduce computational expense without sacrificing registration performance. For intraoperative CBCT imaging with deformable registration, the ability to perform accurate registration within the stringent time requirements of the operating environment could offer a useful clinical tool allowing integration of preoperative information while accurately reflecting changes in the patient anatomy. Similarly for CBCT-guided radiation therapy, fast accurate deformable registration could further augment high-precision treatment strategies.
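    A 1-D toy version of the Demons update and the deformation-field-change convergence criterion might look as follows. This is illustrative only (Thirion-style force, fixed-image gradient, box-kernel smoothing); the paper's multiscale 3-D implementation differs:

```python
import numpy as np

# Fixed and moving signals: the moving bump is the fixed bump shifted right by 4.
x = np.arange(128.0)
f = np.exp(-((x - 60.0) / 8.0) ** 2)          # fixed image
m = np.exp(-((x - 64.0) / 8.0) ** 2)          # moving image
u = np.zeros_like(x)                          # displacement field

g = np.gradient(f)                            # fixed-image gradient (Thirion's form)
kernel = np.exp(-np.linspace(-2, 2, 9) ** 2)  # small Gaussian for field smoothing
kernel /= kernel.sum()
ssd0 = np.sum((m - f) ** 2)

for _ in range(200):
    mw = np.interp(x + u, x, m)               # warp moving image by current field
    diff = mw - f
    denom = g ** 2 + diff ** 2
    step = np.where(denom > 1e-9, diff * g / denom, 0.0)
    u_new = np.convolve(u - step, kernel, mode="same")   # regularize the field
    delta = np.mean(np.abs(u_new - u))        # change in deformation field
    u = u_new
    if delta < 1e-4:                          # convergence criterion, as above
        break

ssd_final = np.sum((np.interp(x + u, x, m) - f) ** 2)
```

In the multiscale scheme described in the abstract, reaching this threshold at one pyramid level would trigger advancement to the next finer level rather than outright termination.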

  13. Demons deformable registration for CBCT-guided procedures in the head and neck: convergence and accuracy.

    PubMed

    Nithiananthan, S; Brock, K K; Daly, M J; Chan, H; Irish, J C; Siewerdsen, J H

    2009-10-01

    The accuracy and convergence behavior of a variant of the Demons deformable registration algorithm were investigated for use in cone-beam CT (CBCT)-guided procedures of the head and neck. Online use of deformable registration for guidance of therapeutic procedures such as image-guided surgery or radiation therapy places trade-offs on accuracy and computational expense. This work describes a convergence criterion for Demons registration developed to balance these demands; the accuracy of a multiscale Demons implementation using this convergence criterion is quantified in CBCT images of the head and neck. Using an open-source "symmetric" Demons registration algorithm, a convergence criterion based on the change in the deformation field between iterations was developed to advance among multiple levels of a multiscale image pyramid in a manner that optimized accuracy and computation time. The convergence criterion was optimized in cadaver studies involving CBCT images acquired using a surgical C-arm prototype modified for 3D intraoperative imaging. CBCT-to-CBCT registration was performed and accuracy was quantified in terms of the normalized cross-correlation (NCC) and target registration error (TRE). The accuracy and robustness of the algorithm were then tested in clinical CBCT images of ten patients undergoing radiation therapy of the head and neck. The cadaver model allowed optimization of the convergence factor and initial measurements of registration accuracy: Demons registration exhibited TRE=(0.8±0.3) mm and NCC=0.99 in the cadaveric head compared to TRE=(2.6±1.0) mm and NCC=0.93 with rigid registration. Similarly for the patient data, Demons registration gave mean TRE=(1.6±0.9) mm compared to rigid registration TRE=(3.6±1.9) mm, suggesting registration accuracy at or near the voxel size of the patient images (1×1×2 mm³). The multiscale implementation based on optimal convergence criteria completed registration in 52 s for the cadaveric head and in an average time of 270 s for the larger FOV patient images. Appropriate selection of convergence and multiscale parameters in Demons registration was shown to reduce computational expense without sacrificing registration performance. For intraoperative CBCT imaging with deformable registration, the ability to perform accurate registration within the stringent time requirements of the operating environment could offer a useful clinical tool allowing integration of preoperative information while accurately reflecting changes in the patient anatomy. Similarly for CBCT-guided radiation therapy, fast accurate deformable registration could further augment high-precision treatment strategies.

  14. Demons deformable registration for CBCT-guided procedures in the head and neck: Convergence and accuracy

    PubMed Central

    Nithiananthan, S.; Brock, K. K.; Daly, M. J.; Chan, H.; Irish, J. C.; Siewerdsen, J. H.

    2009-01-01

    Purpose: The accuracy and convergence behavior of a variant of the Demons deformable registration algorithm were investigated for use in cone-beam CT (CBCT)-guided procedures of the head and neck. Online use of deformable registration for guidance of therapeutic procedures such as image-guided surgery or radiation therapy places trade-offs on accuracy and computational expense. This work describes a convergence criterion for Demons registration developed to balance these demands; the accuracy of a multiscale Demons implementation using this convergence criterion is quantified in CBCT images of the head and neck. Methods: Using an open-source “symmetric” Demons registration algorithm, a convergence criterion based on the change in the deformation field between iterations was developed to advance among multiple levels of a multiscale image pyramid in a manner that optimized accuracy and computation time. The convergence criterion was optimized in cadaver studies involving CBCT images acquired using a surgical C-arm prototype modified for 3D intraoperative imaging. CBCT-to-CBCT registration was performed and accuracy was quantified in terms of the normalized cross-correlation (NCC) and target registration error (TRE). The accuracy and robustness of the algorithm were then tested in clinical CBCT images of ten patients undergoing radiation therapy of the head and neck. Results: The cadaver model allowed optimization of the convergence factor and initial measurements of registration accuracy: Demons registration exhibited TRE=(0.8±0.3) mm and NCC=0.99 in the cadaveric head compared to TRE=(2.6±1.0) mm and NCC=0.93 with rigid registration. Similarly for the patient data, Demons registration gave mean TRE=(1.6±0.9) mm compared to rigid registration TRE=(3.6±1.9) mm, suggesting registration accuracy at or near the voxel size of the patient images (1×1×2 mm3). 
The multiscale implementation based on optimal convergence criteria completed registration in 52 s for the cadaveric head and in an average time of 270 s for the larger FOV patient images. Conclusions: Appropriate selection of convergence and multiscale parameters in Demons registration was shown to reduce computational expense without sacrificing registration performance. For intraoperative CBCT imaging with deformable registration, the ability to perform accurate registration within the stringent time requirements of the operating environment could offer a useful clinical tool allowing integration of preoperative information while accurately reflecting changes in the patient anatomy. Similarly for CBCT-guided radiation therapy, fast accurate deformable registration could further augment high-precision treatment strategies. PMID:19928106

  15. SU-E-J-89: Comparative Analysis of MIM and Velocity’s Image Deformation Algorithm Using Simulated KV-CBCT Images for Quality Assurance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cline, K; Narayanasamy, G; Obediat, M

    Purpose: Deformable image registration (DIR) is used routinely in the clinic without a formalized quality assurance (QA) process. Using simulated deformations to digitally deform images in a known way and comparing to DIR algorithm predictions is a powerful technique for DIR QA. This technique must also simulate realistic image noise and artifacts, especially between modalities. This study developed an algorithm to create simulated daily kV cone-beam computed-tomography (CBCT) images from CT images for DIR QA between these modalities. Methods: A Catphan and physical head-and-neck phantom, with known deformations, were used. CT and kV-CBCT images of the Catphan were utilized to characterize the changes in Hounsfield units, noise, and image cupping that occur between these imaging modalities. The algorithm then imprinted these changes onto a CT image of the deformed head-and-neck phantom, thereby creating a simulated-CBCT image. CT and kV-CBCT images of the undeformed and deformed head-and-neck phantom were also acquired. The Velocity and MIM DIR algorithms were applied between the undeformed CT image and each of the deformed CT, CBCT, and simulated-CBCT images to obtain predicted deformations. The error between the known and predicted deformations was used as a metric to evaluate the quality of the simulated-CBCT image. Ideally, the simulated-CBCT image registration would produce the same accuracy as the deformed CBCT image registration. Results: For Velocity, the mean error was 1.4 mm for the CT-CT registration, 1.7 mm for the CT-CBCT registration, and 1.4 mm for the CT-simulated-CBCT registration. These same numbers were 1.5, 4.5, and 5.9 mm, respectively, for MIM. Conclusion: All cases produced similar accuracy for Velocity. MIM produced similar values of accuracy for CT-CT registration, but was not as accurate for CT-CBCT registrations. The MIM simulated-CBCT registration followed this same trend, but overestimated MIM DIR errors relative to the CT-CBCT registration.

  16. Automatic elastic image registration by interpolation of 3D rotations and translations from discrete rigid-body transformations.

    PubMed

    Walimbe, Vivek; Shekhar, Raj

    2006-12-01

    We present an algorithm for automatic elastic registration of three-dimensional (3D) medical images. Our algorithm initially recovers the global spatial mismatch between the reference and floating images, followed by hierarchical octree-based subdivision of the reference image and independent registration of the floating image with the individual subvolumes of the reference image at each hierarchical level. Global as well as local registrations use the six-parameter full rigid-body transformation model and are based on maximization of normalized mutual information (NMI). To ensure robustness of the subvolume registration with low voxel counts, we calculate NMI using a combination of current and prior mutual histograms. To generate a smooth deformation field, we perform direct interpolation of six-parameter rigid-body subvolume transformations obtained at the last subdivision level. Our interpolation scheme involves scalar interpolation of the 3D translations and quaternion interpolation of the 3D rotational pose. We analyzed the performance of our algorithm through experiments involving registration of synthetically deformed computed tomography (CT) images. Our algorithm is general and can be applied to image pairs of any two modalities of most organs. We have demonstrated successful registration of clinical whole-body CT and positron emission tomography (PET) images using this algorithm. The registration accuracy for this application was evaluated, based on validation using expert-identified anatomical landmarks in 15 CT-PET image pairs. The algorithm's performance was comparable to the average accuracy observed for three expert-determined registrations in the same 15 image pairs.
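    The interpolation scheme (scalar interpolation of the translations plus quaternion slerp of the rotational poses) can be sketched as follows; the quaternion convention (w, x, y, z) and the example poses are illustrative:

```python
import numpy as np

def quat_from_axis_angle(axis, angle):
    """Unit quaternion (w, x, y, z) for a rotation of `angle` about `axis`."""
    axis = np.asarray(axis, float)
    axis = axis / np.linalg.norm(axis)
    return np.concatenate([[np.cos(angle / 2)], np.sin(angle / 2) * axis])

def slerp(q0, q1, t):
    """Spherical linear interpolation between unit quaternions q0 and q1."""
    d = np.dot(q0, q1)
    if d < 0:                       # take the shorter arc on the 4-sphere
        q1, d = -q1, -d
    d = min(d, 1.0)
    theta = np.arccos(d)
    if theta < 1e-8:                # nearly identical poses: no interpolation needed
        return q0
    return (np.sin((1 - t) * theta) * q0 + np.sin(t * theta) * q1) / np.sin(theta)

# Blend two neighboring subvolume poses at their midpoint:
# slerp the rotations, linearly interpolate the translations.
q_a = quat_from_axis_angle([0, 0, 1], 0.0)
q_b = quat_from_axis_angle([0, 0, 1], 0.8)
t_a, t_b = np.array([0.0, 0.0, 0.0]), np.array([4.0, 0.0, 0.0])
q_mid = slerp(q_a, q_b, 0.5)
t_mid = 0.5 * t_a + 0.5 * t_b
```

For rotations about a common axis, slerp at t = 0.5 lands exactly on the half-angle rotation, which is what makes the interpolated deformation field smooth rather than piecewise-constant across subvolume boundaries.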

  17. Nonrigid registration of carotid ultrasound and MR images using a "twisting and bending" model

    NASA Astrophysics Data System (ADS)

    Nanayakkara, Nuwan D.; Chiu, Bernard; Samani, Abbas; Spence, J. David; Parraga, Grace; Samarabandu, Jagath; Fenster, Aaron

    2008-03-01

    Atherosclerosis at the carotid bifurcation resulting in cerebral emboli is a major cause of ischemic stroke. Most strokes associated with carotid atherosclerosis can be prevented by lifestyle/dietary changes and pharmacological treatments if identified early by monitoring carotid plaque changes. Plaque composition information from magnetic resonance (MR) carotid images and dynamic characteristics information from 3D ultrasound (US) are necessary for developing and validating US imaging tools to identify vulnerable carotid plaques. Combining these images requires nonrigid registration to correct the nonlinear misalignments caused by relative twisting and bending in the neck due to different head positions during the two image acquisition sessions. The high degrees of freedom and large number of parameters associated with existing nonrigid image registration methods cause several problems, including unnatural alteration of plaque morphology, computational complexity, and low reliability. Our approach was to model the normal movement of the neck using a "twisting and bending" model with only six parameters for nonrigid registration. We evaluated our registration technique using intra-subject in-vivo 3D US and 3D MR carotid images acquired on the same day. We calculated the Mean Registration Error (MRE) between the segmented vessel surfaces in the target image and the registered image using a distance-based error metric after applying our "twisting and bending" model-based nonrigid registration algorithm. We achieved an average registration error of 1.33 ± 0.41 mm using our nonrigid registration technique. Visual inspection of segmented vessel surfaces also showed a substantial improvement in alignment with our nonrigid registration technique.

  18. MO-FG-204-02: Reference Image Selection in the Presence of Multiple Scan Realizations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ruan, D; Dou, T; Thomas, D

    Purpose: Fusing information from multiple correlated realizations (e.g., 4DCT) can improve image quality. This process often involves ill-conditioned and asymmetric nonlinear registration, and the proper selection of a reference image is important. This work proposes to examine post-registration variation indirectly for such selection, and develops further insights to reduce the number of cross-registrations needed. Methods: We consider each individual scan as a noisy point in the vicinity of an image manifold, related by motion. Nonrigid registration "transports" a scan along the manifold to the reference neighborhood, and the residual is a surrogate for local variation. To test this conjecture, 10 thoracic scans from the same session were reconstructed from a recently developed low-dose helical 4DCT protocol. Pairwise registration was repeated bi-directionally (81 times) and fusion was performed with each candidate reference. The fused image quality was assessed with SNR and CNR. Registration residuals in SSD, harmonic energy, and deformation Jacobian behavior were examined. The semi-symmetry is further utilized to reduce the number of registrations needed. Results: The comparison of image quality between single and fused images identified reduction of local intensity variance as the major contributor to image quality, boosting SNR and CNR five- to seven-fold. This observation further underscores the importance of good agreement across post-registration images. The triangle inequality on the SSD metric provides an efficient upper bound and surrogate for such disagreement. Empirical observation also confirms that fused images with high residual SSD have lower SNR and CNR than the ones with low or intermediate SSDs. Registration SSD is structurally close enough to symmetry for reduced computation. Conclusion: Registration residual is shown to be a good predictor of post-fusion image quality and can be used to identify good reference centers. Semi-symmetry of the registration residual further reduces computation cost. Supported in part by NIH R01 CA096679.
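    The triangle-inequality bound mentioned in the Results rests on root-SSD being a true metric (the Euclidean norm of the image difference); a minimal sketch on random images:

```python
import numpy as np

def d(a, b):
    """Root-SSD: the Euclidean distance between flattened images, a true metric."""
    return np.sqrt(np.sum((a - b) ** 2))

rng = np.random.default_rng(3)
a, b, c = (rng.random((16, 16)) for _ in range(3))

# The triangle inequality bounds d(a, c) using already-computed pairwise
# distances through an intermediate image b, avoiding a direct registration.
bound = d(a, b) + d(b, c)
```

This is why an upper bound on the disagreement between two scans can be read off from registrations already performed against a shared intermediate, rather than computing every pairwise registration.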

  19. Gaussian Process Interpolation for Uncertainty Estimation in Image Registration

    PubMed Central

    Wachinger, Christian; Golland, Polina; Reuter, Martin; Wells, William

    2014-01-01

    Intensity-based image registration requires resampling images on a common grid to evaluate the similarity function. The uncertainty of interpolation varies across the image, depending on the location of resampled points relative to the base grid. We propose to perform Bayesian inference with Gaussian processes, where the covariance matrix of the Gaussian process posterior distribution estimates the uncertainty in interpolation. The Gaussian process replaces a single image with a distribution over images that we integrate into a generative model for registration. Marginalization over resampled images leads to a new similarity measure that includes the uncertainty of the interpolation. We demonstrate that our approach increases the registration accuracy and propose an efficient approximation scheme that enables seamless integration with existing registration methods. PMID:25333127
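    A 1-D sketch of the idea, posterior variance of a Gaussian process as interpolation uncertainty, with an assumed RBF kernel and toy data (not the paper's model): uncertainty is lowest at base-grid points and grows for resampled points that fall between them.

```python
import numpy as np

def rbf(a, b, ell=1.0):
    """Squared-exponential (RBF) kernel between 1-D input vectors."""
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell ** 2)

x = np.arange(10.0)                         # base image grid
rng = np.random.default_rng(4)
y = np.sin(x) + 0.01 * rng.standard_normal(10)   # observed intensities
noise = 1e-4

xs = np.array([3.0, 3.5])                   # on-grid vs mid-voxel resampled points
K = rbf(x, x) + noise * np.eye(10)
Ks = rbf(x, xs)
Kss = rbf(xs, xs)

# Standard GP posterior: mean and covariance of the interpolated intensities.
Kinv_y = np.linalg.solve(K, y)
mean = Ks.T @ Kinv_y
cov = Kss - Ks.T @ np.linalg.solve(K, Ks)
var = np.diag(cov)                          # interpolation uncertainty per point
```

The diagonal of the posterior covariance plays the role described in the abstract: it quantifies how much the interpolated intensity should be trusted at each resampled location, which the similarity measure can then weight accordingly.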

  20. Registration of PET and CT images based on multiresolution gradient of mutual information demons algorithm for positioning esophageal cancer patients.

    PubMed

    Jin, Shuo; Li, Dengwang; Wang, Hongjun; Yin, Yong

    2013-01-07

    Accurate registration of 18F-FDG PET (positron emission tomography) and CT (computed tomography) images has important clinical significance in radiation oncology. PET and CT images are acquired from an 18F-FDG PET/CT scanner, but the two acquisition processes are separate and take a long time. As a result, there are global position errors and local deformable errors caused by respiratory movement or organ peristalsis. The purpose of this work was to implement and validate a deformable CT to PET image registration method in esophageal cancer to eventually facilitate accurate positioning of the tumor target on CT and improve the accuracy of radiation therapy. Global registration was first utilized to correct position errors between PET and CT images, aligning the two images as a whole. The Demons algorithm, based on the optical flow field, is fast and accurate, and the gradient of mutual information-based demons (GMI demons) algorithm adds an additional external force based on the gradient of mutual information (GMI) between the two images, which makes it suitable for multimodality image registration. In this paper, the GMI demons algorithm was used to achieve local deformable registration of PET and CT images, which can effectively reduce errors between internal organs. In addition, to speed up the registration process, maintain its robustness, and avoid local extrema, a multiresolution image pyramid structure was used before deformable registration. By quantitatively and qualitatively analyzing cases with esophageal cancer, the registration scheme proposed in this paper can improve registration accuracy and speed, which is helpful for precisely positioning the tumor target and developing the radiation treatment plan in clinical radiation therapy applications.
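    The multiresolution pyramid used before deformable registration can be sketched with simple 2x2 block averaging (one of several possible reductions; the abstract does not specify its downsampling filter):

```python
import numpy as np

def downsample2(img):
    """Halve resolution by 2x2 block averaging (a simple anti-aliased reduction)."""
    h, w = (img.shape[0] // 2) * 2, (img.shape[1] // 2) * 2
    img = img[:h, :w]
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def pyramid(img, levels=3):
    """Coarse-to-fine stack: result[0] is the coarsest level, result[-1] the original."""
    out = [img]
    for _ in range(levels - 1):
        out.append(downsample2(out[-1]))
    return out[::-1]

img = np.arange(64.0).reshape(8, 8)
levels = pyramid(img, levels=3)
```

Registration then starts at the coarsest level, where large displacements and local extrema are cheap to escape, and the recovered field is upsampled to initialize each finer level.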

  1. Registration of PET and CT images based on multiresolution gradient of mutual information demons algorithm for positioning esophageal cancer patients

    PubMed Central

    Jin, Shuo; Li, Dengwang; Yin, Yong

    2013-01-01

    Accurate registration of 18F-FDG PET (positron emission tomography) and CT (computed tomography) images has important clinical significance in radiation oncology. PET and CT images are acquired from an 18F-FDG PET/CT scanner, but the two acquisition processes are separate and take a long time. As a result, there are global position errors and local deformable errors caused by respiratory movement or organ peristalsis. The purpose of this work was to implement and validate a deformable CT to PET image registration method in esophageal cancer to eventually facilitate accurate positioning of the tumor target on CT and improve the accuracy of radiation therapy. Global registration was first utilized to correct position errors between PET and CT images, aligning the two images as a whole. The Demons algorithm, based on the optical flow field, is fast and accurate, and the gradient of mutual information-based demons (GMI demons) algorithm adds an additional external force based on the gradient of mutual information (GMI) between the two images, which makes it suitable for multimodality image registration. In this paper, the GMI demons algorithm was used to achieve local deformable registration of PET and CT images, which can effectively reduce errors between internal organs. In addition, to speed up the registration process, maintain its robustness, and avoid local extrema, a multiresolution image pyramid structure was used before deformable registration. By quantitatively and qualitatively analyzing cases with esophageal cancer, the registration scheme proposed in this paper can improve registration accuracy and speed, which is helpful for precisely positioning the tumor target and developing the radiation treatment plan in clinical radiation therapy applications. PACS numbers: 87.57.nj, 87.57.Q‐, 87.57.uk PMID:23318381

  2. A fast image registration approach of neural activities in light-sheet fluorescence microscopy images

    NASA Astrophysics Data System (ADS)

    Meng, Hui; Hui, Hui; Hu, Chaoen; Yang, Xin; Tian, Jie

    2017-03-01

    The ability to perform fast, single-neuron-resolution imaging of neural activity makes light-sheet fluorescence microscopy (LSFM) a powerful imaging technique for functional neural connectivity applications. State-of-the-art LSFM imaging systems can record the neuronal activity of the entire brain of small animals, such as zebrafish or C. elegans, at single-neuron resolution. However, stimulated and spontaneous movements of the animal brain result in inconsistent neuron positions during the recording process, and registering the acquired large-scale images with conventional methods is time consuming. In this work, we address the problem of fast registration of neural positions in stacks of LSFM images, which is necessary to register brain structures and activities. To achieve fast registration of neural activities, we present a rigid registration architecture implemented on a graphics processing unit (GPU). In this approach, the image stacks were preprocessed on the GPU by mean stretching to reduce the computational effort. The present image was registered to the previous image stack, which was taken as the reference. A fast Fourier transform (FFT) algorithm was used to calculate the shift of the image stack. The calculations for image registration were performed in different threads, while the preparation functionality was refactored and called only once by the master thread. We implemented our registration algorithm on an NVIDIA Quadro K4200 GPU under the Compute Unified Device Architecture (CUDA) programming environment. The experimental results showed that the registration computation can be completed within 550 ms for a full high-resolution brain image. Our approach also has potential to be used for other dynamic image registration tasks in biomedical applications.
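    The FFT-based shift estimation can be illustrated with classic phase correlation in 2-D NumPy; this is a standard textbook formulation, not the authors' CUDA code:

```python
import numpy as np

def phase_correlation(ref, mov):
    """Estimate the integer translation that maps mov back onto ref.

    Uses the Fourier shift theorem: the normalized cross-power spectrum of the
    two images has an inverse FFT that peaks at the relative displacement.
    """
    F = np.fft.fft2(ref) * np.conj(np.fft.fft2(mov))
    F /= np.maximum(np.abs(F), 1e-12)          # keep phase only
    corr = np.fft.ifft2(F).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Map peaks in the upper half of each axis to negative shifts.
    return tuple(p - s if p > s // 2 else p for p, s in zip(peak, corr.shape))

rng = np.random.default_rng(5)
ref = rng.random((64, 64))
mov = np.roll(ref, shift=(5, -3), axis=(0, 1))   # known displacement
shift = phase_correlation(ref, mov)
```

Because the whole estimate reduces to two forward FFTs, an element-wise product, and one inverse FFT, it maps naturally onto the per-thread GPU execution model the abstract describes.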

  3. a Band Selection Method for High Precision Registration of Hyperspectral Image

    NASA Astrophysics Data System (ADS)

    Yang, H.; Li, X.

    2018-04-01

    During the registration of hyperspectral images with high-spatial-resolution images, the large number of bands in a hyperspectral image makes it difficult to select bands with good registration performance, and poorly matching bands can reduce matching speed and accuracy. To solve this problem, an algorithm based on Cramér-Rao lower bound (CRLB) theory is proposed in this paper to select good matching bands. The algorithm applies CRLB theory to the study of registration accuracy and selects good matching bands by their CRLB parameters. Experiments show that the proposed algorithm can choose good matching bands and thus provide better data for the registration of hyperspectral and high-spatial-resolution images.
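
The selection workflow amounts to scoring each band and keeping the best-ranked ones. The score below is a simple gradient-energy proxy for spatial structure, not the paper's CRLB-derived parameter, so this is only a sketch of the workflow:

```python
import numpy as np

def rank_bands_by_gradient_energy(cube):
    """Rank hyperspectral bands (cube: bands x H x W) by mean gradient
    energy, a crude stand-in for a CRLB-based registrability score.
    Returns band indices, best first."""
    scores = []
    for b in range(cube.shape[0]):
        gy, gx = np.gradient(cube[b].astype(float))
        scores.append((gx**2 + gy**2).mean())
    return np.argsort(scores)[::-1]

# A band with a strong edge outranks a featureless band.
flat = np.ones((32, 32))
edge = np.ones((32, 32)); edge[:, 16:] = 5.0
order = rank_bands_by_gradient_energy(np.stack([flat, edge]))
```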

  4. Topology preserving non-rigid image registration using time-varying elasticity model for MRI brain volumes.

    PubMed

    Ahmad, Sahar; Khan, Muhammad Faisal

    2015-12-01

    In this paper, we present a new non-rigid image registration method that imposes a topology preservation constraint on the deformation. We propose to incorporate the time-varying elasticity model into the deformable image matching procedure and constrain the Jacobian determinant of the transformation over the entire image domain. The motion of elastic bodies is governed by a hyperbolic partial differential equation, generally termed the elastodynamics wave equation, which we propose to use as a deformation model. We carried out clinical image registration experiments on 3D magnetic resonance brain scans from the IBSR database. The results of the proposed registration approach, in terms of Kappa index and relative overlap computed over the subcortical structures, were compared against existing topology-preserving non-rigid image registration methods and a non-topology-preserving variant of our proposed registration scheme. The Jacobian determinant maps obtained with our proposed registration method were qualitatively and quantitatively analyzed. The results demonstrated that the proposed scheme provides good registration accuracy with smooth transformations, thereby guaranteeing the preservation of topology. Copyright © 2015 Elsevier Ltd. All rights reserved.
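
The topology-preservation check rests on the Jacobian determinant of the transformation x + u(x): a positive determinant everywhere means no folding. A minimal 2-D version of the determinant map (names illustrative):

```python
import numpy as np

def jacobian_determinant_2d(disp):
    """Jacobian determinant map of the transformation x + u(x), where
    disp has shape (2, H, W) holding the (u_y, u_x) displacement
    components. det J > 0 everywhere indicates local topology
    preservation."""
    uy, ux = disp
    duy_dy, duy_dx = np.gradient(uy)
    dux_dy, dux_dx = np.gradient(ux)
    # J = I + grad(u); expand the 2x2 determinant directly.
    return (1.0 + duy_dy) * (1.0 + dux_dx) - duy_dx * dux_dy

# The identity transformation (zero displacement) has det J == 1 everywhere.
det = jacobian_determinant_2d(np.zeros((2, 20, 20)))
```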

  5. MRI - 3D Ultrasound - X-ray Image Fusion with Electromagnetic Tracking for Transendocardial Therapeutic Injections: In-vitro Validation and In-vivo Feasibility

    PubMed Central

    Hatt, Charles R.; Jain, Ameet K.; Parthasarathy, Vijay; Lang, Andrew; Raval, Amish N.

    2014-01-01

    Myocardial infarction (MI) is one of the leading causes of death in the world. Small animal studies have shown that stem-cell therapy offers dramatic functional improvement post-MI. An endomyocardial catheter injection approach to therapeutic agent delivery has been proposed to improve efficacy through increased cell retention. Accurate targeting is critical for reaching areas of greatest therapeutic potential while avoiding a life-threatening myocardial perforation. Multimodal image fusion has been proposed as a way to improve these procedures by augmenting traditional intra-operative imaging modalities with high resolution pre-procedural images. Previous approaches have suffered from a lack of real-time tissue imaging and dependence on X-ray imaging to track devices, leading to increased ionizing radiation dose. In this paper, we present a new image fusion system for catheter-based targeted delivery of therapeutic agents. The system registers real-time 3D echocardiography, magnetic resonance, X-ray, and electromagnetic sensor tracking within a single flexible framework. All system calibrations and registrations were validated and found to have target registration errors less than 5 mm in the worst case. Injection accuracy was validated in a motion enabled cardiac injection phantom, where targeting accuracy ranged from 0.57 to 3.81 mm. Clinical feasibility was demonstrated with in-vivo swine experiments, where injections were successfully made into targeted regions of the heart. PMID:23561056
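
Target registration error for point-based calibrations like the ones validated here can be sketched with a least-squares rigid (Kabsch) fit between corresponding fiducial points; the point values below are made up:

```python
import numpy as np

def rigid_fit(src, dst):
    """Least-squares rigid transform (R, t) mapping src onto dst
    (Kabsch/Procrustes). src, dst are (N, 3) corresponding points."""
    cs, cd = src.mean(0), dst.mean(0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cd - R @ cs
    return R, t

def target_registration_error(R, t, src, dst):
    """Mean Euclidean distance between mapped source points and targets."""
    return np.linalg.norm((src @ R.T + t) - dst, axis=1).mean()

rng = np.random.default_rng(1)
pts = rng.random((10, 3)) * 100
theta = 0.3
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0, 0.0, 1.0]])
moved = pts @ Rz.T + np.array([5.0, -2.0, 7.0])
R, t = rigid_fit(pts, moved)
tre = target_registration_error(R, t, pts, moved)
```

With noise-free correspondences the recovered transform is exact, so the TRE is numerically zero; real fiducial noise is what produces the millimeter-scale errors reported in the abstract.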

  6. Optimized SIFTFlow for registration of whole-mount histology to reference optical images

    PubMed Central

    Shojaii, Rushin; Martel, Anne L.

    2016-01-01

    Abstract. The registration of two-dimensional histology images to reference images from other modalities is an important preprocessing step in the reconstruction of three-dimensional histology volumes. This is a challenging problem because of the differences in the appearances of histology images and other modalities, and the presence of large nonrigid deformations which occur during slide preparation. This paper shows the feasibility of using densely sampled scale-invariant feature transform (SIFT) features and a SIFTFlow deformable registration algorithm for coregistering whole-mount histology images with blockface optical images. We present a method for jointly optimizing the regularization parameters used by the SIFTFlow objective function and use it to determine the most appropriate values for the registration of breast lumpectomy specimens. We demonstrate that tuning the regularization parameters results in significant improvements in accuracy, and we also show that SIFTFlow outperforms a previously described edge-based registration method. The accuracy of histology-to-blockface image registration using the optimized SIFTFlow method was assessed on an independent test set of images from five different lumpectomy specimens, and the mean registration error was 0.32±0.22 mm. PMID:27774494

  7. Non-rigid CT/CBCT to CBCT registration for online external beam radiotherapy guidance

    NASA Astrophysics Data System (ADS)

    Zachiu, Cornel; de Senneville, Baudouin Denis; Tijssen, Rob H. N.; Kotte, Alexis N. T. J.; Houweling, Antonetta C.; Kerkmeijer, Linda G. W.; Lagendijk, Jan J. W.; Moonen, Chrit T. W.; Ries, Mario

    2018-01-01

    Image-guided external beam radiotherapy (EBRT) allows radiation dose deposition with a high degree of accuracy and precision. Guidance is usually achieved by estimating the displacements, via image registration, between cone beam computed tomography (CBCT) and computed tomography (CT) images acquired at different stages of the therapy. The resulting displacements are then used to reposition the patient such that the location of the tumor at the time of treatment matches its position during planning. Moreover, ongoing research aims to use CBCT-CT image registration for online plan adaptation. However, CBCT images are usually acquired using a small number of x-ray projections and/or low beam intensities. This often leads to images with low contrast, low signal-to-noise ratio and artifacts, which ends up hampering the image registration process. Previous studies addressed this by integrating additional image processing steps into the registration procedure. However, these steps are usually designed for particular image acquisition schemes, limiting their use to a case-by-case basis. In the current study we address CT to CBCT and CBCT to CBCT registration by means of the recently proposed EVolution registration algorithm. Contrary to previous approaches, EVolution does not require the integration of additional image processing steps in the registration scheme. Moreover, the algorithm requires a low number of input parameters, is easily parallelizable and provides an elastic deformation on a point-by-point basis. Results have shown that relative to a pure CT-based registration, the intrinsic artifacts present in typical CBCT images only have a sub-millimeter impact on the accuracy and precision of the estimated deformation. In addition, the algorithm has low computational requirements, which are compatible with online image-based guidance of EBRT treatments.

  8. Integral-geometry characterization of photobiomodulation effects on retinal vessel morphology

    PubMed Central

    Barbosa, Marconi; Natoli, Riccardo; Valter, Kriztina; Provis, Jan; Maddess, Ted

    2014-01-01

    The morphological characterization of quasi-planar structures represented by gray-scale images is challenging when object identification is sub-optimal due to registration artifacts. We propose two alternative procedures that enhance object identification in the integral-geometry morphological image analysis (MIA) framework. The first variant streamlines the framework by introducing an active contours segmentation process whose time step is recycled as a multi-scale parameter. In the second variant, we used the refined object identification produced in the first variant to perform the standard MIA with exact dilation radius as the multi-scale parameter. Using this enhanced MIA we quantify the extent of vaso-obliteration in oxygen-induced retinopathic vascular growth, the preventative effect of photobiomodulation by exposure to near-infrared (NIR, 670 nm) light during tissue development, and the lack of adverse effects due to exposure to NIR light. PMID:25071966

  9. A Multistage Approach for Image Registration.

    PubMed

    Bowen, Francis; Hu, Jianghai; Du, Eliza Yingzi

    2016-09-01

    Successful image registration is an important step for object recognition, target detection, remote sensing, multimodal content fusion, scene blending, and disaster assessment and management. The geometric and photometric variations between images adversely affect the ability of an algorithm to estimate the transformation parameters that relate the two images. Local deformations, lighting conditions, object obstructions, and perspective differences all contribute to the challenges faced by traditional registration techniques. In this paper, a novel multistage registration approach is proposed that is resilient to viewpoint differences, image content variations, and lighting conditions. Robust registration is realized through a novel region descriptor which couples the spatial and texture characteristics of invariant feature points. The proposed region descriptor is exploited in a multistage process, which allows the graph-based descriptor to be used in many scenarios and thus lets the algorithm be applied to a broader set of images. Each successive stage of the registration technique is evaluated through an effective similarity metric which determines the subsequent action. The registration of aerial and street view images from pre- and post-disaster scenes provides strong evidence that the proposed method estimates more accurate global transformation parameters than traditional feature-based methods. Experimental results show the robustness and accuracy of the proposed multistage image registration methodology.

  10. A fast and fully automatic registration approach based on point features for multi-source remote-sensing images

    NASA Astrophysics Data System (ADS)

    Yu, Le; Zhang, Dengrong; Holden, Eun-Jung

    2008-07-01

    Automatic registration of multi-source remote-sensing images is a difficult task, as it must deal with the varying illuminations and resolutions of the images, different perspectives, and local deformations within the images. This paper proposes a fully automatic and fast non-rigid image registration technique that addresses those issues. The proposed technique performs a pre-registration process that coarsely aligns the input image to the reference image by automatically detecting matching points using the scale-invariant feature transform (SIFT) method and an affine transformation model. Once the coarse registration is completed, it performs a fine-scale registration process based on a piecewise linear transformation technique using feature points detected by the Harris corner detector. The registration process first finds tie point pairs between the input and reference images by detecting Harris corners and applying a cross-matching strategy based on a wavelet pyramid for fast search speed. Tie point pairs with large errors are pruned in an error-checking step. The input image is then rectified using triangulated irregular networks (TINs) to deal with irregular local deformations caused by terrain fluctuation. For each triangular facet of the TIN, an affine transformation is estimated and applied for rectification. Experiments with QuickBird, SPOT5, SPOT4, and TM remote-sensing images of the Hangzhou area in China demonstrate the efficiency and accuracy of the proposed technique for multi-source remote-sensing image registration.
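
The coarse affine estimation from matched points (and the per-facet affine rectification) reduces to a linear least-squares fit; the tie points below are synthetic:

```python
import numpy as np

def fit_affine_2d(src, dst):
    """Least-squares 2-D affine transform A (2x3) such that
    dst ≈ A @ [x, y, 1]^T, estimated from matched tie points (N, 2)."""
    n = src.shape[0]
    X = np.hstack([src, np.ones((n, 1))])        # homogeneous coords (N, 3)
    P, *_ = np.linalg.lstsq(X, dst, rcond=None)  # solve X @ P ≈ dst
    return P.T                                   # (2, 3) affine matrix

# Recover a known affine from six synthetic tie-point pairs.
rng = np.random.default_rng(2)
src = rng.random((6, 2)) * 100
true_A = np.array([[1.02, 0.05, 3.0],
                   [-0.04, 0.98, -7.0]])
dst = np.hstack([src, np.ones((6, 1))]) @ true_A.T
A = fit_affine_2d(src, dst)
```

With noisy real matches, the error-checking/pruning step the abstract describes would precede this fit.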

  11. Non-rigid registration between 3D ultrasound and CT images of the liver based on intensity and gradient information

    NASA Astrophysics Data System (ADS)

    Lee, Duhgoon; Nam, Woo Hyun; Lee, Jae Young; Ra, Jong Beom

    2011-01-01

    In order to utilize both ultrasound (US) and computed tomography (CT) images of the liver concurrently for medical applications such as diagnosis and image-guided intervention, non-rigid registration between these two types of images is an essential step, as local deformation between US and CT images exists due to the different respiratory phases involved and due to the probe pressure that occurs in US imaging. This paper introduces a voxel-based non-rigid registration algorithm between the 3D B-mode US and CT images of the liver. In the proposed algorithm, to improve the registration accuracy, we utilize the surface information of the liver and gallbladder in addition to the information of the vessels inside the liver. For an effective correlation between US and CT images, we treat those anatomical regions separately according to their characteristics in US and CT images. Based on a novel objective function using a 3D joint histogram of the intensity and gradient information, vessel-based non-rigid registration is followed by surface-based non-rigid registration in sequence, which improves the registration accuracy. The proposed algorithm is tested for ten clinical datasets and quantitative evaluations are conducted. Experimental results show that the registration error between anatomical features of US and CT images is less than 2 mm on average, even with local deformation due to different respiratory phases and probe pressure. In addition, the lesion registration error is less than 3 mm on average with a maximum of 4.5 mm that is considered acceptable for clinical applications.
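
The objective function is built on a 3-D joint histogram of intensity and gradient information; a simplified 2-D, single-image analogue shows the basic construction (function name and bin count are illustrative):

```python
import numpy as np

def intensity_gradient_histogram(img, n_bins=32):
    """Joint histogram over (intensity, gradient magnitude) for one image.
    The paper's objective uses a 3-D joint histogram across both US and
    CT; this 2-D version only illustrates the statistic."""
    img = img.astype(float)
    gy, gx = np.gradient(img)
    gmag = np.hypot(gx, gy)
    hist, _, _ = np.histogram2d(img.ravel(), gmag.ravel(), bins=n_bins)
    return hist

rng = np.random.default_rng(3)
img = rng.random((48, 48))
h = intensity_gradient_histogram(img)
```

Normalizing such a histogram gives the joint probability estimates from which similarity measures like mutual information are computed.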

  12. Evaluation of 4-dimensional Computed Tomography to 4-dimensional Cone-Beam Computed Tomography Deformable Image Registration for Lung Cancer Adaptive Radiation Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Balik, Salim; Weiss, Elisabeth; Jan, Nuzhat

    2013-06-01

    Purpose: To evaluate 2 deformable image registration (DIR) algorithms for the purpose of contour mapping to support image-guided adaptive radiation therapy with 4-dimensional cone-beam CT (4DCBCT). Methods and Materials: One planning 4D fan-beam CT (4DFBCT) and 7 weekly 4DCBCT scans were acquired for 10 locally advanced non-small cell lung cancer patients. The gross tumor volume was delineated by a physician in all 4D images. End-of-inspiration phase planning 4DFBCT was registered to the corresponding phase in weekly 4DCBCT images for day-to-day registrations. For phase-to-phase registration, the end-of-inspiration phase from each 4D image was registered to the end-of-expiration phase. Two DIR algorithms—small deformation inverse consistent linear elastic (SICLE) and Insight Toolkit diffeomorphic demons (DEMONS)—were evaluated. Physician-delineated contours were compared with the warped contours by using the Dice similarity coefficient (DSC), average symmetric distance, and false-positive and false-negative indices. The DIR results are compared with rigid registration of tumor. Results: For day-to-day registrations, the mean DSC was 0.75 ± 0.09 with SICLE, 0.70 ± 0.12 with DEMONS, 0.66 ± 0.12 with rigid-tumor registration, and 0.60 ± 0.14 with rigid-bone registration. Results were comparable to intraobserver variability calculated from phase-to-phase registrations as well as measured interobserver variation for 1 patient. SICLE and DEMONS, when compared with rigid-bone (4.1 mm) and rigid-tumor (3.6 mm) registration, respectively reduced the average symmetric distance to 2.6 and 3.3 mm. On average, SICLE and DEMONS increased the DSC to 0.80 and 0.79, respectively, compared with rigid-tumor (0.78) registrations for 4DCBCT phase-to-phase registrations. Conclusions: Deformable image registration achieved comparable accuracy to reported interobserver delineation variability and higher accuracy than rigid-tumor registration.
Deformable image registration performance varied with the algorithm and the patient.
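
The Dice similarity coefficient used above to score warped contours is straightforward to compute from binary masks:

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks:
    DSC = 2|A ∩ B| / (|A| + |B|)."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

# Two 6x6 squares overlapping in a 4x4 region: DSC = 2*16 / (36+36) = 4/9.
m1 = np.zeros((10, 10), bool); m1[2:8, 2:8] = True
m2 = np.zeros((10, 10), bool); m2[4:10, 4:10] = True
```

DSC ranges from 0 (no overlap) to 1 (identical masks), which is why values near 0.8 in the study indicate strong contour agreement.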

  13. Accurate registration of temporal CT images for pulmonary nodules detection

    NASA Astrophysics Data System (ADS)

    Yan, Jichao; Jiang, Luan; Li, Qiang

    2017-02-01

    Interpretation of temporal CT images can help radiologists detect subtle interval changes across sequential examinations. The purpose of this study was to develop a fully automated scheme for accurate registration of temporal CT images for pulmonary nodule detection. Our method consisted of three major registration steps. First, affine transformation was applied to the segmented lung region to obtain globally coarse-registered images. Second, B-spline-based free-form deformation (FFD) was used to refine the coarse registration. Third, the demons algorithm was performed to align the feature points extracted from the images registered in the second step and the reference images. Our database consisted of 91 temporal CT cases obtained from Beijing 301 Hospital and Shanghai Changzheng Hospital. The preliminary results showed that approximately 96.7% of cases obtained accurate registration based on subjective observation. The subtraction images between the reference images and the rigidly and non-rigidly registered images could effectively remove normal structures (e.g., blood vessels) and retain abnormalities (e.g., pulmonary nodules). This would be useful for the screening of lung cancer in our future study.

  14. Framework for Processing Videos in the Presence of Spatially Varying Motion Blur

    DTIC Science & Technology

    2014-04-18

    international journals. Expected impact: the related problems of image restoration, registration, dehazing, and superresolution, all in the presence of blurring ... real-time, it can be very valuable for applications involving aerial surveillance. Our work on superresolution will be especially valuable while ... "unified approach to superresolution and multichannel blind deconvolution," Trans. Img. Proc., vol. 16, no. 9, pp. 2322–2332, Sept. 2007.

  15. SU-E-J-248: Comparative Study of Two Image Registration for Image-Guided Radiation Therapy in Esophageal Cancer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shang, K; Wang, J; Liu, D

    2014-06-01

    Purpose: Image-guided radiation therapy (IGRT) is one of the major treatments for esophageal cancer. Gray-value registration and bone registration are two kinds of image registration; the purpose of this work is to compare which one is more suitable for esophageal cancer patients. Methods: Twenty-three esophageal cancer patients were treated on an Elekta Synergy; CBCT images were acquired and automatically registered to planning kilovoltage CT scans using gray-value or bone registration. The setup errors were measured along the X, Y, and Z axes, respectively, and the two sets of setup errors were analysed with a paired t-test. Results: Four hundred and five groups of CBCT images were available, and the systematic and random setup errors (cm) in the X, Y, and Z directions were 0.35, 0.63, 0.29 and 0.31, 0.53, 0.21 with gray-value registration, versus 0.37, 0.64, 0.26 and 0.32, 0.55, 0.20 with bone registration, respectively. Between bone registration and gray-value registration, the setup errors in the X and Z axes showed significant differences: in the Y axis, the t-value is 0.256 (P > 0.05); in the X axis, the t-value is 5.287 (P < 0.05); in the Z axis, the t-value is −5.138 (P < 0.05). Conclusion: Gray-value registration is recommended in image-guided radiotherapy for esophageal cancer and other thoracic tumors; manual registration can be applied when necessary. Bone registration is more suitable for head and pelvic tumors, where the anatomy consists of interconnected and immobile bone tissue.
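
The paired t-test used to compare the two registration modes follows directly from the t-statistic formula. The setup-error values below are made-up illustrations, not the study's data:

```python
import math

def paired_t(a, b):
    """Paired (matched) t statistic: t = mean(d) / (sd(d) / sqrt(n)),
    where d are the per-fraction differences a - b."""
    d = [x - y for x, y in zip(a, b)]
    n = len(d)
    mean = sum(d) / n
    var = sum((x - mean) ** 2 for x in d) / (n - 1)  # sample variance
    return mean / math.sqrt(var / n)

# Illustrative setup errors (cm) from two registration modes on the
# same six fractions (hypothetical numbers).
gray = [0.35, 0.41, 0.28, 0.33, 0.39, 0.30]
bone = [0.37, 0.44, 0.31, 0.35, 0.42, 0.33]
t = paired_t(gray, bone)
```

The pairing matters: each fraction is measured under both modes, so the test operates on the per-fraction differences rather than on two independent samples.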

  16. Intensity-Based Registration for Lung Motion Estimation

    NASA Astrophysics Data System (ADS)

    Cao, Kunlin; Ding, Kai; Amelon, Ryan E.; Du, Kaifang; Reinhardt, Joseph M.; Raghavan, Madhavan L.; Christensen, Gary E.

    Image registration plays an important role within pulmonary image analysis. The task of registration is to find the spatial mapping that brings two images into alignment. Registration algorithms designed for matching 4D lung scans, or two 3D scans acquired at different inflation levels, can capture the temporal changes in position and shape of the region of interest. Accurate registration is critical to post-analysis of lung mechanics and motion estimation. In this chapter, we discuss lung-specific adaptations of intensity-based registration methods for 3D/4D lung images and review approaches for assessing registration accuracy. Then we introduce methods for estimating tissue motion and studying lung mechanics. Finally, we discuss methods for assessing and quantifying specific volume change, specific ventilation, strain/stretch information, and lobar sliding.

  17. An Improved InSAR Image Co-Registration Method for Pairs with Relatively Big Distortions or Large Incoherent Areas

    PubMed Central

    Chen, Zhenwei; Zhang, Lei; Zhang, Guo

    2016-01-01

    Co-registration is one of the most important steps in interferometric synthetic aperture radar (InSAR) data processing. The standard offset-measurement method based on cross-correlating uniformly distributed patches takes no account of specific geometric transformation between images or characteristics of ground scatterers. Hence, it is inefficient and difficult to obtain satisfying co-registration results for image pairs with relatively big distortion or large incoherent areas. Given this, an improved co-registration strategy is proposed in this paper which takes both the geometric features and image content into consideration. Firstly, some geometric transformations including scale, flip, rotation, and shear between images were eliminated based on the geometrical information, and the initial co-registration polynomial was obtained. Then the registration points were automatically detected by integrating the signal-to-clutter-ratio (SCR) thresholds and the amplitude information, and a further co-registration process was performed to refine the polynomial. Several comparison experiments were carried out using 2 TerraSAR-X data from the Hong Kong airport and 21 PALSAR data from the Donghai Bridge. Experiment results demonstrate that the proposed method brings accuracy and efficiency improvements for co-registration and processing abilities in the cases of big distortion between images or large incoherent areas in the images. For most co-registrations, the proposed method can enhance the reliability and applicability of co-registration and thus promote the automation to a higher level. PMID:27649207

  18. An Improved InSAR Image Co-Registration Method for Pairs with Relatively Big Distortions or Large Incoherent Areas.

    PubMed

    Chen, Zhenwei; Zhang, Lei; Zhang, Guo

    2016-09-17

    Co-registration is one of the most important steps in interferometric synthetic aperture radar (InSAR) data processing. The standard offset-measurement method based on cross-correlating uniformly distributed patches takes no account of specific geometric transformation between images or characteristics of ground scatterers. Hence, it is inefficient and difficult to obtain satisfying co-registration results for image pairs with relatively big distortion or large incoherent areas. Given this, an improved co-registration strategy is proposed in this paper which takes both the geometric features and image content into consideration. Firstly, some geometric transformations including scale, flip, rotation, and shear between images were eliminated based on the geometrical information, and the initial co-registration polynomial was obtained. Then the registration points were automatically detected by integrating the signal-to-clutter-ratio (SCR) thresholds and the amplitude information, and a further co-registration process was performed to refine the polynomial. Several comparison experiments were carried out using 2 TerraSAR-X data from the Hong Kong airport and 21 PALSAR data from the Donghai Bridge. Experiment results demonstrate that the proposed method brings accuracy and efficiency improvements for co-registration and processing abilities in the cases of big distortion between images or large incoherent areas in the images. For most co-registrations, the proposed method can enhance the reliability and applicability of co-registration and thus promote the automation to a higher level.

  19. WE-AB-BRA-12: Virtual Endoscope Tracking for Endoscopy-CT Image Registration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ingram, W; Rao, A; Wendt, R

    Purpose: The use of endoscopy in radiotherapy will remain limited until we can register endoscopic video to CT using standard clinical equipment. In this phantom study we tested a registration method using virtual endoscopy to measure CT-space positions from endoscopic video. Methods: Our phantom is a contorted clay cylinder with 2-mm-diameter markers in the luminal surface. These markers are visible on both CT and endoscopic video. Virtual endoscope images were rendered from a polygonal mesh created by segmenting the phantom’s luminal surface on CT. We tested registration accuracy by tracking the endoscope’s 6-degree-of-freedom coordinates frame-to-frame in a video recorded as it moved through the phantom, and using these coordinates to measure CT-space positions of markers visible in the final frame. To track the endoscope we used the Nelder-Mead method to search for coordinates that render the virtual frame most similar to the next recorded frame. We measured the endoscope’s initial-frame coordinates using a set of visible markers, and for image similarity we used a combination of mutual information and gradient alignment. CT-space marker positions were measured by projecting their final-frame pixel addresses through the virtual endoscope to intersect with the mesh. Registration error was quantified as the distance between this intersection and the marker’s manually-selected CT-space position. Results: Tracking succeeded for 6 of 8 videos, for which the mean registration error was 4.8±3.5 mm (24 measurements total). The mean error in the axial direction (3.1±3.3 mm) was larger than in the sagittal or coronal directions (2.0±2.3 mm, 1.7±1.6 mm). In the other 2 videos, the virtual endoscope got stuck in a false minimum. Conclusion: Our method can successfully track the position and orientation of an endoscope, and it provides accurate spatial mapping from endoscopic video to CT. This method will serve as a foundation for an endoscopy-CT registration framework that is clinically valuable and requires no specialized equipment.
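
The frame-to-frame tracking step is a derivative-free search over the 6-DOF pose. A sketch using SciPy's Nelder-Mead implementation with a stand-in similarity function (the study's actual score combines mutual information and gradient alignment between rendered and recorded frames, which is not reproduced here):

```python
import numpy as np
from scipy.optimize import minimize

def neg_similarity(pose, target_pose):
    """Stand-in for the negated image-similarity score between the frame
    rendered at `pose` and the next recorded frame; here just a squared
    distance to a known pose for illustration."""
    return float(np.sum((pose - target_pose) ** 2))

# Search for the 6-DOF pose (x, y, z, roll, pitch, yaw), starting from
# the previous frame's pose, as in the abstract's frame-to-frame loop.
prev_pose = np.zeros(6)
true_pose = np.array([1.0, -0.5, 2.0, 0.02, -0.01, 0.03])
res = minimize(neg_similarity, prev_pose, args=(true_pose,),
               method="Nelder-Mead",
               options={"xatol": 1e-6, "fatol": 1e-9, "maxiter": 5000})
```

Nelder-Mead is attractive here because the rendering-based similarity has no usable analytic gradient; the false-minimum failures reported for 2 of 8 videos are a known hazard of such local simplex searches.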

  20. A GPU-based symmetric non-rigid image registration method in human lung.

    PubMed

    Haghighi, Babak; D Ellingwood, Nathan; Yin, Youbing; Hoffman, Eric A; Lin, Ching-Long

    2018-03-01

    Quantitative computed tomography (QCT) of the lungs plays an increasing role in identifying sub-phenotypes of pathologies previously lumped into broad categories such as chronic obstructive pulmonary disease and asthma. Methods for image matching and linking multiple lung volumes have proven useful in linking structure to function and in the identification of regional longitudinal changes. Here, we seek to improve the accuracy of image matching via the use of a symmetric multi-level non-rigid registration employing an inverse consistent (IC) transformation whereby images are registered both in the forward and reverse directions. To develop the symmetric method, two similarity measures, the sum of squared intensity difference (SSD) and the sum of squared tissue volume difference (SSTVD), were used. The method is based on a novel generic mathematical framework to include forward and backward transformations, simultaneously, eliminating the need to compute the inverse transformation. Two implementations were used to assess the proposed method: a two-dimensional (2-D) implementation using synthetic examples with SSD, and a multi-core CPU and graphics processing unit (GPU) implementation with SSTVD for three-dimensional (3-D) human lung datasets (six normal adults studied at total lung capacity (TLC) and functional residual capacity (FRC)). Success was evaluated in terms of the IC transformation consistency serving to link TLC to FRC. 2-D registration on synthetic images, using both symmetric and non-symmetric SSD methods, and comparison of displacement fields showed that the symmetric method gave a symmetrical grid shape and reduced IC errors, with the mean values of IC errors decreased by 37%. 
Results for both symmetric and non-symmetric transformations of human datasets showed that the symmetric method gave better results for IC errors in all cases, with mean values of IC errors for the symmetric method lower than the non-symmetric methods using both SSD and SSTVD. The GPU version demonstrated an average of 43 times speedup and ~5.2 times speedup over the single-threaded and 12-threaded CPU versions, respectively. Run times with the GPU were as fast as 2 min. The symmetric method improved the inverse consistency, aiding the use of image registration in the QCT-based evaluation of the lung.
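
The inverse-consistency (IC) error that the symmetric method reduces can be computed by composing the forward and backward displacement fields; nearest-neighbour sampling keeps this sketch dependency-free (a real implementation would interpolate):

```python
import numpy as np

def ic_error(fwd, bwd):
    """Inverse-consistency error of forward/backward displacement fields
    of shape (2, H, W): e(x) = u(x) + v(x + u(x)). For perfectly
    inverse-consistent transforms the composition is the identity and
    e vanishes."""
    H, W = fwd.shape[1:]
    yy, xx = np.mgrid[0:H, 0:W]
    # Sample the backward field at the forward-warped positions.
    ys = np.clip(np.rint(yy + fwd[0]).astype(int), 0, H - 1)
    xs = np.clip(np.rint(xx + fwd[1]).astype(int), 0, W - 1)
    ey = fwd[0] + bwd[0][ys, xs]
    ex = fwd[1] + bwd[1][ys, xs]
    return np.hypot(ey, ex)

# A constant translation and its negation are exact inverses: zero IC error.
fwd = np.zeros((2, 16, 16)); fwd[0] += 2.0   # shift 2 pixels in y
bwd = -fwd
err = ic_error(fwd, bwd)
```

Non-symmetric registration estimates the two fields independently, so this error is generally nonzero; the symmetric formulation in the abstract optimizes both directions jointly to drive it down.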

  1. Automated bone segmentation from dental CBCT images using patch-based sparse representation and convex optimization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Li; Gao, Yaozong; Shi, Feng

    Purpose: Cone-beam computed tomography (CBCT) is an increasingly utilized imaging modality for the diagnosis and treatment planning of the patients with craniomaxillofacial (CMF) deformities. Accurate segmentation of CBCT image is an essential step to generate three-dimensional (3D) models for the diagnosis and treatment planning of the patients with CMF deformities. However, due to the poor image quality, including very low signal-to-noise ratio and the widespread image artifacts such as noise, beam hardening, and inhomogeneity, it is challenging to segment the CBCT images. In this paper, the authors present a new automatic segmentation method to address these problems. Methods: To segmentmore » CBCT images, the authors propose a new method for fully automated CBCT segmentation by using patch-based sparse representation to (1) segment bony structures from the soft tissues and (2) further separate the mandible from the maxilla. Specifically, a region-specific registration strategy is first proposed to warp all the atlases to the current testing subject and then a sparse-based label propagation strategy is employed to estimate a patient-specific atlas from all aligned atlases. Finally, the patient-specific atlas is integrated into amaximum a posteriori probability-based convex segmentation framework for accurate segmentation. Results: The proposed method has been evaluated on a dataset with 15 CBCT images. The effectiveness of the proposed region-specific registration strategy and patient-specific atlas has been validated by comparing with the traditional registration strategy and population-based atlas. The experimental results show that the proposed method achieves the best segmentation accuracy by comparison with other state-of-the-art segmentation methods. 
Conclusions: The authors have proposed a new CBCT segmentation method using patch-based sparse representation and convex optimization, which achieves accurate segmentation results on the 15-patient CBCT dataset.
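The sparse label propagation step in such atlas-based pipelines can be illustrated with a simplified, similarity-weighted stand-in; the Gaussian weights below are a hypothetical substitute for the sparse-coding coefficients the authors actually solve for:

```python
import numpy as np

def patch_label_fusion(target_patch, atlas_patches, atlas_labels, beta=1.0):
    """Propagate labels from aligned atlas patches to a target patch.

    Similarity weights stand in for sparse-coding coefficients
    (a simplification of the paper's method).
    """
    # Squared intensity distance between the target and each atlas patch
    d = np.array([np.sum((target_patch - p) ** 2) for p in atlas_patches])
    w = np.exp(-beta * d)      # higher similarity -> larger weight
    w /= w.sum()
    return float(np.dot(w, atlas_labels))  # fused foreground probability
```

With a target patch identical to a foreground-labeled atlas patch and far from a background-labeled one, the fused probability approaches 1.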

  2. A hybrid multimodal non-rigid registration of MR images based on diffeomorphic demons.

    PubMed

    Lu, Huanxiang; Cattin, Philippe C; Reyes, Mauricio

    2010-01-01

In this paper we present a novel hybrid approach for multimodal medical image registration based on diffeomorphic demons. Diffeomorphic demons have proven to be a robust and efficient approach to intensity-based image registration. A very recent extension even allows the use of mutual information (MI) as a similarity measure to register multimodal images. However, due to the intensity correspondence uncertainty in some anatomical parts, it is difficult for a purely intensity-based algorithm to solve the registration problem. We therefore propose to combine the resulting transformations from both intensity-based and landmark-based methods for multimodal non-rigid registration based on diffeomorphic demons. Several experiments on different types of MR images show that a better anatomical correspondence between the images can be obtained with the hybrid approach than with either intensity information or landmarks alone.

  3. Image registration assessment in radiotherapy image guidance based on control chart monitoring.

    PubMed

    Xia, Wenyao; Breen, Stephen L

    2018-04-01

Image guidance with cone-beam computed tomography in radiotherapy ensures the precision and accuracy of patient positioning prior to treatment delivery. During the image guidance process, operators must expend considerable effort to evaluate image guidance quality before correcting a patient's position. This work proposes an image registration assessment method based on control chart monitoring to reduce that effort. Using a control chart plotted from each patient's daily registration scores, the proposed method can quickly detect both alignment errors and image quality inconsistency. It therefore gives operators a clear guideline for identifying unacceptable image quality and unacceptable image registration with minimal effort. Experimental results on a clinical database of 10 patients undergoing prostate radiotherapy demonstrate that the method quickly identifies out-of-control signals and finds the special causes of out-of-control registration events.
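A minimal sketch of the control-chart idea, assuming a Shewhart individuals chart with limits estimated from the moving range (the record does not specify which chart type is used):

```python
import numpy as np

def control_limits(scores):
    """Center line and 3-sigma limits for a Shewhart individuals chart.

    Sigma is estimated from the average moving range (d2 = 1.128 for
    subgroups of size 2); the chart type is an assumption, not stated
    in the record.
    """
    scores = np.asarray(scores, dtype=float)
    center = scores.mean()
    sigma = np.abs(np.diff(scores)).mean() / 1.128
    return center - 3 * sigma, center, center + 3 * sigma

def out_of_control(scores):
    """Indices of daily registration scores outside the control limits."""
    lcl, _, ucl = control_limits(scores)
    return [i for i, s in enumerate(scores) if s < lcl or s > ucl]
```

A score series with one abrupt jump, e.g. a misregistered fraction, is flagged while the routine day-to-day variation stays inside the limits.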

  4. Fast interactive elastic registration of 12-bit multi-spectral images with subvoxel accuracy using display hardware

    NASA Astrophysics Data System (ADS)

    Noordmans, Herke Jan; de Roode, Rowland; Verdaasdonk, Rudolf

    2007-03-01

Multi-spectral images of human tissue taken in vivo often contain alignment problems, as patients have difficulty holding their posture during the 20-second acquisition time. Previous attempts to correct these motion errors used image registration software developed for MR or CT data, but those algorithms proved too slow and error-prone for practical use with multi-spectral images. A new software package has been developed that lets the user play a decisive role in the registration process: the user can monitor the progress of the registration continuously and force it in the right direction when it starts to fail. The software exploits video card hardware to gain speed and to provide perfect subvoxel correspondence between the registration field and the display. An 8-bit graphics card was used to efficiently register and resample 12-bit images using the card's hardware interpolation modes. To show the feasibility of this registration process, the software was applied in clinical practice to evaluate dosimetry for psoriasis and KTP laser treatment. The microscopic differences between images of normal skin and skin exposed to UV light showed that an affine registration step, including zooming and slanting, is critical for a subsequent elastic match to succeed. Combining user-interactive registration software with optimal use of PC video card hardware greatly improves the speed of multi-spectral image registration.

  5. Fast interactive registration tool for reproducible multi-spectral imaging for wound healing and treatment evaluation

    NASA Astrophysics Data System (ADS)

    Noordmans, Herke J.; de Roode, Rowland; Verdaasdonk, Rudolf

    2007-02-01

Multi-spectral images of human tissue taken in vivo often contain alignment problems, as patients have difficulty holding their posture during the 20-second acquisition time. Previous attempts to correct these motion errors used image registration software developed for MR or CT data, but those algorithms proved too slow and error-prone for practical use with multi-spectral images. A new software package has been developed that lets the user play a decisive role in the registration process: the user can monitor the progress of the registration continuously and force it in the right direction when it starts to fail. The software exploits video card hardware to gain speed and to provide perfect subvoxel correspondence between the registration field and the display. An 8-bit graphics card was used to efficiently register and resample 12-bit images using the card's hardware interpolation modes. To show the feasibility of this registration process, the software was applied in clinical practice to evaluate dosimetry for psoriasis and KTP laser treatment. The microscopic differences between images of normal skin and skin exposed to UV light showed that an affine registration step, including zooming and slanting, is critical for a subsequent elastic match to succeed. Combining user-interactive registration software with optimal use of PC video card hardware greatly improves the speed of multi-spectral image registration.

  6. Simultaneous Nonrigid Registration, Segmentation, and Tumor Detection in MRI Guided Cervical Cancer Radiation Therapy

    PubMed Central

    Lu, Chao; Chelikani, Sudhakar; Jaffray, David A.; Milosevic, Michael F.; Staib, Lawrence H.; Duncan, James S.

    2013-01-01

External beam radiation therapy (EBRT) for the treatment of cancer enables accurate placement of radiation dose on the cancerous region. However, the deformation of soft tissue during the course of treatment, such as in cervical cancer, presents significant challenges for the delineation of the target volume and other structures of interest. Furthermore, the presence and regression of pathologies such as tumors may violate registration constraints and cause registration errors. In this paper, automatic segmentation, nonrigid registration and tumor detection in cervical magnetic resonance (MR) data are addressed simultaneously using a unified Bayesian framework. The proposed novel method can generate a tumor probability map while progressively identifying the boundary of an organ of interest based on the achieved nonrigid transformation. The method is able to handle the challenges of significant tumor regression and its effect on surrounding tissues. The new method was compared to various currently existing algorithms on a set of 36 MR images from six patients, each with six T2-weighted cervical MR images. The results show that the proposed approach achieves an accuracy comparable to manual segmentation and significantly outperforms the existing registration algorithms. In addition, the tumor detection result generated by the proposed method has high agreement with manual delineation by a qualified clinician. PMID:22328178

  7. Feature-Based Retinal Image Registration Using D-Saddle Feature

    PubMed Central

    Hasikin, Khairunnisa; A. Karim, Noor Khairiah; Ahmedy, Fatimah

    2017-01-01

Retinal image registration is important for assisting diagnosis and monitoring retinal diseases, such as diabetic retinopathy and glaucoma. However, registering retinal images for various applications requires detecting well-distributed feature points in low-quality regions containing vessels of varying contrast and size. A recent feature detector known as Saddle detects feature points on vessels, but they are poorly distributed and densely positioned on strong-contrast vessels. We therefore propose a multiresolution difference-of-Gaussian pyramid with the Saddle detector (D-Saddle) to detect feature points in low-quality regions containing vessels of varying contrast and size. D-Saddle was tested on the Fundus Image Registration (FIRE) dataset, which consists of 134 retinal image pairs. Experimental results show that D-Saddle successfully registered 43% of the retinal image pairs with an average registration accuracy of 2.329 pixels, while lower success rates were observed for the other four state-of-the-art retinal image registration methods: GDB-ICP (28%), Harris-PIIFD (4%), H-M (16%), and Saddle (16%). Furthermore, the registration accuracy of D-Saddle has the weakest (Spearman) correlation with the intensity uniformity metric among all methods. Finally, a paired t-test shows that D-Saddle significantly improved the overall registration accuracy of the original Saddle. PMID:29204257
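The multiresolution difference-of-Gaussian stack underlying a detector like D-Saddle can be sketched as follows; the sigma values are illustrative choices, not the authors' settings:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def dog_pyramid(image, sigmas=(1.0, 2.0, 4.0)):
    """Difference-of-Gaussian stack across increasing blur scales.

    Adjacent-scale differences emphasise band-limited structures
    (e.g. vessels of a particular width). Sigmas are illustrative.
    """
    image = np.asarray(image, dtype=float)
    blurred = [gaussian_filter(image, s) for s in sigmas]
    return [fine - coarse for fine, coarse in zip(blurred, blurred[1:])]
```

A feature detector is then run on each DoG level so that low-contrast vessels, which only stand out at coarser scales, still contribute feature points.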

  8. Three-dimensional registration of intravascular optical coherence tomography and cryo-image volumes for microscopic-resolution validation.

    PubMed

    Prabhu, David; Mehanna, Emile; Gargesha, Madhusudhana; Brandt, Eric; Wen, Di; van Ditzhuijzen, Nienke S; Chamie, Daniel; Yamamoto, Hirosada; Fujino, Yusuke; Alian, Ali; Patel, Jaymin; Costa, Marco; Bezerra, Hiram G; Wilson, David L

    2016-04-01

Evidence suggests high-resolution, high-contrast, [Formula: see text] intravascular optical coherence tomography (IVOCT) can distinguish plaque types, but further validation is needed, especially for automated plaque characterization. We developed experimental and three-dimensional (3-D) registration methods to provide validation of IVOCT pullback volumes using microscopic, color, and fluorescent cryo-image volumes with optional registered cryo-histology. A specialized registration method matched IVOCT pullback images acquired in the catheter reference frame to a true 3-D cryo-image volume. Briefly, an 11-parameter registration model including a polynomial virtual catheter was initialized within the cryo-image volume, and perpendicular images were extracted, mimicking IVOCT image acquisition. Virtual catheter parameters were optimized to maximize cryo and IVOCT lumen overlap. Multiple assessments suggested that the registration error was better than the [Formula: see text] spacing between IVOCT image frames. Tests on a digital synthetic phantom gave a registration error of only [Formula: see text] (signed distance). Visual assessment of randomly presented nearby frames suggested registration accuracy within 1 IVOCT frame interval ([Formula: see text]). This would eliminate potential misinterpretations confronted by the typical histological approaches to validation, with estimated 1-mm errors. The method can be used to create annotated datasets and to develop automated plaque classification methods, and can be extended to other intravascular imaging modalities.

  9. Software for Automated Image-to-Image Co-registration

    NASA Technical Reports Server (NTRS)

    Benkelman, Cody A.; Hughes, Heidi

    2007-01-01

The project objectives are: a) Develop software to fine-tune image-to-image co-registration, presuming images are orthorectified prior to input; b) Create a reusable software development kit (SDK) to enable incorporation of these tools into other software; c) Provide automated testing for quantitative analysis; and d) Develop software that applies multiple techniques to achieve subpixel precision in the co-registration of image pairs.
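One common route to co-registration of orthorectified image pairs is phase correlation; the record does not name the technique actually used, so the sketch below is illustrative only (integer-pixel recovery shown; subpixel refinement would interpolate around the peak):

```python
import numpy as np

def phase_correlation_shift(ref, mov):
    """Estimate the circular translation taking ref to mov.

    The normalised cross-power spectrum's inverse FFT peaks at the shift.
    Illustrative technique; not necessarily the one used by the software.
    """
    cross = np.conj(np.fft.fft2(ref)) * np.fft.fft2(mov)
    cross /= np.abs(cross) + 1e-12          # keep phase, discard magnitude
    corr = np.fft.ifft2(cross).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    shift = []
    for p, n in zip(peak, corr.shape):
        shift.append(p - n if p > n // 2 else p)  # wrap to signed offsets
    return tuple(float(s) for s in shift)
```

Because the estimate comes from a single global peak, it is robust to noise but assumes the pair differs mainly by translation, which orthorectification makes plausible.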

  10. Neural network-based feature point descriptors for registration of optical and SAR images

    NASA Astrophysics Data System (ADS)

    Abulkhanov, Dmitry; Konovalenko, Ivan; Nikolaev, Dmitry; Savchik, Alexey; Shvets, Evgeny; Sidorchuk, Dmitry

    2018-04-01

Registration of images of different natures is an important technique used in image fusion, change detection, efficient information representation, and other problems of computer vision. Solving this task with feature-based approaches is usually harder than registering several optical images because traditional feature descriptors (SIFT, SURF, etc.) perform poorly when the images differ in nature. In this paper we consider the problem of registering SAR and optical images. We train a neural network to build feature point descriptors and use the RANSAC algorithm to align the found matches. Experimental results are presented that confirm the method's effectiveness.
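The RANSAC alignment of descriptor matches can be sketched with a toy pure-translation model; the paper likely estimates a richer transform, so treat this as the pattern, not the implementation:

```python
import numpy as np

def ransac_translation(src, dst, tol=2.0, iters=200, seed=0):
    """RANSAC for a pure-translation model between matched points.

    src, dst: (N, 2) arrays of matched feature coordinates. A single
    match fully determines a translation hypothesis.
    """
    rng = np.random.default_rng(seed)
    best_t, best_count = None, -1
    for _ in range(iters):
        i = rng.integers(len(src))
        t = dst[i] - src[i]                          # one-match hypothesis
        count = (np.linalg.norm(src + t - dst, axis=1) < tol).sum()
        if count > best_count:
            best_count, best_t = count, t
    # refine on the consensus set
    inliers = np.linalg.norm(src + best_t - dst, axis=1) < tol
    return (dst[inliers] - src[inliers]).mean(axis=0)
```

Wrong matches produced by cross-modal descriptor failures simply never gather a large consensus set, so they are discarded rather than averaged in.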

  11. Deformable image registration with content mismatch: a demons variant to account for added material and surgical devices in the target image

    NASA Astrophysics Data System (ADS)

    Nithiananthan, S.; Uneri, A.; Schafer, S.; Mirota, D.; Otake, Y.; Stayman, J. W.; Zbijewski, W.; Khanna, A. J.; Reh, D. D.; Gallia, G. L.; Siewerdsen, J. H.

    2013-03-01

    Fast, accurate, deformable image registration is an important aspect of image-guided interventions. Among the factors that can confound registration is the presence of additional material in the intraoperative image - e.g., contrast bolus or a surgical implant - that was not present in the prior image. Existing deformable registration methods generally fail to account for tissue excised between image acquisitions and typically simply "move" voxels within the images with no ability to account for tissue that is removed or introduced between scans. We present a variant of the Demons algorithm to accommodate such content mismatch. The approach combines segmentation of mismatched content with deformable registration featuring an extra pseudo-spatial dimension representing a reservoir from which material can be drawn into the registered image. Previous work tested the registration method in the presence of tissue excision ("missing tissue"). The current paper tests the method in the presence of additional material in the target image and presents a general method by which either missing or additional material can be accommodated. The method was tested in phantom studies, simulations, and cadaver models in the context of intraoperative cone-beam CT with three examples of content mismatch: a variable-diameter bolus (contrast injection); surgical device (rod), and additional material (bone cement). Registration accuracy was assessed in terms of difference images and normalized cross correlation (NCC). We identify the difficulties that traditional registration algorithms encounter when faced with content mismatch and evaluate the ability of the proposed method to overcome these challenges.

  12. Biomechanical modelling for breast image registration

    NASA Astrophysics Data System (ADS)

    Lee, Angela; Rajagopal, Vijay; Chung, Jae-Hoon; Bier, Peter; Nielsen, Poul M. F.; Nash, Martyn P.

    2008-03-01

    Breast cancer is a leading cause of death in women. Tumours are usually detected by palpation or X-ray mammography followed by further imaging, such as magnetic resonance imaging (MRI) or ultrasound. The aim of this research is to develop a biophysically-based computational tool that will allow accurate collocation of features (such as suspicious lesions) across multiple imaging views and modalities in order to improve clinicians' diagnosis of breast cancer. We have developed a computational framework for generating individual-specific, 3D finite element models of the breast. MR images were obtained of the breast under gravity loading and neutrally buoyant conditions. Neutrally buoyant breast images, obtained whilst immersing the breast in water, were used to estimate the unloaded geometry of the breast (for present purposes, we have assumed that the densities of water and breast tissue are equal). These images were segmented to isolate the breast tissues, and a tricubic Hermite finite element mesh was fitted to the digitised data points in order to produce a customized breast model. The model was deformed, in accordance with finite deformation elasticity theory, to predict the gravity loaded state of the breast in the prone position. The unloaded breast images were embedded into the reference model and warped based on the predicted deformation. In order to analyse the accuracy of the model predictions, the cross-correlation image comparison metric was used to compare the warped, resampled images with the clinical images of the prone gravity loaded state. We believe that a biomechanical image registration tool of this kind will aid radiologists to provide more reliable diagnosis and localisation of breast cancer.

  13. Insight into efficient image registration techniques and the demons algorithm.

    PubMed

    Vercauteren, Tom; Pennec, Xavier; Malis, Ezio; Perchant, Aymeric; Ayache, Nicholas

    2007-01-01

    As image registration becomes more and more central to many biomedical imaging applications, the efficiency of the algorithms becomes a key issue. Image registration is classically performed by optimizing a similarity criterion over a given spatial transformation space. Even if this problem is considered as almost solved for linear registration, we show in this paper that some tools that have recently been developed in the field of vision-based robot control can outperform classical solutions. The adequacy of these tools for linear image registration leads us to revisit non-linear registration and allows us to provide interesting theoretical roots to the different variants of Thirion's demons algorithm. This analysis predicts a theoretical advantage to the symmetric forces variant of the demons algorithm. We show that, on controlled experiments, this advantage is confirmed, and yields a faster convergence.
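One passive-force update of Thirion's demons algorithm, in its classic 2-D form, looks like the sketch below; the normalization constant `alpha` is a conventional stabilizer, not a value taken from this paper:

```python
import numpy as np

def demons_step(fixed, moving, alpha=1.0):
    """One Thirion demons displacement update (2-D sketch).

    The force pushes the moving image along the fixed image's gradient,
    scaled by the intensity mismatch; alpha regularizes flat regions.
    """
    gy, gx = np.gradient(np.asarray(fixed, dtype=float))  # axis 0 = y
    diff = np.asarray(moving, dtype=float) - fixed        # mismatch term
    denom = gx**2 + gy**2 + alpha**2 * diff**2
    denom = np.where(denom == 0, np.inf, denom)  # no force without information
    return diff * gx / denom, diff * gy / denom
```

In a full registration loop this displacement is smoothed with a Gaussian and composed (or added) iteratively; the symmetric-forces variant discussed in the paper additionally uses the moving image's gradient.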

  14. Non-rigid multi-frame registration of cell nuclei in live cell fluorescence microscopy image data.

    PubMed

    Tektonidis, Marco; Kim, Il-Han; Chen, Yi-Chun M; Eils, Roland; Spector, David L; Rohr, Karl

    2015-01-01

The analysis of the motion of subcellular particles in live cell microscopy images is essential for understanding biological processes within cells. For accurate quantification of the particle motion, compensation of the motion and deformation of the cell nucleus is required. We introduce a non-rigid multi-frame registration approach for live cell fluorescence microscopy image data. Compared to existing approaches using pairwise registration, our approach exploits information from multiple consecutive images simultaneously to improve the registration accuracy. We present three intensity-based variants of the multi-frame registration approach and we investigate two different temporal weighting schemes. The approach has been successfully applied to synthetic and live cell microscopy image sequences, and an experimental comparison with non-rigid pairwise registration has been carried out.

  15. eHUGS: Enhanced Hierarchical Unbiased Graph Shrinkage for Efficient Groupwise Registration

    PubMed Central

    Wu, Guorong; Peng, Xuewei; Ying, Shihui; Wang, Qian; Yap, Pew-Thian; Shen, Dan; Shen, Dinggang

    2016-01-01

Effective and efficient spatial normalization of a large population of brain images is critical for many clinical and research studies, but it is technically very challenging. A commonly used approach is to choose a certain image as the template and then align all other images in the population to this template by applying pairwise registration. To avoid the potential bias induced by inappropriate template selection, groupwise registration methods have been proposed to simultaneously register all images to a latent common space. However, current groupwise registration methods do not make full use of image distribution information for more accurate registration. In this paper, we present a novel groupwise registration method that harnesses the image distribution information by capturing the image distribution manifold using a hierarchical graph with its nodes representing the individual images. More specifically, a low-level graph describes the image distribution in each subgroup, and a high-level graph encodes the relationship between representative images of subgroups. Given the graph representation, we can register all images to the common space by dynamically shrinking the graph on the image manifold. The topology of the entire image distribution is always maintained during graph shrinkage. Evaluations on two datasets, one of 80 elderly individuals and one of 285 infants, indicate that our method can yield promising results. PMID:26800361

  16. An effective non-rigid registration approach for ultrasound image based on "demons" algorithm.

    PubMed

    Liu, Yan; Cheng, H D; Huang, Jianhua; Zhang, Yingtao; Tang, Xianglong; Tian, Jiawei

    2013-06-01

Medical image registration is an important component of computer-aided diagnosis systems in diagnostics, therapy planning, and surgical guidance. Because of their low signal-to-noise ratio (SNR), ultrasound (US) images are difficult to register. In this paper, a fully automatic non-rigid registration algorithm based on the demons algorithm is proposed for ultrasound images. In the proposed method, an "inertia force" derived from the local motion trend of pixels in a Moore neighborhood is produced and integrated into the optical flow equation to estimate the demons force, which helps handle speckle noise and preserve the geometric continuity of US images. In the experiments, a series of US images and several similarity metrics are used to evaluate performance. The results demonstrate that the proposed method registers ultrasound images efficiently, automatically, and robustly with respect to noise.

  17. [Non-rigid medical image registration based on mutual information and thin-plate spline].

    PubMed

    Cao, Guo-gang; Luo, Li-min

    2009-01-01

To obtain precise and complete details, comparison of different images is needed in medical diagnosis and computer-assisted treatment. Image registration is the basis of such comparison, but regular rigid registration does not satisfy clinical requirements. A non-rigid medical image registration method based on mutual information and thin-plate splines is presented. First, the two images are registered globally based on mutual information; second, the reference image and the globally registered image are divided into blocks, which are registered individually; then the thin-plate spline transformation is computed from the shifts of the block centers; finally, the transformation is applied to the globally registered image. The results show that the method is more precise than global rigid registration based on mutual information; because it obtains the control points of the thin-plate transformation automatically, it also reduces the complexity of control-point selection and better satisfies clinical requirements.
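The thin-plate-spline step can be sketched with SciPy's RBF interpolator standing in for a bespoke TPS solver; the control points below are hypothetical block centres, not data from the paper:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Hypothetical block-centre correspondences: identity plus one local shift,
# as would come out of the per-block registration step.
src = np.array([[0, 0], [0, 1], [1, 0], [1, 1], [0.5, 0.5]], dtype=float)
dst = src.copy()
dst[4] += [0.1, 0.0]  # the centre block moved slightly to the right

# Thin-plate-spline warp fitted through the control points; the default
# smoothing of 0 interpolates them exactly.
warp = RBFInterpolator(src, dst, kernel="thin_plate_spline")
```

Evaluating `warp` on a dense grid of coordinates then resamples the globally registered image through the non-rigid deformation.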

  18. Panorama imaging for image-to-physical registration of narrow drill holes inside spongy bones

    NASA Astrophysics Data System (ADS)

    Bergmeier, Jan; Fast, Jacob Friedemann; Ortmaier, Tobias; Kahrs, Lüder Alexander

    2017-03-01

Image-to-physical registration based on volumetric data, such as computed tomography, on one side and intraoperative endoscopic images on the other is an important method for various surgical applications. In this contribution, we present methods to generate panoramic views from endoscopic recordings for image-to-physical registration of narrow drill holes inside spongy bone. One core application is the registration of drill poses inside the mastoid during minimally invasive cochlear implantation. Besides developing image processing software for registration, we investigate a miniaturized optical system that achieves 360° radial imaging in one shot by extending a conventional, small, rigid rod-lens endoscope. A reflective cone geometry deflects radially incoming light rays into the endoscope optics; to this end, a cone mirror is mounted in front of a conventional 0° endoscope. Furthermore, panoramic images of inner drill hole surfaces in artificial bone material are created. Prior to drilling, cone beam computed tomography data is acquired from this artificial bone, and simulated endoscopic views are generated from it. A qualitative and quantitative comparison of the resulting views in terms of image-to-image registration is performed. First results show that downsizing the panoramic optics to a diameter of 3 mm is possible. Conventional rigid rod-lens endoscopes can be extended to produce suitable panoramic one-shot image data. Using unrolling and stitching methods, images of the inner drill hole surface similar to computed tomography image data of the same surface were created. Registration performed on ten perturbations of the search space yields target registration errors of (0.487 ± 0.438) mm at the entry point and (0.957 ± 0.948) mm at the exit, as well as an angular error of (1.763 ± 1.536)°. The results show the suitability of this image data for image-to-image registration. Analysis of the error components in different directions reveals a strong influence of the pattern structure: higher diversity results in smaller errors.

  19. Framework for 3D histologic reconstruction and fusion with in vivo MRI: Preliminary results of characterizing pulmonary inflammation in a mouse model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rusu, Mirabela, E-mail: mirabela.rusu@gmail.com; Wang, Haibo; Madabhushi, Anant

Purpose: Pulmonary inflammation is associated with a variety of diseases. Assessing pulmonary inflammation on in vivo imaging may facilitate the early detection and treatment of lung diseases. Although routinely used in thoracic imaging, computed tomography has thus far not been compellingly shown to characterize inflammation in vivo. Alternatively, magnetic resonance imaging (MRI) is a nonionizing radiation technique to better visualize and characterize pulmonary tissue. Prior to routine adoption of MRI for early characterization of inflammation in humans, a rigorous and quantitative characterization of the utility of MRI to identify inflammation is required. Such characterization may be achieved by considering ex vivo histology as the ground truth, since it enables the definitive spatial assessment of inflammation. In this study, the authors introduce a novel framework to integrate 2D histology, ex vivo and in vivo imaging to enable the mapping of the extent of disease from ex vivo histology onto in vivo imaging, with the goal of facilitating computerized feature analysis and interrogation of disease appearance on in vivo imaging. The authors’ framework was evaluated in a preclinical preliminary study aimed to identify computer extracted features on in vivo MRI associated with chronic pulmonary inflammation. Methods: The authors’ image analytics framework first involves reconstructing the histologic volume in 3D from individual histology slices. Second, the authors map the disease ground truth onto in vivo MRI via coregistration with 3D histology using the ex vivo lung MRI as a conduit. Finally, computerized feature analysis of the disease extent is performed to identify candidate in vivo imaging signatures of disease presence and extent.
Results: The authors evaluated the framework by assessing the quality of the 3D histology reconstruction and the histology-MRI fusion, in the context of an initial use case involving characterization of chronic inflammation in a mouse model. The authors’ evaluation considered three mice, two with an inflammation phenotype and one control. The authors’ iterative 3D histology reconstruction yielded a 70.1% ± 2.7% overlap with the ex vivo MRI volume. Across a total of 17 anatomic landmarks manually delineated at the division of airways, the target registration error between the ex vivo MRI and 3D histology reconstruction was 0.85 ± 0.44 mm, suggesting that a good alignment of the ex vivo 3D histology and ex vivo MRI had been achieved. The 3D histology-in vivo MRI coregistered volumes resulted in an overlap of 73.7% ± 0.9%. Preliminary computerized feature analysis was performed on an additional four control mice, for a total of seven mice considered in this study. Gabor texture filters appeared to best capture differences between the inflamed and noninflamed regions on MRI. Conclusions: The authors’ 3D histology reconstruction and multimodal registration framework were successfully employed to reconstruct the histology volume of the lung and fuse it with in vivo MRI to create a ground truth map for inflammation on in vivo MRI. The analytic platform presented here lays the framework for a rigorous validation of the identified imaging features for chronic lung inflammation on MRI in a large prospective cohort.

  20. The fusion of large scale classified side-scan sonar image mosaics.

    PubMed

    Reed, Scott; Tena, Ruiz Ioseba; Capus, Chris; Petillot, Yvan

    2006-07-01

    This paper presents a unified framework for the creation of classified maps of the seafloor from sonar imagery. Significant challenges in photometric correction, classification, navigation and registration, and image fusion are addressed. The techniques described are directly applicable to a range of remote sensing problems. Recent advances in side-scan data correction are incorporated to compensate for the sonar beam pattern and motion of the acquisition platform. The corrected images are segmented using pixel-based textural features and standard classifiers. In parallel, the navigation of the sonar device is processed using Kalman filtering techniques. A simultaneous localization and mapping framework is adopted to improve the navigation accuracy and produce georeferenced mosaics of the segmented side-scan data. These are fused within a Markovian framework and two fusion models are presented. The first uses a voting scheme regularized by an isotropic Markov random field and is applicable when the reliability of each information source is unknown. The Markov model is also used to inpaint regions where no final classification decision can be reached using pixel level fusion. The second model formally introduces the reliability of each information source into a probabilistic model. Evaluation of the two models using both synthetic images and real data from a large scale survey shows significant quantitative and qualitative improvement using the fusion approach.
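The voting-scheme fusion model can be sketched pixel-wise; ties are marked for the later MRF inpainting step, which is omitted here:

```python
import numpy as np

def vote_fusion(label_maps):
    """Pixel-wise majority vote across georeferenced classified mosaics.

    Ties are marked -1 ("no decision"); the paper resolves such regions
    with Markov random field inpainting, not implemented in this sketch.
    """
    stack = np.stack(label_maps)                 # (sources, H, W)
    n_classes = int(stack.max()) + 1
    # Per-class vote counts at every pixel
    counts = np.stack([(stack == c).sum(axis=0) for c in range(n_classes)])
    winner = counts.argmax(axis=0)
    top = counts.max(axis=0)
    tie = (counts == top).sum(axis=0) > 1        # more than one class at the top
    winner[tie] = -1
    return winner
```

The second fusion model in the paper replaces this unweighted vote with source-reliability terms in a probabilistic model; the voting version is the appropriate fallback when those reliabilities are unknown.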

  1. Acceptance test of a commercially available software for automatic image registration of computed tomography (CT), magnetic resonance imaging (MRI) and 99mTc-methoxyisobutylisonitrile (MIBI) single-photon emission computed tomography (SPECT) brain images.

    PubMed

    Loi, Gianfranco; Dominietto, Marco; Manfredda, Irene; Mones, Eleonora; Carriero, Alessandro; Inglese, Eugenio; Krengli, Marco; Brambilla, Marco

    2008-09-01

This note describes a method to characterize the performance of image fusion software (Syntegra) with respect to accuracy and robustness. Computed tomography (CT), magnetic resonance imaging (MRI), and single-photon emission computed tomography (SPECT) studies were acquired from two phantoms and 10 patients. Image registration was performed independently by two teams, each composed of one radiotherapist and one physicist, by superposition of anatomic landmarks. Each team performed the registration jointly and saved the result. The two solutions were averaged to obtain the gold-standard registration. A new set of estimators was defined to identify translation and rotation errors along the coordinate axes, independently of point position in the image field of view (FOV). The algorithms evaluated were local correlation (LC) for CT-MRI, and normalized mutual information (MI) for CT-MRI and CT-SPECT registrations. To evaluate accuracy, estimator values were compared to limiting values for the algorithms employed, both in phantoms and in patients. To evaluate robustness, different alignments between images taken from a sample patient were produced and registration errors determined. The LC algorithm was accurate for CT-MRI registrations in phantoms but exceeded limiting values in 3 of 10 patients. The MI algorithm was accurate for CT-MRI and CT-SPECT registrations in phantoms; limiting values were exceeded in one CT-MRI case and never reached in CT-SPECT registrations. The evaluation of robustness was therefore restricted to the MI algorithm for both CT-MRI and CT-SPECT registrations. The MI algorithm proved robust: limiting values were not exceeded with translation perturbations up to 2.5 cm, rotation perturbations up to 10 degrees, and roto-translational perturbations up to 3 cm and 5 degrees.

  2. Efficient methods for implementation of multi-level nonrigid mass-preserving image registration on GPUs and multi-threaded CPUs.

    PubMed

    Ellingwood, Nathan D; Yin, Youbing; Smith, Matthew; Lin, Ching-Long

    2016-04-01

    Faster and more accurate methods for image registration are important both for research involved in population-based studies that utilize medical imaging and for clinical applications. We present a novel computation- and memory-efficient multi-level method on graphics processing units (GPU) for performing registration of two computed tomography (CT) volumetric lung images. We developed a computation- and memory-efficient Diffeomorphic Multi-level B-Spline Transform Composite (DMTC) method to implement nonrigid mass-preserving registration of two CT lung images on GPU. The framework consists of a hierarchy of B-Spline control grids of increasing resolution. A similarity criterion known as the sum of squared tissue volume difference (SSTVD) was adopted to preserve lung tissue mass. The use of SSTVD requires the calculation of the tissue volume, the Jacobian, and their derivatives, which makes its implementation on GPU challenging due to memory constraints. The DMTC method enabled reduced computation and memory storage of variables, with minimal communication between the GPU and the central processing unit (CPU) thanks to the ability to pre-compute values. The method was assessed on six healthy human subjects. Resultant GPU-generated displacement fields were compared against the previously validated CPU counterpart fields, showing good agreement with an average normalized root mean square error (nRMS) of 0.044±0.015. Runtime and performance speedup were compared between single-threaded CPU, multi-threaded CPU, and GPU algorithms. The best speedup occurs at the highest resolution in the GPU implementation for the SSTVD cost and cost-gradient computations: 112 times over the single-threaded CPU version and 11 times over the twelve-threaded version, considering average time per iteration on an Nvidia Tesla K20X GPU. The proposed GPU-based DMTC method outperforms its multi-threaded CPU version in terms of runtime: total registration time was reduced to 2.9 min with the GPU version, compared to 12.8 min for the twelve-threaded CPU version and 112.5 min for a single-threaded CPU. Furthermore, the GPU implementation discussed in this work can be adapted for use with other cost functions that require calculation of first derivatives. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
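
    The SSTVD similarity itself is compact. Below is a minimal NumPy sketch of the cost (not the authors' GPU code): tissue fraction is derived from HU using conventional air/tissue reference values, the Jacobian determinant of the transform is taken as a given array rather than derived from a B-spline grid, and the toy images are assumptions:

    ```python
    import numpy as np

    HU_AIR, HU_TISSUE = -1000.0, 55.0   # conventional CT reference values

    def tissue_fraction(hu):
        """Estimated fraction of a voxel occupied by tissue (vs. air)."""
        return np.clip((hu - HU_AIR) / (HU_TISSUE - HU_AIR), 0.0, 1.0)

    def sstvd(fixed_hu, warped_hu, jac_det, voxel_vol=1.0):
        """Sum of squared tissue volume difference. `warped_hu` is the
        moving image already resampled through the transform; `jac_det`
        is the Jacobian determinant of the transform at each fixed-image
        voxel, rescaling moving-voxel volume to preserve tissue mass."""
        v_fixed = voxel_vol * tissue_fraction(fixed_hu)
        v_warped = voxel_vol * jac_det * tissue_fraction(warped_hu)
        return float(np.sum((v_warped - v_fixed) ** 2))

    # identical images under the identity transform: zero cost
    img = np.array([[-900.0, -800.0], [-700.0, -600.0]])
    cost_identity = sstvd(img, img, np.ones_like(img))
    # a uniform 10% local expansion with unchanged HU violates mass
    # preservation, so the cost becomes positive
    cost_expanded = sstvd(img, img, 1.1 * np.ones_like(img))
    ```

    Matching tissue *volume* rather than raw intensity is what makes the criterion robust to the density changes lung tissue undergoes during inflation.
    
    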

  3. Phantom Study Investigating the Accuracy of Manual and Automatic Image Fusion with the GE Logiq E9: Implications for use in Percutaneous Liver Interventions.

    PubMed

    Burgmans, Mark Christiaan; den Harder, J Michiel; Meershoek, Philippa; van den Berg, Nynke S; Chan, Shaun Xavier Ju Min; van Leeuwen, Fijs W B; van Erkel, Arian R

    2017-06-01

    To determine the accuracy of automatic and manual co-registration methods for image fusion of three-dimensional computed tomography (CT) with real-time ultrasonography (US) for image-guided liver interventions. CT images of a skills phantom with liver lesions were acquired and co-registered to US using GE Logiq E9 navigation software. Manual co-registration was compared to automatic and semiautomatic co-registration using an active tracker. Also, manual point registration was compared to plane registration with and without an additional translation point. Finally, comparison was made between manual and automatic selection of reference points. In each experiment, accuracy of the co-registration method was determined by measurement of the residual displacement in phantom lesions by two independent observers. Mean displacements for a superficial and deep liver lesion were comparable after manual and semiautomatic co-registration: 2.4 and 2.0 mm versus 2.0 and 2.5 mm, respectively. Both methods were significantly better than automatic co-registration: 5.9 and 5.2 mm residual displacement (p < 0.001; p < 0.01). The accuracy of manual point registration was higher than that of plane registration, the latter being heavily dependent on accurate matching of axial CT and US images by the operator. Automatic reference point selection resulted in significantly lower registration accuracy compared to manual point selection despite lower root-mean-square deviation (RMSD) values. The accuracy of manual and semiautomatic co-registration is better than that of automatic co-registration. For manual co-registration using a plane, choosing the correct plane orientation is an essential first step in the registration process. Automatic reference point selection based on RMSD values is error-prone.
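
    Point-based co-registration of the kind evaluated here reduces, mathematically, to a least-squares rigid fit between corresponding landmark sets. The Logiq E9's internal solver is not public, so the sketch below uses the standard Kabsch algorithm on made-up landmark coordinates, with the per-point residual displacement used as the accuracy measure:

    ```python
    import numpy as np

    def rigid_point_register(src, dst):
        """Least-squares rigid transform (R, t) mapping src landmarks onto
        dst landmarks (Kabsch algorithm via SVD of the cross-covariance)."""
        src_c = src - src.mean(axis=0)
        dst_c = dst - dst.mean(axis=0)
        u, _, vt = np.linalg.svd(src_c.T @ dst_c)
        d = np.sign(np.linalg.det(vt.T @ u.T))   # guard against reflections
        R = vt.T @ np.diag([1.0] * (src.shape[1] - 1) + [d]) @ u.T
        t = dst.mean(axis=0) - R @ src.mean(axis=0)
        return R, t

    def residual_displacement(src, dst, R, t):
        """Per-point residual after registration; its RMS summarizes accuracy."""
        resid = dst - (src @ R.T + t)
        return np.sqrt((resid ** 2).sum(axis=1))

    rng = np.random.default_rng(1)
    ct_points = rng.uniform(0, 100, size=(6, 3))       # hypothetical CT landmarks
    theta = np.deg2rad(10.0)
    R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                       [np.sin(theta),  np.cos(theta), 0.0],
                       [0.0, 0.0, 1.0]])
    us_points = ct_points @ R_true.T + np.array([5.0, -3.0, 2.0])
    R, t = rigid_point_register(ct_points, us_points)
    rmsd = np.sqrt(np.mean(residual_displacement(ct_points, us_points, R, t) ** 2))
    ```

    With noise-free corresponding points the fit is exact; in practice, landmark picking error and tissue deformation leave the millimeter-scale residuals reported above.
    
    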

  4. SU-D-BRA-03: Analysis of Systematic Errors with 2D/3D Image Registration for Target Localization and Treatment Delivery in Stereotactic Radiosurgery

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xu, H; Chetty, I; Wen, N

    Purpose: Determine systematic deviations between 2D/3D and 3D/3D image registrations with six degrees of freedom (6DOF) for various imaging modalities and registration algorithms on the Varian Edge Linac. Methods: The 6DOF systematic errors were assessed by comparing automated 2D/3D (kV/MV vs. CT) with 3D/3D (CBCT vs. CT) image registrations from different imaging pairs, CT slice thicknesses, couch angles, similarity measures, etc., using a Rando head and a pelvic phantom. The 2D/3D image registration accuracy was evaluated at different treatment sites (intra-cranial and extra-cranial) by statistically analyzing 2D/3D pre-treatment verification against 3D/3D localization of 192 Stereotactic Radiosurgery/Stereotactic Body Radiation Therapy treatment fractions for 88 patients. Results: The systematic errors of 2D/3D image registration using kV-kV, MV-kV and MV-MV image pairs with 0.8 mm slice thickness CT images were within 0.3 mm and 0.3° for translations and rotations with a 95% confidence interval (CI). No significant difference between 2D/3D and 3D/3D image registrations (P>0.05) was observed for target localization at CT slice thicknesses ranging from 0.8 to 3 mm. Couch angles (30, 45, 60 degrees) did not impact the accuracy of 2D/3D image registration. Using pattern intensity with content image filtering was recommended for 2D/3D image registration to achieve the best accuracy. For the patient study, translational error was within 2 mm and rotational error within 0.6 degrees in terms of 95% CI for 2D/3D image registration. For intra-cranial sites, means and standard deviations of translational errors were −0.2±0.7, 0.04±0.5, and 0.1±0.4 mm for the LNG, LAT, and VRT directions, respectively. For extra-cranial sites, they were −0.04±1, 0.2±1, and 0.1±1 mm. 2D/3D image registration uncertainties for intra-cranial and extra-cranial sites were comparable. Conclusion: The Varian Edge radiosurgery 6DOF-based system can perform 2D/3D image registration with high accuracy for target localization in image-guided stereotactic radiosurgery. The work was supported by a Research Scholar Grant, RSG-15-137-01-CCE, from the American Cancer Society.

  5. Automatic Marker-free Longitudinal Infrared Image Registration by Shape Context Based Matching and Competitive Winner-guided Optimal Corresponding

    PubMed Central

    Lee, Chia-Yen; Wang, Hao-Jen; Lai, Jhih-Hao; Chang, Yeun-Chung; Huang, Chiun-Sheng

    2017-01-01

    Long-term comparison of infrared images can facilitate the assessment of breast cancer tissue growth and early tumor detection, in which longitudinal infrared image registration is a necessary step. However, it is hard to keep markers attached to a body surface for weeks, and rather difficult to detect anatomic fiducial markers and match them in the infrared images during the registration process. The proposed automatic longitudinal infrared registration algorithm develops an automatic vascular-intersection detection method and establishes feature descriptors by shape context to achieve robust matching, as well as to obtain control points for the deformation model. In addition, a competitive winner-guided mechanism is developed for optimal correspondence. The proposed algorithm is evaluated in two ways. Results show that the algorithm quickly leads to accurate image registration and that its effectiveness is superior to manual registration, with a mean error of 0.91 pixels. These findings demonstrate that the proposed registration algorithm is reasonably accurate and provides a novel means of extracting a greater amount of useful data from infrared images. PMID:28145474
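
    The shape context descriptor at the heart of the matching step can be sketched briefly: each detected vascular intersection is described by a log-polar histogram of where all the other intersections lie relative to it, normalized by the mean pairwise distance for scale invariance. The bin counts, radial range, and toy point set below are assumptions:

    ```python
    import numpy as np

    def shape_context(points, idx, n_r=5, n_theta=12, r_min=0.125, r_max=2.0):
        """Log-polar histogram of the positions of all other points relative
        to points[idx], with distances normalized by their mean so that
        corresponding points on scaled copies of a shape match."""
        rel = np.delete(points, idx, axis=0) - points[idx]
        d = np.hypot(rel[:, 0], rel[:, 1])
        d = d / np.mean(d)                        # scale invariance
        theta = np.arctan2(rel[:, 1], rel[:, 0]) % (2 * np.pi)
        r_edges = np.logspace(np.log10(r_min), np.log10(r_max), n_r + 1)
        t_edges = np.linspace(0.0, 2 * np.pi, n_theta + 1)
        h, _, _ = np.histogram2d(d, theta, bins=(r_edges, t_edges))
        return h / h.sum() if h.sum() else h

    # hypothetical intersection coordinates on two images of the same vessels
    pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.5, 0.5]])
    desc = shape_context(pts, idx=0)
    desc_scaled = shape_context(3.0 * pts, idx=0)   # same shape, 3x larger
    ```

    Matching then compares descriptors (e.g. by histogram distance) to propose correspondences, which the winner-guided mechanism filters into control points.
    
    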

  6. Registration of angiographic image on real-time fluoroscopic image for image-guided percutaneous coronary intervention.

    PubMed

    Kim, Dongkue; Park, Sangsoo; Jeong, Myung Ho; Ryu, Jeha

    2018-02-01

    In percutaneous coronary intervention (PCI), cardiologists must study two different X-ray image sources: a fluoroscopic image and an angiogram. Manipulating a guidewire while alternately monitoring the two separate images on separate screens requires a deep understanding of the anatomy of the coronary vessels and substantial training. We propose 2D/2D spatiotemporal image registration of the two images into a single image in order to provide cardiologists with enhanced visual guidance in PCI. The proposed method uses a cross-correlation of the two ECG series to temporally synchronize the two separate images and register an angiographic image onto the fluoroscopic image. A guidewire centerline is then extracted from the fluoroscopic image in real time, and the alignment of the centerline with the vessel outlines of the chosen angiographic image is optimized using the iterative closest point algorithm for spatial registration. A proof-of-concept evaluation with a phantom coronary vessel model and engineering students showed an error reduction of more than 74% in wrong insertions into nontarget branches compared to the non-registration method, and a more than 47% reduction in task completion time for guidewire manipulation in very difficult tasks. Evaluation with a small number of experienced doctors shows a potentially significant reduction in both task completion time and error rate for difficult tasks. The total registration time with real-procedure X-ray (angiographic and fluoroscopic) images is approximately 60 ms, which is within the fluoroscopic image acquisition rate of 15 Hz. By providing cardiologists with better visual guidance in PCI, the proposed spatiotemporal image registration method is shown to be useful in advancing the guidewire to the coronary vessel branches, especially those difficult to insert into.
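
    The temporal half of the registration, synchronizing the two streams by their ECG traces, reduces to finding the lag that maximizes cross-correlation. A minimal sketch with a synthetic ECG surrogate (the sine waveform, sampling rate, and offset are assumptions, standing in for real ECG series):

    ```python
    import numpy as np

    def best_lag(sig_a, sig_b):
        """Lag (in samples) maximizing the cross-correlation of two series;
        np.roll(sig_b, lag) then best aligns sig_b with sig_a."""
        a = (sig_a - sig_a.mean()) / sig_a.std()
        b = (sig_b - sig_b.mean()) / sig_b.std()
        corr = np.correlate(a, b, mode="full")
        return int(np.argmax(corr)) - (len(b) - 1)

    t = np.linspace(0, 4, 400)                  # 4 s at 100 Hz (assumed)
    ecg_fluoro = np.sin(2 * np.pi * 1.2 * t)    # 72 bpm surrogate signal
    shift = 25                                  # 0.25 s acquisition offset
    ecg_angio = np.roll(ecg_fluoro, shift)
    lag = best_lag(ecg_fluoro, ecg_angio)       # recovers -shift
    ```

    Once the cardiac phases are matched this way, the ICP step only has to resolve the remaining spatial misalignment.
    
    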

  7. Scale invariant feature transform in adaptive radiation therapy: a tool for deformable image registration assessment and re-planning indication

    NASA Astrophysics Data System (ADS)

    Paganelli, Chiara; Peroni, Marta; Riboldi, Marco; Sharp, Gregory C.; Ciardo, Delia; Alterio, Daniela; Orecchia, Roberto; Baroni, Guido

    2013-01-01

    Adaptive radiation therapy (ART) aims at compensating for anatomical and pathological changes to improve delivery along a treatment fraction sequence. Current ART protocols require time-consuming manual updating of all volumes of interest on the images acquired during treatment. Deformable image registration (DIR) and contour propagation stand as state-of-the-art methods to automate the process, but the lack of DIR quality-control methods hinders their introduction into clinical practice. We investigated the scale invariant feature transform (SIFT) method as a quantitative automated tool (1) for DIR evaluation and (2) for re-planning decision-making in the framework of ART treatments. As a preliminary test, SIFT invariance properties under shape-preserving and deformable transformations were studied on a computational phantom, yielding residual matching errors below the voxel dimension. Then a clinical dataset composed of 19 head and neck ART patients was used to quantify the performance in ART treatments. For goal (1), results demonstrated SIFT potential as an operator-independent DIR quality assessment metric. We measured DIR group systematic residual errors up to 0.66 mm, against 1.35 mm provided by rigid registration. The group systematic errors of both bony and all other structures were also analyzed, attesting the presence of anatomical deformations. The correct automated identification of 18 patients who might benefit from ART out of the total 22 cases using SIFT demonstrated its capabilities toward the achievement of goal (2).

  8. Non-rigid image registration using graph-cuts.

    PubMed

    Tang, Tommy W H; Chung, Albert C S

    2007-01-01

    Non-rigid image registration is an ill-posed yet challenging problem due to its extremely high degrees of freedom and inherent requirement of smoothness. The graph-cuts method is a powerful combinatorial optimization tool that has been successfully applied to image segmentation and stereo matching. Under some specific constraints, the graph-cuts method yields either a global minimum or a local minimum in a strong sense. Thus, it is interesting to see the effects of using graph-cuts in non-rigid image registration. In this paper, we formulate non-rigid image registration as a discrete labeling problem. Each pixel in the source image is assigned a displacement label (which is a vector) indicating the position in the floating image to which it spatially corresponds. A smoothness constraint based on the first derivative is used to penalize sharp changes in displacement labels across pixels. The whole system can be optimized by the graph-cuts method via alpha-expansions. We compare 2D and 3D registration results of our method with two state-of-the-art approaches. We find that our method is more robust across challenging non-rigid registration cases, with higher registration accuracy.
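
    The discrete-labeling formulation can be illustrated on a 1-D toy problem: each pixel gets a displacement label, the data term scores intensity agreement at the displaced position, and the smoothness term penalizes first-derivative changes between neighboring labels. Brute-force enumeration below stands in for the paper's alpha-expansion optimizer, and the signals, label set, and λ are assumptions:

    ```python
    import numpy as np
    from itertools import product

    def energy(labels, source, floating, lam):
        """Registration energy: data term matches each source pixel to the
        floating pixel its displacement label points at; smoothness term
        penalizes first-derivative changes between neighboring labels."""
        n = len(source)
        data = 0.0
        for i, d in enumerate(labels):
            j = min(max(i + d, 0), n - 1)          # clamp displaced index
            data += (source[i] - floating[j]) ** 2
        smooth = lam * sum(abs(labels[i] - labels[i + 1]) for i in range(n - 1))
        return data + smooth

    source = np.array([0.0, 0.0, 5.0, 5.0, 0.0])
    floating = np.array([0.0, 5.0, 5.0, 0.0, 0.0])   # source shifted left by 1
    disp_labels = [-1, 0, 1]                          # candidate displacements
    # exhaustive search over all labelings (alpha-expansion scales this up)
    best = min(product(disp_labels, repeat=len(source)),
               key=lambda L: energy(L, source, floating, lam=0.1))
    ```

    The minimizer assigns the constant label −1 everywhere: the data term is satisfied exactly and the smoothness term is zero, which is precisely the regular displacement field the graph-cut energy favors.
    
    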

  9. MARS: a mouse atlas registration system based on a planar x-ray projector and an optical camera

    NASA Astrophysics Data System (ADS)

    Wang, Hongkai; Stout, David B.; Taschereau, Richard; Gu, Zheng; Vu, Nam T.; Prout, David L.; Chatziioannou, Arion F.

    2012-10-01

    This paper introduces a mouse atlas registration system (MARS), composed of a stationary top-view x-ray projector and a side-view optical camera, coupled to a mouse atlas registration algorithm. This system uses the x-ray and optical images to guide a fully automatic co-registration of a mouse atlas with each subject, in order to provide anatomical reference for small animal molecular imaging systems such as positron emission tomography (PET). To facilitate the registration, a statistical atlas that accounts for inter-subject anatomical variations was constructed based on 83 organ-labeled mouse micro-computed tomography (CT) images. The statistical shape model and conditional Gaussian model techniques were used to register the atlas with the x-ray image and optical photo. The accuracy of the atlas registration was evaluated by comparing the registered atlas with the organ-labeled micro-CT images of the test subjects. The results showed excellent registration accuracy of the whole-body region, and good accuracy for the brain, liver, heart, lungs and kidneys. In its implementation, the MARS was integrated with a preclinical PET scanner to deliver combined PET/MARS imaging, and to facilitate atlas-assisted analysis of the preclinical PET images.

  10. MARS: a mouse atlas registration system based on a planar x-ray projector and an optical camera.

    PubMed

    Wang, Hongkai; Stout, David B; Taschereau, Richard; Gu, Zheng; Vu, Nam T; Prout, David L; Chatziioannou, Arion F

    2012-10-07

    This paper introduces a mouse atlas registration system (MARS), composed of a stationary top-view x-ray projector and a side-view optical camera, coupled to a mouse atlas registration algorithm. This system uses the x-ray and optical images to guide a fully automatic co-registration of a mouse atlas with each subject, in order to provide anatomical reference for small animal molecular imaging systems such as positron emission tomography (PET). To facilitate the registration, a statistical atlas that accounts for inter-subject anatomical variations was constructed based on 83 organ-labeled mouse micro-computed tomography (CT) images. The statistical shape model and conditional Gaussian model techniques were used to register the atlas with the x-ray image and optical photo. The accuracy of the atlas registration was evaluated by comparing the registered atlas with the organ-labeled micro-CT images of the test subjects. The results showed excellent registration accuracy of the whole-body region, and good accuracy for the brain, liver, heart, lungs and kidneys. In its implementation, the MARS was integrated with a preclinical PET scanner to deliver combined PET/MARS imaging, and to facilitate atlas-assisted analysis of the preclinical PET images.

  11. Inverse consistent non-rigid image registration based on robust point set matching

    PubMed Central

    2014-01-01

    Background: Robust point matching (RPM) has been extensively used in non-rigid registration of images to robustly register two sets of image points. However, except at the control-point locations, RPM cannot estimate consistent correspondences between two images, because RPM is a unidirectional image matching approach. Improving image registration based on RPM is therefore an important issue. Methods: In our work, a consistent image registration approach based on point-set matching is proposed to incorporate the property of inverse consistency and improve registration accuracy. Instead of only estimating the forward transformation between the source and target point sets, as state-of-the-art RPM algorithms do, our algorithm estimates the forward and backward transformations between the two point sets concurrently. Inverse consistency constraints are introduced into the cost function of RPM, and the fuzzy correspondences between the two point sets are estimated from both the forward and backward transformations simultaneously. A modified consistent landmark thin-plate spline registration is discussed in detail to find the forward and backward transformations during the optimization of RPM. The similarity of image content is also incorporated into the point matching in order to improve image matching. Results: Synthetic data sets and medical images are employed to demonstrate and validate the performance of our approach. The inverse consistency errors of our algorithm are smaller than those of RPM. In particular, the topology of the transformations is well preserved by our algorithm for large deformations between point sets. Moreover, the distance errors of our algorithm are similar to those of RPM and maintain a downward trend on the whole, which demonstrates the convergence of our algorithm. Registration errors for image registration are also evaluated; again, our algorithm achieves lower registration errors at the same iteration number. The determinant of the Jacobian matrix of the deformation field is used to analyse the smoothness of the forward and backward transformations. The forward and backward transformations estimated by our algorithm are smooth for small deformations. For registration of lung slices and individual brain slices, both large and small determinants of the Jacobian matrix of the deformation fields are observed. Conclusions: The results indicate the improvement of the proposed algorithm in bi-directional image registration and the decrease of the inverse consistency errors of the forward and reverse transformations between two images. PMID:25559889
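
    The inverse consistency property being enforced is easy to state: composing the forward and backward transforms should return every point to where it started. A 1-D sketch of the error measure follows; the sinusoidal transforms are illustrative assumptions, not the paper's thin-plate splines (and the "good" backward transform is only a first-order inverse, so its error is small but nonzero):

    ```python
    import numpy as np

    def inverse_consistency_error(fwd, bwd, x):
        """Mean |T_b(T_f(x)) - x| for 1-D transforms given as callables;
        zero when forward and backward transforms are exact inverses."""
        return np.mean(np.abs(bwd(fwd(x)) - x))

    x = np.linspace(0.0, 1.0, 101)
    fwd = lambda p: p + 0.05 * np.sin(2 * np.pi * p)       # forward transform
    bwd_good = lambda q: q - 0.05 * np.sin(2 * np.pi * q)  # approximate inverse
    bwd_bad = lambda q: q - 0.05 * np.sin(2 * np.pi * q) + 0.02  # biased

    ice_good = inverse_consistency_error(fwd, bwd_good, x)
    ice_bad = inverse_consistency_error(fwd, bwd_bad, x)
    ```

    Adding a term like `ice` to the matching cost, as the paper does symmetrically for both directions, is what drives the two estimated transforms toward being mutual inverses.
    
    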

  12. Joint image registration and fusion method with a gradient strength regularization

    NASA Astrophysics Data System (ADS)

    Lidong, Huang; Wei, Zhao; Jun, Wang

    2015-05-01

    Image registration is an essential process for image fusion, and fusion performance can be used to evaluate registration accuracy. We propose a maximum likelihood (ML) approach to joint image registration and fusion, instead of treating them as two independent processes in the conventional way. To improve the visual quality of a fused image, a gradient strength (GS) regularization is introduced into the cost function of ML. The GS of the fused image is controllable by setting the target GS value in the regularization term. This is useful because a larger target GS yields a clearer fused image, while a smaller target GS makes the fused image smoother and thus restrains noise. Hence, the subjective quality of the fused image can be improved whether or not the source images are polluted by noise. We obtain the fused image and registration parameters successively by minimizing the cost function with an iterative optimization method. Experimental results show that our method is effective for translation, rotation, and scale parameters in the ranges of [-2.0, 2.0] pixels, [-1.1 deg, 1.1 deg], and [0.95, 1.05], respectively, and for noise variances smaller than 300. They also demonstrate that our method yields a more visually pleasing fused image and higher registration accuracy compared with a state-of-the-art algorithm.
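
    The role of the GS regularization can be sketched as a penalty term added to a data-fidelity cost. The functional below is a simplified stand-in for the paper's ML formulation (sum-of-squares fidelity instead of the full likelihood), and the step-edge test images, λ, and target values are assumptions:

    ```python
    import numpy as np

    def gradient_strength(img):
        """Mean gradient magnitude of an image: the 'GS' being regularized."""
        gy, gx = np.gradient(img.astype(float))
        return np.mean(np.hypot(gx, gy))

    def gs_regularized_cost(fused, sources, target_gs, lam):
        """Fidelity of the fused image to the (registered) sources, plus a
        penalty steering the fused image's gradient strength to target_gs."""
        data = sum(np.mean((fused - s) ** 2) for s in sources)
        return data + lam * (gradient_strength(fused) - target_gs) ** 2

    rng = np.random.default_rng(2)
    base = np.zeros((32, 32)); base[:, 16:] = 1.0       # step-edge scene
    sources = [base + 0.05 * rng.normal(size=base.shape) for _ in range(2)]
    fused = 0.5 * (sources[0] + sources[1])
    gs_now = gradient_strength(fused)
    cost_match = gs_regularized_cost(fused, sources, target_gs=gs_now, lam=1.0)
    cost_far = gs_regularized_cost(fused, sources, target_gs=gs_now + 1.0, lam=1.0)
    ```

    Raising `target_gs` pushes the optimizer toward sharper fused images; lowering it suppresses noise, which is exactly the trade-off the abstract describes.
    
    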

  13. Reconstruction of a time-averaged midposition CT scan for radiotherapy planning of lung cancer patients using deformable registration.

    PubMed

    Wolthaus, J W H; Sonke, J J; van Herk, M; Damen, E M F

    2008-09-01

    Lower lobe lung tumors move with amplitudes of up to 2 cm due to respiration. To reduce respiration imaging artifacts in planning CT scans, 4D imaging techniques are used. Currently, we use a single (midventilation) frame of the 4D data set for clinical delineation of structures and radiotherapy planning. A single frame, however, often contains artifacts due to breathing irregularities, and is noisier than a conventional CT scan since the exposure per frame is lower. Moreover, the tumor may be displaced from the mean tumor position due to hysteresis. The aim of this work is to develop a framework for the acquisition of a good quality scan representing all scanned anatomy in the mean position by averaging transformed (deformed) CT frames, i.e., canceling out motion. A nonrigid registration method is necessary since motion varies over the lung. 4D and inspiration breath-hold (BH) CT scans were acquired for 13 patients. An iterative multiscale motion estimation technique was applied to the 4D CT scan, similar to optical flow but using image phase (gray-value transitions from bright to dark and vice versa) instead. From the derived 4D deformation vector field (DVF), the local mean position in the respiratory cycle was computed and the 4D DVF was modified to deform all structures of the original 4D CT scan to this mean position. A 3D midposition (MidP) CT scan was then obtained by (arithmetic or median) averaging of the deformed 4D CT scan. Image registration accuracy, tumor shape deviation with respect to the BH CT scan, and noise were determined to evaluate the image fidelity of the MidP CT scan and the performance of the technique. Accuracy of the deformable image registration method used was comparable to established automated locally rigid registration and to manual landmark registration (average difference to both methods < 0.5 mm in all directions) for the tumor region.
From visual assessment, the registration was good for the clearly visible features (e.g., tumor and diaphragm). The shape of the tumor, with respect to that of the BH CT scan, was better represented by the MidP reconstructions than any of the 4D CT frames (including MidV; reduction of "shape differences" was 66%). The MidP scans contained about one-third the noise of individual 4D CT scan frames. We implemented an accurate method to estimate the motion of structures in a 4D CT scan. Subsequently, a novel method to create a midposition CT scan (time-weighted average of the anatomy) for treatment planning with reduced noise and artifacts was introduced. Tumor shape and position in the MidP CT scan represents that of the BH CT scan better than MidV CT scan and, therefore, was found to be appropriate for treatment planning.
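
    The core idea, deforming every phase to the time-averaged position and then averaging so that motion cancels and noise drops, can be shown in one dimension. The Gaussian "feature" (think diaphragm edge), the phase shifts, and the linear-interpolation warp below are illustrative assumptions:

    ```python
    import numpy as np

    def warp_1d(signal, shift):
        """Shift a 1-D profile by linear interpolation (positive = rightward)."""
        x = np.arange(len(signal))
        return np.interp(x - shift, x, signal)

    # a sharp feature imaged at four respiratory phases
    x = np.arange(200)
    base = np.exp(-0.5 * ((x - 100.0) / 3.0) ** 2)
    phase_shifts = np.array([0.0, 6.0, 12.0, 6.0])   # feature offset per phase
    frames = [warp_1d(base, s) for s in phase_shifts]

    mid = phase_shifts.mean()                         # time-averaged position
    # deform every phase frame to the mid position, then average
    midp = np.mean([warp_1d(f, mid - s) for f, s in zip(frames, phase_shifts)],
                   axis=0)
    naive = np.mean(frames, axis=0)                   # no registration: blurred
    ```

    The registered average keeps the feature sharp at the mean position, while the naive average smears it across the motion range, which is the artifact the MidP reconstruction avoids.
    
    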

  14. SU-E-J-90: 2D/3D Registration Using KV-MV Image Pairs for Higher Accuracy Image Guided Radiotherapy.

    PubMed

    Furtado, H; Figl, M; Stock, M; Georg, D; Birkfellner, W

    2012-06-01

    In this work, we investigate the impact of using paired portal mega-voltage (MV) and kilo-voltage (kV) images on 2D/3D registration accuracy, with the purpose of improving tumor motion tracking during radiotherapy. Tumor motion tracking is important as motion remains one of the biggest sources of uncertainty in dose application. 2D/3D registration is successfully used in online tumor motion tracking; nevertheless, one limitation of this technique is the inability to resolve movement along the imaging beam axis using only one projection image. Our evaluation consisted of comparing the accuracy of registration using different 2D image combinations: only one 2D image (1-kV), one kV and one MV image (1kV-1MV), and two kV images (2-kV). For each of the image combinations we evaluated the registration results using 250 starting points as initial displacements from the gold standard. We measured the final mean target registration error (mTRE) and the success rate for each registration. Each of the combinations was evaluated using four different merit functions. When using the MI merit function (a popular choice for this application), the RMS mTRE drops from 6.4 mm when using only one image to 2.1 mm when using image pairs. The success rate increases from 62% to 99.6%. A similar trend was observed for all four merit functions. Typically, the results are slightly better with 2-kV images than with 1kV-1MV. We evaluated the impact of using different image combinations on the accuracy of 2D/3D registration for tumor motion monitoring. Our results show that using a kV-MV image pair leads to improved results, as motion can be accurately resolved in six degrees of freedom. Given the possibility of acquiring these two images simultaneously, this is not only very workflow-efficient but also shown to be a good approach to improve registration accuracy. © 2012 American Association of Physicists in Medicine.

  15. Use of image registration and fusion algorithms and techniques in radiotherapy: Report of the AAPM Radiation Therapy Committee Task Group No. 132.

    PubMed

    Brock, Kristy K; Mutic, Sasa; McNutt, Todd R; Li, Hua; Kessler, Marc L

    2017-07-01

    Image registration and fusion algorithms exist in almost every software system that creates or uses images in radiotherapy. Most treatment planning systems support some form of image registration and fusion to allow the use of multimodality and time-series image data, and even anatomical atlases, to assist in target volume and normal tissue delineation. Treatment delivery systems perform registration and fusion between the planning images and the in-room images acquired during treatment to assist patient positioning. Advanced applications are beginning to support daily dose assessment and enable adaptive radiotherapy, using image registration and fusion to propagate contours and accumulate dose between image data taken over the course of therapy to provide up-to-date estimates of anatomical changes and delivered dose. This information aids in the detection of anatomical and functional changes that might elicit changes in the treatment plan or prescription. As the output of the image registration process is always used as the input of another process for planning or delivery, it is important to understand and communicate the uncertainty associated with the software in general and with the result of a specific registration. Unfortunately, there is no standard mathematical formalism to perform this for real-world situations where noise, distortion, and complex anatomical variations can occur. Validation of the software systems' performance is also complicated by the lack of documentation available from commercial systems, leading to use of these systems in an undesirable 'black-box' fashion.
In view of this situation and the central role that image registration and fusion play in treatment planning and delivery, the Therapy Physics Committee of the American Association of Physicists in Medicine commissioned Task Group 132 to review current approaches and solutions for image registration (both rigid and deformable) in radiotherapy and to provide recommendations for quality assurance and quality control of these clinical processes. © 2017 American Association of Physicists in Medicine.

  16. Spatially weighted mutual information image registration for image guided radiation therapy.

    PubMed

    Park, Samuel B; Rhee, Frank C; Monroe, James I; Sohn, Jason W

    2010-09-01

    To develop a new metric for image registration that incorporates (sub)pixelwise differential importance across spatial locations, and to demonstrate its application to image guided radiation therapy (IGRT). It is well known that rigid-body image registration with mutual information depends on the size and location of the image subset on which the alignment analysis is based [the designated region of interest (ROI)]. Therefore, careful review and manual adjustment of the resulting registration are frequently necessary. Although weighted mutual information (WMI) has been investigated, those efforts could not apply differential importance to particular spatial locations, since WMI only applies the weight in the joint histogram space. The authors developed the spatially weighted mutual information (SWMI) metric by incorporating an adaptable weight function with spatial localization into mutual information. SWMI enables the user to apply the selected transform to medically "important" areas such as tumors and critical structures, so SWMI is neither dominated by, nor neglects, the neighboring structures. Since SWMI can be utilized with any form of weight function, the authors present two examples of weight functions for IGRT application: a Gaussian-shaped weight function (GW) applied at a user-defined location, and a structures-of-interest (SOI) based weight function. An image registration example using a synthesized 2D image is presented to illustrate the efficacy of SWMI. The convergence and feasibility of the registration method as applied to clinical imaging are illustrated by fusing a prostate treatment-planning CT with a clinical cone beam CT (CBCT) image set acquired for patient alignment. Forty-one trials were run to test the speed of convergence. The authors also applied SWMI registration using the two types of weight functions to two head and neck cases and a prostate case with clinically acquired CBCT/MVCT image sets. SWMI registration with a Gaussian weight function (SWMI-GW) was also tested between two different imaging modalities: CT and MRI image sets. SWMI-GW converges 10% faster than registration using mutual information with an ROI. SWMI-GW, as well as SWMI with an SOI-based weight function (SWMI-SOI), shows better compensation for the target organ's deformation and neighboring critical organs' deformation. SWMI-GW was also used to successfully fuse MRI and CT images. Rigid-body image registration using SWMI-GW and SWMI-SOI as cost functions can achieve better registration results in the designated image region(s) as well as faster convergence. With the theoretical foundation established, the authors believe SWMI could be extended to larger clinical testing.
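
    The distinction the authors draw, weighting voxels by spatial location rather than weighting joint-histogram bins, can be sketched by accumulating the joint histogram with a per-voxel weight map. This is an illustrative reading of SWMI rather than the authors' code; the Gaussian weight map, bin count, and synthetic images are assumptions:

    ```python
    import numpy as np

    def swmi(fixed, moving, weight, bins=32):
        """Mutual information whose joint histogram counts each voxel in
        proportion to a spatial weight map (e.g. a Gaussian over a target)."""
        hist, _, _ = np.histogram2d(fixed.ravel(), moving.ravel(),
                                    bins=bins, weights=weight.ravel())
        p = hist / hist.sum()
        px, py = p.sum(axis=1), p.sum(axis=0)
        nz = p > 0
        return np.sum(p[nz] * np.log(p[nz] / (px[:, None] * py[None, :])[nz]))

    # Gaussian weight centred on a hypothetical target region
    yy, xx = np.mgrid[0:64, 0:64]
    w = np.exp(-((yy - 32) ** 2 + (xx - 32) ** 2) / (2 * 8.0 ** 2))

    rng = np.random.default_rng(3)
    fixed = rng.normal(size=(64, 64))
    aligned = fixed + 0.01 * rng.normal(size=fixed.shape)
    shuffled = rng.permutation(fixed.ravel()).reshape(64, 64)
    mi_aligned = swmi(fixed, aligned, w)
    mi_shuffled = swmi(fixed, shuffled, w)
    ```

    Because the weight enters before the histogram is formed, mismatch far from the target barely affects the metric, which is what lets the registration concentrate on the clinically important region.
    
    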

  17. Influence of image registration on ADC images computed from free-breathing diffusion MRIs of the abdomen

    NASA Astrophysics Data System (ADS)

    Guyader, Jean-Marie; Bernardin, Livia; Douglas, Naomi H. M.; Poot, Dirk H. J.; Niessen, Wiro J.; Klein, Stefan

    2014-03-01

    The apparent diffusion coefficient (ADC) is an imaging biomarker providing quantitative information on the diffusion of water in biological tissues. This measurement could be of relevance in oncology drug development, but it suffers from a lack of reliability. ADC images are computed by applying a voxelwise exponential fitting to multiple diffusion-weighted MR images (DW-MRIs) acquired with different diffusion gradients. In the abdomen, respiratory motion induces misalignments in the datasets, creating visible artefacts and inducing errors in the ADC maps. We propose a multistep post-acquisition motion compensation pipeline based on 3D non-rigid registrations. It corrects for motion within each image and brings all DW-MRIs to a common image space. The method is evaluated on 10 datasets of free-breathing abdominal DW-MRIs acquired from healthy volunteers. Regions of interest (ROIs) are segmented in the right part of the abdomen and measurements are compared in the following three cases: no image processing, Gaussian blurring of the raw DW-MRIs, and registration. Results show that both blurring and registration improve the visual quality of ADC images, but compared to blurring, registration yields visually sharper images. Measurement uncertainty is reduced both by registration and blurring. For homogeneous ROIs, blurring and registration result in similar median ADCs, which are lower than without processing. In an ROI at the interface between liver and kidney, registration and blurring yield different median ADCs, suggesting that uncorrected motion introduces a bias. Our work indicates that averaging procedures on the scanner should be avoided, as they remove the opportunity to perform motion correction.
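    The voxelwise exponential fitting step is, for two or more b-values, a log-linear least-squares problem: ln S(b) = ln S0 - b*ADC. A minimal NumPy sketch (function name assumed; the motion-compensation pipeline itself is not shown):

```python
import numpy as np

def fit_adc(signals, b_values):
    """Voxelwise ADC from the monoexponential model S(b) = S0*exp(-b*ADC),
    estimated by a log-linear least-squares fit.
    signals: (n_b, *spatial) array of aligned DW-MRIs; b_values: (n_b,)."""
    b = np.asarray(b_values, float)
    logs = np.log(np.clip(signals, 1e-12, None)).reshape(len(b), -1)
    A = np.stack([np.ones_like(b), -b], axis=1)   # ln S = ln S0 - b*ADC
    coef, *_ = np.linalg.lstsq(A, logs, rcond=None)
    return coef[1].reshape(signals.shape[1:])
```

    The fit is applied after registration, so that each voxel's signals across b-values correspond to the same piece of tissue; fitting misaligned data is exactly what biases the ADC at organ interfaces.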

  18. Alternative radiation-free registration technique for image-guided pedicle screw placement in deformed cervico-thoracic segments.

    PubMed

    Kantelhardt, Sven R; Neulen, Axel; Keric, Naureen; Gutenberg, Angelika; Conrad, Jens; Giese, Alf

    2017-10-01

    Image-guided pedicle screw placement in the cervico-thoracic region is a commonly applied technique. In some patients with deformed cervico-thoracic segments, conventional or 3D fluoroscopy-based registration of image guidance may be difficult or impossible because of the anatomic/pathological conditions. Landmark-based registration has been used as an alternative, mostly registering each vertebra separately. Here we investigated a routine for landmark-based registration of rigid spinal segments as single objects, using cranial image-guidance software. After surgical exposure of the spinous processes, laminae and facet joints and fixation of a reference marker array, up to 26 predefined landmarks were acquired using a pointer. All pedicle screws were implanted using image guidance alone, and following screw placement all patients underwent postoperative CT scanning. Screw positions as well as intraoperative and clinical parameters were retrospectively analyzed. Thirteen patients received 73 pedicle screws at levels C6 to Th8. Registration of spinal segments using the cranial image guidance succeeded in all cases. Pedicle perforations were observed in 11.0% of screws; severe perforations of >2 mm occurred in 5.4%. One patient developed a transient C8 syndrome and had to be revised for deviation of the C7 pedicle screw. No other pedicle screw-related complications were observed. In selected patients suffering from pathologies of the cervico-thoracic region that impair intraoperative fluoroscopy or 3D C-arm imaging, landmark-based registration of image guidance using cranial software is a feasible, radiation-saving and safe alternative.

  19. Ultrasound guidance system for prostate biopsy

    NASA Astrophysics Data System (ADS)

    Hummel, Johann; Kerschner, Reinhard; Kaar, Marcus; Birkfellner, Wolfgang; Figl, Michael

    2017-03-01

    We designed a guidance system for prostate biopsy based on PET/MR images and 3D ultrasound (US). With our proposed method, the usual inter-modal MR-US (or CT-US in the case of PET/CT) registration can be replaced by an intra-modal 3D/3D US/US registration and an optical tracking system (OTS). On the pre-operative side, a PET/MR calibration links both hybrid modalities with an abdominal 3D US. On the interventional side, another abdominal 3D US is taken to merge the pre-operative images with the real-time 3D US via 3D/3D US/US registration. Finally, the images of a tracked trans-rectal US probe can be overlaid on the pre-operative images. For PET/MR image fusion we applied point-to-point registrations between PET and OTS and between MR and OTS, respectively. The 3D/3D US/US registration was evaluated for images taken in supine and lateral patient positions. To account for table shifts between PET/MR and US image acquisition, a table calibration procedure is presented. We found fiducial registration errors of 0.9 mm and 2.8 mm for the MR and PET calibrations, respectively. The target registration error between MR and 3D US amounted to 1.4 mm, and the error for the 3D/3D US/US registration was 3.7 mm. Furthermore, we have shown that ultrasound is applicable in an MR environment.
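    The point-to-point registrations between the tracking system and each image space are typically solved in closed form from paired fiducials via the SVD (Kabsch/Horn). A minimal sketch with an RMS fiducial-registration-error helper; the names are our assumptions, not the authors' code:

```python
import numpy as np

def rigid_register(src, dst):
    """Closed-form least-squares rigid transform (R, t) aligning paired
    3-D fiducials src -> dst (Kabsch / Horn, via the SVD)."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)             # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cd - R @ cs
    return R, t

def fiducial_registration_error(src, dst, R, t):
    """RMS distance between transformed source fiducials and targets."""
    res = np.asarray(src) @ R.T + t - np.asarray(dst)
    return float(np.sqrt((res ** 2).sum(axis=1).mean()))
```

    The fiducial registration error quantifies how well the calibration points themselves align; the clinically relevant target registration error is measured at points not used in the fit.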

  20. Pixel Perfect

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perrine, Kenneth A.; Hopkins, Derek F.; Lamarche, Brian L.

    2005-09-01

    Biologists and computer engineers at Pacific Northwest National Laboratory have specified, designed, and implemented a hardware/software system for performing real-time, multispectral image processing on a confocal microscope. This solution is intended to extend the capabilities of the microscope, enabling scientists to conduct advanced experiments on cell signaling and other kinds of protein interactions. FRET (fluorescence resonance energy transfer) techniques are used to locate and monitor protein activity. In FRET, it is critical that spectral images be precisely aligned with each other despite disturbances in the physical imaging path caused by imperfections in lenses and cameras, and expansion and contraction of materials due to temperature changes. The central importance of this work is therefore automatic image registration. This runs in a framework that guarantees real-time performance (processing pairs of 1024x1024, 8-bit images at 15 frames per second) and enables the addition of other types of advanced image processing algorithms such as image feature characterization. The supporting system architecture consists of a Visual Basic front-end containing a series of on-screen interfaces for controlling various aspects of the microscope and a script engine for automation. One of the controls is an ActiveX component written in C++ for handling the control and transfer of images. This component interfaces with a pair of LVDS image capture boards and a PCI board containing a 6-million gate Xilinx Virtex-II FPGA. Several types of image processing are performed on the FPGA in a pipelined fashion, including the image registration. The FPGA offloads work that would otherwise need to be performed by the main CPU and has a guaranteed real-time throughput. Image registration is performed in the FPGA by applying a cubic warp on one image to precisely align it with the other image. 
Before each experiment, an automated calibration procedure is run in order to set up the cubic warp. During image acquisitions, the cubic warp is evaluated by way of forward differencing. Unwanted pixelation artifacts are minimized by bilinear sampling. The resulting system is state-of-the-art for biological imaging. Precisely registered images enable the reliable use of FRET techniques. In addition, real-time image processing performance allows computed images to be fed back and displayed to scientists immediately, and the pipelined nature of the FPGA allows additional image processing algorithms to be incorporated into the system without slowing throughput.
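    Forward differencing is what makes per-pixel evaluation of a cubic warp cheap in a pipelined FPGA: after initialisation, each successive sample costs three additions and no multiplies. A 1-D software sketch of the idea (pure Python, names assumed):

```python
def cubic_forward_diff(a3, a2, a1, a0, n, h=1.0):
    """Evaluate p(x) = a3*x**3 + a2*x**2 + a1*x + a0 at x = 0, h, 2h, ...
    by forward differencing: after setup, each sample needs only three
    additions, which is why it maps so well onto pipelined hardware."""
    p = a0
    d1 = a3 * h**3 + a2 * h**2 + a1 * h   # first forward difference at x=0
    d2 = 6 * a3 * h**3 + 2 * a2 * h**2    # second forward difference at x=0
    d3 = 6 * a3 * h**3                    # third difference (constant)
    out = []
    for _ in range(n):
        out.append(p)
        p, d1, d2 = p + d1, d1 + d2, d2 + d3
    return out
```

    In the 2-D warp the same recurrence runs along each scanline, with the per-line initial values themselves updated by forward differences between lines.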

  1. Three modality image registration of brain SPECT/CT and MR images for quantitative analysis of dopamine transporter imaging

    NASA Astrophysics Data System (ADS)

    Yamaguchi, Yuzuho; Takeda, Yuta; Hara, Takeshi; Zhou, Xiangrong; Matsusako, Masaki; Tanaka, Yuki; Hosoya, Kazuhiko; Nihei, Tsutomu; Katafuchi, Tetsuro; Fujita, Hiroshi

    2016-03-01

    Important features of Parkinson's disease (PD) are degeneration and loss of dopamine neurons in the corpus striatum. 123I-FP-CIT can visualize the activity of these dopamine neurons. The activity ratio of background to corpus striatum is used for the diagnosis of PD and Dementia with Lewy Bodies (DLB). The specific activity can be observed in the corpus striatum on SPECT images, but the location and shape of the corpus striatum are often lost on SPECT images alone because of the low uptake. In contrast, MR images can visualize the location of the corpus striatum. The purpose of this study was to realize a quantitative image analysis of SPECT images by using an image registration technique with brain MR images that can determine the region of the corpus striatum. In this study, SPECT and MR images were fused via the intervening CT image acquired by the SPECT/CT scanner. Mutual information (MI) was used for the registration between the CT and MR images. Six SPECT/CT and four MR scans of phantom materials were taken with varying orientations. As a result, 16 of the 24 registration combinations were registered within 1.3 mm. By applying the approach to 32 clinical SPECT/CT and MR cases, all of the cases were registered within 0.86 mm. In conclusion, our registration method shows potential for superimposing MR images on SPECT images.
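    Fusing SPECT with MR through the intervening CT amounts to composing two transforms: the hardware SPECT-to-CT alignment and the MI-driven CT-to-MR registration. A minimal homogeneous-coordinate sketch (hypothetical names; real transforms would be full rigid 4x4 matrices estimated from the data, translations are used here only for brevity):

```python
import numpy as np

def translation(t):
    """Homogeneous 4x4 translation (stand-in for a full rigid transform)."""
    T = np.eye(4)
    T[:3, 3] = t
    return T

def to_mr(point_spect, t_ct_from_spect, t_mr_from_ct):
    """Map a SPECT-space point into MR space via the intermediate CT,
    composing the SPECT->CT alignment with the CT->MR registration."""
    p = np.append(point_spect, 1.0)           # homogeneous coordinates
    return (t_mr_from_ct @ t_ct_from_spect @ p)[:3]
```

    Because the SPECT and CT volumes come from the same scanner session, the first transform is essentially known, and only the better-conditioned CT-to-MR problem has to be solved by optimisation.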

  2. SU-E-J-47: Comparison of Online Image Registrations of Varian TrueBeam Cone-Beam CT and BrainLab ExacTrac Imaging Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, J; Shi, W; Andrews, D

    2015-06-15

    Purpose To compare online image registrations of TrueBeam cone-beam CT (CBCT) and BrainLab ExacTrac imaging systems. Methods Tests were performed on a Varian TrueBeam STx linear accelerator (Version 2.0), which is integrated with a BrainLab ExacTrac imaging system (Version 6.0.5). The study was focused on comparing the online image registrations for translational shifts. A Rando head phantom was placed on the treatment couch and immobilized with a BrainLab mask. The phantom was shifted by moving the couch translationally over 8 mm with a step size of 1 mm, in the vertical, longitudinal, and lateral directions, respectively. At each location, the phantom was imaged with CBCT and ExacTrac x-ray. CBCT images were registered with the TrueBeam and ExacTrac online registration algorithms, respectively, and ExacTrac x-ray image registrations were performed. Shifts calculated from the different registrations were compared with the nominal couch shifts. Results The averages and ranges of absolute differences between couch shifts and calculated phantom shifts obtained from ExacTrac x-ray registration, ExacTrac CBCT registration with default window, ExacTrac CBCT registration with adjusted window (bone), TrueBeam CBCT registration with bone window, and TrueBeam CBCT registration with soft tissue window, were: 0.07 (0.02–0.14), 0.14 (0.01–0.35), 0.12 (0.02–0.28), 0.09 (0–0.20), and 0.06 (0–0.10) mm, in the vertical direction; 0.06 (0.01–0.12), 0.27 (0.07–0.57), 0.23 (0.02–0.48), 0.04 (0–0.10), and 0.08 (0–0.20) mm, in the longitudinal direction; 0.05 (0.01–0.21), 0.35 (0.14–0.80), 0.25 (0.01–0.56), 0.19 (0–0.40), and 0.20 (0–0.40) mm, in the lateral direction. Conclusion The shifts calculated from ExacTrac x-ray and TrueBeam CBCT registrations were close to each other (the differences between them were less than 0.40 mm in any direction), and had better agreement with couch shifts than those from ExacTrac CBCT registrations. 
There were no significant differences between TrueBeam CBCT registrations using different windows. In ExacTrac CBCT registrations, using the bone window led to better agreement than using the default window.

  3. An incompressible fluid flow model with mutual information for MR image registration

    NASA Astrophysics Data System (ADS)

    Tsai, Leo; Chang, Herng-Hua

    2013-03-01

    Image registration is one of the fundamental and essential tasks within image processing. It is the process of determining the correspondence between structures in two images, called the template image and the reference image, respectively. The challenge of registration is to find an optimal geometric transformation between corresponding image data. This paper develops a new MR image registration algorithm that uses a closed incompressible viscous fluid model associated with mutual information. In our approach, we treat the image pixels as the fluid elements of a viscous fluid flow governed by the nonlinear Navier-Stokes partial differential equation (PDE). We replace the pressure term with a body force that guides the transformation through a weighting coefficient, which is expressed by the mutual information between the template and reference images. To solve this modified Navier-Stokes PDE, we adopted the fast numerical techniques proposed by Seibold [1]. The registration process of updating the body force, the velocity and the deformation fields is repeated until the mutual information weight reaches a prescribed threshold. We applied our approach to the BrainWeb and real MR images. Consistent with the theory of the proposed fluid model, we found that our method accurately transformed the template images into the reference images based on the intensity flow. Experimental results indicate that our method has potential in a wide variety of medical image registration applications.

  4. Research based on the SoPC platform of feature-based image registration

    NASA Astrophysics Data System (ADS)

    Shi, Yue-dong; Wang, Zhi-hui

    2015-12-01

    This paper focuses on implementing feature-based image registration on a System on a Programmable Chip (SoPC) hardware platform. We implement the image registration algorithm on an FPGA chip, in which the embedded soft-core Nios II processor speeds up the image processing system. In this way, image registration no longer depends on a PC, which should allow the technique to be used far more widely. The experimental results indicate that our system shows stable performance, particularly in the matching stage, which has good noise immunity, and that the detected feature points are reasonably distributed across the images.

  5. CLASSIFYING MEDICAL IMAGES USING MORPHOLOGICAL APPEARANCE MANIFOLDS.

    PubMed

    Varol, Erdem; Gaonkar, Bilwaj; Davatzikos, Christos

    2013-12-31

    Input features for medical image classification algorithms are extracted from raw images using a series of preprocessing steps. One common preprocessing step in computational neuroanatomy and functional brain mapping is the nonlinear registration of raw images to a common template space. Typically, the registration methods used are parametric and their output varies greatly with changes in parameters. Most results reported previously perform registration using a fixed parameter setting and use the results as input to the subsequent classification step. The variation in registration results due to the choice of parameters thus translates into variation in the performance of the classifiers that depend on the registration step for input. Analogous issues have been investigated in the computer vision literature, where image appearance varies with pose and illumination, thereby making classification vulnerable to these confounding parameters. The proposed methodology addresses this issue by sampling image appearances as registration parameters vary, and shows that better classification accuracies can be obtained this way, compared to the conventional approach.

  6. SU-D-BRA-04: Computerized Framework for Marker-Less Localization of Anatomical Feature Points in Range Images Based On Differential Geometry Features for Image-Guided Radiation Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Soufi, M; Arimura, H; Toyofuku, F

    Purpose: To propose a computerized framework for localization of anatomical feature points on the patient surface in infrared-ray based range images by using differential geometry (curvature) features. Methods: The general concept was to reconstruct the patient surface by using a mathematical modeling technique for the computation of differential geometry features that characterize the local shapes of the patient surfaces. A region of interest (ROI) was firstly extracted based on a template matching technique applied on amplitude (grayscale) images. The extracted ROI was preprocessed for reducing temporal and spatial noises by using Kalman and bilateral filters, respectively. Next, a smooth patient surface was reconstructed by using a non-uniform rational basis spline (NURBS) model. Finally, differential geometry features, i.e. the shape index and curvedness features were computed for localizing the anatomical feature points. The proposed framework was trained for optimizing shape index and curvedness thresholds and tested on range images of an anthropomorphic head phantom. The range images were acquired by an infrared ray-based time-of-flight (TOF) camera. The localization accuracy was evaluated by measuring the mean of minimum Euclidean distances (MMED) between reference (ground truth) points and the feature points localized by the proposed framework. The evaluation was performed for points localized on convex regions (e.g. apex of nose) and concave regions (e.g. nasofacial sulcus). Results: The proposed framework has localized anatomical feature points on convex and concave anatomical landmarks with MMEDs of 1.91±0.50 mm and 3.70±0.92 mm, respectively. A statistically significant difference was obtained between the feature points on the convex and concave regions (P<0.001). Conclusion: Our study has shown the feasibility of differential geometry features for localization of anatomical feature points on the patient surface in range images. 
The proposed framework might be useful for tasks involving feature-based image registration in range-image guided radiation therapy.
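    The two differential-geometry features follow directly from the principal curvatures k1 >= k2: Koenderink's shape index S = (2/pi)*arctan((k1 + k2)/(k1 - k2)), which is +1 on convex caps (apex of the nose) and -1 in concave cups (sulci), and the curvedness C = sqrt((k1^2 + k2^2)/2). A minimal NumPy sketch (names assumed):

```python
import numpy as np

def shape_index_curvedness(k1, k2):
    """Koenderink shape index S = (2/pi)*arctan((k1 + k2)/(k1 - k2))
    with k1 >= k2 (S = +1 on convex caps, -1 in concave cups) and
    curvedness C = sqrt((k1**2 + k2**2)/2) from principal curvatures."""
    k1, k2 = np.maximum(k1, k2), np.minimum(k1, k2)
    s = (2 / np.pi) * np.arctan2(k1 + k2, k1 - k2)  # arctan2 handles k1 == k2
    c = np.sqrt((k1 ** 2 + k2 ** 2) / 2)
    return s, c
```

    Thresholding S then separates convex landmarks such as the nose apex from concave ones such as the nasofacial sulcus, with C filtering out near-flat regions.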

  7. Supervoxels for graph cuts-based deformable image registration using guided image filtering

    NASA Astrophysics Data System (ADS)

    Szmul, Adam; Papież, Bartłomiej W.; Hallack, Andre; Grau, Vicente; Schnabel, Julia A.

    2017-11-01

    We propose combining a supervoxel-based image representation with the concept of graph cuts as an efficient optimization technique for three-dimensional (3-D) deformable image registration. Due to the pixels/voxels-wise graph construction, the use of graph cuts in this context has been mainly limited to two-dimensional (2-D) applications. However, our work overcomes some of the previous limitations by posing the problem on a graph created by adjacent supervoxels, where the number of nodes in the graph is reduced from the number of voxels to the number of supervoxels. We demonstrate how a supervoxel image representation combined with graph cuts-based optimization can be applied to 3-D data. We further show that the application of a relaxed graph representation of the image, followed by guided image filtering over the estimated deformation field, allows us to model "sliding motion." Applying this method to lung image registration results in highly accurate image registration and anatomically plausible estimations of the deformations. Evaluation of our method on a publicly available computed tomography lung image dataset leads to the observation that our approach compares very favorably with state-of-the-art methods in continuous and discrete image registration, achieving a target registration error of 1.16 mm on average per landmark.
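    The guided-filtering step can be sketched with the standard guided filter of He et al. applied to one component of the deformation field, using the anatomical image as the guide: where the guide is flat the field is smoothed, and where the guide has an edge (e.g. the lung border) discontinuities survive, which is what permits sliding motion. A minimal NumPy sketch with a wrap-around box filter (names and radius are assumptions, not the authors' implementation):

```python
import numpy as np

def box(a, r):
    """Mean filter of radius r (wrap-around boundaries, for brevity)."""
    out = np.zeros_like(a, dtype=float)
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            out += np.roll(np.roll(a, dy, axis=0), dx, axis=1)
    return out / (2 * r + 1) ** 2

def guided_filter(guide, field, r=2, eps=1e-4):
    """Edge-preserving smoothing of one deformation-field component,
    guided by the anatomical image: smooth where the guide is flat,
    discontinuous where the guide has edges (sliding motion)."""
    mI, mp = box(guide, r), box(field, r)
    cov = box(guide * field, r) - mI * mp     # local guide/field covariance
    var = box(guide * guide, r) - mI * mI     # local guide variance
    a = cov / (var + eps)                     # ~0 in flat regions
    b = mp - a * mI
    return box(a, r) * guide + box(b, r)
```

    In the full method this is applied to a 3-D field; the 2-D version above keeps the sketch short while preserving the edge-transfer behaviour.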

  8. Supervoxels for Graph Cuts-Based Deformable Image Registration Using Guided Image Filtering.

    PubMed

    Szmul, Adam; Papież, Bartłomiej W; Hallack, Andre; Grau, Vicente; Schnabel, Julia A

    2017-10-04

    In this work we propose to combine a supervoxel-based image representation with the concept of graph cuts as an efficient optimization technique for 3D deformable image registration. Due to the pixels/voxels-wise graph construction, the use of graph cuts in this context has been mainly limited to 2D applications. However, our work overcomes some of the previous limitations by posing the problem on a graph created by adjacent supervoxels, where the number of nodes in the graph is reduced from the number of voxels to the number of supervoxels. We demonstrate how a supervoxel image representation, combined with graph cuts-based optimization, can be applied to 3D data. We further show that the application of a relaxed graph representation of the image, followed by guided image filtering over the estimated deformation field, allows us to model 'sliding motion'. Applying this method to lung image registration results in highly accurate image registration and anatomically plausible estimations of the deformations. Evaluation of our method on a publicly available Computed Tomography lung image dataset (www.dir-lab.com) leads to the observation that our new approach compares very favorably with state-of-the-art continuous and discrete image registration methods, achieving a Target Registration Error of 1.16 mm on average per landmark.

  9. Supervoxels for Graph Cuts-Based Deformable Image Registration Using Guided Image Filtering

    PubMed Central

    Szmul, Adam; Papież, Bartłomiej W.; Hallack, Andre; Grau, Vicente; Schnabel, Julia A.

    2017-01-01

    In this work we propose to combine a supervoxel-based image representation with the concept of graph cuts as an efficient optimization technique for 3D deformable image registration. Due to the pixels/voxels-wise graph construction, the use of graph cuts in this context has been mainly limited to 2D applications. However, our work overcomes some of the previous limitations by posing the problem on a graph created by adjacent supervoxels, where the number of nodes in the graph is reduced from the number of voxels to the number of supervoxels. We demonstrate how a supervoxel image representation, combined with graph cuts-based optimization, can be applied to 3D data. We further show that the application of a relaxed graph representation of the image, followed by guided image filtering over the estimated deformation field, allows us to model ‘sliding motion’. Applying this method to lung image registration results in highly accurate image registration and anatomically plausible estimations of the deformations. Evaluation of our method on a publicly available Computed Tomography lung image dataset (www.dir-lab.com) leads to the observation that our new approach compares very favorably with state-of-the-art continuous and discrete image registration methods, achieving a Target Registration Error of 1.16 mm on average per landmark. PMID:29225433

  10. The plant virus microscope image registration method based on mismatches removing.

    PubMed

    Wei, Lifang; Zhou, Shucheng; Dong, Heng; Mao, Qianzhuo; Lin, Jiaxiang; Chen, Riqing

    2016-01-01

    Electron microscopy is one of the major means of observing viruses. The field of view of virus microscope images is limited by specimen preparation and by the size of the camera's field of view. To solve this problem, the virus sample is prepared as multiple slices for information fusion, and image registration techniques are applied to obtain a large field of view and whole sections. Image registration techniques have been developed over the past decades for increasing the camera's field of view. Nevertheless, these approaches typically work in batch mode and rely on motorized microscopes. Alternatively, the methods are conceived just to provide visually pleasant registration for image sequences with high overlap ratios. This work presents a method for virus microscope image registration with detailed visual information and subpixel accuracy, even when the overlap ratio of the image sequence is 10% or less. The proposed method focuses on the correspondence set and the inter-image transformation. A mismatch removal strategy based on spatial consistency and the components of the keypoints is proposed to enrich the correspondence set, and the translation model parameters as well as tonal inhomogeneities are corrected by hierarchical estimation and model selection. In the experiments performed, we tested different registration approaches and virus images, confirming that the translation model is not always stationary, despite the fact that the images of the sample come from the same sequence. The mismatch removal strategy makes subpixel-accurate registration of virus microscope images easier, and the parameters chosen by the hierarchical estimation and model selection strategies make the proposed method precise and reliable for image sequences with low overlap ratios. Copyright © 2015 Elsevier Ltd. All rights reserved.
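    A simple form of the spatial-consistency idea for a near-translational tile pair: keep only the putative matches whose displacement vector agrees with the median displacement. This is a minimal sketch of the concept (the names and the pure-translation assumption are ours, not the paper's exact strategy):

```python
import numpy as np

def remove_mismatches(src_pts, dst_pts, tol=2.0):
    """Spatial-consistency check for tile registration: under a (near-)pure
    translation, every correct match shares the same displacement, so
    matches far from the median displacement are rejected."""
    d = np.asarray(dst_pts, float) - np.asarray(src_pts, float)
    med = np.median(d, axis=0)                 # robust translation estimate
    keep = np.linalg.norm(d - med, axis=1) <= tol
    return keep, med
```

    With low-overlap tiles the correspondence set is small, so rejecting even a few gross mismatches noticeably changes the estimated translation.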

  11. Validation of elastic registration algorithms based on adaptive irregular grids for medical applications

    NASA Astrophysics Data System (ADS)

    Franz, Astrid; Carlsen, Ingwer C.; Renisch, Steffen; Wischmann, Hans-Aloys

    2006-03-01

    Elastic registration of medical images is an active field of current research. Registration algorithms have to be validated in order to show that they fulfill the requirements of a particular clinical application. Furthermore, validation strategies compare the performance of different registration algorithms and can hence judge which algorithm is best suited for a target application. In the literature, validation strategies for rigid registration algorithms have been analyzed. For a known ground truth they assess the displacement error at a few landmarks, which is not sufficient for elastic transformations described by a huge number of parameters. Hence we consider the displacement error averaged over all pixels in the whole image or in a region-of-interest of clinical relevance. Using artificially but realistically deformed images of the application domain, we use this quality measure to analyze an elastic registration based on transformations defined on adaptive irregular grids for the following clinical applications: Magnetic Resonance (MR) images of freely moving joints for orthopedic investigations, thoracic Computed Tomography (CT) images for the detection of pulmonary embolisms, and transmission images as used for the attenuation correction and registration of independently acquired Positron Emission Tomography (PET) and CT images. The definition of a region-of-interest makes it possible to restrict the analysis of registration accuracy to clinically relevant image areas. The behaviour of the displacement error as a function of the number of transformation control points and their placement can be used to identify the best strategy for the initial placement of the control points.
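    The proposed quality measure, the displacement error averaged over all pixels of the image or of a clinically relevant ROI, is straightforward to compute when the ground-truth deformation is known. A minimal NumPy sketch for 2-D fields stored as (2, H, W) arrays (names assumed):

```python
import numpy as np

def mean_displacement_error(u_est, u_true, roi=None):
    """Per-pixel Euclidean displacement error between an estimated and a
    ground-truth deformation field of shape (2, H, W), averaged over the
    whole image or over a boolean ROI mask of shape (H, W)."""
    err = np.linalg.norm(u_est - u_true, axis=0)   # error magnitude map
    if roi is not None:
        err = err[roi]
    return float(err.mean())
```

    Restricting the average to an ROI keeps large but clinically irrelevant errors (e.g. in the background) from masking the accuracy where it matters.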

  12. Subject-Specific Sparse Dictionary Learning for Atlas-Based Brain MRI Segmentation.

    PubMed

    Roy, Snehashis; He, Qing; Sweeney, Elizabeth; Carass, Aaron; Reich, Daniel S; Prince, Jerry L; Pham, Dzung L

    2015-09-01

    Quantitative measurements from segmentations of human brain magnetic resonance (MR) images provide important biomarkers for normal aging and disease progression. In this paper, we propose a patch-based tissue classification method from MR images that uses a sparse dictionary learning approach and atlas priors. Training data for the method consists of an atlas MR image, prior information maps depicting where different tissues are expected to be located, and a hard segmentation. Unlike most atlas-based classification methods that require deformable registration of the atlas priors to the subject, only affine registration is required between the subject and training atlas. A subject-specific patch dictionary is created by learning relevant patches from the atlas. Then the subject patches are modeled as sparse combinations of learned atlas patches leading to tissue memberships at each voxel. The combination of prior information in an example-based framework enables us to distinguish tissues having similar intensities but different spatial locations. We demonstrate the efficacy of the approach on the application of whole-brain tissue segmentation in subjects with healthy anatomy and normal pressure hydrocephalus, as well as lesion segmentation in multiple sclerosis patients. For each application, quantitative comparisons are made against publicly available state-of-the art approaches.

  13. Deformable Medical Image Registration: A Survey

    PubMed Central

    Sotiras, Aristeidis; Davatzikos, Christos; Paragios, Nikos

    2013-01-01

    Deformable image registration is a fundamental task in medical image processing. Among its most important applications, one may cite: i) multi-modality fusion, where information acquired by different imaging devices or protocols is fused to facilitate diagnosis and treatment planning; ii) longitudinal studies, where temporal structural or anatomical changes are investigated; and iii) population modeling and statistical atlases used to study normal anatomical variability. In this paper, we attempt to give an overview of deformable registration methods, putting emphasis on the most recent advances in the domain. Additional emphasis has been given to techniques applied to medical images. In order to study image registration methods in depth, their main components are identified and studied independently. The most recent techniques are presented in a systematic fashion. The contribution of this paper is to provide an extensive account of registration techniques in a systematic manner. PMID:23739795

  14. SU-E-J-89: Deformable Registration Method Using B-TPS in Radiotherapy.

    PubMed

    Xie, Y

    2012-06-01

    A novel deformable registration method for four-dimensional computed tomography (4DCT) images is developed in radiation therapy. The proposed method combines the thin plate spline (TPS) and B-spline together to achieve high accuracy and high efficiency. The method consists of two steps. First, TPS is used as a global registration method to deform large unfit regions in the moving image to match their counterparts in the reference image. Then B-spline is used for local registration, and the previously deformed moving image is further deformed to match the reference image more accurately. Two clinical CT image sets, one pair of lung and one pair of liver images, are registered using the proposed algorithm, which results in a tremendous improvement in both run time and registration quality compared with conventional methods using either TPS or B-spline alone. The proposed method combines the efficiency of TPS with the accuracy of B-spline, performing adaptively and robustly in the registration of clinical 4DCT images. © 2012 American Association of Physicists in Medicine.
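    The global TPS step interpolates control-point displacements exactly by solving the standard thin-plate-spline linear system with kernel U(r) = r^2 log r^2. A minimal 2-D NumPy sketch (names assumed; the B-spline refinement stage is not shown):

```python
import numpy as np

def tps_kernel(r2):
    """TPS radial basis U(r) = r^2 * log(r^2), defined as 0 at r = 0."""
    return np.where(r2 == 0, 0.0, r2 * np.log(np.maximum(r2, 1e-300)))

def tps_fit(ctrl, disp):
    """Solve the TPS system so the warp reproduces the given displacements
    exactly at the control points (the global registration step)."""
    n = len(ctrl)
    d2 = ((ctrl[:, None, :] - ctrl[None, :, :]) ** 2).sum(-1)
    K = tps_kernel(d2)
    P = np.hstack([np.ones((n, 1)), ctrl])     # affine part
    A = np.zeros((n + 3, n + 3))
    A[:n, :n], A[:n, n:], A[n:, :n] = K, P, P.T
    rhs = np.zeros((n + 3, disp.shape[1]))
    rhs[:n] = disp
    return np.linalg.solve(A, rhs)

def tps_eval(ctrl, coef, pts):
    """Displacement at arbitrary points from the fitted coefficients."""
    d2 = ((pts[:, None, :] - ctrl[None, :, :]) ** 2).sum(-1)
    affine = np.hstack([np.ones((len(pts), 1)), pts]) @ coef[len(ctrl):]
    return tps_kernel(d2) @ coef[:len(ctrl)] + affine
```

    Because the TPS is global (every control point influences every pixel), it handles large-scale misalignment cheaply, leaving only residual local deformation for the denser B-spline stage.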

  15. MRI and CBCT image registration of temporomandibular joint: a systematic review.

    PubMed

    Al-Saleh, Mohammed A Q; Alsufyani, Noura A; Saltaji, Humam; Jaremko, Jacob L; Major, Paul W

    2016-05-10

    The purpose of the present review is to systematically and critically analyze the available literature regarding the importance, applicability, and practicality of magnetic resonance imaging (MRI), computerized tomography (CT), or cone-beam CT (CBCT) image registration for TMJ anatomy and assessment. A systematic search of 4 databases (MEDLINE, EMBASE, EBM Reviews, and Scopus) was conducted by 2 reviewers, and an additional manual search of the bibliographies was performed. All articles discussing MRI and CT or CBCT image registration for temporomandibular joint (TMJ) visualization or assessment were included. Only 3 articles satisfied the inclusion criteria, all published within the last 7 years. Two articles described MRI-to-CT multimodality image registration as a complementary tool to visualize the TMJ; both used images of only one patient to introduce the concept of the fused MRI-CT image. One article assessed the reliability of using MRI-CBCT registration to evaluate TMJ disc position and osseous pathology in 10 temporomandibular disorder (TMD) patients. Studies of MRI-CT/CBCT registration are too limited to support a conclusion regarding its accuracy or clinical use in the temporomandibular joint.

  16. WHOLE BODY NONRIGID CT-PET REGISTRATION USING WEIGHTED DEMONS.

    PubMed

    Suh, J W; Kwon, Oh-K; Scheinost, D; Sinusas, A J; Cline, Gary W; Papademetris, X

    2011-03-30

    We present a new registration method for whole-body rat computed tomography (CT) and positron emission tomography (PET) images using a weighted demons algorithm. The CT and PET images are acquired on separate scanners at different times, and the inherent differences in the imaging protocols produce significant nonrigid changes between the two acquisitions in addition to heterogeneous image characteristics. In this situation, we utilize both the transmission-PET and the emission-PET images in the deformable registration process, emphasizing particular regions of the moving transmission-PET image using the emission-PET image. We validated our results on nine rat image sets using the M-Hausdorff distance similarity measure, demonstrating improved performance compared to standard methods such as demons and normalized mutual information-based nonrigid FFD registration.
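The core of any demons variant is Thirion's per-voxel force, which a weight image can modulate. The abstract does not give the authors' exact weighting rule, so the scheme below is an illustrative sketch, not their formulation:

```python
import numpy as np

def demons_force(fixed, moving, weight, eps=1e-12):
    """One demons update field (Thirion's rule), scaled per pixel by a
    weight image. The weight plays the role of the emission-PET emphasis
    map in the paper; here it is just any array in [0, 1]."""
    diff = moving - fixed
    gy, gx = np.gradient(fixed)                  # fixed-image gradient
    denom = gx ** 2 + gy ** 2 + diff ** 2 + eps  # demons normalization
    ux = weight * diff * gx / denom
    uy = weight * diff * gy / denom
    return ux, uy

f = np.zeros((8, 8)); f[2:6, 2:6] = 1.0   # fixed: centered square
m = np.zeros((8, 8)); m[2:6, 3:7] = 1.0   # moving: square shifted right
w = np.ones_like(f)
ux, uy = demons_force(f, m, w)
print(ux.shape)  # (8, 8)
```

In a full registration this force would be smoothed (e.g. Gaussian) and composed iteratively; with weight zero the field vanishes, which is how region emphasis works.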

  17. 3D Point Cloud Model Colorization by Dense Registration of Digital Images

    NASA Astrophysics Data System (ADS)

    Crombez, N.; Caron, G.; Mouaddib, E.

    2015-02-01

    Architectural heritage is a historic and artistic property which has to be protected, preserved, restored and shown to the public. Modern tools like 3D laser scanners are more and more used in heritage documentation. Most of the time, the 3D laser scanner is complemented by a digital camera which is used to enrich the accurate geometric information with the scanned objects' colors. However, the photometric quality of the acquired point clouds is generally rather low because of several problems presented below. We propose an accurate method for registering digital images acquired from arbitrary viewpoints onto point clouds, a crucial step for good colorization by color projection. We express this image-to-geometry registration as a pose estimation problem. The camera pose is computed using the entire image intensities under a photometric virtual visual servoing (VVS) framework. The camera extrinsic and intrinsic parameters are automatically estimated. Because we estimate the intrinsic parameters, we do not need any information about the camera that took the digital image. Finally, when the point cloud model and the digital image are correctly registered, we project the 3D model into the digital image frame and assign new colors to the visible points. The performance of the approach is proven in simulation and in real experiments on indoor and outdoor datasets of the cathedral of Amiens, which highlight the success of our method, leading to point clouds with better photometric quality and resolution.
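The final colorization step (projecting the registered 3D model into the image and sampling colors) can be sketched with a pinhole camera model. The intrinsics, extrinsics and sentinel handling below are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def colorize(points, K, R, t, image):
    """Project 3-D points through a pinhole camera and sample pixel colors.

    K: 3x3 intrinsics; R, t: world-to-camera extrinsics. Points that fall
    outside the image, or behind the camera, keep a sentinel color (-1)."""
    cam = R @ points.T + t[:, None]        # 3 x N camera coordinates
    uv = K @ cam
    uv = uv[:2] / uv[2]                    # perspective division
    h, w, _ = image.shape
    colors = np.full((points.shape[0], 3), -1.0)
    u = np.round(uv[0]).astype(int)
    v = np.round(uv[1]).astype(int)
    ok = (cam[2] > 0) & (u >= 0) & (u < w) & (v >= 0) & (v < h)
    colors[ok] = image[v[ok], u[ok]]
    return colors

K = np.array([[100., 0., 50.], [0., 100., 50.], [0., 0., 1.]])
img = np.full((100, 100, 3), 0.5)
pts3d = np.array([[0., 0., 2.], [0., 0., -1.]])  # one visible, one behind
cols = colorize(pts3d, K, np.eye(3), np.zeros(3), img)
print(cols[0])  # [0.5 0.5 0.5]; cols[1] keeps the sentinel
```

A real pipeline would add visibility testing (z-buffering) so occluded points are not colored through surfaces.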

  18. Avoiding Stair-Step Artifacts in Image Registration for GOES-R Navigation and Registration Assessment

    NASA Technical Reports Server (NTRS)

    Grycewicz, Thomas J.; Tan, Bin; Isaacson, Peter J.; De Luccia, Frank J.; Dellomo, John

    2016-01-01

    In developing software for independent verification and validation (IVV) of the Image Navigation and Registration (INR) capability for the Geostationary Operational Environmental Satellite R Series (GOES-R) Advanced Baseline Imager (ABI), we have encountered an image registration artifact which limits the accuracy of image offset estimation at the subpixel scale using image correlation. Where the two images to be registered have the same pixel size, subpixel image registration preferentially selects registration values at which the image pixel boundaries are close to lined up. Because of the shape of the curve plotting estimated offset against input displacement, we call this a stair-step artifact. When one image is at a higher resolution than the other, the stair-step artifact is minimized by correlating at the higher resolution. For validating ABI image navigation, GOES-R images are correlated with Landsat-based ground truth maps. To create the ground truth map, the Landsat image is first transformed to the perspective seen from the GOES-R satellite, and then scaled to an appropriate pixel size. Minimizing processing time motivates choosing map pixels of the same size as the GOES-R pixels. At this pixel size the shift estimation is efficient, but the stair-step artifact is present. If the map pixel is very small, stair-step is not a problem, but image correlation is computation-intensive. This paper describes simulation-based selection of the scale of truth maps for registering GOES-R ABI images.
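Subpixel offsets from a correlation curve are commonly obtained by fitting a parabola through the three samples around the integer peak; biases of such fits are one way stair-step artifacts arise. A minimal 1-D sketch (the function name is ours, not from the paper):

```python
import numpy as np

def subpixel_peak(corr):
    """Refine the integer argmax of a correlation curve with a 3-point
    parabola fit -- the standard route to subpixel shift estimates."""
    i = int(np.argmax(corr))
    y0, y1, y2 = corr[i - 1], corr[i], corr[i + 1]
    # Vertex of the parabola through (i-1, y0), (i, y1), (i+1, y2)
    return i + 0.5 * (y0 - y2) / (y0 - 2.0 * y1 + y2)

# Samples of a parabola peaking at 3.4; the fit recovers it exactly.
corr = np.array([-(k - 3.4) ** 2 for k in range(8)])
print(subpixel_peak(corr))  # ≈ 3.4
```

For real correlation surfaces the curve is not exactly parabolic, so the estimate is biased toward integer shifts, which is the stair-step behavior the abstract describes.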

  19. Feature and Intensity Based Medical Image Registration Using Particle Swarm Optimization.

    PubMed

    Abdel-Basset, Mohamed; Fakhry, Ahmed E; El-Henawy, Ibrahim; Qiu, Tie; Sangaiah, Arun Kumar

    2017-11-03

    Image registration is an important aspect of medical image analysis and finds use in a variety of medical applications. Examples include diagnosis, pre/post-surgery guidance, and comparing/merging/integrating images from multiple modalities such as Magnetic Resonance Imaging (MRI) and Computed Tomography (CT). Whether registering images across modalities for a single patient or across patients for a single modality, registration is an effective way to combine information from different images into a normalized frame of reference. Registered datasets can be used to provide information relating to the structure, function, and pathology of the organ or individual being imaged. In this paper a hybrid approach for medical image registration is developed. It employs a modified Mutual Information (MI) similarity metric and the Particle Swarm Optimization (PSO) method. Computation of mutual information is modified using a weighted linear combination of image intensity and image gradient vector flow (GVF) intensity. In this manner, statistical as well as spatial image information is included in the registration process. Maximization of the modified mutual information is carried out with PSO, which is simple to implement and has few parameters to tune. The developed approach has been tested and verified on a number of medical image datasets that include images with missing parts, noise contamination, and/or different modalities (CT, MRI). The registration results indicate that the proposed model is accurate and effective, and show the positive contribution of including both statistical and spatial image information in the developed approach.
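The statistical half of the modified metric is ordinary mutual information computed from a joint intensity histogram. The sketch below omits the GVF gradient term and uses an illustrative bin count:

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """MI between two images from their joint intensity histogram:
    sum p(x,y) log( p(x,y) / (p(x) p(y)) )."""
    h, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p = h / h.sum()                      # joint probability
    px, py = p.sum(1), p.sum(0)          # marginals
    nz = p > 0
    return float((p[nz] * np.log(p[nz] /
                  (px[:, None] * py[None, :])[nz])).sum())

rng = np.random.default_rng(0)
img = rng.random((64, 64))
# An image is maximally informative about itself ...
self_mi = mutual_information(img, img)
# ... and nearly independent of unrelated noise.
noise_mi = mutual_information(img, rng.random((64, 64)))
print(self_mi > noise_mi)  # True
```

An optimizer such as PSO would evaluate this metric (plus the gradient term) over candidate transformation parameters and keep the maximizing transform.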

  20. 3D/2D model-to-image registration by imitation learning for cardiac procedures.

    PubMed

    Toth, Daniel; Miao, Shun; Kurzendorfer, Tanja; Rinaldi, Christopher A; Liao, Rui; Mansi, Tommaso; Rhode, Kawal; Mountney, Peter

    2018-05-12

    In cardiac interventions, such as cardiac resynchronization therapy (CRT), image guidance can be enhanced by involving preoperative models. Multimodality 3D/2D registration for image guidance, however, remains a significant research challenge for fundamentally different image data, i.e., MR to X-ray. Registration methods must account for differences in intensity, contrast level, resolution, dimensionality, and field of view. Furthermore, the same anatomical structures may not be visible in both modalities. Current approaches have focused on developing modality-specific solutions for individual clinical use cases, by introducing constraints or identifying cross-modality information manually. Machine learning approaches have the potential to create more general registration platforms. However, training image-to-image methods would require large multimodal datasets and ground truth for each target application. This paper proposes a model-to-image registration approach instead, because it is common in image-guided interventions to create anatomical models for diagnosis, planning or guidance prior to procedures. An imitation learning-based method, trained on 702 datasets, is used to register preoperative models to intraoperative X-ray images. Accuracy is demonstrated on cardiac models and artificial X-rays generated from CTs. The registration error was [Formula: see text] on 1000 test cases, superior to that of manual ([Formula: see text]) and gradient-based ([Formula: see text]) registration. High robustness is shown in 19 clinical CRT cases. Besides demonstrating the proposed method's feasibility in a clinical environment, the evaluation has shown good accuracy and high robustness, indicating that it could be applied in image-guided interventions.

  1. A Local Fast Marching-Based Diffusion Tensor Image Registration Algorithm by Simultaneously Considering Spatial Deformation and Tensor Orientation

    PubMed Central

    Xue, Zhong; Li, Hai; Guo, Lei; Wong, Stephen T.C.

    2010-01-01

    Spatially aligning diffusion tensor images (DTI) is a key step in quantitatively comparing neural images obtained from different subjects or from the same subject at different timepoints. Unlike traditional scalar or multi-channel image registration methods, DTI registration must consider tensor orientation. Recently, several DTI registration methods have been proposed in the literature, but their deformation fields depend purely on tensor-derived features rather than the whole tensor information. Other methods, such as the piecewise affine transformation and the diffeomorphic nonlinear registration algorithms, use analytical gradients of the registration objective functions to simultaneously consider the reorientation and deformation of tensors during registration. However, only relatively local tensor information, such as voxel-wise tensor similarity, is utilized. This paper proposes a new DTI registration algorithm, called local fast marching (FM)-based simultaneous registration. The algorithm not only considers the orientation of tensors during registration but also utilizes the neighborhood tensor information of each voxel to drive the deformation; this neighborhood information is extracted by running a local fast marching algorithm around the voxels of interest. These local fast marching-based tensor features efficiently reflect the diffusion patterns around each voxel within a spherical neighborhood and can capture relatively distinctive features of the anatomical structures. Using simulated and real DTI human brain data, the experimental results show that the proposed algorithm is more accurate than FA-based registration and more efficient than its counterpart, the neighborhood tensor similarity-based registration. PMID:20382233

  2. Shearlet Features for Registration of Remotely Sensed Multitemporal Images

    NASA Technical Reports Server (NTRS)

    Murphy, James M.; Le Moigne, Jacqueline

    2015-01-01

    We investigate the role of anisotropic feature extraction methods for automatic image registration of remotely sensed multitemporal images. Building on the classical use of wavelets in image registration, we develop an algorithm based on shearlets, a mathematical generalization of wavelets that offers increased directional sensitivity. Initial experimental results on LANDSAT images are presented, which indicate superior performance of the shearlet algorithm when compared to classical wavelet algorithms.

  3. Band co-registration modeling of LAPAN-A3/IPB multispectral imager based on satellite attitude

    NASA Astrophysics Data System (ADS)

    Hakim, P. R.; Syafrudin, A. H.; Utama, S.; Jayani, A. P. S.

    2018-05-01

    One significant geometric distortion in images from the LAPAN-A3/IPB multispectral imager is the co-registration error between the color channel detectors. Band co-registration distortion can usually be corrected using one of several approaches: manual correction, an image matching algorithm, or sensor modeling and calibration. This paper develops another approach to minimize band co-registration distortion on LAPAN-A3/IPB multispectral images: supervised modeling of image matching results with respect to satellite attitude. Modeling results show that band co-registration error in the across-track axis is strongly influenced by the yaw angle, while error in the along-track axis is fairly influenced by both the pitch and roll angles. The accuracy of the models obtained is good, with errors between 1 and 3 pixels for each axis of each band pair. This means that the model can be used to correct distorted images without the slower image matching algorithm or the laborious effort of the manual and sensor calibration approaches. Since the calculation can be executed in seconds, this approach can be used in real-time quick-look image processing in a ground station, or even in on-board satellite image processing.
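A supervised model of co-registration error versus attitude can be as simple as a linear least-squares fit. The data below are synthetic, and the across-track/yaw coefficient is chosen only to mimic the reported yaw dependence, not taken from the paper:

```python
import numpy as np

# Hypothetical training data: per-image mean attitude angles (roll, pitch,
# yaw, in degrees) and the measured across-track band co-registration error
# (pixels) obtained from image matching. All values are synthetic.
rng = np.random.default_rng(1)
att = rng.normal(0.0, 0.5, size=(40, 3))            # roll, pitch, yaw
across = 2.0 * att[:, 2] + rng.normal(0, 0.05, 40)  # error driven by yaw

# Affine model: error = a*roll + b*pitch + c*yaw + d, fit by least squares
A = np.hstack([att, np.ones((40, 1))])
coef, *_ = np.linalg.lstsq(A, across, rcond=None)
print(coef[2])  # close to 2.0: the yaw term dominates
```

Once fitted, evaluating this model per image is a handful of multiplications, which is why the abstract can claim correction "in order of seconds" without running image matching.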

  4. A fast rigid-registration method of inferior limb X-ray image and 3D CT images for TKA surgery

    NASA Astrophysics Data System (ADS)

    Ito, Fumihito; O. D. A, Prima; Uwano, Ikuko; Ito, Kenzo

    2010-03-01

    In this paper, we propose a fast rigid-registration method for inferior limb X-ray films (two-dimensional Computed Radiography (CR) images) and three-dimensional Computed Tomography (CT) images for Total Knee Arthroplasty (TKA) surgery planning. The position of each bone, such as the femur and tibia (shin bone), differs slightly between the X-ray film and the 3D CT images, and care must be taken in how the two images are used together, since the X-ray film is captured in the standing position while the 3D CT is captured in the decubitus (face-up) position. Conventional registration mainly uses a cross-correlation function between the two images and applies optimization techniques, which takes enormous calculation time and is difficult to use in interactive operations. To solve these problems, we automatically calculate the center lines (bone axes) of the femur and tibia and use them as initial positions for the registration. We evaluate our registration method on three patients' image data and compare it with a conventional registration that uses the down-hill simplex algorithm, an optimization method that requires only function evaluations and no derivative calculations. Our registration method is more effective than the down-hill simplex method in both computation time and convergence stability. We have developed an implant simulation system on a personal computer to support the surgeon in preoperative TKA planning. Our registration method is implemented in this system, and the user can manipulate translucent 2D/3D templates of implant components on the X-ray film and 3D CT images.
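A bone center line can be approximated as the principal axis of the segmented bone's point cloud. The PCA-based sketch below is a simple stand-in for the authors' axis computation, on synthetic points:

```python
import numpy as np

def bone_axis(points):
    """Principal axis of a 3-D point cloud via PCA (SVD of the centered
    points): a simple proxy for a femur/tibia center line used to
    initialize rigid registration."""
    centered = points - points.mean(0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return vt[0]  # unit vector along the dominant direction

# Synthetic "bone": points scattered tightly around the z axis.
rng = np.random.default_rng(2)
pts = np.column_stack([rng.normal(0, 0.1, 500),
                       rng.normal(0, 0.1, 500),
                       rng.uniform(-10, 10, 500)])
axis = bone_axis(pts)
print(abs(axis[2]))  # close to 1: the long axis is recovered
```

Aligning the two axes (X-ray and CT) before optimization gives the registration a starting pose far closer to the optimum than an arbitrary initialization, which is where the reported speed and stability gains come from.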

  5. Automatic selection of landmarks in T1-weighted head MRI with regression forests for image registration initialization.

    PubMed

    Wang, Jianing; Liu, Yuan; Noble, Jack H; Dawant, Benoit M

    2017-10-01

    Medical image registration establishes a correspondence between images of biological structures, and it is at the core of many applications. Commonly used deformable image registration methods depend on a good preregistration initialization. We develop a learning-based method to automatically find a set of robust landmarks in three-dimensional MR image volumes of the head. These landmarks are then used to compute a thin plate spline-based initialization transformation. The process involves two steps: (1) identifying a set of landmarks that can be reliably localized in the images and (2) selecting among them the subset that leads to a good initial transformation. To validate our method, we use it to initialize five well-established deformable registration algorithms that are subsequently used to register an atlas to MR images of the head. We compare our proposed initialization method with a standard approach that involves estimating an affine transformation with an intensity-based approach. We show that for all five registration algorithms the final registration results are statistically better when they are initialized with the method that we propose than when a standard approach is used. The technique that we propose is generic and could be used to initialize nonrigid registration algorithms for other applications.

  6. Optical registration of spaceborne low light remote sensing camera

    NASA Astrophysics Data System (ADS)

    Li, Chong-yang; Hao, Yan-hui; Xu, Peng-mei; Wang, Dong-jie; Ma, Li-na; Zhao, Ying-long

    2018-02-01

    To meet the high-precision optical registration requirement of a spaceborne low-light remote sensing camera, dual-channel optical registration of the CCD and EMCCD is achieved with a high-magnification optical registration system. A scheme for system-integration optical registration, with an analysis of its accuracy, is proposed in this paper for a spaceborne low-light camera with short focal depth and wide field of view, including an analysis of the parallel misalignment of the CCD. Actual registration results show that the imaging is clear and that the MTF and the optical registration accuracy meet requirements, providing an important guarantee for obtaining high-quality image data in orbit.

  7. Improved image alignment method in application to X-ray images and biological images.

    PubMed

    Wang, Ching-Wei; Chen, Hsiang-Chou

    2013-08-01

    Alignment of medical images is a vital component of a large number of applications throughout the clinical track of events; not only within clinical diagnostic settings, but prominently so in the area of planning, consummation and evaluation of surgical and radiotherapeutical procedures. However, registration of medical images is challenging because of variations in data appearance, imaging artifacts and complex data deformation problems. Hence, the aim of this study is to develop a robust image alignment method for medical images. An improved image registration method is proposed and evaluated on two types of medical data, biological microscopic tissue images and dental X-ray images, and compared with five state-of-the-art image registration techniques. The experimental results show that the presented method consistently performs well on both types of medical images, achieving 88.44 and 88.93% average registration accuracy for biological tissue images and X-ray images, respectively, and outperforms the benchmark methods. Based on Tukey's honestly significant difference test and Fisher's least significant difference test, the presented method performs significantly better than all existing methods (P ≤ 0.001) for tissue image alignment, and for X-ray image registration it performs significantly better than the two benchmark B-spline approaches (P < 0.001). The software implementation of the presented method and the data used in this study are made publicly available for scientific communities to use (http://www-o.ntust.edu.tw/∼cweiwang/ImprovedImageRegistration/; contact: cweiwang@mail.ntust.edu.tw).

  8. Graphics Processing Unit-Accelerated Nonrigid Registration of MR Images to CT Images During CT-Guided Percutaneous Liver Tumor Ablations.

    PubMed

    Tokuda, Junichi; Plishker, William; Torabi, Meysam; Olubiyi, Olutayo I; Zaki, George; Tatli, Servet; Silverman, Stuart G; Shekher, Raj; Hata, Nobuhiko

    2015-06-01

    Accuracy and speed are essential for the intraprocedural nonrigid magnetic resonance (MR) to computed tomography (CT) image registration in the assessment of tumor margins during CT-guided liver tumor ablations. Although both accuracy and speed can be improved by limiting the registration to a region of interest (ROI), manual contouring of the ROI prolongs the registration process substantially. To achieve accurate and fast registration without the use of an ROI, we combined a nonrigid registration technique on the basis of volume subdivision with hardware acceleration using a graphics processing unit (GPU). We compared the registration accuracy and processing time of GPU-accelerated volume subdivision-based nonrigid registration technique to the conventional nonrigid B-spline registration technique. Fourteen image data sets of preprocedural MR and intraprocedural CT images for percutaneous CT-guided liver tumor ablations were obtained. Each set of images was registered using the GPU-accelerated volume subdivision technique and the B-spline technique. Manual contouring of ROI was used only for the B-spline technique. Registration accuracies (Dice similarity coefficient [DSC] and 95% Hausdorff distance [HD]) and total processing time including contouring of ROIs and computation were compared using a paired Student t test. Accuracies of the GPU-accelerated registrations and B-spline registrations, respectively, were 88.3 ± 3.7% versus 89.3 ± 4.9% (P = .41) for DSC and 13.1 ± 5.2 versus 11.4 ± 6.3 mm (P = .15) for HD. Total processing time of the GPU-accelerated registration and B-spline registration techniques was 88 ± 14 versus 557 ± 116 seconds (P < .000000002), respectively; there was no significant difference in computation time despite the difference in the complexity of the algorithms (P = .71). The GPU-accelerated volume subdivision technique was as accurate as the B-spline technique and required significantly less processing time. 
The GPU-accelerated volume subdivision technique may enable the implementation of nonrigid registration into routine clinical practice. Copyright © 2015 AUR. Published by Elsevier Inc. All rights reserved.
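The Dice similarity coefficient used to compare the two techniques is straightforward to compute from binary masks; a minimal sketch with toy masks:

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks:
    2 |A ∩ B| / (|A| + |B|), the overlap metric reported above."""
    a, b = a.astype(bool), b.astype(bool)
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

a = np.zeros((10, 10), bool); a[2:8, 2:8] = True  # 36-pixel square
b = np.zeros((10, 10), bool); b[3:9, 2:8] = True  # same square, 1 row down
print(dice(a, b))  # 2*30/72 ≈ 0.833
```

In the study, the masks would be segmented liver (or tumor) contours on the registered MR and the intraprocedural CT; the 95% Hausdorff distance complements DSC by measuring worst-case boundary error rather than bulk overlap.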

  9. Accurate band-to-band registration of AOTF imaging spectrometer using motion detection technology

    NASA Astrophysics Data System (ADS)

    Zhou, Pengwei; Zhao, Huijie; Jin, Shangzhong; Li, Ningchuan

    2016-05-01

    This paper concerns the problem of platform-vibration-induced band-to-band misregistration in an acousto-optic imaging spectrometer for spaceborne application. Registering images of different bands formed at different times or positions is difficult, especially for hyperspectral images from an acousto-optic tunable filter (AOTF) imaging spectrometer. In this study, a motion detection method is presented that uses the polychromatic undiffracted beam of the AOTF. The factors affecting motion detection accuracy are analyzed theoretically, and calculations show that optical distortion is an easily overlooked factor in achieving accurate band-to-band registration. Hence, a reflective dual-path optical system is proposed for the first time, with reduced distortion and chromatic aberration, indicating the potential for higher registration accuracy. Consequently, a spectra restoration experiment using the additional motion detection channel is presented for the first time, which demonstrates the accurate spectral image registration capability of this technique.

  10. Slice-to-volume medical image registration: A survey.

    PubMed

    Ferrante, Enzo; Paragios, Nikos

    2017-07-01

    During the last decades, the research community of medical imaging has witnessed continuous advances in image registration methods, which pushed the limits of the state-of-the-art and enabled the development of novel medical procedures. A particular type of image registration problem, known as slice-to-volume registration, played a fundamental role in areas like image guided surgeries and volumetric image reconstruction. However, to date, and despite the extensive literature available on this topic, no survey has been written to discuss this challenging problem. This paper introduces the first comprehensive survey of the literature about slice-to-volume registration, presenting a categorical study of the algorithms according to an ad-hoc taxonomy and analyzing advantages and disadvantages of every category. We draw some general conclusions from this analysis and present our perspectives on the future of the field. Copyright © 2017 Elsevier B.V. All rights reserved.

  11. TPS-HAMMER: improving HAMMER registration algorithm by soft correspondence matching and thin-plate splines based deformation interpolation.

    PubMed

    Wu, Guorong; Yap, Pew-Thian; Kim, Minjeong; Shen, Dinggang

    2010-02-01

    We present an improved MR brain image registration algorithm, called TPS-HAMMER, which is based on the concepts of attribute vectors and hierarchical landmark selection scheme proposed in the highly successful HAMMER registration algorithm. We demonstrate that TPS-HAMMER algorithm yields better registration accuracy, robustness, and speed over HAMMER owing to (1) the employment of soft correspondence matching and (2) the utilization of thin-plate splines (TPS) for sparse-to-dense deformation field generation. These two aspects can be integrated into a unified framework to refine the registration iteratively by alternating between soft correspondence matching and dense deformation field estimation. Compared with HAMMER, TPS-HAMMER affords several advantages: (1) unlike the Gaussian propagation mechanism employed in HAMMER, which can be slow and often leaves unreached blotches in the deformation field, the deformation interpolation in the non-landmark points can be obtained immediately with TPS in our algorithm; (2) the smoothness of deformation field is preserved due to the nice properties of TPS; (3) possible misalignments can be alleviated by allowing the matching of the landmarks with a number of possible candidate points and enforcing more exact matches in the final stages of the registration. Extensive experiments have been conducted, using the original HAMMER as a comparison baseline, to validate the merits of TPS-HAMMER. The results show that TPS-HAMMER yields significant improvement in both accuracy and speed, indicating high applicability for the clinical scenario. Copyright (c) 2009 Elsevier Inc. All rights reserved.

  12. Decomposed multidimensional control grid interpolation for common consumer electronic image processing applications

    NASA Astrophysics Data System (ADS)

    Zwart, Christine M.; Venkatesan, Ragav; Frakes, David H.

    2012-10-01

    Interpolation is an essential and broadly employed function of signal processing. Accordingly, considerable development has focused on advancing interpolation algorithms toward optimal accuracy. Such development has motivated a clear shift in the state of the art from classical interpolation to more intelligent and resourceful approaches, for example registration-based interpolation. As a natural result, many of the most accurate current algorithms are highly complex, specific, and computationally demanding. However, the diverse hardware destinations for interpolation algorithms present unique constraints that often preclude use of the most accurate available options. For example, while computationally demanding interpolators may be suitable for highly equipped image processing platforms (e.g., computer workstations and clusters), only more efficient interpolators may be practical for less well-equipped platforms (e.g., smartphones and tablet computers). The latter examples of consumer electronics present a design tradeoff: high-accuracy interpolation benefits the consumer experience, but computing capabilities are limited. It follows that interpolators with favorable combinations of accuracy and efficiency are of great practical value to the consumer electronics industry. We address multidimensional interpolation-based image processing problems that are common to consumer electronic devices through a decomposition approach. The multidimensional problems are first broken down into multiple, independent, one-dimensional (1-D) interpolation steps that are then executed with a newly modified registration-based 1-D control grid interpolator. The proposed approach, decomposed multidimensional control grid interpolation (DMCGI), combines the accuracy of registration-based interpolation with the simplicity, flexibility, and computational efficiency of a 1-D interpolation framework. 
Results demonstrate that DMCGI provides improved interpolation accuracy (and other benefits) in image resizing, color sample demosaicing, and video deinterlacing applications, at a computational cost that is manageable or reduced in comparison to popular alternatives.
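The decomposition idea (replacing one multidimensional interpolation with independent 1-D passes) can be illustrated with plain linear interpolation; DMCGI itself uses a registration-based 1-D interpolator, which this sketch does not implement:

```python
import numpy as np

def resize_separable(img, new_h, new_w):
    """Resize a 2-D image by two independent 1-D linear interpolation
    passes: first along rows, then along columns. This is the separable
    decomposition in its simplest form."""
    h, w = img.shape
    xs = np.linspace(0, w - 1, new_w)
    tmp = np.stack([np.interp(xs, np.arange(w), row) for row in img])
    ys = np.linspace(0, h - 1, new_h)
    return np.stack([np.interp(ys, np.arange(h), col) for col in tmp.T]).T

img = np.arange(16.0).reshape(4, 4)
out = resize_separable(img, 8, 8)
print(out.shape)  # (8, 8)
```

Swapping `np.interp` for a smarter 1-D interpolator upgrades the whole pipeline without touching the decomposition logic, which is the efficiency argument the abstract makes.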

  13. Phantom Study Investigating the Accuracy of Manual and Automatic Image Fusion with the GE Logiq E9: Implications for use in Percutaneous Liver Interventions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burgmans, Mark Christiaan, E-mail: m.c.burgmans@lumc.nl; Harder, J. Michiel den, E-mail: chiel.den.harder@gmail.com; Meershoek, Philippa, E-mail: P.Meershoek@lumc.nl

    Purpose: To determine the accuracy of automatic and manual co-registration methods for image fusion of three-dimensional computed tomography (CT) with real-time ultrasonography (US) for image-guided liver interventions. Materials and Methods: CT images of a skills phantom with liver lesions were acquired and co-registered to US using GE Logiq E9 navigation software. Manual co-registration was compared to automatic and semiautomatic co-registration using an active tracker. Also, manual point registration was compared to plane registration with and without an additional translation point. Finally, comparison was made between manual and automatic selection of reference points. In each experiment, the accuracy of the co-registration method was determined by measurement of the residual displacement in phantom lesions by two independent observers. Results: Mean displacements for a superficial and a deep liver lesion were comparable after manual and semiautomatic co-registration: 2.4 and 2.0 mm versus 2.0 and 2.5 mm, respectively. Both methods were significantly better than automatic co-registration: 5.9 and 5.2 mm residual displacement (p < 0.001; p < 0.01). The accuracy of manual point registration was higher than that of plane registration, the latter being heavily dependent on accurate matching of axial CT and US images by the operator. Automatic reference point selection resulted in significantly lower registration accuracy compared to manual point selection despite lower root-mean-square deviation (RMSD) values. Conclusion: The accuracy of manual and semiautomatic co-registration is better than that of automatic co-registration. For manual co-registration using a plane, choosing the correct plane orientation is an essential first step in the registration process. Automatic reference point selection based on RMSD values is error-prone.
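Rigid point-set co-registration and its residual RMSD (the quantity the automatic reference-point selection was ranked by) can be computed with the Kabsch algorithm; a sketch on synthetic points:

```python
import numpy as np

def kabsch(P, Q):
    """Best rigid rotation + translation mapping point set P onto Q
    (Kabsch algorithm), plus the residual RMSD."""
    cp, cq = P.mean(0), Q.mean(0)
    H = (P - cp).T @ (Q - cq)              # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T)) # avoid reflections
    D = np.diag([1.0, 1.0, d])
    R = Vt.T @ D @ U.T
    t = cq - R @ cp
    rmsd = np.sqrt(np.mean(np.sum((P @ R.T + t - Q) ** 2, axis=1)))
    return R, t, rmsd

rng = np.random.default_rng(3)
P = rng.random((20, 3))                    # e.g. CT reference points
theta = 0.3
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0, 0.0, 1.0]])
Q = P @ Rz.T + np.array([1.0, 2.0, 3.0])   # same points in US space
_, _, rmsd = kabsch(P, Q)
print(rmsd < 1e-9)  # True: exact correspondences recover the transform
```

As the study's conclusion warns, a low RMSD only certifies the fit on the reference points themselves; with poorly chosen points it can coexist with large displacement at the target lesion.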

  14. Research Issues in Image Registration for Remote Sensing

    NASA Technical Reports Server (NTRS)

    Eastman, Roger D.; LeMoigne, Jacqueline; Netanyahu, Nathan S.

    2007-01-01

    Image registration is an important element in data processing for remote sensing with many applications and a wide range of solutions. Despite considerable investigation the field has not settled on a definitive solution for most applications and a number of questions remain open. This article looks at selected research issues by surveying the experience of operational satellite teams, application-specific requirements for Earth science, and our experiments in the evaluation of image registration algorithms with emphasis on the comparison of algorithms for subpixel accuracy. We conclude that remote sensing applications put particular demands on image registration algorithms to take into account domain-specific knowledge of geometric transformations and image content.

  15. Applications of digital image processing techniques to problems of data registration and correlation

    NASA Technical Reports Server (NTRS)

    Green, W. B.

    1978-01-01

    An overview is presented of the evolution of the computer configuration at JPL's Image Processing Laboratory (IPL). The development of techniques for the geometric transformation of digital imagery is discussed and consideration is given to automated and semiautomated image registration, and the registration of imaging and nonimaging data. The increasing complexity of image processing tasks at IPL is illustrated with examples of various applications from the planetary program and earth resources activities. It is noted that the registration of existing geocoded data bases with Landsat imagery will continue to be important if the Landsat data is to be of genuine use to the user community.

  16. Registration uncertainties between 3D cone beam computed tomography and different reference CT datasets in lung stereotactic body radiation therapy.

    PubMed

    Oechsner, Markus; Chizzali, Barbara; Devecka, Michal; Combs, Stephanie Elisabeth; Wilkens, Jan Jakob; Duma, Marciana Nona

    2016-10-26

    The aim of this study was to analyze differences in couch shifts (setup errors) resulting from image registration of different CT datasets with free breathing cone beam CTs (FB-CBCT). Both automatic and manual image registrations were performed, and the registration results were correlated with tumor characteristics. FB-CBCT image registration was performed for 49 patients with lung lesions using slow planning CT (PCT), average intensity projection (AIP), maximum intensity projection (MIP) and mid-ventilation CTs (MidV) as reference images. Shift differences were evaluated between the registered CT datasets for automatic and manual registration, respectively. Furthermore, differences between automatic and manual registration were analyzed for the same CT datasets. The registration results were statistically analyzed and correlated with tumor characteristics (3D tumor motion, tumor volume, superior-inferior (SI) distance, tumor environment). Median 3D shift differences over all patients were between 0.5 mm (AIPvsMIP) and 1.9 mm (MIPvsPCT and MidVvsPCT) for automatic registration, and between 1.8 mm (AIPvsPCT) and 2.8 mm (MIPvsPCT and MidVvsPCT) for manual registration. For some patients, large shift differences (>5.0 mm) were found (maximum 10.5 mm, automatic registration). Comparing automatic versus manual registrations for the same reference CTs, ∆AIP achieved the smallest (1.1 mm) and ∆MIP the largest (1.9 mm) median 3D shift differences. The standard deviation (variability) of the 3D shift differences was also smallest for ∆AIP (1.1 mm). Significant correlations (p < 0.01) were found between 3D shift difference and 3D tumor motion (AIPvsMIP, MIPvsMidV) and SI distance (AIPvsMIP) for automatic registration, and for 3D tumor motion (∆PCT, ∆MidV) between automatic and manual registration. Using different CT datasets for image registration with FB-CBCTs can result in different 3D couch shifts.
Manual registration partly yielded 3D shifts that differed from those of automatic registration. AIP CTs yielded the smallest shift differences and may be the most appropriate CT dataset for registration with 3D FB-CBCTs.
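
    The 3D shift differences reported above reduce to a simple vector computation: the Euclidean norm of the difference between the couch-shift vectors obtained with two reference CT datasets. A minimal numpy sketch with hypothetical couch-shift values (not the study's data):

```python
import numpy as np

def shift_difference_3d(shifts_a, shifts_b):
    """Per-patient 3D couch-shift difference between two reference CTs.

    shifts_a, shifts_b: (N, 3) arrays of couch shifts (LR, SI, AP) in mm,
    obtained by registering the same FB-CBCT to two different reference
    CT datasets (e.g. AIP vs MIP).
    """
    shifts_a = np.asarray(shifts_a, dtype=float)
    shifts_b = np.asarray(shifts_b, dtype=float)
    return np.linalg.norm(shifts_a - shifts_b, axis=1)

# Hypothetical shifts for three patients (mm)
aip = np.array([[1.0, -2.0, 0.5], [0.0, 3.0, 1.0], [-1.5, 0.5, 2.0]])
mip = np.array([[1.5, -2.0, 0.5], [0.0, 1.0, 1.0], [-1.5, 0.5, 0.0]])

diffs = shift_difference_3d(aip, mip)
median_diff = float(np.median(diffs))
```

    The median over patients is then the summary statistic quoted in the abstract.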

  17. SU-F-J-57: Effectiveness of Daily CT-Based Three-Dimensional Image Guided and Adaptive Proton Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moriya, S; National Cancer Center, Kashiwa, Chiba; Tachibana, H

    Purpose: A daily CT-based three-dimensional image-guided and adaptive proton therapy system (CTIGRT-ART) was designed and developed, and its effectiveness was evaluated. Methods: Retrospective analysis was performed in three lung cancer patients. Proton treatment planning was performed using CT image datasets acquired with a Toshiba Aquilion ONE scanner. The planning target volume and surrounding organs were contoured by a well-trained radiation oncologist. Dose distribution was optimized using two fields, at 180 deg. and 270 deg., in passive scattering proton therapy, with a well-commissioned simplified Monte Carlo algorithm as the dose calculation engine. Daily consecutive CT image datasets were acquired with an in-room CT (Toshiba Aquilion LB). In our in-house program, two image registrations, to bone and to tumor, were performed to shift the isocenter using the treatment CT image dataset. Subsequently, dose recalculation was performed after the isocenter shift. When the dose distribution after tumor registration showed a change in the dosimetric parameter CTV D90% relative to the initial plan, an additional adaptive step was performed in which the range shifter thickness was re-optimized. CTV D90% for the bone registration, the tumor registration only, and the adaptive plan with tumor registration was compared to the initial plan. Results: With bone registration, tumor dose coverage decreased by 16% on average (maximum: 56%). Tumor registration showed better coverage than bone registration, but coverage still decreased by 9% (maximum: 22%). The adaptive plan showed dose coverage similar to the initial plan (average: 2%, maximum: 7%). Conclusion: Image registration alone, whether to bone or to tumor, may substantially reduce tumor coverage. Thus, our proposed methodology of image guidance and adaptive planning using range adaptation after tumor registration would be effective for proton therapy.
This research is partially supported by the Japan Agency for Medical Research and Development (AMED).

  18. Deformation field correction for spatial normalization of PET images

    PubMed Central

    Bilgel, Murat; Carass, Aaron; Resnick, Susan M.; Wong, Dean F.; Prince, Jerry L.

    2015-01-01

    Spatial normalization of positron emission tomography (PET) images is essential for population studies, yet the current state of the art in PET-to-PET registration is limited to the application of conventional deformable registration methods that were developed for structural images. A method is presented for the spatial normalization of PET images that improves their anatomical alignment over the state of the art. The approach works by correcting the deformable registration result using a model that is learned from training data having both PET and structural images. In particular, viewing the structural registration of training data as ground truth, correction factors are learned by using a generalized ridge regression at each voxel given the PET intensities and voxel locations in a population-based PET template. The trained model can then be used to obtain more accurate registration of PET images to the PET template without the use of a structural image. A cross validation evaluation on 79 subjects shows that the proposed method yields more accurate alignment of the PET images compared to deformable PET-to-PET registration as revealed by 1) a visual examination of the deformed images, 2) a smaller error in the deformation fields, and 3) a greater overlap of the deformed anatomical labels with ground truth segmentations. PMID:26142272
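
    The per-voxel correction model described above can be illustrated with a closed-form generalized ridge regression. A minimal numpy sketch for a single voxel, with synthetic features standing in for the PET intensities and voxel locations (the feature layout and regularization weight are assumptions for illustration, not the paper's exact design):

```python
import numpy as np

def ridge_fit(X, y, lam=1.0):
    """Closed-form ridge regression: w = (X^T X + lam*I)^{-1} X^T y."""
    X = np.asarray(X, dtype=float)
    y = np.asarray(y, dtype=float)
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

# Hypothetical training data for ONE voxel: each row is a training subject,
# columns = [PET intensity at the voxel, x, y, z in template space];
# target = residual deformation error (mm) along one axis at that voxel,
# taken from the structural ("ground truth") registration.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 4))
true_w = np.array([0.8, -0.3, 0.1, 0.0])
y = X @ true_w + 0.01 * rng.normal(size=50)

w = ridge_fit(X, y, lam=0.1)
correction = X @ w   # predicted correction applied to the deformation field
```

    At test time, the learned weights predict a correction for a new subject from its PET intensities alone, without a structural image.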

  19. Ultrasound fusion image error correction using subject-specific liver motion model and automatic image registration.

    PubMed

    Yang, Minglei; Ding, Hui; Zhu, Lei; Wang, Guangzhi

    2016-12-01

    Ultrasound fusion imaging is an emerging tool and benefits a variety of clinical applications, such as image-guided diagnosis and treatment of hepatocellular carcinoma and unresectable liver metastases. However, respiratory liver motion-induced misalignment of multimodal images (i.e., fusion error) compromises the effectiveness and practicability of this method. The purpose of this paper is to develop a subject-specific liver motion model and an automatic registration-based method to correct the fusion error. An online-built subject-specific motion model and an automatic image registration method for 2D ultrasound and 3D magnetic resonance (MR) images were combined to compensate for the respiratory liver motion. The key steps were: 1) Build a subject-specific liver motion model for the current subject online and perform the initial registration of pre-acquired 3D MR and intra-operative ultrasound images; 2) During fusion imaging, compensate for liver motion first using the motion model, and then use an automatic registration method to further correct the respiratory fusion error. Evaluation experiments were conducted on a liver phantom and five subjects. In the phantom study, the fusion error (superior-inferior axis) was reduced from 13.90 ± 2.38 mm to 4.26 ± 0.78 mm by using the motion model only. The fusion error further decreased to 0.63 ± 0.53 mm by using the registration method. The registration method also decreased the rotation error from 7.06 ± 0.21° to 1.18 ± 0.66°. In the clinical study, the fusion error was reduced from 12.90 ± 9.58 mm to 6.12 ± 2.90 mm by using the motion model alone. Moreover, the fusion error decreased to 1.96 ± 0.33 mm by using the registration method. The proposed method can effectively correct the respiration-induced fusion error to improve the fusion image quality. This method can also reduce the dependency of error correction on the initial registration of ultrasound and MR images.
Overall, the proposed method can improve the clinical practicability of ultrasound fusion imaging.

  20. A method to quantify mechanobiologic forces during zebrafish cardiac development using 4-D light sheet imaging and computational modeling

    PubMed Central

    Vedula, Vijay; Lee, Juhyun; Xu, Hao; Hsiai, Tzung K.

    2017-01-01

    Blood flow and mechanical forces in the ventricle are implicated in cardiac development and trabeculation. However, the mechanisms of mechanotransduction remain elusive. This is due in part to the challenges associated with accurately quantifying mechanical forces in the developing heart. We present a novel computational framework to simulate cardiac hemodynamics in developing zebrafish embryos by coupling 4-D light sheet imaging with a stabilized finite element flow solver, and extract time-dependent mechanical stimuli data. We employ deformable image registration methods to segment the motion of the ventricle from high resolution 4-D light sheet image data. This results in a robust and efficient workflow, as segmentation need only be performed at one cardiac phase, while wall position in the other cardiac phases is found by image registration. Ventricular hemodynamics are then quantified by numerically solving the Navier-Stokes equations in the moving wall domain with our validated flow solver. We demonstrate the applicability of the workflow in wild type zebrafish and three treated fish types that disrupt trabeculation: (a) chemical treatment using AG1478, an ErbB2 signaling inhibitor that inhibits proliferation and differentiation of cardiac trabeculation; (b) injection of gata1a morpholino oligomer (gata1aMO) suppressing hematopoiesis and resulting in attenuated trabeculation; (c) the weak atrium (m58) mutant (wea) with inhibited atrial contraction leading to a highly undeveloped ventricle and poor cardiac function. Our simulations reveal elevated wall shear stress (WSS) in wild type and AG1478 compared to gata1aMO and wea. High oscillatory shear index (OSI) in the grooves between trabeculae, compared to lower values on the ridges, in the wild type suggests oscillatory forces as a possible regulatory mechanism of cardiac trabeculation development.
The framework has broad applicability for future cardiac developmental studies focused on quantitatively investigating the role of hemodynamic forces and mechanotransduction during morphogenesis. PMID:29084212

  1. Comparative Evaluation of Registration Algorithms in Different Brain Databases With Varying Difficulty: Results and Insights

    PubMed Central

    Akbari, Hamed; Bilello, Michel; Da, Xiao; Davatzikos, Christos

    2015-01-01

    Evaluating various algorithms for the inter-subject registration of brain magnetic resonance images (MRI) is a necessary topic receiving growing attention. Existing studies evaluated image registration algorithms on specific tasks or using specific databases (e.g., only for skull-stripped images, only for single-site images, etc.). Consequently, the choice of registration algorithms seems task- and usage/parameter-dependent. Nevertheless, recent large-scale, often multi-institutional imaging-related studies create the need and raise the question of whether some registration algorithms can 1) generally apply to various tasks/databases posing various challenges; 2) perform consistently well; and, while doing so, 3) require minimal or ideally no parameter tuning. In seeking answers to this question, we evaluated 12 general-purpose registration algorithms for their generality, accuracy and robustness. We fixed their parameters at values suggested by algorithm developers as reported in the literature. We tested them on 7 databases/tasks, which present one or more of 4 commonly-encountered challenges: 1) inter-subject anatomical variability in skull-stripped images; 2) intensity inhomogeneity, noise and large structural differences in raw images; 3) imaging protocol and field-of-view (FOV) differences in multi-site data; and 4) missing correspondences in pathology-bearing images. In total, 7,562 registrations were performed. Registration accuracies were measured by (multi-)expert-annotated landmarks or regions of interest (ROIs). To ensure reproducibility, we used public software tools, public databases (whenever possible), and we fully disclose the parameter settings. We show evaluation results, and discuss the performances in light of the algorithms’ similarity metrics, transformation models and optimization strategies. We also discuss future directions for algorithm development and evaluations. PMID:24951685
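
    One standard way to score such evaluations, the overlap of deformed anatomical labels with expert-annotated ROIs, is the Dice coefficient, which can be sketched in a few lines of numpy (toy masks, not the study's data):

```python
import numpy as np

def dice(a, b):
    """Dice overlap between two binary label masks: 2|A∩B| / (|A|+|B|)."""
    a = np.asarray(a, dtype=bool)
    b = np.asarray(b, dtype=bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # two empty masks overlap perfectly by convention
    return 2.0 * np.logical_and(a, b).sum() / denom

# Toy example: two 4x4 squares with a 2x2 overlap
a = np.zeros((10, 10), dtype=bool); a[2:6, 2:6] = True   # 16 voxels
b = np.zeros((10, 10), dtype=bool); b[4:8, 4:8] = True   # 16 voxels
score = dice(a, b)   # 2*4 / (16+16) = 0.25
```

    A registration that maps a subject's label onto the template's ground-truth label more accurately yields a Dice value closer to 1.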

  2. Histostitcher™: An informatics software platform for reconstructing whole-mount prostate histology using the extensible imaging platform framework

    PubMed Central

    Toth, Robert J.; Shih, Natalie; Tomaszewski, John E.; Feldman, Michael D.; Kutter, Oliver; Yu, Daphne N.; Paulus, John C.; Paladini, Ginaluca; Madabhushi, Anant

    2014-01-01

    Context: Co-registration of ex-vivo histologic images with pre-operative imaging (e.g., magnetic resonance imaging [MRI]) can be used to align and map disease extent, and to identify quantitative imaging signatures. However, ex-vivo histology images are frequently sectioned into quarters prior to imaging. Aims: This work presents Histostitcher™, a software system designed to create a pseudo whole mount histology section (WMHS) from a stitching of four individual histology quadrant images. Materials and Methods: Histostitcher™ uses user-identified fiducials on the boundary of two quadrants to stitch such quadrants. An original prototype of Histostitcher™ was designed using the Matlab programming language. However, clinical use was limited due to slow performance, computer memory constraints and an inefficient workflow. The latest version was created using the extensible imaging platform (XIP™) architecture in the C++ programming language. A fast, graphics processor unit renderer was designed to intelligently cache the visible parts of the histology quadrants, and the workflow was significantly improved to allow modifying existing fiducials, fast transformations of the quadrants and saving/loading sessions. Results: The new stitching platform yielded a significantly more efficient workflow and reconstruction than the previous prototype. It was tested on a traditional desktop computer, a Windows 8 Surface Pro tablet device and a 27-inch multi-touch display, with little performance difference between the different devices. Conclusions: Histostitcher™ is a fast, efficient framework for reconstructing pseudo WMHS from individually imaged quadrants. The highly modular XIP™ framework was used to develop an intuitive interface and future work will entail mapping the disease extent from the pseudo WMHS onto pre-operative MRI. PMID:24843820

  3. A Parallel Nonrigid Registration Algorithm Based on B-Spline for Medical Images.

    PubMed

    Du, Xiaogang; Dang, Jianwu; Wang, Yangping; Wang, Song; Lei, Tao

    2016-01-01

    The nonrigid registration algorithm based on B-spline Free-Form Deformation (FFD) plays a key role in medical image processing and is widely applied due to its flexibility and robustness. However, it requires a tremendous amount of computing time to obtain accurate registration results, especially for large amounts of medical image data. To address this issue, a parallel nonrigid registration algorithm based on B-splines is proposed in this paper. First, the Logarithm Squared Difference (LSD) is used as the similarity metric in the B-spline registration algorithm to improve registration precision. After that, we create a parallel computing strategy and lookup tables (LUTs) to reduce the complexity of the B-spline registration algorithm. As a result, the computing time of the three most time-consuming steps, namely B-spline interpolation, LSD computation, and the analytic gradient computation of LSD (required because the algorithm employs the Nonlinear Conjugate Gradient (NCG) optimization method), is efficiently reduced. Experimental results on registration quality and execution efficiency for a large number of medical images show that our algorithm achieves better registration accuracy, in terms of the differences between the best deformation fields and ground truth, and a speedup of 17 times over the single-threaded CPU implementation, thanks to the parallel computing ability of the Graphics Processing Unit (GPU).
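
    The lookup-table idea can be illustrated for the cubic B-spline basis: every voxel with the same fractional offset relative to the control-point grid uses identical basis weights, so they can be precomputed once instead of re-evaluating the cubics per voxel. A minimal numpy sketch (the table resolution is an arbitrary choice, not the paper's):

```python
import numpy as np

def bspline_basis(t):
    """The four cubic B-spline basis weights at fractional offset t in [0, 1)."""
    return np.array([
        (1 - t) ** 3 / 6.0,
        (3 * t ** 3 - 6 * t ** 2 + 4) / 6.0,
        (-3 * t ** 3 + 3 * t ** 2 + 3 * t + 1) / 6.0,
        t ** 3 / 6.0,
    ])

def build_lut(resolution=64):
    """Precompute basis weights at `resolution` sub-grid offsets.

    During the FFD transform, a voxel's fractional position is quantized to
    the nearest table entry and the weights are looked up rather than
    recomputed, which is what makes the interpolation step cheap."""
    return np.stack([bspline_basis(i / resolution) for i in range(resolution)])

lut = build_lut(64)
# partition of unity: the four weights at any offset always sum to 1
```

    The same table serves every voxel and every control point, so it is built once before optimization starts.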

  4. SU-C-207B-07: Deep Convolutional Neural Network Image Matching for Ultrasound Guidance in Radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhu, N; Najafi, M; Hancock, S

    Purpose: Robust matching of ultrasound images is a challenging problem as images of the same anatomy often present non-trivial differences. This poses an obstacle for ultrasound guidance in radiotherapy. Thus our objective is to overcome this obstacle by designing and evaluating an image block matching framework based on a two-channel deep convolutional neural network. Methods: We extend to 3D an algorithmic structure previously introduced for 2D image feature learning [1]. To obtain the similarity between two 3D image blocks A and B, the 3D image blocks are divided into 2D patches Ai and Bi. The similarity is then calculated as the average similarity score of Ai and Bi. The neural network was then trained with public non-medical image pairs, and subsequently evaluated on ultrasound image blocks for the following scenarios: (S1) same image blocks with/without shifts (A and A-shift-x); (S2) non-related random block pairs; (S3) ground truth registration matched pairs of different ultrasound images with/without shifts (A-i and A-reg-i-shift-x). Results: For S1 the similarity scores of A and A-shift-x were 32.63, 18.38, 12.95, 9.23, 2.15 and 0.43 for x ranging from 0 mm to 10 mm in 2 mm increments. For S2 the average similarity score for non-related block pairs was −1.15. For S3 the average similarity score of ground truth registration matched blocks A-i and A-reg-i-shift-0 (1≤i≤5) was 12.37. After translating A-reg-i-shift-0 by 2 mm, 4 mm, 6 mm, 8 mm, and 10 mm, the average similarity scores of A-i and A-reg-i-shift-x were 11.04, 8.42, 4.56, 2.27, and 0.29 respectively. Conclusion: The proposed method correctly assigns the highest similarity to corresponding 3D ultrasound image blocks despite differences in image content and thus can form the basis for ultrasound image registration and tracking. [1] Zagoruyko, Komodakis, "Learning to compare image patches via convolutional neural networks", IEEE CVPR 2015, pp. 4353–4361.

  5. Image Processing Of Images From Peripheral-Artery Digital Subtraction Angiography (DSA) Studies

    NASA Astrophysics Data System (ADS)

    Wilson, David L.; Tarbox, Lawrence R.; Cist, David B.; Faul, David D.

    1988-06-01

    A system is being developed to test the possibility of doing peripheral digital subtraction angiography (DSA) with a single contrast injection using a moving gantry system. Given repositioning errors that occur between the mask and contrast-containing images, factors affecting the success of subtractions following image registration have been investigated theoretically and experimentally. For a 1 mm gantry displacement, parallax and geometric image distortion (pin-cushion) each give subtraction errors following registration that are approximately 25% of the error resulting from no registration. Image processing techniques improve the subtractions. The geometric distortion effect is reduced using a piece-wise, 8-parameter unwarping method. Plots of image similarity measures versus pixel shift are well behaved and well fit by a parabola, leading to the development of an iterative, automatic registration algorithm that uses parabolic prediction of the new minimum. The registration algorithm converges quickly (less than 1 second on a MicroVAX) and is relatively immune to the region of interest (ROI) selected.
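
    The parabolic-prediction step can be illustrated on a 1-D toy problem: sample the similarity measure at three shifts, fit a parabola, and read off the predicted minimum; an iterative registration would repeat this around each new estimate. A minimal numpy sketch using synthetic Gaussian profiles and a sum-of-squared-differences measure (not DSA data):

```python
import numpy as np

# 1-D toy: a Gaussian "mask" profile and the same profile displaced by 3 units
x = np.linspace(-10.0, 10.0, 201)
reference = np.exp(-x ** 2)
moving = np.exp(-(x - 3.0) ** 2)

def ssd(shift):
    """Sum-of-squared-differences after shifting `moving` back by `shift`."""
    shifted = np.interp(x + shift, x, moving)
    return float(np.sum((reference - shifted) ** 2))

def parabolic_minimum(shifts, scores):
    """Vertex of the parabola fitted through three (shift, score) samples,
    predicting where the similarity measure is minimized."""
    a, b, _ = np.polyfit(shifts, scores, 2)
    return -b / (2.0 * a)

samples = [2.0, 3.5, 4.0]
est = parabolic_minimum(samples, [ssd(s) for s in samples])
```

    Because the similarity curve is well approximated by a parabola near its minimum, a few such predictions converge quickly, which is the behavior the abstract reports.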

  6. Image Registration Algorithm Based on Parallax Constraint and Clustering Analysis

    NASA Astrophysics Data System (ADS)

    Wang, Zhe; Dong, Min; Mu, Xiaomin; Wang, Song

    2018-01-01

    To resolve the problem of slow computation speed and low matching accuracy in image registration, a new image registration algorithm based on a parallax constraint and clustering analysis is proposed. First, the Harris corner detection algorithm is used to extract the feature points of two images. Second, the Normalized Cross-Correlation (NCC) function is used to perform approximate matching of the feature points, yielding the initial feature pairs. Then, according to the parallax constraint condition, the initial feature pairs are preprocessed by the K-means clustering algorithm, which removes feature point pairs with obvious errors from the approximate matching step. Finally, the Random Sample Consensus (RANSAC) algorithm is adopted to optimize the feature points and obtain the final feature point matching result, realizing fast and accurate image registration. The experimental results show that the proposed image registration algorithm improves the accuracy of image matching while ensuring real-time performance.
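
    The parallax-constraint filtering step can be sketched as clustering of match displacement vectors: tentative NCC matches that obey the dominant parallax form one tight cluster, while gross mismatches scatter widely and are discarded before RANSAC. A minimal numpy sketch with a hand-rolled 2-means on synthetic matches (the farthest-point initialization and fixed cluster count are simplifying assumptions, not the paper's exact procedure):

```python
import numpy as np

def two_means(points, iters=20):
    """Minimal 2-means clustering with farthest-point initialization."""
    c0 = points[0].copy()
    c1 = points[np.argmax(np.linalg.norm(points - c0, axis=1))].copy()
    centers = np.stack([c0, c1])
    labels = np.zeros(len(points), dtype=int)
    for _ in range(iters):
        d = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(2):
            if np.any(labels == j):
                centers[j] = points[labels == j].mean(axis=0)
    return labels, centers

# Displacement vectors of tentative NCC matches (synthetic): correct pairs
# share a dominant displacement; mismatches scatter widely.
rng = np.random.default_rng(42)
good = np.array([5.0, -2.0]) + 0.2 * rng.normal(size=(40, 2))
bad = rng.uniform(-30.0, 30.0, size=(8, 2))
disp = np.vstack([good, bad])

labels, centers = two_means(disp)
dominant = np.bincount(labels).argmax()
inliers = disp[labels == dominant]   # passed on to the RANSAC stage
```

    Pre-filtering this way shrinks the outlier fraction RANSAC has to cope with, which is where the speed gain comes from.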

  7. Improvement of registration accuracy in accelerated partial breast irradiation using the point-based rigid-body registration algorithm for patients with implanted fiducial markers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Inoue, Minoru; Yoshimura, Michio, E-mail: myossy@kuhp.kyoto-u.ac.jp; Sato, Sayaka

    2015-04-15

    Purpose: To investigate image-registration errors when using fiducial markers with a manual method and the point-based rigid-body registration (PRBR) algorithm in accelerated partial breast irradiation (APBI) patients, with accompanying fiducial deviations. Methods: Twenty-two consecutive patients were enrolled in a prospective trial examining 10-fraction APBI. Titanium clips were implanted intraoperatively around the seroma in all patients. For image-registration, the positions of the clips in daily kV x-ray images were matched to those in the planning digitally reconstructed radiographs. Fiducial and gravity registration errors (FREs and GREs, respectively), representing resulting misalignments of the edge and center of the target, respectively, were compared between the manual and algorithm-based methods. Results: In total, 218 fractions were evaluated. Although the mean FRE/GRE values for the manual and algorithm-based methods were within 3 mm (2.3/1.7 and 1.3/0.4 mm, respectively), the percentages of fractions where FRE/GRE exceeded 3 mm using the manual and algorithm-based methods were 18.8%/7.3% and 0%/0%, respectively. Manual registration resulted in 18.6% of patients with fractions of FRE/GRE exceeding 5 mm. The patients with larger clip deviation had significantly more fractions showing large FRE/GRE using manual registration. Conclusions: For image-registration using fiducial markers in APBI, manual registration results in more fractions with considerable registration error due to loss of fiducial objectivity resulting from their deviation. The authors recommend the PRBR algorithm as a safe and effective strategy for accurate, image-guided registration and PTV margin reduction.
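
    Point-based rigid-body registration of paired fiducials is the classic least-squares fit of a rotation and translation (the Kabsch/Umeyama solution), with the fiducial registration error (FRE) as the RMS residual after alignment. A minimal numpy sketch on synthetic clip positions (not the clinical data):

```python
import numpy as np

def rigid_register(fixed, moving):
    """Least-squares rigid transform (rotation R, translation t) mapping
    `moving` fiducials onto `fixed` fiducials (Kabsch/Umeyama, no scaling)."""
    fc, mc = fixed.mean(axis=0), moving.mean(axis=0)
    H = (moving - mc).T @ (fixed - fc)          # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.eye(3)
    D[2, 2] = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ D @ U.T
    t = fc - R @ mc
    return R, t

def fre(fixed, moving, R, t):
    """Fiducial registration error: RMS distance after alignment."""
    residual = fixed - (moving @ R.T + t)
    return float(np.sqrt((residual ** 2).sum(axis=1).mean()))

# Synthetic clip positions (mm) and a known rigid motion
rng = np.random.default_rng(3)
clips = rng.uniform(-20.0, 20.0, size=(6, 3))
theta = np.deg2rad(10)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
moved = clips @ R_true.T + np.array([4.0, -1.0, 2.5])

R, t = rigid_register(moved, clips)
err = fre(moved, clips, R, t)   # ~0 for noise-free fiducials
```

    With noisy or deviated clips, the residual no longer vanishes, and the FRE quantifies exactly the kind of misalignment the study compares between manual and algorithmic registration.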

  8. [Research on non-rigid registration of multi-modal medical image based on Demons algorithm].

    PubMed

    Hao, Peibo; Chen, Zhen; Jiang, Shaofeng; Wang, Yang

    2014-02-01

    Non-rigid medical image registration is a popular subject in medical image research and has important clinical value. In this paper, we put forward an improved Demons algorithm that combines a gray-value conservation model with a local structure tensor conservation model to construct a new energy function for the multi-modal registration problem. We then applied the L-BFGS algorithm to optimize the energy function and solve the complex three-dimensional data optimization problem. Finally, we used a multi-scale hierarchical refinement scheme to handle large-deformation registration. The experimental results showed that the proposed algorithm performs well for large-deformation and multi-modal three-dimensional medical image registration.
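
    The classic Thirion demons force, on which such improved variants build, fits in a few lines: each voxel is pushed along the fixed-image gradient, scaled by the intensity mismatch. A minimal 2-D numpy sketch of one update step (the paper's gray-value and structure-tensor conservation terms and the L-BFGS optimization are not reproduced here):

```python
import numpy as np

def demons_step(fixed, moving, eps=1e-9):
    """One Thirion demons update:
    u = (m - f) * grad(f) / (|grad(f)|^2 + (m - f)^2)."""
    diff = moving - fixed
    gy, gx = np.gradient(fixed)                 # image gradients (rows, cols)
    denom = gx ** 2 + gy ** 2 + diff ** 2
    denom = np.where(denom < eps, 1.0, denom)   # avoid division by zero;
                                                # numerator is ~0 there anyway
    return diff * gx / denom, diff * gy / denom

fixed = np.zeros((16, 16)); fixed[4:12, 4:12] = 1.0
moving = np.roll(fixed, 2, axis=1)              # same square, shifted 2 px

ux, uy = demons_step(fixed, moving)             # nonzero force at the edges
zx, zy = demons_step(fixed, fixed)              # identical images: zero force
```

    In a full registration loop, the displacement field accumulates these updates (usually with Gaussian smoothing between iterations) until the images agree.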

  9. New Protocol for Skin Landmark Registration in Image-Guided Neurosurgery: Technical Note.

    PubMed

    Gerard, Ian J; Hall, Jeffery A; Mok, Kelvin; Collins, D Louis

    2015-09-01

    Newer versions of the commercial Medtronic StealthStation allow the use of only 8 landmark pairs for patient-to-image registration as opposed to 9 landmarks in older systems. The choice of which landmark pair to drop in these newer systems can have an effect on the quality of the patient-to-image registration. To investigate 4 landmark registration protocols based on 8 landmark pairs and compare the resulting registration accuracy with a 9-landmark protocol. Four different protocols were tested on both phantoms and patients. Two of the protocols involved using 4 ear landmarks and 4 facial landmarks and the other 2 involved using 3 ear landmarks and 5 facial landmarks. Both the fiducial registration error and target registration error were evaluated for each of the different protocols to determine any difference between them and the 9-landmark protocol. No difference in fiducial registration error was found between any of the 8-landmark protocols and the 9-landmark protocol. A significant decrease (P < .05) in target registration error was found when using a protocol based on 4 ear landmarks and 4 facial landmarks compared with the other protocols based on 3 ear landmarks. When using 8 landmarks to perform the patient-to-image registration, the protocol using 4 ear landmarks and 4 facial landmarks greatly outperformed the other 8-landmark protocols and 9-landmark protocol, resulting in the lowest target registration error.

  10. Automated reconstruction of standing posture panoramas from multi-sector long limb x-ray images

    NASA Astrophysics Data System (ADS)

    Miller, Linzey; Trier, Caroline; Ben-Zikri, Yehuda K.; Linte, Cristian A.

    2016-03-01

    Due to the digital X-ray imaging system's limited field of view, several individual sector images are required to capture the posture of an individual in standing position. These images are then "stitched together" to reconstruct the standing posture. We have created an image processing application that automates the stitching, thereby minimizing user input, optimizing workflow, and reducing human error. The application begins by pre-processing the input images: removing artifacts, filtering out isolated noisy regions, and amplifying a seamless bone edge. The resulting binary images are then registered together using a rigid-body intensity-based registration algorithm. The identified registration transformations are then used to map the original sector images into the panorama image. Our method relies primarily on the anatomical content of the images to generate the panoramas, as opposed to external markers employed to aid the alignment process. Current results show robust edge detection prior to registration. We have tested our approach by comparing the resulting automatically-stitched panoramas to manually stitched panoramas in terms of registration parameters, target registration error of homologous markers, and the homogeneity of the digitally subtracted automatically- and manually-stitched images, using 26 patient datasets.

  11. Semiautomatic registration of 3D transabdominal ultrasound images for patient repositioning during postprostatectomy radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Presles, Benoît, E-mail: benoit.presles@creatis.insa-lyon.fr; Rit, Simon; Sarrut, David

    2014-12-15

    Purpose: The aim of the present work is to propose and evaluate registration algorithms of three-dimensional (3D) transabdominal (TA) ultrasound (US) images to setup postprostatectomy patients during radiation therapy. Methods: Three registration methods have been developed and evaluated to register a reference 3D-TA-US image acquired during the planning CT session and a 3D-TA-US image acquired before each treatment session. The first method (method A) uses only gray value information, whereas the second one (method B) uses only gradient information. The third one (method C) combines both sets of information. All methods restrict the comparison to a region of interest computed from the dilated reference positioning volume drawn on the reference image and use mutual information as a similarity measure. The considered geometric transformations are translations and have been optimized by using the adaptive stochastic gradient descent algorithm. Validation has been carried out using manual registration by three operators of the same set of image pairs as the algorithms. Sixty-two treatment US images of seven patients irradiated after a prostatectomy have been registered to their corresponding reference US image. The reference registration has been defined as the average of the manual registration values. Registration error has been calculated by subtracting the reference registration from the algorithm result. For each session, the method has been considered a failure if the registration error was above both the interoperator variability of the session and a global threshold of 3.0 mm. Results: All proposed registration algorithms have no systematic bias. Method B leads to the best results with mean errors of −0.6, 0.7, and −0.2 mm in left–right (LR), superior–inferior (SI), and anterior–posterior (AP) directions, respectively. With this method, the standard deviations of the mean error are of 1.7, 2.4, and 2.6 mm in LR, SI, and AP directions, respectively.
The latter are inferior to the interoperator registration variabilities, which are 2.5, 2.5, and 3.5 mm in LR, SI, and AP directions, respectively. Failures occur in 5%, 18%, and 10% of cases in LR, SI, and AP directions, respectively. 69% of the sessions have no failure. Conclusions: Results of the best proposed registration algorithm of 3D-TA-US images for postprostatectomy treatment have no bias and are in the same variability range as manual registration. As the algorithm requires a short computation time, it could be used in clinical practice provided that a visual review is performed.
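
    The mutual-information similarity measure shared by all three methods can be computed from the joint intensity histogram of the two images. A minimal numpy sketch on synthetic images (the bin count is an arbitrary choice, not the study's setting):

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Mutual information (nats) of two images via their joint histogram."""
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = hist / hist.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of image a
    py = pxy.sum(axis=0, keepdims=True)   # marginal of image b
    nz = pxy > 0                          # avoid log(0)
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(7)
img = rng.normal(size=(64, 64))
aligned = img + 0.05 * rng.normal(size=img.shape)   # nearly registered
misaligned = np.roll(img, 10, axis=0)               # gross misalignment

mi_aligned = mutual_information(img, aligned)
mi_misaligned = mutual_information(img, misaligned)
```

    The optimizer searches over translations for the shift that maximizes this quantity; well-aligned images share much more information than misaligned ones.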

  12. Position tracking of moving liver lesion based on real-time registration between 2D ultrasound and 3D preoperative images

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Weon, Chijun; Hyun Nam, Woo; Lee, Duhgoon

    Purpose: Registration between 2D ultrasound (US) and 3D preoperative magnetic resonance (MR) (or computed tomography, CT) images has been studied recently for US-guided intervention. However, the existing techniques have some limits, either in the registration speed or the performance. The purpose of this work is to develop a real-time and fully automatic registration system between two intermodal images of the liver, and subsequently an indirect lesion positioning/tracking algorithm based on the registration result, for image-guided interventions. Methods: The proposed position tracking system consists of three stages. In the preoperative stage, the authors acquire several 3D preoperative MR (or CT) images at different respiratory phases. Based on the transformations obtained from nonrigid registration of the acquired 3D images, they then generate a 4D preoperative image along the respiratory phase. In the intraoperative preparatory stage, they properly attach a 3D US transducer to the patient’s body and fix its pose using a holding mechanism. They then acquire a couple of respiratory-controlled 3D US images. Via the rigid registration of these US images to the 3D preoperative images in the 4D image, the pose information of the fixed-pose 3D US transducer is determined with respect to the preoperative image coordinates. As feature(s) to use for the rigid registration, they may choose either internal liver vessels or the inferior vena cava. Since the latter is especially useful in patients with a diffuse liver disease, the authors newly propose using it. In the intraoperative real-time stage, they acquire 2D US images in real-time from the fixed-pose transducer. For each US image, they select candidates for its corresponding 2D preoperative slice from the 4D preoperative MR (or CT) image, based on the predetermined pose information of the transducer.
The correct corresponding image is then found among those candidates via real-time 2D registration based on a gradient-based similarity measure. Finally, if needed, they obtain the position information of the liver lesion using the 3D preoperative image to which the registered 2D preoperative slice belongs. Results: The proposed method was applied to 23 clinical datasets and quantitative evaluations were conducted. With the exception of one clinical dataset that included US images of extremely low quality, 22 datasets of various liver status were successfully applied in the evaluation. Experimental results showed that the registration error between the anatomical features of US and preoperative MR images is less than 3 mm on average. The lesion tracking error was also found to be less than 5 mm at maximum. Conclusions: A new system has been proposed for real-time registration between 2D US and successive multiple 3D preoperative MR/CT images of the liver and was applied for indirect lesion tracking for image-guided intervention. The system is fully automatic and robust even with images that had low quality due to patient status. Through visual examinations and quantitative evaluations, it was verified that the proposed system can provide high lesion tracking accuracy as well as high registration accuracy, at performance levels which were acceptable for various clinical applications.« less

  13. Introduction to Remote Sensing Image Registration

    NASA Technical Reports Server (NTRS)

    Le Moigne, Jacqueline

    2017-01-01

    For many applications, accurate and fast image registration of large amounts of multi-source data is the first necessary step before subsequent processing and integration. Image registration comprises several steps, and each step can be approached by various methods, each with advantages and drawbacks depending on the type of data, the type of application, the a priori information known about the data, and the accuracy that is required. This paper will first present a general overview of remote sensing image registration and then go over a few specific methods and their applications.

  14. Comparison of time-series registration methods in breast dynamic infrared imaging

    NASA Astrophysics Data System (ADS)

    Riyahi-Alam, S.; Agostini, V.; Molinari, F.; Knaflitz, M.

    2015-03-01

    Automated motion reduction in dynamic infrared imaging is in demand in clinical applications, since movement disarranges the time-temperature series of each pixel, originating thermal artifacts that might bias the clinical decision. All previously proposed registration methods are feature-based algorithms requiring manual intervention. The aim of this work is to optimize the registration strategy specifically for breast dynamic infrared imaging and to make it user-independent. We implemented and evaluated three different 3D time-series registration methods, applied to 12 datasets of healthy breast thermal images: (1) linear affine, (2) non-linear B-spline, and (3) Demons. The results are evaluated through normalized mutual information, with average values of 0.70 ±0.03, 0.74 ±0.03 and 0.81 ±0.09 (out of 1) for affine, B-spline and Demons registration, respectively, as well as through breast boundary overlap and the Jacobian determinant of the deformation field. Statistical analysis showed that the symmetric diffeomorphic Demons method performs best, yielding the best breast alignment and non-negative Jacobian values that guarantee image similarity and anatomical consistency of the transformation, owing to homologous forces that shorten the pixel geometric disparities across all frames. We propose Demons registration as an effective technique for time-series dynamic infrared registration, to stabilize the local temperature oscillation.
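The normalized mutual information used to score these registrations can be sketched in a few lines of numpy. This is a minimal joint-histogram estimator (the bin count is an illustrative choice, not taken from the paper); it returns the common normalization 2·I(A;B)/(H(A)+H(B)), which lies in [0, 1]:

```python
import numpy as np

def normalized_mutual_information(a, b, bins=64):
    """NMI = 2*I(A;B) / (H(A) + H(B)), estimated from a joint histogram."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()          # joint probability estimate
    px = pxy.sum(axis=1)               # marginal of A
    py = pxy.sum(axis=0)               # marginal of B
    nz = pxy > 0
    hxy = -np.sum(pxy[nz] * np.log(pxy[nz]))
    hx = -np.sum(px[px > 0] * np.log(px[px > 0]))
    hy = -np.sum(py[py > 0] * np.log(py[py > 0]))
    return 2.0 * (hx + hy - hxy) / (hx + hy)
```

Identical images give an NMI of 1 under this definition, while unrelated images score much lower, which is why higher averages (0.81 for Demons) indicate better alignment.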

  15. Improving the convergence rate in affine registration of PET and SPECT brain images using histogram equalization.

    PubMed

    Salas-Gonzalez, D; Górriz, J M; Ramírez, J; Padilla, P; Illán, I A

    2013-01-01

    A procedure to improve the convergence rate for affine registration methods of medical brain images when the images differ greatly from the template is presented. The methodology is based on a histogram matching of the source images with respect to the reference brain template before proceeding with the affine registration. The preprocessed source brain images are spatially normalized to a template using a general affine model with 12 parameters. A sum of squared differences between the source images and the template is considered as objective function, and a Gauss-Newton optimization algorithm is used to find the minimum of the cost function. Using histogram equalization as a preprocessing step improves the convergence rate in the affine registration algorithm of brain images as we show in this work using SPECT and PET brain images.
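The histogram-matching preprocessing step can be sketched as a quantile mapping: source intensities are remapped so that their cumulative distribution matches the template's. This is a minimal generic implementation, not necessarily the authors' exact procedure:

```python
import numpy as np

def match_histogram(source, template):
    """Remap source intensities so their CDF matches the template's CDF."""
    s_vals, s_idx, s_counts = np.unique(source.ravel(),
                                        return_inverse=True,
                                        return_counts=True)
    t_vals, t_counts = np.unique(template.ravel(), return_counts=True)
    s_cdf = np.cumsum(s_counts) / source.size   # CDF of source values
    t_cdf = np.cumsum(t_counts) / template.size # CDF of template values
    # For each source quantile, look up the template intensity at that quantile.
    matched = np.interp(s_cdf, t_cdf, t_vals)
    return matched[s_idx].reshape(source.shape)
```

After this remapping, a sum-of-squared-differences objective between source and template becomes better conditioned, which is what speeds up the Gauss-Newton iterations.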

  16. 3D non-rigid surface-based MR-TRUS registration for image-guided prostate biopsy

    NASA Astrophysics Data System (ADS)

    Sun, Yue; Qiu, Wu; Romagnoli, Cesare; Fenster, Aaron

    2014-03-01

    Two dimensional (2D) transrectal ultrasound (TRUS) guided prostate biopsy is the standard approach for definitive diagnosis of prostate cancer (PCa). However, due to the lack of image contrast of prostate tumors needed to clearly visualize early-stage PCa, prostate biopsy often results in false negatives, requiring repeat biopsies. Magnetic Resonance Imaging (MRI) has been considered to be a promising imaging modality for noninvasive identification of PCa, since it can provide a high sensitivity and specificity for the detection of early stage PCa. Our main objective is to develop and validate a registration method of 3D MR-TRUS images, allowing generation of volumetric 3D maps of targets identified in 3D MR images to be biopsied using 3D TRUS images. Our registration method first makes use of an initial rigid registration of 3D MR images to 3D TRUS images using 6 manually placed approximately corresponding landmarks in each image. Following the manual initialization, two prostate surfaces are segmented from 3D MR and TRUS images and then non-rigidly registered using a thin-plate spline (TPS) algorithm. The registration accuracy was evaluated using 4 patient images by measuring target registration error (TRE) of manually identified corresponding intrinsic fiducials (calcifications and/or cysts) in the prostates. Experimental results show that the proposed method yielded an overall mean TRE of 2.05 mm, which is favorably comparable to a clinical requirement for an error of less than 2.5 mm.
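The initial landmark-based rigid registration described above is commonly solved as a least-squares Procrustes (Kabsch) fit via SVD; the following sketch shows that standard formulation, not necessarily the authors' exact implementation. The same residual distances at held-out fiducials are what the target registration error (TRE) measures:

```python
import numpy as np

def rigid_fit(src, dst):
    """Least-squares rigid transform (R, t) mapping src points onto dst,
    given paired N x 3 landmark arrays (Kabsch/Procrustes via SVD)."""
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    u, _, vt = np.linalg.svd(src_c.T @ dst_c)
    d = np.sign(np.linalg.det(vt.T @ u.T))          # guard against reflections
    s = np.diag([1.0] * (src.shape[1] - 1) + [d])
    r = vt.T @ s @ u.T
    t = dst.mean(axis=0) - r @ src.mean(axis=0)
    return r, t
```

With the six manually placed landmark pairs, `rigid_fit` would give the initialization that the thin-plate-spline surface registration then refines.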

  17. Unsupervised Detection of Planetary Craters by a Marked Point Process

    NASA Technical Reports Server (NTRS)

    Troglio, G.; Benediktsson, J. A.; Le Moigne, J.; Moser, G.; Serpico, S. B.

    2011-01-01

    With the launch of several planetary missions in the last decade, a large number of planetary images are being acquired. Because of the huge volume of acquired data, automatic and robust processing techniques are preferable for data analysis. Here, the aim is to achieve a robust and general methodology for crater detection. A novel technique based on a marked point process is proposed. First, the contours in the image are extracted. The object boundaries are modeled as a configuration of an unknown number of random ellipses, i.e., the contour image is considered a realization of a marked point process. Then, an energy function is defined, containing both an a priori energy and a likelihood term. The global minimum of this function is estimated by using reversible jump Markov chain Monte Carlo dynamics and a simulated annealing scheme. The main idea behind marked point processes is to model objects within a stochastic framework: marked point processes represent a very promising current approach in stochastic image modeling and provide a powerful and methodologically rigorous framework to efficiently map and detect objects and structures in an image with excellent robustness to noise. The proposed method for crater detection has several feasible applications. One such application area is image registration by matching the extracted features.

  18. Joint deformable liver registration and bias field correction for MR-guided HDR brachytherapy.

    PubMed

    Rak, Marko; König, Tim; Tönnies, Klaus D; Walke, Mathias; Ricke, Jens; Wybranski, Christian

    2017-12-01

    In interstitial high-dose rate brachytherapy, liver cancer is treated by internal radiation, requiring percutaneous placement of applicators within or close to the tumor. To maximize utility, the optimal applicator configuration is pre-planned on magnetic resonance images. The pre-planned configuration is then implemented via a magnetic resonance-guided intervention. Mapping the pre-planning information onto interventional data would reduce the radiologist's cognitive load during the intervention and could possibly minimize discrepancies between optimally pre-planned and actually placed applicators. We propose a fast and robust two-step registration framework suitable for interventional settings: first, we utilize a multi-resolution rigid registration to correct for differences in patient positioning (rotation and translation). Second, we employ a novel iterative approach alternating between bias field correction and Markov random field deformable registration in a multi-resolution framework to compensate for non-rigid movements of the liver, the tumors and the organs at risk. In contrast to existing pre-correction methods, our multi-resolution scheme can recover bias field artifacts of different extents at marginal computational costs. We compared our approach to deformable registration via B-splines, demons and the SyN method on 22 registration tasks from eleven patients. Results showed that our approach is more accurate than the contenders for liver as well as for tumor tissues. We yield average liver volume overlaps of 94.0 ± 2.7% and average surface-to-surface distances of 2.02 ± 0.87 mm and 3.55 ± 2.19 mm for liver and tumor tissue, respectively. The reported distances are close to (or even below) the slice spacing (2.5 - 3.0 mm) of our data. Our approach is also the fastest, taking 35.8 ± 12.8 s per task. The presented approach is sufficiently accurate to map information available from brachytherapy pre-planning onto interventional data. 
It is also reasonably fast, providing a starting point for computer assistance during the intervention.


  19. Quicksilver: Fast predictive image registration - A deep learning approach.

    PubMed

    Yang, Xiao; Kwitt, Roland; Styner, Martin; Niethammer, Marc

    2017-09-01

    This paper introduces Quicksilver, a fast deformable image registration method. Quicksilver registration for image-pairs works by patch-wise prediction of a deformation model based directly on image appearance. A deep encoder-decoder network is used as the prediction model. While the prediction strategy is general, we focus on predictions for the Large Deformation Diffeomorphic Metric Mapping (LDDMM) model. Specifically, we predict the momentum-parameterization of LDDMM, which facilitates a patch-wise prediction strategy while maintaining the theoretical properties of LDDMM, such as guaranteed diffeomorphic mappings for sufficiently strong regularization. We also provide a probabilistic version of our prediction network which can be sampled during the testing time to calculate uncertainties in the predicted deformations. Finally, we introduce a new correction network which greatly increases the prediction accuracy of an already existing prediction network. We show experimental results for uni-modal atlas-to-image as well as uni-/multi-modal image-to-image registrations. These experiments demonstrate that our method accurately predicts registrations obtained by numerical optimization, is very fast, achieves state-of-the-art registration results on four standard validation datasets, and can jointly learn an image similarity measure. Quicksilver is freely available as an open-source software. Copyright © 2017 Elsevier Inc. All rights reserved.

  20. Advanced Image Processing for NASA Applications

    NASA Technical Reports Server (NTRS)

    Le Moigne, Jacqueline

    2007-01-01

    The future of space exploration will involve cooperating fleets of spacecraft or sensor webs geared towards coordinated and optimal observation of Earth Science phenomena. The main advantage of such systems is to utilize multiple viewing angles as well as multiple spatial and spectral resolutions of sensors carried on multiple spacecraft but acting collaboratively as a single system. Within this framework, our research focuses on all areas related to sensing in collaborative environments, which means systems utilizing intracommunicating spatially distributed sensor pods or crafts being deployed to monitor or explore different environments. This talk will describe the general concept of sensing in collaborative environments, will give a brief overview of several technologies developed at NASA Goddard Space Flight Center in this area, and then will concentrate on specific image processing research related to that domain, specifically image registration and image fusion.

  1. Registration of heat capacity mapping mission day and night images

    NASA Technical Reports Server (NTRS)

    Watson, K.; Hummer-Miller, S.; Sawatzky, D. L.

    1982-01-01

    Registration of thermal images is complicated by distinctive differences in the appearance of day and night features needed as control in the registration process. These changes are unlike those that occur between Landsat scenes and pose unique constraints. Experimentation with several potentially promising techniques has led to selection of a fairly simple scheme for registration of data from the experimental thermal satellite HCMM using an affine transformation. Two registration examples are provided.
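An affine transformation of the kind selected for HCMM day/night registration can be estimated from control-point pairs by linear least squares. The sketch below is a generic 2D formulation (the function names are illustrative, and the HCMM processing details are not reproduced here):

```python
import numpy as np

def fit_affine_2d(src, dst):
    """Solve dst ~= A @ [x, y, 1]^T for the 6 affine parameters,
    given N >= 3 control-point pairs as N x 2 arrays."""
    n = src.shape[0]
    m = np.hstack([src, np.ones((n, 1))])      # N x 3 design matrix
    params, *_ = np.linalg.lstsq(m, dst, rcond=None)
    return params.T                            # 2 x 3 affine matrix

def apply_affine_2d(a, pts):
    """Apply a 2 x 3 affine matrix to N x 2 points."""
    return pts @ a[:, :2].T + a[:, 2]
```

With three or more day/night control points, `fit_affine_2d` recovers the scale, rotation, shear and translation in one linear solve, which matches the "fairly simple scheme" character of the approach.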

  2. Adaptive Registration of Varying Contrast-Weighted Images for Improved Tissue Characterization (ARCTIC): Application to T1 Mapping

    PubMed Central

    Roujol, Sébastien; Foppa, Murilo; Weingartner, Sebastian; Manning, Warren J.; Nezafat, Reza

    2014-01-01

    Purpose To propose and evaluate a novel non-rigid image registration approach for improved myocardial T1 mapping. Methods Myocardial motion is estimated as global affine motion refined by a novel local non-rigid motion estimation algorithm. A variational framework is proposed, which simultaneously estimates motion field and intensity variations, and uses an additional regularization term to constrain the deformation field using automatic feature tracking. The method was evaluated in 29 patients by measuring the DICE similarity coefficient (DSC) and the myocardial boundary error (MBE) in short axis and four chamber data. Each image series was visually assessed as “no motion” or “with motion”. Overall T1 map quality and motion artifacts were assessed in the 85 T1 maps acquired in short axis view using a 4-point scale (1-non diagnostic/severe motion artifact, 4-excellent/no motion artifact). Results Increased DSC (0.78±0.14 to 0.87±0.03, p<0.001), reduced MBE (1.29±0.72mm to 0.84±0.20mm, p<0.001), improved overall T1 map quality (2.86±1.04 to 3.49±0.77, p<0.001), and reduced T1 map motion artifacts (2.51±0.84 to 3.61±0.64, p<0.001) were obtained after motion correction of “with motion” data (~56% of data). Conclusion The proposed non-rigid registration approach reduces the respiratory-induced motion that occurs during breath-hold T1 mapping, and significantly improves T1 map quality. PMID:24798588
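The DICE similarity coefficient (DSC) used for evaluation here is a standard overlap measure between binary masks; a minimal sketch:

```python
import numpy as np

def dice_similarity(mask_a, mask_b):
    """DSC = 2|A intersect B| / (|A| + |B|) for binary masks; 1 = perfect overlap."""
    a = mask_a.astype(bool)
    b = mask_b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0
```

Applied to myocardial masks before and after motion correction, an increase such as the reported 0.78 to 0.87 reflects better frame-to-frame alignment.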

  3. A B-spline image registration based CAD scheme to evaluate drug treatment response of ovarian cancer patients

    NASA Astrophysics Data System (ADS)

    Tan, Maxine; Li, Zheng; Moore, Kathleen; Thai, Theresa; Ding, Kai; Liu, Hong; Zheng, Bin

    2016-03-01

    Ovarian cancer is the second most common cancer amongst gynecologic malignancies, and has the highest death rate. Since the majority of ovarian cancer patients (>75%) are diagnosed in the advanced stage with tumor metastasis, chemotherapy is often required after surgery to remove the primary ovarian tumors. In order to quickly assess patient response to chemotherapy in clinical trials, two sets of CT examinations are taken pre- and post-therapy (e.g., after 6 weeks). Treatment efficacy is then evaluated based on the Response Evaluation Criteria in Solid Tumors (RECIST) guideline, whereby tumor size is measured by the longest diameter on one CT image slice and only a subset of selected tumors is tracked. However, this criterion cannot fully represent the volumetric changes of the tumors and might miss potentially problematic unmarked tumors. Thus, we developed a new CAD approach to measure and analyze volumetric tumor growth/shrinkage using a cubic B-spline deformable image registration method. In this initial study, on 14 sets of pre- and post-treatment CT scans, we registered the two consecutive scans using cubic B-spline registration in a multiresolution (coarse-to-fine) framework. We used the Mattes mutual information metric as the similarity criterion and the L-BFGS-B optimizer. The results show that our method can quantify volumetric changes in the tumors more accurately than RECIST, and can also detect (highlight) potentially problematic regions that were not originally targeted by radiologists. Despite the encouraging results of this preliminary study, further validation of scheme performance is required using large and diverse datasets in the future.
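The contrast between RECIST's single-diameter measurement and a volumetric measure can be illustrated on binary tumor masks. This is a toy sketch of the volume bookkeeping only, not the authors' registration pipeline:

```python
import numpy as np

def tumor_volume_ml(mask, voxel_mm):
    """Tumor volume in millilitres from a binary mask and voxel size in mm."""
    return mask.sum() * float(np.prod(voxel_mm)) / 1000.0

def volumetric_change(pre_mask, post_mask, voxel_mm):
    """Relative volume change between pre- and post-therapy tumor masks."""
    v0 = tumor_volume_ml(pre_mask, voxel_mm)
    v1 = tumor_volume_ml(post_mask, voxel_mm)
    return (v1 - v0) / v0
```

A tumor whose every linear dimension shrinks by 20% loses about half its volume, a response that a single longest-diameter measurement under-reports.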

  4. Improved cardiac motion detection from ultrasound images using TDIOF: a combined B-mode/ tissue Doppler approach

    NASA Astrophysics Data System (ADS)

    Tavakoli, Vahid; Stoddard, Marcus F.; Amini, Amir A.

    2013-03-01

    Quantitative motion analysis of echocardiographic images helps clinicians with the diagnosis and therapy of patients suffering from cardiac disease. Quantitative analysis is usually based on TDI (Tissue Doppler Imaging) or speckle tracking. These methods rest on two independent techniques, the Doppler effect and image registration, respectively. In order to increase the accuracy of the speckle tracking technique and cope with the angle dependency of TDI, herein, a combined approach dubbed TDIOF (Tissue Doppler Imaging Optical Flow) is proposed. TDIOF is formulated as a combination of B-mode and Doppler energy terms in an optical flow framework and minimized using algebraic equations. In this paper, we report on validations with simulated data, a physical cardiac phantom, and in-vivo patient data. It is shown that the additional Doppler term is able to increase the accuracy of speckle tracking, the basis for several commercially available echocardiography analysis techniques.

  5. Automation of Hessian-Based Tubularity Measure Response Function in 3D Biomedical Images.

    PubMed

    Dzyubak, Oleksandr P; Ritman, Erik L

    2011-01-01

    Blood vessels and nerve trees consist of tubular objects interconnected into a complex tree- or web-like structure, spanning structural scales from 5 μm diameter capillaries to the 3 cm aorta. This large scale range presents two major problems: one is simply making the measurements, and the other is the exponential increase in the number of components with decreasing scale. With the remarkable increase in the volume imaged by, and resolution of, modern 3D imagers, manual tracking of complex multiscale parameters in such large image data sets is almost impossible. In addition, manual tracking is quite subjective and unreliable. We propose a solution for automating an adaptive, unsupervised system for tracking tubular objects, based on a multiscale framework and a Hessian-based object shape detector, incorporating the National Library of Medicine Insight Segmentation and Registration Toolkit (ITK) image processing libraries.
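The core of a Hessian-based tubularity detector is the eigenvalue pattern of the local Hessian: inside a bright tube, the two cross-sectional eigenvalues are strongly negative while the along-axis eigenvalue is near zero. The sketch below is a deliberately naive single-scale response (no Gaussian smoothing or Frangi-style weighting, unlike ITK's production filters):

```python
import numpy as np

def tubularity_3d(vol):
    """Naive Hessian-based tubularity response for bright tubes:
    |lam2 * lam3| where lam2, lam3 are the two largest-magnitude
    Hessian eigenvalues, required to be negative (bright ridge)."""
    grads = np.gradient(vol)
    hess = np.empty(vol.shape + (3, 3))
    for i, g in enumerate(grads):              # build the 3x3 Hessian per voxel
        for j, gg in enumerate(np.gradient(g)):
            hess[..., i, j] = gg
    lam = np.linalg.eigvalsh(hess)             # batched symmetric eigenvalues
    order = np.argsort(np.abs(lam), axis=-1)   # sort by magnitude
    lam = np.take_along_axis(lam, order, axis=-1)
    l2, l3 = lam[..., 1], lam[..., 2]
    return np.where((l2 < 0) & (l3 < 0), np.abs(l2 * l3), 0.0)
```

Running this at several smoothing scales and taking the per-voxel maximum is the usual way such a detector is made multiscale.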

  6. SAR image registration based on the SUSAN algorithm

    NASA Astrophysics Data System (ADS)

    Wang, Chun-bo; Fu, Shao-hua; Wei, Zhong-yi

    2011-10-01

    Synthetic Aperture Radar (SAR) is an active remote sensing system that can be installed on aircraft, satellites and other carriers, with the advantage of day-and-night, all-weather operation. How to process SAR data and extract information reasonably and efficiently is an important problem; in particular, SAR image geometric correction is a bottleneck impeding the application of SAR. In this paper we first introduce image registration and the SUSAN algorithm, then describe the process of SAR image registration based on the SUSAN algorithm, and finally present experimental results of SAR image registration. The experiments show that this method is effective and applicable in terms of both computation time and accuracy.

  7. Phantom study and accuracy evaluation of an image-to-world registration approach used with electro-magnetic tracking system for neurosurgery

    NASA Astrophysics Data System (ADS)

    Li, Senhu; Sarment, David

    2015-12-01

    Minimally invasive neurosurgery requires intraoperative imaging updates and a highly efficient image guidance system to facilitate the procedure. An automatic image-guided system used with a compact, mobile intraoperative CT imager is introduced in this work. A tracking frame that can easily be attached to a commercially available skull clamp was designed. With the known geometry of the fiducials and tracking sensor arranged on this rigid frame, fabricated through high-precision 3D printing, an accurate, fully automatic registration method was developed in a simple and low-cost manner; the frame also helped in estimating the errors from fiducial localization in image space through image processing, and in patient space through calibration of the tracking frame. Our phantom study shows a fiducial registration error of 0.348+/-0.028 mm, compared with a manual registration error of 1.976+/-0.778 mm. The system provides robust and accurate image-to-patient registration without interrupting the routine surgical workflow or requiring user interaction during neurosurgery.

  8. Nonrigid Image Registration in Digital Subtraction Angiography Using Multilevel B-Spline

    PubMed Central

    2013-01-01

    We address the problem of motion artifact reduction in digital subtraction angiography (DSA) using image registration techniques. Most registration algorithms proposed for DSA have been designed for peripheral and cerebral angiography images, in which we mainly deal with global rigid motions. These algorithms did not yield good results when applied to coronary angiography images because of the complex nonrigid motions that exist in this type of angiography image. Multiresolution and iterative algorithms have been proposed to cope with this problem, but they are associated with high computational cost, which makes them unacceptable for real-time clinical applications. In this paper we propose a nonrigid image registration algorithm for coronary angiography images that is significantly faster than multiresolution and iterative blocking methods and outperforms competing algorithms evaluated on the same data sets. The algorithm is based on a sparse set of matched feature-point pairs, and the elastic registration is performed by means of multilevel B-spline image warping. Experimental results with several clinical data sets demonstrate the effectiveness of our approach. PMID:23971026
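The warping step driven by sparse matched point pairs can be illustrated with a classic thin-plate-spline interpolant (a close relative of the multilevel B-spline scheme used here, shown for its compactness rather than as the paper's method). The TPS passes exactly through the matched landmarks and smoothly interpolates everywhere else:

```python
import numpy as np

def _tps_kernel(r2):
    """Thin-plate spline radial basis U(r) = r^2 log r, via 0.5 * r^2 * log(r^2)."""
    with np.errstate(divide='ignore', invalid='ignore'):
        out = 0.5 * r2 * np.log(r2)
    return np.nan_to_num(out)          # U(0) = 0

def tps_fit(src, dst):
    """Fit a 2D thin-plate spline mapping src landmarks exactly onto dst."""
    n = src.shape[0]
    phi = _tps_kernel(np.sum((src[:, None] - src[None]) ** 2, axis=-1))
    p = np.hstack([np.ones((n, 1)), src])
    a = np.zeros((n + 3, n + 3))
    a[:n, :n] = phi                    # radial terms
    a[:n, n:] = p                      # affine terms
    a[n:, :n] = p.T                    # orthogonality constraints
    b = np.zeros((n + 3, 2))
    b[:n] = dst
    return np.linalg.solve(a, b)

def tps_apply(w, src, pts):
    """Evaluate the fitted spline at arbitrary 2D points."""
    n = src.shape[0]
    phi = _tps_kernel(np.sum((pts[:, None] - src[None]) ** 2, axis=-1))
    return phi @ w[:n] + np.hstack([np.ones((pts.shape[0], 1)), pts]) @ w[n:]
```

Feeding every pixel coordinate through `tps_apply` yields the dense deformation field used to warp the mask image before subtraction.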

  9. Accurate segmentation of lung fields on chest radiographs using deep convolutional networks

    NASA Astrophysics Data System (ADS)

    Arbabshirani, Mohammad R.; Dallal, Ahmed H.; Agarwal, Chirag; Patel, Aalpan; Moore, Gregory

    2017-02-01

    Accurate segmentation of lung fields on chest radiographs is the primary step for computer-aided detection of various conditions such as lung cancer and tuberculosis. The size, shape and texture of the lung fields are key parameters for chest X-ray (CXR) based lung disease diagnosis, for which lung field segmentation is a significant first step. Although many methods have been proposed for this problem, lung field segmentation remains a challenge. In recent years, deep learning has shown state-of-the-art performance in many visual tasks such as object detection, image classification and semantic image segmentation. In this study, we propose a deep convolutional neural network (CNN) framework for segmentation of lung fields. The algorithm was developed and tested on 167 clinical posterior-anterior (PA) CXR images collected retrospectively from the picture archiving and communication system (PACS) of Geisinger Health System. The proposed multi-scale network is composed of five convolutional and two fully connected layers. The framework achieved an IOU (intersection over union) of 0.96 on the testing dataset as compared to manual segmentation, outperforming state-of-the-art registration-based segmentation by a significant margin. To our knowledge, this is the first deep learning based study of lung field segmentation on CXR images developed on a heterogeneous clinical dataset. The results suggest that convolutional neural networks could be employed reliably for lung field segmentation.

  10. Integration of prior CT into CBCT reconstruction for improved image quality via reconstruction of difference: first patient studies

    NASA Astrophysics Data System (ADS)

    Zhang, Hao; Gang, Grace J.; Lee, Junghoon; Wong, John; Stayman, J. Webster

    2017-03-01

    Purpose: There are many clinical situations where diagnostic CT is used for an initial diagnosis or treatment planning, followed by one or more CBCT scans that are part of an image-guided intervention. Because the high-quality diagnostic CT scan is a rich source of patient-specific anatomical knowledge, this provides an opportunity to incorporate the prior CT image into subsequent CBCT reconstruction for improved image quality. We propose a penalized-likelihood method called reconstruction of difference (RoD), to directly reconstruct differences between the CBCT scan and the CT prior. In this work, we demonstrate the efficacy of RoD with clinical patient datasets. Methods: We introduce a data processing workflow using the RoD framework to reconstruct anatomical changes between the prior CT and current CBCT. This workflow includes processing steps to account for non-anatomical differences between the two scans including 1) scatter correction for CBCT datasets due to increased scatter fractions in CBCT data; 2) histogram matching for attenuation variations between CT and CBCT; and 3) registration for different patient positioning. CBCT projection data and CT planning volumes for two radiotherapy patients - one abdominal study and one head-and-neck study - were investigated. Results: In comparisons between the proposed RoD framework and more traditional FDK and penalized-likelihood reconstructions, we find a significant improvement in image quality when prior CT information is incorporated into the reconstruction. RoD is able to provide additional low-contrast details while correctly incorporating actual physical changes in patient anatomy. Conclusions: The proposed framework provides an opportunity to either improve image quality or relax data fidelity constraints for CBCT imaging when prior CT studies of the same patient are available. Possible clinical targets include CBCT image-guided radiotherapy and CBCT image-guided surgeries.

  11. Fast and robust multimodal image registration using a local derivative pattern.

    PubMed

    Jiang, Dongsheng; Shi, Yonghong; Chen, Xinrong; Wang, Manning; Song, Zhijian

    2017-02-01

    Deformable multimodal image registration, which can benefit radiotherapy and image guided surgery by providing complementary information, remains a challenging task in the medical image analysis field due to the difficulty of defining a proper similarity measure. This article presents a novel, robust and fast binary descriptor, the discriminative local derivative pattern (dLDP), which is able to encode images of different modalities into similar image representations. dLDP calculates a binary string for each voxel according to the pattern of intensity derivatives in its neighborhood. The descriptor similarity is evaluated using the Hamming distance, which can be efficiently computed, instead of conventional L1 or L2 norms. For the first time, we validated the effectiveness and feasibility of the local derivative pattern for multimodal deformable image registration with several multi-modal registration applications. dLDP was compared with three state-of-the-art methods in artificial image and clinical settings. In the experiments of deformable registration between different magnetic resonance imaging (MRI) modalities from BrainWeb, between computed tomography and MRI images from patient data, and between MRI and ultrasound images from BITE database, we show our method outperforms localized mutual information and entropy images in terms of both accuracy and time efficiency. We have further validated dLDP for the deformable registration of preoperative MRI and three-dimensional intraoperative ultrasound images. Our results indicate that dLDP reduces the average mean target registration error from 4.12 mm to 2.30 mm. This accuracy is statistically equivalent to the accuracy of the state-of-the-art methods in the study; however, in terms of computational complexity, our method significantly outperforms other methods and is even comparable to the sum of the absolute difference. 
The results reveal that dLDP achieves superior performance in both accuracy and time efficiency for general multimodal image registration. In addition, dLDP shows potential for clinical ultrasound-guided intervention. © 2016 The Authors. Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.
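The general idea behind a binary derivative-pattern descriptor compared by Hamming distance can be sketched as follows. This toy version encodes only the signs of differences to the 8 neighbours of each pixel (the published dLDP encodes derivative directions more elaborately); because those signs survive any monotone intensity transform, the descriptor is insensitive to contrast changes of the kind seen across modalities:

```python
import numpy as np

def derivative_pattern(img):
    """Toy local binary descriptor: for each pixel, the signs of
    intensity differences to its 8 neighbours, packed into one byte."""
    pad = np.pad(img, 1, mode='edge')
    code = np.zeros(img.shape, dtype=np.uint8)
    bit = 0
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dy == 0 and dx == 0:
                continue
            shifted = pad[1 + dy:1 + dy + img.shape[0],
                          1 + dx:1 + dx + img.shape[1]]
            code |= (shifted > img).astype(np.uint8) << bit
            bit += 1
    return code

def hamming_distance(code_a, code_b):
    """Mean per-pixel Hamming distance between two descriptor maps."""
    x = np.bitwise_xor(code_a, code_b)
    return np.unpackbits(x[..., None], axis=-1).sum() / code_a.size
```

The XOR-and-popcount comparison is what makes Hamming matching much cheaper than evaluating L1/L2 norms or mutual information at every candidate displacement.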

  12. Modified dixon‐based renal dynamic contrast‐enhanced MRI facilitates automated registration and perfusion analysis

    PubMed Central

    Leiner, Tim; Vink, Eva E.; Blankestijn, Peter J.; van den Berg, Cornelis A.T.

    2017-01-01

    Purpose Renal dynamic contrast‐enhanced (DCE) MRI provides information on renal perfusion and filtration. However, clinical implementation is hampered by challenges in postprocessing as a result of misalignment of the kidneys due to respiration. We propose to perform automated image registration using the fat‐only images derived from a modified Dixon reconstruction of a dual‐echo acquisition because these provide consistent contrast over the dynamic series. Methods DCE data of 10 hypertensive patients was used. Dual‐echo images were acquired at 1.5 T with temporal resolution of 3.9 s during contrast agent injection. Dixon fat, water, and in‐phase and opposed‐phase (OP) images were reconstructed. Postprocessing was automated. Registration was performed both to fat images and OP images for comparison. Perfusion and filtration values were extracted from a two‐compartment model fit. Results Automatic registration to fat images performed better than automatic registration to OP images with visible contrast enhancement. Median vertical misalignment of the kidneys was 14 mm prior to registration, compared to 3 mm and 5 mm with registration to fat images and OP images, respectively (P = 0.03). Mean perfusion values and MR‐based glomerular filtration rates (GFR) were 233 ± 64 mL/100 mL/min and 60 ± 36 mL/minute, respectively, based on fat‐registered images. MR‐based GFR correlated with creatinine‐based GFR (P = 0.04) for fat‐registered images. For unregistered and OP‐registered images, this correlation was not significant. Conclusion Absence of contrast changes on Dixon fat images improves registration in renal DCE MRI and enables automated postprocessing, resulting in a more accurate estimation of GFR. Magn Reson Med 80:66–76, 2018. © 2017 The Authors Magnetic Resonance in Medicine published by Wiley Periodicals, Inc. on behalf of International Society for Magnetic Resonance in Medicine. 
This is an open access article under the terms of the Creative Commons Attribution NonCommercial License, which permits use, distribution and reproduction in any medium, provided the original work is properly cited and is not used for commercial purposes. PMID:29134673
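
    As a minimal, hypothetical illustration of the alignment problem addressed above, the dominant (vertical, respiration-induced) misalignment between two frames can be estimated by an exhaustive translation search scored with normalized cross-correlation; the synthetic data and function names below are invented for this sketch and are not the authors' pipeline:

```python
import numpy as np

def vertical_shift_ncc(fixed, moving, max_shift=20):
    """Estimate the vertical translation (in pixels) that best aligns
    `moving` to `fixed` by exhaustive search over integer shifts,
    scoring each candidate with normalized cross-correlation (NCC)."""
    best_shift, best_ncc = 0, -np.inf
    for s in range(-max_shift, max_shift + 1):
        shifted = np.roll(moving, s, axis=0)
        a = fixed - fixed.mean()
        b = shifted - shifted.mean()
        ncc = (a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
        if ncc > best_ncc:
            best_ncc, best_shift = ncc, s
    return best_shift

# Synthetic test: a bright band (a stand-in for a kidney on a fat image)
# displaced upward by 5 rows in the moving frame.
fixed = np.zeros((64, 32)); fixed[20:30] = 1.0
moving = np.roll(fixed, -5, axis=0)
shift = vertical_shift_ncc(fixed, moving)
```

    The same idea applied per frame of the dynamic series would reduce the reported 14 mm median misalignment before the model fit.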

  13. Non-invasive breast biopsy method using GD-DTPA contrast enhanced MRI series and F-18-FDG PET/CT dynamic image series

    NASA Astrophysics Data System (ADS)

    Magri, Alphonso William

    This study was undertaken to develop a nonsurgical breast biopsy method based on Gd-DTPA contrast-enhanced magnetic resonance (CE-MR) images and F-18-FDG PET/CT dynamic image series. A five-step process was developed to accomplish this. (1) Dynamic PET series were nonrigidly registered to the initial frame using a finite element method (FEM) based registration that requires fiducial skin markers to sample the displacement field between image frames. A commercial FEM package (ANSYS) was used for meshing and FEM calculations. Dynamic PET image series registrations were evaluated using the similarity measures SAVD and NCC. (2) Dynamic CE-MR series were nonrigidly registered to the initial frame using two registration methods: a multi-resolution free-form deformation (FFD) registration driven by normalized mutual information, and a FEM-based registration method. Dynamic CE-MR image series registrations were evaluated using similarity measurements, localization measurements, and qualitative comparison of motion artifacts. FFD registration was found to be superior to FEM-based registration. (3) Nonlinear curve fitting was performed for each voxel of the PET/CT volume of activity versus time, based on a realistic two-compartment Patlak model. Three parameters for this model were fitted; two of them describe the activity levels in the blood and in the cellular compartment, while the third characterizes the washout rate of F-18-FDG from the cellular compartment. (4) Nonlinear curve fitting was performed for each voxel of the MR volume of signal intensity versus time, based on a realistic two-compartment Brix model. Three parameters for this model were fitted: the rate of Gd exiting the compartment representing the extracellular space of a lesion; the rate of Gd exiting a blood compartment; and a parameter that characterizes the strength of signal intensities. 
    Curve fitting for both the PET/CT and MR series was accomplished by applying the Levenberg-Marquardt nonlinear regression algorithm. The best-fit parameters were used to create 3D parametric images. Compartmental modeling was evaluated on the ability of the parameter values to differentiate between tissue types. This evaluation was applied to registered and unregistered image series and showed that registration improved results. (5) PET and MR parametric images were registered through FEM- and FFD-based registration. Parametric image registration was evaluated using similarity measurements, target registration error, and qualitative comparison. Comparing FFD and FEM-based registration results showed that the FEM method is superior. This five-step process constitutes a novel multifaceted approach to a nonsurgical breast biopsy that successfully executes each step. Comparison of this method to conventional biopsy still needs to be done with a larger set of subject data.
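
    The per-voxel fitting in steps (3) and (4) can be sketched with a minimal Levenberg-Marquardt loop; the mono-exponential washout model below is a simplified stand-in for the Patlak and Brix compartment models, and all names and values are hypothetical:

```python
import numpy as np

def levenberg_marquardt(model, jac, p0, t, y, n_iter=50, lam=1e-3):
    """Minimal Levenberg-Marquardt loop: damped Gauss-Newton steps on
    the residuals y - model(t, p), with accept/reject damping control."""
    p = np.asarray(p0, dtype=float)
    for _ in range(n_iter):
        r = y - model(t, p)
        J = jac(t, p)
        A = J.T @ J + lam * np.eye(len(p))
        step = np.linalg.solve(A, J.T @ r)
        if np.linalg.norm(step) < 1e-12:
            break
        new_p = p + step
        if np.sum((y - model(t, new_p)) ** 2) < np.sum(r ** 2):
            p, lam = new_p, lam * 0.5   # accept step, reduce damping
        else:
            lam *= 2.0                  # reject step, increase damping
    return p

# Mono-exponential washout A*exp(-k*t): one voxel's activity-time curve.
model = lambda t, p: p[0] * np.exp(-p[1] * t)
jac = lambda t, p: np.stack([np.exp(-p[1] * t),
                             -p[0] * t * np.exp(-p[1] * t)], axis=1)
t = np.linspace(0, 5, 50)
y = model(t, [2.0, 0.7])                 # noiseless synthetic curve
p_fit = levenberg_marquardt(model, jac, [1.0, 1.0], t, y)
```

    Running this loop independently for every voxel, with the fitted parameters written back to the voxel's position, yields the 3D parametric images described above.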

  14. On removing interpolation and resampling artifacts in rigid image registration.

    PubMed

    Aganj, Iman; Yeo, Boon Thye Thomas; Sabuncu, Mert R; Fischl, Bruce

    2013-02-01

    We show that image registration using conventional interpolation and summation approximations of continuous integrals can generally fail because of resampling artifacts. These artifacts negatively affect the accuracy of registration by producing local optima, altering the gradient, shifting the global optimum, and making rigid registration asymmetric. In this paper, after an extensive literature review, we demonstrate the causes of the artifacts by comparing inclusion and avoidance of resampling analytically. We show the sum-of-squared-differences cost function formulated as an integral to be more accurate compared with its traditional sum form in a simple case of image registration. We then discuss aliasing that occurs in rotation, which is due to the fact that an image represented in the Cartesian grid is sampled with different rates in different directions, and propose the use of oscillatory isotropic interpolation kernels, which allow better recovery of true global optima by overcoming this type of aliasing. Through our experiments on brain, fingerprint, and white noise images, we illustrate the superior performance of the integral registration cost function in both the Cartesian and spherical coordinates, and also validate the introduced radial interpolation kernel by demonstrating the improvement in registration.
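
    The paper's central contrast, between SSD evaluated as a discrete sum over samples and SSD formulated as a continuous integral, can be illustrated in 1D; dense linear interpolation with a trapezoid rule is only a crude stand-in for the paper's formulation:

```python
import numpy as np

def ssd_sum(f, g):
    """Conventional SSD: sum of squared differences at the sample points."""
    return float(np.sum((f - g) ** 2))

def ssd_integral(x, f, g, factor=16):
    """SSD formulated as an integral: linearly interpolate both signals on
    a finer grid and integrate the squared difference (trapezoid rule)."""
    xf = np.linspace(x[0], x[-1], len(x) * factor)
    diff2 = (np.interp(xf, x, f) - np.interp(xf, x, g)) ** 2
    return float(np.sum((diff2[1:] + diff2[:-1]) * np.diff(xf)) * 0.5)

x = np.linspace(0, 2 * np.pi, 32)
f = np.sin(x)
g = np.sin(x + 0.1)            # slightly shifted copy of the same signal
cost_sum = ssd_sum(f, g)
cost_int = ssd_integral(x, f, g)
```

    In the paper's analysis it is the integral form, evaluated without resampling onto the fixed grid, that avoids the artificial local optima introduced by the discrete sum.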

  15. On Removing Interpolation and Resampling Artifacts in Rigid Image Registration

    PubMed Central

    Aganj, Iman; Yeo, Boon Thye Thomas; Sabuncu, Mert R.; Fischl, Bruce

    2013-01-01

    We show that image registration using conventional interpolation and summation approximations of continuous integrals can generally fail because of resampling artifacts. These artifacts negatively affect the accuracy of registration by producing local optima, altering the gradient, shifting the global optimum, and making rigid registration asymmetric. In this paper, after an extensive literature review, we demonstrate the causes of the artifacts by comparing inclusion and avoidance of resampling analytically. We show the sum-of-squared-differences cost function formulated as an integral to be more accurate compared with its traditional sum form in a simple case of image registration. We then discuss aliasing that occurs in rotation, which is due to the fact that an image represented in the Cartesian grid is sampled with different rates in different directions, and propose the use of oscillatory isotropic interpolation kernels, which allow better recovery of true global optima by overcoming this type of aliasing. Through our experiments on brain, fingerprint, and white noise images, we illustrate the superior performance of the integral registration cost function in both the Cartesian and spherical coordinates, and also validate the introduced radial interpolation kernel by demonstrating the improvement in registration. PMID:23076044

  16. WE-H-202-04: Advanced Medical Image Registration Techniques

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Christensen, G.

    Deformable image registration has now been commercially available for several years, with solid performance in a number of sites and for several applications including contour and dose mapping. However, more complex applications have arisen, such as assessing response to radiation therapy over time, registering images pre- and post-surgery, and auto-segmentation from atlases. These applications require innovative registration algorithms to achieve accurate alignment. The goal of this session is to highlight emerging registration technology and these new applications. The state of the art in image registration will be presented from an engineering perspective. Translational clinical applications will also be discussed to tie these new registration approaches together with imaging and radiation therapy applications in specific diseases such as cervical and lung cancers. Learning Objectives: To understand developing techniques and algorithms in deformable image registration that are likely to translate into clinical tools in the near future. To understand emerging imaging and radiation therapy clinical applications that require such new registration algorithms. Research supported in part by the National Institutes of Health under award numbers P01CA059827, R01CA166119, and R01CA166703. Disclosures: Phillips Medical systems (Hugo), Roger Koch (Christensen) support, Varian Medical Systems (Brock), licensing agreements from Raysearch (Brock) and Varian (Hugo).; K. Brock, Licensing Agreement - RaySearch Laboratories. Research Funding - Varian Medical Systems; G. Hugo, Research grant from National Institutes of Health, award number R01CA166119.; G. Christensen, Research support from NIH grants CA166119 and CA166703 and a gift from Roger Koch. There are no conflicts of interest.

  17. Potential accuracy of translation estimation between radar and optical images

    NASA Astrophysics Data System (ADS)

    Uss, M.; Vozel, B.; Lukin, V.; Chehdi, K.

    2015-10-01

    This paper investigates the potential accuracy achievable for optical-to-radar image registration by an area-based approach. The analysis is carried out mainly based on the Cramér-Rao Lower Bound (CRLB) on translation estimation accuracy previously proposed by the authors and called CRLBfBm. This bound is now modified to take into account radar image speckle noise properties: spatial correlation and signal-dependency. The newly derived theoretical bound is fed with noise and texture parameters estimated for a co-registered pair of optical Landsat 8 and radar SIR-C images. It is found that the difficulty of optical-to-radar image registration stems more from speckle noise influence than from dissimilarity of the considered kinds of images. At finer scales (and higher speckle noise levels), the probability of finding control fragments (CFs) suitable for registration is low (1% or less), but the overall number of such fragments is high thanks to image size. Conversely, at the coarse scale, where the speckle noise level is reduced, the probability of finding CFs suitable for registration can be as high as 40%, but the overall number of such CFs is lower. Thus, the study confirms and supports an area-based multiresolution approach to optical-to-radar registration, where coarse scales are used for a fast registration "lock" and finer scales for reaching higher registration accuracy. The CRLBfBm is found inaccurate for the main scale due to intensive speckle noise influence. For the other scales, the validity of the CRLBfBm bound is confirmed by calculating the statistical efficiency of an area-based registration method based on the normalized correlation coefficient (NCC) measure, which reaches high values of about 25%.
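
    The degrading effect of signal-dependent speckle on the NCC measure can be illustrated with synthetic patches; the multiplicative gamma speckle model below is a common simplification for illustration only, not the paper's noise model:

```python
import numpy as np

rng = np.random.default_rng(0)

def ncc(a, b):
    """Normalized correlation coefficient between two equal-size patches."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

# A textured "optical" patch and a "radar" rendition of the same scene,
# corrupted by multiplicative (signal-dependent) speckle with unit mean.
optical = rng.random((64, 64)) + np.linspace(0, 1, 64)
speckle = rng.gamma(shape=4.0, scale=0.25, size=optical.shape)
radar = optical * speckle

ncc_clean = ncc(optical, optical)     # perfectly similar patches
ncc_speckled = ncc(optical, radar)    # same scene through speckle
```

    The drop from `ncc_clean` to `ncc_speckled` mirrors the paper's finding that speckle, more than modality dissimilarity, limits which control fragments are suitable for registration.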

  18. Image navigation and registration performance assessment tool set for the GOES-R Advanced Baseline Imager and Geostationary Lightning Mapper

    NASA Astrophysics Data System (ADS)

    De Luccia, Frank J.; Houchin, Scott; Porter, Brian C.; Graybill, Justin; Haas, Evan; Johnson, Patrick D.; Isaacson, Peter J.; Reth, Alan D.

    2016-05-01

    The GOES-R Flight Project has developed an Image Navigation and Registration (INR) Performance Assessment Tool Set (IPATS) for measuring Advanced Baseline Imager (ABI) and Geostationary Lightning Mapper (GLM) INR performance metrics in the post-launch period for performance evaluation and long term monitoring. For ABI, these metrics are the 3-sigma errors in navigation (NAV), channel-to-channel registration (CCR), frame-to-frame registration (FFR), swath-to-swath registration (SSR), and within frame registration (WIFR) for the Level 1B image products. For GLM, the single metric of interest is the 3-sigma error in the navigation of background images (GLM NAV) used by the system to navigate lightning strikes. 3-sigma errors are estimates of the 99.73rd percentile of the errors accumulated over a 24 hour data collection period. IPATS utilizes a modular algorithmic design to allow user selection of data processing sequences optimized for generation of each INR metric. This novel modular approach minimizes duplication of common processing elements, thereby maximizing code efficiency and speed. Fast processing is essential given the large number of sub-image registrations required to generate INR metrics for the many images produced over a 24 hour evaluation period. Another aspect of the IPATS design that vastly reduces execution time is the off-line propagation of Landsat based truth images to the fixed grid coordinates system for each of the three GOES-R satellite locations, operational East and West and initial checkout locations. This paper describes the algorithmic design and implementation of IPATS and provides preliminary test results.
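
    The 3-sigma metric as defined here, the 99.73rd percentile of the errors accumulated over an evaluation period, can be sketched directly; the simulated error distribution and units below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(42)

def three_sigma_metric(errors):
    """'3-sigma' INR metric as described in the abstract: the estimated
    99.73rd percentile of the error magnitudes over the evaluation period."""
    return float(np.percentile(np.abs(errors), 99.73))

# Simulated navigation errors (arbitrary units) accumulated over 24 hours.
errors = rng.normal(loc=0.0, scale=5.0, size=100_000)
metric = three_sigma_metric(errors)
```

    For zero-mean Gaussian errors this estimate converges to 3 times the standard deviation, which is why the 99.73rd percentile is conventionally called the "3-sigma" error even for non-Gaussian distributions.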

  19. Registration of 2D to 3D joint images using phase-based mutual information

    NASA Astrophysics Data System (ADS)

    Dalvi, Rupin; Abugharbieh, Rafeef; Pickering, Mark; Scarvell, Jennie; Smith, Paul

    2007-03-01

    Registration of two dimensional to three dimensional orthopaedic medical image data has important applications particularly in the area of image guided surgery and sports medicine. Fluoroscopy to computer tomography (CT) registration is an important case, wherein digitally reconstructed radiographs derived from the CT data are registered to the fluoroscopy data. Traditional registration metrics such as intensity-based mutual information (MI) typically work well but often suffer from gross misregistration errors when the image to be registered contains a partial view of the anatomy visible in the target image. Phase-based MI provides a robust alternative similarity measure which, in addition to possessing the general robustness and noise immunity that MI provides, also employs local phase information in the registration process which makes it less susceptible to the aforementioned errors. In this paper, we propose using the complex wavelet transform for computing image phase information and incorporating that into a phase-based MI measure for image registration. Tests on a CT volume and 6 fluoroscopy images of the knee are presented. The femur and the tibia in the CT volume were individually registered to the fluoroscopy images using intensity-based MI, gradient-based MI and phase-based MI. Errors in the coordinates of fiducials present in the bone structures were used to assess the accuracy of the different registration schemes. Quantitative results demonstrate that the performance of intensity-based MI was the worst. Gradient-based MI performed slightly better, while phase-based MI results were the best consistently producing the lowest errors.
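
    Intensity-based MI, the baseline the paper improves on, is typically estimated from a joint intensity histogram; a phase-based variant would feed local phase maps, rather than raw intensities, into the same estimator. A minimal sketch (the bin count and test images are arbitrary):

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Mutual information of two images estimated from their joint
    intensity histogram: MI = sum p(x,y) * log(p(x,y) / (p(x) p(y)))."""
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = hist / hist.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of image a
    py = pxy.sum(axis=0, keepdims=True)   # marginal of image b
    nz = pxy > 0                          # avoid log(0) terms
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

rng = np.random.default_rng(1)
img = rng.random((128, 128))
mi_self = mutual_information(img, img)                       # aligned copy
mi_indep = mutual_information(img, rng.random((128, 128)))   # unrelated image
```

    A registration driven by this measure searches for the transformation that maximizes MI between the transformed moving image and the fixed image.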

  20. Image Navigation and Registration (INR) Performance Assessment Tool Set (IPATS) for the GOES-R Advanced Baseline Imager and Geostationary Lightning Mapper

    NASA Technical Reports Server (NTRS)

    DeLuccia, Frank J.; Houchin, Scott; Porter, Brian C.; Graybill, Justin; Haas, Evan; Johnson, Patrick D.; Isaacson, Peter J.; Reth, Alan D.

    2016-01-01

    The GOES-R Flight Project has developed an Image Navigation and Registration (INR) Performance Assessment Tool Set (IPATS) for measuring Advanced Baseline Imager (ABI) and Geostationary Lightning Mapper (GLM) INR performance metrics in the post-launch period for performance evaluation and long term monitoring. For ABI, these metrics are the 3-sigma errors in navigation (NAV), channel-to-channel registration (CCR), frame-to-frame registration (FFR), swath-to-swath registration (SSR), and within frame registration (WIFR) for the Level 1B image products. For GLM, the single metric of interest is the 3-sigma error in the navigation of background images (GLM NAV) used by the system to navigate lightning strikes. 3-sigma errors are estimates of the 99.73rd percentile of the errors accumulated over a 24 hour data collection period. IPATS utilizes a modular algorithmic design to allow user selection of data processing sequences optimized for generation of each INR metric. This novel modular approach minimizes duplication of common processing elements, thereby maximizing code efficiency and speed. Fast processing is essential given the large number of sub-image registrations required to generate INR metrics for the many images produced over a 24 hour evaluation period. Another aspect of the IPATS design that vastly reduces execution time is the off-line propagation of Landsat based truth images to the fixed grid coordinates system for each of the three GOES-R satellite locations, operational East and West and initial checkout locations. This paper describes the algorithmic design and implementation of IPATS and provides preliminary test results.

  1. Image Navigation and Registration Performance Assessment Tool Set for the GOES-R Advanced Baseline Imager and Geostationary Lightning Mapper

    NASA Technical Reports Server (NTRS)

    De Luccia, Frank J.; Houchin, Scott; Porter, Brian C.; Graybill, Justin; Haas, Evan; Johnson, Patrick D.; Isaacson, Peter J.; Reth, Alan D.

    2016-01-01

    The GOES-R Flight Project has developed an Image Navigation and Registration (INR) Performance Assessment Tool Set (IPATS) for measuring Advanced Baseline Imager (ABI) and Geostationary Lightning Mapper (GLM) INR performance metrics in the post-launch period for performance evaluation and long term monitoring. For ABI, these metrics are the 3-sigma errors in navigation (NAV), channel-to-channel registration (CCR), frame-to-frame registration (FFR), swath-to-swath registration (SSR), and within frame registration (WIFR) for the Level 1B image products. For GLM, the single metric of interest is the 3-sigma error in the navigation of background images (GLM NAV) used by the system to navigate lightning strikes. 3-sigma errors are estimates of the 99.73rd percentile of the errors accumulated over a 24-hour data collection period. IPATS utilizes a modular algorithmic design to allow user selection of data processing sequences optimized for generation of each INR metric. This novel modular approach minimizes duplication of common processing elements, thereby maximizing code efficiency and speed. Fast processing is essential given the large number of sub-image registrations required to generate INR metrics for the many images produced over a 24-hour evaluation period. Another aspect of the IPATS design that vastly reduces execution time is the off-line propagation of Landsat based truth images to the fixed grid coordinates system for each of the three GOES-R satellite locations, operational East and West and initial checkout locations. This paper describes the algorithmic design and implementation of IPATS and provides preliminary test results.

  2. [Medical image elastic registration smoothed by unconstrained optimized thin-plate spline].

    PubMed

    Zhang, Yu; Li, Shuxiang; Chen, Wufan; Liu, Zhexing

    2003-12-01

    Elastic registration of medical images is an important subject in medical image processing. Previous work has concentrated on selecting corresponding landmarks manually and then using thin-plate spline interpolation to obtain the elastic transformation. However, landmark extraction is always prone to error, which influences the registration results, and localizing the landmarks manually is also difficult and time-consuming. We used optimization theory to improve the thin-plate spline interpolation and, based on it, an automatic method to extract the landmarks. Combining these two steps, we have proposed an automatic, accurate and robust registration method and have obtained satisfactory registration results.
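
    Standard landmark-driven thin-plate spline (TPS) warping, the baseline this paper smooths via unconstrained optimization, can be sketched with SciPy's RBF interpolator; note that this baseline interpolates the landmarks exactly, whereas the paper relaxes that constraint. The landmark positions below are invented:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Landmark positions in the source image and their (hypothetical)
# corresponding positions in the target image.
src = np.array([[0, 0], [0, 1], [1, 0], [1, 1], [0.5, 0.5]], float)
dst = src.copy()
dst[4] += [0.1, 0.1]          # only the central landmark moves

# Thin-plate spline warp: interpolate each displacement component.
tps = RBFInterpolator(src, dst - src, kernel='thin_plate_spline')

def warp(points):
    """Map source-image points through the TPS transformation."""
    return points + tps(points)

moved = warp(src)                        # landmarks map onto dst exactly
mid = warp(np.array([[0.25, 0.25]]))     # smooth interpolation elsewhere
```

    Exact interpolation is what makes landmark errors propagate into the warp; adding a smoothing (regularization) term, as the paper does, trades landmark fidelity for a better-behaved deformation.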

  3. Self-correcting multi-atlas segmentation

    NASA Astrophysics Data System (ADS)

    Gao, Yi; Wilford, Andrew; Guo, Liang

    2016-03-01

    In multi-atlas segmentation, one typically registers several atlases to the new image, and their respective segmented label images are transformed and fused to form the final segmentation. After each registration, the quality of the registration is reflected by a single global value: the final registration cost. Ideally, if the quality of the registration could be evaluated at each point, independently of the registration process, in a way that also provides a direction in which the deformation could be further improved, the overall segmentation performance could be improved. We propose such a self-correcting multi-atlas segmentation method. The method is applied to hippocampus segmentation from brain images, and a statistically significant improvement is observed.
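
    The label-fusion step common to multi-atlas pipelines can be sketched as a per-voxel majority vote; this is the standard baseline fusion rule, not the paper's self-correcting scheme:

```python
import numpy as np

def majority_vote_fusion(label_maps):
    """Fuse warped atlas segmentations by per-voxel majority vote over
    integer label maps of identical shape."""
    stack = np.stack(label_maps)                  # (n_atlases, ...) ints
    n_labels = int(stack.max()) + 1
    votes = np.stack([(stack == k).sum(axis=0) for k in range(n_labels)])
    return votes.argmax(axis=0)                   # winning label per voxel

# Three (hypothetical) atlas label images registered to the same subject;
# the atlases disagree at one voxel and the vote resolves it.
a1 = np.array([[0, 1], [1, 1]])
a2 = np.array([[0, 1], [0, 1]])
a3 = np.array([[0, 1], [1, 0]])
fused = majority_vote_fusion([a1, a2, a3])
```

    A self-correcting variant would additionally weight or revise each atlas's contribution wherever a pointwise registration-quality estimate flags a poor local alignment.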

  4. An approach to defect inspection for packing presswork with virtual orientation points and threshold template image

    NASA Astrophysics Data System (ADS)

    Hao, Xiangyang; Liu, Songlin; Zhao, Fulai; Jiang, Lixing

    2015-05-01

    The quality of packing presswork is an important factor for industrial products, especially luxury commodities such as cigarettes. To ensure that the packing presswork is qualified, products should be inspected piece by piece and unqualified ones picked out using vision-based inspection, which has advantages such as non-contact measurement, high efficiency and automation. Vision-based inspection of packing presswork mainly consists of the steps of image acquisition, image registration and defect inspection. The registration between the inspected image and the reference image is the foundation and premise of visual inspection. In order to realize rapid, reliable and accurate image registration, a registration method based on virtual orientation points is put forward; the registration precision between the inspected image and the reference image can reach sub-pixel level. Since defects have no fixed position, shape, size or color, three measures are taken to improve the inspection effect. Firstly, the concept of a threshold template image is put forward to resolve the problem of a variable threshold on intensity differences. Secondly, the color difference is calculated by comparing each pixel with the adjacent pixels of its correspondence on the reference image, to avoid false defects resulting from color registration errors. Thirdly, an image pyramid strategy is applied in the inspection algorithm to enhance efficiency. Experiments show that the algorithm is effective for defect inspection and takes 27.4 ms on average to inspect a piece of cigarette packing presswork.
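
    The threshold template idea, a per-pixel allowed deviation learned from several qualified prints instead of one global threshold, can be sketched as follows; the k-sigma rule and floor value are hypothetical choices, not the paper's:

```python
import numpy as np

def build_threshold_template(samples, k=3.0, floor=5.0):
    """Threshold template image: per-pixel allowed deviation learned from
    several qualified prints (k * per-pixel std, with a minimum floor),
    plus the per-pixel mean as the reference image."""
    stack = np.stack(samples).astype(float)
    return np.maximum(k * stack.std(axis=0), floor), stack.mean(axis=0)

def inspect(image, mean_img, thresh_img):
    """Flag pixels whose deviation from the reference exceeds the
    per-pixel threshold; returns a boolean defect mask."""
    return np.abs(image.astype(float) - mean_img) > thresh_img

rng = np.random.default_rng(7)
refs = [100 + rng.normal(0, 1, (32, 32)) for _ in range(10)]   # good prints
thresh, mean_img = build_threshold_template(refs)

test_img = 100 + rng.normal(0, 1, (32, 32))
test_img[10:12, 10:12] += 50          # inject a small defect blob
mask = inspect(test_img, mean_img, thresh)
```

    Because the threshold varies per pixel, regions that legitimately vary between qualified prints get a looser tolerance, while stable regions stay tightly constrained.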

  5. MIND Demons for MR-to-CT Deformable Image Registration In Image-Guided Spine Surgery

    PubMed Central

    Reaungamornrat, S.; De Silva, T.; Uneri, A.; Wolinsky, J.-P.; Khanna, A. J.; Kleinszig, G.; Vogt, S.; Prince, J. L.; Siewerdsen, J. H.

    2016-01-01

    Purpose Localization of target anatomy and critical structures defined in preoperative MR images can be achieved by means of multi-modality deformable registration to intraoperative CT. We propose a symmetric diffeomorphic deformable registration algorithm incorporating a modality independent neighborhood descriptor (MIND) and a robust Huber metric for MR-to-CT registration. Method The method, called MIND Demons, solves for the deformation field between two images by optimizing an energy functional that incorporates both the forward and inverse deformations, smoothness on the velocity fields and the diffeomorphisms, a modality-insensitive similarity function suitable to multi-modality images, and constraints on geodesics in Lagrangian coordinates. Direct optimization (without relying on an exponential map of stationary velocity fields used in conventional diffeomorphic Demons) is carried out using a Gauss-Newton method for fast convergence. Registration performance and sensitivity to registration parameters were analyzed in simulation, in phantom experiments, and clinical studies emulating application in image-guided spine surgery, and results were compared to conventional mutual information (MI) free-form deformation (FFD), local MI (LMI) FFD, and normalized MI (NMI) Demons. Result The method yielded sub-voxel invertibility (0.006 mm) and nonsingular spatial Jacobians with capability to preserve local orientation and topology. It demonstrated improved registration accuracy in comparison to the reference methods, with mean target registration error (TRE) of 1.5 mm compared to 10.9, 2.3, and 4.6 mm for MI FFD, LMI FFD, and NMI Demons methods, respectively. Validation in clinical studies demonstrated realistic deformation with sub-voxel TRE in cases of cervical, thoracic, and lumbar spine. Conclusions A modality-independent deformable registration method has been developed to estimate a viscoelastic diffeomorphic map between preoperative MR and intraoperative CT. 
The method yields registration accuracy suitable to application in image-guided spine surgery across a broad range of anatomical sites and modes of deformation. PMID:27330239
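
    The robust Huber metric used in the similarity term penalizes small residuals quadratically and large residuals linearly, limiting the influence of outlier intensity differences; a minimal sketch (the threshold delta is arbitrary, not the paper's setting):

```python
import numpy as np

def huber(r, delta=1.0):
    """Huber penalty: 0.5*r^2 for |r| <= delta, delta*(|r| - 0.5*delta)
    otherwise, so large residuals grow linearly instead of quadratically."""
    r = np.abs(r)
    return np.where(r <= delta, 0.5 * r ** 2, delta * (r - 0.5 * delta))

residuals = np.array([0.1, 0.5, 2.0, 10.0])
penalties = huber(residuals)          # compare with 0.5*r**2: 50.0 at r=10
```

    In a registration energy this damps the pull of voxels where the two modalities genuinely disagree, which is part of what makes the MR-to-CT problem tractable.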

  6. MIND Demons for MR-to-CT deformable image registration in image-guided spine surgery

    NASA Astrophysics Data System (ADS)

    Reaungamornrat, S.; De Silva, T.; Uneri, A.; Wolinsky, J.-P.; Khanna, A. J.; Kleinszig, G.; Vogt, S.; Prince, J. L.; Siewerdsen, J. H.

    2016-03-01

    Purpose: Localization of target anatomy and critical structures defined in preoperative MR images can be achieved by means of multi-modality deformable registration to intraoperative CT. We propose a symmetric diffeomorphic deformable registration algorithm incorporating a modality independent neighborhood descriptor (MIND) and a robust Huber metric for MR-to-CT registration. Method: The method, called MIND Demons, solves for the deformation field between two images by optimizing an energy functional that incorporates both the forward and inverse deformations, smoothness on the velocity fields and the diffeomorphisms, a modality-insensitive similarity function suitable to multi-modality images, and constraints on geodesics in Lagrangian coordinates. Direct optimization (without relying on an exponential map of stationary velocity fields used in conventional diffeomorphic Demons) is carried out using a Gauss-Newton method for fast convergence. Registration performance and sensitivity to registration parameters were analyzed in simulation, in phantom experiments, and clinical studies emulating application in image-guided spine surgery, and results were compared to conventional mutual information (MI) free-form deformation (FFD), local MI (LMI) FFD, and normalized MI (NMI) Demons. Result: The method yielded sub-voxel invertibility (0.006 mm) and nonsingular spatial Jacobians with capability to preserve local orientation and topology. It demonstrated improved registration accuracy in comparison to the reference methods, with mean target registration error (TRE) of 1.5 mm compared to 10.9, 2.3, and 4.6 mm for MI FFD, LMI FFD, and NMI Demons methods, respectively. Validation in clinical studies demonstrated realistic deformation with sub-voxel TRE in cases of cervical, thoracic, and lumbar spine. 
Conclusions: A modality-independent deformable registration method has been developed to estimate a viscoelastic diffeomorphic map between preoperative MR and intraoperative CT. The method yields registration accuracy suitable to application in image-guided spine surgery across a broad range of anatomical sites and modes of deformation.

  7. Deformable Image Registration for Cone-Beam CT Guided Transoral Robotic Base of Tongue Surgery

    PubMed Central

    Reaungamornrat, S.; Liu, W. P.; Wang, A. S.; Otake, Y.; Nithiananthan, S.; Uneri, A.; Schafer, S.; Tryggestad, E.; Richmon, J.; Sorger, J. M.; Siewerdsen, J. H.; Taylor, R. H.

    2013-01-01

    Transoral robotic surgery (TORS) offers a minimally invasive approach to resection of base of tongue tumors. However, precise localization of the surgical target and adjacent critical structures can be challenged by the highly deformed intraoperative setup. We propose a deformable registration method using intraoperative cone-beam CT (CBCT) to accurately align preoperative CT or MR images with the intraoperative scene. The registration method combines a Gaussian mixture (GM) model followed by a variation of the Demons algorithm. First, following segmentation of the volume of interest (i.e., volume of the tongue extending to the hyoid), a GM model is applied to surface point clouds for rigid initialization (GM rigid) followed by nonrigid deformation (GM nonrigid). Second, the registration is refined using the Demons algorithm applied to distance map transforms of the (GM-registered) preoperative image and intraoperative CBCT. Performance was evaluated in repeat cadaver studies (25 image pairs) in terms of target registration error (TRE), entropy correlation coefficient (ECC), and normalized pointwise mutual information (NPMI). Retraction of the tongue in the TORS operative setup induced gross deformation >30 mm. The mean TRE following the GM rigid, GM nonrigid, and Demons steps was 4.6, 2.1, and 1.7 mm, respectively. The respective ECC was 0.57, 0.70, and 0.73 and NPMI was 0.46, 0.57, and 0.60. Registration accuracy was best across the superior aspect of the tongue and in proximity to the hyoid (by virtue of GM registration of surface points on these structures). The Demons step refined registration primarily in deeper portions of the tongue further from the surface and hyoid bone. Since the method does not use image intensities directly, it is suitable to multi-modality registration of preoperative CT or MR with intraoperative CBCT. 
Extending the 3D image registration to the fusion of image and planning data in stereo-endoscopic video is anticipated to support safer, high-precision base of tongue robotic surgery. PMID:23807549
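
    The distance-map transform on which the Demons refinement operates can be sketched with SciPy's Euclidean distance transform; the signed inside/outside convention below is one common choice and is not necessarily the authors':

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def distance_map(mask):
    """Signed distance map of a binary segmentation: positive outside the
    structure, negative inside. The Demons step then aligns these maps
    rather than the raw (multi-modality) intensities."""
    outside = distance_transform_edt(~mask)   # distance to the structure
    inside = distance_transform_edt(mask)     # distance to the boundary
    return outside - inside

# Hypothetical 2D segmentation: a square "structure" in a 32x32 image.
mask = np.zeros((32, 32), dtype=bool)
mask[8:24, 8:24] = True
d = distance_map(mask)
```

    Because both the preoperative and intraoperative segmentations reduce to comparable distance maps, the subsequent Demons step is intensity-independent, which is what makes the method suitable for CT/MR-to-CBCT registration.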

  8. MIND Demons for MR-to-CT Deformable Image Registration In Image-Guided Spine Surgery.

    PubMed

    Reaungamornrat, S; De Silva, T; Uneri, A; Wolinsky, J-P; Khanna, A J; Kleinszig, G; Vogt, S; Prince, J L; Siewerdsen, J H

    2016-02-27

    Localization of target anatomy and critical structures defined in preoperative MR images can be achieved by means of multi-modality deformable registration to intraoperative CT. We propose a symmetric diffeomorphic deformable registration algorithm incorporating a modality independent neighborhood descriptor (MIND) and a robust Huber metric for MR-to-CT registration. The method, called MIND Demons, solves for the deformation field between two images by optimizing an energy functional that incorporates both the forward and inverse deformations, smoothness on the velocity fields and the diffeomorphisms, a modality-insensitive similarity function suitable to multi-modality images, and constraints on geodesics in Lagrangian coordinates. Direct optimization (without relying on an exponential map of stationary velocity fields used in conventional diffeomorphic Demons) is carried out using a Gauss-Newton method for fast convergence. Registration performance and sensitivity to registration parameters were analyzed in simulation, in phantom experiments, and clinical studies emulating application in image-guided spine surgery, and results were compared to conventional mutual information (MI) free-form deformation (FFD), local MI (LMI) FFD, and normalized MI (NMI) Demons. The method yielded sub-voxel invertibility (0.006 mm) and nonsingular spatial Jacobians with capability to preserve local orientation and topology. It demonstrated improved registration accuracy in comparison to the reference methods, with mean target registration error (TRE) of 1.5 mm compared to 10.9, 2.3, and 4.6 mm for MI FFD, LMI FFD, and NMI Demons methods, respectively. Validation in clinical studies demonstrated realistic deformation with sub-voxel TRE in cases of cervical, thoracic, and lumbar spine. A modality-independent deformable registration method has been developed to estimate a viscoelastic diffeomorphic map between preoperative MR and intraoperative CT. 
The method yields registration accuracy suitable to application in image-guided spine surgery across a broad range of anatomical sites and modes of deformation.

  9. Deformable image registration for cone-beam CT guided transoral robotic base-of-tongue surgery

    NASA Astrophysics Data System (ADS)

    Reaungamornrat, S.; Liu, W. P.; Wang, A. S.; Otake, Y.; Nithiananthan, S.; Uneri, A.; Schafer, S.; Tryggestad, E.; Richmon, J.; Sorger, J. M.; Siewerdsen, J. H.; Taylor, R. H.

    2013-07-01

    Transoral robotic surgery (TORS) offers a minimally invasive approach to resection of base-of-tongue tumors. However, precise localization of the surgical target and adjacent critical structures can be challenged by the highly deformed intraoperative setup. We propose a deformable registration method using intraoperative cone-beam computed tomography (CBCT) to accurately align preoperative CT or MR images with the intraoperative scene. The registration method combines a Gaussian mixture (GM) model with a variant of the Demons algorithm. First, following segmentation of the volume of interest (i.e. the volume of the tongue extending to the hyoid), a GM model is applied to surface point clouds for rigid initialization (GM rigid) followed by nonrigid deformation (GM nonrigid). Second, the registration is refined using the Demons algorithm applied to distance map transforms of the (GM-registered) preoperative image and the intraoperative CBCT. Performance was evaluated in repeat cadaver studies (25 image pairs) in terms of target registration error (TRE), entropy correlation coefficient (ECC) and normalized pointwise mutual information (NPMI). Retraction of the tongue in the TORS operative setup induced gross deformation >30 mm. The mean TRE following the GM rigid, GM nonrigid and Demons steps was 4.6, 2.1 and 1.7 mm, respectively. The respective ECC was 0.57, 0.70 and 0.73, and NPMI was 0.46, 0.57 and 0.60. Registration accuracy was best across the superior aspect of the tongue and in proximity to the hyoid (by virtue of GM registration of surface points on these structures). The Demons step refined the registration primarily in deeper portions of the tongue, further from the surface and the hyoid bone. Since the method does not use image intensities directly, it is suitable for multi-modality registration of preoperative CT or MR with intraoperative CBCT. Extending the 3D image registration to the fusion of image and planning data in stereo-endoscopic video is anticipated to support safer, high-precision base-of-tongue robotic surgery.
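
    The intensity-independent matching idea, Demons applied to distance-map transforms, can be illustrated with a much-simplified 1D sketch (hypothetical helper names; a brute-force translation search stands in for the Demons refinement):

```python
import numpy as np

def distance_map(mask):
    """Unsigned distance (in samples) from each position to the nearest surface point."""
    idx = np.flatnonzero(mask)
    pos = np.arange(mask.size)
    return np.abs(pos[:, None] - idx[None, :]).min(axis=1)

def register_shift(fixed_mask, moving_mask, search=10):
    """Integer shift minimizing SSD between the two distance maps:
    the cost never touches image intensities, only segmented geometry."""
    d_fixed = distance_map(fixed_mask)
    best, best_cost = 0, np.inf
    for s in range(-search, search + 1):
        cost = float(np.sum((d_fixed - distance_map(np.roll(moving_mask, s))) ** 2))
        if cost < best_cost:
            best, best_cost = s, cost
    return best

fixed = np.zeros(64, dtype=int)
fixed[[20, 40]] = 1                      # two "surface" landmarks
moving = np.roll(fixed, -5)              # same surface, displaced by 5 samples
recovered = register_shift(fixed, moving)
```

    Because only segmented surfaces enter the cost, the same sketch applies unchanged when the two masks come from different modalities (e.g. CT or MR versus CBCT).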

  10. Use of Multi-Resolution Wavelet Feature Pyramids for Automatic Registration of Multi-Sensor Imagery

    NASA Technical Reports Server (NTRS)

    Zavorin, Ilya; LeMoigne, Jacqueline

    2003-01-01

    The problem of image registration, or alignment of two or more images representing the same scene or object, has to be addressed in various disciplines that employ digital imaging. In the area of remote sensing, just like in medical imaging or computer vision, it is necessary to design robust, fast and widely applicable algorithms that would allow automatic registration of images generated by various imaging platforms at the same or different times, and that would provide sub-pixel accuracy. One of the main issues that needs to be addressed when developing a registration algorithm is what type of information should be extracted from the images being registered, to be used in the search for the geometric transformation that best aligns them. The main objective of this paper is to evaluate several wavelet pyramids that may be used both for invariant feature extraction and for representing images at multiple spatial resolutions to accelerate registration. We find that the band-pass wavelets obtained from the Steerable Pyramid due to Simoncelli perform better than two types of low-pass pyramids when the images being registered have a relatively small amount of nonlinear radiometric variation between them. Based on these findings, we propose a modification of a gradient-based registration algorithm that has recently been developed for medical data. We test the modified algorithm on several sets of real and synthetic satellite imagery.
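
    The coarse-to-fine use of a multi-resolution pyramid can be sketched as follows; 2x2 block averaging stands in for a wavelet low-pass band, and the helper names are illustrative, not the paper's implementation:

```python
import numpy as np

def downsample(img):
    """2x decimation by 2x2 block averaging (a stand-in for a wavelet low-pass band)."""
    h, w = img.shape
    return img[:h - h % 2, :w - w % 2].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def ssd_shift(fixed, moving, center, radius):
    """Brute-force integer shift minimizing SSD inside a small search window."""
    best, best_cost = center, np.inf
    for dy in range(center[0] - radius, center[0] + radius + 1):
        for dx in range(center[1] - radius, center[1] + radius + 1):
            cost = float(np.sum((fixed - np.roll(moving, (dy, dx), axis=(0, 1))) ** 2))
            if cost < best_cost:
                best, best_cost = (dy, dx), cost
    return best

def coarse_to_fine(fixed, moving, levels=3, radius=2):
    """Estimate the shift at the coarsest level, then refine it level by level."""
    pyramid = [(fixed, moving)]
    for _ in range(levels - 1):
        f, m = pyramid[-1]
        pyramid.append((downsample(f), downsample(m)))
    shift = (0, 0)
    for f, m in reversed(pyramid):                  # coarsest level first
        shift = ssd_shift(f, m, (2 * shift[0], 2 * shift[1]), radius)
    return shift

rng = np.random.default_rng(0)
fixed = rng.random((64, 64))
moving = np.roll(fixed, (-4, 8), axis=(0, 1))       # undone by a (4, -8) shift
```

    The search window stays small at every level, which is where the acceleration comes from: the full-resolution search only refines an already coarse-correct estimate.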

  11. Group-wise feature-based registration of CT and ultrasound images of spine

    NASA Astrophysics Data System (ADS)

    Rasoulian, Abtin; Mousavi, Parvin; Hedjazi Moghari, Mehdi; Foroughi, Pezhman; Abolmaesumi, Purang

    2010-02-01

    Registration of pre-operative CT and freehand intra-operative ultrasound of the lumbar spine could aid surgeons in spinal needle injection, a common procedure for pain management. Patients are in a supine position during the CT scan, and in a prone or sitting position during the intervention. This leads to a difference in the spinal curvature between the two imaging modalities, which means a single rigid registration cannot be used for all of the lumbar vertebrae. In this work, a method for group-wise registration of pre-operative CT and intra-operative freehand 2-D ultrasound images of the lumbar spine is presented. The approach utilizes a point-based registration technique based on the unscented Kalman filter, taking as input segmented vertebrae surfaces in both CT and ultrasound data. Ultrasound images are automatically segmented using a dynamic programming approach, while the CT images are semi-automatically segmented using thresholding. Since the curvature of the spine differs between the pre-operative and the intra-operative data, the registration approach is designed to simultaneously align individual groups of points segmented from each vertebra in the two imaging modalities. A biomechanical model is used to constrain the vertebrae transformation parameters during the registration and to ensure convergence. The mean target registration error achieved for individual vertebrae on five spine phantoms generated from CT data of patients is 2.47 mm, with a standard deviation of 1.14 mm.
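
    A minimal stand-in for aligning one vertebra's point group is the closed-form Kabsch/SVD rigid fit sketched below; the paper's unscented Kalman filter and the biomechanical coupling between vertebrae are not reproduced here:

```python
import numpy as np

def rigid_fit(src, dst):
    """Least-squares rigid transform (R, t) mapping src onto dst (Kabsch/SVD)."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection solution.
    D = np.diag([1.0, 1.0, float(np.sign(np.linalg.det(Vt.T @ U.T)))])
    R = Vt.T @ D @ U.T
    t = c_dst - R @ c_src
    return R, t

rng = np.random.default_rng(1)
pts = rng.random((30, 3))                           # surface points of one "vertebra"
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([2.0, -1.0, 0.5])
moved = pts @ R_true.T + t_true
R_est, t_est = rigid_fit(pts, moved)
```

    In a group-wise setting, one such transform would be estimated per vertebra, with an additional model constraining how neighbouring transforms may differ.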

  12. A new unified framework for the early detection of the progression to diabetic retinopathy from fundus images.

    PubMed

    Leontidis, Georgios

    2017-11-01

    The human retina is a diverse and important tissue, studied extensively for various retinal and other diseases. Diabetic retinopathy (DR), a leading cause of blindness, is one of them. This work proposes a novel and complete framework for the accurate and robust extraction and analysis of a series of retinal vascular geometric features. It focuses on studying the registered bifurcations in successive years of progression from diabetes (no DR) to DR, in order to identify the vascular alterations. Retinal fundus images are utilised, and multiple experimental designs are employed. The framework includes various steps, such as image registration and segmentation, extraction of features, statistical analysis and classification models. Linear mixed models are utilised for making the statistical inferences, alongside elastic-net logistic regression, the Boruta algorithm, and regularised random forests for the feature selection and classification phases, in order to evaluate the discriminative potential of the investigated features and also build classification models. A number of geometric features, such as the central retinal artery and vein equivalents, are found to differ significantly across the experiments and also to have good discriminative potential. The classification systems yield promising results, with area under the curve values ranging from 0.821 to 0.968 across the four different investigated combinations. Copyright © 2017 Elsevier Ltd. All rights reserved.
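
    The reported area-under-the-curve figures can be computed for any scored test set with the rank-based (Mann-Whitney) estimator; a minimal sketch, independent of the specific classifiers used in the paper:

```python
def auc(scores_pos, scores_neg):
    """Area under the ROC curve via the Mann-Whitney U statistic: the
    probability that a random positive outscores a random negative
    (ties count one half)."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# Hypothetical classifier scores for diseased (positive) and healthy (negative) eyes.
example = auc([0.9, 0.8, 0.4], [0.7, 0.3, 0.2])
```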

  13. Riemannian Metric Optimization on Surfaces (RMOS) for Intrinsic Brain Mapping in the Laplace-Beltrami Embedding Space

    PubMed Central

    Gahm, Jin Kyu; Shi, Yonggang

    2018-01-01

    Surface mapping methods play an important role in various brain imaging studies from tracking the maturation of adolescent brains to mapping gray matter atrophy patterns in Alzheimer’s disease. Popular surface mapping approaches based on spherical registration, however, have inherent numerical limitations when severe metric distortions are present during the spherical parameterization step. In this paper, we propose a novel computational framework for intrinsic surface mapping in the Laplace-Beltrami (LB) embedding space based on Riemannian metric optimization on surfaces (RMOS). Given a diffeomorphism between two surfaces, an isometry can be defined using the pullback metric, which in turn results in identical LB embeddings from the two surfaces. The proposed RMOS approach builds upon this mathematical foundation and achieves general feature-driven surface mapping in the LB embedding space by iteratively optimizing the Riemannian metric defined on the edges of triangular meshes. At the core of our framework is an optimization engine that converts an energy function for surface mapping into a distance measure in the LB embedding space, which can be effectively optimized using gradients of the LB eigen-system with respect to the Riemannian metrics. In the experimental results, we compare the RMOS algorithm with spherical registration using large-scale brain imaging data, and show that RMOS achieves superior performance in the prediction of hippocampal subfields and cortical gyral labels, and the holistic mapping of striatal surfaces for the construction of a striatal connectivity atlas from the substantia nigra. PMID:29574399
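
    A toy analogue of the LB embedding is the spectral embedding given by a mesh's graph Laplacian, whose eigenvectors above the constant mode supply intrinsic coordinates. The sketch below uses a 6-vertex cycle and an unweighted Laplacian, not the cotangent LB operator one would use on real surfaces:

```python
import numpy as np

def graph_laplacian(edges, n):
    """Unweighted graph Laplacian L = D - A, a crude discrete stand-in
    for the Laplace-Beltrami operator of a triangulated surface."""
    L = np.zeros((n, n))
    for i, j in edges:
        L[i, j] -= 1.0
        L[j, i] -= 1.0
        L[i, i] += 1.0
        L[j, j] += 1.0
    return L

def spectral_embedding(edges, n, k=2):
    """Embed vertices using the k eigenvectors above the constant mode."""
    vals, vecs = np.linalg.eigh(graph_laplacian(edges, n))
    return vals, vecs[:, 1:1 + k]

edges = [(i, (i + 1) % 6) for i in range(6)]        # a 6-cycle as a toy "mesh"
vals, emb = spectral_embedding(edges, 6)
```

    Matching two shapes in such an embedding space, rather than on a sphere, is the intuition behind intrinsic mapping; RMOS additionally optimizes edge weights (the metric) so the two embeddings agree.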

  14. Onboard Image Registration from Invariant Features

    NASA Technical Reports Server (NTRS)

    Wang, Yi; Ng, Justin; Garay, Michael J.; Burl, Michael C.

    2008-01-01

    This paper describes a feature-based image registration technique that is potentially well-suited for onboard deployment. The overall goal is to provide a fast, robust method for dynamically combining observations from multiple platforms into sensor webs that respond quickly to short-lived events and provide rich observations of objects that evolve in space and time. The approach, which has enjoyed considerable success in mainstream computer vision applications, uses invariant SIFT descriptors extracted at image interest points together with the RANSAC algorithm to robustly estimate transformation parameters that relate one image to another. Experimental results for two satellite image registration tasks are presented: (1) automatic registration of images from the MODIS instrument on Terra to the MODIS instrument on Aqua and (2) automatic stabilization of a multi-day sequence of GOES-West images collected during the October 2007 Southern California wildfires.
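
    The RANSAC estimation step can be sketched for the simplest motion model, a pure translation between matched keypoints; names are illustrative, and real pipelines use SIFT matches and richer (affine or projective) models:

```python
import numpy as np

def ransac_translation(src, dst, iters=200, tol=0.1, seed=0):
    """RANSAC for a pure-translation model between matched 2D keypoints.
    A single correspondence proposes the model; the consensus set selects it."""
    rng = np.random.default_rng(seed)
    best_t, best_inliers = None, -1
    for _ in range(iters):
        i = rng.integers(len(src))
        t = dst[i] - src[i]                         # hypothesis from one match
        err = np.linalg.norm(src + t - dst, axis=1)
        inliers = int(np.sum(err < tol))
        if inliers > best_inliers:
            best_t, best_inliers = t, inliers
    return best_t, best_inliers

rng = np.random.default_rng(2)
src = rng.random((40, 2)) * 100
dst = src + np.array([7.0, -3.0])                   # true translation
dst[:10] = rng.random((10, 2)) * 100                # 25% gross mismatches
t_est, n_in = ransac_translation(src, dst)
```

    The point of the consensus step is visible in the test data: a quarter of the matches are garbage, yet any hypothesis drawn from a good match collects the full inlier set.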

  15. Simulation-Based Joint Estimation of Body Deformation and Elasticity Parameters for Medical Image Analysis

    PubMed Central

    Foskey, Mark; Niethammer, Marc; Krajcevski, Pavel; Lin, Ming C.

    2014-01-01

    Estimation of tissue stiffness is an important means of noninvasive cancer detection. Existing elasticity reconstruction methods usually depend on a dense displacement field (inferred from ultrasound or MR images) and known external forces. Many imaging modalities, however, cannot provide details within an organ and therefore cannot provide such a displacement field. Furthermore, force exertion and measurement can be difficult for some internal organs, making boundary forces another missing parameter. We propose a general method for estimating elasticity and boundary forces automatically using an iterative optimization framework, given the desired (target) output surface. During the optimization, the input model is deformed by the simulator, and an objective function based on the distance between the deformed surface and the target surface is minimized numerically. The optimization framework does not depend on a particular simulation method and is therefore suitable for different physical models. We show a positive correlation between clinical prostate cancer stage (a clinical measure of severity) and the recovered elasticity of the organ. Since the surface correspondence is established, our method also provides a non-rigid image registration, where the quality of the deformation fields is guaranteed, as they are computed using a physics-based simulation. PMID:22893381
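
    The simulate-compare-update loop can be caricatured with a one-parameter toy forward model: a linear spring in place of the FE simulator, and a grid search in place of the paper's numerical optimizer (all names and values hypothetical):

```python
import numpy as np

def simulate_surface(k, force=2.0, rest=1.0):
    """Toy forward model standing in for the physics simulator:
    end position of a linear spring under a boundary force, rest + force / k."""
    return rest + force / k

def estimate_stiffness(target_surface, k_grid):
    """Pick the stiffness whose simulated surface best matches the target
    surface, by minimizing the squared surface distance over a grid."""
    costs = [(simulate_surface(k) - target_surface) ** 2 for k in k_grid]
    return float(k_grid[int(np.argmin(costs))])

k_true = 4.0
target = simulate_surface(k_true)                   # the "observed" deformed surface
k_est = estimate_stiffness(target, np.linspace(0.5, 10.0, 96))
```

    The key design point carries over from the paper: only the surface enters the objective, so no interior displacement field or measured force is needed.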

  16. A fast and mobile system for registration of low-altitude visual and thermal aerial images using multiple small-scale UAVs

    NASA Astrophysics Data System (ADS)

    Yahyanejad, Saeed; Rinner, Bernhard

    2015-06-01

    The use of multiple small-scale UAVs to support first responders in disaster management has become popular because of their speed and low deployment costs. We exploit such UAVs to perform real-time monitoring of target areas by fusing individual images captured from heterogeneous aerial sensors. Many approaches have already been presented to register images from homogeneous sensors. These methods have demonstrated robustness against scale, rotation and illumination variations and can also cope with limited overlap among individual images. In this paper we focus on thermal and visual image registration and propose different methods to improve the quality of interspectral registration for the purpose of real-time monitoring and mobile mapping. Images captured by low-altitude UAVs represent a very challenging scenario for interspectral registration due to the strong variations in overlap, scale, rotation, point of view and structure of such scenes. Furthermore, these small-scale UAVs have limited processing and communication power. The contributions of this paper include (i) the introduction of a feature descriptor for robustly identifying corresponding regions of images in different spectra, (ii) the registration of image mosaics, and (iii) the registration of depth maps. We evaluated the first method using a test data set consisting of 84 image pairs. In all instances our approach combined with SIFT or SURF feature-based registration was superior to the standard versions. Although we focus mainly on aerial imagery, our evaluation shows that the presented approach would also be beneficial in other scenarios such as surveillance and human detection. Furthermore, we demonstrated the advantages of the other two methods in the case of multiple image pairs.
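
    One common trick for descriptors that must survive a change of spectrum is to fold gradient orientations modulo pi, so that the contrast reversals typical of thermal-versus-visual pairs leave the descriptor unchanged. A hedged sketch of that idea (not the paper's descriptor):

```python
import numpy as np

def orientation_histogram(patch, bins=8):
    """Magnitude-weighted histogram of gradient orientations taken modulo pi.
    A polarity inversion negates every gradient, which flips each orientation
    by pi, so the folded histogram is unchanged."""
    gy, gx = np.gradient(patch.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)     # fold opposite directions together
    hist, _ = np.histogram(ang, bins=bins, range=(0.0, np.pi), weights=mag)
    total = hist.sum()
    return hist / total if total > 0 else hist

rng = np.random.default_rng(3)
patch = rng.random((16, 16))
d_visual = orientation_histogram(patch)
d_thermal = orientation_histogram(1.0 - patch)  # simulated polarity inversion
```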

  17. Surface-based prostate registration with biomechanical regularization

    NASA Astrophysics Data System (ADS)

    van de Ven, Wendy J. M.; Hu, Yipeng; Barentsz, Jelle O.; Karssemeijer, Nico; Barratt, Dean; Huisman, Henkjan J.

    2013-03-01

    Adding MR-derived information to standard transrectal ultrasound (TRUS) images for guiding prostate biopsy is of substantial clinical interest. A tumor visible on MR images can be projected onto ultrasound by using MR-US registration. A common approach is to use surface-based registration. We hypothesize that biomechanical modeling will better control deformation inside the prostate than a regular surface-based registration method. We developed a novel method by extending a surface-based registration with finite element (FE) simulation to better predict internal deformation of the prostate. For each of six patients, a tetrahedral mesh was constructed from the manual prostate segmentation. Next, the internal prostate deformation was simulated using the derived radial surface displacement as a boundary condition. The deformation field within the gland was calculated using the predicted FE node displacements and thin-plate spline interpolation. We tested our method on MR-guided MR biopsy imaging data, as landmarks can easily be identified on MR images. For evaluation of the registration accuracy we used 45 anatomical landmarks located in all regions of the prostate. Our results show that the median target registration error of a surface-based registration with biomechanical regularization is 1.88 mm, which is significantly different from the 2.61 mm obtained without biomechanical regularization. We conclude that biomechanical FE modeling has the potential to improve the accuracy of multimodal prostate registration compared with regular surface-based registration.
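
    The thin-plate spline interpolation used to densify the FE node displacements can be sketched in 2D for a single displacement component (helper names and control points are illustrative):

```python
import numpy as np

def tps_fit(ctrl, values):
    """Fit a 2D thin-plate spline f with f(ctrl[i]) = values[i].
    Kernel U(r) = r^2 log r (with U(0) = 0) plus an affine part [1, x, y]."""
    n = len(ctrl)
    r = np.linalg.norm(ctrl[:, None, :] - ctrl[None, :, :], axis=2)
    K = np.where(r > 0, r**2 * np.log(r + (r == 0)), 0.0)
    P = np.hstack([np.ones((n, 1)), ctrl])
    A = np.zeros((n + 3, n + 3))
    A[:n, :n], A[:n, n:], A[n:, :n] = K, P, P.T
    b = np.concatenate([values, np.zeros(3)])
    return np.linalg.solve(A, b)

def tps_eval(coef, ctrl, pts):
    n = len(ctrl)
    r = np.linalg.norm(pts[:, None, :] - ctrl[None, :, :], axis=2)
    U = np.where(r > 0, r**2 * np.log(r + (r == 0)), 0.0)
    return U @ coef[:n] + coef[n] + pts @ coef[n + 1:]

# Hypothetical FE "node" positions and one component of their displacements.
ctrl = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.5, 0.2]])
disp = np.array([0.0, 0.1, -0.1, 0.05, 0.2])
coef = tps_fit(ctrl, disp)
interp = tps_eval(coef, ctrl, ctrl)
```

    Evaluating `tps_eval` on a dense grid of voxel centers yields a smooth field that interpolates the simulated node displacements exactly.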

  18. A Parallel Nonrigid Registration Algorithm Based on B-Spline for Medical Images

    PubMed Central

    Wang, Yangping; Wang, Song

    2016-01-01

    The nonrigid registration algorithm based on B-spline Free-Form Deformation (FFD) plays a key role and is widely applied in medical image processing due to its flexibility and robustness. However, it requires a tremendous amount of computing time to obtain more accurate registration results, especially for a large amount of medical image data. To address this issue, a parallel nonrigid registration algorithm based on B-splines is proposed in this paper. First, the Logarithm Squared Difference (LSD) is used as the similarity metric in the B-spline registration algorithm to improve registration precision. After that, we create a parallel computing strategy and lookup tables (LUTs) to reduce the complexity of the B-spline registration algorithm. As a result, the computing time of the three most time-consuming steps (B-spline interpolation, LSD computation, and the analytic gradient computation of the LSD) is efficiently reduced; the B-spline registration algorithm employs the Nonlinear Conjugate Gradient (NCG) optimization method. Experimental results on registration quality and execution efficiency for a large amount of medical image data show that our algorithm achieves better registration accuracy, in terms of the differences between the best deformation fields and ground truth, and a speedup of 17 times over the single-threaded CPU implementation due to the powerful parallel computing ability of the Graphics Processing Unit (GPU). PMID:28053653
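
    The lookup-table idea rests on the four cubic B-spline basis functions, which depend only on the fractional offset of a voxel within its control-point cell and can therefore be tabulated once; a sketch (table size is an arbitrary choice here):

```python
import numpy as np

def bspline_weights(u):
    """The four uniform cubic B-spline basis values for fractional offset u in [0, 1)."""
    return np.array([
        (1 - u) ** 3 / 6.0,
        (3 * u**3 - 6 * u**2 + 4) / 6.0,
        (-3 * u**3 + 3 * u**2 + 3 * u + 1) / 6.0,
        u**3 / 6.0,
    ])

# Precomputed LUT: quantize the fractional offset, so the inner registration
# loop indexes the table instead of re-evaluating the cubics per voxel.
LUT_SIZE = 256
LUT = np.stack([bspline_weights(i / LUT_SIZE) for i in range(LUT_SIZE)])
```

    The partition-of-unity property (each row sums to one) is what makes the interpolated deformation reproduce constant displacements exactly; the test below checks it for every table entry.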

  19. Homographic Patch Feature Transform: A Robustness Registration for Gastroscopic Surgery.

    PubMed

    Hu, Weiling; Zhang, Xu; Wang, Bin; Liu, Jiquan; Duan, Huilong; Dai, Ning; Si, Jianmin

    2016-01-01

    Image registration is a key component of computer assistance in image-guided surgery, and it is a challenging topic in endoscopic environments. In this study, we present a method for image registration named Homographic Patch Feature Transform (HPFT) to match gastroscopic images. HPFT can be used for tracking lesions and for augmented reality applications during gastroscopy. Furthermore, an overall evaluation scheme is proposed to validate the precision, robustness and uniformity of the registration results, which provides a standard for rejecting false matching pairs from the correspondence results. Finally, HPFT is applied to in vivo gastroscopic data. The experimental results show that HPFT has stable performance in gastroscopic applications.
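
    Homography estimation from matched patches is classically done with the Direct Linear Transform; a minimal sketch of that standard building block (not necessarily the HPFT pipeline itself):

```python
import numpy as np

def homography_dlt(src, dst):
    """Direct Linear Transform: the 3x3 homography H with dst ~ H @ src
    (homogeneous coordinates), from >= 4 point correspondences, via the
    SVD null space of the stacked constraint rows."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.array(rows, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

def apply_h(H, pts):
    q = np.hstack([pts, np.ones((len(pts), 1))]) @ H.T
    return q[:, :2] / q[:, 2:3]

H_true = np.array([[1.1, 0.02, 3.0],
                   [-0.01, 0.95, -2.0],
                   [0.0005, 0.0002, 1.0]])
src = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0], [4.0, 7.0]])
dst = apply_h(H_true, src)
H_est = homography_dlt(src, dst)
```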

  20. Feature-based US to CT registration of the aortic root

    NASA Astrophysics Data System (ADS)

    Lang, Pencilla; Chen, Elvis C. S.; Guiraudon, Gerard M.; Jones, Doug L.; Bainbridge, Daniel; Chu, Michael W.; Drangova, Maria; Hata, Noby; Jain, Ameet; Peters, Terry M.

    2011-03-01

    A feature-based registration was developed to align biplane and tracked ultrasound images of the aortic root with a preoperative CT volume. In transcatheter aortic valve replacement, a prosthetic valve is inserted into the aortic annulus via a catheter. Poor anatomical visualization of the aortic root region can result in incorrect positioning, leading to significant morbidity and mortality. Registration of pre-operative CT to transesophageal ultrasound and fluoroscopy images is a major step towards providing augmented image guidance for this procedure. The proposed registration approach uses an iterative closest point algorithm to register a surface mesh generated from CT to 3D US points reconstructed from a single biplane US acquisition, or from multiple tracked US images. The use of a single simultaneous-acquisition biplane image eliminates reconstruction error introduced by cardiac gating and TEE probe tracking, creating potential for real-time intra-operative registration. A simple initialization procedure is used to minimize changes to operating room workflow. The algorithm is tested on images acquired from excised porcine hearts. Results demonstrate a clinically acceptable accuracy of 2.6 mm and 5 mm for tracked US to CT and biplane US to CT registration, respectively.
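
    A bare-bones iterative closest point loop, here in 2D with brute-force nearest neighbours, illustrates the alignment step; the paper's 3D meshes and initialization procedure are omitted, and the toy point set is chosen so convergence is immediate:

```python
import numpy as np

def rigid_fit(src, dst):
    """Closed-form least-squares rigid transform (2D Kabsch/SVD)."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    U, _, Vt = np.linalg.svd((src - cs).T @ (dst - cd))
    D = np.diag([1.0, float(np.sign(np.linalg.det(Vt.T @ U.T)))])
    R = Vt.T @ D @ U.T
    return R, cd - R @ cs

def icp(src, dst, iters=20):
    """Minimal ICP: brute-force closest points, closed-form rigid update, repeat."""
    cur = src.copy()
    for _ in range(iters):
        nn = dst[np.argmin(np.linalg.norm(cur[:, None] - dst[None], axis=2), axis=1)]
        R, t = rigid_fit(cur, nn)
        cur = cur @ R.T + t
    return cur

xs = np.arange(5) * 2.0
model = np.array([[x, y] for x in xs for y in xs])      # well-separated "surface" points
theta = 0.05
R = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
c = model.mean(axis=0)
scene = (model - c) @ R.T + c + np.array([0.2, -0.1])   # small rigid perturbation
aligned = icp(model, scene)
err = float(np.linalg.norm(aligned - scene, axis=1).mean())
```

    ICP of this form only converges from a good starting pose, which is exactly why the paper pairs it with a simple initialization procedure.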

  1. Guidelines, Criteria and Regulations for the Registration of Units and Qualifications for National Certificates and National Diplomas. Quality Assurance in Education and Training.

    ERIC Educational Resources Information Center

    New Zealand Qualifications Authority, Wellington.

    This booklet contains guidelines for the registration of units and qualifications in New Zealand's National Qualifications Framework, a system of education and employment qualifications. An introduction provides an overview of registration, including endorsement, evaluation, and reregistration. Section 2 focuses on registration of unit standards.…

  2. Deformable 3D-2D registration for CT and its application to low dose tomographic fluoroscopy

    NASA Astrophysics Data System (ADS)

    Flach, Barbara; Brehm, Marcus; Sawall, Stefan; Kachelrieß, Marc

    2014-12-01

    Many applications in medical imaging include image registration for matching of images from the same or different modalities. In the case of full data sampling, the reconstructed images are usually of such good quality that standard deformable volume-to-volume (3D-3D) registration approaches can be applied. But research in temporally correlated image reconstruction and dose reduction increases the number of cases where rawdata are available from only a few projection angles. Here, deteriorated image quality leads to unacceptable deformable volume-to-volume registration results. Therefore, a registration approach is required that is robust against a decreasing number of projections defining the target position. We propose a deformable volume-to-rawdata (3D-2D) registration method that aims at finding a displacement vector field maximizing the alignment of a CT volume and the acquired rawdata based on the sum of squared differences in the rawdata domain. The registration is constrained by a regularization term in accordance with a fluid-based diffusion. Both cost function components, the rawdata fidelity and the regularization term, are optimized in an alternating manner. The matching criterion is optimized by conjugate gradient descent for nonlinear functions, while the regularization is realized by convolution of the vector fields with Gaussian kernels. We validate the proposed method and compare it to the demons algorithm, a well-known 3D-3D registration method. The comparison is done for a range of 4-60 target projections using datasets from low dose tomographic fluoroscopy as an application example. The results show a high correlation with the ground truth target position without introducing artifacts, even in the case of very few projections. In particular, the matching in the rawdata domain is improved compared to the 3D-3D registration for the investigated range. The proposed volume-to-rawdata registration increases robustness to sparse rawdata and provides more stable results than volume-to-volume approaches. By applying the proposed registration approach to low dose tomographic fluoroscopy, it is possible to improve the temporal resolution and thus to increase the robustness of the technique.
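
    Matching in the rawdata domain rather than the image domain can be sketched with parallel projections at two angles (here simply row and column sums) and a brute-force shift search standing in for the conjugate-gradient optimization of a dense displacement field:

```python
import numpy as np

def project(img):
    """Toy parallel-beam 'rawdata': projections at 0 and 90 degrees
    (column sums and row sums of the volume)."""
    return img.sum(axis=0), img.sum(axis=1)

def register_in_rawdata(volume, raw0, raw90, search=8):
    """Find the integer shift whose forward projections best match the
    acquired rawdata: SSD is evaluated in the projection domain, so no
    image has to be reconstructed from the sparse data."""
    best, best_cost = (0, 0), np.inf
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            p0, p90 = project(np.roll(volume, (dy, dx), axis=(0, 1)))
            cost = float(np.sum((p0 - raw0) ** 2) + np.sum((p90 - raw90) ** 2))
            if cost < best_cost:
                best, best_cost = (dy, dx), cost
    return best

rng = np.random.default_rng(5)
vol = rng.random((32, 32))                                  # "prior" CT volume
raw0, raw90 = project(np.roll(vol, (3, -5), axis=(0, 1)))   # "acquired" rawdata
shift = register_in_rawdata(vol, raw0, raw90)
```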

  3. SU-F-I-09: Improvement of Image Registration Using Total-Variation Based Noise Reduction Algorithms for Low-Dose CBCT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mukherjee, S; Farr, J; Merchant, T

    Purpose: To study the effect of total-variation based noise reduction algorithms on the image registration of low-dose CBCT for patient positioning in radiation therapy. Methods: In low-dose CBCT, the reconstructed image is degraded by excessive quantum noise. In this study, we developed a total-variation based noise reduction algorithm and studied its effect on noise reduction and image registration accuracy. To study the effect on noise reduction, we calculated the peak signal-to-noise ratio (PSNR). To study the improvement in image registration, we performed image registration between volumetric CT and MV-CBCT images of different head-and-neck patients and calculated the mutual information (MI) and Pearson correlation coefficient (PCC) as similarity metrics. The PSNR, MI and PCC were calculated for both the noisy and noise-reduced CBCT images. Results: The algorithm was shown to be effective in reducing the noise level and improving the MI and PCC for the low-dose CBCT images tested. For the different head-and-neck patients, a maximum PSNR improvement of 10 dB with respect to the noisy image was calculated. The improvements in MI and PCC were 9% and 2%, respectively. Conclusion: A total-variation based noise reduction algorithm was studied to improve the image registration between CT and low-dose CBCT. The algorithm showed promising results in reducing the noise in low-dose CBCT images and improving the similarity metrics MI and PCC.
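
    A total-variation noise reduction step can be sketched as plain gradient descent on the smoothed ROF energy; parameters here are illustrative, and this is not the authors' implementation:

```python
import numpy as np

def tv_energy(u, f, lam=0.1, eps=0.05):
    """Smoothed ROF energy: 0.5 ||u - f||^2 + lam * sum sqrt(|grad u|^2 + eps)."""
    gy, gx = np.gradient(u)
    return float(0.5 * np.sum((u - f) ** 2) + lam * np.sum(np.sqrt(gx**2 + gy**2 + eps)))

def tv_denoise(f, lam=0.1, step=0.05, iters=200, eps=0.05):
    """Gradient descent on the smoothed total-variation energy."""
    u = f.copy()
    for _ in range(iters):
        gy, gx = np.gradient(u)
        mag = np.sqrt(gx**2 + gy**2 + eps)
        # divergence of the normalized gradient field (curvature term)
        div = np.gradient(gx / mag, axis=1) + np.gradient(gy / mag, axis=0)
        u = u - step * ((u - f) - lam * div)
    return u

rng = np.random.default_rng(6)
clean = np.zeros((32, 32))
clean[8:24, 8:24] = 1.0                              # piecewise-constant "anatomy"
noisy = clean + 0.2 * rng.standard_normal(clean.shape)
denoised = tv_denoise(noisy)
```

    TV's appeal for this application is that it suppresses quantum noise while keeping the edges that drive MI- or PCC-based registration.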

  4. Accuracy of computer-assisted navigation: significant augmentation by facial recognition software.

    PubMed

    Glicksman, Jordan T; Reger, Christine; Parasher, Arjun K; Kennedy, David W

    2017-09-01

    Over the past 20 years, image guidance navigation has been used with increasing frequency as an adjunct during sinus and skull base surgery. These devices commonly utilize surface registration, where varying pressure of the registration probe and loss of contact with the face during the skin tracing process can lead to registration inaccuracies, and the number of registration points incorporated is necessarily limited. The aim of this study was to evaluate the use of novel facial recognition software for image guidance registration. Consecutive adults undergoing endoscopic sinus surgery (ESS) were prospectively studied. Patients underwent image guidance registration via both conventional surface registration and facial recognition software. The accuracy of both registration processes were measured at the head of the middle turbinate (MTH), middle turbinate axilla (MTA), anterior wall of sphenoid sinus (SS), and nasal tip (NT). Forty-five patients were included in this investigation. Facial recognition was accurate to within a mean of 0.47 mm at the MTH, 0.33 mm at the MTA, 0.39 mm at the SS, and 0.36 mm at the NT. Facial recognition was more accurate than surface registration at the MTH by an average of 0.43 mm (p = 0.002), at the MTA by an average of 0.44 mm (p < 0.001), and at the SS by an average of 0.40 mm (p < 0.001). The integration of facial recognition software did not adversely affect registration time. In this prospective study, automated facial recognition software significantly improved the accuracy of image guidance registration when compared to conventional surface registration. © 2017 ARS-AAOA, LLC.

  5. Objective and expert-independent validation of retinal image registration algorithms by a projective imaging distortion model.

    PubMed

    Lee, Sangyeol; Reinhardt, Joseph M; Cattin, Philippe C; Abràmoff, Michael D

    2010-08-01

    Fundus camera imaging of the retina is widely used to diagnose and manage ophthalmologic disorders including diabetic retinopathy, glaucoma, and age-related macular degeneration. Retinal images typically have a limited field of view, and multiple images can be joined together using an image registration technique to form a montage with a larger field of view. A variety of methods for retinal image registration have been proposed, but evaluating such methods objectively is difficult due to the lack of a reference standard for the true alignment of the individual images that make up the montage. A method of generating simulated retinal images by modeling the geometric distortions due to the eye geometry and the image acquisition process is described in this paper. We also present a validation process that can be used for any retinal image registration method by tracing through the distortion path and assessing the geometric misalignment in the coordinate system of the reference standard. The proposed method can be used to perform an accuracy evaluation over the whole image, so that distortion in the non-overlapping regions of the montage components can be easily assessed. We demonstrate the technique by generating test image sets with a variety of overlap conditions and compare the accuracy of several retinal image registration models. Copyright 2010 Elsevier B.V. All rights reserved.
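
    With a simulated ground-truth distortion available, registration error can be assessed expert-independently by pushing test points through both the true and the estimated transforms and measuring the residual misalignment; a minimal affine sketch (all matrices hypothetical):

```python
import numpy as np

def registration_error(T_true, T_est, pts):
    """Geometric misalignment per point: apply the known (simulated) distortion
    and the estimated registration to the same points, then measure distances.
    Transforms are 2x3 affine matrices acting on 2D points."""
    def apply(T, p):
        return p @ T[:, :2].T + T[:, 2]
    return np.linalg.norm(apply(T_true, pts) - apply(T_est, pts), axis=1)

T_true = np.array([[1.02, 0.01, 2.0], [-0.01, 0.98, -1.0]])
T_est = np.array([[1.02, 0.01, 2.1], [-0.01, 0.98, -1.0]])   # 0.1 px translation error
pts = np.array([[0.0, 0.0], [10.0, 5.0], [3.0, 8.0]])
errs = registration_error(T_true, T_est, pts)
```

    Because the points can be placed anywhere, including the non-overlapping parts of a montage, the error map is not restricted to overlap regions.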

  6. Non-rigid registration of serial dedicated breast CT, longitudinal dedicated breast CT and PET/CT images using the diffeomorphic demons method.

    PubMed

    Santos, Jonathan; Chaudhari, Abhijit J; Joshi, Anand A; Ferrero, Andrea; Yang, Kai; Boone, John M; Badawi, Ramsey D

    2014-09-01

    Dedicated breast CT and PET/CT scanners provide detailed 3D anatomical and functional imaging data sets and are currently being investigated for applications in breast cancer management such as diagnosis, monitoring response to therapy and radiation therapy planning. Our objective was to evaluate the performance of the diffeomorphic demons (DD) non-rigid image registration method to spatially align 3D serial (pre- and post-contrast) dedicated breast computed tomography (CT), and longitudinally-acquired dedicated 3D breast CT and positron emission tomography (PET)/CT images. The algorithmic parameters of the DD method were optimized for the alignment of dedicated breast CT images using training data and then fixed. The performance of the method for image alignment was quantitatively evaluated using three separate data sets: (1) serial breast CT pre- and post-contrast images of 20 women, (2) breast CT images of 20 women acquired before and after repositioning the subject on the scanner, and (3) dedicated breast PET/CT images of 7 women undergoing neo-adjuvant chemotherapy acquired pre-treatment and after 1 cycle of therapy. The DD registration method outperformed no registration (p < 0.001) and conventional affine registration (p ≤ 0.002) for serial and longitudinal breast CT and PET/CT image alignment. In spite of the large size of the imaging data, the computational cost of the DD method was found to be reasonable (3-5 min). Co-registration of dedicated breast CT and PET/CT images can be performed rapidly and reliably using the DD method. This is the first study evaluating the DD registration method for the alignment of dedicated breast CT and PET/CT images. Copyright © 2014 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
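
    The Demons update at the heart of such methods can be sketched in 1D; the diffeomorphic machinery (exponentiated velocity fields) of the actual DD method is omitted, and all parameters are illustrative:

```python
import numpy as np

def demons_1d(fixed, moving, iters=100, sigma=2.0, eps=1e-9):
    """1D Demons sketch: the classic force
    du = (f - m o phi) * grad(m o phi) / (|grad|^2 + (f - m o phi)^2),
    with Gaussian smoothing of the displacement field each iteration."""
    x = np.arange(fixed.size, dtype=float)
    u = np.zeros_like(fixed)
    k = np.exp(-0.5 * (np.arange(-8, 9) / sigma) ** 2)
    k /= k.sum()
    for _ in range(iters):
        warped = np.interp(x + u, x, moving)        # moving resampled through phi
        grad = np.gradient(warped)
        diff = fixed - warped
        u = u + diff * grad / (grad**2 + diff**2 + eps)
        u = np.convolve(u, k, mode="same")          # regularization
    return u, np.interp(x + u, x, moving)

x = np.arange(128, dtype=float)
fixed = np.exp(-0.5 * ((x - 64) / 6.0) ** 2)
moving = np.exp(-0.5 * ((x - 60) / 6.0) ** 2)       # same bump, displaced 4 samples
u, warped = demons_1d(fixed, moving)
ssd0 = float(np.sum((fixed - moving) ** 2))
ssd1 = float(np.sum((fixed - warped) ** 2))
```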

  7. SU-E-J-91: FFT Based Medical Image Registration Using a Graphics Processing Unit (GPU).

    PubMed

    Luce, J; Hoggarth, M; Lin, J; Block, A; Roeske, J

    2012-06-01

    To evaluate the efficiency gains obtained from using a Graphics Processing Unit (GPU) to perform a Fourier Transform (FT) based image registration. Fourier-based image registration involves obtaining the FT of the component images and analyzing them in Fourier space to determine the translations and rotations of one image set relative to another. An important property of FT registration is that by enlarging the images (adding additional pixels), one can obtain translations and rotations with sub-pixel resolution. The expense, however, is an increased computational time. GPUs may decrease the computational time associated with FT image registration by taking advantage of their parallel architecture to perform matrix computations much more efficiently than a Central Processing Unit (CPU). In order to evaluate the computational gains produced by a GPU, images with known translational shifts were utilized. A program was written in the Interactive Data Language (IDL; Exelis, Boulder, CO) to perform CPU-based calculations. Subsequently, the program was modified using GPU bindings (Tech-X, Boulder, CO) to perform GPU-based computation on the same system. Multiple image sizes were used, ranging from 256×256 to 2304×2304. The time required to complete the full algorithm was benchmarked for the CPU and the GPU, and the speed increase was defined as the ratio of CPU-to-GPU computational time. The ratio of CPU-to-GPU time was greater than 1.0 for all images, which indicates that the GPU performed the algorithm faster than the CPU. The smallest improvement, a 1.21 ratio, was found with the smallest image size of 256×256, and the largest speedup, a 4.25 ratio, was observed with the largest image size of 2304×2304. GPU programming resulted in a significant decrease in the computational time associated with a FT image registration algorithm. The inclusion of the GPU may provide near real-time, sub-pixel registration capability.
© 2012 American Association of Physicists in Medicine.
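
    The Fourier-space translation estimate underlying such algorithms is usually phase correlation; a CPU-only numpy sketch (the GPU part of the paper is simply where these FFTs and element-wise products run):

```python
import numpy as np

def phase_correlation(fixed, moving):
    """Recover an integer (dy, dx) translation via the Fourier shift theorem:
    the normalized cross-power spectrum inverts to a delta at the shift."""
    F = np.fft.fft2(fixed)
    M = np.fft.fft2(moving)
    cross = F * np.conj(M)
    cross /= np.abs(cross) + 1e-12                  # keep only the phase
    corr = np.fft.ifft2(cross).real
    dy, dx = np.unravel_index(int(np.argmax(corr)), corr.shape)
    h, w = fixed.shape
    dy = dy - h if dy > h // 2 else dy              # unwrap the circular shift
    dx = dx - w if dx > w // 2 else dx
    return int(dy), int(dx)

rng = np.random.default_rng(7)
fixed = rng.random((64, 64))
moving = np.roll(fixed, (-4, 9), axis=(0, 1))
shift = phase_correlation(fixed, moving)
```

    Zero-padding the inputs before the FFTs is the image-enlargement trick the abstract mentions for sub-pixel resolution; the sketch above stops at integer shifts.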

  8. PCA-based groupwise image registration for quantitative MRI.

    PubMed

    Huizinga, W; Poot, D H J; Guyader, J-M; Klaassen, R; Coolen, B F; van Kranenburg, M; van Geuns, R J M; Uitterdijk, A; Polfliet, M; Vandemeulebroucke, J; Leemans, A; Niessen, W J; Klein, S

    2016-04-01

    Quantitative magnetic resonance imaging (qMRI) is a technique for estimating quantitative tissue properties, such as the T1 and T2 relaxation times, apparent diffusion coefficient (ADC), and various perfusion measures. This estimation is achieved by acquiring multiple images with different acquisition parameters (or at multiple time points after injection of a contrast agent) and by fitting a qMRI signal model to the image intensities. Image registration is often necessary to compensate for misalignments due to subject motion and/or geometric distortions caused by the acquisition. However, large differences in image appearance make accurate image registration challenging. In this work, we propose a groupwise image registration method for compensating misalignment in qMRI. The groupwise formulation of the method eliminates the requirement of choosing a reference image, thus avoiding a registration bias. The method minimizes a cost function that is based on principal component analysis (PCA), exploiting the fact that intensity changes in qMRI can be described by a low-dimensional signal model, but not requiring knowledge on the specific acquisition model. The method was evaluated on 4D CT data of the lungs, and both real and synthetic images of five different qMRI applications: T1 mapping in a porcine heart, combined T1 and T2 mapping in carotid arteries, ADC mapping in the abdomen, diffusion tensor mapping in the brain, and dynamic contrast-enhanced mapping in the abdomen. Each application is based on a different acquisition model. The method is compared to a mutual information-based pairwise registration method and four other state-of-the-art groupwise registration methods. Registration accuracy is evaluated in terms of the precision of the estimated qMRI parameters, overlap of segmented structures, distance between corresponding landmarks, and smoothness of the deformation. 
In all qMRI applications the proposed method performed better than or equally well as competing methods, while avoiding the need to choose a reference image. It is also shown that the results of the conventional pairwise approach do depend on the choice of this reference image. We therefore conclude that our groupwise registration method with a similarity measure based on PCA is the preferred technique for compensating misalignments in qMRI. Copyright © 2015 Elsevier B.V. All rights reserved.
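
    The PCA-based dissimilarity at the heart of this approach can be sketched in a few lines. The snippet below is a simplified stand-in (a variance-ratio cost on the group covariance, not Huizinga et al.'s exact eigenvalue-weighted metric), and `pca_groupwise_cost` is an illustrative helper name:

```python
import numpy as np

def pca_groupwise_cost(intensities, n_components=1):
    """Toy PCA-based dissimilarity for groupwise registration: the
    fraction of total variance NOT explained by the first
    `n_components` principal components. Intensity profiles of
    well-aligned qMRI voxels follow a low-dimensional signal model,
    so good alignment drives this cost toward zero.
    `intensities` has shape (n_voxels, n_images)."""
    X = intensities - intensities.mean(axis=0)
    cov = (X.T @ X) / (X.shape[0] - 1)       # (n_images, n_images)
    eigvals = np.linalg.eigvalsh(cov)[::-1]  # descending order
    return 1.0 - eigvals[:n_components].sum() / eigvals.sum()

rng = np.random.default_rng(0)
signal = rng.standard_normal(500)
# "aligned" series: every image is a scaled copy of one underlying signal,
# so the covariance has rank one and the cost is near zero
aligned = np.stack([2.0 * signal, -signal, 0.5 * signal], axis=1)
# "misaligned" series: independent columns spread variance over all
# components, so the cost is large
misaligned = rng.standard_normal((500, 3))
```

Minimizing such a cost over deformation parameters pulls the whole image group toward a common low-rank intensity model without ever designating a reference image.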

  9. Registration of organs with sliding interfaces and changing topologies

    NASA Astrophysics Data System (ADS)

    Berendsen, Floris F.; Kotte, Alexis N. T. J.; Viergever, Max A.; Pluim, Josien P. W.

    2014-03-01

    Smoothness and continuity assumptions on the deformation field in deformable image registration do not hold for applications where the imaged objects have sliding interfaces. Recent extensions of deformable image registration that accommodate sliding motion of organs are limited to sliding along approximately planar surfaces, or cannot model sliding that changes the topological configuration in the case of multiple organs. We propose a new extension to free-form image registration that is not limited in this way. Our method uses a transformation model that consists of a separate uniform B-spline transformation for each organ region, based on a segmentation of one image. Since this model can create overlapping regions or gaps between regions, we introduce a penalty term that minimizes this undesired effect. The penalty term acts on the surfaces of the organ regions and is optimized simultaneously with the image similarity. To evaluate our method, registrations were performed on publicly available inhale-exhale CT scans for which the performance of other methods is known. Target registration errors are computed on the dense landmark sets that are available with these datasets. On these data our method outperforms the other methods in terms of target registration error and, where applicable, also in terms of overlap and gap volumes. The other methods' approximation of sliding motion along planar surfaces is reasonably well suited to the motion present in the lung data. The ability of our method to handle sliding along curved boundaries and changing region topology was demonstrated on synthetic images.

  10. 3D/2D image registration method for joint motion analysis using low-quality images from mini C-arm machines

    NASA Astrophysics Data System (ADS)

    Ghafurian, Soheil; Hacihaliloglu, Ilker; Metaxas, Dimitris N.; Tan, Virak; Li, Kang

    2017-03-01

    A 3D kinematic measurement of joint movement is crucial for orthopedic surgery assessment and diagnosis. It is usually obtained through a frame-by-frame registration of the 3D bone volume to a fluoroscopy video of the joint movement. The high cost of a high-quality fluoroscopy imaging system has hindered the access of many labs to this application, while the more affordable, low-dose alternative, the mini C-arm, is not commonly used for this purpose because of its low image quality. In this paper, we introduce a novel method for kinematic analysis of joint movement using the mini C-arm. In this method the bone of interest is recovered and isolated from the rest of the image using a non-rigid registration of an atlas to each frame. The 3D/2D registration is then performed using the weighted histogram of image gradients as an image feature. In our experiments, the registration error was 0.89 mm and 2.36° for the human C2 vertebra. While the precision still lags behind that of a high-quality fluoroscopy machine, it is a good starting point that facilitates the use of mini C-arms for motion analysis, making this application available to lower-budget environments. Moreover, the registration was highly resistant to the initial distance from the true registration, converging to the correct solution from anywhere within +/-90° of it.
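
    As a rough illustration of this kind of gradient-based feature, a magnitude-weighted orientation histogram can be computed as below; this is a generic sketch, not the authors' exact descriptor:

```python
import numpy as np

def weighted_gradient_histogram(image, n_bins=36):
    """Histogram of gradient orientations in which each pixel votes
    with a weight equal to its gradient magnitude. Strong bone edges
    dominate the feature, which helps on low-contrast mini C-arm
    frames. (Illustrative sketch only.)"""
    gy, gx = np.gradient(image.astype(float))
    magnitude = np.hypot(gx, gy)
    orientation = np.arctan2(gy, gx)  # in [-pi, pi]
    hist, _ = np.histogram(orientation, bins=n_bins,
                           range=(-np.pi, np.pi), weights=magnitude)
    return hist / hist.sum() if hist.sum() > 0 else hist

# a vertical step edge puts all the weight into the bin that
# contains orientation 0 (purely horizontal gradient)
img = np.zeros((32, 32))
img[:, 16:] = 1.0
hist = weighted_gradient_histogram(img)
```

Comparing such histograms between a simulated projection of the 3D volume and the 2D frame gives a similarity score that is insensitive to the weak soft-tissue contrast of the mini C-arm.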

  11. Analysis and correction of Landsat 4 and 5 Thematic Mapper Sensor Data

    NASA Technical Reports Server (NTRS)

    Bernstein, R.; Hanson, W. A.

    1985-01-01

    Procedures for the correction and registration of Landsat TM image data are examined. The registration of Landsat-4 TM images of San Francisco to Landsat-5 TM images of the same area using the interactive geometric correction program and the cross-correlation technique is described, and the geometric correction and cross-correlation results are presented. The correction of the TM data to a map reference and to a cartographic database is discussed; geometric and cartographic analyses are applied to the registration results.

  12. 2D-3D rigid registration to compensate for prostate motion during 3D TRUS-guided biopsy.

    PubMed

    De Silva, Tharindu; Fenster, Aaron; Cool, Derek W; Gardi, Lori; Romagnoli, Cesare; Samarabandu, Jagath; Ward, Aaron D

    2013-02-01

    Three-dimensional (3D) transrectal ultrasound (TRUS)-guided systems have been developed to improve targeting accuracy during prostate biopsy. However, prostate motion during the procedure is a potential source of error that can cause target misalignments. The authors present an image-based registration technique to compensate for prostate motion by registering the live two-dimensional (2D) TRUS images acquired during the biopsy procedure to a preacquired 3D TRUS image. The registration must be performed both accurately and quickly in order to be useful during the clinical procedure. The authors implemented an intensity-based 2D-3D rigid registration algorithm optimizing the normalized cross-correlation (NCC) metric using Powell's method. The 2D TRUS images acquired during the procedure prior to biopsy gun firing were registered to the baseline 3D TRUS image acquired at the beginning of the procedure. The accuracy was measured by calculating the target registration error (TRE) using manually identified fiducials within the prostate; these fiducials were used for validation only and were not provided as inputs to the registration algorithm. They also evaluated the accuracy when the registrations were performed continuously throughout the biopsy by acquiring and registering live 2D TRUS images every second. This measured the improvement in accuracy resulting from performing the registration, continuously compensating for motion during the procedure. To further validate the method using a more challenging data set, registrations were performed using 3D TRUS images acquired by intentionally exerting different levels of ultrasound probe pressures in order to measure the performance of our algorithm when the prostate tissue was intentionally deformed. In this data set, biopsy scenarios were simulated by extracting 2D frames from the 3D TRUS images and registering them to the baseline 3D image. 
    A graphics processing unit (GPU)-based implementation was used to improve the registration speed. They also studied the correlation between NCC and TREs. The root-mean-square (RMS) TRE of registrations performed prior to biopsy gun firing was found to be 1.87 ± 0.81 mm. This was an improvement over 4.75 ± 2.62 mm before registration. When the registrations were performed every second during the biopsy, the RMS TRE was reduced to 1.63 ± 0.51 mm. For 3D data sets acquired under different probe pressures, the RMS TRE was found to be 3.18 ± 1.6 mm. This was an improvement from 6.89 ± 4.1 mm before registration. With the GPU-based implementation, the registrations were performed with a mean time of 1.1 s. The TRE showed a weak correlation with the similarity metric. However, the authors measured a generally convex shape of the metric around the ground truth, which may explain the rapid convergence of their algorithm to accurate results. Registration to compensate for prostate motion during 3D TRUS-guided biopsy can be performed with a measured accuracy of less than 2 mm and a speed of 1.1 s, which is an important step toward improving the targeting accuracy of a 3D TRUS-guided biopsy system.
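
    The core loop, maximizing NCC over transform parameters with Powell's derivative-free method, can be sketched in 2D with translations only (a simplified stand-in for the paper's 2D-3D rigid search; `scipy` assumed):

```python
import numpy as np
from scipy import ndimage, optimize

def ncc(a, b):
    """Normalized cross-correlation of two equal-shape images."""
    a = a - a.mean()
    b = b - b.mean()
    return (a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum())

def register_translation(fixed, moving):
    """Recover a 2D translation by maximizing NCC with Powell's
    derivative-free method, mirroring how the paper optimizes NCC
    over 2D-3D rigid parameters."""
    cost = lambda t: -ncc(fixed, ndimage.shift(moving, t, order=1))
    return optimize.minimize(cost, x0=[0.0, 0.0], method='Powell').x

# synthetic example: a smooth blob shifted by a known offset
yy, xx = np.mgrid[0:64, 0:64]
fixed = np.exp(-((yy - 30.0) ** 2 + (xx - 34.0) ** 2) / 50.0)
moving = ndimage.shift(fixed, (2.0, -3.0), order=1)
recovered = register_translation(fixed, moving)  # approx (-2, 3)
```

The generally convex shape of NCC around the ground truth that the authors report is what lets a local optimizer like Powell's converge quickly from a reasonable starting point.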

  13. Localization accuracy from automatic and semi-automatic rigid registration of locally-advanced lung cancer targets during image-guided radiation therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Robertson, Scott P.; Weiss, Elisabeth; Hugo, Geoffrey D.

    2012-01-15

    Purpose: To evaluate localization accuracy resulting from rigid registration of locally-advanced lung cancer targets using fully automatic and semi-automatic protocols for image-guided radiation therapy. Methods: Seventeen lung cancer patients, fourteen also presenting with involved lymph nodes, received computed tomography (CT) scans once per week throughout treatment under active breathing control. A physician contoured both lung and lymph node targets for all weekly scans. Various automatic and semi-automatic rigid registration techniques were then performed for both individual and simultaneous alignments of the primary gross tumor volume (GTV_P) and involved lymph nodes (GTV_LN) to simulate the localization process in image-guided radiation therapy. Techniques included "standard" (direct registration of weekly images to a planning CT), "seeded" (manual prealignment of targets to guide standard registration), "transitive-based" (alignment of pretreatment and planning CTs through one or more intermediate images), and "rereferenced" (designation of a new reference image for registration). Localization error (LE) was assessed as the residual centroid and border distances between targets from planning and weekly CTs after registration. Results: Initial bony alignment resulted in centroid LE of 7.3 ± 5.4 mm and 5.4 ± 3.4 mm for the GTV_P and GTV_LN, respectively. Compared to bony alignment, transitive-based and seeded registrations significantly reduced GTV_P centroid LE to 4.7 ± 3.7 mm (p = 0.011) and 4.3 ± 2.5 mm (p < 1 × 10^-3), respectively, but the smallest GTV_P LE of 2.4 ± 2.1 mm was provided by rereferenced registration (p < 1 × 10^-6). Standard registration significantly reduced GTV_LN centroid LE to 3.2 ± 2.5 mm (p < 1 × 10^-3) compared to bony alignment, with little additional gain offered by the other registration techniques.
For simultaneous target alignment, centroid LEs as low as 3.9 ± 2.7 mm and 3.8 ± 2.3 mm were achieved for the GTV_P and GTV_LN, respectively, using rereferenced registration. Conclusions: Target shape, volume, and configuration changes during radiation therapy limited the accuracy of standard rigid registration for image-guided localization in locally-advanced lung cancer. Significant error reductions were possible using other rigid registration techniques, with LE approaching the lower limit imposed by interfraction target variability throughout treatment.

  14. Automatic Image Registration of Multimodal Remotely Sensed Data with Global Shearlet Features

    NASA Technical Reports Server (NTRS)

    Murphy, James M.; Le Moigne, Jacqueline; Harding, David J.

    2015-01-01

    Automatic image registration is the process of aligning two or more images of approximately the same scene with minimal human assistance. Wavelet-based automatic registration methods are standard, but sometimes are not robust to the choice of initial conditions. That is, if the images to be registered are too far apart relative to the initial guess of the algorithm, the registration algorithm does not converge or has poor accuracy, and is thus not robust. These problems occur because wavelet techniques primarily identify isotropic textural features and are less effective at identifying linear and curvilinear edge features. We integrate the recently developed mathematical construction of shearlets, which is more effective at identifying sparse anisotropic edges, with an existing automatic wavelet-based registration algorithm. Our shearlet features algorithm produces more distinct features than wavelet features algorithms; the separation of edges from textures is even stronger than with wavelets. Our algorithm computes shearlet and wavelet features for the images to be registered, then performs least squares minimization on these features to compute a registration transformation. Our algorithm is two-staged and multiresolution in nature. First, a cascade of shearlet features is used to provide a robust, though approximate, registration. This is then refined by registering with a cascade of wavelet features. Experiments across a variety of image classes show an improved robustness to initial conditions, when compared to wavelet features alone.

  15. Automatic Image Registration of Multi-Modal Remotely Sensed Data with Global Shearlet Features

    PubMed Central

    Murphy, James M.; Le Moigne, Jacqueline; Harding, David J.

    2017-01-01

    Automatic image registration is the process of aligning two or more images of approximately the same scene with minimal human assistance. Wavelet-based automatic registration methods are standard, but sometimes are not robust to the choice of initial conditions. That is, if the images to be registered are too far apart relative to the initial guess of the algorithm, the registration algorithm does not converge or has poor accuracy, and is thus not robust. These problems occur because wavelet techniques primarily identify isotropic textural features and are less effective at identifying linear and curvilinear edge features. We integrate the recently developed mathematical construction of shearlets, which is more effective at identifying sparse anisotropic edges, with an existing automatic wavelet-based registration algorithm. Our shearlet features algorithm produces more distinct features than wavelet features algorithms; the separation of edges from textures is even stronger than with wavelets. Our algorithm computes shearlet and wavelet features for the images to be registered, then performs least squares minimization on these features to compute a registration transformation. Our algorithm is two-staged and multiresolution in nature. First, a cascade of shearlet features is used to provide a robust, though approximate, registration. This is then refined by registering with a cascade of wavelet features. Experiments across a variety of image classes show an improved robustness to initial conditions, when compared to wavelet features alone. PMID:29123329

  16. Accuracy assessment of fluoroscopy-transesophageal echocardiography registration

    NASA Astrophysics Data System (ADS)

    Lang, Pencilla; Seslija, Petar; Bainbridge, Daniel; Guiraudon, Gerard M.; Jones, Doug L.; Chu, Michael W.; Holdsworth, David W.; Peters, Terry M.

    2011-03-01

    This study assesses the accuracy of a new transesophageal (TEE) ultrasound (US) fluoroscopy registration technique designed to guide percutaneous aortic valve replacement. In this minimally invasive procedure, a valve is inserted into the aortic annulus via a catheter. Navigation and positioning of the valve is guided primarily by intra-operative fluoroscopy. Poor anatomical visualization of the aortic root region can result in incorrect positioning, leading to heart valve embolization, obstruction of the coronary ostia and acute kidney injury. The use of TEE US images to augment intra-operative fluoroscopy provides significant improvements to image-guidance. Registration is achieved using an image-based TEE probe tracking technique and US calibration. TEE probe tracking is accomplished using a single-perspective pose estimation algorithm. Pose estimation from a single image allows registration to be achieved using only images collected in standard OR workflow. Accuracy of this registration technique is assessed using three models: a point target phantom, a cadaveric porcine heart with implanted fiducials, and in-vivo porcine images. Results demonstrate that registration can be achieved with an RMS error of less than 1.5 mm, which is within the clinical accuracy requirement of 5 mm. US-fluoroscopy registration based on single-perspective pose estimation demonstrates promise as a method for providing guidance to percutaneous aortic valve replacement procedures. Future work will focus on real-time implementation and a visualization system that can be used in the operating room.

  17. Robust Global Image Registration Based on a Hybrid Algorithm Combining Fourier and Spatial Domain Techniques

    DTIC Science & Technology

    2012-09-01

    Crabtree, Peter N.; Seanor, Collin; et al. Robust global image registration based on a hybrid algorithm combining Fourier and spatial domain techniques. … demonstrate performance of a hybrid algorithm. These results are from analysis of a set of images of an ISO 12233 [12] resolution chart captured in the …

  18. Use of multiresolution wavelet feature pyramids for automatic registration of multisensor imagery

    NASA Technical Reports Server (NTRS)

    Zavorin, Ilya; Le Moigne, Jacqueline

    2005-01-01

    The problem of image registration, or the alignment of two or more images representing the same scene or object, has to be addressed in various disciplines that employ digital imaging. In the area of remote sensing, just like in medical imaging or computer vision, it is necessary to design robust, fast, and widely applicable algorithms that would allow automatic registration of images generated by various imaging platforms at the same or different times and that would provide subpixel accuracy. One of the main issues that needs to be addressed when developing a registration algorithm is what type of information should be extracted from the images being registered, to be used in the search for the geometric transformation that best aligns them. The main objective of this paper is to evaluate several wavelet pyramids that may be used both for invariant feature extraction and for representing images at multiple spatial resolutions to accelerate registration. We find that the bandpass wavelets obtained from the steerable pyramid due to Simoncelli perform best in terms of accuracy and consistency, while the low-pass wavelets obtained from the same pyramid give the best results in terms of the radius of convergence. Based on these findings, we propose a modification of a gradient-based registration algorithm that has recently been developed for medical data. We test the modified algorithm on several sets of real and synthetic satellite imagery.
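
    A low-pass multiresolution pyramid of the kind used to accelerate registration can be sketched generically as follows (a plain Gaussian pyramid standing in for the steerable-pyramid bands evaluated in the paper):

```python
import numpy as np
from scipy import ndimage

def lowpass_pyramid(image, levels=3, sigma=1.0):
    """Build a multiresolution pyramid: Gaussian-smooth, then
    subsample by 2 at each level. Registration starts at the coarsest
    level, which extends the radius of convergence, and refines the
    transform on progressively finer levels."""
    pyramid = [np.asarray(image, dtype=float)]
    for _ in range(levels - 1):
        smoothed = ndimage.gaussian_filter(pyramid[-1], sigma=sigma)
        pyramid.append(smoothed[::2, ::2])
    return pyramid

img = np.random.default_rng(3).standard_normal((64, 64))
pyr = lowpass_pyramid(img, levels=3)  # shapes 64x64, 32x32, 16x16
```

A bandpass variant (differences between adjacent low-pass levels, or oriented steerable bands) would supply the edge-like features the paper finds most accurate.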

  19. A Demons algorithm for image registration with locally adaptive regularization.

    PubMed

    Cahill, Nathan D; Noble, J Alison; Hawkes, David J

    2009-01-01

    Thirion's Demons is a popular algorithm for nonrigid image registration because of its linear computational complexity and ease of implementation. It approximately solves the diffusion registration problem by successively estimating force vectors that drive the deformation toward alignment and smoothing the force vectors by Gaussian convolution. In this article, we show how the Demons algorithm can be generalized to allow image-driven locally adaptive regularization in a manner that preserves both the linear complexity and ease of implementation of the original Demons algorithm. We show that the proposed algorithm exhibits lower target registration error and requires less computational effort than the original Demons algorithm on the registration of serial chest CT scans of patients with lung nodules.
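
    A minimal 2D sketch of the original Demons iteration follows, with the uniform Gaussian smoothing that this article replaces by image-driven, locally adaptive regularization:

```python
import numpy as np
from scipy import ndimage

def demons_step(fixed, moving, disp, sigma=1.0):
    """One iteration of Thirion's Demons: compute a force from the
    intensity difference and the fixed-image gradient, accumulate it
    into the displacement field, then regularize with a Gaussian
    filter. The uniform smoothing here is the step the article makes
    locally adaptive."""
    rows, cols = np.indices(fixed.shape).astype(float)
    warped = ndimage.map_coordinates(
        moving, [rows + disp[0], cols + disp[1]], order=1)
    diff = warped - fixed
    gy, gx = np.gradient(fixed)
    denom = gy ** 2 + gx ** 2 + diff ** 2
    denom[denom == 0] = 1.0  # avoid division by zero where force is 0
    disp = disp + np.stack([-diff * gy / denom, -diff * gx / denom])
    return ndimage.gaussian_filter(disp, sigma=(0, sigma, sigma))

# register a bright square shifted down by two pixels
fixed = np.zeros((32, 32))
fixed[10:22, 10:22] = 1.0
moving = np.zeros((32, 32))
moving[12:24, 10:22] = 1.0
disp = np.zeros((2,) + fixed.shape)
for _ in range(50):
    disp = demons_step(fixed, moving, disp)

rows, cols = np.indices(fixed.shape).astype(float)
warped = ndimage.map_coordinates(
    moving, [rows + disp[0], cols + disp[1]], order=1)
initial_error = ((moving - fixed) ** 2).sum()
final_error = ((warped - fixed) ** 2).sum()  # much smaller
```

The locally adaptive generalization amounts to letting `sigma` vary over the image, e.g. smoothing less near organ boundaries, while keeping the same linear-complexity loop.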

  20. Image Registration of Cone-Beam Computer Tomography and Preprocedural Computer Tomography Aids in Localization of Adrenal Veins and Decreasing Radiation Dose in Adrenal Vein Sampling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Busser, Wendy M. H., E-mail: wendy.busser@radboudumc.nl; Arntz, Mark J.; Jenniskens, Sjoerd F. M.

    2015-08-15

    Purpose: We assessed whether image registration of cone-beam computed tomography (CBCT) and contrast-enhanced CT (CE-CT) images indicating the locations of the adrenal veins can aid in increasing the success rate of first-attempt adrenal vein sampling (AVS) and thereby decrease patient radiation dose. Materials and Methods: CBCT scans were acquired in the interventional suite (Philips Allura Xper FD20) and rigidly registered to the vertebra in previously acquired CE-CT. Adrenal vein locations were marked on the CT image and superimposed on live fluoroscopy and digital-subtraction angiography (DSA) to guide the AVS. Seventeen first attempts at AVS were performed with image registration and retrospectively compared with 15 first attempts without image registration performed earlier by the same two interventional radiologists. First-attempt AVS was considered successful when both adrenal vein samples showed representative cortisol levels. Sampling time, dose-area product (DAP), number of DSA runs, fluoroscopy time, and skin dose were recorded. Results: Without image registration, the first attempt at sampling was successful in 8 of 15 procedures, a success rate of 53.3%. This increased to 76.5% (13 of 17) when CBCT and CE-CT image registration was added to AVS procedures (p = 0.266). DAP values (p = 0.001) and the number of DSA runs (p = 0.026) decreased significantly with image registration guidance. Sampling and fluoroscopy times and skin dose showed no significant changes. Conclusion: Guidance based on registration of CBCT and previously acquired diagnostic CE-CT can aid in localizing the adrenal veins, thereby increasing the success rate of first-attempt AVS while significantly decreasing the number of DSA runs and, consequently, the radiation dose required.

  1. Comparison of arterial spin labeling registration strategies in the multi-center GENetic frontotemporal dementia initiative (GENFI).

    PubMed

    Mutsaerts, Henri J M M; Petr, Jan; Thomas, David L; De Vita, Enrico; Cash, David M; van Osch, Matthias J P; Golay, Xavier; Groot, Paul F C; Ourselin, Sebastien; van Swieten, John; Laforce, Robert; Tagliavini, Fabrizio; Borroni, Barbara; Galimberti, Daniela; Rowe, James B; Graff, Caroline; Pizzini, Francesca B; Finger, Elizabeth; Sorbi, Sandro; Castelo Branco, Miguel; Rohrer, Jonathan D; Masellis, Mario; MacIntosh, Bradley J

    2018-01-01

    To compare registration strategies to align arterial spin labeling (ASL) with 3D T1-weighted (T1w) images, with the goal of reducing the between-subject variability of cerebral blood flow (CBF) images. Multi-center 3T ASL data were collected at eight sites with four different sequences in the multi-center GENetic Frontotemporal dementia Initiative (GENFI) study. In a total of 48 healthy controls, we compared the following image registration options: (I) which images to use for registration: the perfusion-weighted image (PWI) to the segmented gray matter (GM) probability map (pGM) (CBF-pGM), or M0 to T1w (M0-T1w); (II) which transformation to use: rigid-body or non-rigid; and (III) whether to mask or not: no masking, or M0-based masking with the FMRIB Software Library Brain Extraction Tool (BET). In addition to visual comparison, we quantified image similarity using the Pearson correlation coefficient (CC), and used the Mann-Whitney U rank sum test. CBF-pGM outperformed M0-T1w (CC improvement 47.2% ± 22.0%; P < 0.001), and the non-rigid transformation outperformed rigid-body (20.6% ± 5.3%; P < 0.001). Masking only improved the M0-T1w rigid-body registration (14.5% ± 15.5%; P = 0.007). The choice of image registration strategy impacts ASL group analyses. The non-rigid transformation is promising but requires validation. CBF-pGM rigid-body registration without masking can be used as a default strategy. In patients with expansive perfusion deficits, M0-T1w may outperform CBF-pGM in sequences with high effective spatial resolution. BET masking only improves M0-T1w registration when the M0 image has sufficient contrast. Level of Evidence: 1. Technical Efficacy: Stage 1. J. Magn. Reson. Imaging 2018;47:131-140. © 2017 International Society for Magnetic Resonance in Medicine.

  2. Monoplane 3D-2D registration of cerebral angiograms based on multi-objective stratified optimization

    NASA Astrophysics Data System (ADS)

    Aksoy, T.; Špiclin, Ž.; Pernuš, F.; Unal, G.

    2017-12-01

    Registration of 3D pre-interventional to 2D intra-interventional medical images has an increasingly important role in surgical planning, navigation and treatment, because it enables the physician to co-locate depth information given by pre-interventional 3D images with the live information in intra-interventional 2D images such as x-ray. Most tasks during image-guided interventions are carried out under a monoplane x-ray, which presents a highly ill-posed problem for state-of-the-art 3D to 2D registration methods. To address the problem of rigid 3D-2D monoplane registration we propose a novel multi-objective stratified parameter optimization, wherein a small set of high-magnitude intensity gradients are matched between the 3D and 2D images. The stratified parameter optimization matches rotation templates to depth templates, the first sampled from projected 3D gradients and the second from the 2D image gradients, so as to recover 3D rigid-body rotations and out-of-plane translation. The objective for matching was the gradient magnitude correlation coefficient, which is invariant to in-plane translation. The in-plane translations are then found by locating the maximum of the gradient phase correlation between the best matching pair of rotation and depth templates. On twenty pairs of 3D and 2D images of ten patients undergoing cerebral endovascular image-guided intervention, the 3D to monoplane 2D registration experiments were set up with a rather high range of initial mean target registration errors from 0 to 100 mm. The proposed method effectively reduced the registration error to below 2 mm, which was further refined by a fast iterative method and resulted in a high final registration accuracy (0.40 mm) and a high success rate (> 96%). Taking into account a fast execution time below 10 s, the observed performance of the proposed method shows high potential for application in clinical image-guidance systems.
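
    The final in-plane step can be illustrated with plain FFT phase correlation (the paper applies phase correlation to gradient templates rather than raw images, so this is an illustrative sketch):

```python
import numpy as np

def phase_correlation(fixed, moving):
    """Estimate the integer in-plane translation between two images
    via FFT phase correlation; returns the shift that maps `moving`
    back onto `fixed`."""
    cross = np.fft.fft2(fixed) * np.conj(np.fft.fft2(moving))
    cross /= np.abs(cross) + 1e-12  # keep phase only
    corr = np.fft.ifft2(cross).real
    peak = np.array(np.unravel_index(np.argmax(corr), corr.shape),
                    dtype=float)
    # shifts beyond half the image size wrap around to negative values
    size = np.array(fixed.shape, dtype=float)
    peak[peak > size / 2] -= size[peak > size / 2]
    return peak

rng = np.random.default_rng(2)
fixed = rng.standard_normal((64, 64))
moving = np.roll(fixed, shift=(-5, 3), axis=(0, 1))
estimated = phase_correlation(fixed, moving)  # (5.0, -3.0)
```

Because the phase-only correlation surface has a single sharp peak, the in-plane offset is found in one FFT round trip, independently of the rotation and depth already recovered by template matching.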

  3. Evaluating the utility of 3D TRUS image information in guiding intra-procedure registration for motion compensation

    NASA Astrophysics Data System (ADS)

    De Silva, Tharindu; Cool, Derek W.; Romagnoli, Cesare; Fenster, Aaron; Ward, Aaron D.

    2014-03-01

    In targeted 3D transrectal ultrasound (TRUS)-guided biopsy, patient and prostate movement during the procedure can cause target misalignments that hinder accurate sampling of pre-planned suspicious tissue locations. Multiple solutions have been proposed for motion compensation via registration of intra-procedural TRUS images to a baseline 3D TRUS image acquired at the beginning of the biopsy procedure. While 2D TRUS images are widely used for intra-procedural guidance, some solutions utilize richer intra-procedural images such as bi- or multi-planar TRUS or 3D TRUS, acquired by specialized probes. In this work, we measured the impact of such richer intra-procedural imaging on motion compensation accuracy, to evaluate the tradeoff between cost and complexity of intra-procedural imaging versus improved motion compensation. We acquired baseline and intra-procedural 3D TRUS images from 29 patients at standard sextant-template biopsy locations. We used the planes extracted from the 3D intra-procedural scans to simulate 2D and 3D information available in different clinically relevant scenarios for registration. The registration accuracy was evaluated by calculating the target registration error (TRE) using manually identified homologous fiducial markers (micro-calcifications). Our results indicate that TRE improves gradually when the number of intra-procedural imaging planes used in registration is increased. Full 3D TRUS information helps the registration algorithm to robustly converge to more accurate solutions. These results can also inform the design of a fail-safe workflow during motion compensation in a system using a tracked 2D TRUS probe, by prescribing rotational acquisitions that can be performed quickly and easily by the physician immediately prior to needle targeting.

  4. Real-time three dimensional CT and MRI to guide interventions for congenital heart disease and acquired pulmonary vein stenosis.

    PubMed

    Suntharos, Patcharapong; Setser, Randolph M; Bradley-Skelton, Sharon; Prieto, Lourdes R

    2017-10-01

    To validate the feasibility and spatial accuracy of pre-procedural 3D images to 3D rotational fluoroscopy registration to guide interventional procedures in patients with congenital heart disease and acquired pulmonary vein stenosis. Cardiac interventions in patients with congenital and structural heart disease require complex catheter manipulation. Current technology allows registration of the anatomy obtained from 3D CT and/or MRI to be overlaid onto fluoroscopy. Thirty patients scheduled for interventional procedures from 12/2012 to 8/2015 were prospectively recruited. A C-arm CT using a biplane C-arm system (Artis zee, VC14H, Siemens Healthcare) was acquired to enable 3D3D registration with pre-procedural images. Following successful image fusion, the anatomic landmarks marked in pre-procedural images were overlaid on live fluoroscopy. The accuracy of image registration was determined by measuring the distance between overlay markers and a reference point in the image. The clinical utility of the registration was evaluated as either "High", "Medium" or "None". Seventeen patients with congenital heart disease and 13 with acquired pulmonary vein stenosis were enrolled. Accuracy and benefit of registration were not evaluated in two patients due to suboptimal images. The distance between the marker and the actual anatomical location was 0-2 mm in 18 (64%), 2-4 mm in 3 (11%) and >4 mm in 7 (25%) patients. 3D3D registration was highly beneficial in 18 (64%), intermediate in 3 (11%), and not beneficial in 7 (25%) patients. 3D3D registration can facilitate complex congenital and structural interventions. It may reduce procedure time, radiation and contrast dose.

  5. Automated retina identification based on multiscale elastic registration.

    PubMed

    Figueiredo, Isabel N; Moura, Susana; Neves, Júlio S; Pinto, Luís; Kumar, Sunil; Oliveira, Carlos M; Ramos, João D

    2016-12-01

    In this work we propose a novel method for identifying individuals based on retinal fundus image matching. The method is based on the image registration of retinal blood vessels, since it is known that the retinal vasculature of an individual is a signature, i.e., a distinctive pattern of the individual. The proposed image registration consists of a multiscale affine registration followed by a multiscale elastic registration. The major advantage of this two-step procedure is that it accounts for both rigid and non-rigid deformations, whether inherent to the retinal tissues or introduced by the imaging process itself. Afterwards, a decision identification measure, relying on a suitable normalized function, is defined to decide whether or not a pair of images belongs to the same individual. The method is tested on a data set of 21721 real pairs generated from a total of 946 retinal fundus images of 339 different individuals, comprising patients followed in the context of different retinal diseases as well as healthy subjects. The evaluation of its performance reveals that it achieves a very low false rejection rate (FRR) at zero FAR (false acceptance rate), equal to 0.084, as well as a low equal error rate (EER), equal to 0.053. Moreover, tests performed using only the multiscale affine registration, discarding the multiscale elastic registration, clearly show the advantage of the proposed approach. The outcome of this study also indicates that the proposed method is reliable, competitive with other existing retinal identification methods, and suitable for real-life applications. Copyright © 2016 Elsevier Ltd. All rights reserved.

  6. Application of tolerance limits to the characterization of image registration performance.

    PubMed

    Fedorov, Andriy; Wells, William M; Kikinis, Ron; Tempany, Clare M; Vangel, Mark G

    2014-07-01

    Deformable image registration is used increasingly in image-guided interventions and other applications. However, validation and characterization of registration performance remain areas that require further study. We propose an analysis methodology for deriving tolerance limits on the initial conditions for deformable registration that reliably lead to a successful registration. This approach results in a concise summary of the probability of registration failure, while accounting for the variability in the test data. The (β, γ) tolerance limit can be interpreted as a value of the input parameter that leads to a successful registration outcome in at least 100β% of cases with 100γ% confidence. The utility of the methodology is illustrated by summarizing the performance of a deformable registration algorithm evaluated in three different experimental setups of increasing complexity. Our examples are based on clinical data collected during MRI-guided prostate biopsy, registered using a publicly available deformable registration tool. The results indicate that the proposed methodology can be used to generate concise graphical summaries of the experiments, as well as a probabilistic estimate of the registration outcome for a future sample. Its use may facilitate improved objective assessment, comparison and retrospective stress-testing of deformable registration.
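
    The (β, γ) tolerance-limit idea can be illustrated with a small self-contained sketch. This is not the authors' code: it assumes a one-sided Clopper-Pearson bound as the confidence machinery and represents each registration run as a pair of (initial-condition value, success flag); all function names are hypothetical.

```python
from math import comb

def binom_tail(n, x, p):
    """P(X >= x) for X ~ Binomial(n, p); monotonically increasing in p."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(x, n + 1))

def lower_conf_bound(x, n, gamma):
    """One-sided Clopper-Pearson lower confidence bound on a success proportion."""
    if x == 0:
        return 0.0
    lo, hi = 0.0, 1.0
    for _ in range(60):  # bisect for the p where P(X >= x | p) = 1 - gamma
        mid = 0.5 * (lo + hi)
        if binom_tail(n, x, mid) > 1.0 - gamma:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

def tolerance_limit(cases, beta, gamma):
    """Largest initial-condition value t such that runs started at values <= t
    succeed in at least 100*beta% of cases with 100*gamma% confidence."""
    best = None
    for t in sorted({v for v, _ in cases}):
        subset = [ok for v, ok in cases if v <= t]
        if lower_conf_bound(sum(subset), len(subset), gamma) >= beta:
            best = t
    return best
```

    For example, if all runs with initial misalignment up to 20 mm succeed and those beyond fail, the (0.8, 0.95) tolerance limit under this sketch is 20 mm.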

  7. Mass preserving registration for lung CT

    NASA Astrophysics Data System (ADS)

    Gorbunova, Vladlena; Lo, Pechin; Loeve, Martine; Tiddens, Harm A.; Sporring, Jon; Nielsen, Mads; de Bruijne, Marleen

    2009-02-01

    In this paper, we evaluate a novel image registration method on a set of expiratory-inspiratory pairs of computed tomography (CT) lung scans. A free-form multiresolution image registration technique is used to match two scans of the same subject. To account for the differences in the lung intensities due to differences in inspiration level, we propose to adjust the intensity of lung tissue according to the local expansion or compression. An image registration method without intensity adjustment is compared to the proposed method. Both approaches are evaluated on a set of 10 pairs of expiration and inspiration CT scans of children with cystic fibrosis lung disease. The proposed method with mass preserving adjustment results in significantly better alignment of the vessel trees. Analysis of local volume change for regions with trapped air compared to normally ventilated regions revealed larger differences between these regions in the case of mass preserving image registration, indicating that mass preserving registration is better at capturing localized differences in lung deformation.
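
    As a rough illustration of the mass-preserving idea (not the paper's implementation), local expansion or compression can be estimated from the Jacobian determinant of the displacement field and used to rescale lung density. The HU-based density model (air at -1000 HU) and the direction of the correction are assumptions here, and the function names are hypothetical.

```python
import numpy as np

def jacobian_determinant(disp, spacing=(1.0, 1.0, 1.0)):
    """det(I + grad u) for a 3D displacement field u of shape (3, Z, Y, X);
    values > 1 indicate local expansion, values < 1 local compression."""
    grads = [np.gradient(disp[i], *spacing) for i in range(3)]  # du_i / dx_j
    J = np.zeros(disp.shape[1:] + (3, 3))
    for i in range(3):
        for j in range(3):
            J[..., i, j] = grads[i][j] + (1.0 if i == j else 0.0)
    return np.linalg.det(J)

def mass_preserving_adjust(hu, det_j):
    """Rescale CT density (HU relative to air at -1000) by the local volume
    change so that tissue mass is conserved across inspiration levels."""
    return (hu + 1000.0) * det_j - 1000.0
```

    A voxel at -900 HU in a region that doubles in volume would map to -800 HU under this convention, reflecting the same mass spread over more volume.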

  8. Multimodality Non-Rigid Image Registration for Planning, Targeting and Monitoring during CT-guided Percutaneous Liver Tumor Cryoablation

    PubMed Central

    Elhawary, Haytham; Oguro, Sota; Tuncali, Kemal; Morrison, Paul R.; Tatli, Servet; Shyn, Paul B.; Silverman, Stuart G.; Hata, Nobuhiko

    2010-01-01

    Rationale and Objectives To develop non-rigid image registration between pre-procedure contrast enhanced MR images and intra-procedure unenhanced CT images, to enhance tumor visualization and localization during CT-guided liver tumor cryoablation procedures. Materials and Methods After IRB approval, a non-rigid registration (NRR) technique was evaluated with different pre-processing steps and algorithm parameters and compared to a standard rigid registration (RR) approach. The Dice Similarity Coefficient (DSC), Target Registration Error (TRE), 95% Hausdorff distance (HD) and total registration time (minutes) were compared using a two-sided Student’s t-test. The entire registration method was then applied during five CT-guided liver cryoablation cases with the intra-procedural CT data transmitted directly from the CT scanner, with both accuracy and registration time evaluated. Results Selected optimal parameters for registration were section thickness of 5mm, cropping the field of view to 66% of its original size, manual segmentation of the liver, B-spline control grid of 5×5×5 and spatial sampling of 50,000 pixels. Mean 95% HD of 3.3mm (2.5x improvement compared to RR, p<0.05); mean DSC metric of 0.97 (13% increase); and mean TRE of 4.1mm (2.7x reduction) were measured. During the cryoablation procedure registration between the pre-procedure MR and the planning intra-procedure CT took a mean time of 10.6 minutes, the MR to targeting CT image took 4 minutes and MR to monitoring CT took 4.3 minutes. Mean registration accuracy was under 3.4mm. Conclusion Non-rigid registration allowed improved visualization of the tumor during interventional planning, targeting and evaluation of tumor coverage by the ice ball. Future work is focused on reducing segmentation time to make the method more clinically acceptable. PMID:20817574
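
    The evaluation metrics named above (Dice Similarity Coefficient and percentile Hausdorff distance) can be sketched in a few lines. This brute-force version is illustrative only, with hypothetical names; it operates on binary masks and small surface point sets rather than full clinical volumes.

```python
import numpy as np

def dice_coefficient(a, b):
    """Dice similarity coefficient between two binary segmentation masks."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def percentile_hausdorff(pts_a, pts_b, q=95):
    """Symmetric qth-percentile Hausdorff distance between two point sets
    (brute-force pairwise distances; fine for small surface samples)."""
    pts_a, pts_b = np.asarray(pts_a, float), np.asarray(pts_b, float)
    d = np.linalg.norm(pts_a[:, None, :] - pts_b[None, :, :], axis=-1)
    return max(np.percentile(d.min(axis=1), q), np.percentile(d.min(axis=0), q))
```

    The 95th percentile (rather than the maximum) makes the Hausdorff measure robust to a few outlying surface points, which is why it is commonly reported for deformable registration.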

  9. Development and application of pulmonary structure-function registration methods: towards pulmonary image-guidance tools for improved airway targeted therapies and outcomes

    NASA Astrophysics Data System (ADS)

    Guo, Fumin; Pike, Damien; Svenningsen, Sarah; Coxson, Harvey O.; Drozd, John J.; Yuan, Jing; Fenster, Aaron; Parraga, Grace

    2014-03-01

    Objectives: We aimed to develop a way to rapidly generate multi-modality (MRI-CT) pulmonary imaging structure-function maps using novel non-rigid image registration methods. This objective is part of our overarching goal to provide an image processing pipeline to generate pulmonary structure-function maps and guide airway-targeted therapies. Methods: Anatomical 1H and functional 3He MRI were acquired in 5 healthy asymptomatic ex-smokers and 7 ex-smokers with chronic obstructive pulmonary disease (COPD) at inspiration breath-hold. Thoracic CT was performed within ten minutes of MRI using the same breath-hold volume. A landmark-based affine registration method, previously validated for imaging of COPD, was based on corresponding fiducial markers located in both CT and 1H MRI coronal slices and compared with shape-based CT-MRI non-rigid registration. Shape-based CT-MRI registration was developed by first identifying the shapes of the lung cavities manually, and then registering the two shapes using affine and thin-plate spline algorithms. We compared registration accuracy using the fiducial localization error (FLE) and target registration error (TRE). Results: For landmark-based registration, the TRE was 8.4±5.3 mm for whole lung and 7.8±4.6 mm for the R and L lungs registered independently (p=0.4). For shape-based registration, the TRE was 8.0±4.6 mm for whole lung as compared to 6.9±4.4 mm for the R and L lungs registered independently, and this difference was significant (p=0.01). The difference between shape-based (6.9±4.4 mm) and landmark-based R and L lung registration (7.8±4.6 mm) was also significant (p=0.04). Conclusion: Shape-based registration TRE was significantly improved compared to landmark-based registration when considering L and R lungs independently.

  10. Research relative to automated multisensor image registration

    NASA Technical Reports Server (NTRS)

    Kanal, L. N.

    1983-01-01

    The basic approaches to image registration are surveyed. Three image models are presented as models of the subpixel problem. A variety of approaches to subpixel analysis are presented using these models.

  11. A simulator for evaluating methods for the detection of lesion-deficit associations

    NASA Technical Reports Server (NTRS)

    Megalooikonomou, V.; Davatzikos, C.; Herskovits, E. H.

    2000-01-01

    Although much has been learned about the functional organization of the human brain through lesion-deficit analysis, the variety of statistical and image-processing methods developed for this purpose precludes a closed-form analysis of the statistical power of these systems. Therefore, we developed a lesion-deficit simulator (LDS), which generates artificial subjects, each of which consists of a set of functional deficits, and a brain image with lesions; the deficits and lesions conform to predefined distributions. We used probability distributions to model the number, sizes, and spatial distribution of lesions, to model the structure-function associations, and to model registration error. We used the LDS to evaluate, as examples, the effects of the complexities and strengths of lesion-deficit associations, and of registration error, on the power of lesion-deficit analysis. We measured the numbers of recovered associations from these simulated data, as a function of the number of subjects analyzed, the strengths and number of associations in the statistical model, the number of structures associated with a particular function, and the prior probabilities of structures being abnormal. The number of subjects required to recover the simulated lesion-deficit associations was found to have an inverse relationship to the strength of associations, and to the smallest probability in the structure-function model. The number of structures associated with a particular function (i.e., the complexity of associations) had a much greater effect on the performance of the analysis method than did the total number of associations. We also found that registration error of 5 mm or less reduces the number of associations discovered by approximately 13% compared to perfect registration. The LDS provides a flexible framework for evaluating many aspects of lesion-deficit analysis.

  12. MIND Demons: Symmetric Diffeomorphic Deformable Registration of MR and CT for Image-Guided Spine Surgery.

    PubMed

    Reaungamornrat, Sureerat; De Silva, Tharindu; Uneri, Ali; Vogt, Sebastian; Kleinszig, Gerhard; Khanna, Akhil J; Wolinsky, Jean-Paul; Prince, Jerry L; Siewerdsen, Jeffrey H

    2016-11-01

    Intraoperative localization of target anatomy and critical structures defined in preoperative MR/CT images can be achieved through the use of multimodality deformable registration. We propose a symmetric diffeomorphic deformable registration algorithm incorporating a modality-independent neighborhood descriptor (MIND) and a robust Huber metric for MR-to-CT registration. The method, called MIND Demons, finds a deformation field between two images by optimizing an energy functional that incorporates both the forward and inverse deformations, smoothness on the integrated velocity fields, a modality-insensitive similarity function suitable to multimodality images, and smoothness on the diffeomorphisms themselves. Direct optimization without relying on the exponential map and stationary velocity field approximation used in conventional diffeomorphic Demons is carried out using a Gauss-Newton method for fast convergence. Registration performance and sensitivity to registration parameters were analyzed in simulation, phantom experiments, and clinical studies emulating application in image-guided spine surgery, and results were compared to mutual information (MI) free-form deformation (FFD), local MI (LMI) FFD, normalized MI (NMI) Demons, and MIND with a diffusion-based registration method (MIND-elastic). The method yielded sub-voxel invertibility (0.008 mm) and nonzero-positive Jacobian determinants. It also showed improved registration accuracy in comparison to the reference methods, with mean target registration error (TRE) of 1.7 mm compared to 11.3, 3.1, 5.6, and 2.4 mm for MI FFD, LMI FFD, NMI Demons, and MIND-elastic methods, respectively. Validation in clinical studies demonstrated realistic deformations with sub-voxel TRE in cases of cervical, thoracic, and lumbar spine.

  14. Final Technical Report for "Applied Mathematics Research: Simulation Based Optimization and Application to Electromagnetic Inverse Problems"

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Haber, Eldad

    2014-03-17

    The focus of research was: developing adaptive meshes for the solution of Maxwell's equations; developing a parallel framework for time-dependent inverse Maxwell's equations; developing multilevel methods for optimization problems with inequality constraints; a new inversion code for inverse Maxwell's equations at the 0th frequency (DC resistivity); and a new inversion code for inverse Maxwell's equations in the low-frequency regime. Although the research concentrated on electromagnetic forward and inverse problems, the results were also applied to the problem of image registration.

  15. A MULTICORE BASED PARALLEL IMAGE REGISTRATION METHOD

    PubMed Central

    Yang, Lin; Gong, Leiguang; Zhang, Hong; Nosher, John L.; Foran, David J.

    2012-01-01

    Image registration is a crucial step for many image-assisted clinical applications such as surgery planning and treatment evaluation. In this paper we propose a landmark-based nonlinear image registration algorithm for matching 2D image pairs. The algorithm was shown to be effective and robust under conditions of large deformations. In landmark-based registration, the most important step is establishing the correspondence among the selected landmark points. This usually requires an extensive search which is often computationally expensive. We introduced a nonregular data partition algorithm using K-means clustering to group the landmarks based on the number of available processing cores. This step optimizes memory usage and data transfer. We have tested our method using the IBM Cell Broadband Engine (Cell/B.E.) platform. PMID:19964921
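
    The landmark-partitioning step described above might look roughly like the following sketch, assuming plain Lloyd-style K-means with a deterministic farthest-point initialization; the paper's exact partitioning scheme and its mapping onto Cell/B.E. cores are not reproduced here, and the function name is hypothetical.

```python
import numpy as np

def kmeans_partition(points, k, iters=50):
    """Group landmark points into k spatial clusters (e.g., one per core)."""
    points = np.asarray(points, dtype=float)
    # deterministic farthest-point initialization of the k centers
    centers = [points[0]]
    for _ in range(1, k):
        d = np.min([np.linalg.norm(points - c, axis=1) for c in centers], axis=0)
        centers.append(points[d.argmax()])
    centers = np.array(centers)
    for _ in range(iters):  # Lloyd iterations: assign, then re-center
        labels = np.linalg.norm(points[:, None] - centers[None], axis=-1).argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = points[labels == j].mean(axis=0)
    return labels
```

    Spatially coherent groups keep each core's correspondence search local, which is the stated motivation for optimizing memory usage and data transfer.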

  16. Registration of interferometric SAR images

    NASA Technical Reports Server (NTRS)

    Lin, Qian; Vesecky, John F.; Zebker, Howard A.

    1992-01-01

    Interferometric synthetic aperture radar (INSAR) is a new way of performing topography mapping. Among the factors critical to mapping accuracy is the registration of the complex SAR images from repeated orbits. A new algorithm for registering interferometric SAR images is presented. A new figure of merit, the average fluctuation function of the phase difference image, is proposed to evaluate the fringe pattern quality. The process of adjusting the registration parameters according to the fringe pattern quality is optimized through a downhill simplex minimization algorithm. The results of applying the proposed algorithm to register two pairs of Seasat SAR images with a short baseline (75 m) and a long baseline (500 m) are shown. It is found that the average fluctuation function is a very stable measure of fringe pattern quality allowing very accurate registration.

  17. Virtual and augmented medical imaging environments: enabling technology for minimally invasive cardiac interventional guidance.

    PubMed

    Linte, Cristian A; White, James; Eagleson, Roy; Guiraudon, Gérard M; Peters, Terry M

    2010-01-01

    Virtual and augmented reality environments have been adopted in medicine as a means to enhance the clinician's view of the anatomy and facilitate the performance of minimally invasive procedures. Their value is truly appreciated during interventions where the surgeon cannot directly visualize the targets to be treated, such as during cardiac procedures performed on the beating heart. These environments must accurately represent the real surgical field and require seamless integration of pre- and intra-operative imaging, surgical tracking, and visualization technology in a common framework centered around the patient. This review begins with an overview of minimally invasive cardiac interventions, describes the architecture of a typical surgical guidance platform including imaging, tracking, registration and visualization, highlights both clinical and engineering accuracy limitations in cardiac image guidance, and discusses the translation of the work from the laboratory into the operating room together with typically encountered challenges.

  18. Deformable planning CT to cone-beam CT image registration in head-and-neck cancer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hou Jidong; Guerrero, Mariana; Chen, Wenjuan

    2011-04-15

    Purpose: The purpose of this work was to implement and validate a deformable CT to cone-beam computed tomography (CBCT) image registration method in head-and-neck cancer to eventually facilitate automatic target delineation on CBCT. Methods: Twelve head-and-neck cancer patients underwent a planning CT and weekly CBCT during the 5-7 week treatment period. The 12 planning CT images (moving images) of these patients were registered to their weekly CBCT images (fixed images) via the symmetric force Demons algorithm and using a multiresolution scheme. Histogram matching was used to compensate for the intensity difference between the two types of images. Using nine known anatomic points as registration targets, the accuracy of the registration was evaluated using the target registration error (TRE). In addition, region-of-interest (ROI) contours drawn on the planning CT were morphed to the CBCT images and the volume overlap index (VOI) between registered contours and manually delineated contours was evaluated. Results: The mean TRE value of the nine target points was less than 3.0 mm, the slice thickness of the planning CT. Of the 369 target points evaluated for registration accuracy, the average TRE value was 2.6±0.6 mm. The mean TRE for bony tissue targets was 2.4±0.2 mm, while the mean TRE for soft tissue targets was 2.8±0.2 mm. The average VOI between the registered and manually delineated ROI contours was 76.2±4.6%, which is consistent with that reported in previous studies. Conclusions: The authors have implemented and validated a deformable image registration method to register planning CT images to weekly CBCT images in head-and-neck cancer cases. The TRE values suggest that the method is a promising tool for automatic target delineation on CBCT.

  19. Automated Registration of Multimodal Optic Disc Images: Clinical Assessment of Alignment Accuracy.

    PubMed

    Ng, Wai Siene; Legg, Phil; Avadhanam, Venkat; Aye, Kyaw; Evans, Steffan H P; North, Rachel V; Marshall, Andrew D; Rosin, Paul; Morgan, James E

    2016-04-01

    To determine the accuracy of automated alignment algorithms for the registration of optic disc images obtained by 2 different modalities: fundus photography and scanning laser tomography. Images obtained with the Heidelberg Retina Tomograph II and paired photographic optic disc images of 135 eyes were analyzed. Three state-of-the-art automated registration techniques, Regional Mutual Information, rigid Feature Neighbourhood Mutual Information (FNMI), and nonrigid FNMI (NRFNMI), were used to align these image pairs. Alignment of each composite picture was assessed on a 5-point grading scale: "Fail" (no alignment, vessels have no contact), "Weak" (vessels have slight contact), "Good" (vessels with <50% contact), "Very Good" (vessels with >50% contact), and "Excellent" (complete alignment). Custom software generated an image mosaic in which the modalities were interleaved as a series of alternate 5×5-pixel blocks. These were graded independently by 3 clinically experienced observers. A total of 810 image pairs were assessed. All 3 registration techniques achieved a score of "Good" or better in >95% of the image sets. NRFNMI had the highest percentage of "Excellent" (mean: 99.6%; range, 95.2% to 99.6%), followed by Regional Mutual Information (mean: 81.6%; range, 86.3% to 78.5%) and FNMI (mean: 73.1%; range, 85.2% to 54.4%). Automated registration of optic disc images by different modalities is a feasible option for clinical application. All 3 methods provided useful levels of alignment, but the NRFNMI technique consistently outperformed the others and is recommended as a practical approach to the automated registration of multimodal disc images.

  20. Combining variational and model-based techniques to register PET and MR images in hand osteoarthritis

    NASA Astrophysics Data System (ADS)

    Magee, Derek; Tanner, Steven F.; Waller, Michael; Tan, Ai Lyn; McGonagle, Dennis; Jeavons, Alan P.

    2010-08-01

    Co-registration of clinical images acquired using different imaging modalities and equipment is finding increasing use in patient studies. Here we present a method for registering high-resolution positron emission tomography (PET) data of the hand acquired using high-density avalanche chambers with magnetic resonance (MR) images of the finger obtained using a 'microscopy coil'. This allows the identification of the anatomical location of the PET radiotracer and thereby locates areas of active bone metabolism/'turnover'. Image fusion involving data acquired from the hand is demanding because rigid-body transformations cannot be employed to accurately register the images. The non-rigid registration technique that has been implemented in this study uses a variational approach to maximize the mutual information between images acquired using these different imaging modalities. A piecewise model of the fingers is employed to ensure that the methodology is robust and that it generates an accurate registration. Evaluation of the accuracy of the technique is tested using both synthetic data and PET and MR images acquired from patients with osteoarthritis. The method outperforms some established non-rigid registration techniques and results in a mean registration error that is less than approximately 1.5 mm in the vicinity of the finger joints.

  1. Demons deformable registration of CT and cone-beam CT using an iterative intensity matching approach.

    PubMed

    Nithiananthan, Sajendra; Schafer, Sebastian; Uneri, Ali; Mirota, Daniel J; Stayman, J Webster; Zbijewski, Wojciech; Brock, Kristy K; Daly, Michael J; Chan, Harley; Irish, Jonathan C; Siewerdsen, Jeffrey H

    2011-04-01

    A method of intensity-based deformable registration of CT and cone-beam CT (CBCT) images is described, in which intensity correction occurs simultaneously within the iterative registration process. The method preserves the speed and simplicity of the popular Demons algorithm while providing robustness and accuracy in the presence of large mismatch between CT and CBCT voxel values ("intensity"). A variant of the Demons algorithm was developed in which an estimate of the relationship between CT and CBCT intensity values for specific materials in the image is computed at each iteration based on the set of currently overlapping voxels. This tissue-specific intensity correction is then used to estimate the registration output for that iteration and the process is repeated. The robustness of the method was tested in CBCT images of a cadaveric head exhibiting a broad range of simulated intensity variations associated with x-ray scatter, object truncation, and/or errors in the reconstruction algorithm. The accuracy of CT-CBCT registration was also measured in six real cases, exhibiting deformations ranging from simple to complex during surgery or radiotherapy guided by a CBCT-capable C-arm or linear accelerator, respectively. The iterative intensity matching approach was robust against all levels of intensity variation examined, including spatially varying errors in voxel value of a factor of 2 or more, as can be encountered in cases of high x-ray scatter. Registration accuracy without intensity matching degraded severely with increasing magnitude of intensity error and introduced image distortion. A single histogram match performed prior to registration alleviated some of these effects but was also prone to image distortion and was quantifiably less robust and accurate than the iterative approach. 
In the six-case registration accuracy study, iterative intensity-matching Demons reduced mean TRE to (2.5 +/- 2.8) mm compared to (3.5 +/- 3.0) mm with rigid registration. A method was developed to iteratively correct CT-CBCT intensity disparity during Demons registration, enabling fast, intensity-based registration in CBCT-guided procedures such as surgery and radiotherapy, in which CBCT voxel values may be inaccurate. Accurate CT-CBCT registration in turn facilitates registration of multimodality preoperative image and planning data to intraoperative CBCT by way of the preoperative CT, thereby linking the intraoperative frame of reference to a wealth of preoperative information that could improve interventional guidance.
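
    The per-iteration, tissue-specific intensity correction described above can be caricatured as a per-class linear fit over the currently overlapping voxels. This is an assumption-laden sketch rather than the published estimator (the paper's mapping may not be linear per class), and all names are hypothetical.

```python
import numpy as np

def tissue_intensity_correction(cbct, ct, labels):
    """For each tissue class, fit a linear map from CBCT to CT intensities over
    the currently overlapping voxels and return the corrected CBCT values."""
    cbct, ct = np.asarray(cbct, float), np.asarray(ct, float)
    out = cbct.copy()
    for c in np.unique(labels):
        m = labels == c
        # least-squares fit: ct ~ slope * cbct + intercept, within class c
        A = np.stack([cbct[m], np.ones(m.sum())], axis=1)
        slope, intercept = np.linalg.lstsq(A, ct[m], rcond=None)[0]
        out[m] = slope * cbct[m] + intercept
    return out
```

    Re-estimating this mapping at every Demons iteration is what lets the correction adapt as the overlap between the two images improves.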

  2. Morphological Feature Extraction for Automatic Registration of Multispectral Images

    NASA Technical Reports Server (NTRS)

    Plaza, Antonio; LeMoigne, Jacqueline; Netanyahu, Nathan S.

    2007-01-01

    The task of image registration can be divided into two major components, i.e., the extraction of control points or features from images, and the search among the extracted features for the matching pairs that represent the same feature in the images to be matched. Manual extraction of control features can be subjective and extremely time consuming, and often results in few usable points. On the other hand, automated feature extraction allows using invariant target features such as edges, corners, and line intersections as relevant landmarks for registration purposes. In this paper, we present an extension of a recently developed morphological approach for automatic extraction of landmark chips and corresponding windows in a fully unsupervised manner for the registration of multispectral images. Once a set of chip-window pairs is obtained, a (hierarchical) robust feature matching procedure, based on a multiresolution overcomplete wavelet decomposition scheme, is used for registration purposes. The proposed method is validated on a pair of remotely sensed scenes acquired by the Advanced Land Imager (ALI) multispectral instrument and the Hyperion hyperspectral instrument aboard NASA's Earth Observing-1 satellite.

  3. Geometry-aware multiscale image registration via OBBTree-based polyaffine log-demons.

    PubMed

    Seiler, Christof; Pennec, Xavier; Reyes, Mauricio

    2011-01-01

    Non-linear image registration is an important tool in many areas of image analysis. For instance, in morphometric studies of a population of brains, free-form deformations between images are analyzed to describe the structural anatomical variability. Such a simple deformation model is justified by the absence of an easily expressible prior about the shape changes. Applying the same algorithms used in brain imaging to orthopedic images might not be optimal due to the difference in the underlying prior on the inter-subject deformations. In particular, using an uninformed deformation prior often leads to local minima far from the expected solution. To improve robustness and promote anatomically meaningful deformations, we propose a locally affine and geometry-aware registration algorithm that automatically adapts to the data. We build upon the log-domain demons algorithm and introduce a new type of OBBTree-based regularization in the registration with a natural multiscale structure. The regularization model is composed of a hierarchy of locally affine transformations via their logarithms. Experiments on mandibles show improved accuracy and robustness when used to initialize the demons, and even comparable performance in direct comparison to the demons, with significantly fewer degrees of freedom. This closes the gap between polyaffine and non-rigid registration and opens new ways to statistically analyze the registration results.

  4. Robust registration of sparsely sectioned histology to ex-vivo MRI of temporal lobe resections

    NASA Astrophysics Data System (ADS)

    Goubran, Maged; Khan, Ali R.; Crukley, Cathie; Buchanan, Susan; Santyr, Brendan; deRibaupierre, Sandrine; Peters, Terry M.

    2012-02-01

    Surgical resection of epileptic foci is a typical treatment for drug-resistant epilepsy; however, accurate preoperative localization is challenging and often requires invasive sub-dural or intra-cranial electrode placement. The presence of cellular abnormalities in the resected tissue can be used to validate the effectiveness of multispectral Magnetic Resonance Imaging (MRI) in pre-operative foci localization and surgical planning. If successful, these techniques can lead to improved surgical outcomes and less invasive procedures. Towards this goal, a novel pipeline is presented here for post-operative imaging of temporal lobe specimens involving MRI and digital histology, and methods are presented and evaluated for bringing these images into spatial correspondence. The sparsely-sectioned histology images of resected tissue represent a challenge for 3D reconstruction, which we address with a combined 3D and 2D rigid registration algorithm that alternates between slice-based and volume-based registration with the ex-vivo MRI. We also evaluate four methods for non-rigid within-plane registration using both images and fiducials, with the top-performing method resulting in a target registration error of 0.87 mm. This work allows for the spatially-local comparison of histology with post-operative MRI and paves the way for eventual registration with pre-operative MRI images.

  5. Co-registration of cone beam CT and preoperative MRI for improved accuracy of electrode localization following cochlear implantation.

    PubMed

    Dragovic, A S; Stringer, A K; Campbell, L; Shaul, C; O'Leary, S J; Briggs, R J

    2018-05-01

    To investigate the clinical usefulness and practicality of co-registration of Cone Beam CT (CBCT) with preoperative Magnetic Resonance Imaging (MRI) for intracochlear localization of electrodes after cochlear implantation. Images of 20 adult patients who underwent CBCT after implantation were co-registered with preoperative MRI scans. Time taken for co-registration was recorded. The images were analysed by clinicians of varying levels of expertise to determine electrode position and ease of interpretation. After a short learning curve, the average co-registration time was 10.78 minutes (StdDev 2.37). All clinicians found the co-registered images easier to interpret than CBCT alone. The mean concordance of CBCT vs. co-registered image analysis between consultant otologists was 60% (17-100%) and 86% (60-100%), respectively. The sensitivity and specificity for CBCT to identify Scala Vestibuli insertion or translocation was 100 and 75%, respectively. The negative predictive value was 100%. CBCT should be performed following adult cochlear implantation for audit and quality control of surgical technique. If SV insertion or translocation is suspected, co-registration with preoperative MRI should be performed to enable easier analysis. There will be a learning curve for this process in terms of both the co-registration and the interpretation of images by clinicians.

  6. The use of virtual fiducials in image-guided kidney surgery

    NASA Astrophysics Data System (ADS)

    Glisson, Courtenay; Ong, Rowena; Simpson, Amber; Clark, Peter; Herrell, S. D.; Galloway, Robert

    2011-03-01

    The alignment of image-space to physical-space lies at the heart of all image-guided procedures. In intracranial surgery, point-based registrations can be used with either skin-affixed or bone-implanted extrinsic objects called fiducial markers. The advantages of point-based registration techniques are that they are robust, fast, and have a well-developed mathematical foundation for the assessment of registration quality. In abdominal image-guided procedures such techniques have not been successful. It is difficult to accurately locate sufficient homologous intrinsic points in image-space and physical-space, and the implantation of extrinsic fiducial markers would constitute "surgery before the surgery." Image-space to physical-space registration for abdominal organs has therefore been dominated by surface-based registration techniques which are iterative, prone to local minima, sensitive to initial pose, and sensitive to percentage coverage of the physical surface. In our work in image-guided kidney surgery we have developed a composite approach using "virtual fiducials." In an open kidney surgery, the perirenal fat is removed and the surface of the kidney is dotted using a surgical marker. A laser range scanner (LRS) is used to obtain a surface representation and matching high definition photograph. A surface to surface registration is performed using a modified iterative closest point (ICP) algorithm. The dots are extracted from the high definition image and assigned the three dimensional values from the LRS pixels over which they lie. As the surgery proceeds, we can then use point-based registrations to re-register the spaces and track deformations due to vascular clamping and surgical tractions.
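
    The point-based re-registration described above rests on a closed-form least-squares step. The following is a minimal 2D illustration of that step (the function names are ours, not from the paper): given matched source and destination fiducial coordinates, it recovers the rigid rotation-plus-translation that best aligns them (the Procrustes solution that ICP also applies internally at each iteration).

```python
import math

def rigid_register_2d(src, dst):
    """Least-squares rigid (rotation + translation) alignment of matched
    2D point sets -- the closed-form core of a point-based registration.
    src, dst: lists of (x, y) pairs in corresponding order."""
    n = len(src)
    cx_s = sum(p[0] for p in src) / n
    cy_s = sum(p[1] for p in src) / n
    cx_d = sum(p[0] for p in dst) / n
    cy_d = sum(p[1] for p in dst) / n
    # Accumulate cross-covariance terms of the centered point sets.
    sxx = sxy = syx = syy = 0.0
    for (xs, ys), (xd, yd) in zip(src, dst):
        xs, ys = xs - cx_s, ys - cy_s
        xd, yd = xd - cx_d, yd - cy_d
        sxx += xs * xd; sxy += xs * yd
        syx += ys * xd; syy += ys * yd
    # Optimal rotation angle in 2D (Procrustes solution).
    theta = math.atan2(sxy - syx, sxx + syy)
    c, s = math.cos(theta), math.sin(theta)
    tx = cx_d - (c * cx_s - s * cy_s)
    ty = cy_d - (s * cx_s + c * cy_s)
    return theta, (tx, ty)

def apply_rigid(theta, t, p):
    """Apply the recovered rigid transform to a single 2D point."""
    c, s = math.cos(theta), math.sin(theta)
    return (c * p[0] - s * p[1] + t[0], s * p[0] + c * p[1] + t[1])
```

In 3D the same idea applies, with the rotation usually obtained via an SVD of the cross-covariance matrix rather than a single angle.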

  7. Image-guided ex-vivo targeting accuracy using a laparoscopic tissue localization system

    NASA Astrophysics Data System (ADS)

    Bieszczad, Jerry; Friets, Eric; Knaus, Darin; Rauth, Thomas; Herline, Alan; Miga, Michael; Galloway, Robert; Kynor, David

    2007-03-01

    In image-guided surgery, discrete fiducials are used to determine a spatial registration between the location of surgical tools in the operating theater and the location of targeted subsurface lesions and critical anatomic features depicted in preoperative tomographic image data. However, the lack of readily localized anatomic landmarks has greatly hindered the use of image-guided surgery in minimally invasive abdominal procedures. To address these needs, we have previously described a laser-based system for localization of internal surface anatomy using conventional laparoscopes. During a procedure, this system generates a digitized, three-dimensional representation of visible anatomic surfaces in the abdominal cavity. This paper presents the results of an experiment utilizing an ex-vivo bovine liver to assess subsurface targeting accuracy achieved using our system. During the experiment, several radiopaque targets were inserted into the liver parenchyma. The location of each target was recorded using an optically-tracked insertion probe. The liver surface was digitized using our system, and registered with the liver surface extracted from post-procedure CT images. This surface-based registration was then used to transform the position of the inserted targets into the CT image volume. The target registration error (TRE) achieved using our surface-based registration (given a suitable registration algorithm initialization) was 2.4 ± 1.0 mm. A comparable TRE (2.6 ± 1.7 mm) was obtained using a registration based on traditional fiducial markers placed on the surface of the same liver. These results indicate the potential of fiducial-free, surface-to-surface registration for image-guided lesion targeting in minimally invasive abdominal surgery.

  8. SU-G-IeP2-06: Evaluation of Registration Accuracy for Cone-Beam CT Reconstruction Techniques

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, J; Wang, P; Zhang, H

    2016-06-15

    Purpose: Cone-beam (CB) computed tomography (CT) is used for image guidance during radiotherapy treatment delivery. Conventional Feldkamp and compressed sensing (CS) based CBCT reconstruction techniques are compared for image registration. This study evaluates the image registration accuracy of conventional and CS CBCT for head-and-neck (HN) patients. Methods: Ten HN patients with oropharyngeal tumors were retrospectively selected. Each HN patient had one planning CT (CTP), and three CBCTs were acquired during an adaptive radiotherapy protocol. Each CBCT was reconstructed by both the conventional (CBCTCON) and compressed sensing (CBCTCS) methods. Two oncologists manually labeled 23 landmarks of normal tissue and implanted gold markers on both the CTP and CBCTCON. Subsequently, landmarks on CTP were propagated to the CBCTs using a b-spline-based deformable image registration (DIR) and rigid registration (RR). The errors of these registration methods between the two CBCT reconstructions were calculated. Results: For DIR, the mean distance between the propagated and the labeled landmarks was 2.8 mm ± 0.52 for CBCTCS, and 3.5 mm ± 0.75 for CBCTCON. For RR, the mean distance between the propagated and the labeled landmarks was 6.8 mm ± 0.92 for CBCTCS, and 8.7 mm ± 0.95 for CBCTCON. Conclusion: This study has demonstrated that CS CBCT is more accurate than conventional CBCT in image registration by both rigid and non-rigid methods. These results suggest that CS CBCT is an improved image modality for image-guided adaptive applications.
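
    The evaluation metric in this record — the mean distance between propagated and manually labeled landmarks — is simple to state precisely. A minimal pure-Python sketch (the function name is ours, not from the study):

```python
import math

def mean_landmark_error(propagated, labeled):
    """Mean Euclidean distance between propagated and labeled 3D
    landmarks -- the registration-error metric used above to compare
    DIR and RR on the two CBCT reconstructions."""
    assert len(propagated) == len(labeled)
    return sum(math.dist(p, q)  # Euclidean distance (Python >= 3.8)
               for p, q in zip(propagated, labeled)) / len(labeled)
```

For example, two landmarks displaced by 0 mm and 5 mm give a mean error of 2.5 mm.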

  9. Combination of intensity-based image registration with 3D simulation in radiation therapy.

    PubMed

    Li, Pan; Malsch, Urban; Bendl, Rolf

    2008-09-07

    Modern techniques of radiotherapy like intensity modulated radiation therapy (IMRT) make it possible to deliver high dose to tumors of different irregular shapes while sparing surrounding healthy tissue. However, internal tumor motion makes precise calculation of the delivered dose distribution challenging, so analysis of tumor motion is necessary. One way to describe target motion is image registration. Many registration methods have been developed, but most belong either to geometric approaches or to intensity approaches. Methods that take account of both anatomical information and the results of intensity matching can greatly improve the results of image registration. Based on this idea, a combined method of image registration followed by 3D modeling and simulation was introduced in this project. Experiments were carried out on 4D-CT lung datasets of five patients. In the 3D simulation, models obtained from images at end-exhalation were deformed to the state at end-inhalation. Diaphragm motions were around -25 mm in the cranial-caudal (CC) direction. To verify the quality of our new method, displacements of landmarks were calculated and compared with measurements in the CT images. An improvement in accuracy after simulation was shown compared to the results obtained by intensity-based image registration alone; the average improvement was 0.97 mm. The average Euclidean error of the combined method was around 3.77 mm. Unrealistic motions such as curl-shaped deformations in the results of image registration were corrected. The combined method required less than 30 min. Our method provides information about the deformation of the target volume, which we need for dose optimization and target definition in our planning system.

  10. [Application of elastic registration based on Demons algorithm in cone beam CT].

    PubMed

    Pang, Haowen; Sun, Xiaoyang

    2014-02-01

    We applied the Demons and accelerated Demons elastic registration algorithms to radiotherapy cone beam CT (CBCT) images, providing software support for real-time understanding of organ changes during radiotherapy. We wrote a 3D CBCT image elastic registration program in Matlab and tested it on 3D CBCT images of two patients with cervical cancer. With the classic Demons algorithm, the minimum mean square error (MSE) decreased by 59.7% and the correlation coefficient (CC) increased by 11.0%; with the accelerated Demons algorithm, MSE decreased by 40.1% and CC increased by 7.2%. Both variants of the Demons algorithm achieved the desired results, but the small differences between them suggest limited precision, and the total registration time was somewhat long. Both accuracy and runtime need further improvement.
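
    The two similarity metrics reported here, MSE and the correlation coefficient, are computed directly from the fixed and registered image intensities. A minimal sketch (hypothetical helper names; images flattened to 1D intensity lists):

```python
import math

def mse(a, b):
    """Mean squared intensity error between two equally sized images."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def correlation(a, b):
    """Pearson correlation coefficient between two images."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a)
    vb = sum((y - mb) ** 2 for y in b)
    return cov / math.sqrt(va * vb)

def percent_change(before, after):
    """Signed percent change, e.g. a drop in MSE after registration
    comes out negative."""
    return 100.0 * (after - before) / before
```

Comparing the metrics before and after registration (via `percent_change`) gives figures like the 59.7% MSE decrease quoted in the record.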

  11. MREG V1.1 : a multi-scale image registration algorithm for SAR applications.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eichel, Paul H.

    2013-08-01

    MREG V1.1 is the sixth generation SAR image registration algorithm developed by the Signal Processing & Technology Department for Synthetic Aperture Radar applications. Like its predecessor algorithm REGI, it employs a powerful iterative multi-scale paradigm to achieve the competing goals of sub-pixel registration accuracy and the ability to handle large initial offsets. Since it is not model based, it allows for high fidelity tracking of spatially varying terrain-induced misregistration. Since it does not rely on image domain phase, it is equally adept at coherent and noncoherent image registration. This document provides a brief history of the registration processors developed by Dept. 5962 leading up to MREG V1.1, a full description of the signal processing steps involved in the algorithm, and a user's manual with application specific recommendations for CCD, TwoColor MultiView, and SAR stereoscopy.

  12. 3D surface-based registration of ultrasound and histology in prostate cancer imaging.

    PubMed

    Schalk, Stefan G; Postema, Arnoud; Saidov, Tamerlan A; Demi, Libertario; Smeenge, Martijn; de la Rosette, Jean J M C H; Wijkstra, Hessel; Mischi, Massimo

    2016-01-01

    Several transrectal ultrasound (TRUS)-based techniques aiming at accurate localization of prostate cancer are emerging to improve diagnostics or to assist with focal therapy. However, precise validation prior to introduction into clinical practice is required. Histopathology after radical prostatectomy provides an excellent ground truth, but needs accurate registration with imaging. In this work, a 3D, surface-based, elastic registration method was developed to fuse TRUS images with histopathologic results. To maximize the applicability in clinical practice, no auxiliary sensors or dedicated hardware were used for the registration. The mean registration errors, measured in vitro and in vivo, were 1.5 ± 0.2 and 2.1 ± 0.5 mm, respectively. Copyright © 2015 Elsevier Ltd. All rights reserved.

  13. Prostate multimodality image registration based on B-splines and quadrature local energy.

    PubMed

    Mitra, Jhimli; Martí, Robert; Oliver, Arnau; Lladó, Xavier; Ghose, Soumya; Vilanova, Joan C; Meriaudeau, Fabrice

    2012-05-01

    Needle biopsy of the prostate is guided by Transrectal Ultrasound (TRUS) imaging. The TRUS images do not provide proper spatial localization of malignant tissues due to the poor sensitivity of TRUS to visualize early malignancy. Magnetic Resonance Imaging (MRI) has been shown to be sensitive for the detection of early stage malignancy, and therefore, a novel 2D deformable registration method that overlays pre-biopsy MRI onto TRUS images has been proposed. The registration method involves B-spline deformations with Normalized Mutual Information (NMI) as the similarity measure, computed from texture images obtained from the amplitude responses of directional quadrature filter pairs. Registration accuracy of the proposed method is evaluated by computing the Dice Similarity Coefficient (DSC) and 95% Hausdorff Distance (HD) values for the prostate mid-gland slices of 20 patients, and the Target Registration Error (TRE) for the 18 patients in which homologous structures are visible in both the TRUS and transformed MR images. The proposed method and B-splines using NMI computed from intensities provide average TRE values of 2.64 ± 1.37 and 4.43 ± 2.77 mm, respectively. Our method shows statistically significant improvement in TRE when compared with B-splines using NMI computed from intensities (Student's t test, p = 0.02). The proposed method shows a 1.18-times improvement over thin-plate spline registration, which has an average TRE of 3.11 ± 2.18 mm. The mean DSC and the mean 95% HD values obtained with the proposed method of B-splines with NMI computed from texture are 0.943 ± 0.039 and 4.75 ± 2.40 mm, respectively. The texture energy computed from the quadrature filter pairs provides better registration accuracy for multimodal images than raw intensities. The low TRE values of the proposed registration method add to the feasibility of its use during TRUS-guided biopsy.
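
    NMI, the similarity measure driving the B-spline optimization in this record, is computed from the joint intensity histogram of the two images. A minimal sketch for quantized intensities (hypothetical function name; note the paper computes NMI on texture images rather than raw intensities):

```python
import math
from collections import Counter

def normalized_mutual_information(a, b):
    """NMI = (H(A) + H(B)) / H(A, B), from the discrete joint histogram
    of two equally sized images flattened to 1D lists of quantized
    intensities. Ranges from 1 (independent) to 2 (identical)."""
    n = len(a)
    pa, pb, pab = Counter(a), Counter(b), Counter(zip(a, b))
    ha = -sum(c / n * math.log(c / n) for c in pa.values())
    hb = -sum(c / n * math.log(c / n) for c in pb.values())
    hab = -sum(c / n * math.log(c / n) for c in pab.values())
    return (ha + hb) / hab
```

Because NMI depends only on the joint histogram, it tolerates the very different intensity mappings of MRI and TRUS, which is why it is a standard choice for multimodal registration.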

  14. Mammogram registration: a phantom-based evaluation of compressed breast thickness variation effects.

    PubMed

    Richard, Frédéric J P; Bakić, Predrag R; Maidment, Andrew D A

    2006-02-01

    The temporal comparison of mammograms is complex; a wide variety of factors can cause changes in image appearance. Mammogram registration is proposed as a method to reduce the effects of these changes and potentially to emphasize genuine alterations in breast tissue. Evaluation of such registration techniques is difficult since ground truth regarding breast deformations is not available in clinical mammograms. In this paper, we propose a systematic approach to evaluate sensitivity of registration methods to various types of changes in mammograms using synthetic breast images with known deformations. As a first step, images of the same simulated breasts with various amounts of simulated physical compression have been used to evaluate a previously described nonrigid mammogram registration technique. Registration performance is measured by calculating the average displacement error over a set of evaluation points identified in mammogram pairs. Applying appropriate thickness compensation and using a preferred order of the registered images, we obtained an average displacement error of 1.6 mm for mammograms with compression differences of 1-3 cm. The proposed methodology is applicable to analysis of other sources of mammogram differences and can be extended to the registration of multimodality breast data.

  15. Validation of an improved 'diffeomorphic demons' algorithm for deformable image registration in image-guided radiation therapy.

    PubMed

    Zhou, Lu; Zhou, Linghong; Zhang, Shuxu; Zhen, Xin; Yu, Hui; Zhang, Guoqian; Wang, Ruihao

    2014-01-01

    Deformable image registration (DIR) is widely used in radiation therapy, for example in automatic contour generation, dose accumulation, and tumor growth or regression analysis. To achieve higher registration accuracy and faster convergence, an improved 'diffeomorphic demons' registration algorithm was proposed and validated. Based on Brox et al.'s gradient constancy assumption and Malis's efficient second-order minimization (ESM) algorithm, a grey value gradient similarity term and a transformation error term were added to the demons energy function, and a formula was derived to calculate the update of the transformation field. The limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) algorithm was used to optimize the energy function so that the iteration number could be determined automatically. The proposed algorithm was validated using mathematically deformed images and physically deformed phantom images. Compared with the original 'diffeomorphic demons' algorithm, the proposed registration method achieves higher precision and a faster convergence speed. Because of differing scanning conditions across radiotherapy fractions, the density ranges of the treatment image and the planning image may differ; in such cases, the improved demons algorithm can still achieve fast and accurate registration.

  16. Experimental Evaluation of a Deformable Registration Algorithm for Motion Correction in PET-CT Guided Biopsy.

    PubMed

    Khare, Rahul; Sala, Guillaume; Kinahan, Paul; Esposito, Giuseppe; Banovac, Filip; Cleary, Kevin; Enquobahrie, Andinet

    2013-01-01

    Positron emission tomography computed tomography (PET-CT) images are increasingly being used for guidance during percutaneous biopsy. However, due to the physics of image acquisition, PET-CT images are susceptible to artifacts from respiratory and cardiac motion, leading to inaccurate tumor localization, shape distortion, and attenuation-correction errors. To address these problems, we present a method for motion correction that relies on respiratory-gated CT images aligned using a deformable registration algorithm. In this work, we use two deformable registration algorithms and two optimization approaches for registering the CT images obtained over the respiratory cycle. The two algorithms are the BSpline and the symmetric forces Demons registration. In the first optimization approach, CT images at each time point are registered to a single reference time point. In the second approach, deformation maps are obtained to align each CT time point with its adjacent time point. These deformations are then composed to find the deformation with respect to a reference time point. We evaluate these two algorithms and optimization approaches using respiratory-gated CT images obtained from 7 patients. Our results show that overall the BSpline registration algorithm with the reference optimization approach gives the best results.
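
    In the second optimization approach, adjacent-time-point deformations are chained by composition into one deformation to the reference time point. A minimal 1D sketch of that composition step, using linear interpolation (our own illustration under simplified assumptions, not the study's implementation):

```python
def compose_displacements(d1, d2):
    """Compose two 1D displacement fields sampled on an integer grid.
    The combined field first applies d1, then samples d2 (linearly
    interpolated) at the displaced position:
        d(x) = d1(x) + d2(x + d1(x)).
    This is the step that chains a map from time t to t+1 with a map
    from t+1 to t+2 into a single map from t to t+2."""
    n = len(d1)
    out = []
    for i in range(n):
        x = min(max(i + d1[i], 0.0), n - 1.0)  # clamp to the grid
        j = int(x)
        k = min(j + 1, n - 1)
        f = x - j
        out.append(d1[i] + d2[j] * (1.0 - f) + d2[k] * f)
    return out
```

In 2D/3D the same formula applies per component, with bilinear or trilinear interpolation of the second field.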

  17. Respiratory motion correction for free-breathing 3D abdominal MRI using CNN-based image registration: a feasibility study.

    PubMed

    Lv, Jun; Yang, Ming; Zhang, Jue; Wang, Xiaoying

    2018-02-01

    Free-breathing abdomen imaging requires non-rigid motion registration of unavoidable respiratory motion in three-dimensional undersampled data sets. In this work, we introduce an image registration method based on the convolutional neural network (CNN) to obtain motion-free abdominal images throughout the respiratory cycle. Abdominal data were acquired from 10 volunteers using a 1.5 T MRI system. The respiratory signal was extracted from the central k-space spokes, and the acquired data were reordered into three bins according to the corresponding breathing signal. Retrospective image reconstruction of the three near-motion-free respiratory phases was performed using non-Cartesian iterative SENSE reconstruction. Then, we trained a CNN to analyse the spatial transform among the different bins. This network could generate the displacement vector field and be applied to perform registration on unseen image pairs. To demonstrate the feasibility of this registration method, we compared the performance of three different registration approaches for accurate image fusion of three bins: non-motion corrected (NMC), local affine registration method (LREG) and CNN. Visualization of coronal images indicated that LREG had caused broken blood vessels, while the vessels of the CNN were sharper and more consecutive. As shown in the sagittal view, compared to NMC and CNN, distorted and blurred liver contours were caused by LREG. At the same time, zoom-in axial images presented that the vessels were delineated more clearly by CNN than LREG. The statistical results of the signal-to-noise ratio, visual score, vessel sharpness and registration time over all volunteers were compared among the NMC, LREG and CNN approaches. The SNR indicated that the CNN acquired the best image quality (207.42 ± 96.73), which was better than NMC (116.67 ± 44.70) and LREG (187.93 ± 96.68).
The image visual score agreed with SNR, marking CNN (3.85 ± 0.12) as the best, followed by LREG (3.43 ± 0.13) and NMC (2.55 ± 0.09). A vessel sharpness assessment yielded similar values between the CNN (0.81 ± 0.03) and LREG (0.80 ± 0.04), differentiating them from the NMC (0.78 ± 0.06). When compared with the LREG-based reconstruction, the CNN-based reconstruction reduces the registration time from 1 h to 1 min. Our preliminary results demonstrate the feasibility of the CNN-based approach, and this scheme outperforms the NMC- and LREG-based methods. Advances in knowledge: This method reduces the registration time from ~1 h to ~1 min, which has promising prospects for clinical use. To the best of our knowledge, this study shows the first convolutional neural network-based registration method to be applied in abdominal images.
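
    Whatever model predicts the displacement vector field — a CNN as in this record, or a classical method — the final step is to resample the moving image through that field. A minimal 2D bilinear-warping sketch (pure Python, hypothetical names; real pipelines use GPU resampling layers):

```python
import math

def warp_image(img, dvf):
    """Warp a 2D image (list of rows) with a dense displacement vector
    field: each output pixel (r, c) samples the input at (r + dr, c + dc)
    with bilinear interpolation. dvf[r][c] = (dr, dc)."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for r in range(h):
        for c in range(w):
            dr, dc = dvf[r][c]
            # Clamp the sample point to the image domain.
            y = min(max(r + dr, 0.0), h - 1.0)
            x = min(max(c + dc, 0.0), w - 1.0)
            y0, x0 = int(math.floor(y)), int(math.floor(x))
            y1, x1 = min(y0 + 1, h - 1), min(x0 + 1, w - 1)
            fy, fx = y - y0, x - x0
            top = img[y0][x0] * (1 - fx) + img[y0][x1] * fx
            bot = img[y1][x0] * (1 - fx) + img[y1][x1] * fx
            out[r][c] = top * (1 - fy) + bot * fy
    return out
```

Because the interpolation is differentiable in the displacement, the same resampling can sit at the end of a CNN and be trained end-to-end against an image-similarity loss.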

  18. On the appropriate feature for general SAR image registration

    NASA Astrophysics Data System (ADS)

    Li, Dong; Zhang, Yunhua

    2012-09-01

    An investigation into the appropriate feature for SAR image registration is conducted. Commonly used features such as tie points, the Harris corner, the scale-invariant feature transform (SIFT), and the speeded-up robust feature (SURF) are comprehensively evaluated in terms of several criteria: the geometrical invariance of the feature, the extraction speed, the localization accuracy, the geometrical invariance of the descriptor, the matching speed, the robustness to decorrelation, and the flexibility to image speckling. It is shown that SURF outperforms the others. In particular, SURF shows good flexibility to image speckling because its Fast-Hessian detector has a potential relation with the refined Lee filter. It is recommended to perform SURF on the oversampled image with an unaltered sampling step so as to improve the subpixel registration accuracy and speckle immunity. SURF is thus more appropriate and competent for general SAR image registration.

  19. Using an Android application to assess registration strategies in open hepatic procedures: a planning and simulation tool

    NASA Astrophysics Data System (ADS)

    Doss, Derek J.; Heiselman, Jon S.; Collins, Jarrod A.; Weis, Jared A.; Clements, Logan W.; Geevarghese, Sunil K.; Miga, Michael I.

    2017-03-01

    Sparse surface digitization with an optically tracked stylus for use in an organ surface-based image-to-physical registration is an established approach for image-guided open liver surgery procedures. However, variability in sparse data collections during open hepatic procedures can produce disparity in registration alignments. In part, this variability arises from inconsistencies with the patterns and fidelity of collected intraoperative data. The liver lacks distinct landmarks and experiences considerable soft tissue deformation. Furthermore, data coverage of the organ is often incomplete or unevenly distributed. While more robust feature-based registration methodologies have been developed for image-guided liver surgery, it is still unclear how variation in sparse intraoperative data affects registration. In this work, we have developed an application to allow surgeons to study the performance of surface digitization patterns on registration. Given the intrinsic deformability of soft tissue, we incorporate realistic organ deformation when assessing the fidelity of a rigid registration methodology. We report the construction of our application and preliminary registration results using four participants. Our preliminary results indicate that registration quality improves as users acquire more experience selecting patterns of sparse intraoperative surface data.

  20. [Registration and 3D rendering of serial tissue section images].

    PubMed

    Liu, Zhexing; Jiang, Guiping; Dong, Wu; Zhang, Yu; Xie, Xiaomian; Hao, Liwei; Wang, Zhiyuan; Li, Shuxiang

    2002-12-01

    Reconstructing 3D images from serial tissue section images is an important morphological research method, and registration of the serial images is a key step in 3D reconstruction. First, an introduction to the segmentation-counting registration algorithm, which is based on the joint histogram, is presented. After thresholding of the two images to be registered, the criterion function is defined as a count over a specific region of the joint histogram, which greatly speeds up the alignment process. The method is then used for the serial tissue image matching task, laying a solid foundation for 3D rendering. Finally, preliminary surface rendering results are presented.
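
    The abstract leaves the criterion function only loosely specified; one plausible reading is that, after thresholding both images, alignment quality is the count of pixel pairs falling in the matched cells of the resulting 2x2 joint histogram (foreground with foreground, background with background). A hedged sketch of that reading (our interpretation, not the authors' code):

```python
def segmentation_counting_criterion(a, b, ta, tb):
    """Sketch of a segmentation-counting similarity: threshold both
    images (flattened to 1D lists) at ta and tb, then count pixel
    pairs landing in the matched-foreground / matched-background
    corners of the 2x2 joint histogram. A higher count suggests
    better alignment; counting is far cheaper than computing a full
    intensity-based criterion, which is the claimed speed advantage."""
    return sum(1 for x, y in zip(a, b) if (x >= ta) == (y >= tb))
```

An alignment search would evaluate this count over candidate shifts or rotations and keep the transform with the highest agreement.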

  1. Non-rigid registration of 3D ultrasound for neurosurgery using automatic feature detection and matching.

    PubMed

    Machado, Inês; Toews, Matthew; Luo, Jie; Unadkat, Prashin; Essayed, Walid; George, Elizabeth; Teodoro, Pedro; Carvalho, Herculano; Martins, Jorge; Golland, Polina; Pieper, Steve; Frisken, Sarah; Golby, Alexandra; Wells, William

    2018-06-04

    The brain undergoes significant structural change over the course of neurosurgery, including highly nonlinear deformation and resection. It can be informative to recover the spatial mapping between structures identified in preoperative surgical planning and the intraoperative state of the brain. We present a novel feature-based method for achieving robust, fully automatic deformable registration of intraoperative neurosurgical ultrasound images. A sparse set of local image feature correspondences is first estimated between ultrasound image pairs, after which rigid, affine and thin-plate spline models are used to estimate dense mappings throughout the image. Correspondences are derived from 3D features, distinctive generic image patterns that are automatically extracted from 3D ultrasound images and characterized in terms of their geometry (i.e., location, scale, and orientation) and a descriptor of local image appearance. Feature correspondences between ultrasound images are achieved based on a nearest-neighbor descriptor matching and probabilistic voting model similar to the Hough transform. Experiments demonstrate our method on intraoperative ultrasound images acquired before and after opening of the dura mater, during resection and after resection in nine clinical cases. A total of 1620 automatically extracted 3D feature correspondences were manually validated by eleven experts and used to guide the registration. Then, using manually labeled corresponding landmarks in the pre- and post-resection ultrasound images, we show that our feature-based registration reduces the mean target registration error from an initial value of 3.3 to 1.5 mm. This result demonstrates that the 3D features promise to offer a robust and accurate solution for 3D ultrasound registration and to correct for brain shift in image-guided neurosurgery.

  2. Calibration of 3D ultrasound to an electromagnetic tracking system

    NASA Astrophysics Data System (ADS)

    Lang, Andrew; Parthasarathy, Vijay; Jain, Ameet

    2011-03-01

    The use of electromagnetic (EM) tracking is an important guidance tool that can aid procedures requiring accurate localization, such as needle injections or catheter guidance. Using EM tracking, the information from different modalities can be easily combined using pre-procedural calibration information. These calibrations are performed individually, per modality, allowing different imaging systems to be mixed and matched according to the procedure at hand. In this work, a framework for the calibration of a 3D transesophageal echocardiography probe to EM tracking is developed. The complete calibration framework includes three required steps: data acquisition, needle segmentation, and calibration. Ultrasound (US) images of an EM-tracked needle must be acquired, with the position of the needle in each volume subsequently extracted by segmentation. The calibration transformation is determined through a registration between the segmented points and the recorded EM needle positions. Additionally, the speed of sound is compensated for, since calibration is performed in water, which has a different speed of sound than is assumed by the US machine. A statistical validation framework has also been developed to provide further information related to the accuracy and consistency of the calibration. Further validation of the calibration showed an accuracy of 1.39 mm.
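
    The speed-of-sound compensation mentioned above is a simple rescaling: the scanner converts echo time to depth with an assumed tissue speed of sound, so depths measured in a water bath must be scaled by the ratio of the true to the assumed speed. A sketch with typical values (1480 m/s for room-temperature water and 1540 m/s for soft tissue are illustrative assumptions, not figures from the paper):

```python
def correct_depth(depth_mm, c_true=1480.0, c_assumed=1540.0):
    """Rescale an ultrasound depth measurement for the actual speed of
    sound in the medium. The scanner computes depth = c_assumed * t / 2
    from the echo time t, so the true depth is:
        true_depth = displayed_depth * c_true / c_assumed."""
    return depth_mm * c_true / c_assumed
```

Without this correction, segmented needle points in water would sit a few percent too deep, biasing the calibration transform.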

  3. Geodesic regression for image time-series.

    PubMed

    Niethammer, Marc; Huang, Yang; Vialard, François-Xavier

    2011-01-01

    Registration of image-time series has so far been accomplished (i) by concatenating registrations between image pairs, (ii) by solving a joint estimation problem resulting in piecewise geodesic paths between image pairs, (iii) by kernel-based local averaging or (iv) by augmenting the joint estimation with additional temporal irregularity penalties. Here, we propose a generative model extending least squares linear regression to the space of images by using a second-order dynamic formulation for image registration. Unlike previous approaches, the formulation allows for a compact representation of an approximation to the full spatio-temporal trajectory through its initial values. The method also opens up possibilities to design image-based approximation algorithms. The resulting optimization problem is solved using an adjoint method.

  4. MRI signal intensity based B-spline nonrigid registration for pre- and intraoperative imaging during prostate brachytherapy.

    PubMed

    Oguro, Sota; Tokuda, Junichi; Elhawary, Haytham; Haker, Steven; Kikinis, Ron; Tempany, Clare M C; Hata, Nobuhiko

    2009-11-01

    To apply an intensity-based nonrigid registration algorithm to MRI-guided prostate brachytherapy clinical data and to assess its accuracy. A nonrigid registration of preoperative MRI to intraoperative MRI images was carried out in 16 cases using a B-spline algorithm in a retrospective manner. The registration was assessed qualitatively by experts' visual inspection and quantitatively by measuring the Dice similarity coefficient (DSC) for total gland (TG), central gland (CG), and peripheral zone (PZ), the mutual information (MI) metric, and the fiducial registration error (FRE) between corresponding anatomical landmarks, for both the nonrigid and a rigid registration method. All 16 cases were successfully registered in less than 5 min. After the nonrigid registration, DSC values for TG, CG, and PZ were 0.91, 0.89, and 0.79, respectively; the MI metric was -0.19 ± 0.07 and the FRE was 2.3 ± 1.8 mm. All the metrics were significantly better than in the case of rigid registration, as determined by one-sided t-tests. The intensity-based nonrigid registration method was demonstrated to be feasible on clinical data and showed statistically improved metrics when compared to rigid registration alone. The method is a valuable tool to integrate pre- and intraoperative images for brachytherapy.
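
    The DSC values reported for TG, CG, and PZ follow the standard overlap definition, DSC = 2|A ∩ B| / (|A| + |B|). A minimal sketch for binary masks (hypothetical function name):

```python
def dice_coefficient(mask_a, mask_b):
    """Dice similarity coefficient between two binary masks, flattened
    to 1D lists of 0/1. Returns 1.0 for identical masks, 0.0 for
    disjoint ones."""
    inter = sum(1 for x, y in zip(mask_a, mask_b) if x and y)
    size = sum(mask_a) + sum(mask_b)
    return 2.0 * inter / size if size else 1.0
```

Values above roughly 0.7 are conventionally read as good overlap, which puts the reported 0.79-0.91 range in context.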

  5. Towards radiological diagnosis of abdominal adhesions based on motion signatures derived from sequences of cine-MRI images.

    PubMed

    Fenner, John; Wright, Benjamin; Emberey, Jonathan; Spencer, Paul; Gillott, Richard; Summers, Angela; Hutchinson, Charles; Lawford, Pat; Brenchley, Paul; Bardhan, Karna Dev

    2014-06-01

    This paper reports novel development and preliminary application of an image registration technique for diagnosis of abdominal adhesions imaged with cine-MRI (cMRI). Adhesions can severely compromise the movement and physiological function of the abdominal contents, and their presence is difficult to detect. The image registration approach presented here is designed to expose anomalies in movement of the abdominal organs, providing a movement signature that is indicative of underlying structural abnormalities. Validation of the technique was performed using structurally based in vitro and in silico models, supported with Receiver Operating Characteristic (ROC) methods. For the more challenging cases presented to the small cohort of 4 observers, the AUC (area under curve) improved from a mean value of 0.67 ± 0.02 (without image registration assistance) to a value of 0.87 ± 0.02 when image registration support was included. Also, in these cases, a reduction in time to diagnosis was observed, decreasing by between 20% and 50%. These results provided sufficient confidence to apply the image registration diagnostic protocol to sample magnetic resonance imaging data from healthy volunteers as well as a patient suffering from encapsulating peritoneal sclerosis (an extreme form of adhesions) where immobilization of the gut by cocooning of the small bowel is observed. The results as a whole support the hypothesis that movement analysis using image registration offers a possible method for detecting underlying structural anomalies and encourage further investigation. Copyright © 2014 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.

  6. Automated replication of cone beam CT-guided treatments in the Pinnacle(3) treatment planning system for adaptive radiotherapy.

    PubMed

    Hargrave, Catriona; Mason, Nicole; Guidi, Robyn; Miller, Julie-Anne; Becker, Jillian; Moores, Matthew; Mengersen, Kerrie; Poulsen, Michael; Harden, Fiona

    2016-03-01

    Time-consuming manual methods have been required to register cone-beam computed tomography (CBCT) images with plans in the Pinnacle(3) treatment planning system in order to replicate delivered treatments for adaptive radiotherapy. These methods rely on fiducial marker (FM) placement during CBCT acquisition or the image mid-point to localise the image isocentre. A quality assurance study was conducted to validate an automated CBCT-plan registration method utilising the Digital Imaging and Communications in Medicine (DICOM) Structure Set (RS) and Spatial Registration (RE) files created during online image-guided radiotherapy (IGRT). CBCTs of a phantom were acquired with FMs and predetermined setup errors using various online IGRT workflows. The CBCTs, DICOM RS and RE files were imported into Pinnacle(3) plans of the phantom and the resulting automated CBCT-plan registrations were compared to existing manual methods. A clinical protocol for the automated method was subsequently developed and tested retrospectively using CBCTs and plans for six bladder patients. The automated CBCT-plan registration method was successfully applied to thirty-four phantom CBCT images acquired with an online 0 mm action level workflow. Ten CBCTs acquired with other IGRT workflows required manual workarounds. This was addressed during the development and testing of the clinical protocol using twenty-eight patient CBCTs. The automated CBCT-plan registrations were instantaneous, replicating delivered treatments in Pinnacle(3) with errors of ±0.5 mm. These errors were comparable to mid-point-dependent manual registrations but superior to FM-dependent manual registrations. The automated CBCT-plan registration method quickly and reliably replicates delivered treatments in Pinnacle(3) for adaptive radiotherapy.
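
    A DICOM Spatial Registration (RE) object stores a rigid alignment as a 4x4 homogeneous transformation matrix. As a minimal sketch (with a hypothetical matrix, here a pure 2.5 mm lateral shift, standing in for the one read from an RE file), applying such a matrix to isocentre coordinates looks like this:

```python
import numpy as np

def apply_spatial_registration(matrix4x4, points_mm):
    """Apply a DICOM-style 4x4 homogeneous transform to N x 3 points (mm)."""
    pts = np.asarray(points_mm, dtype=float)
    homo = np.hstack([pts, np.ones((pts.shape[0], 1))])   # N x 4 homogeneous
    return (homo @ np.asarray(matrix4x4, dtype=float).T)[:, :3]

# Hypothetical registration: a pure 2.5 mm shift along x.
reg = np.array([[1, 0, 0, 2.5],
                [0, 1, 0, 0.0],
                [0, 0, 1, 0.0],
                [0, 0, 0, 1.0]])
isocentre = [[0.0, 0.0, 0.0]]
print(apply_spatial_registration(reg, isocentre))  # isocentre moved 2.5 mm in x
```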

  7. Estimation of regional lung expansion via 3D image registration

    NASA Astrophysics Data System (ADS)

    Pan, Yan; Kumar, Dinesh; Hoffman, Eric A.; Christensen, Gary E.; McLennan, Geoffrey; Song, Joo Hyun; Ross, Alan; Simon, Brett A.; Reinhardt, Joseph M.

    2005-04-01

    A method is described to estimate regional lung expansion and related biomechanical parameters using multiple CT images of the lungs, acquired at different inflation levels. In this study, the lungs of two sheep were imaged utilizing a multi-detector row CT at different lung inflations in the prone and supine positions. Using the lung surfaces and the airway branch points for guidance, a 3D inverse consistent image registration procedure was used to match different lung volumes at each orientation. The registration was validated using a set of implanted metal markers. After registration, the Jacobian of the deformation field was computed to express regional expansion or contraction. Regional lung expansion at different pressures and orientations is then compared.
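
    The Jacobian determinant of the deformation phi(x) = x + u(x) quantifies local volume change: values above 1 indicate expansion, below 1 contraction. A sketch of this computation on a synthetic displacement field (illustrative, not the authors' implementation), using finite differences:

```python
import numpy as np

def jacobian_determinant(u0, u1, u2, spacing=(1.0, 1.0, 1.0)):
    """Determinant of the Jacobian of phi(x) = x + u(x) on a 3D grid.
    u_i is the displacement component along array axis i.
    Values > 1 mean local expansion, < 1 local contraction."""
    grads = [np.gradient(u, *spacing) for u in (u0, u1, u2)]  # grads[i][j] = du_i/dx_j
    J = np.empty(u0.shape + (3, 3))
    for i in range(3):
        for j in range(3):
            J[..., i, j] = grads[i][j] + (1.0 if i == j else 0.0)
    return np.linalg.det(J)

# Synthetic field: uniform 10% dilation, u(x) = 0.1 * x, so det = 1.1**3 everywhere.
c0, c1, c2 = np.mgrid[0:8, 0:8, 0:8].astype(float)
det = jacobian_determinant(0.1 * c0, 0.1 * c1, 0.1 * c2)
print(det.mean())  # 1.331
```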

  8. Performance evaluations of demons and free form deformation algorithms for the liver region.

    PubMed

    Wang, Hui; Gong, Guanzhong; Wang, Hongjun; Li, Dengwang; Yin, Yong; Lu, Jie

    2014-04-01

    We investigated the influence of breathing motion on radiation therapy using four-dimensional computed tomography (4D-CT) and showed that registration of the different phase images of 4D-CT is essential. The demons algorithm, in two interpolation modes, was compared to the free form deformation (FFD) model algorithm for registering 4D-CT phase images in tumor tracking, using iodipin as verification. Linear interpolation was used in both modes: mode 1 set pixels outside the image to the nearest-pixel value, while mode 2 set them to zero. We used normalized mutual information (NMI), sum of squared differences, modified Hausdorff distance, and registration speed to evaluate the performance of each algorithm. The average NMI after demons registration in mode 1 improved by 1.76% and 4.75% compared to mode 2 and the FFD model algorithm, respectively. Further, the modified Hausdorff distance did not differ between demons modes 1 and 2, but mode 1 was 15.2% lower than FFD. Finally, the demons algorithm was by far the fastest. The demons algorithm in mode 1 was therefore found to be the most suitable for the registration of 4D-CT images. Subtracting the floating images from the reference image before and after demons registration further verified that the influence of breathing motion cannot be ignored and that the demons registration method is feasible.
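
    Normalized mutual information, one of the evaluation metrics above, can be computed from a joint intensity histogram as NMI = (H(A) + H(B)) / H(A, B); it equals 2 for identical images and approaches 1 for independent ones. A minimal sketch (bin count chosen arbitrarily):

```python
import numpy as np

def normalized_mutual_information(a, b, bins=32):
    """NMI(a, b) = (H(a) + H(b)) / H(a, b); 2 for identical images,
    close to 1 for statistically independent images."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1)
    py = pxy.sum(axis=0)

    def entropy(p):
        p = p[p > 0]
        return -(p * np.log(p)).sum()

    return (entropy(px) + entropy(py)) / entropy(pxy)

rng = np.random.default_rng(0)
img = rng.random((64, 64))
print(normalized_mutual_information(img, img))                   # 2.0 (identical)
print(normalized_mutual_information(img, rng.random((64, 64))))  # ~1 (unrelated)
```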

  9. Image registration for multi-exposed HDRI and motion deblurring

    NASA Astrophysics Data System (ADS)

    Lee, Seok; Wey, Ho-Cheon; Lee, Seong-Deok

    2009-02-01

    In multi-exposure image fusion, alignment is an essential prerequisite to prevent ghost artifacts after blending. Compared to the usual matching problem, registration is more difficult when each image is captured under different photographing conditions. In HDR imaging, we use long- and short-exposure images, which differ in brightness and contain over- or under-saturated regions. In the motion deblurring problem, we use a blurred and noisy image pair, and the amount of motion blur varies from one image to another due to the different exposure times. The main difficulty is that the luminance levels of the two images are not linearly related: the brightness of each image cannot be perfectly equalized or normalized, which leads to unstable and inaccurate alignment. To solve this problem, we applied a probabilistic measure, mutual information, to represent similarity between images after alignment. In this paper, we describe the characteristics of multi-exposed input images from the registration perspective and analyze the magnitude of camera hand shake. By exploiting the luminance independence of mutual information, we propose a fast and practically useful image registration technique for multiple capture. Our algorithm can be applied to extreme HDR scenes and motion-blurred scenes with over a 90% success rate, and its simplicity allows it to be embedded in digital cameras and mobile camera phones. The effectiveness of our registration algorithm is examined through various experiments on real HDR and motion deblurring cases using a hand-held camera.

  10. Introduction to clinical and laboratory (small-animal) image registration and fusion.

    PubMed

    Zanzonico, Pat B; Nehmeh, Sadek A

    2006-01-01

    Imaging has long been a vital component of clinical medicine and, increasingly, of biomedical research in small animals. Clinical and laboratory imaging modalities can be divided into two general categories, structural (or anatomical) and functional (or physiological). The latter, in particular, has spawned what has come to be known as "molecular imaging". Image registration and fusion have rapidly emerged as invaluable components of both clinical and small-animal imaging and have led to the development and marketing of a variety of multi-modality devices, e.g. PET-CT, which provide registered and fused three-dimensional image sets. This paper briefly reviews the basics of image registration and fusion and available clinical and small-animal multi-modality instrumentation.

  11. SU-F-J-96: Comparison of Frame-Based and Mutual Information Registration Techniques for CT and MR Image Sets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Popple, R; Bredel, M; Brezovich, I

    Purpose: To compare the accuracy of CT-MR registration using a mutual information method with registration using a frame-based localizer box. Methods: Image sets of ten patients with the Leksell head frame, each scanned with a modality-specific localizer box, were imported into the treatment planning system. The fiducial rods of the localizer box were contoured on both the MR and CT scans. The skull was contoured on the CT images. The MR and CT images were registered by two methods. The frame-based method used the transformation that minimized the mean square distance of the centroids of the contours of the fiducial rods from a mathematical model of the localizer. The mutual information method used automated image registration tools in the TPS and was restricted to a volume-of-interest defined by the skull contours with a 5 mm margin. For each case, the two registrations were adjusted by two evaluation teams, each comprising an experienced radiation oncologist and a neurosurgeon, to optimize alignment in the region of the brainstem. The teams were blinded to the registration method. Results: The mean adjustment was 0.4 mm (range 0 to 2 mm) and 0.2 mm (range 0 to 1 mm) for the frame and mutual information methods, respectively. The median difference between the frame and mutual information registrations was 0.3 mm, but was not statistically significant using the Wilcoxon signed rank test (p=0.37). Conclusion: The difference between frame and mutual information registration techniques was neither statistically significant nor, for most applications, clinically important. These results suggest that mutual information is equivalent to frame-based image registration for radiosurgery. Work is ongoing to add additional evaluators and to assess the differences between evaluators.
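
    The frame-based method is, at its core, a rigid least-squares fit between corresponding point sets (rod centroids versus the localizer model). A sketch of such a fit using the Kabsch/SVD algorithm on synthetic fiducial coordinates (illustrative, not the TPS implementation):

```python
import numpy as np

def rigid_fit(source, target):
    """Least-squares rotation R and translation t so that
    target ~ source @ R.T + t (Kabsch algorithm via SVD)."""
    src = np.asarray(source, dtype=float)
    tgt = np.asarray(target, dtype=float)
    src_c = src - src.mean(axis=0)
    tgt_c = tgt - tgt.mean(axis=0)
    U, _, Vt = np.linalg.svd(src_c.T @ tgt_c)
    d = np.sign(np.linalg.det(Vt.T @ U.T))          # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = tgt.mean(axis=0) - src.mean(axis=0) @ R.T
    return R, t

# Synthetic "fiducial centroids": rotate 30 degrees about z, then shift.
theta = np.radians(30.0)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
pts = np.array([[0.0, 0, 0], [100, 0, 0], [0, 100, 0], [0, 0, 100]])
moved = pts @ R_true.T + np.array([5.0, -3.0, 12.0])
R, t = rigid_fit(pts, moved)
print(np.allclose(R, R_true), np.allclose(pts @ R.T + t, moved))  # True True
```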

  12. Efficient Multi-Atlas Registration using an Intermediate Template Image

    PubMed Central

    Dewey, Blake E.; Carass, Aaron; Blitz, Ari M.; Prince, Jerry L.

    2017-01-01

    Multi-atlas label fusion is an accurate but time-consuming method of labeling the human brain. Using an intermediate image as a registration target can allow researchers to reduce time constraints by storing the deformations required of the atlas images. In this paper, we investigate the effect of registration through an intermediate template image on multi-atlas label fusion and propose a novel registration technique to counteract the negative effects of through-template registration. We show that overall computation time can be decreased dramatically with minimal impact on final label accuracy and time can be exchanged for improved results in a predictable manner. We see almost complete recovery of Dice similarity over a simple through-template registration using the corrected method and still maintain a 3–4 times speed increase. Further, we evaluate the effectiveness of this method on brains of patients with normal-pressure hydrocephalus, where abnormal brain shape presents labeling difficulties, specifically the ventricular labels. Our correction method creates substantially better ventricular labeling than traditional methods and maintains the speed increase seen in healthy subjects. PMID:28943702
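
    The saving comes from composing a stored atlas-to-template deformation with a single template-to-subject registration. For the affine component, through-template registration is just matrix composition; the matrices below are hypothetical:

```python
import numpy as np

# Hypothetical 4x4 affine transforms in homogeneous coordinates.
atlas_to_template = np.array([[1.0, 0, 0, 10.0],   # stored once per atlas
                              [0, 1.0, 0,  0.0],
                              [0, 0, 1.0, -5.0],
                              [0, 0, 0,   1.0]])
template_to_subject = np.array([[0.9, 0, 0, 2.0],  # computed once per subject
                                [0, 0.9, 0, 0.0],
                                [0, 0, 0.9, 0.0],
                                [0, 0, 0,   1.0]])

# Composition maps atlas space directly to subject space, so no atlas
# needs a direct (and expensive) registration to each new subject.
atlas_to_subject = template_to_subject @ atlas_to_template

p_atlas = np.array([1.0, 2.0, 3.0, 1.0])           # homogeneous point
p_subject = atlas_to_subject @ p_atlas
print(p_subject[:3])  # atlas point mapped into subject space
```

    The correction method in the paper addresses the analogous composition of deformable transforms, where interpolation error accumulates; the affine case above only illustrates the reuse pattern.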

  13. Efficient multi-atlas registration using an intermediate template image

    NASA Astrophysics Data System (ADS)

    Dewey, Blake E.; Carass, Aaron; Blitz, Ari M.; Prince, Jerry L.

    2017-03-01

    Multi-atlas label fusion is an accurate but time-consuming method of labeling the human brain. Using an intermediate image as a registration target can allow researchers to reduce time constraints by storing the deformations required of the atlas images. In this paper, we investigate the effect of registration through an intermediate template image on multi-atlas label fusion and propose a novel registration technique to counteract the negative effects of through-template registration. We show that overall computation time can be decreased dramatically with minimal impact on final label accuracy and time can be exchanged for improved results in a predictable manner. We see almost complete recovery of Dice similarity over a simple through-template registration using the corrected method and still maintain a 3-4 times speed increase. Further, we evaluate the effectiveness of this method on brains of patients with normal-pressure hydrocephalus, where abnormal brain shape presents labeling difficulties, specifically the ventricular labels. Our correction method creates substantially better ventricular labeling than traditional methods and maintains the speed increase seen in healthy subjects.

  14. Efficient Method for Scalable Registration of Remote Sensing Images

    NASA Astrophysics Data System (ADS)

    Prouty, R.; LeMoigne, J.; Halem, M.

    2017-12-01

    The goal of this project is to build a prototype of a resource-efficient pipeline that will provide subpixel-accuracy registration of multitemporal Earth science data. Accurate registration of Earth-science data is imperative to proper data integration and seamless mosaicking of data from multiple times, sensors, and/or observation geometries. Modern registration methods make use of many arithmetic operations and sometimes require complete knowledge of the image domain. As such, while sensors become more advanced and are able to provide higher-resolution data, the memory resources required to properly register these data become prohibitive. The proposed pipeline employs a region of interest extraction algorithm in order to extract image subsets with high local feature density. These image subsets are then used to generate local solutions to the global registration problem. The local solutions are then 'globalized' to determine the deformation model that best solves the registration problem. The region of interest extraction and globalization routines are tested for robustness among the variety of scene types and spectral locations provided by Earth-observing instruments such as Landsat, MODIS, or ASTER.
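
    The globalization step must combine local solutions while tolerating ROIs where matching fails. As a simple illustration of the idea (assuming a pure-translation model, much simpler than the deformation models the pipeline supports), a robust global shift can be taken as the median of per-ROI estimates:

```python
import numpy as np

def globalize_translation(local_shifts):
    """Combine per-ROI shift estimates (N x 2, pixels) into one global
    translation; the median is robust to a few failed local matches."""
    return np.median(np.asarray(local_shifts, dtype=float), axis=0)

# Hypothetical per-ROI estimates: four agree, one outlier from a bad match.
shifts = [[2.1, -0.9], [1.9, -1.1], [2.0, -1.0], [2.2, -1.0], [15.0, 7.0]]
print(globalize_translation(shifts))  # robust estimate near [2.1, -1.0]
```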

  15. Automated robust registration of grossly misregistered whole-slide images with varying stains

    NASA Astrophysics Data System (ADS)

    Litjens, G.; Safferling, K.; Grabe, N.

    2016-03-01

    Cancer diagnosis and pharmaceutical research increasingly depend on the accurate quantification of cancer biomarkers. Identification of biomarkers is usually performed through immunohistochemical staining of cancer sections on glass slides. However, combination of multiple biomarkers from a wide variety of immunohistochemically stained slides is a tedious process in traditional histopathology due to the switching of glass slides and re-identification of regions of interest by pathologists. Digital pathology now allows us to apply image registration algorithms to digitized whole-slide images to align the differing immunohistochemical stains automatically. However, registration algorithms need to be robust to changes in color due to differing stains and severe changes in tissue content between slides. In this work we developed a robust registration methodology to allow for fast coarse alignment of multiple immunohistochemical stains to the base hematoxylin and eosin stained image. We applied HSD color model conversion to obtain a less stain color dependent representation of the whole-slide images. Subsequently, optical density thresholding and connected component analysis were used to identify the relevant regions for registration. Template matching using normalized mutual information was applied to provide initial translation and rotation parameters, after which a cost function-driven affine registration was performed. The algorithm was validated using 40 slides from 10 prostate cancer patients, with landmark registration error as a metric. Median landmark registration error was around 180 microns, which indicates performance is adequate for practical application. None of the registrations failed, indicating the robustness of the algorithm.
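
    Optical density is computed from transmitted intensity as OD = -log10(I / I0), so stained tissue (low transmission) has high OD regardless of stain color. A sketch of OD thresholding followed by connected component analysis, with hypothetical threshold values, using scipy.ndimage (assumed available):

```python
import numpy as np
from scipy import ndimage

def tissue_mask(intensity, i0=255.0, od_threshold=0.15, min_pixels=20):
    """Threshold optical density OD = -log10(I / I0) and keep connected
    components large enough to be tissue rather than noise."""
    od = -np.log10(np.clip(intensity, 1, None) / i0)
    mask = od > od_threshold
    labels, n = ndimage.label(mask)
    if n == 0:
        return mask
    sizes = ndimage.sum(mask, labels, index=range(1, n + 1))
    keep = [i + 1 for i, s in enumerate(sizes) if s >= min_pixels]
    return np.isin(labels, keep)

# Synthetic slide: bright background (I ~ 240) with one dark tissue blob.
img = np.full((64, 64), 240.0)
img[20:40, 20:40] = 120.0   # tissue absorbs light, lowering transmission
mask = tissue_mask(img)
print(mask.sum())  # 400 pixels in the 20x20 blob
```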

  16. An automatic markerless registration method for neurosurgical robotics based on an optical camera.

    PubMed

    Meng, Fanle; Zhai, Fangwen; Zeng, Bowei; Ding, Hui; Wang, Guangzhi

    2018-02-01

    Current markerless registration methods for neurosurgical robotics use the facial surface to match the robot space with the image space, and acquisition of the facial surface usually requires manual interaction and constrains the patient to a supine position. To overcome these drawbacks, we propose a registration method that is automatic and does not constrain patient position. An optical camera attached to the robot end effector captures images around the patient's head from multiple views. Then, high coverage of the head surface is reconstructed from the images through multi-view stereo vision. Since the acquired head surface point cloud contains color information, a specific mark that is manually drawn on the patient's head prior to the capture procedure can be extracted to automatically accomplish coarse registration rather than using facial anatomic landmarks. Then, fine registration is achieved by registering the high coverage of the head surface without relying solely on the facial region, thus eliminating patient position constraints. The head surface was acquired by the camera with good repeatability. The average target registration error of 8 different patient positions measured with targets inside a head phantom was [Formula: see text], while the mean surface registration error was [Formula: see text]. The method proposed in this paper achieves automatic markerless registration in multiple patient positions and guarantees registration accuracy inside the head. This method provides a new approach for establishing the spatial relationship between the image space and the robot space.

  17. Multi-Image Registration for an Enhanced Vision System

    NASA Technical Reports Server (NTRS)

    Hines, Glenn; Rahman, Zia-Ur; Jobson, Daniel; Woodell, Glenn

    2002-01-01

    An Enhanced Vision System (EVS) utilizing multi-sensor image fusion is currently under development at the NASA Langley Research Center. The EVS will provide enhanced images of the flight environment to assist pilots in poor visibility conditions. Multi-spectral images obtained from a short wave infrared (SWIR), a long wave infrared (LWIR), and a color visible band CCD camera, are enhanced and fused using the Retinex algorithm. The images from the different sensors do not have a uniform data structure: the three sensors not only operate at different wavelengths, but they also have different spatial resolutions, optical fields of view (FOV), and bore-sighting inaccuracies. Thus, in order to perform image fusion, the images must first be co-registered. Image registration is the task of aligning images taken at different times, from different sensors, or from different viewpoints, so that all corresponding points in the images match. In this paper, we present two methods for registering multiple multi-spectral images. The first method performs registration using sensor specifications to match the FOVs and resolutions directly through image resampling. In the second method, registration is obtained through geometric correction based on a spatial transformation defined by user selected control points and regression analysis.
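
    The second method fits a spatial transformation to user-selected control points by regression. For an affine model this reduces to linear least squares; a sketch with made-up control points (a 2x scale plus shift between two hypothetical sensor images):

```python
import numpy as np

def fit_affine_2d(src_pts, dst_pts):
    """Least-squares 2D affine (A, b) with dst ~ src @ A.T + b,
    estimated from three or more control-point pairs."""
    src = np.asarray(src_pts, dtype=float)
    dst = np.asarray(dst_pts, dtype=float)
    X = np.hstack([src, np.ones((len(src), 1))])       # rows [x, y, 1]
    params, *_ = np.linalg.lstsq(X, dst, rcond=None)   # 3 x 2 solution
    A = params[:2].T
    b = params[2]
    return A, b

# Hypothetical control points: one sensor's image scaled 2x and shifted.
src = np.array([[0.0, 0], [10, 0], [0, 10], [10, 10]])
dst = src * 2.0 + np.array([5.0, -3.0])
A, b = fit_affine_2d(src, dst)
print(A, b)  # recovers the 2x scale and the (5, -3) shift
```

    With more than three point pairs, the least-squares fit averages out small selection errors in the user-picked control points.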

  18. Digital Correction of Motion Artifacts in Microscopy Image Sequences Collected from Living Animals Using Rigid and Non-Rigid Registration

    PubMed Central

    Lorenz, Kevin S.; Salama, Paul; Dunn, Kenneth W.; Delp, Edward J.

    2013-01-01

    Digital image analysis is a fundamental component of quantitative microscopy. However, intravital microscopy presents many challenges for digital image analysis. In general, microscopy volumes are inherently anisotropic, suffer from decreasing contrast with tissue depth, lack object edge detail, and characteristically have low signal levels. Intravital microscopy introduces the additional problem of motion artifacts, resulting from respiratory motion and heartbeat from specimens imaged in vivo. This paper describes an image registration technique for use with sequences of intravital microscopy images collected in time-series or in 3D volumes. Our registration method involves both rigid and non-rigid components. The rigid registration component corrects global image translations, while the non-rigid component manipulates a uniform grid of control points defined by B-splines. Each control point is optimized by minimizing a cost function consisting of two parts: a term to define image similarity, and a term to ensure deformation grid smoothness. Experimental results indicate that this approach is promising based on the analysis of several image volumes collected from the kidney, lung, and salivary gland of living rodents. PMID:22092443
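
    A two-term cost of this kind, an image-similarity term plus a deformation-smoothness term on the control grid, can be sketched as follows (using SSD for similarity and squared second differences for smoothness as illustrative stand-ins; the paper's exact terms may differ):

```python
import numpy as np

def registration_cost(fixed, warped, grid_disp, alpha=0.1):
    """Two-term cost: image similarity (mean squared difference) plus a
    smoothness penalty on the control-point displacement grid (squared
    second differences, a discrete bending-energy surrogate)."""
    similarity = np.mean((fixed - warped) ** 2)
    d2x = np.diff(grid_disp, n=2, axis=0)
    d2y = np.diff(grid_disp, n=2, axis=1)
    smoothness = np.sum(d2x ** 2) + np.sum(d2y ** 2)
    return similarity + alpha * smoothness

rng = np.random.default_rng(1)
fixed = rng.random((32, 32))
flat_grid = np.zeros((5, 5, 2))   # zero displacement: no bending penalty
print(registration_cost(fixed, fixed, flat_grid))  # 0.0 at perfect alignment
```

    The weight alpha trades alignment fidelity against grid smoothness, mirroring the two-part cost optimized per control point in the method above.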

  19. The One to Multiple Automatic High Accuracy Registration of Terrestrial LIDAR and Optical Images

    NASA Astrophysics Data System (ADS)

    Wang, Y.; Hu, C.; Xia, G.; Xue, H.

    2018-04-01

    Registration of terrestrial laser point clouds with close-range optical images is central to high-precision 3D reconstruction of cultural relics. Because the field currently demands high texture resolution, registering the point cloud with the image data in object reconstruction leads to a one-to-multiple problem: a single point cloud must be registered to multiple images. In current commercial software, this registration is performed manually, by partitioning the point cloud data, matching point cloud and image data by hand, and manually selecting corresponding 2D points between each image and the point cloud. This process not only greatly reduces efficiency, it also degrades registration precision and produces seams in the colored point cloud texture. To solve these problems, this paper takes a whole-object image as intermediate data and uses image matching to establish automatic one-to-one correspondences between the point cloud and the multiple images. Matching the central-projection reflectance-intensity image of the point cloud against the optical images automatically identifies corresponding feature points, and a Rodrigues-matrix spatial similarity transformation model with iterative weighted selection then achieves high-accuracy automatic registration of the two data types. This method is expected to support high-precision, high-efficiency automatic 3D reconstruction of cultural relic objects, and has both scientific research value and practical significance.
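
    The Rodrigues matrix referred to here parameterizes rotation. Rodrigues' formula builds a rotation matrix from a unit axis and an angle as R = I + sin(a) K + (1 - cos(a)) K^2, where K is the skew-symmetric cross-product matrix of the axis; a minimal sketch:

```python
import numpy as np

def rodrigues(axis, angle):
    """Rotation matrix from a unit axis and angle via Rodrigues' formula:
    R = I + sin(a) K + (1 - cos(a)) K^2, K the cross-product matrix."""
    ax = np.asarray(axis, dtype=float)
    ax = ax / np.linalg.norm(ax)
    K = np.array([[0.0, -ax[2], ax[1]],
                  [ax[2], 0.0, -ax[0]],
                  [-ax[1], ax[0], 0.0]])
    return np.eye(3) + np.sin(angle) * K + (1 - np.cos(angle)) * (K @ K)

R = rodrigues([0, 0, 1], np.pi / 2)     # 90 degrees about the z axis
print(R @ np.array([1.0, 0.0, 0.0]))    # x axis maps to ~[0, 1, 0]
```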

  20. Analysis of deformable image registration accuracy using computational modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhong Hualiang; Kim, Jinkoo; Chetty, Indrin J.

    2010-03-15

    Computer aided modeling of anatomic deformation, allowing various techniques and protocols in radiation therapy to be systematically verified and studied, has become increasingly attractive. In this study the potential issues in deformable image registration (DIR) were analyzed based on two numerical phantoms: one, a synthesized, low intensity gradient prostate image, and the other a lung patient's CT image data set. Each phantom was modeled with region-specific material parameters with its deformation solved using a finite element method. The resultant displacements were used to construct a benchmark to quantify the displacement errors of the Demons and B-Spline-based registrations. The results show that the accuracy of these registration algorithms depends on the chosen parameters, the selection of which is closely associated with the intensity gradients of the underlying images. For the Demons algorithm, both single resolution (SR) and multiresolution (MR) registrations required approximately 300 iterations to reach an accuracy of 1.4 mm mean error in the lung patient's CT image (and 0.7 mm mean error averaged in the lung only). For the low gradient prostate phantom, these algorithms (both SR and MR) required at least 1600 iterations to reduce their mean errors to 2 mm. For the B-Spline algorithms, best performance (mean errors of 1.9 mm for SR and 1.6 mm for MR, respectively) on the low gradient prostate was achieved using five grid nodes in each direction. Adding more grid nodes resulted in larger errors. For the lung patient's CT data set, the B-Spline registrations required ten grid nodes in each direction for highest accuracy (1.4 mm for SR and 1.5 mm for MR). The numbers of iterations or grid nodes required for optimal registrations depended on the intensity gradients of the underlying images. In summary, the performance of the Demons and B-Spline registrations has been quantitatively evaluated using numerical phantoms. The results show that parameter selection for optimal accuracy is closely related to the intensity gradients of the underlying images. Also, the finding that the DIR algorithms produce much lower errors in heterogeneous lung regions than in homogeneous (low intensity gradient) regions suggests that feature-based evaluation of deformable image registration accuracy must be viewed cautiously.
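
    The displacement errors reported above are mean Euclidean distances between the recovered displacement field and the finite-element benchmark. A minimal sketch of that error metric on synthetic fields:

```python
import numpy as np

def mean_displacement_error(u_recovered, u_benchmark):
    """Mean Euclidean distance (in field units, e.g. mm) between a recovered
    displacement field and a benchmark field, both shaped (..., 3)."""
    diff = np.asarray(u_recovered, dtype=float) - np.asarray(u_benchmark, dtype=float)
    return np.linalg.norm(diff, axis=-1).mean()

# Synthetic check: a uniform 1 mm offset along one axis gives exactly 1 mm error.
bench = np.zeros((16, 16, 16, 3))
recov = bench.copy()
recov[..., 0] += 1.0
print(mean_displacement_error(recov, bench))  # 1.0
```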
