Science.gov

Sample records for 2D image registration

  1. Reconstruction-based 3D/2D image registration.

    PubMed

    Tomazevic, Dejan; Likar, Bostjan; Pernus, Franjo

    2005-01-01

    In this paper we present a novel 3D/2D registration method in which a 3D image is first reconstructed from a few 2D X-ray images, and the preoperative 3D image is then brought into the best possible spatial correspondence with the reconstructed image by optimizing a similarity measure. Because the quality of the reconstructed image is generally low, we introduce a novel asymmetric mutual information similarity measure, which is able to cope with low image quality as well as with different imaging modalities. The novel 3D/2D registration method has been evaluated using a standardized evaluation methodology and publicly available 3D CT, 3DRX, and MR images and 2D X-ray images of two spine phantoms, for which gold standard registrations were known. In terms of robustness, reliability, and capture range, the proposed method outperformed the gradient-based method and the method based on digitally reconstructed radiographs (DRRs).
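
    A minimal sketch of the kind of intensity-based similarity measure optimized in such 3D/2D methods: plain mutual information estimated from a joint histogram (NumPy only). The asymmetric MI variant introduced in this record is not reproduced; the standard symmetric form below is purely illustrative.

        import numpy as np

        def mutual_information(fixed, moving, bins=32):
            """Standard mutual information between two intensity images/volumes.
            (The record above uses an asymmetric MI variant tailored to
            low-quality reconstructions; this is the plain symmetric form.)"""
            # Joint histogram of corresponding intensities.
            joint, _, _ = np.histogram2d(fixed.ravel(), moving.ravel(), bins=bins)
            pxy = joint / joint.sum()
            px = pxy.sum(axis=1)   # marginal of the fixed image
            py = pxy.sum(axis=0)   # marginal of the moving image
            nz = pxy > 0           # avoid log(0)
            return np.sum(pxy[nz] * np.log(pxy[nz] / (px[:, None] * py[None, :])[nz]))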

  2. 2D/3D Image Registration using Regression Learning

    PubMed Central

    Chou, Chen-Rui; Frederick, Brandon; Mageras, Gig; Chang, Sha; Pizer, Stephen

    2013-01-01

    In computer vision and image analysis, image registration between 2D projections and a 3D image that achieves high accuracy and near real-time computation is challenging. In this paper, we propose a novel method that can rapidly detect an object’s 3D rigid motion or deformation from a 2D projection image or a small set thereof. The method is called CLARET (Correction via Limited-Angle Residues in External Beam Therapy) and consists of two stages: shape-space and regression learning, followed by registration. In the registration stage, linear operators are used to iteratively estimate the motion/deformation parameters based on the current intensity residue between the target projection(s) and the digitally reconstructed radiograph(s) (DRRs) of the estimated 3D image. The method determines the linear operators via a two-step learning process. First, it builds a low-order parametric model of the image region’s motion/deformation shape space from its prior 3D images. Second, using learning-time samples produced from the 3D images, it formulates the relationships between the model parameters and the co-varying 2D projection intensity residues by multi-scale linear regressions. The calculated multi-scale regression matrices yield the coarse-to-fine linear operators used to estimate the model parameters from the 2D projection intensity residues during registration. The method’s application to image-guided radiation therapy (IGRT) requires only a few seconds and yields good results in localizing a tumor under rigid motion in the head and neck and under respiratory deformation in the lung, using one treatment-time 2D imaging projection or a small set thereof. PMID:24058278
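
    The regression-learning idea above can be illustrated with a single-scale linear operator fitted by least squares. The arrays below are synthetic stand-ins for the paper's learning-time samples, and the real method chains multi-scale, coarse-to-fine operators rather than the single matrix shown here.

        import numpy as np

        # Hypothetical training data: each row pairs a known parameter offset with
        # the flattened projection intensity residue it produces (from simulated DRRs).
        rng = np.random.default_rng(0)
        n_samples, n_pixels, n_params = 200, 4096, 6
        residues = rng.standard_normal((n_samples, n_pixels))   # residue images, flattened
        params = rng.standard_normal((n_samples, n_params))     # motion/deformation parameters

        # Learn a linear operator W mapping residues -> parameters (least squares).
        W, *_ = np.linalg.lstsq(residues, params, rcond=None)

        def estimate_update(residue_image):
            """One registration iteration: map the current intensity residue to an
            estimated parameter correction (single scale only, for illustration)."""
            return residue_image.ravel() @ W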

  3. A novel point cloud registration using 2D image features

    NASA Astrophysics Data System (ADS)

    Lin, Chien-Chou; Tai, Yen-Chou; Lee, Jhong-Jin; Chen, Yong-Sheng

    2017-01-01

    Since a 3D scanner captures only one view of a 3D object at a time, registration of multiple scenes is the key issue in 3D modeling. This paper presents a novel and efficient 3D registration method based on 2D local feature matching. The proposed method transforms the point clouds into 2D bearing-angle images and then uses a 2D feature-based matching method, SURF, to find matching pixel pairs between two images. The corresponding points of the 3D point clouds can be obtained from those pixel pairs. Since the corresponding pairs are sorted by the distance between matching features, only the top half of the corresponding pairs are used to find the optimal rotation matrix by least-squares approximation. In this paper, the optimal rotation matrix is derived by the orthogonal Procrustes method (an SVD-based approach). Therefore, the 3D model of an object can be reconstructed by aligning those point clouds with the optimal transformation matrix. Experimental results show that the accuracy of the proposed method is close to that of ICP, but the computation cost is reduced significantly. The performance is six times faster than the generalized-ICP algorithm. Furthermore, while ICP requires high alignment similarity between two scenes, the proposed method is robust to larger differences in viewing angle.
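
    A compact sketch of the SVD-based orthogonal Procrustes (Kabsch) step described above, which recovers the rigid transform from the retained point pairs. Bearing-angle image generation and SURF matching are assumed to have happened upstream; only the closed-form alignment is shown.

        import numpy as np

        def rigid_align(src, dst):
            """Optimal rotation R and translation t aligning matched 3D points
            src -> dst in a least-squares sense (orthogonal Procrustes / SVD)."""
            src_c, dst_c = src - src.mean(axis=0), dst - dst.mean(axis=0)
            U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
            d = np.sign(np.linalg.det(Vt.T @ U.T))            # guard against reflections
            R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
            t = dst.mean(axis=0) - R @ src.mean(axis=0)
            return R, t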

  4. A review of 3D/2D registration methods for image-guided interventions.

    PubMed

    Markelj, P; Tomaževič, D; Likar, B; Pernuš, F

    2012-04-01

    Registration of pre- and intra-interventional data is one of the key technologies for image-guided radiation therapy, radiosurgery, minimally invasive surgery, endoscopy, and interventional radiology. In this paper, we survey those 3D/2D data registration methods that utilize 3D computed tomography or magnetic resonance images as the pre-interventional data and 2D X-ray projection images as the intra-interventional data. The 3D/2D registration methods are reviewed with respect to image modality, image dimensionality, registration basis, geometric transformation, user interaction, optimization procedure, subject, and object of registration.

  5. 3D-2D registration of cerebral angiograms: a method and evaluation on clinical images.

    PubMed

    Mitrovic, Uroš; Špiclin, Žiga; Likar, Boštjan; Pernuš, Franjo

    2013-08-01

    Endovascular image-guided interventions (EIGI) involve navigation of a catheter through the vasculature followed by application of treatment at the site of the anomaly, using live 2D projection images for guidance. 3D images acquired prior to EIGI are used to quantify the vascular anomaly and plan the intervention. If fused with the information in live 2D images, they can also facilitate navigation and treatment. For this purpose, 3D-2D image registration is required. Although several 3D-2D registration methods for EIGI achieve registration accuracy below 1 mm, their clinical application is still limited by insufficient robustness or reliability. In this paper, we propose a 3D-2D registration method based on matching a 3D vasculature model to intensity gradients of live 2D images. To objectively validate 3D-2D registration methods, we acquired a clinical image database of 10 patients undergoing cerebral EIGI and established "gold standard" registrations by aligning fiducial markers in 3D and 2D images. The proposed method had mean registration accuracy below 0.65 mm, which was comparable to tested state-of-the-art methods, and execution time below 1 s. With the highest rate of successful registrations and the highest capture range, the proposed method was the most robust and thus a good candidate for application in EIGI.

  6. 3D/2D image registration: the impact of X-ray views and their number.

    PubMed

    Tomazevic, Dejan; Likar, Bostjan; Pernus, Franjo

    2007-01-01

    An important part of image-guided radiation therapy or surgery is the registration of a three-dimensional (3D) preoperative image to two-dimensional (2D) images of the patient. It is expected that the accuracy and robustness of a 3D/2D image registration method depend not solely on the registration method itself but also on the number and projections (views) of the intraoperative images. In this study, we systematically investigate these factors by using registered image data, comprising CT and X-ray images of a cadaveric lumbar spine phantom, and the recently proposed 3D/2D registration method. The results indicate that the proportion of successful registrations (robustness) significantly increases when more X-ray images are used for registration.

  7. Comparison of simultaneous and sequential two-view registration for 3D/2D registration of vascular images.

    PubMed

    Pathak, Chetna; Van Horn, Mark; Weeks, Susan; Bullitt, Elizabeth

    2005-01-01

    Accurate 3D/2D vessel registration is complicated by issues of image quality, occlusion, and other problems. This study performs a quantitative comparison of 3D/2D vessel registration in which vessels segmented from preoperative CT or MR are registered with biplane x-ray angiograms by either a) simultaneous two-view registration with advance calculation of the relative pose of the two views, or b) sequential registration with each view. We conclude on the basis of phantom studies that, even in the absence of image errors, simultaneous two-view registration is more accurate than sequential registration. In more complex settings, including clinical conditions, the relative accuracy of simultaneous two-view registration is even greater.

  8. Automatic Masking for Robust 3D-2D Image Registration in Image-Guided Spine Surgery

    PubMed Central

    Ketcha, M. D.; De Silva, T.; Uneri, A.; Kleinszig, G.; Vogt, S.; Wolinsky, J.-P.; Siewerdsen, J. H.

    2016-01-01

    During spinal neurosurgery, patient-specific information, planning, and annotation such as vertebral labels can be mapped from preoperative 3D CT to intraoperative 2D radiographs via image-based 3D-2D registration. Such registration has been shown to provide a potentially valuable means of decision support in target localization as well as quality assurance of the surgical product. However, robust registration can be challenged by mismatch in image content between the preoperative CT and intraoperative radiographs, arising, for example, from anatomical deformation or the presence of surgical tools within the radiograph. In this work, we develop and evaluate methods for automatically mitigating the effect of content mismatch by leveraging the surgical planning data to assign greater weight to anatomical regions known to be reliable for registration and vital to the surgical task while removing problematic regions that are highly deformable or often occluded by surgical tools. We investigated two approaches to assigning variable weight (i.e., "masking") to image content and/or the similarity metric: (1) masking the preoperative 3D CT ("volumetric masking"); and (2) masking within the 2D similarity metric calculation ("projection masking"). The accuracy of registration was evaluated in terms of projection distance error (PDE) in 61 cases selected from an IRB-approved clinical study. The best performing of the masking techniques was found to reduce the rate of gross failure (PDE > 20 mm) from 11.48% to 5.57% in this challenging retrospective data set. These approaches provided robustness to content mismatch and eliminated distinct failure modes of registration. Such improvement was gained without additional workflow and has motivated incorporation of the masking methods within a system under development for prospective clinical studies. PMID:27335531
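
    A hedged illustration of "projection masking": a per-pixel weight map de-emphasizes problematic regions (e.g., surgical tools) inside the 2D similarity computation. The weighted normalized cross-correlation below is a stand-in; the study's actual similarity metric and mask construction are more involved.

        import numpy as np

        def masked_ncc(drr, radiograph, weights):
            """Normalized cross-correlation with per-pixel weights: pixels covered by
            tools or highly deformable anatomy get weight ~0, so they do not drive
            the similarity metric."""
            w = weights / weights.sum()
            mu_a = np.sum(w * drr)
            mu_b = np.sum(w * radiograph)
            a, b = drr - mu_a, radiograph - mu_b
            cov = np.sum(w * a * b)
            return cov / (np.sqrt(np.sum(w * a * a) * np.sum(w * b * b)) + 1e-12)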

  9. Automatic Masking for Robust 3D-2D Image Registration in Image-Guided Spine Surgery.

    PubMed

    Ketcha, M D; De Silva, T; Uneri, A; Kleinszig, G; Vogt, S; Wolinsky, J-P; Siewerdsen, J H

    During spinal neurosurgery, patient-specific information, planning, and annotation such as vertebral labels can be mapped from preoperative 3D CT to intraoperative 2D radiographs via image-based 3D-2D registration. Such registration has been shown to provide a potentially valuable means of decision support in target localization as well as quality assurance of the surgical product. However, robust registration can be challenged by mismatch in image content between the preoperative CT and intraoperative radiographs, arising, for example, from anatomical deformation or the presence of surgical tools within the radiograph. In this work, we develop and evaluate methods for automatically mitigating the effect of content mismatch by leveraging the surgical planning data to assign greater weight to anatomical regions known to be reliable for registration and vital to the surgical task while removing problematic regions that are highly deformable or often occluded by surgical tools. We investigated two approaches to assigning variable weight (i.e., "masking") to image content and/or the similarity metric: (1) masking the preoperative 3D CT ("volumetric masking"); and (2) masking within the 2D similarity metric calculation ("projection masking"). The accuracy of registration was evaluated in terms of projection distance error (PDE) in 61 cases selected from an IRB-approved clinical study. The best performing of the masking techniques was found to reduce the rate of gross failure (PDE > 20 mm) from 11.48% to 5.57% in this challenging retrospective data set. These approaches provided robustness to content mismatch and eliminated distinct failure modes of registration. Such improvement was gained without additional workflow and has motivated incorporation of the masking methods within a system under development for prospective clinical studies.

  10. Automatic masking for robust 3D-2D image registration in image-guided spine surgery

    NASA Astrophysics Data System (ADS)

    Ketcha, M. D.; De Silva, T.; Uneri, A.; Kleinszig, G.; Vogt, S.; Wolinsky, J.-P.; Siewerdsen, J. H.

    2016-03-01

    During spinal neurosurgery, patient-specific information, planning, and annotation such as vertebral labels can be mapped from preoperative 3D CT to intraoperative 2D radiographs via image-based 3D-2D registration. Such registration has been shown to provide a potentially valuable means of decision support in target localization as well as quality assurance of the surgical product. However, robust registration can be challenged by mismatch in image content between the preoperative CT and intraoperative radiographs, arising, for example, from anatomical deformation or the presence of surgical tools within the radiograph. In this work, we develop and evaluate methods for automatically mitigating the effect of content mismatch by leveraging the surgical planning data to assign greater weight to anatomical regions known to be reliable for registration and vital to the surgical task while removing problematic regions that are highly deformable or often occluded by surgical tools. We investigated two approaches to assigning variable weight (i.e., "masking") to image content and/or the similarity metric: (1) masking the preoperative 3D CT ("volumetric masking"); and (2) masking within the 2D similarity metric calculation ("projection masking"). The accuracy of registration was evaluated in terms of projection distance error (PDE) in 61 cases selected from an IRB-approved clinical study. The best performing of the masking techniques was found to reduce the rate of gross failure (PDE > 20 mm) from 11.48% to 5.57% in this challenging retrospective data set. These approaches provided robustness to content mismatch and eliminated distinct failure modes of registration. Such improvement was gained without additional workflow and has motivated incorporation of the masking methods within a system under development for prospective clinical studies.

  11. A faster method for 3D/2D medical image registration--a simulation study.

    PubMed

    Birkfellner, Wolfgang; Wirth, Joachim; Burgstaller, Wolfgang; Baumann, Bernard; Staedele, Harald; Hammer, Beat; Gellrich, Niels Claudius; Jacob, Augustinus Ludwig; Regazzoni, Pietro; Messmer, Peter

    2003-08-21

    3D/2D patient-to-computed-tomography (CT) registration is a method to determine a transformation that maps two coordinate systems by comparing a projection image rendered from CT to a real projection image. Iterative variation of the CT's position between rendering steps finally leads to exact registration. Applications include exact patient positioning in radiation therapy, calibration of surgical robots, and pose estimation in computer-aided surgery. One of the problems associated with 3D/2D registration is the fact that finding a registration involves solving a minimization problem in six degrees of freedom (DOF) of motion. This results in considerable time requirements, since for each iteration step at least one volume rendering has to be computed. We show that by choosing an appropriate world coordinate system and by applying a 2D/2D registration method in each iteration step, the number of iterations can be grossly reduced from n^6 to n^5. Here, n is the number of discrete variations around a given coordinate. Depending on the configuration of the optimization algorithm, this reduces the total number of iterations necessary to at least 1/3 of its original value. The method was implemented and extensively tested on simulated x-ray images of a tibia, a pelvis and a skull base. When using one projective image and a discrete full parameter space search for solving the optimization problem, average accuracy was found to be 1.0° ± 0.6° and 4.1 ± 1.9 mm for a registration in six parameters, and 1.0° ± 0.7° and 4.2 ± 1.6 mm when using the 5 + 1 DOF method described in this paper. Time requirements were reduced by a factor of 3.1. We conclude that this hardware-independent optimization of 3D/2D registration is a step towards increasing the acceptance of this promising method for a wide number of clinical applications.
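
    The claimed search-space reduction is easy to make concrete: with n discrete steps per parameter, an exhaustive 6-DOF search costs n^6 renderings, while resolving one in-plane parameter by a cheap 2D/2D registration leaves n^5. A trivial worked example:

        # Renderings needed for 6-DOF vs. 5+1-DOF search, n steps per parameter.
        n = 10
        print(n**6, n**5, n**6 // n**5)   # 1000000 100000 10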

  12. Model-based 3D/2D deformable registration of MR images.

    PubMed

    Marami, Bahram; Sirouspour, Shahin; Capson, David W

    2011-01-01

    A method is proposed for automatic registration of 3D preoperative magnetic resonance images of deformable tissue to a sequence of its 2D intraoperative images. The algorithm employs a dynamic continuum mechanics model of the deformation and similarity (distance) measures such as correlation ratio, mutual information or sum of squared differences for registration. The registration is solely based on information present in the 3D preoperative and 2D intraoperative images and does not require fiducial markers, feature extraction or image segmentation. Results of experiments with a biopsy training breast phantom show that the proposed method can perform well in the presence of large deformations. This is particularly useful for clinical applications such as MR-based breast biopsy where large tissue deformations occur.

  13. Simultaneous 3D–2D image registration and C-arm calibration: Application to endovascular image-guided interventions

    SciTech Connect

    Mitrović, Uroš; Pernuš, Franjo; Likar, Boštjan; Špiclin, Žiga

    2015-11-15

    Purpose: Three-dimensional to two-dimensional (3D–2D) image registration is a key to fusion and simultaneous visualization of valuable information contained in 3D pre-interventional and 2D intra-interventional images with the final goal of image guidance of a procedure. In this paper, the authors focus on 3D–2D image registration within the context of intracranial endovascular image-guided interventions (EIGIs), where the 3D and 2D images are generally acquired with the same C-arm system. The accuracy and robustness of any 3D–2D registration method, to be used in a clinical setting, is influenced by (1) the method itself, (2) uncertainty of initial pose of the 3D image from which registration starts, (3) uncertainty of C-arm’s geometry and pose, and (4) the number of 2D intra-interventional images used for registration, which is generally one and at most two. The study of these influences requires rigorous and objective validation of any 3D–2D registration method against a highly accurate reference or “gold standard” registration, performed on clinical image datasets acquired in the context of the intervention. Methods: The registration process is split into two sequential, i.e., initial and final, registration stages. The initial stage is either machine-based or template matching. The latter aims to reduce possibly large in-plane translation errors by matching a projection of the 3D vessel model and 2D image. In the final registration stage, four state-of-the-art intrinsic image-based 3D–2D registration methods, which involve simultaneous refinement of rigid-body and C-arm parameters, are evaluated. For objective validation, the authors acquired an image database of 15 patients undergoing cerebral EIGI, for which accurate gold standard registrations were established by fiducial marker coregistration. Results: Based on target registration error, the obtained success rates of 3D to a single 2D image registration after initial machine-based and

  14. 2D Ultrasound and 3D MR Image Registration of the Prostate for Brachytherapy Surgical Navigation

    PubMed Central

    Zhang, Shihui; Jiang, Shan; Yang, Zhiyong; Liu, Ranlu

    2015-01-01

    Two-dimensional (2D) ultrasound (US) images are widely used in minimally invasive prostate procedures owing to their noninvasive nature and convenience. However, the poor quality of US images makes them difficult to use as a guidance utility. To address this limitation, we propose a multimodality image-guided navigation module that registers 2D US images with magnetic resonance imaging (MRI) based on high-quality preoperative models. A two-step spatial registration method, combining manual alignment with a rapid mutual information (MI) optimization algorithm, is used to complete the procedure. In addition, a three-dimensional (3D) reconstructed model of the prostate and surrounding organs is combined with the registered images to conduct the navigation. Registration accuracy is measured by calculating the target registration error (TRE). The results show that the error between the US and preoperative MR images of a polyvinyl alcohol hydrogel phantom is 1.37 ± 0.14 mm, with similar performance observed in patient experiments. PMID:26448009

  15. 3D/2D Model-to-Image Registration for Quantitative Dietary Assessment.

    PubMed

    Chen, Hsin-Chen; Jia, Wenyan; Li, Zhaoxin; Sun, Yung-Nien; Sun, Mingui

    2012-12-31

    Image-based dietary assessment is important for health monitoring and management because it can provide quantitative and objective information, such as food volume, nutrition type, and calorie intake. In this paper, a new framework, 3D/2D model-to-image registration, is presented for estimating food volume from a single-view 2D image containing a reference object (i.e., a circular dining plate). First, the food is segmented from the background image based on Otsu's thresholding and morphological operations. Next, the food volume is obtained from a user-selected 3D shape model. The position, orientation and scale of the model are optimized by a model-to-image registration process. Then, the circular plate in the image is fitted and its spatial information is used as constraints for solving the registration problem. Our method takes the global contour information of the shape model into account to obtain a reliable food volume estimate. Experimental results using regularly shaped test objects and realistically shaped food models with known volumes both demonstrate the effectiveness of our method.
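
    A minimal sketch of the segmentation step only (Otsu thresholding plus morphological clean-up) using scikit-image. Plate fitting and the 3D/2D model-to-image registration itself are outside this snippet, and whether food appears brighter or darker than the background determines the comparison direction.

        import numpy as np
        from skimage import color, filters, morphology

        def segment_food(rgb_image):
            """Otsu threshold on the grayscale image, then morphological clean-up.
            Assumes the food region is brighter than the background; flip the
            comparison otherwise."""
            gray = color.rgb2gray(rgb_image)
            mask = gray > filters.threshold_otsu(gray)
            mask = morphology.binary_opening(mask, morphology.disk(3))      # remove specks
            mask = morphology.remove_small_holes(mask, area_threshold=256)  # fill gaps
            return mask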

  16. 2D imaging and 3D sensing data acquisition and mutual registration for painting conservation

    NASA Astrophysics Data System (ADS)

    Fontana, Raffaella; Gambino, Maria Chiara; Greco, Marinella; Marras, Luciano; Pampaloni, Enrico M.; Pelagotti, Anna; Pezzati, Luca; Poggi, Pasquale

    2004-12-01

    We describe the application of 2D and 3D data acquisition and mutual registration to the conservation of paintings. RGB color image acquisition, IR and UV fluorescence imaging, together with the more recent hyperspectral imaging (32 bands), are among the most useful techniques in this field. They are generally meant to provide information on the painting materials, on the techniques employed, and on the object's state of conservation. However, only when the various images are perfectly registered to each other and to the 3D model can ambiguity be avoided and safe conclusions be drawn. We present the integration of 2D and 3D measurements carried out on two different paintings: "Madonna of the Yarnwinder" by Leonardo da Vinci and "Portrait of Lionello d'Este" by Pisanello, both painted in the XV century.

  17. 2D imaging and 3D sensing data acquisition and mutual registration for painting conservation

    NASA Astrophysics Data System (ADS)

    Fontana, Raffaella; Gambino, Maria Chiara; Greco, Marinella; Marras, Luciano; Pampaloni, Enrico M.; Pelagotti, Anna; Pezzati, Luca; Poggi, Pasquale

    2005-01-01

    We describe the application of 2D and 3D data acquisition and mutual registration to the conservation of paintings. RGB color image acquisition, IR and UV fluorescence imaging, together with the more recent hyperspectral imaging (32 bands), are among the most useful techniques in this field. They are generally meant to provide information on the painting materials, on the techniques employed, and on the object's state of conservation. However, only when the various images are perfectly registered to each other and to the 3D model can ambiguity be avoided and safe conclusions be drawn. We present the integration of 2D and 3D measurements carried out on two different paintings: "Madonna of the Yarnwinder" by Leonardo da Vinci and "Portrait of Lionello d'Este" by Pisanello, both painted in the XV century.

  18. GPU accelerated generation of digitally reconstructed radiographs for 2-D/3-D image registration.

    PubMed

    Dorgham, Osama M; Laycock, Stephen D; Fisher, Mark H

    2012-09-01

    Recent advances in programming languages for graphics processing units (GPUs) provide developers with a convenient way of implementing applications which can be executed on the CPU and GPU interchangeably. GPUs are becoming relatively cheap, powerful, and widely available hardware components, which can be used to perform intensive calculations. The last decade of hardware performance developments shows that GPU-based computation is progressing significantly faster than CPU-based computation, particularly if one considers the execution of highly parallelisable algorithms. Future predictions illustrate that this trend is likely to continue. In this paper, we introduce a way of accelerating 2-D/3-D image registration by developing a hybrid system which executes on the CPU and utilizes the GPU for parallelizing the generation of digitally reconstructed radiographs (DRRs). Based on the advancements of the GPU over the CPU, it is timely to exploit the benefits of many-core GPU technology by developing algorithms for DRR generation. Although some previous work has investigated the rendering of DRRs using the GPU, this paper investigates approximations which reduce the computational overhead while still maintaining a quality consistent with that needed for 2-D/3-D registration with sufficient accuracy to be clinically acceptable in certain applications of radiation oncology. Furthermore, by comparing implementations of 2-D/3-D registration on the CPU and GPU, we investigate current performance and propose an optimal framework for PC implementations addressing the rigid registration problem. Using this framework, we are able to render DRR images from a 256×256×133 CT volume in ~24 ms using an NVidia GeForce 8800 GTX and in ~2 ms using NVidia GeForce GTX 580. In addition to applications requiring fast automatic patient setup, these levels of performance suggest image-guided radiation therapy at video frame rates is technically feasible using relatively low cost PC
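
    A toy DRR generator for intuition only: parallel-ray line integrals obtained by summing a hypothetical CT volume along one axis. The paper's GPU implementation performs perspective ray casting through the real C-arm geometry, which this sketch does not attempt.

        import numpy as np

        def simple_drr(ct_volume, axis=0):
            """Toy DRR: sum a crude attenuation proxy along parallel rays (one volume
            axis). Real 2-D/3-D registration DRRs use perspective ray casting."""
            mu = np.clip(ct_volume + 1000.0, 0, None)    # rough HU -> attenuation proxy
            projection = mu.sum(axis=axis)
            return projection / projection.max()         # normalize for display/comparison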

  19. 3D-2D Deformable Image Registration Using Feature-Based Nonuniform Meshes

    PubMed Central

    Guo, Xiaohu; Cai, Yiqi; Yang, Yin; Wang, Jing; Jia, Xun

    2016-01-01

    By using prior information of planning CT images and feature-based nonuniform meshes, this paper demonstrates that volumetric images can be efficiently registered with a very small portion of 2D projection images of a Cone-Beam Computed Tomography (CBCT) scan. After a density field is computed based on the extracted feature edges from planning CT images, nonuniform tetrahedral meshes will be automatically generated to better characterize the image features according to the density field; that is, finer meshes are generated for features. The displacement vector fields (DVFs) are specified at the mesh vertices to drive the deformation of original CT images. Digitally reconstructed radiographs (DRRs) of the deformed anatomy are generated and compared with corresponding 2D projections. DVFs are optimized to minimize the objective function including differences between DRRs and projections and the regularity. To further accelerate the above 3D-2D registration, a procedure to obtain good initial deformations by deforming the volume surface to match 2D body boundary on projections has been developed. This complete method is evaluated quantitatively by using several digital phantoms and data from head and neck cancer patients. The feature-based nonuniform meshing method leads to better results than either uniform orthogonal grid or uniform tetrahedral meshes. PMID:27019849

  20. 3D-2D Deformable Image Registration Using Feature-Based Nonuniform Meshes.

    PubMed

    Zhong, Zichun; Guo, Xiaohu; Cai, Yiqi; Yang, Yin; Wang, Jing; Jia, Xun; Mao, Weihua

    2016-01-01

    By using prior information of planning CT images and feature-based nonuniform meshes, this paper demonstrates that volumetric images can be efficiently registered with a very small portion of 2D projection images of a Cone-Beam Computed Tomography (CBCT) scan. After a density field is computed based on the extracted feature edges from planning CT images, nonuniform tetrahedral meshes will be automatically generated to better characterize the image features according to the density field; that is, finer meshes are generated for features. The displacement vector fields (DVFs) are specified at the mesh vertices to drive the deformation of original CT images. Digitally reconstructed radiographs (DRRs) of the deformed anatomy are generated and compared with corresponding 2D projections. DVFs are optimized to minimize the objective function including differences between DRRs and projections and the regularity. To further accelerate the above 3D-2D registration, a procedure to obtain good initial deformations by deforming the volume surface to match 2D body boundary on projections has been developed. This complete method is evaluated quantitatively by using several digital phantoms and data from head and neck cancer patients. The feature-based nonuniform meshing method leads to better results than either uniform orthogonal grid or uniform tetrahedral meshes.

  1. Registration of 2D x-ray images to 3D MRI by generating pseudo-CT data

    NASA Astrophysics Data System (ADS)

    van der Bom, M. J.; Pluim, J. P. W.; Gounis, M. J.; van de Kraats, E. B.; Sprinkhuizen, S. M.; Timmer, J.; Homan, R.; Bartels, L. W.

    2011-02-01

    Spatial and soft tissue information provided by magnetic resonance imaging can be very valuable during image-guided procedures, where usually only real-time two-dimensional (2D) x-ray images are available. Registration of 2D x-ray images to three-dimensional (3D) magnetic resonance imaging (MRI) data, acquired prior to the procedure, can provide optimal information to guide the procedure. However, registering x-ray images to MRI data is not a trivial task because of their fundamental difference in tissue contrast. This paper presents a technique that generates pseudo-computed tomography (CT) data from multi-spectral MRI acquisitions that is sufficiently similar to real CT data to enable registration of x-ray to MRI with accuracy comparable to that of x-ray to CT registration. The method is based on a k-nearest-neighbors (kNN) regression strategy which labels voxels of MRI data with CT Hounsfield units. The regression method uses multi-spectral MRI intensities and intensity gradients as features to discriminate between various tissue types. The efficacy of using pseudo-CT data for registration of x-ray to MRI was tested on ex vivo animal data. 2D-3D registration experiments using CT and pseudo-CT data of multiple subjects were performed with a commonly used 2D-3D registration algorithm. On average, the median target registration error for registration of two x-ray images to MRI data was approximately 1 mm larger than for x-ray to CT registration. The authors have shown that pseudo-CT data generated from multi-spectral MRI facilitate registration of MRI to x-ray images. From the experiments it could be concluded that the accuracy achieved was comparable to that of registering x-ray images to CT data.
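
    A hedged sketch of the kNN-regression idea with scikit-learn: per-voxel MR-derived features are mapped to CT Hounsfield units. The feature vectors and training pairs below are random placeholders; in the paper they come from co-registered multi-spectral MR and CT of training subjects.

        import numpy as np
        from sklearn.neighbors import KNeighborsRegressor

        # Placeholder training data: per-voxel MR intensities/gradients (features)
        # paired with CT Hounsfield units (labels) from co-registered training scans.
        rng = np.random.default_rng(1)
        features_train = rng.standard_normal((5000, 6))    # e.g. 3 MR channels + 3 gradients
        hu_train = rng.uniform(-1000, 2000, size=5000)

        knn = KNeighborsRegressor(n_neighbors=5, weights="distance")
        knn.fit(features_train, hu_train)

        # Label the voxels of a new MR volume with pseudo-CT values.
        features_new = rng.standard_normal((1000, 6))
        pseudo_ct = knn.predict(features_new)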

  2. Clinical Assessment of 2D/3D Registration Accuracy in 4 Major Anatomic Sites Using On-Board 2D Kilovoltage Images for 6D Patient Setup.

    PubMed

    Li, Guang; Yang, T Jonathan; Furtado, Hugo; Birkfellner, Wolfgang; Ballangrud, Åse; Powell, Simon N; Mechalakos, James

    2015-06-01

    To provide a comprehensive assessment of patient setup accuracy in 6 degrees of freedom (DOFs) using 2-dimensional/3-dimensional (2D/3D) image registration with on-board 2-dimensional kilovoltage (OB-2DkV) radiographic images, we evaluated cranial, head and neck (HN), and thoracic and abdominal sites under clinical conditions. A fast 2D/3D image registration method using a graphics processing unit (GPU) was modified for registration between OB-2DkV and 3D simulation computed tomography (simCT) images, with 3D/3D registration as the gold standard for 6DOF alignment. In 2D/3D registration, body roll rotation was obtained solely by matching orthogonal OB-2DkV images with a series of digitally reconstructed radiographs (DRRs) from simCT with a small rotational increment along the gantry rotation axis. The window/level adjustments for optimal visualization of the bone in OB-2DkV and DRRs were performed prior to registration. Ideal patient alignment at the isocenter was calculated and used as an initial registration position. In 3D/3D registration, cone-beam CT (CBCT) was aligned to simCT on bony structures using a bone density filter in 6DOF. Included in this retrospective study were 37 patients treated in 55 fractions with frameless stereotactic radiosurgery or stereotactic body radiotherapy for cranial and paraspinal cancer. A cranial phantom was used to serve as a control. In all cases, CBCT images were acquired for patient setup with subsequent OB-2DkV verification. It was found that the accuracy of the 2D/3D registration was 0.0 ± 0.5 mm and 0.1° ± 0.4° in the phantom. In patients, accuracy is site dependent due to deformation of the anatomy: 0.2 ± 1.6 mm and -0.4° ± 1.2° on average for each dimension for the cranial site, 0.7 ± 1.6 mm and 0.3° ± 1.3° for HN, 0.7 ± 2.0 mm and -0.7° ± 1.1° for the thorax, and 1.1 ± 2.6 mm and -0.5° ± 1.9° for the abdomen. Anatomical deformation and presence of soft tissue in 2D/3D registration affect the consistency with

  3. Clinical Assessment of 2D/3D Registration Accuracy in 4 Major Anatomic Sites Using On-Board 2D Kilovoltage Images for 6D Patient Setup

    PubMed Central

    Li, Guang; Yang, T. Jonathan; Furtado, Hugo; Birkfellner, Wolfgang; Ballangrud, Åse; Powell, Simon N.; Mechalakos, James

    2015-01-01

    To provide a comprehensive assessment of patient setup accuracy in 6 degrees of freedom (DOFs) using 2-dimensional/3-dimensional (2D/3D) image registration with on-board 2-dimensional kilovoltage (OB-2DkV) radiographic images, we evaluated cranial, head and neck (HN), and thoracic and abdominal sites under clinical conditions. A fast 2D/3D image registration method using a graphics processing unit (GPU) was modified for registration between OB-2DkV and 3D simulation computed tomography (simCT) images, with 3D/3D registration as the gold standard for 6DOF alignment. In 2D/3D registration, body roll rotation was obtained solely by matching orthogonal OB-2DkV images with a series of digitally reconstructed radiographs (DRRs) from simCT with a small rotational increment along the gantry rotation axis. The window/level adjustments for optimal visualization of the bone in OB-2DkV and DRRs were performed prior to registration. Ideal patient alignment at the isocenter was calculated and used as an initial registration position. In 3D/3D registration, cone-beam CT (CBCT) was aligned to simCT on bony structures using a bone density filter in 6DOF. Included in this retrospective study were 37 patients treated in 55 fractions with frameless stereotactic radiosurgery or stereotactic body radiotherapy for cranial and paraspinal cancer. A cranial phantom was used to serve as a control. In all cases, CBCT images were acquired for patient setup with subsequent OB-2DkV verification. It was found that the accuracy of the 2D/3D registration was 0.0 ± 0.5 mm and 0.1° ± 0.4° in the phantom. In patients, accuracy is site dependent due to deformation of the anatomy: 0.2 ± 1.6 mm and −0.4° ± 1.2° on average for each dimension for the cranial site, 0.7 ± 1.6 mm and 0.3° ± 1.3° for HN, 0.7 ± 2.0 mm and −0.7° ± 1.1° for the thorax, and 1.1 ± 2.6 mm and −0.5° ± 1.9° for the abdomen. Anatomical deformation and presence of soft tissue in 2D/3D registration affect the consistency with

  4. Ultrasound 2D strain estimator based on image registration for ultrasound elastography

    NASA Astrophysics Data System (ADS)

    Yang, Xiaofeng; Torres, Mylin; Kirkpatrick, Stephanie; Curran, Walter J.; Liu, Tian

    2014-03-01

    In this paper, we present a new approach to calculate 2D strain through the registration of the pre- and post-compression (deformation) B-mode image sequences based on an intensity-based non-rigid registration algorithm (INRA). Compared with the most commonly used cross-correlation (CC) method, our approach is not constrained to any particular set of directions, and can overcome displacement estimation errors introduced by incoherent motion and variations in the signal under high compression. This INRA method was tested using phantom and in vivo data. The robustness of our approach was demonstrated in the axial direction as well as the lateral direction, where the standard CC method frequently fails. In addition, our approach copes well under large compression (over 6%). In the phantom study, we computed the strain image under various compressions and calculated the signal-to-noise (SNR) and contrast-to-noise (CNR) ratios. The SNR and CNR values of the INRA method were much higher than those calculated from the CC-based method. Furthermore, the clinical feasibility of our approach was demonstrated with in vivo data from patients with arm lymphedema.
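
    Once the non-rigid registration has produced a dense displacement field, 2D strain follows from its spatial derivatives. The sketch below shows only that last step with NumPy under a small-strain assumption; it is not the record's INRA algorithm itself.

        import numpy as np

        def strain_from_displacement(u_axial, u_lateral, dy=1.0, dx=1.0):
            """Small-strain components from a dense 2D displacement field, e.g. the
            field returned by registering pre-/post-compression B-mode frames.
            Axis 0 is depth (axial), axis 1 is lateral."""
            du_dy, du_dx = np.gradient(u_axial, dy, dx)
            dv_dy, dv_dx = np.gradient(u_lateral, dy, dx)
            eps_axial = du_dy                      # normal strain along the beam
            eps_lateral = dv_dx                    # normal strain across the beam
            eps_shear = 0.5 * (du_dx + dv_dy)      # symmetric shear component
            return eps_axial, eps_lateral, eps_shear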

  5. A 2D to 3D ultrasound image registration algorithm for robotically assisted laparoscopic radical prostatectomy

    NASA Astrophysics Data System (ADS)

    Esteghamatian, Mehdi; Pautler, Stephen E.; McKenzie, Charles A.; Peters, Terry M.

    2011-03-01

    Robotically assisted laparoscopic radical prostatectomy (RARP) is an effective approach to resect the diseased organ, with stereoscopic views of the targeted tissue improving the dexterity of the surgeons. However, since the laparoscopic view acquires only the surface image of the tissue, the underlying distribution of the cancer within the organ is not observed, making it difficult to make informed decisions on surgical margins and sparing of neurovascular bundles. One option to address this problem is to exploit registration to integrate the laparoscopic view with images of pre-operatively acquired dynamic contrast enhanced (DCE) MRI that can demonstrate the regions of malignant tissue within the prostate. Such a view potentially allows the surgeon to visualize the location of the malignancy with respect to the surrounding neurovascular structures, permitting a tissue-sparing strategy to be formulated directly based on the observed tumour distribution. If the tumour is close to the capsule, it may be determined that the adjacent neurovascular bundle (NVB) needs to be sacrificed within the surgical margin to ensure that any erupted tumour was resected. On the other hand, if the cancer is sufficiently far from the capsule, one or both NVBs may be spared. However, in order to realize such image integration, the pre-operative image needs to be fused with the laparoscopic view of the prostate. During the initial stages of the operation, the prostate must be tracked in real time so that the pre-operative MR image remains aligned with the patient coordinate system. In this study, we propose and investigate a novel 2D to 3D ultrasound image registration algorithm to track the prostate motion with an accuracy of 2.68 ± 1.31 mm.

  6. Dynamic tracking of a deformable tissue based on 3D-2D MR-US image registration

    NASA Astrophysics Data System (ADS)

    Marami, Bahram; Sirouspour, Shahin; Fenster, Aaron; Capson, David W.

    2014-03-01

    Real-time registration of pre-operative magnetic resonance (MR) or computed tomography (CT) images with intra-operative ultrasound (US) images can be a valuable tool in image-guided therapies and interventions. This paper presents an automatic method for dynamically tracking the deformation of a soft tissue based on registering pre-operative three-dimensional (3D) MR images to intra-operative two-dimensional (2D) US images. The registration algorithm is based on concepts in state estimation where a dynamic finite element (FE)-based linear elastic deformation model correlates the imaging data in the spatial and temporal domains. A Kalman-like filtering process estimates the unknown deformation states of the soft tissue using the deformation model and a measure of error between the predicted and the observed intra-operative imaging data. The error is computed based on an intensity-based distance metric, namely, the modality independent neighborhood descriptor (MIND), and no segmentation or feature extraction from images is required. The performance of the proposed method is evaluated by dynamically deforming 3D pre-operative MR images of a breast phantom tissue based on real-time 2D images obtained from an US probe. Experimental results on different registration scenarios showed that deformation tracking converges in a few iterations. The average target registration error on the plane of 2D US images for manually selected fiducial points was between 0.3 and 1.5 mm depending on the size of deformation.

  7. 3D/2D model-to-image registration applied to TIPS surgery.

    PubMed

    Jomier, Julien; Bullitt, Elizabeth; Van Horn, Mark; Pathak, Chetna; Aylward, Stephen R

    2006-01-01

    We have developed a novel model-to-image registration technique which aligns a 3-dimensional model of vasculature with two semiorthogonal fluoroscopic projections. Our vascular registration method is used to intra-operatively initialize the alignment of a catheter and a preoperative vascular model in the context of image-guided TIPS (Transjugular Intrahepatic Portosystemic Shunt formation) surgery. Registration optimization is driven by the intensity information from the projection pairs at sample points along the centerlines of the model. Our algorithm shows speed, accuracy and consistency given clinical data.

  8. Registration of 2D to 3D joint images using phase-based mutual information

    NASA Astrophysics Data System (ADS)

    Dalvi, Rupin; Abugharbieh, Rafeef; Pickering, Mark; Scarvell, Jennie; Smith, Paul

    2007-03-01

    Registration of two-dimensional to three-dimensional orthopaedic medical image data has important applications, particularly in the area of image-guided surgery and sports medicine. Fluoroscopy to computed tomography (CT) registration is an important case, wherein digitally reconstructed radiographs derived from the CT data are registered to the fluoroscopy data. Traditional registration metrics such as intensity-based mutual information (MI) typically work well but often suffer from gross misregistration errors when the image to be registered contains a partial view of the anatomy visible in the target image. Phase-based MI provides a robust alternative similarity measure which, in addition to possessing the general robustness and noise immunity that MI provides, also employs local phase information in the registration process, which makes it less susceptible to the aforementioned errors. In this paper, we propose using the complex wavelet transform for computing image phase information and incorporating that into a phase-based MI measure for image registration. Tests on a CT volume and 6 fluoroscopy images of the knee are presented. The femur and the tibia in the CT volume were individually registered to the fluoroscopy images using intensity-based MI, gradient-based MI and phase-based MI. Errors in the coordinates of fiducials present in the bone structures were used to assess the accuracy of the different registration schemes. Quantitative results demonstrate that the performance of intensity-based MI was the worst. Gradient-based MI performed slightly better, while phase-based MI results were the best, consistently producing the lowest errors.

  9. Nonrigid Registration of 2-D and 3-D Dynamic Cell Nuclei Images for Improved Classification of Subcellular Particle Motion

    PubMed Central

    Kim, Il-Han; Chen, Yi-Chun M.; Spector, David L.; Eils, Roland; Rohr, Karl

    2012-01-01

    The observed motion of subcellular particles in fluorescence microscopy image sequences of live cells is generally a superposition of the motion and deformation of the cell and the motion of the particles. Decoupling the two types of movements to enable accurate classification of the particle motion requires the application of registration algorithms. We have developed an intensity-based approach for nonrigid registration of multi-channel microscopy image sequences of cell nuclei. First, based on 3-D synthetic images we demonstrate that cell nucleus deformations change the observed motion types of particles and that our approach allows recovery of the original motion. Second, we have successfully applied our approach to register 2-D and 3-D real microscopy image sequences. A quantitative experimental comparison with previous approaches for nonrigid registration of cell microscopy images has also been performed. PMID:20840894

  10. Self-calibration of cone-beam CT geometry using 3D–2D image registration

    PubMed Central

    Ouadah, S; Stayman, J W; Gang, G J; Ehtiati, T; Siewerdsen, J H

    2016-01-01

    Robotic C-arms are capable of complex orbits that can increase field of view, reduce artifacts, improve image quality, and/or reduce dose; however, it can be challenging to obtain accurate, reproducible geometric calibration required for image reconstruction for such complex orbits. This work presents a method for geometric calibration for an arbitrary source-detector orbit by registering 2D projection data to a previously acquired 3D image. It also yields a method by which calibration of simple circular orbits can be improved. The registration uses a normalized gradient information similarity metric and the covariance matrix adaptation-evolution strategy optimizer for robustness against local minima and changes in image content. The resulting transformation provides a ‘self-calibration’ of system geometry. The algorithm was tested in phantom studies using both a cone-beam CT (CBCT) test-bench and a robotic C-arm (Artis Zeego, Siemens Healthcare) for circular and non-circular orbits. Self-calibration performance was evaluated in terms of the full-width at half-maximum (FWHM) of the point spread function in CBCT reconstructions, the reprojection error (RPE) of steel ball bearings placed on each phantom, and the overall quality and presence of artifacts in CBCT images. In all cases, self-calibration improved the FWHM—e.g. on the CBCT bench, FWHM = 0.86 mm for conventional calibration compared to 0.65 mm for self-calibration (p < 0.001). Similar improvements were measured in RPE—e.g. on the robotic C-arm, RPE = 0.73 mm for conventional calibration compared to 0.55 mm for self-calibration (p < 0.001). Visible improvement was evident in CBCT reconstructions using self-calibration, particularly about high-contrast, high-frequency objects (e.g. temporal bone air cells and a surgical needle). The results indicate that self-calibration can improve even upon systems with presumably accurate geometric calibration and is applicable to situations where conventional
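
    A simplified stand-in for the gradient-driven similarity used in this self-calibration work: mean normalized cross-correlation of the image gradients of a simulated and a measured projection. The actual normalized gradient information metric differs in detail; the sketch only conveys that agreement of edges, rather than raw intensities, drives the registration.

        import numpy as np

        def gradient_similarity(drr, projection, eps=1e-6):
            """Mean NCC of the row/column image gradients of a simulated DRR and a
            measured projection (illustrative stand-in for the paper's metric)."""
            def ncc(a, b):
                a = a - a.mean()
                b = b - b.mean()
                return np.sum(a * b) / (np.sqrt(np.sum(a * a) * np.sum(b * b)) + eps)

            d_row, d_col = np.gradient(drr)
            p_row, p_col = np.gradient(projection)
            return 0.5 * (ncc(d_row, p_row) + ncc(d_col, p_col))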

  11. Self-calibration of cone-beam CT geometry using 3D-2D image registration.

    PubMed

    Ouadah, S; Stayman, J W; Gang, G J; Ehtiati, T; Siewerdsen, J H

    2016-04-07

    Robotic C-arms are capable of complex orbits that can increase field of view, reduce artifacts, improve image quality, and/or reduce dose; however, it can be challenging to obtain accurate, reproducible geometric calibration required for image reconstruction for such complex orbits. This work presents a method for geometric calibration for an arbitrary source-detector orbit by registering 2D projection data to a previously acquired 3D image. It also yields a method by which calibration of simple circular orbits can be improved. The registration uses a normalized gradient information similarity metric and the covariance matrix adaptation-evolution strategy optimizer for robustness against local minima and changes in image content. The resulting transformation provides a 'self-calibration' of system geometry. The algorithm was tested in phantom studies using both a cone-beam CT (CBCT) test-bench and a robotic C-arm (Artis Zeego, Siemens Healthcare) for circular and non-circular orbits. Self-calibration performance was evaluated in terms of the full-width at half-maximum (FWHM) of the point spread function in CBCT reconstructions, the reprojection error (RPE) of steel ball bearings placed on each phantom, and the overall quality and presence of artifacts in CBCT images. In all cases, self-calibration improved the FWHM-e.g. on the CBCT bench, FWHM  =  0.86 mm for conventional calibration compared to 0.65 mm for self-calibration (p  <  0.001). Similar improvements were measured in RPE-e.g. on the robotic C-arm, RPE  =  0.73 mm for conventional calibration compared to 0.55 mm for self-calibration (p  <  0.001). Visible improvement was evident in CBCT reconstructions using self-calibration, particularly about high-contrast, high-frequency objects (e.g. temporal bone air cells and a surgical needle). The results indicate that self-calibration can improve even upon systems with presumably accurate geometric calibration and is

  12. Self-calibration of cone-beam CT geometry using 3D-2D image registration

    NASA Astrophysics Data System (ADS)

    Ouadah, S.; Stayman, J. W.; Gang, G. J.; Ehtiati, T.; Siewerdsen, J. H.

    2016-04-01

    Robotic C-arms are capable of complex orbits that can increase field of view, reduce artifacts, improve image quality, and/or reduce dose; however, it can be challenging to obtain accurate, reproducible geometric calibration required for image reconstruction for such complex orbits. This work presents a method for geometric calibration for an arbitrary source-detector orbit by registering 2D projection data to a previously acquired 3D image. It also yields a method by which calibration of simple circular orbits can be improved. The registration uses a normalized gradient information similarity metric and the covariance matrix adaptation-evolution strategy optimizer for robustness against local minima and changes in image content. The resulting transformation provides a ‘self-calibration’ of system geometry. The algorithm was tested in phantom studies using both a cone-beam CT (CBCT) test-bench and a robotic C-arm (Artis Zeego, Siemens Healthcare) for circular and non-circular orbits. Self-calibration performance was evaluated in terms of the full-width at half-maximum (FWHM) of the point spread function in CBCT reconstructions, the reprojection error (RPE) of steel ball bearings placed on each phantom, and the overall quality and presence of artifacts in CBCT images. In all cases, self-calibration improved the FWHM—e.g. on the CBCT bench, FWHM  =  0.86 mm for conventional calibration compared to 0.65 mm for self-calibration (p  <  0.001). Similar improvements were measured in RPE—e.g. on the robotic C-arm, RPE  =  0.73 mm for conventional calibration compared to 0.55 mm for self-calibration (p  <  0.001). Visible improvement was evident in CBCT reconstructions using self-calibration, particularly about high-contrast, high-frequency objects (e.g. temporal bone air cells and a surgical needle). The results indicate that self-calibration can improve even upon systems with presumably accurate geometric calibration and is

  13. Robust initialization of 2D-3D image registration using the projection-slice theorem and phase correlation

    SciTech Connect

    Bom, M. J. van der; Bartels, L. W.; Gounis, M. J.; Homan, R.; Timmer, J.; Viergever, M. A.; Pluim, J. P. W.

    2010-04-15

    Purpose: The image registration literature comprises many methods for 2D-3D registration for which accuracy has been established in a variety of applications. However, clinical application is limited by a small capture range. Initial offsets outside the capture range of a registration method will not converge to a successful registration. Previously reported capture ranges, defined as the 95% success range, are on the order of 4-11 mm mean target registration error. In this article, a relatively computationally inexpensive and robust estimation method is proposed with the objective to enlarge the capture range. Methods: The method uses the projection-slice theorem in combination with phase correlation in order to estimate the transform parameters, which provides an initialization of the subsequent registration procedure. Results: The feasibility of the method was evaluated by experiments using digitally reconstructed radiographs generated from in vivo 3D-RX data. With these experiments it was shown that the projection-slice theorem provides successful estimates of the rotational transform parameters for perspective projections and in case of translational offsets. The method was further tested on ex vivo ovine x-ray data. In 95% of the cases, the method yielded successful estimates for initial mean target registration errors up to 19.5 mm. Finally, the method was evaluated as an initialization method for an intensity-based 2D-3D registration method. The uninitialized and initialized registration experiments had success rates of 28.8% and 68.6%, respectively. Conclusions: The authors have shown that the initialization method based on the projection-slice theorem and phase correlation yields adequate initializations for existing registration methods, thereby substantially enlarging the capture range of these methods.
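
    The phase-correlation building block is straightforward to sketch: the normalized cross-power spectrum of two images peaks at their relative shift. The combination with the projection-slice theorem that makes this usable for 2D-3D initialization is not shown; this is only the basic 2D shift estimate.

        import numpy as np

        def phase_correlation_shift(img_a, img_b):
            """Estimate the integer translation between two equally sized images via
            phase correlation (peak of the inverse FFT of the normalized
            cross-power spectrum)."""
            F_a = np.fft.fft2(img_a)
            F_b = np.fft.fft2(img_b)
            cross_power = F_a * np.conj(F_b)
            cross_power /= np.abs(cross_power) + 1e-12
            corr = np.fft.ifft2(cross_power).real
            peak = np.unravel_index(np.argmax(corr), corr.shape)
            # Wrap indices above half the image size to negative shifts.
            return tuple(p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape))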

  14. Assessing 3D tunnel position in ACL reconstruction using a novel single image 3D-2D registration

    NASA Astrophysics Data System (ADS)

    Kang, X.; Yau, W. P.; Otake, Y.; Cheung, P. Y. S.; Hu, Y.; Taylor, R. H.

    2012-02-01

    The routinely used procedure for evaluating tunnel positions following anterior cruciate ligament (ACL) reconstructions based on standard X-ray images is known to pose difficulties in terms of obtaining accurate measures, especially in providing three-dimensional tunnel positions. This is largely due to the variability in individual knee joint pose relative to X-ray plates. Accurate results were reported using postoperative CT. However, its extensive usage in clinical routine is hampered by its major requirement of having CT scans of individual patients, which are not available for most ACL reconstructions. These difficulties are addressed through the proposed method, which aligns a knee model to X-ray images using our novel single-image 3D-2D registration method and then estimates the 3D tunnel position. In the proposed method, the alignment is achieved by using a novel contour-based 3D-2D registration method wherein image contours are treated as a set of oriented points. However, instead of using some form of orientation weighting function and multiplying it with a distance function, we formulate the 3D-2D registration as a probability density estimation using a mixture of von Mises-Fisher-Gaussian (vMFG) distributions and solve it through an expectation-maximization (EM) algorithm. Compared with the ground truth established from postoperative CT, our registration method in an experiment using a plastic phantom showed accurate results with errors of (-0.43° ± 1.19°, 0.45° ± 2.17°, 0.23° ± 1.05°) and (0.03 ± 0.55, -0.03 ± 0.54, -2.73 ± 1.64) mm. As for the entry point of the ACL tunnel, one of the key measurements, it was obtained with a high accuracy of 0.53 ± 0.30 mm distance error.

  15. Known-Component 3D-2D Registration for Image Guidance and Quality Assurance in Spine Surgery Pedicle Screw Placement

    PubMed Central

    Uneri, A.; Stayman, J. W.; De Silva, T.; Wang, A. S.; Kleinszig, G.; Vogt, S.; Khanna, A. J.; Wolinsky, J.-P.; Gokaslan, Z. L.; Siewerdsen, J. H.

    2015-01-01

    Purpose To extend the functionality of radiographic/fluoroscopic imaging systems already within standard spine surgery workflow to: 1) provide guidance of surgical device analogous to an external tracking system; and 2) provide intraoperative quality assurance (QA) of the surgical product. Methods Using fast, robust 3D-2D registration in combination with 3D models of known components (surgical devices), the 3D pose determination was solved to relate known components to 2D projection images and 3D preoperative CT in near-real-time. Exact and parametric models of the components were used as input to the algorithm to evaluate the effects of model fidelity. The proposed algorithm employs the covariance matrix adaptation evolution strategy (CMA-ES) to maximize gradient correlation (GC) between measured projections and simulated forward projections of components. Geometric accuracy was evaluated in a spine phantom in terms of target registration error at the tool tip (TREx), and angular deviation (TREϕ) from planned trajectory. Results Transpedicle surgical devices (probe tool and spine screws) were successfully guided with TREx <2 mm and TREϕ<0.5° given projection views separated by at least >30° (easily accommodated on a mobile C-arm). QA of the surgical product based on 3D-2D registration demonstrated the detection of pedicle screw breach with TREx <1 mm, demonstrating a trend of improved accuracy correlated to the fidelity of the component model employed. Conclusions 3D-2D registration combined with 3D models of known surgical components provides a novel method for near-real-time guidance and quality assurance using a mobile C-arm without external trackers or fiducial markers. Ongoing work includes determination of optimal views based on component shape and trajectory, improved robustness to anatomical deformation, and expanded preclinical testing in spine and intracranial surgeries. PMID:26028805
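
    The gradient correlation (GC) objective maximized by CMA-ES above can be sketched generically as the normalized cross-correlation of the horizontal and vertical image gradients, averaged over the two directions; the sketch below assumes this standard form and is not the authors' code.

```python
# Generic gradient correlation (GC) sketch, assumed standard form: the
# normalized cross-correlation of horizontal and vertical image gradients,
# averaged. Not the authors' implementation.
import numpy as np

def normalized_cross_correlation(a, b):
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum()) + 1e-12
    return (a * b).sum() / denom

def gradient_correlation(measured_projection, simulated_projection):
    gy_m, gx_m = np.gradient(measured_projection.astype(float))
    gy_s, gx_s = np.gradient(simulated_projection.astype(float))
    return 0.5 * (normalized_cross_correlation(gx_m, gx_s) +
                  normalized_cross_correlation(gy_m, gy_s))

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    drr = rng.random((128, 128))
    radiograph = drr + 0.05 * rng.random((128, 128))   # similar image, small noise
    print(gradient_correlation(radiograph, drr))       # close to 1.0
```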

  16. Known-component 3D-2D registration for image guidance and quality assurance in spine surgery pedicle screw placement

    NASA Astrophysics Data System (ADS)

    Uneri, A.; Stayman, J. W.; De Silva, T.; Wang, A. S.; Kleinszig, G.; Vogt, S.; Khanna, A. J.; Wolinsky, J.-P.; Gokaslan, Z. L.; Siewerdsen, J. H.

    2015-03-01

    Purpose. To extend the functionality of radiographic / fluoroscopic imaging systems already within standard spine surgery workflow to: 1) provide guidance of surgical device analogous to an external tracking system; and 2) provide intraoperative quality assurance (QA) of the surgical product. Methods. Using fast, robust 3D-2D registration in combination with 3D models of known components (surgical devices), the 3D pose determination was solved to relate known components to 2D projection images and 3D preoperative CT in near-real-time. Exact and parametric models of the components were used as input to the algorithm to evaluate the effects of model fidelity. The proposed algorithm employs the covariance matrix adaptation evolution strategy (CMA-ES) to maximize gradient correlation (GC) between measured projections and simulated forward projections of components. Geometric accuracy was evaluated in a spine phantom in terms of target registration error at the tool tip (TREx), and angular deviation (TREΦ) from planned trajectory. Results. Transpedicle surgical devices (probe tool and spine screws) were successfully guided with TREx<2 mm and TREΦ <0.5° given projection views separated by at least >30° (easily accommodated on a mobile C-arm). QA of the surgical product based on 3D-2D registration demonstrated the detection of pedicle screw breach with TREx<1 mm, demonstrating a trend of improved accuracy correlated to the fidelity of the component model employed. Conclusions. 3D-2D registration combined with 3D models of known surgical components provides a novel method for near-real-time guidance and quality assurance using a mobile C-arm without external trackers or fiducial markers. Ongoing work includes determination of optimal views based on component shape and trajectory, improved robustness to anatomical deformation, and expanded preclinical testing in spine and intracranial surgeries.
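
    The optimization pattern itself, CMA-ES maximizing an image similarity between a measured projection and a simulated projection of a component model, can be sketched as below. The sketch assumes the third-party Python `cma` package and uses a toy Gaussian-blob "renderer" in place of true forward projection of a component model; both are illustrative assumptions, not the authors' implementation.

```python
# Sketch of the optimization-loop pattern: CMA-ES maximizing a similarity
# between a measured projection and a simulated projection of a component.
# Requires the third-party `cma` package (pip install cma); the blob renderer
# is a toy stand-in for forward projection, not the authors' code.
import numpy as np
import cma

def render_blob(pose, shape=(64, 64)):
    """Toy stand-in for forward projection: a blob centred at pose=(row, col)."""
    rows, cols = np.mgrid[0:shape[0], 0:shape[1]]
    return np.exp(-((rows - pose[0]) ** 2 + (cols - pose[1]) ** 2) / (2 * 5.0 ** 2))

def negative_similarity(pose, measured):
    simulated = render_blob(pose)
    a = measured - measured.mean()
    b = simulated - simulated.mean()
    ncc = (a * b).sum() / (np.sqrt((a * a).sum() * (b * b).sum()) + 1e-12)
    return -ncc                      # CMA-ES minimizes, so negate the similarity

if __name__ == "__main__":
    measured = render_blob((40.0, 22.0))
    result = cma.fmin(negative_similarity, x0=[32.0, 32.0], sigma0=8.0,
                      args=(measured,), options={"verbose": -9})
    print(result[0])                 # best pose, approximately [40., 22.]
```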

  17. Development of fast patient position verification software using 2D-3D image registration and its clinical experience

    PubMed Central

    Mori, Shinichiro; Kumagai, Motoki; Miki, Kentaro; Fukuhara, Riki; Haneishi, Hideaki

    2015-01-01

    To improve treatment workflow, we developed a graphics processing unit (GPU)-based patient positional verification software application and integrated it into carbon-ion scanning beam treatment. Here, we evaluated the basic performance of the software. The algorithm provides 2D/3D registration matching using CT and orthogonal X-ray flat panel detector (FPD) images. The participants were 53 patients with tumors of the head and neck, prostate, or lung receiving carbon-ion beam treatment. 2D/3D-ITchi-Gime (ITG) performance was evaluated in terms of computation time and registration accuracy. Registration was computed using the similarity metrics gradient difference (GD), normalized mutual information (NMI), zero-mean normalized cross-correlation (ZNCC), and their combinations. Registration accuracy depended on the particular metric used. Representative examples were determined to have target registration error (TRE) = 0.45 ± 0.23 mm and angular error (AE) = 0.35 ± 0.18° with ZNCC + GD for a head and neck tumor; TRE = 0.12 ± 0.07 mm and AE = 0.16 ± 0.07° with ZNCC for a pelvic tumor; and TRE = 1.19 ± 0.78 mm and AE = 0.83 ± 0.61° with ZNCC for a lung tumor. Calculation time was less than 7.26 s. The new registration software has been successfully installed and implemented in our treatment process. We expect that it will improve both treatment workflow and treatment accuracy.
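
    The zero-mean normalized cross-correlation (ZNCC) term used above has a simple generic form; the sketch below illustrates it and its invariance to linear intensity changes between the FPD image and the DRR (a standard formulation, not the vendor software).

```python
# Generic zero-mean normalized cross-correlation (ZNCC) sketch between an
# orthogonal FPD image and a DRR; a standard formulation, not the vendor code.
import numpy as np

def zncc(image_a, image_b):
    a = image_a.astype(float) - image_a.mean()
    b = image_b.astype(float) - image_b.mean()
    return (a * b).sum() / (np.sqrt((a * a).sum() * (b * b).sum()) + 1e-12)

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    drr = rng.random((96, 96))
    fpd = 2.0 * drr + 0.3            # linear intensity change: ZNCC is invariant
    print(zncc(fpd, drr))            # 1.0 (up to floating-point error)
```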

  18. Development of fast patient position verification software using 2D-3D image registration and its clinical experience.

    PubMed

    Mori, Shinichiro; Kumagai, Motoki; Miki, Kentaro; Fukuhara, Riki; Haneishi, Hideaki

    2015-09-01

    To improve treatment workflow, we developed a graphics processing unit (GPU)-based patient positional verification software application and integrated it into carbon-ion scanning beam treatment. Here, we evaluated the basic performance of the software. The algorithm provides 2D/3D registration matching using CT and orthogonal X-ray flat panel detector (FPD) images. The participants were 53 patients with tumors of the head and neck, prostate, or lung receiving carbon-ion beam treatment. 2D/3D-ITchi-Gime (ITG) performance was evaluated in terms of computation time and registration accuracy. Registration was computed using the similarity metrics gradient difference (GD), normalized mutual information (NMI), zero-mean normalized cross-correlation (ZNCC), and their combinations. Registration accuracy depended on the particular metric used. Representative examples were determined to have target registration error (TRE) = 0.45 ± 0.23 mm and angular error (AE) = 0.35 ± 0.18° with ZNCC + GD for a head and neck tumor; TRE = 0.12 ± 0.07 mm and AE = 0.16 ± 0.07° with ZNCC for a pelvic tumor; and TRE = 1.19 ± 0.78 mm and AE = 0.83 ± 0.61° with ZNCC for a lung tumor. Calculation time was less than 7.26 s. The new registration software has been successfully installed and implemented in our treatment process. We expect that it will improve both treatment workflow and treatment accuracy.

  19. Registration of dynamic multiview 2D ultrasound and late gadolinium enhanced images of the heart: Application to hypertrophic cardiomyopathy characterization.

    PubMed

    Betancur, Julián; Simon, Antoine; Halbert, Edgar; Tavard, François; Carré, François; Hernández, Alfredo; Donal, Erwan; Schnell, Frédéric; Garreau, Mireille

    2016-02-01

    Describing and analyzing heart multiphysics requires the acquisition and fusion of multisensor cardiac images. Multisensor image fusion enables a combined analysis of these heterogeneous modalities. We propose to register intra-patient multiview 2D+t ultrasound (US) images with multiview late gadolinium-enhanced (LGE) images acquired during cardiac magnetic resonance imaging (MRI), in order to fuse mechanical and tissue state information. The proposed procedure registers both US and LGE to cine MRI. The correction of slice misalignment and the rigid registration of multiview LGE and cine MRI are studied in order to select the most appropriate similarity measure; this study showed that mutual information performs best both for LGE slice misalignment correction and for LGE-to-cine registration. Concerning US registration, dynamic endocardial contours resulting from speckle tracking echocardiography were exploited in a geometry-based dynamic registration. We propose the use of an adapted dynamic time warping procedure to synchronize cardiac dynamics in multiview US and cine MRI. The registration of US and LGE MRI was evaluated on a dataset of patients with hypertrophic cardiomyopathy. A visual assessment of 330 left ventricular regions from US images of 28 patients resulted in 92.7% of regions successfully aligned with cardiac structures in LGE. Successfully aligned regions were then used to evaluate the ability of strain indicators to predict the presence of fibrosis. Longitudinal peak-strain and peak-delay of aligned left ventricular regions were computed from the corresponding regional strain curves from US. The Mann-Whitney test showed that the expected values of these indicators differ between the populations of regions with and without fibrosis (p < 0.01). ROC curves further showed that the presence of fibrosis is one factor among others that modifies longitudinal peak-strain and peak-delay.
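
    The dynamic time warping idea used to synchronize cardiac dynamics can be illustrated with a generic sketch (standard DTW over two one-dimensional phase signals; the adapted procedure from the paper is not reproduced):

```python
# Generic dynamic time warping (DTW) sketch for synchronizing two cardiac
# cycle signals (e.g., regional descriptors over the cardiac phase); an
# illustration of the technique, not the adapted procedure from the paper.
import numpy as np

def dtw_path(signal_a, signal_b):
    """Return the accumulated cost matrix and the optimal warping path."""
    n, m = len(signal_a), len(signal_b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(signal_a[i - 1] - signal_b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    # backtrack from the end to recover the alignment
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = np.argmin([cost[i - 1, j - 1], cost[i - 1, j], cost[i, j - 1]])
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return cost[1:, 1:], path[::-1]

if __name__ == "__main__":
    t = np.linspace(0, 1, 40)
    us_curve = np.sin(2 * np.pi * t)                 # e.g. strain over one cycle
    mri_curve = np.sin(2 * np.pi * t ** 1.3)         # same cycle, warped in phase
    _, alignment = dtw_path(us_curve, mri_curve)
    print(alignment[:5])
```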

  20. Effect of segmentation errors on 3D-to-2D registration of implant models in X-ray images.

    PubMed

    Mahfouz, Mohamed R; Hoff, William A; Komistek, Richard D; Dennis, Douglas A

    2005-02-01

    In many biomedical applications, it is desirable to estimate the three-dimensional (3D) position and orientation (pose) of a metallic rigid object (such as a knee or hip implant) from its projection in a two-dimensional (2D) X-ray image. If the geometry of the object is known, as well as the details of the image formation process, then the pose of the object with respect to the sensor can be determined. A common method for 3D-to-2D registration is to first segment the silhouette contour from the X-ray image; that is, identify all points in the image that belong to the 2D silhouette and not to the background. This segmentation step is then followed by a search for the 3D pose that will best match the observed contour with a predicted contour. Although the silhouette of a metallic object is often clearly visible in an X-ray image, adjacent tissue and occlusions can make the exact location of the silhouette contour difficult to determine in places. Occlusion can occur when another object (such as another implant component) partially blocks the view of the object of interest. In this paper, we argue that common methods for segmentation can produce errors in the location of the 2D contour, and hence errors in the resulting 3D estimate of the pose. We show, on a typical fluoroscopy image of a knee implant component, that interactive and automatic methods for segmentation result in segmented contours that vary significantly. We show how the variability in the 2D contours (quantified by two different metrics) corresponds to variability in the 3D poses. Finally, we illustrate how traditional segmentation methods can fail completely in the (not uncommon) cases of images with occlusion.

  1. Registration of 2D C-Arm and 3D CT Images for a C-Arm Image-Assisted Navigation System for Spinal Surgery

    PubMed Central

    Chang, Chih-Ju; Lin, Geng-Li; Tse, Alex; Chu, Hong-Yu; Tseng, Ching-Shiow

    2015-01-01

    C-Arm image-assisted surgical navigation systems have been broadly applied to spinal surgery. However, accurate path planning on the C-Arm AP-view image is difficult. This research studies 2D-3D image registration methods to obtain the optimum transformation matrix between the C-Arm and CT image frames. Through the transformation matrix, the surgical path planned on preoperative CT images can be transformed and displayed on the C-Arm images for surgical guidance. The positions of surgical instruments are also displayed on both the CT and C-Arm images in real time. Five similarity measures for 2D-3D image registration, including Normalized Cross-Correlation, Gradient Correlation, Pattern Intensity, Gradient Difference Correlation, and Mutual Information, combined with three optimization methods, including Powell's method, the Downhill simplex algorithm, and a genetic algorithm, are applied to evaluate their performance in convergence range, efficiency, and accuracy. Experimental results show that the combination of the Normalized Cross-Correlation measure with the Downhill simplex algorithm obtains the maximum correlation and similarity between C-Arm and Digitally Reconstructed Radiograph (DRR) images. Spine sawbones are used in the experiment to evaluate the 2D-3D image registration accuracy. The average error in displacement is 0.22 mm. The success rate is approximately 90%, and the average registration time is 16 seconds.
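
    The best-performing pairing reported above, Normalized Cross-Correlation driven by the Downhill simplex (Nelder-Mead) optimizer, can be illustrated with a small toy registration; the sketch below is a 2D translation-only stand-in for the full DRR-based 2D-3D problem, with hypothetical image data.

```python
# Toy illustration (not the study's code) of the winning pairing reported
# above: normalized cross-correlation driven by the downhill simplex
# (Nelder-Mead) optimizer, in a 2D translation-only setting.
import numpy as np
from scipy import ndimage, optimize

def ncc(a, b):
    a = a - a.mean()
    b = b - b.mean()
    return (a * b).sum() / (np.sqrt((a * a).sum() * (b * b).sum()) + 1e-12)

def blob(center, shape=(128, 128), sigma=20.0):
    rows, cols = np.mgrid[0:shape[0], 0:shape[1]]
    return np.exp(-((rows - center[0]) ** 2 + (cols - center[1]) ** 2) / (2 * sigma ** 2))

def cost(shift, fixed, moving):
    warped = ndimage.shift(moving, shift, order=1, mode="nearest")
    return -ncc(fixed, warped)              # minimize the negative similarity

if __name__ == "__main__":
    moving = blob((64.0, 64.0))             # stands in for the DRR
    fixed = ndimage.shift(moving, (4.5, -6.0), order=1, mode="nearest")  # "C-Arm image"
    result = optimize.minimize(cost, x0=[10.0, 10.0], args=(fixed, moving),
                               method="Nelder-Mead")
    print(result.x)                         # approximately [4.5, -6.0]
```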

  2. Robust 3D–2D image registration: application to spine interventions and vertebral labeling in the presence of anatomical deformation

    PubMed Central

    Otake, Yoshito; Wang, Adam S; Stayman, J Webster; Uneri, Ali; Kleinszig, Gerhard; Vogt, Sebastian; Khanna, A Jay; Gokaslan, Ziya L; Siewerdsen, Jeffrey H

    2016-01-01

    We present a framework for robustly estimating registration between a 3D volume image and a 2D projection image and evaluate its precision and robustness in spine interventions for vertebral localization in the presence of anatomical deformation. The framework employs a normalized gradient information similarity metric and multi-start covariance matrix adaptation evolution strategy optimization with local-restarts, which provided improved robustness against deformation and content mismatch. The parallelized implementation allowed orders-of-magnitude acceleration in computation time and improved the robustness of registration via multi-start global optimization. Experiments involved a cadaver specimen and two CT datasets (supine and prone) and 36 C-arm fluoroscopy images acquired with the specimen in four positions (supine, prone, supine with lordosis, prone with kyphosis), three regions (thoracic, abdominal, and lumbar), and three levels of geometric magnification (1.7, 2.0, 2.4). Registration accuracy was evaluated in terms of projection distance error (PDE) between the estimated and true target points in the projection image, including 14,400 random trials (200 trials on the 72 registration scenarios) with initialization error up to ±200 mm and ±10°. The resulting median PDE was better than 0.1 mm in all cases, depending somewhat on the resolution of input CT and fluoroscopy images. The cadaver experiments illustrated the tradeoff between robustness and computation time, yielding a success rate of 99.993% in vertebral labeling (with `success' defined as PDE <5 mm) using 1,718,664 ± 96,582 function evaluations computed in 54.0 ± 3.5 s on a mid-range GPU (nVidia, GeForce GTX690). Parameters yielding a faster search (e.g., fewer multi-starts) reduced robustness under conditions of large deformation and poor initialization (99.535% success for the same data registered in 13.1 s), but given good initialization (e.g., ±5 mm, assuming a robust initial run) the
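
    The projection distance error (PDE) figure of merit used in this and the following records can be sketched generically: a 3D target point is projected into the image plane with the gold-standard and the estimated projection geometry, and the in-plane distance is reported. The projection matrices below are illustrative, not the study's calibration.

```python
# Generic sketch of the projection distance error (PDE) evaluation: a 3D
# target point is projected with the gold-standard and the estimated
# projection geometry, and the in-plane distance is reported. The matrices
# below are illustrative, not from the study.
import numpy as np

def project(P, point_3d):
    """Apply a 3x4 perspective projection matrix to a 3D point (in mm)."""
    homogeneous = P @ np.append(point_3d, 1.0)
    return homogeneous[:2] / homogeneous[2]

def projection_distance_error(P_true, P_estimated, target_3d, pixel_size_mm=1.0):
    u_true = project(P_true, target_3d)
    u_est = project(P_estimated, target_3d)
    return np.linalg.norm(u_true - u_est) * pixel_size_mm

if __name__ == "__main__":
    # Toy geometry: source-to-detector distance 1200 mm, principal point (512, 512).
    K = np.array([[1200.0, 0.0, 512.0],
                  [0.0, 1200.0, 512.0],
                  [0.0, 0.0, 1.0]])
    P_true = K @ np.hstack([np.eye(3), [[0.0], [0.0], [600.0]]])
    # Estimated pose with a 2 mm lateral offset of the volume.
    P_est = K @ np.hstack([np.eye(3), [[2.0], [0.0], [600.0]]])
    vertebra = np.array([10.0, -5.0, 20.0])
    print(projection_distance_error(P_true, P_est, vertebra))
```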

  3. Robust 3D-2D image registration: application to spine interventions and vertebral labeling in the presence of anatomical deformation

    NASA Astrophysics Data System (ADS)

    Otake, Yoshito; Wang, Adam S.; Webster Stayman, J.; Uneri, Ali; Kleinszig, Gerhard; Vogt, Sebastian; Khanna, A. Jay; Gokaslan, Ziya L.; Siewerdsen, Jeffrey H.

    2013-12-01

    We present a framework for robustly estimating registration between a 3D volume image and a 2D projection image and evaluate its precision and robustness in spine interventions for vertebral localization in the presence of anatomical deformation. The framework employs a normalized gradient information similarity metric and multi-start covariance matrix adaptation evolution strategy optimization with local-restarts, which provided improved robustness against deformation and content mismatch. The parallelized implementation allowed orders-of-magnitude acceleration in computation time and improved the robustness of registration via multi-start global optimization. Experiments involved a cadaver specimen and two CT datasets (supine and prone) and 36 C-arm fluoroscopy images acquired with the specimen in four positions (supine, prone, supine with lordosis, prone with kyphosis), three regions (thoracic, abdominal, and lumbar), and three levels of geometric magnification (1.7, 2.0, 2.4). Registration accuracy was evaluated in terms of projection distance error (PDE) between the estimated and true target points in the projection image, including 14,400 random trials (200 trials on the 72 registration scenarios) with initialization error up to ±200 mm and ±10°. The resulting median PDE was better than 0.1 mm in all cases, depending somewhat on the resolution of input CT and fluoroscopy images. The cadaver experiments illustrated the tradeoff between robustness and computation time, yielding a success rate of 99.993% in vertebral labeling (with ‘success’ defined as PDE <5 mm) using 1,718,664 ± 96,582 function evaluations computed in 54.0 ± 3.5 s on a mid-range GPU (nVidia, GeForce GTX690). Parameters yielding a faster search (e.g., fewer multi-starts) reduced robustness under conditions of large deformation and poor initialization (99.535% success for the same data registered in 13.1 s), but given good initialization (e.g., ±5 mm, assuming a robust initial

  4. Robust 3D-2D image registration: application to spine interventions and vertebral labeling in the presence of anatomical deformation.

    PubMed

    Otake, Yoshito; Wang, Adam S; Webster Stayman, J; Uneri, Ali; Kleinszig, Gerhard; Vogt, Sebastian; Khanna, A Jay; Gokaslan, Ziya L; Siewerdsen, Jeffrey H

    2013-12-07

    We present a framework for robustly estimating registration between a 3D volume image and a 2D projection image and evaluate its precision and robustness in spine interventions for vertebral localization in the presence of anatomical deformation. The framework employs a normalized gradient information similarity metric and multi-start covariance matrix adaptation evolution strategy optimization with local-restarts, which provided improved robustness against deformation and content mismatch. The parallelized implementation allowed orders-of-magnitude acceleration in computation time and improved the robustness of registration via multi-start global optimization. Experiments involved a cadaver specimen and two CT datasets (supine and prone) and 36 C-arm fluoroscopy images acquired with the specimen in four positions (supine, prone, supine with lordosis, prone with kyphosis), three regions (thoracic, abdominal, and lumbar), and three levels of geometric magnification (1.7, 2.0, 2.4). Registration accuracy was evaluated in terms of projection distance error (PDE) between the estimated and true target points in the projection image, including 14,400 random trials (200 trials on the 72 registration scenarios) with initialization error up to ±200 mm and ±10°. The resulting median PDE was better than 0.1 mm in all cases, depending somewhat on the resolution of input CT and fluoroscopy images. The cadaver experiments illustrated the tradeoff between robustness and computation time, yielding a success rate of 99.993% in vertebral labeling (with 'success' defined as PDE <5 mm) using 1,718,664 ± 96,582 function evaluations computed in 54.0 ± 3.5 s on a mid-range GPU (nVidia, GeForce GTX690). Parameters yielding a faster search (e.g., fewer multi-starts) reduced robustness under conditions of large deformation and poor initialization (99.535% success for the same data registered in 13.1 s), but given good initialization (e.g., ±5 mm, assuming a robust initial run) the

  5. Intraoperative evaluation of device placement in spine surgery using known-component 3D-2D image registration.

    PubMed

    Uneri, A; De Silva, T; Goerres, J; Jacobson, M W; Ketcha, M D; Reaungamornrat, S; Kleinszig, G; Vogt, S; Khanna, A J; Osgood, G M; Wolinsky, J-P; Siewerdsen, J H

    2017-04-21

    Intraoperative x-ray radiography/fluoroscopy is commonly used to assess the placement of surgical devices in the operating room (e.g. spine pedicle screws), but qualitative interpretation can fail to reliably detect suboptimal delivery and/or breach of adjacent critical structures. We present a 3D-2D image registration method wherein intraoperative radiographs are leveraged in combination with prior knowledge of the patient and surgical components for quantitative assessment of device placement and more rigorous quality assurance (QA) of the surgical product. The algorithm is based on known-component registration (KC-Reg) in which patient-specific preoperative CT and parametric component models are used. The registration performs optimization of gradient similarity, removes the need for offline geometric calibration of the C-arm, and simultaneously solves for multiple component bodies, thereby allowing QA in a single step (e.g. spinal construct with 4-20 screws). Performance was tested in a spine phantom, and first clinical results are reported for QA of transpedicle screws delivered in a patient undergoing thoracolumbar spine surgery. Simultaneous registration of ten pedicle screws (five contralateral pairs) demonstrated mean target registration error (TRE) of 1.1  ±  0.1 mm at the screw tip and 0.7  ±  0.4° in angulation when a prior geometric calibration was used. The calibration-free formulation, with the aid of component collision constraints, achieved TRE of 1.4  ±  0.6 mm. In all cases, a statistically significant improvement (p  <  0.05) was observed for the simultaneous solutions in comparison to previously reported sequential solution of individual components. Initial application in clinical data in spine surgery demonstrated TRE of 2.7  ±  2.6 mm and 1.5  ±  0.8°. The KC-Reg algorithm offers an independent check and quantitative QA of the surgical product using radiographic/fluoroscopic views

  6. A neural network-based 2D/3D image registration quality evaluator for pediatric patient setup in external beam radiotherapy.

    PubMed

    Wu, Jian; Su, Zhong; Li, Zuofeng

    2016-01-01

    Our purpose was to develop a neural network-based registration quality evaluator (RQE) that can improve the 2D/3D image registration robustness for pediatric patient setup in external beam radiotherapy. Orthogonal daily setup X-ray images of six pediatric patients with brain tumors receiving proton therapy treatments were retrospectively registered with their treatment planning computed tomography (CT) images. A neural network-based pattern classifier was used to determine whether a registration solution was successful based on geometric features of the similarity measure values near the point-of-solution. Supervised training and test datasets were generated by rigidly registering a pair of orthogonal daily setup X-ray images to the treatment planning CT. The best solution for each registration task was selected from 50 optimization attempts that differed only in the randomly generated initial transformation parameters. The distance from each individual solution to the best solution in the normalized parameter space was compared to a user-defined error tolerance to determine whether that solution was acceptable. Supervised training was then used to train the RQE. Performance of the RQE was evaluated using a test dataset consisting of registration results that were not used in training. The RQE was integrated with our in-house 2D/3D registration system, and its performance was evaluated using the same patient dataset. With an optimized sampling step size (i.e., 5 mm) in the feature space, the RQE achieved sensitivity and specificity in the ranges of 0.865-0.964 and 0.797-0.990, respectively, when used to detect registration errors with a mean voxel displacement (MVD) greater than 1 mm. The trial-to-acceptance ratio of the integrated 2D/3D registration system, for all patients, was 1.48. The final acceptance ratio was 92.4%. The proposed RQE can potentially be used in a 2D/3D rigid image registration system to improve the overall robustness by rejecting
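
    The registration quality evaluator (RQE) idea, classifying success from geometric features of the similarity values near a candidate solution, can be illustrated with a small sketch; the features, synthetic training data, and network size below are hypothetical stand-ins, not the study's.

```python
# Illustrative sketch of the RQE idea: a small neural network classifies
# success/failure from features of the similarity function near the candidate
# solution. Features and synthetic data are hypothetical, not the study's.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(4)

def make_features(n, successful):
    """Three toy features: peak similarity, local sharpness, neighborhood mean."""
    if successful:
        peak = rng.normal(0.9, 0.05, n)
        sharpness = rng.normal(0.5, 0.1, n)
    else:
        peak = rng.normal(0.6, 0.1, n)
        sharpness = rng.normal(0.1, 0.05, n)
    neighborhood_mean = peak - sharpness * rng.uniform(0.2, 0.4, n)
    return np.column_stack([peak, sharpness, neighborhood_mean])

X = np.vstack([make_features(200, True), make_features(200, False)])
y = np.array([1] * 200 + [0] * 200)

rqe = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
rqe.fit(X, y)
print(rqe.score(make_features(50, True), np.ones(50)))    # sensitivity on new samples
```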

  7. Automatic localization of target vertebrae in spine surgery using fast CT-to-fluoroscopy (3D-2D) image registration

    NASA Astrophysics Data System (ADS)

    Otake, Y.; Schafer, S.; Stayman, J. W.; Zbijewski, W.; Kleinszig, G.; Graumann, R.; Khanna, A. J.; Siewerdsen, J. H.

    2012-02-01

    Localization of target vertebrae is an essential step in minimally invasive spine surgery, with conventional methods relying on "level counting" - i.e., manual counting of vertebrae under fluoroscopy starting from readily identifiable anatomy (e.g., the sacrum). The approach entails an undesirable amount of radiation and time and is prone to counting errors due to the similar appearance of vertebrae in projection images; wrong-level surgery occurs in about 1 of every 3000 cases. This paper proposes a method to automatically localize target vertebrae in x-ray projections using 3D-2D registration between preoperative CT (in which vertebrae are preoperatively labeled) and intraoperative fluoroscopy. The registration uses an intensity-based approach with a gradient-based similarity metric and the CMA-ES algorithm for optimization. Digitally reconstructed radiographs (DRRs) and a robust similarity metric are computed on the GPU to accelerate the process. Evaluation in clinical CT data included 5,000 PA and LAT projections randomly perturbed to simulate human variability in the setup of a mobile intraoperative C-arm. The method demonstrated 100% success for the PA view (projection error: 0.42 mm) and 99.8% success for the LAT view (projection error: 0.37 mm). The initial GPU implementation provided automatic target localization within about 3 sec, with further improvement underway via multi-GPU. The ability to automatically label vertebrae in fluoroscopy promises to streamline surgical workflow, improve patient safety, and reduce wrong-site surgeries, especially in large patients for whom manual methods are time-consuming and error-prone.
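
    The DRR-generation step at the core of such intensity-based 3D-2D registration can be sketched on the CPU with a parallel-beam approximation (rotate the CT volume to the candidate pose and integrate along the ray direction); the published method computes perspective DRRs on the GPU, so the following is only a simplified illustration.

```python
# CPU sketch of DRR generation by a parallel-beam approximation: rotate the CT
# volume to the candidate pose and integrate attenuation along the ray
# direction. The published method computes perspective DRRs on the GPU; this
# toy keeps only the core idea.
import numpy as np
from scipy import ndimage

def parallel_beam_drr(ct_volume, gantry_angle_deg):
    """Sum the (rotated) volume along one axis to form line integrals."""
    rotated = ndimage.rotate(ct_volume, gantry_angle_deg, axes=(1, 2),
                             reshape=False, order=1)
    return rotated.sum(axis=1)          # line integrals -> 2D projection

if __name__ == "__main__":
    # Synthetic "CT": a bright ellipsoid inside a 64^3 volume.
    z, y, x = np.mgrid[0:64, 0:64, 0:64]
    ct = ((z - 32) ** 2 / 20 ** 2 + (y - 32) ** 2 / 10 ** 2 + (x - 32) ** 2 / 15 ** 2 < 1).astype(float)
    drr_0 = parallel_beam_drr(ct, 0.0)      # PA/AP-like view
    drr_90 = parallel_beam_drr(ct, 90.0)    # LAT-like view
    print(drr_0.shape, drr_0.max(), drr_90.max())
```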

  8. Position tracking of moving liver lesion based on real-time registration between 2D ultrasound and 3D preoperative images

    SciTech Connect

    Weon, Chijun; Hyun Nam, Woo; Lee, Duhgoon; Ra, Jong Beom; Lee, Jae Young

    2015-01-15

    Purpose: Registration between 2D ultrasound (US) and 3D preoperative magnetic resonance (MR) (or computed tomography, CT) images has been studied recently for US-guided intervention. However, the existing techniques have some limits, either in the registration speed or the performance. The purpose of this work is to develop a real-time and fully automatic registration system between two intermodal images of the liver, and subsequently an indirect lesion positioning/tracking algorithm based on the registration result, for image-guided interventions. Methods: The proposed position tracking system consists of three stages. In the preoperative stage, the authors acquire several 3D preoperative MR (or CT) images at different respiratory phases. Based on the transformations obtained from nonrigid registration of the acquired 3D images, they then generate a 4D preoperative image along the respiratory phase. In the intraoperative preparatory stage, they properly attach a 3D US transducer to the patient’s body and fix its pose using a holding mechanism. They then acquire a couple of respiratory-controlled 3D US images. Via the rigid registration of these US images to the 3D preoperative images in the 4D image, the pose information of the fixed-pose 3D US transducer is determined with respect to the preoperative image coordinates. As feature(s) to use for the rigid registration, they may choose either internal liver vessels or the inferior vena cava. Since the latter is especially useful in patients with a diffuse liver disease, the authors newly propose using it. In the intraoperative real-time stage, they acquire 2D US images in real-time from the fixed-pose transducer. For each US image, they select candidates for its corresponding 2D preoperative slice from the 4D preoperative MR (or CT) image, based on the predetermined pose information of the transducer. The correct corresponding image is then found among those candidates via real-time 2D registration based on a

  9. Registration of 2D cardiac images to real-time 3D ultrasound volumes for 3D stress echocardiography

    NASA Astrophysics Data System (ADS)

    Leung, K. Y. Esther; van Stralen, Marijn; Voormolen, Marco M.; van Burken, Gerard; Nemes, Attila; ten Cate, Folkert J.; Geleijnse, Marcel L.; de Jong, Nico; van der Steen, Antonius F. W.; Reiber, Johan H. C.; Bosch, Johan G.

    2006-03-01

    Three-dimensional (3D) stress echocardiography is a novel technique for diagnosing cardiac dysfunction, by comparing wall motion of the left ventricle under different stages of stress. For quantitative comparison of this motion, it is essential to register the ultrasound data. We propose an intensity based rigid registration method to retrieve two-dimensional (2D) four-chamber (4C), two-chamber, and short-axis planes from the 3D data set acquired in the stress stage, using manually selected 2D planes in the rest stage as reference. The algorithm uses the Nelder-Mead simplex optimization to find the optimal transformation of one uniform scaling, three rotation, and three translation parameters. We compared registration using the SAD, SSD, and NCC metrics, performed on four resolution levels of a Gaussian pyramid. The registration's effectiveness was assessed by comparing the 3D positions of the registered apex and mitral valve midpoints and 4C direction with the manually selected results. The registration was tested on data from 20 patients. Best results were found using the NCC metric on data downsampled with factor two: mean registration errors were 8.1mm, 5.4mm, and 8.0° in the apex position, mitral valve position, and 4C direction respectively. The errors were close to the interobserver (7.1mm, 3.8mm, 7.4°) and intraobserver variability (5.2mm, 3.3mm, 7.0°), and better than the error before registration (9.4mm, 9.0mm, 9.9°). We demonstrated that the registration algorithm visually and quantitatively improves the alignment of rest and stress data sets, performing similar to manual alignment. This will improve automated analysis in 3D stress echocardiography.

  10. Effects of x-ray and CT image enhancements on the robustness and accuracy of a rigid 3D/2D image registration.

    PubMed

    Kim, Jinkoo; Yin, Fang-Fang; Zhao, Yang; Kim, Jae Ho

    2005-04-01

    A rigid body three-dimensional/two-dimensional (3D/2D) registration method has been implemented using mutual information, gradient ascent, and 3D texturemap-based digitally reconstructed radiographs. Nine combinations of commonly used x-ray and computed tomography (CT) image enhancement methods, including window leveling, histogram equalization, and adaptive histogram equalization, were examined to assess their effects on the accuracy and robustness of the registration method. From a set of experiments using an anthropomorphic chest phantom, we were able to draw several conclusions. First, the CT and x-ray preprocessing combination with the widest attraction range was the one that linearly stretched the histograms onto the entire display range on both CT and x-ray images. The average attraction ranges of this combination were 71.3 mm and 61.3 deg in the translation and rotation dimensions, respectively, and the average errors were 0.12 deg and 0.47 mm. Second, the combination of the CT image with tissue and bone information and the x-ray images with adaptive histogram equalization also showed subvoxel accuracy and was the best in the translation dimensions. However, its attraction ranges were the smallest among the examined combinations (on average 36 mm and 19 deg). Last, the bone-only information on the CT image did not show convergence to the correct registration.
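
    The two preprocessing operations compared above, linear stretching of the histogram onto the full display range and global histogram equalization, have simple generic forms; the sketch below illustrates them and is not the study's implementation.

```python
# Generic sketches of the two preprocessing operations compared above:
# linear stretching of the histogram onto the full display range and global
# histogram equalization. Not the study's implementation.
import numpy as np

def linear_stretch(image, out_max=255.0):
    lo, hi = image.min(), image.max()
    return (image - lo) / (hi - lo + 1e-12) * out_max

def histogram_equalize(image, n_bins=256):
    hist, bin_edges = np.histogram(image.ravel(), bins=n_bins)
    cdf = hist.cumsum().astype(float)
    cdf = (cdf - cdf[0]) / (cdf[-1] - cdf[0] + 1e-12)        # normalize to [0, 1]
    return np.interp(image.ravel(), bin_edges[:-1], cdf).reshape(image.shape) * 255.0

if __name__ == "__main__":
    rng = np.random.default_rng(5)
    xray = rng.gamma(shape=2.0, scale=30.0, size=(128, 128))  # skewed intensities
    print(linear_stretch(xray).max(), histogram_equalize(xray).std())
```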

  11. Rotation invariance principles in 2D/3D registration

    NASA Astrophysics Data System (ADS)

    Birkfellner, Wolfgang; Wirth, Joachim; Burgstaller, Wolfgang; Baumann, Bernard; Staedele, Harald; Hammer, Beat; Gellrich, Niels C.; Jacob, Augustinus L.; Regazzoni, Pietro; Messmer, Peter

    2003-05-01

    2D/3D patient-to-computed tomography (CT) registration is a method to determine a transformation that maps two coordinate systems by comparing a projection image rendered from CT to a real projection image. Applications include exact patient positioning in radiation therapy, calibration of surgical robots, and pose estimation in computer-aided surgery. One of the problems associated with 2D/3D registration is the fact that finding a registration involves solving a minimization problem in six degrees of freedom of motion. This results in considerable computational expense, since at least one volume rendering has to be computed for each iteration step. We show that by choosing an appropriate world coordinate system and by applying a 2D/2D registration method in each iteration step, the number of iterations can be greatly reduced from n^6 to n^5. Here, n is the number of discrete variations around a given coordinate. Depending on the configuration of the optimization algorithm, this reduces the total number of iterations necessary to at least 1/3 of its original value. The method was implemented and extensively tested on simulated x-ray images of a pelvis. We conclude that this hardware-independent optimization of 2D/3D registration is a step towards increasing the acceptance of this promising method for a wide range of clinical applications.

  12. WE-AB-BRA-01: 3D-2D Image Registration for Target Localization in Spine Surgery: Comparison of Similarity Metrics Against Robustness to Content Mismatch

    SciTech Connect

    De Silva, T; Ketcha, M; Siewerdsen, J H; Uneri, A; Reaungamornrat, S; Vogt, S; Kleinszig, G; Lo, S F; Wolinsky, J P; Gokaslan, Z L; Aygun, N

    2015-06-15

    Purpose: In image-guided spine surgery, mapping 3D preoperative images to 2D intraoperative images via 3D-2D registration can provide valuable assistance in target localization. However, the presence of surgical instrumentation, hardware implants, and soft-tissue resection/displacement causes mismatches in image content, confounding existing registration methods. Manual/semi-automatic methods to mask such extraneous content are time-consuming, user-dependent, error-prone, and disruptive to clinical workflow. We developed and evaluated two novel similarity metrics within a robust registration framework to overcome such challenges in target localization. Methods: An IRB-approved retrospective study in 19 spine surgery patients included 19 preoperative 3D CT images and 50 intraoperative mobile radiographs in the cervical, thoracic, and lumbar spine regions. A neuroradiologist provided the truth definition of vertebral positions in CT and radiography. 3D-2D registration was performed using the CMA-ES optimizer with four gradient-based image similarity metrics: (1) gradient information (GI); (2) gradient correlation (GC); (3) a novel variant referred to as gradient orientation (GO); and (4) a second variant referred to as truncated gradient correlation (TGC). Registration accuracy was evaluated in terms of the projection distance error (PDE) of the vertebral levels. Results: Conventional similarity metrics were susceptible to gross registration error and failure modes associated with the presence of surgical instrumentation: for GI, the median PDE and interquartile range were 33.0 ± 43.6 mm; similarly, for GC, PDE = 23.0 ± 92.6 mm. The robust metrics GO and TGC, on the other hand, demonstrated major improvement in PDE (7.6 ± 9.4 mm and 8.1 ± 18.1 mm, respectively) and elimination of gross failure modes. Conclusion: The proposed GO and TGC similarity measures improve registration accuracy and robustness to gross failure in the presence of strong image content mismatch. Such

  13. Fully automated 2D-3D registration and verification.

    PubMed

    Varnavas, Andreas; Carrell, Tom; Penney, Graeme

    2015-12-01

    Clinical application of 2D-3D registration technology often requires a significant amount of human interaction during initialisation and result verification. This is one of the main barriers to more widespread clinical use of this technology. We propose novel techniques for automated initial pose estimation of the 3D data and verification of the registration result, and show how these techniques can be combined to enable fully automated 2D-3D registration, particularly in the case of a vertebra based system. The initialisation method is based on preoperative computation of 2D templates over a wide range of 3D poses. These templates are used to apply the Generalised Hough Transform to the intraoperative 2D image and the sought 3D pose is selected with the combined use of the generated accumulator arrays and a Gradient Difference Similarity Measure. On the verification side, two algorithms are proposed: one using normalised features based on the similarity value and the other based on the pose agreement between multiple vertebra based registrations. The proposed methods are employed here for CT to fluoroscopy registration and are trained and tested with data from 31 clinical procedures with 417 low dose, i.e. low quality, high noise interventional fluoroscopy images. When similarity value based verification is used, the fully automated system achieves a 95.73% correct registration rate, whereas a no registration result is produced for the remaining 4.27% of cases (i.e. incorrect registration rate is 0%). The system also automatically detects input images outside its operating range.

  14. 3D-2D image registration for target localization in spine surgery: investigation of similarity metrics providing robustness to content mismatch.

    PubMed

    De Silva, T; Uneri, A; Ketcha, M D; Reaungamornrat, S; Kleinszig, G; Vogt, S; Aygun, N; Lo, S-F; Wolinsky, J-P; Siewerdsen, J H

    2016-04-21

    In image-guided spine surgery, robust three-dimensional to two-dimensional (3D-2D) registration of preoperative computed tomography (CT) and intraoperative radiographs can be challenged by the image content mismatch associated with the presence of surgical instrumentation and implants as well as soft-tissue resection or deformation. This work investigates image similarity metrics in 3D-2D registration offering improved robustness against mismatch, thereby improving performance and reducing or eliminating the need for manual masking. The performance of four gradient-based image similarity metrics (gradient information (GI), gradient correlation (GC), gradient information with linear scaling (GS), and gradient orientation (GO)) with a multi-start optimization strategy was evaluated in an institutional review board-approved retrospective clinical study using 51 preoperative CT images and 115 intraoperative mobile radiographs. Registrations were tested with and without polygonal masks as a function of the number of multistarts employed during optimization. Registration accuracy was evaluated in terms of the projection distance error (PDE) and assessment of failure modes (PDE  >  30 mm) that could impede reliable vertebral level localization. With manual polygonal masking and 200 multistarts, the GC and GO metrics exhibited robust performance with 0% gross failures and median PDE < 6.4 mm (±4.4 mm interquartile range (IQR)) and a median runtime of 84 s (plus upwards of 1-2 min for manual masking). Excluding manual polygonal masks and decreasing the number of multistarts to 50 caused the GC-based registration to fail at a rate of >14%; however, GO maintained robustness with a 0% gross failure rate. Overall, the GI, GC, and GS metrics were susceptible to registration errors associated with content mismatch, but GO provided robust registration (median PDE  =  5.5 mm, 2.6 mm IQR) without manual masking and with an improved runtime (29.3 s). The GO metric improved

  15. 3D-2D image registration for target localization in spine surgery: investigation of similarity metrics providing robustness to content mismatch

    NASA Astrophysics Data System (ADS)

    De Silva, T.; Uneri, A.; Ketcha, M. D.; Reaungamornrat, S.; Kleinszig, G.; Vogt, S.; Aygun, N.; Lo, S.-F.; Wolinsky, J.-P.; Siewerdsen, J. H.

    2016-04-01

    In image-guided spine surgery, robust three-dimensional to two-dimensional (3D-2D) registration of preoperative computed tomography (CT) and intraoperative radiographs can be challenged by the image content mismatch associated with the presence of surgical instrumentation and implants as well as soft-tissue resection or deformation. This work investigates image similarity metrics in 3D-2D registration offering improved robustness against mismatch, thereby improving performance and reducing or eliminating the need for manual masking. The performance of four gradient-based image similarity metrics (gradient information (GI), gradient correlation (GC), gradient information with linear scaling (GS), and gradient orientation (GO)) with a multi-start optimization strategy was evaluated in an institutional review board-approved retrospective clinical study using 51 preoperative CT images and 115 intraoperative mobile radiographs. Registrations were tested with and without polygonal masks as a function of the number of multistarts employed during optimization. Registration accuracy was evaluated in terms of the projection distance error (PDE) and assessment of failure modes (PDE > 30 mm) that could impede reliable vertebral level localization. With manual polygonal masking and 200 multistarts, the GC and GO metrics exhibited robust performance with 0% gross failures and median PDE < 6.4 mm (±4.4 mm interquartile range (IQR)) and a median runtime of 84 s (plus upwards of 1-2 min for manual masking). Excluding manual polygonal masks and decreasing the number of multistarts to 50 caused the GC-based registration to fail at a rate of >14%; however, GO maintained robustness with a 0% gross failure rate. Overall, the GI, GC, and GS metrics were susceptible to registration errors associated with content mismatch, but GO provided robust registration (median PDE = 5.5 mm, 2.6 mm IQR) without manual masking and with an improved

  16. 3D–2D image registration for target localization in spine surgery: investigation of similarity metrics providing robustness to content mismatch

    PubMed Central

    De Silva, T; Uneri, A; Ketcha, M D; Reaungamornrat, S; Kleinszig, G; Vogt, S; Aygun, N; Lo, S-F; Wolinsky, J-P; Siewerdsen, J H

    2016-01-01

    In image-guided spine surgery, robust three-dimensional to two-dimensional (3D–2D) registration of preoperative computed tomography (CT) and intraoperative radiographs can be challenged by the image content mismatch associated with the presence of surgical instrumentation and implants as well as soft-tissue resection or deformation. This work investigates image similarity metrics in 3D–2D registration offering improved robustness against mismatch, thereby improving performance and reducing or eliminating the need for manual masking. The performance of four gradient-based image similarity metrics (gradient information (GI), gradient correlation (GC), gradient information with linear scaling (GS), and gradient orientation (GO)) with a multi-start optimization strategy was evaluated in an institutional review board-approved retrospective clinical study using 51 preoperative CT images and 115 intraoperative mobile radiographs. Registrations were tested with and without polygonal masks as a function of the number of multistarts employed during optimization. Registration accuracy was evaluated in terms of the projection distance error (PDE) and assessment of failure modes (PDE > 30 mm) that could impede reliable vertebral level localization. With manual polygonal masking and 200 multistarts, the GC and GO metrics exhibited robust performance with 0% gross failures and median PDE < 6.4 mm (±4.4 mm interquartile range (IQR)) and a median runtime of 84 s (plus upwards of 1–2 min for manual masking). Excluding manual polygonal masks and decreasing the number of multistarts to 50 caused the GC-based registration to fail at a rate of >14%; however, GO maintained robustness with a 0% gross failure rate. Overall, the GI, GC, and GS metrics were susceptible to registration errors associated with content mismatch, but GO provided robust registration (median PDE = 5.5 mm, 2.6 mm IQR) without manual masking and with an improved runtime (29.3 s). The GO metric improved the

  17. Intraoperative image-based multiview 2D/3D registration for image-guided orthopaedic surgery: incorporation of fiducial-based C-arm tracking and GPU-acceleration.

    PubMed

    Otake, Yoshito; Armand, Mehran; Armiger, Robert S; Kutzer, Michael D; Basafa, Ehsan; Kazanzides, Peter; Taylor, Russell H

    2012-04-01

    Intraoperative patient registration may significantly affect the outcome of image-guided surgery (IGS). Image-based registration approaches have several advantages over the currently dominant point-based direct contact methods and are used in some industry solutions in image-guided radiation therapy with fixed X-ray gantries. However, technical challenges including geometric calibration and computational cost have precluded their use with mobile C-arms for IGS. We propose a 2D/3D registration framework for intraoperative patient registration using a conventional mobile X-ray imager combining fiducial-based C-arm tracking and graphics processing unit (GPU)-acceleration. The two-stage framework 1) acquires X-ray images and estimates relative pose between the images using a custom-made in-image fiducial, and 2) estimates the patient pose using intensity-based 2D/3D registration. Experimental validations using a publicly available gold standard dataset, a plastic bone phantom and cadaveric specimens have been conducted. The mean target registration error (mTRE) was 0.34 ± 0.04 mm (success rate: 100%, registration time: 14.2 s) for the phantom with two images 90° apart, and 0.99 ± 0.41 mm (81%, 16.3 s) for the cadaveric specimen with images 58.5° apart. The experimental results showed the feasibility of the proposed registration framework as a practical alternative for IGS routines.

  18. Intraoperative Image-based Multiview 2D/3D Registration for Image-Guided Orthopaedic Surgery: Incorporation of Fiducial-Based C-Arm Tracking and GPU-Acceleration

    PubMed Central

    Armand, Mehran; Armiger, Robert S.; Kutzer, Michael D.; Basafa, Ehsan; Kazanzides, Peter; Taylor, Russell H.

    2012-01-01

    Intraoperative patient registration may significantly affect the outcome of image-guided surgery (IGS). Image-based registration approaches have several advantages over the currently dominant point-based direct contact methods and are used in some industry solutions in image-guided radiation therapy with fixed X-ray gantries. However, technical challenges including geometric calibration and computational cost have precluded their use with mobile C-arms for IGS. We propose a 2D/3D registration framework for intraoperative patient registration using a conventional mobile X-ray imager combining fiducial-based C-arm tracking and graphics processing unit (GPU)-acceleration. The two-stage framework 1) acquires X-ray images and estimates relative pose between the images using a custom-made in-image fiducial, and 2) estimates the patient pose using intensity-based 2D/3D registration. Experimental validations using a publicly available gold standard dataset, a plastic bone phantom and cadaveric specimens have been conducted. The mean target registration error (mTRE) was 0.34 ± 0.04 mm (success rate: 100%, registration time: 14.2 s) for the phantom with two images 90° apart, and 0.99 ± 0.41 mm (81%, 16.3 s) for the cadaveric specimen with images 58.5° apart. The experimental results showed the feasibility of the proposed registration framework as a practical alternative for IGS routines. PMID:22113773

  19. A frequency-based approach to locate common structure for 2D-3D intensity-based registration of setup images in prostate radiotherapy

    SciTech Connect

    Munbodh, Reshma; Chen Zhe; Jaffray, David A.; Moseley, Douglas J.; Knisely, Jonathan P. S.; Duncan, James S.

    2007-07-15

    In many radiotherapy clinics, geometric uncertainties in the delivery of 3D conformal radiation therapy and intensity modulated radiation therapy of the prostate are reduced by aligning the patient's bony anatomy in the planning 3D CT to corresponding bony anatomy in 2D portal images acquired before every treatment fraction. In this paper, we seek to determine if there is a frequency band within the portal images and the digitally reconstructed radiographs (DRRs) of the planning CT in which bony anatomy predominates over non-bony anatomy such that portal images and DRRs can be suitably filtered to achieve high registration accuracy in an automated 2D-3D single portal intensity-based registration framework. Two similarity measures, mutual information and the Pearson correlation coefficient, were tested on carefully collected gold-standard data consisting of a kilovoltage cone-beam CT (CBCT) and megavoltage portal images in the anterior-posterior (AP) view of an anthropomorphic phantom acquired under clinical conditions at known poses, and on patient data. It was found that filtering the portal images and DRRs during the registration considerably improved registration performance. Without filtering, the registration did not always converge, while with filtering it always converged to an accurate solution. For the pose-determination experiments conducted on the anthropomorphic phantom with the correlation coefficient, the mean (and standard deviation) of the absolute errors in recovering each of the six transformation parameters were θx: 0.18 (0.19)°, θy: 0.04 (0.04)°, θz: 0.04 (0.02)°, tx: 0.14 (0.15) mm, ty: 0.09 (0.05) mm, and tz: 0.49 (0.40) mm. The mutual information-based registration with filtered images also resulted in similarly small errors. For the patient data, visual inspection of the superimposed registered images showed that they were correctly aligned in all instances. The results presented in this
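
    The kind of frequency-band filtering investigated above can be sketched with a generic Gaussian band-pass applied in the Fourier domain; the filter form and cutoff values below are illustrative assumptions, not the study's filter.

```python
# Generic frequency-domain band-pass sketch of the kind of filtering the study
# investigates (emphasizing a band in which bony anatomy dominates). The
# Gaussian band-pass and cutoff values are illustrative assumptions.
import numpy as np

def gaussian_bandpass(image, low_sigma, high_sigma):
    """Keep frequencies between two Gaussian cutoffs (in cycles/pixel)."""
    rows, cols = image.shape
    fy = np.fft.fftfreq(rows)[:, None]
    fx = np.fft.fftfreq(cols)[None, :]
    radius_sq = fy ** 2 + fx ** 2
    low_pass_wide = np.exp(-radius_sq / (2 * high_sigma ** 2))    # removes very high freq.
    low_pass_narrow = np.exp(-radius_sq / (2 * low_sigma ** 2))   # retains only low freq.
    band = low_pass_wide - low_pass_narrow                        # the band in between
    return np.fft.ifft2(np.fft.fft2(image) * band).real

if __name__ == "__main__":
    rng = np.random.default_rng(6)
    portal_image = rng.random((256, 256))
    filtered = gaussian_bandpass(portal_image, low_sigma=0.01, high_sigma=0.15)
    print(filtered.shape, filtered.std())
```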

  20. A computerized framework for monitoring four-dimensional dose distributions during stereotactic body radiation therapy using a portal dose image-based 2D/3D registration approach.

    PubMed

    Nakamoto, Takahiro; Arimura, Hidetaka; Nakamura, Katsumasa; Shioyama, Yoshiyuki; Mizoguchi, Asumi; Hirose, Taka-Aki; Honda, Hiroshi; Umezu, Yoshiyuki; Nakamura, Yasuhiko; Hirata, Hideki

    2015-03-01

    A computerized framework for monitoring four-dimensional (4D) dose distributions during stereotactic body radiation therapy based on a portal dose image (PDI)-based 2D/3D registration approach has been proposed in this study. Using the PDI-based registration approach, simulated 4D "treatment" CT images were derived from the deformation of 3D planning CT images so that a 2D planning PDI could be similar to a 2D dynamic clinical PDI at a breathing phase. The planning PDI was calculated by applying a dose calculation algorithm (a pencil beam convolution algorithm) to the geometry of the planning CT image and a virtual water equivalent phantom. The dynamic clinical PDIs were estimated from electronic portal imaging device (EPID) dynamic images including breathing phase data obtained during a treatment. The parameters of the affine transformation matrix were optimized based on an objective function and a gamma pass rate using a Levenberg-Marquardt (LM) algorithm. The proposed framework was applied to the EPID dynamic images of ten lung cancer patients, which included 183 frames (mean: 18.3 per patient). The 4D dose distributions during the treatment time were successfully obtained by applying the dose calculation algorithm to the simulated 4D "treatment" CT images. The mean±standard deviation (SD) of the percentage errors between the prescribed dose and the estimated dose at an isocenter for all cases was 3.25±4.43%. The maximum error for the ten cases was 14.67% (prescribed dose: 1.50Gy, estimated dose: 1.72Gy), and the minimum error was 0.00%. The proposed framework could be feasible for monitoring the 4D dose distribution and dose errors within a patient's body during treatment.

  1. Interactive initialization of 2D/3D rigid registration

    SciTech Connect

    Gong, Ren Hui; Güler, Özgür; Kürklüoglu, Mustafa; Lovejoy, John; Yaniv, Ziv

    2013-12-15

    Purpose: Registration is one of the key technical components in an image-guided navigation system. A large number of 2D/3D registration algorithms have been previously proposed, but have not been able to transition into clinical practice. The authors identify the primary reason for the lack of adoption with the prerequisite for a sufficiently accurate initial transformation, mean target registration error of about 10 mm or less. In this paper, the authors present two interactive initialization approaches that provide the desired accuracy for x-ray/MR and x-ray/CT registration in the operating room setting. Methods: The authors have developed two interactive registration methods based on visual alignment of a preoperative image, MR, or CT to intraoperative x-rays. In the first approach, the operator uses a gesture based interface to align a volume rendering of the preoperative image to multiple x-rays. The second approach uses a tracked tool available as part of a navigation system. Preoperatively, a virtual replica of the tool is positioned next to the anatomical structures visible in the volumetric data. Intraoperatively, the physical tool is positioned in a similar manner and subsequently used to align a volume rendering to the x-ray images using an augmented reality (AR) approach. Both methods were assessed using three publicly available reference data sets for 2D/3D registration evaluation. Results: In the authors' experiments, the authors show that for x-ray/MR registration, the gesture based method resulted in a mean target registration error (mTRE) of 9.3 ± 5.0 mm with an average interaction time of 146.3 ± 73.0 s, and the AR-based method had mTREs of 7.2 ± 3.2 mm with interaction times of 44 ± 32 s. For x-ray/CT registration, the gesture based method resulted in a mTRE of 7.4 ± 5.0 mm with an average interaction time of 132.1 ± 66.4 s, and the AR-based method had mTREs of 8.3 ± 5.0 mm with interaction times of 58 ± 52 s. Conclusions: Based on the

  2. SU-E-J-13: Six Degree of Freedom Image Fusion Accuracy for Cranial Target Localization On the Varian Edge Stereotactic Radiosurgery System: Comparison Between 2D/3D and KV CBCT Image Registration

    SciTech Connect

    Xu, H; Song, K; Chetty, I; Kim, J; Wen, N

    2015-06-15

    Purpose: To determine the 6 degree of freedom systematic deviations between 2D/3D and CBCT image registration with various imaging setups and fusion algorithms on the Varian Edge Linac. Methods: An anthropomorphic head phantom with radio-opaque targets embedded was scanned with CT slice thicknesses of 0.8, 1, 2, and 3 mm. The 6 DOF systematic errors were assessed by comparing 2D/3D (kV/MV with CT) with 3D/3D (CBCT with CT) image registrations with different offset positions, similarity measures, image filters, and CBCT slice thicknesses (1 and 2 mm). The 2D/3D registration accuracy of 51 fractions for 26 cranial SRS patients was also evaluated by analyzing 2D/3D pre-treatment verification taken after 3D/3D image registrations. Results: The systematic deviations of 2D/3D image registration using kV-kV, MV-kV, and MV-MV image pairs were within ±0.3 mm and ±0.3° for translations and rotations with a 95% confidence interval (CI) for a reference CT with 0.8 mm slice thickness. No significant difference (P>0.05) in target localization was observed between 0.8 mm, 1 mm, and 2 mm CT slice thicknesses with CBCT slice thicknesses of 1 mm and 2 mm. With a 3 mm CT slice thickness, both 2D/3D and 3D/3D registrations performed less accurately in the longitudinal direction than with thinner CT slice thicknesses (0.60±0.12 mm and 0.63±0.07 mm off, respectively). Using a content filter and using the pattern intensity similarity measure instead of mutual information improved the 2D/3D registration accuracy significantly (P=0.02 and P=0.01, respectively). For the patient study, the means and standard deviations of residual errors were 0.09±0.32 mm, −0.22±0.51 mm, and −0.07±0.32 mm in the VRT, LNG, and LAT directions, respectively, and 0.12°±0.46°, −0.12°±0.39°, and 0.06°±0.28° in the RTN, PITCH, and ROLL directions, respectively. The 95% CIs of the translational and rotational deviations were comparable to those in the phantom study. Conclusion: 2D/3D image registration provided on the Varian Edge radiosurgery, 6 DOF

  3. FIRE: an open-software suite for real-time 2D/3D image registration for image guided radiotherapy research

    NASA Astrophysics Data System (ADS)

    Furtado, H.; Gendrin, C.; Spoerk, J.; Steiner, E.; Underwood, T.; Kuenzler, T.; Georg, D.; Birkfellner, W.

    2016-03-01

    Radiotherapy treatments have changed at a tremendously rapid pace. Dose delivered to the tumor has escalated while organs at risk (OARs) are better spared. The impact of moving tumors during dose delivery has become higher due to very steep dose gradients. Intra-fractional tumor motion has to be managed adequately to reduce errors in dose delivery. For tumors with large motion such as tumors in the lung, tracking is an approach that can reduce position uncertainty. Tumor tracking approaches range from purely image intensity based techniques to motion estimation based on surrogate tracking. Research efforts are often based on custom-designed software platforms which take too much time and effort to develop. To address this challenge, we have developed an open software platform especially focusing on tumor motion management. FLIRT is a freely available open-source software platform. The core method for tumor tracking is purely intensity based 2D/3D registration. The platform is written in C++ using the Qt framework for the user interface. The performance critical methods are implemented on the graphics processor using the CUDA extension. One registration can be as fast as 90 ms (11 Hz). This is suitable to track tumors moving due to respiration (~0.3 Hz) or heartbeat (~1 Hz). Apart from focusing on high performance, the platform is designed to be flexible and easy to use. Current use cases include tracking feasibility studies, patient positioning, and method validation. Such a framework has the potential of enabling the research community to rapidly perform patient studies or try new methods.
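
    At its core, intensity-based 2D/3D registration of this kind repeatedly renders a DRR from the CT volume under a candidate pose and scores it against the acquired projection, with the optimizer (and, here, the GPU implementation) wrapped around that loop. The Python sketch below illustrates only that inner loop on synthetic data; the parallel-beam drr, the NCC score, and the brute-force search over in-plane shifts are simplifying assumptions, not part of the described platform.

        import numpy as np
        from scipy.ndimage import shift as nd_shift

        def drr(volume, tx=0.0, ty=0.0):
            """Toy parallel-beam DRR: shift the volume in-plane, then sum along axis 0."""
            moved = nd_shift(volume, (0.0, ty, tx), order=1, mode="constant")
            return moved.sum(axis=0)

        def ncc(a, b):
            """Normalized cross-correlation between two images."""
            a = (a - a.mean()) / (a.std() + 1e-12)
            b = (b - b.mean()) / (b.std() + 1e-12)
            return float((a * b).mean())

        # Synthetic example: a block volume and a target projection at a known shift.
        vol = np.zeros((32, 64, 64)); vol[10:22, 20:40, 25:45] = 1.0
        target = drr(vol, tx=3.0, ty=-2.0)

        # Exhaustive search over integer in-plane shifts (stand-in for a real optimizer).
        best = max(((ncc(drr(vol, tx, ty), target), tx, ty)
                    for tx in range(-5, 6) for ty in range(-5, 6)))
        print("best NCC %.3f at tx=%d, ty=%d" % best)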

  4. 2D/3D registration algorithm for lung brachytherapy

    SciTech Connect

    Zvonarev, P. S.; Farrell, T. J.; Hunter, R.; Wierzbicki, M.; Hayward, J. E.; Sur, R. K.

    2013-02-15

    Purpose: A 2D/3D registration algorithm is proposed for registering orthogonal x-ray images with a diagnostic CT volume for high dose rate (HDR) lung brachytherapy. Methods: The algorithm utilizes a rigid registration model based on a pixel/voxel intensity matching approach. To achieve accurate registration, a robust similarity measure combining normalized mutual information, image gradient, and intensity difference was developed. The algorithm was validated using a simple body phantom and an anthropomorphic phantom. Transfer catheters were placed inside the phantoms to simulate the unique image features observed during treatment. The algorithm's sensitivity to various degrees of initial misregistration and to the presence of foreign objects, such as ECG leads, was evaluated. Results: The mean registration error was 2.2 and 1.9 mm for the simple body and anthropomorphic phantoms, respectively. The error was comparable to the interoperator catheter digitization error of 1.6 mm. Preliminary analysis of data acquired from four patients indicated a mean registration error of 4.2 mm. Conclusions: Results obtained using the proposed algorithm are clinically acceptable, especially considering the complications normally encountered when imaging during lung HDR brachytherapy.
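
    The abstract combines normalized mutual information, an image-gradient term, and an intensity-difference term into one similarity measure. A hedged illustration of how such a composite score can be assembled is sketched below; the individual definitions and the equal weights are illustrative choices, not the authors' exact formulation.

        import numpy as np

        def normalized_mutual_information(a, b, bins=32):
            """NMI = (H(A) + H(B)) / H(A,B), estimated from a joint histogram."""
            hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
            p = hist / hist.sum()
            px, py = p.sum(axis=1), p.sum(axis=0)
            hx = -np.sum(px[px > 0] * np.log(px[px > 0]))
            hy = -np.sum(py[py > 0] * np.log(py[py > 0]))
            hxy = -np.sum(p[p > 0] * np.log(p[p > 0]))
            return (hx + hy) / hxy

        def gradient_correlation(a, b):
            """Mean correlation of the row- and column-gradient images."""
            def corr(u, v):
                u = u - u.mean(); v = v - v.mean()
                return (u * v).sum() / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12)
            gay, gax = np.gradient(a); gby, gbx = np.gradient(b)
            return 0.5 * (corr(gax, gbx) + corr(gay, gby))

        def combined_similarity(a, b, w=(1.0, 1.0, 1.0)):
            """Illustrative weighted sum of NMI, gradient correlation and (negative)
            mean squared intensity difference, to be maximized by the pose optimizer."""
            msd = np.mean((a - b) ** 2)
            return w[0] * normalized_mutual_information(a, b) \
                 + w[1] * gradient_correlation(a, b) - w[2] * msd

        rng = np.random.default_rng(0)
        drr_img, xray_img = rng.random((64, 64)), rng.random((64, 64))
        print(combined_similarity(drr_img, xray_img))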

  5. Effective incorporation of spatial information in a mutual information based 3D-2D registration of a CT volume to X-ray images.

    PubMed

    Zheng, Guoyan

    2008-01-01

    This paper addresses the problem of estimating the 3D rigid pose of a CT volume of an object from its 2D X-ray projections. We use maximization of mutual information, an accurate similarity measure for multi-modal and mono-modal image registration tasks. However, it is known that the standard mutual information measure only takes intensity values into account without considering spatial information and its robustness is questionable. In this paper, instead of directly maximizing mutual information, we propose to use a variational approximation derived from the Kullback-Leibler bound. Spatial information is then incorporated into this variational approximation using a Markov random field model. The newly derived similarity measure has a least-squares form and can be effectively minimized by a multi-resolution Levenberg-Marquardt optimizer. Experimental results are presented on X-ray and CT datasets of a plastic phantom and a cadaveric spine segment.

  6. 3D-2D ultrasound feature-based registration for navigated prostate biopsy: a feasibility study.

    PubMed

    Selmi, Sonia Y; Promayon, Emmanuel; Troccaz, Jocelyne

    2016-08-01

    The aim of this paper is to describe a 3D-2D ultrasound feature-based registration method for navigated prostate biopsy and its first results obtained on patient data. A system combining a low-cost tracking system and a 3D-2D registration algorithm was designed. The proposed 3D-2D registration method combines geometric and image-based distances. After extracting features from ultrasound images, 3D and 2D features within a defined distance are matched using an intensity-based function. The results are encouraging and show acceptable errors with simulated transforms applied on ultrasound volumes from real patients.

  7. Effective incorporating spatial information in a mutual information based 3D-2D registration of a CT volume to X-ray images.

    PubMed

    Zheng, Guoyan

    2010-10-01

    This paper addresses the problem of estimating the 3D rigid poses of a CT volume of an object from its 2D X-ray projection(s). We use maximization of mutual information, an accurate similarity measure for multi-modal and mono-modal image registration tasks. However, it is known that the standard mutual information measures only take intensity values into account without considering spatial information and their robustness is questionable. In this paper, instead of directly maximizing mutual information, we propose to use a variational approximation derived from the Kullback-Leibler bound. Spatial information is then incorporated into this variational approximation using a Markov random field model. The newly derived similarity measure has a least-squares form and can be effectively minimized by a multi-resolution Levenberg-Marquardt optimizer. Experiments were conducted on datasets from two applications: (a) intra-operative patient pose estimation from a limited number (e.g. 2) of calibrated fluoroscopic images, and (b) post-operative cup orientation estimation from a single standard X-ray radiograph with/without gonadal shielding. The experiment on intra-operative patient pose estimation showed a mean target registration accuracy of 0.8mm and a capture range of 11.5mm, while the experiment on estimating the post-operative cup orientation from a single X-ray radiograph showed a mean accuracy below 2 degrees for both anteversion and inclination. More importantly, results from both experiments demonstrated that the newly derived similarity measures were robust to occlusions in the X-ray image(s).

  8. "Gold standard" data for evaluation and comparison of 3D/2D registration methods.

    PubMed

    Tomazevic, Dejan; Likar, Bostjan; Pernus, Franjo

    2004-01-01

    Evaluation and comparison of registration techniques for image-guided surgery is an important problem that has received little attention in the literature. In this paper we address the challenging problem of generating reliable "gold standard" data for use in evaluating the accuracy of 3D/2D registrations. We have devised a cadaveric lumbar spine phantom with fiducial markers and established highly accurate correspondences between 3D CT and MR images and 18 2D X-ray images. The expected target registration errors for target points on the pedicles are less than 0.26 mm for CT-to-X-ray registration and less than 0.42 mm for MR-to-X-ray registration. As such, the "gold standard" data, which has been made publicly available on the Internet (http://lit.fe.uni-lj.si/Downloads/downloads.asp), is useful for evaluation and comparison of 3D/2D image registration methods.
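
    Given such gold standard transforms, the expected target registration error is simply the distance between target points mapped by the gold standard transform and by the registration under evaluation. A minimal sketch with rigid transforms represented as 4x4 homogeneous matrices (the names and numbers are illustrative):

        import numpy as np

        def apply_rigid(T, points):
            """Apply a 4x4 homogeneous rigid transform to an (N, 3) array of points."""
            homog = np.hstack([points, np.ones((points.shape[0], 1))])
            return (homog @ T.T)[:, :3]

        def target_registration_error(T_gold, T_est, targets):
            """Per-target Euclidean distance between gold-standard and estimated mappings."""
            return np.linalg.norm(apply_rigid(T_gold, targets) - apply_rigid(T_est, targets), axis=1)

        # Example: a 1 mm translation error along x yields a TRE of 1 mm at every target.
        T_gold = np.eye(4)
        T_est = np.eye(4); T_est[0, 3] = 1.0
        targets = np.array([[0.0, 0.0, 0.0], [10.0, 5.0, -3.0]])
        print(target_registration_error(T_gold, T_est, targets))  # -> [1. 1.]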

  9. Kinematic analysis of healthy hips during weight-bearing activities by 3D-to-2D model-to-image registration technique.

    PubMed

    Hara, Daisuke; Nakashima, Yasuharu; Hamai, Satoshi; Higaki, Hidehiko; Ikebe, Satoru; Shimoto, Takeshi; Hirata, Masanobu; Kanazawa, Masayuki; Kohno, Yusuke; Iwamoto, Yukihide

    2014-01-01

    Dynamic hip kinematics during weight-bearing activities were analyzed for six healthy subjects. Continuous X-ray images of gait, chair-rising, squatting, and twisting were taken using a flat panel X-ray detector. Digitally reconstructed radiographic images were used for the 3D-to-2D model-to-image registration technique. The root-mean-square errors associated with tracking the pelvis and femur were less than 0.3 mm and 0.3° for translations and rotations. For gait, chair-rising, and squatting, the maximum hip flexion angles averaged 29.6°, 81.3°, and 102.4°, respectively. The pelvis was tilted anteriorly around 4.4° on average during the full gait cycle. For chair-rising and squatting, the maximum absolute value of anterior/posterior pelvic tilt averaged 12.4°/11.7° and 10.7°/10.8°, respectively. Hip flexion peaked partway through the movement due to further anterior pelvic tilt during both chair-rising and squatting. For twisting, the maximum absolute value of hip internal/external rotation averaged 29.2°/30.7°. This study revealed activity-dependent kinematics of healthy hip joints with coordinated pelvic and femoral dynamic movements. Kinematic data during activities of daily living may provide important insight for evaluating the kinematics of pathological and reconstructed hips.

  10. Model-based measurement of food portion size for image-based dietary assessment using 3D/2D registration

    NASA Astrophysics Data System (ADS)

    Chen, Hsin-Chen; Jia, Wenyan; Yue, Yaofeng; Li, Zhaoxin; Sun, Yung-Nien; Fernstrom, John D.; Sun, Mingui

    2013-10-01

    Dietary assessment is important in health maintenance and intervention in many chronic conditions, such as obesity, diabetes and cardiovascular disease. However, there is currently a lack of convenient methods for measuring the volume of food (portion size) in real-life settings. We present a computational method to estimate food volume from a single photographic image of food contained on a typical dining plate. First, we calculate the food location with respect to a 3D camera coordinate system using the plate as a scale reference. Then, the food is segmented automatically from the background in the image. Adaptive thresholding and snake modeling are implemented based on several image features, such as color contrast, regional color homogeneity and curve bending degree. Next, a 3D model representing the general shape of the food (e.g., a cylinder, a sphere, etc) is selected from a pre-constructed shape model library. The position, orientation and scale of the selected shape model are determined by registering the projected 3D model and the food contour in the image, where the properties of the reference are used as constraints. Experimental results using various realistically shaped foods with known volumes demonstrated satisfactory performance of our image-based food volume measurement method even if the 3D geometric surface of the food is not completely represented in the input image.
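
    The scale reference is the key arithmetic step: the known plate diameter converts pixel measurements into physical units before the volume of the fitted shape model is evaluated. A toy sketch for a cylindrical model viewed roughly face-on (all dimensions are illustrative assumptions, not the authors' calibration procedure):

        import math

        def mm_per_pixel(plate_diameter_mm, plate_diameter_px):
            """Scale factor from the plate reference (assumes the plate is roughly parallel to the image)."""
            return plate_diameter_mm / plate_diameter_px

        def cylinder_volume_ml(diameter_px, height_px, scale_mm_per_px):
            """Volume of a cylindrical food model, converted from pixels to millilitres."""
            d_mm = diameter_px * scale_mm_per_px
            h_mm = height_px * scale_mm_per_px
            volume_mm3 = math.pi * (d_mm / 2.0) ** 2 * h_mm
            return volume_mm3 / 1000.0  # 1 ml = 1000 mm^3

        scale = mm_per_pixel(plate_diameter_mm=270.0, plate_diameter_px=900.0)  # 0.3 mm/px
        print("%.1f ml" % cylinder_volume_ml(diameter_px=300.0, height_px=120.0, scale_mm_per_px=scale))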

  11. Model-based measurement of food portion size for image-based dietary assessment using 3D/2D registration

    PubMed Central

    Chen, Hsin-Chen; Jia, Wenyan; Yue, Yaofeng; Li, Zhaoxin; Sun, Yung-Nien; Fernstrom, John D.; Sun, Mingui

    2013-01-01

    Dietary assessment is important in health maintenance and intervention in many chronic conditions, such as obesity, diabetes, and cardiovascular disease. However, there is currently a lack of convenient methods for measuring the volume of food (portion size) in real-life settings. We present a computational method to estimate food volume from a single photographical image of food contained in a typical dining plate. First, we calculate the food location with respect to a 3D camera coordinate system using the plate as a scale reference. Then, the food is segmented automatically from the background in the image. Adaptive thresholding and snake modeling are implemented based on several image features, such as color contrast, regional color homogeneity and curve bending degree. Next, a 3D model representing the general shape of the food (e.g., a cylinder, a sphere, etc.) is selected from a pre-constructed shape model library. The position, orientation and scale of the selected shape model are determined by registering the projected 3D model and the food contour in the image, where the properties of the reference are used as constraints. Experimental results using various realistically shaped foods with known volumes demonstrated satisfactory performance of our image based food volume measurement method even if the 3D geometric surface of the food is not completely represented in the input image. PMID:24223474

  12. Model-based measurement of food portion size for image-based dietary assessment using 3D/2D registration.

    PubMed

    Chen, Hsin-Chen; Jia, Wenyan; Yue, Yaofeng; Li, Zhaoxin; Sun, Yung-Nien; Fernstrom, John D; Sun, Mingui

    2013-10-01

    Dietary assessment is important in health maintenance and intervention in many chronic conditions, such as obesity, diabetes, and cardiovascular disease. However, there is currently a lack of convenient methods for measuring the volume of food (portion size) in real-life settings. We present a computational method to estimate food volume from a single photographical image of food contained in a typical dining plate. First, we calculate the food location with respect to a 3D camera coordinate system using the plate as a scale reference. Then, the food is segmented automatically from the background in the image. Adaptive thresholding and snake modeling are implemented based on several image features, such as color contrast, regional color homogeneity and curve bending degree. Next, a 3D model representing the general shape of the food (e.g., a cylinder, a sphere, etc.) is selected from a pre-constructed shape model library. The position, orientation and scale of the selected shape model are determined by registering the projected 3D model and the food contour in the image, where the properties of the reference are used as constraints. Experimental results using various realistically shaped foods with known volumes demonstrated satisfactory performance of our image based food volume measurement method even if the 3D geometric surface of the food is not completely represented in the input image.

  13. Projection-slice theorem based 2D-3D registration

    NASA Astrophysics Data System (ADS)

    van der Bom, M. J.; Pluim, J. P. W.; Homan, R.; Timmer, J.; Bartels, L. W.

    2007-03-01

    In X-ray guided procedures, the surgeon or interventionalist is dependent on his or her knowledge of the patient's specific anatomy and the projection images acquired during the procedure by a rotational X-ray source. Unfortunately, these X-ray projections fail to give information on the patient's anatomy in the dimension along the projection axis. It would be very profitable to provide the surgeon or interventionalist with 3D insight into the patient's anatomy that is directly linked to the X-ray images acquired during the procedure. In this paper we present a new robust 2D-3D registration method based on the Projection-Slice Theorem. This theorem gives us a relation between the pre-operative 3D data set and the interventional projection images. Registration is performed by minimizing a translation invariant similarity measure that is applied to the Fourier transforms of the images. The method was tested by performing multiple exhaustive searches on phantom data of the Circle of Willis and on a post-mortem human skull. Validation was performed visually by comparing the test projections to the ones that corresponded to the minimal value of the similarity measure. The projection-slice theorem based method was shown to be very effective and robust, providing capture ranges of up to 62 degrees. Experiments have shown that the method is capable of retrieving similar results when translations are applied to the projection images.
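
    The projection-slice theorem underlying the method states that the 1D Fourier transform of a parallel projection of an image equals the central slice of the image's 2D Fourier transform taken through the origin along the corresponding direction, which is why the similarity measure can be evaluated on Fourier transforms. A quick numerical check of the theorem along one image axis (not the authors' implementation):

        import numpy as np

        rng = np.random.default_rng(0)
        image = rng.random((64, 64))

        # Projection along the rows (sum over y) and its 1D Fourier transform.
        projection = image.sum(axis=0)
        ft_projection = np.fft.fft(projection)

        # Central slice of the 2D Fourier transform at ky = 0.
        central_slice = np.fft.fft2(image)[0, :]

        print(np.allclose(ft_projection, central_slice))  # True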

  14. Reconstruction of 3D lung models from 2D planning data sets for Hodgkin's lymphoma patients using combined deformable image registration and navigator channels

    SciTech Connect

    Ng, Angela; Nguyen, Thao-Nguyen; Moseley, Joanne L.; Hodgson, David C.; Sharpe, Michael B.; Brock, Kristy K.

    2010-03-15

    Purpose: Late complications (cardiac toxicities, secondary lung and breast cancer) remain a significant concern in the radiation treatment of Hodgkin's lymphoma (HL). To address this issue, predictive dose-risk models could potentially be used to estimate radiotherapy-related late toxicities. This study investigates the use of deformable image registration (DIR) and navigator channels (NCs) to reconstruct 3D lung models from 2D radiographic planning images, in order to retrospectively calculate the treatment dose exposure to HL patients treated with 2D planning, who are now experiencing late effects. Methods: Three-dimensional planning CT images of 52 current HL patients were acquired. 12 image sets were used to construct a male and a female population lung model. 23 "Reference" images were used to generate lung deformation adaptation templates, constructed by deforming the population model into each patient-specific lung geometry using a biomechanical-based DIR algorithm, MORFEUS. 17 "Test" patients were used to test the accuracy of the reconstruction technique by adapting existing templates using 2D digitally reconstructed radiographs. The adaptation process included three steps. First, a Reference patient was matched to a Test patient by thorax measurements. Second, four NCs (small regions of interest) were placed on the lung boundary to calculate 1D differences in lung edges. Third, the Reference lung model was adapted to the Test patient's lung using the 1D edge differences. The Reference-adapted Test model was then compared to the 3D lung contours of the actual Test patient by computing their percentage volume overlap (POL) and Dice coefficient. Results: The average percentage overlapping volume and Dice coefficient, expressed as a percentage, between the adapted and actual Test models were found to be 89.2±3.9% (Right lung=88.8%; Left lung=89.6%) and 89.3±2.7% (Right=88.5%; Left=90.2%), respectively. Paired T-tests demonstrated that the
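
    Both agreement scores used in the evaluation can be computed directly from binary masks of the adapted and actual lung models; a minimal sketch (the masks below are synthetic placeholders):

        import numpy as np

        def dice_coefficient(a, b):
            """Dice = 2|A intersect B| / (|A| + |B|) for boolean masks a, b."""
            a = a.astype(bool); b = b.astype(bool)
            return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

        def percent_volume_overlap(a, b):
            """Overlap of the adapted model with the actual model, as a percentage."""
            a = a.astype(bool); b = b.astype(bool)
            return 100.0 * np.logical_and(a, b).sum() / b.sum()

        a = np.zeros((50, 50, 50), bool); a[10:40, 10:40, 10:40] = True   # adapted model
        b = np.zeros((50, 50, 50), bool); b[12:42, 10:40, 10:40] = True   # actual model
        print(dice_coefficient(a, b), percent_volume_overlap(a, b))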

  15. Device and methods for "gold standard" registration of clinical 3D and 2D cerebral angiograms

    NASA Astrophysics Data System (ADS)

    Madan, Hennadii; Likar, Boštjan; Pernuš, Franjo; Špiclin, Žiga

    2015-03-01

    Translation of any novel and existing 3D-2D image registration methods into clinical image-guidance systems is limited due to lack of their objective validation on clinical image datasets. The main reason is that, besides the calibration of the 2D imaging system, a reference or "gold standard" registration is very difficult to obtain on clinical image datasets. In the context of cerebral endovascular image-guided interventions (EIGIs), we present a calibration device in the form of a headband with integrated fiducial markers and, secondly, propose an automated pipeline comprising 3D and 2D image processing, analysis and annotation steps, the result of which is a retrospective calibration of the 2D imaging system and an optimal, i.e., "gold standard" registration of 3D and 2D images. The device and methods were used to create the "gold standard" on 15 datasets of 3D and 2D cerebral angiograms, where each dataset was acquired from a patient undergoing EIGI for either aneurysm coiling or embolization of arteriovenous malformation. The use of the device integrated seamlessly into the clinical workflow of EIGI, while the automated pipeline eliminated all manual input and interactive image processing, analysis or annotation. In this way, the time to obtain the "gold standard" was reduced from 30 minutes to less than one minute, and the "gold standard" of 3D-2D registration on all 15 datasets of cerebral angiograms was obtained with sub-0.1 mm accuracy.

  16. Correspondenceless 3D-2D registration based on expectation conditional maximization

    NASA Astrophysics Data System (ADS)

    Kang, X.; Taylor, R. H.; Armand, M.; Otake, Y.; Yau, W. P.; Cheung, P. Y. S.; Hu, Y.

    2011-03-01

    3D-2D registration is a fundamental task in image guided interventions. Due to the physics of the X-ray imaging, however, traditional point based methods meet new challenges, where the local point features are indistinguishable, creating difficulties in establishing correspondence between 2D image feature points and 3D model points. In this paper, we propose a novel method to accomplish 3D-2D registration without known correspondences. Given a set of 3D and 2D unmatched points, this is achieved by introducing correspondence probabilities that we model as a mixture model. By casting it into the expectation conditional maximization framework, without establishing one-to-one point correspondences, we can iteratively refine the registration parameters. The method has been tested on 100 real X-ray images. The experiments showed that the proposed method accurately estimated the rotations (< 1°) and in-plane (X-Y plane) translations (< 1 mm).

  17. Image Registration Workshop Proceedings

    NASA Technical Reports Server (NTRS)

    LeMoigne, Jacqueline (Editor)

    1997-01-01

    Automatic image registration has often been considered as a preliminary step for higher-level processing, such as object recognition or data fusion. But with the unprecedented amounts of data which are being and will continue to be generated by newly developed sensors, the very topic of automatic image registration has become an important research topic in its own right. This workshop presents a collection of very high quality work which has been grouped into four main areas: (1) theoretical aspects of image registration; (2) applications to satellite imagery; (3) applications to medical imagery; and (4) image registration for computer vision research.

  18. Gradient-based 3D-2D registration of cerebral angiograms

    NASA Astrophysics Data System (ADS)

    Mitrović, Uroš; Markelj, Primož; Likar, Boštjan; Miloševič, Zoran; Pernuš, Franjo

    2011-03-01

    Endovascular treatment of cerebral aneurysms and arteriovenous malformations (AVM) involves navigation of a catheter through the femoral artery and vascular system into the brain and into the aneurysm or AVM. Intra-interventional navigation utilizes digital subtraction angiography (DSA) to visualize vascular structures and X-ray fluoroscopy to localize the endovascular components. Due to the two-dimensional (2D) nature of the intra-interventional images, navigation through a complex three-dimensional (3D) structure is a demanding task. Registration of pre-interventional MRA, CTA, or 3D-DSA images and intra-interventional 2D DSA images can greatly enhance visualization and navigation. As a consequence of better navigation in 3D, the amount of required contrast medium and absorbed dose could be significantly reduced. In the past, development and evaluation of 3D-2D registration methods received considerable attention, and several validation image databases and evaluation criteria were created and made publicly available. However, applications of 3D-2D registration methods to cerebral angiograms and their validation are rather scarce. In this paper, the 3D-2D robust gradient reconstruction-based (RGRB) registration algorithm is applied to CTA and DSA images and analyzed. For evaluation purposes, five image datasets, each comprised of a 3D CTA and several 2D DSA-like digitally reconstructed radiographs (DRRs) generated from the CTA, with accurate gold standard registrations were created. A total of 4000 registrations on these five datasets resulted in mean mTRE values between 0.07 and 0.59 mm, capture ranges between 6 and 11 mm and success rates between 61 and 88% using a failure threshold of 2 mm.
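
    The reported evaluation criteria follow the standardized methodology: a trial counts as successful if its final mTRE falls below the failure threshold (2 mm here), the success rate is the fraction of successful trials, and the capture range is commonly read off as the initial displacement up to which a required fraction of trials still succeed. A hedged sketch of that bookkeeping (the 1 mm bins, the 95% criterion, and the synthetic trial data are illustrative assumptions):

        import numpy as np

        def evaluate_trials(initial_mtre, final_mtre, threshold=2.0, bin_mm=1.0, frac=0.95):
            """Return overall success rate and capture range from per-trial mTRE values."""
            initial_mtre = np.asarray(initial_mtre); final_mtre = np.asarray(final_mtre)
            success = final_mtre < threshold
            success_rate = success.mean()

            # Capture range: largest initial-displacement bin such that all bins up to it
            # contain at least `frac` successful registrations.
            capture_range = 0.0
            for lo in np.arange(0.0, initial_mtre.max() + bin_mm, bin_mm):
                in_bin = (initial_mtre >= lo) & (initial_mtre < lo + bin_mm)
                if in_bin.sum() == 0 or success[in_bin].mean() < frac:
                    break
                capture_range = lo + bin_mm
            return success_rate, capture_range

        rng = np.random.default_rng(1)
        initial = rng.uniform(0, 15, 500)                        # initial mTRE in mm
        final = np.where(initial < 8, rng.uniform(0, 1.5, 500),  # converges when started close
                         rng.uniform(3, 20, 500))                # fails when started far away
        print(evaluate_trials(initial, final))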

  19. Robust initialization for 2D/3D registration of knee implant models to single-plane fluoroscopy

    NASA Astrophysics Data System (ADS)

    Hermans, J.; Claes, P.; Bellemans, J.; Vandermeulen, D.; Suetens, P.

    2007-03-01

    A fully automated initialization method is proposed for the 2D/3D registration of 3D CAD models of knee implant components to a single-plane calibrated fluoroscopy. The algorithm matches edge segments, detected in the fluoroscopy image, with pre-computed libraries of expected 2D silhouettes of the implant components. Each library entry represents a different combination of out-of-plane registration transformation parameters. Library matching is performed by computing point-based 2D/2D registrations between each library entry and each detected edge segment in the fluoroscopy image, resulting in an estimate of the in-plane registration transformation parameters. Point correspondences for registration are established by template matching of the bending patterns on the contours. A matching score for each individual 2D/2D registration is computed by evaluating the transformed library entry in an edge-encoded (characteristic) image, which is derived from the original fluoroscopy image. A matching-score accumulator is introduced to select and suggest one or more initial pose estimates. The proposed method is robust against occlusions and partial segmentations. Validation results are shown on simulated fluoroscopy images. In all cases, a library match that closely resembles the shape information in the fluoroscopy is found for each implant component. The feasibility of the proposed method is demonstrated by initializing an intensity-based 2D/3D registration method with the automatically obtained estimate of the registration transformation parameters.

  20. Self-Calibration of Cone-Beam CT Geometry Using 3D-2D Image Registration: Development and Application to Task-Based Imaging with a Robotic C-Arm

    PubMed Central

    Ouadah, S.; Stayman, J. W.; Gang, G.; Uneri, A.; Ehtiati, T.; Siewerdsen, J. H.

    2015-01-01

    Purpose Robotic C-arm systems are capable of general noncircular orbits whose trajectories can be driven by the particular imaging task. However, obtaining accurate calibrations for reconstruction in such geometries can be a challenging problem. This work proposes a method to perform a unique geometric calibration of an arbitrary C-arm orbit by registering 2D projections to a previously acquired 3D image to determine the transformation parameters representing the system geometry. Methods Experiments involved a cone-beam CT (CBCT) bench system, a robotic C-arm, and three phantoms. A robust 3D-2D registration process was used to compute the 9 degree of freedom (DOF) transformation between each projection and an existing 3D image by maximizing normalized gradient information with a digitally reconstructed radiograph (DRR) of the 3D volume. The quality of the resulting “self-calibration” was evaluated in terms of the agreement with an established calibration method using a BB phantom as well as image quality in the resulting CBCT reconstruction. Results The self-calibration yielded CBCT images without significant difference in spatial resolution from the standard (“true”) calibration methods (p-value >0.05 for all three phantoms), and the differences between CBCT images reconstructed using the “self” and “true” calibration methods were on the order of 10⁻³ mm⁻¹. Maximum error in magnification was 3.2%, and back-projection ray placement was within 0.5 mm. Conclusion The proposed geometric “self” calibration provides a means for 3D imaging on general non-circular orbits in CBCT systems for which a geometric calibration is either not available or not reproducible. The method forms the basis of advanced “task-based” 3D imaging methods now in development for robotic C-arms. PMID:26388661
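
    The normalized gradient information driving this registration rewards pixels where the DRR and the projection have strong gradients pointing in parallel or anti-parallel directions, normalized by the fixed image scored against itself. The sketch below is a simplified 2D version in the spirit of Pluim et al.'s gradient-information measure, not the authors' implementation:

        import numpy as np

        def gradient_information(a, b):
            """Sum over pixels of w(angle) * min(|grad a|, |grad b|), with
            w = (cos(2*angle) + 1) / 2 favouring parallel or anti-parallel gradients."""
            gay, gax = np.gradient(a)
            gby, gbx = np.gradient(b)
            mag_a = np.hypot(gax, gay)
            mag_b = np.hypot(gbx, gby)
            cos_angle = (gax * gbx + gay * gby) / (mag_a * mag_b + 1e-12)
            weight = (np.cos(2.0 * np.arccos(np.clip(cos_angle, -1.0, 1.0))) + 1.0) / 2.0
            return np.sum(weight * np.minimum(mag_a, mag_b))

        def normalized_gradient_information(fixed, moving):
            """NGI: gradient information of (fixed, moving) relative to (fixed, fixed)."""
            return gradient_information(fixed, moving) / gradient_information(fixed, fixed)

        rng = np.random.default_rng(2)
        fixed = rng.random((64, 64))
        print(normalized_gradient_information(fixed, fixed))                 # ~1.0
        print(normalized_gradient_information(fixed, rng.random((64, 64))))  # well below 1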

  1. Self-calibration of cone-beam CT geometry using 3D-2D image registration: development and application to task-based imaging with a robotic C-arm

    NASA Astrophysics Data System (ADS)

    Ouadah, S.; Stayman, J. W.; Gang, G.; Uneri, A.; Ehtiati, T.; Siewerdsen, J. H.

    2015-03-01

    Purpose: Robotic C-arm systems are capable of general noncircular orbits whose trajectories can be driven by the particular imaging task. However, obtaining accurate calibrations for reconstruction in such geometries can be a challenging problem. This work proposes a method to perform a unique geometric calibration of an arbitrary C-arm orbit by registering 2D projections to a previously acquired 3D image to determine the transformation parameters representing the system geometry. Methods: Experiments involved a cone-beam CT (CBCT) bench system, a robotic C-arm, and three phantoms. A robust 3D-2D registration process was used to compute the 9 degree of freedom (DOF) transformation between each projection and an existing 3D image by maximizing normalized gradient information with a digitally reconstructed radiograph (DRR) of the 3D volume. The quality of the resulting "self-calibration" was evaluated in terms of the agreement with an established calibration method using a BB phantom as well as image quality in the resulting CBCT reconstruction. Results: The self-calibration yielded CBCT images without significant difference in spatial resolution from the standard ("true") calibration methods (p-value >0.05 for all three phantoms), and the differences between CBCT images reconstructed using the "self" and "true" calibration methods were on the order of 10⁻³ mm⁻¹. Maximum error in magnification was 3.2%, and back-projection ray placement was within 0.5 mm. Conclusion: The proposed geometric "self" calibration provides a means for 3D imaging on general noncircular orbits in CBCT systems for which a geometric calibration is either not available or not reproducible. The method forms the basis of advanced "task-based" 3D imaging methods now in development for robotic C-arms.

  2. 3D-2D registration of cerebral angiograms based on vessel directions and intensity gradients

    NASA Astrophysics Data System (ADS)

    Mitrovic, Uroš; Špiclin, Žiga; Štern, Darko; Markelj, Primož; Likar, Boštjan; Miloševic, Zoran; Pernuš, Franjo

    2012-02-01

    Endovascular treatment of cerebral aneurysms and arteriovenous malformations (AVM) involves navigation of a catheter through the femoral artery and vascular system to the site of pathology. Intra-interventional navigation is done under the guidance of one or at most two two-dimensional (2D) X-ray fluoroscopic images or 2D digital subtracted angiograms (DSA). Due to the projective nature of 2D images, the interventionist needs to mentally reconstruct the position of the catheter with respect to the three-dimensional (3D) patient vasculature, which is not a trivial task. By 3D-2D registration of pre-interventional 3D images like CTA, MRA or 3D-DSA and intra-interventional 2D images, intra-interventional tools such as catheters can be visualized on the 3D model of patient vasculature, allowing easier and faster navigation. Such navigation may consequently reduce the total ionizing dose and the amount of delivered contrast medium. In the past, development and evaluation of 3D-2D registration methods for endovascular treatments received considerable attention. The main drawback of these methods is that they have to be initialized rather close to the correct position, as they mostly have a rather small capture range. In this paper, a novel registration method with a larger capture range and higher success rate is proposed. The proposed method and a state-of-the-art method were tested and evaluated on synthetic and clinical 3D-2D image pairs. The results on both databases indicate that although the proposed method was slightly less accurate, it significantly outperformed the state-of-the-art 3D-2D registration method in terms of robustness, as measured by capture range and success rate.

  3. Validation for 2D/3D registration I: A new gold standard data set

    SciTech Connect

    Pawiro, S. A.; Markelj, P.; Pernus, F.; Gendrin, C.; Figl, M.; Weber, C.; Kainberger, F.; Noebauer-Huhmann, I.; Bergmeister, H.; Stock, M.; Georg, D.; Bergmann, H.; Birkfellner, W.

    2011-03-15

    Purpose: In this article, the authors propose a new gold standard data set for the validation of two-dimensional/three-dimensional (2D/3D) and 3D/3D image registration algorithms. Methods: A gold standard data set was produced using a fresh cadaver pig head with attached fiducial markers. The authors used several imaging modalities common in diagnostic imaging or radiotherapy, which include 64-slice computed tomography (CT), magnetic resonance imaging using T1, T2, and proton density sequences, and cone beam CT imaging data. Radiographic data were acquired using kilovoltage and megavoltage imaging techniques. The image information reflects both anatomy and reliable fiducial marker information and improves over existing data sets by the level of anatomical detail, image data quality, and soft-tissue content. The markers on the 3D and 2D image data were segmented using ANALYZE 10.0 (AnalyzeDirect, Inc., Kansas City, KS) and in-house software. Results: The projection distance errors and the expected target registration errors over all the image data sets were found to be less than 2.71 and 1.88 mm, respectively. Conclusions: The gold standard data set, obtained with state-of-the-art imaging technology, has the potential to improve the validation of 2D/3D and 3D/3D registration algorithms for image guided therapy.

  4. Locally adaptive 2D-3D registration using vascular structure model for liver catheterization.

    PubMed

    Kim, Jihye; Lee, Jeongjin; Chung, Jin Wook; Shin, Yeong-Gil

    2016-03-01

    Two-dimensional-three-dimensional (2D-3D) registration between intra-operative 2D digital subtraction angiography (DSA) and pre-operative 3D computed tomography angiography (CTA) can be used for roadmapping purposes. However, through the projection of 3D vessels, incorrect intersections and overlaps between vessels are produced because of the complex vascular structure, which makes it difficult to obtain the correct solution of 2D-3D registration. To overcome these problems, we propose a registration method that selects a suitable part of a 3D vascular structure for a given DSA image and finds the optimized solution to the partial 3D structure. The proposed algorithm can reduce the registration errors because it restricts the range of the 3D vascular structure for the registration by using only the relevant 3D vessels with the given DSA. To search for the appropriate 3D partial structure, we first construct a tree model of the 3D vascular structure and divide it into several subtrees in accordance with the connectivity. Then, the best matched subtree with the given DSA image is selected using the results from the coarse registration between each subtree and the vessels in the DSA image. Finally, a fine registration is conducted to minimize the difference between the selected subtree and the vessels of the DSA image. In experimental results obtained using 10 clinical datasets, the average distance errors in the case of the proposed method were 2.34±1.94mm. The proposed algorithm converges faster and produces more correct results than the conventional method in evaluations on patient datasets.

  5. Image registration by parts

    NASA Technical Reports Server (NTRS)

    Chalermwat, Prachya; El-Ghazawi, Tarek; LeMoigne, Jacqueline

    1997-01-01

    In spite of the large number of different image registration techniques, most of these techniques use the correlation operation to match spatial image characteristics. Correlation is known to be one of the most computationally intensive operations and its computational needs grow rapidly with the increase in the image sizes. In this article, we show that, in many cases, it might be sufficient to determine image transformations by considering only one or several parts of the image rather than the entire image, which could result in substantial computational savings. This paper introduces the concept of registration by parts and investigates its viability. It describes alternative techniques for such image registration by parts and presents early empirical results that address the underlying trade-offs.
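
    The premise of registration by parts is that a translation estimated from a well-chosen sub-image often matches the one estimated from the full image at a fraction of the correlation cost. A hedged illustration using cross-correlation of a single zero-mean patch against the target image (synthetic data, not the authors' code):

        import numpy as np
        from scipy.signal import correlate2d

        rng = np.random.default_rng(3)
        reference = rng.random((128, 128))
        target = np.roll(reference, (4, -7), axis=(0, 1))   # "unknown" translation (4, -7)

        # Registration by parts: match only a 32x32 patch instead of the whole image.
        r0, c0 = 48, 48
        patch = reference[r0:r0 + 32, c0:c0 + 32]
        patch = patch - patch.mean()

        score = correlate2d(target - target.mean(), patch, mode="valid")
        peak = np.unravel_index(np.argmax(score), score.shape)
        print((peak[0] - r0, peak[1] - c0))                 # estimated translation -> (4, -7)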

  6. 3D-2D registration for surgical guidance: effect of projection view angles on registration accuracy.

    PubMed

    Uneri, A; Otake, Y; Wang, A S; Kleinszig, G; Vogt, S; Khanna, A J; Siewerdsen, J H

    2014-01-20

    An algorithm for intensity-based 3D-2D registration of CT and x-ray projections is evaluated, specifically using single- or dual-projection views to provide 3D localization. The registration framework employs the gradient information similarity metric and covariance matrix adaptation evolution strategy to solve for the patient pose in six degrees of freedom. Registration performance was evaluated in an anthropomorphic phantom and cadaver, using C-arm projection views acquired at angular separation, Δθ, ranging from ∼0°-180° at variable C-arm magnification. Registration accuracy was assessed in terms of 2D projection distance error and 3D target registration error (TRE) and compared to that of an electromagnetic (EM) tracker. The results indicate that angular separation as small as Δθ ∼10°-20° achieved TRE <2 mm with 95% confidence, comparable or superior to that of the EM tracker. The method allows direct registration of preoperative CT and planning data to intraoperative fluoroscopy, providing 3D localization free from conventional limitations associated with external fiducial markers, stereotactic frames, trackers and manual registration.
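
    The registration solves for a six-degree-of-freedom pose with a derivative-free optimizer (CMA-ES in the paper). The sketch below keeps only that structure: a toy pinhole projection, a reprojection-error cost, and scipy's Powell method standing in for CMA-ES; the geometry, focal length, and landmark points are illustrative assumptions.

        import numpy as np
        from scipy.optimize import minimize
        from scipy.spatial.transform import Rotation

        def project(points3d, pose6):
            """Toy pinhole projection of 3D points under a 6-DOF pose:
            3 Euler angles (degrees) and 3 translations (mm); focal length is illustrative."""
            rx, ry, rz, tx, ty, tz = pose6
            R = Rotation.from_euler("xyz", [rx, ry, rz], degrees=True).as_matrix()
            cam = points3d @ R.T + np.array([tx, ty, tz + 1000.0])  # ~1 m source-to-object
            return 1200.0 * cam[:, :2] / cam[:, 2:3]

        def cost(pose6, points3d, observed2d):
            """Mean squared 2D projection distance error for a candidate pose."""
            return np.mean(np.sum((project(points3d, pose6) - observed2d) ** 2, axis=1))

        rng = np.random.default_rng(4)
        points = rng.uniform(-50, 50, (200, 3))                 # synthetic 3D landmarks (mm)
        true_pose = np.array([2.0, -3.0, 1.0, 4.0, -2.0, 5.0])
        observed = project(points, true_pose)

        # Derivative-free pose search (Powell here as a simple stand-in for CMA-ES).
        result = minimize(cost, x0=np.zeros(6), args=(points, observed), method="Powell")
        print(np.round(result.x, 2))                            # should be close to true_pose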

  7. 3D-2D registration for surgical guidance: effect of projection view angles on registration accuracy

    NASA Astrophysics Data System (ADS)

    Uneri, A.; Otake, Y.; Wang, A. S.; Kleinszig, G.; Vogt, S.; Khanna, A. J.; Siewerdsen, J. H.

    2014-01-01

    An algorithm for intensity-based 3D-2D registration of CT and x-ray projections is evaluated, specifically using single- or dual-projection views to provide 3D localization. The registration framework employs the gradient information similarity metric and covariance matrix adaptation evolution strategy to solve for the patient pose in six degrees of freedom. Registration performance was evaluated in an anthropomorphic phantom and cadaver, using C-arm projection views acquired at angular separation, Δθ, ranging from ˜0°-180° at variable C-arm magnification. Registration accuracy was assessed in terms of 2D projection distance error and 3D target registration error (TRE) and compared to that of an electromagnetic (EM) tracker. The results indicate that angular separation as small as Δθ ˜10°-20° achieved TRE <2 mm with 95% confidence, comparable or superior to that of the EM tracker. The method allows direct registration of preoperative CT and planning data to intraoperative fluoroscopy, providing 3D localization free from conventional limitations associated with external fiducial markers, stereotactic frames, trackers and manual registration.

  8. 3D–2D registration for surgical guidance: effect of projection view angles on registration accuracy

    PubMed Central

    Uneri, A; Otake, Y; Wang, A S; Kleinszig, G; Vogt, S; Khanna, A J; Siewerdsen, J H

    2016-01-01

    An algorithm for intensity-based 3D–2D registration of CT and x-ray projections is evaluated, specifically using single- or dual-projection views to provide 3D localization. The registration framework employs the gradient information similarity metric and covariance matrix adaptation evolution strategy to solve for the patient pose in six degrees of freedom. Registration performance was evaluated in an anthropomorphic phantom and cadaver, using C-arm projection views acquired at angular separation, Δθ, ranging from ~0°–180° at variable C-arm magnification. Registration accuracy was assessed in terms of 2D projection distance error and 3D target registration error (TRE) and compared to that of an electromagnetic (EM) tracker. The results indicate that angular separation as small as Δθ ~10°–20° achieved TRE <2 mm with 95% confidence, comparable or superior to that of the EM tracker. The method allows direct registration of preoperative CT and planning data to intraoperative fluoroscopy, providing 3D localization free from conventional limitations associated with external fiducial markers, stereotactic frames, trackers and manual registration. PMID:24351769

  9. Staring 2-D Hadamard transform spectral imager

    DOEpatents

    Gentry, Stephen M.; Wehlburg, Christine M.; Wehlburg, Joseph C.; Smith, Mark W.; Smith, Jody L.

    2006-02-07

    A staring imaging system inputs a 2D spatial image containing multi-frequency spectral information. This image is encoded in one dimension of the image with a cyclic Hadamard S-matrix. The resulting image is detected with a spatial 2D detector, and a computer applies a Hadamard transform to recover the encoded image.
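
    The encoding idea can be demonstrated on a single line of the image: measurements are weighted sums of the spectral elements given by rows of an S-matrix, and the spectrum is recovered by inverting that matrix. The sketch below builds the S-matrix from a standard (Sylvester) Hadamard matrix rather than the cyclic construction used in the patent; the data values are arbitrary.

        import numpy as np
        from scipy.linalg import hadamard

        # Build a 7x7 S-matrix from the normalized 8x8 Hadamard matrix:
        # drop the first row/column and map +1 -> 0, -1 -> 1 (open/closed mask elements).
        H = hadamard(8)
        S = (1 - H[1:, 1:]) // 2

        x = np.array([3.0, 1.0, 4.0, 1.0, 5.0, 9.0, 2.0])   # spectrum along the encoded axis
        y = S @ x                                            # multiplexed measurements

        # Decode: for S-matrices, inv(S) = 2/(n+1) * (2*S.T - 1); np.linalg.solve works too.
        n = S.shape[0]
        S_inv = 2.0 / (n + 1) * (2 * S.T - np.ones((n, n)))
        print(np.allclose(S_inv @ y, x))   # True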

  10. Registration of 3D+t coronary CTA and monoplane 2D+t X-ray angiography.

    PubMed

    Metz, Coert T; Schaap, Michiel; Klein, Stefan; Baka, Nora; Neefjes, Lisan A; Schultz, Carl J; Niessen, Wiro J; van Walsum, Theo

    2013-05-01

    A method for registering preoperative 3D+t coronary CTA with intraoperative monoplane 2D+t X-ray angiography images is proposed to improve image guidance during minimally invasive coronary interventions. The method uses a patient-specific dynamic coronary model, which is derived from the CTA scan by centerline extraction and motion estimation. The dynamic coronary model is registered with the 2D+t X-ray sequence, considering multiple X-ray time points concurrently, while taking breathing induced motion into account. Evaluation was performed on 26 datasets of 17 patients by comparing projected model centerlines with manually annotated centerlines in the X-ray images. The proposed 3D+t/2D+t registration method performed better than a 3D/2D registration method with respect to the accuracy and especially the robustness of the registration. Registration with a median error of 1.47 mm was achieved.

  11. Evaluation of low-dose limits in 3D-2D rigid registration for surgical guidance

    NASA Astrophysics Data System (ADS)

    Uneri, A.; Wang, A. S.; Otake, Y.; Kleinszig, G.; Vogt, S.; Khanna, A. J.; Gallia, G. L.; Gokaslan, Z. L.; Siewerdsen, J. H.

    2014-09-01

    An algorithm for intensity-based 3D-2D registration of CT and C-arm fluoroscopy is evaluated for use in surgical guidance, specifically considering the low-dose limits of the fluoroscopic x-ray projections. The registration method is based on a framework using the covariance matrix adaptation evolution strategy (CMA-ES) to identify the 3D patient pose that maximizes the gradient information similarity metric. Registration performance was evaluated in an anthropomorphic head phantom emulating intracranial neurosurgery, using target registration error (TRE) to characterize accuracy and robustness in terms of 95% confidence upper bound in comparison to that of an infrared surgical tracking system. Three clinical scenarios were considered: (1) single-view image + guidance, wherein a single x-ray projection is used for visualization and 3D-2D guidance; (2) dual-view image + guidance, wherein one projection is acquired for visualization, combined with a second (lower-dose) projection acquired at a different C-arm angle for 3D-2D guidance; and (3) dual-view guidance, wherein both projections are acquired at low dose for the purpose of 3D-2D guidance alone (not visualization). In each case, registration accuracy was evaluated as a function of the entrance surface dose associated with the projection view(s). Results indicate that images acquired at a dose as low as 4 μGy (approximately one-tenth the dose of a typical fluoroscopic frame) were sufficient to provide TRE comparable or superior to that of conventional surgical tracking, allowing 3D-2D guidance at a level of dose that is at most 10% greater than conventional fluoroscopy (scenario #2) and potentially reducing the dose to approximately 20% of the level in a conventional fluoroscopically guided procedure (scenario #3).

  12. Evaluation of low-dose limits in 3D-2D rigid registration for surgical guidance.

    PubMed

    Uneri, A; Wang, A S; Otake, Y; Kleinszig, G; Vogt, S; Khanna, A J; Gallia, G L; Gokaslan, Z L; Siewerdsen, J H

    2014-09-21

    An algorithm for intensity-based 3D-2D registration of CT and C-arm fluoroscopy is evaluated for use in surgical guidance, specifically considering the low-dose limits of the fluoroscopic x-ray projections. The registration method is based on a framework using the covariance matrix adaptation evolution strategy (CMA-ES) to identify the 3D patient pose that maximizes the gradient information similarity metric. Registration performance was evaluated in an anthropomorphic head phantom emulating intracranial neurosurgery, using target registration error (TRE) to characterize accuracy and robustness in terms of 95% confidence upper bound in comparison to that of an infrared surgical tracking system. Three clinical scenarios were considered: (1) single-view image+guidance, wherein a single x-ray projection is used for visualization and 3D-2D guidance; (2) dual-view image+guidance, wherein one projection is acquired for visualization, combined with a second (lower-dose) projection acquired at a different C-arm angle for 3D-2D guidance; and (3) dual-view guidance, wherein both projections are acquired at low dose for the purpose of 3D-2D guidance alone (not visualization). In each case, registration accuracy was evaluated as a function of the entrance surface dose associated with the projection view(s). Results indicate that images acquired at a dose as low as 4 μGy (approximately one-tenth the dose of a typical fluoroscopic frame) were sufficient to provide TRE comparable or superior to that of conventional surgical tracking, allowing 3D-2D guidance at a level of dose that is at most 10% greater than conventional fluoroscopy (scenario #2) and potentially reducing the dose to approximately 20% of the level in a conventional fluoroscopically guided procedure (scenario #3).

  13. 2D/3D registration for X-ray guided bronchoscopy using distance map classification.

    PubMed

    Xu, Di; Xu, Sheng; Herzka, Daniel A; Yung, Rex C; Bergtholdt, Martin; Gutierrez, Luis F; McVeigh, Elliot R

    2010-01-01

    In X-ray guided bronchoscopy of peripheral pulmonary lesions, airways and nodules are hardly visible in X-ray images. Transbronchial biopsy of peripheral lesions is often carried out blindly, resulting in degraded diagnostic yield. One solution of this problem is to superimpose the lesions and airways segmented from preoperative 3D CT images onto 2D X-ray images. A feature-based 2D/3D registration method is proposed for the image fusion between the datasets of the two imaging modalities. Two stereo X-ray images are used in the algorithm to improve the accuracy and robustness of the registration. The algorithm extracts the edge features of the bony structures from both CT and X-ray images. The edge points from the X-ray images are categorized into eight groups based on the orientation information of their image gradients. An orientation dependent Euclidean distance map is generated for each group of X-ray feature points. The distance map is then applied to the edge points of the projected CT images whose gradient orientations are compatible with the distance map. The CT and X-ray images are registered by matching the boundaries of the projected CT segmentations to the closest edges of the X-ray images after the orientation constraint is satisfied. Phantom and clinical studies were carried out to validate the algorithm's performance, showing a registration accuracy of 4.19 ± 0.5 mm with 48.39 ± 9.6 seconds registration time. The algorithm was also evaluated on clinical data, showing promising registration accuracy and robustness.
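
    The orientation-dependent distance maps can be sketched as follows: edge pixels are binned into eight groups by gradient orientation, and a Euclidean distance transform is computed per group, so that a projected CT edge point is only compared against X-ray edges of compatible orientation. The threshold, Sobel gradients, and other details below are illustrative, not the authors' exact pipeline.

        import numpy as np
        from scipy.ndimage import distance_transform_edt, sobel

        def orientation_binned_distance_maps(image, edge_threshold=0.5, n_bins=8):
            """Return a (n_bins, H, W) stack: distance to the nearest edge pixel whose
            gradient orientation falls in each bin."""
            gy = sobel(image, axis=0)
            gx = sobel(image, axis=1)
            magnitude = np.hypot(gx, gy)
            orientation = np.arctan2(gy, gx)                      # in (-pi, pi]
            edges = magnitude > edge_threshold * magnitude.max()

            bin_index = ((orientation + np.pi) / (2 * np.pi) * n_bins).astype(int) % n_bins
            maps = np.empty((n_bins,) + image.shape)
            for b in range(n_bins):
                features = edges & (bin_index == b)
                # distance_transform_edt measures distance to the nearest zero, so invert.
                maps[b] = distance_transform_edt(~features) if features.any() else np.inf
            return maps

        img = np.zeros((64, 64)); img[:, 32:] = 1.0               # a single vertical edge
        maps = orientation_binned_distance_maps(img)
        print(maps.shape)                                          # (8, 64, 64)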

  14. Automatic pose initialization for accurate 2D/3D registration applied to abdominal aortic aneurysm endovascular repair

    NASA Astrophysics Data System (ADS)

    Miao, Shun; Lucas, Joseph; Liao, Rui

    2012-02-01

    Minimally invasive abdominal aortic aneurysm (AAA) stenting can be greatly facilitated by overlaying the preoperative 3-D model of the abdominal aorta onto the intra-operative 2-D X-ray images. Accurate 2-D/3-D registration in 3-D space makes the 2-D/3-D overlay robust to the change of C-Arm angulations. So far, 2-D/3-D registration methods based on simulated X-ray projection images using multiple image planes have been shown to be able to provide satisfactory 3-D registration accuracy. However, one drawback of the intensity-based 2-D/3-D registration methods is that the similarity measure is usually highly non-convex and hence the optimizer can easily be trapped into local minima. User interaction therefore is often needed in the initialization of the position of the 3-D model in order to get a successful 2-D/3-D registration. In this paper, a novel 3-D pose initialization technique is proposed, as an extension of our previously proposed bi-plane 2-D/3-D registration method for AAA intervention [4]. The proposed method detects vessel bifurcation points and spine centerline in both 2-D and 3-D images, and utilizes landmark information to bring the 3-D volume into a 15 mm capture range. The proposed landmark detection method was validated on a real dataset, and is shown to be able to provide a good initialization for 2-D/3-D registration in [4], thus making the workflow fully automatic.

  15. Intensity-based 2D/3D registration for lead localization in robot guided deep brain stimulation.

    PubMed

    Hunsche, Stefan; Sauner, Dieter; Majdoub, Faycal El; Neudorfer, Clemens; Poggenborg, Jörg; Goßmann, Axel; Maarouf, Mohammad

    2017-03-21

    Intraoperative assessment of lead localization has become a standard procedure during deep brain stimulation surgery in many centers, allowing immediate verification of targeting accuracy and, if necessary, adjustment of the trajectory. The most suitable imaging modality to determine lead positioning, however, remains controversial. Current approaches entail the implementation of computed tomography and magnetic resonance imaging. In the present study, we adopted the technique of intensity-based 2D/3D registration that is commonly employed in stereotactic radiotherapy and spinal surgery. For this purpose, intraoperatively acquired 2D x-ray images were fused with preoperative 3D computed tomography (CT) data to verify lead placement during stereotactic robot-assisted surgery. Accuracy of lead localization determined from 2D/3D registration was compared to conventional 3D/3D registration in a subsequent patient study. The mean Euclidean distance of lead coordinates estimated from intensity-based 2D/3D registration versus flat-panel detector CT 3D/3D registration was 0.7 ± 0.2 mm. Maximum values of these distances amounted to 1.2 mm. To further investigate 2D/3D registration, a simulation study was conducted, challenging two observers to visually assess artificially generated 2D/3D registration errors. 95% of deviation simulations which were visually assessed as sufficient had a registration error below 0.7 mm. In conclusion, 2D/3D intensity-based registration revealed high accuracy and reliability during robot-guided stereotactic neurosurgery and holds great potential as a low-dose, cost-effective means for intraoperative lead localization.

  16. Intensity-based 2D/3D registration for lead localization in robot guided deep brain stimulation

    NASA Astrophysics Data System (ADS)

    Hunsche, Stefan; Sauner, Dieter; El Majdoub, Faycal; Neudorfer, Clemens; Poggenborg, Jörg; Goßmann, Axel; Maarouf, Mohammad

    2017-03-01

    Intraoperative assessment of lead localization has become a standard procedure during deep brain stimulation surgery in many centers, allowing immediate verification of targeting accuracy and, if necessary, adjustment of the trajectory. The most suitable imaging modality to determine lead positioning, however, remains controversial. Current approaches entail the implementation of computed tomography and magnetic resonance imaging. In the present study, we adopted the technique of intensity-based 2D/3D registration that is commonly employed in stereotactic radiotherapy and spinal surgery. For this purpose, intraoperatively acquired 2D x-ray images were fused with preoperative 3D computed tomography (CT) data to verify lead placement during stereotactic robot-assisted surgery. Accuracy of lead localization determined from 2D/3D registration was compared to conventional 3D/3D registration in a subsequent patient study. The mean Euclidean distance of lead coordinates estimated from intensity-based 2D/3D registration versus flat-panel detector CT 3D/3D registration was 0.7 ± 0.2 mm. Maximum values of these distances amounted to 1.2 mm. To further investigate 2D/3D registration, a simulation study was conducted, challenging two observers to visually assess artificially generated 2D/3D registration errors. 95% of deviation simulations which were visually assessed as sufficient had a registration error below 0.7 mm. In conclusion, 2D/3D intensity-based registration revealed high accuracy and reliability during robot-guided stereotactic neurosurgery and holds great potential as a low-dose, cost-effective means for intraoperative lead localization.

  17. 3D/2D registration and segmentation of scoliotic vertebrae using statistical models.

    PubMed

    Benameur, Said; Mignotte, Max; Parent, Stefan; Labelle, Hubert; Skalli, Wafa; de Guise, Jacques

    2003-01-01

    We propose a new 3D/2D registration method for vertebrae of the scoliotic spine, using two conventional radiographic views (postero-anterior and lateral), and a priori global knowledge of the geometric structure of each vertebra. This geometric knowledge is efficiently captured by a statistical deformable template integrating a set of admissible deformations, expressed by the first modes of variation in Karhunen-Loeve expansion, of the pathological deformations observed on a representative scoliotic vertebra population. The proposed registration method consists of fitting the projections of this deformable template with the preliminary segmented contours of the corresponding vertebra on the two radiographic views. The 3D/2D registration problem is stated as the minimization of a cost function for each vertebra and solved with a gradient descent technique. Registration of the spine is then done vertebra by vertebra. The proposed method efficiently provides accurate 3D reconstruction of each scoliotic vertebra and, consequently, it also provides accurate knowledge of the 3D structure of the whole scoliotic spine. This registration method has been successfully tested on several biplanar radiographic images and validated on 57 scoliotic vertebrae. The validation results reported in this paper demonstrate that the proposed statistical scheme performs better than other conventional 3D reconstruction methods.
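
    The deformable template is essentially a point-distribution model: a mean shape plus the leading Karhunen-Loeve (principal component) modes estimated from the training population, with new instances generated by weighting those modes. A minimal sketch of building and sampling such a model from pre-aligned training shapes (the data and dimensions are synthetic):

        import numpy as np

        def build_shape_model(shapes, n_modes=3):
            """shapes: (n_samples, n_points * dim) array of pre-aligned training shapes.
            Returns the mean shape, the first n_modes eigenvectors and their std. devs."""
            mean = shapes.mean(axis=0)
            centered = shapes - mean
            # Karhunen-Loeve expansion via SVD of the centered data matrix.
            _, singular_values, vt = np.linalg.svd(centered, full_matrices=False)
            modes = vt[:n_modes]
            stddevs = singular_values[:n_modes] / np.sqrt(shapes.shape[0] - 1)
            return mean, modes, stddevs

        def synthesize(mean, modes, stddevs, b):
            """Instantiate a shape from mode coefficients b (in units of std. dev.)."""
            return mean + (np.asarray(b) * stddevs) @ modes

        rng = np.random.default_rng(5)
        training = rng.normal(size=(40, 2 * 30))      # 40 vertebra outlines, 30 2D points each
        mean, modes, sd = build_shape_model(training)
        print(synthesize(mean, modes, sd, [1.0, -0.5, 0.0]).shape)   # (60,)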

  18. Deformable 3D-2D registration for CT and its application to low dose tomographic fluoroscopy

    NASA Astrophysics Data System (ADS)

    Flach, Barbara; Brehm, Marcus; Sawall, Stefan; Kachelrieß, Marc

    2014-12-01

    Many applications in medical imaging include image registration for matching of images from the same or different modalities. In the case of full data sampling, the respective reconstructed images are usually of such a good image quality that standard deformable volume-to-volume (3D-3D) registration approaches can be applied. But research in temporal-correlated image reconstruction and dose reductions increases the number of cases where rawdata are available from only few projection angles. Here, deteriorated image quality leads to non-acceptable deformable volume-to-volume registration results. Therefore a registration approach is required that is robust against a decreasing number of projections defining the target position. We propose a deformable volume-to-rawdata (3D-2D) registration method that aims at finding a displacement vector field maximizing the alignment of a CT volume and the acquired rawdata based on the sum of squared differences in rawdata domain. The registration is constrained by a regularization term in accordance with a fluid-based diffusion. Both cost function components, the rawdata fidelity and the regularization term, are optimized in an alternating manner. The matching criterion is optimized by a conjugate gradient descent for nonlinear functions, while the regularization is realized by convolution of the vector fields with Gaussian kernels. We validate the proposed method and compare it to the demons algorithm, a well-known 3D-3D registration method. The comparison is done for a range of 4-60 target projections using datasets from low dose tomographic fluoroscopy as an application example. The results show a high correlation to the ground truth target position without introducing artifacts even in the case of very few projections. In particular the matching in the rawdata domain is improved compared to the 3D-3D registration for the investigated range. The proposed volume-to-rawdata registration increases the robustness regarding sparse
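
    The alternating scheme of a data-fidelity update followed by Gaussian smoothing of the displacement field can be illustrated with a toy 2D demons-style iteration; note that the authors optimize the fidelity term in the rawdata (projection) domain, whereas this sketch uses an image-domain SSD purely for illustration, and all parameters are arbitrary.

        import numpy as np
        from scipy.ndimage import gaussian_filter, map_coordinates

        def warp(image, u):
            """Warp an image with a displacement field u of shape (2, H, W)."""
            grid = np.indices(image.shape).astype(float)
            return map_coordinates(image, grid + u, order=1, mode="nearest")

        def demons_step(fixed, moving, u, sigma_fluid=2.0, sigma_diffusion=2.0):
            """One alternating update: data-driven force, then Gaussian regularization."""
            warped = warp(moving, u)
            diff = warped - fixed
            gy, gx = np.gradient(fixed)
            denom = gx ** 2 + gy ** 2 + diff ** 2 + 1e-9
            force = np.stack([-diff * gy / denom, -diff * gx / denom])
            force = gaussian_filter(force, sigma=(0, sigma_fluid, sigma_fluid))       # fluid-like
            u = u + force
            return gaussian_filter(u, sigma=(0, sigma_diffusion, sigma_diffusion))    # diffusion-like

        rng = np.random.default_rng(6)
        fixed = gaussian_filter(rng.random((64, 64)), 3)
        moving = np.roll(fixed, (2, -2), axis=(0, 1))       # small known displacement
        u = np.zeros((2, 64, 64))
        for _ in range(50):
            u = demons_step(fixed, moving, u)
        print("SSD before: %.4g" % np.mean((moving - fixed) ** 2))
        print("SSD after:  %.4g" % np.mean((warp(moving, u) - fixed) ** 2))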

  19. Deformable 3D-2D registration for CT and its application to low dose tomographic fluoroscopy.

    PubMed

    Flach, Barbara; Brehm, Marcus; Sawall, Stefan; Kachelrieß, Marc

    2014-12-21

    Many applications in medical imaging include image registration for matching of images from the same or different modalities. In the case of full data sampling, the reconstructed images are usually of such good quality that standard deformable volume-to-volume (3D-3D) registration approaches can be applied. But research in temporally correlated image reconstruction and dose reduction increases the number of cases where rawdata are available from only a few projection angles. Here, deteriorated image quality leads to unacceptable deformable volume-to-volume registration results. Therefore, a registration approach is required that is robust against a decreasing number of projections defining the target position. We propose a deformable volume-to-rawdata (3D-2D) registration method that aims at finding a displacement vector field maximizing the alignment of a CT volume and the acquired rawdata based on the sum of squared differences in the rawdata domain. The registration is constrained by a regularization term in accordance with a fluid-based diffusion. Both cost function components, the rawdata fidelity and the regularization term, are optimized in an alternating manner. The matching criterion is optimized by a conjugate gradient descent for nonlinear functions, while the regularization is realized by convolution of the vector fields with Gaussian kernels. We validate the proposed method and compare it to the demons algorithm, a well-known 3D-3D registration method. The comparison is done for a range of 4-60 target projections using datasets from low dose tomographic fluoroscopy as an application example. The results show a high correlation to the ground truth target position without introducing artifacts even in the case of very few projections. In particular, the matching in the rawdata domain is improved compared to the 3D-3D registration for the investigated range. The proposed volume-to-rawdata registration increases the robustness regarding sparse

  20. Spacecraft camera image registration

    NASA Technical Reports Server (NTRS)

    Kamel, Ahmed A. (Inventor); Graul, Donald W. (Inventor); Chan, Fred N. T. (Inventor); Gamble, Donald W. (Inventor)

    1987-01-01

    A system for achieving spacecraft camera (1, 2) image registration comprises a portion external to the spacecraft and an image motion compensation system (IMCS) portion onboard the spacecraft. Within the IMCS, a computer (38) calculates an image registration compensation signal (60) which is sent to the scan control loops (84, 88, 94, 98) of the onboard cameras (1, 2). At the location external to the spacecraft, the long-term orbital and attitude perturbations on the spacecraft are modeled. Coefficients (K, A) from this model are periodically sent to the onboard computer (38) by means of a command unit (39). The coefficients (K, A) take into account observations of stars and landmarks made by the spacecraft cameras (1, 2) themselves. The computer (38) takes as inputs the updated coefficients (K, A) plus synchronization information indicating the mirror position (AZ, EL) of each of the spacecraft cameras (1, 2), operating mode, and starting and stopping status of the scan lines generated by these cameras (1, 2), and generates in response thereto the image registration compensation signal (60). The sources of periodic thermal errors on the spacecraft are discussed. The system is checked by calculating measurement residuals, the difference between the landmark and star locations predicted at the external location and the landmark and star locations as measured by the spacecraft cameras (1, 2).

  1. Image registration with uncertainty analysis

    DOEpatents

    Simonson, Katherine M [Cedar Crest, NM

    2011-03-22

    In an image registration method, edges are detected in a first image and a second image. A percentage of edge pixels in a subset of the second image that are also edges in the first image shifted by a translation is calculated. A best registration point is calculated based on a maximum percentage of edges matched. In a predefined search region, all registration points other than the best registration point are identified that are not significantly worse than the best registration point according to a predetermined statistical criterion.
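
    A small sketch of the matching criterion from the patent abstract, assuming a crude gradient-magnitude edge detector and an exhaustive translation search; the statistical test for registration points "not significantly worse" than the best one is not reproduced here.

```python
import numpy as np

def edge_map(img, thresh=0.2):
    """Crude edge detector: gradient magnitude above a fraction of its maximum
    (a stand-in for whatever detector the patented method actually uses)."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    return mag > thresh * mag.max()

def best_translation(img1, img2, search=10):
    """Exhaustive search for the shift maximizing the percentage of img2 edge
    pixels that coincide with img1 edges shifted by that translation."""
    e1, e2 = edge_map(img1), edge_map(img2)
    n2 = e2.sum()
    best, best_pct = (0, 0), -1.0
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            shifted = np.roll(np.roll(e1, dy, axis=0), dx, axis=1)
            pct = 100.0 * np.logical_and(shifted, e2).sum() / n2
            if pct > best_pct:
                best, best_pct = (dy, dx), pct
    return best, best_pct
```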

  2. Local Metric Learning in 2D/3D Deformable Registration With Application in the Abdomen

    PubMed Central

    Chou, Chen-Rui; Mageras, Gig; Pizer, Stephen

    2015-01-01

    In image-guided radiotherapy (IGRT) of disease sites subject to respiratory motion, soft tissue deformations can affect localization accuracy. We describe the application of a method of 2D/3D deformable registration to soft tissue localization in the abdomen. The method, called registration efficiency and accuracy through learning a metric on shape (REALMS), is designed to support real-time IGRT. In a previously developed version of REALMS, the method interpolated 3D deformation parameters for any credible deformation in a deformation space using a single globally-trained Riemannian metric for each parameter. We propose a refinement of the method in which the metric is trained over a particular region of the deformation space, such that interpolation accuracy within that region is improved. We report on the application of the proposed algorithm to IGRT in abdominal disease sites, which is more challenging than in the lung because of low intensity contrast and nonrespiratory deformation. We introduce a rigid translation vector to compensate for nonrespiratory deformation, and design a special region-of-interest around fiducial markers implanted near the tumor to produce a more reliable registration. Both synthetic data and actual data tests on abdominal datasets show that the localized approach achieves more accurate 2D/3D deformable registration than the global approach. PMID:24771575

  3. Elastic shape analysis of cylindrical surfaces for 3D/2D registration in endometrial tissue characterization.

    PubMed

    Samir, Chafik; Kurtek, Sebastian; Srivastava, Anuj; Canis, Michel

    2014-05-01

    We study the problem of joint registration and deformation analysis of endometrial tissue using 3D magnetic resonance imaging (MRI) and 2D trans-vaginal ultrasound (TVUS) measurements. In addition to the different imaging techniques involved in the two modalities, this problem is complicated due to: 1) different patient pose during MRI and TVUS observations, 2) the 3D nature of MRI and 2D nature of TVUS measurements, 3) the unknown intersecting plane for TVUS in MRI volume, and 4) the potential deformation of endometrial tissue during TVUS measurement process. Focusing on the shape of the tissue, we use expert manual segmentation of its boundaries in the two modalities and apply, with modification, recent developments in shape analysis of parametric surfaces to this problem. First, we extend the 2D TVUS curves to generalized cylindrical surfaces through replication, and then we compare them with MRI surfaces using elastic shape analysis. This shape analysis provides a simultaneous registration (optimal reparameterization) and deformation (geodesic) between any two parametrized surfaces. Specifically, it provides optimal curves on MRI surfaces that match with the original TVUS curves. This framework results in an accurate quantification and localization of the deformable endometrial cells for radiologists, and growth characterization for gynecologists and obstetricians. We present experimental results using semi-synthetic data and real data from patients to illustrate these ideas.

  4. WE-AB-BRA-07: Quantitative Evaluation of 2D-2D and 2D-3D Image Guided Radiation Therapy for Clinical Trial Credentialing, NRG Oncology/RTOG

    SciTech Connect

    Giaddui, T; Yu, J; Xiao, Y; Jacobs, P; Manfredi, D; Linnemann, N

    2015-06-15

    Purpose: 2D-2D kV image guided radiation therapy (IGRT) credentialing evaluation for clinical trial qualification was historically qualitative, relying on submitted screen captures of the fusion process. However, as quantitative DICOM 2D-2D and 2D-3D image registration tools are implemented in clinical practice for better precision, especially in centers that treat patients with protons, better IGRT credentialing techniques are needed. The aim of this work is to establish methodologies for quantitatively reviewing IGRT submissions based on DICOM 2D-2D and 2D-3D image registration and to test the methodologies in reviewing 2D-2D and 2D-3D IGRT submissions for RTOG/NRG Oncology clinical trial qualifications. Methods: DICOM 2D-2D and 2D-3D automated and manual image registration have been tested using the Harmony tool in MIM software. 2D kV orthogonal portal images are fused with the reference digitally reconstructed radiographs (DRR) in the 2D-2D registration, while the 2D portal images are fused with the DICOM planning CT image in the 2D-3D registration. The Harmony tool allows alignment of the two images used in the registration process and also calculates the required shifts. Shifts calculated using MIM are compared with those submitted by institutions for IGRT credentialing. Reported shifts are considered acceptable if differences are less than 3 mm. Results: Several tests have been performed on the 2D-2D and 2D-3D registration. The results indicated good agreement between submitted and calculated shifts. A workflow for reviewing these IGRT submissions has been developed and will eventually be used to review IGRT submissions. Conclusion: The IROC Philadelphia RTQA center has developed and tested a new workflow for reviewing DICOM 2D-2D and 2D-3D IGRT credentialing submissions made by different cancer clinical centers, especially proton centers. NRG Center for Innovation in Radiation Oncology (CIRO) and IROC RTQA center continue their collaborative efforts to enhance
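
    The acceptance check described in the Methods reduces to a per-axis comparison of submitted and recomputed shifts against the 3 mm tolerance; a minimal sketch with hypothetical shift values follows.

```python
import numpy as np

TOLERANCE_MM = 3.0  # acceptance criterion quoted in the abstract

def review_igrt_shifts(submitted_mm, calculated_mm):
    """Compare institution-submitted couch shifts with shifts recomputed from
    the DICOM registration; flag any axis differing by 3 mm or more."""
    diff = np.abs(np.asarray(submitted_mm) - np.asarray(calculated_mm))
    return diff, bool(np.all(diff < TOLERANCE_MM))

# e.g. hypothetical lateral/longitudinal/vertical shifts in mm
diff, ok = review_igrt_shifts([1.2, -4.0, 0.5], [1.0, -3.4, 0.7])
print(diff, "acceptable" if ok else "flag for review")
```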

  5. A MULTICORE BASED PARALLEL IMAGE REGISTRATION METHOD

    PubMed Central

    Yang, Lin; Gong, Leiguang; Zhang, Hong; Nosher, John L.; Foran, David J.

    2012-01-01

    Image registration is a crucial step for many image-assisted clinical applications such as surgery planning and treatment evaluation. In this paper we propose a landmark-based nonlinear image registration algorithm for matching 2D image pairs. The algorithm is shown to be effective and robust under conditions of large deformations. In landmark-based registration, the most important step is establishing the correspondence among the selected landmark points. This usually requires an extensive search which is often computationally expensive. We introduce a nonregular data partition algorithm using K-means clustering to group the landmarks based on the number of available processing cores. This step optimizes the memory usage and data transfer. We have tested our method using the IBM Cell Broadband Engine (Cell/B.E.) platform. PMID:19964921
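
    A brief sketch of the data-partition idea, assuming scikit-learn's KMeans as the clustering routine and 2D landmark coordinates; the Cell/B.E.-specific memory and transfer optimizations are not modeled.

```python
import numpy as np
from sklearn.cluster import KMeans

def partition_landmarks(landmarks_xy, n_cores):
    """Group landmark points into one cluster per available processing core,
    so each core matches a spatially compact subset (a sketch of the
    non-regular data partition described in the abstract)."""
    km = KMeans(n_clusters=n_cores, n_init=10, random_state=0).fit(landmarks_xy)
    return [landmarks_xy[km.labels_ == c] for c in range(n_cores)]

# e.g. 500 random landmarks split across 4 cores
parts = partition_landmarks(np.random.rand(500, 2), n_cores=4)
print([len(p) for p in parts])
```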

  6. Validation of histology image registration

    NASA Astrophysics Data System (ADS)

    Shojaii, Rushin; Karavardanyan, Tigran; Yaffe, Martin; Martel, Anne L.

    2011-03-01

    The aim of this paper is to validate an image registration pipeline used for histology image alignment. In this work a set of histology images are registered to their corresponding optical blockface images to make a histology volume. Then multi-modality fiducial markers are used to validate the alignment of the histology images. The fiducial markers are catheters perfused with a mixture of cuttlefish ink and flour. Based on our previous investigations, this fiducial marker is visible in medical images and optical blockface images, and it can also be localized in histology images. The properties of this fiducial marker make it suitable for validation of the registration techniques used for histology image alignment. This paper reports on the accuracy of a histology image registration approach by calculation of target registration error using these fiducial markers.
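
    Given corresponding fiducial (catheter) positions localized in the blockface and registered histology volumes, the reported accuracy reduces to a target registration error computation; a minimal sketch, with my own function name, is shown below.

```python
import numpy as np

def target_registration_error(fiducials_fixed, fiducials_registered):
    """Per-marker Euclidean distance between corresponding fiducial positions
    after registration (a common definition of target registration error)."""
    d = np.linalg.norm(np.asarray(fiducials_fixed) - np.asarray(fiducials_registered), axis=1)
    return d, d.mean(), d.max()
```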

  7. 2D microwave imaging reflectometer electronics

    SciTech Connect

    Spear, A. G.; Domier, C. W. Hu, X.; Muscatello, C. M.; Ren, X.; Luhmann, N. C.; Tobias, B. J.

    2014-11-15

    A 2D microwave imaging reflectometer system has been developed to visualize electron density fluctuations on the DIII-D tokamak. Simultaneously illuminated at four probe frequencies, large aperture optics image reflections from four density-dependent cutoff surfaces in the plasma over an extended region of the DIII-D plasma. Localized density fluctuations in the vicinity of the plasma cutoff surfaces modulate the plasma reflections, yielding a 2D image of electron density fluctuations. Details are presented of the receiver down conversion electronics that generate the in-phase (I) and quadrature (Q) reflectometer signals from which 2D density fluctuation data are obtained. Also presented are details on the control system and backplane used to manage the electronics as well as an introduction to the computer based control program.

  8. 2D microwave imaging reflectometer electronics.

    PubMed

    Spear, A G; Domier, C W; Hu, X; Muscatello, C M; Ren, X; Tobias, B J; Luhmann, N C

    2014-11-01

    A 2D microwave imaging reflectometer system has been developed to visualize electron density fluctuations on the DIII-D tokamak. Simultaneously illuminated at four probe frequencies, large aperture optics image reflections from four density-dependent cutoff surfaces in the plasma over an extended region of the DIII-D plasma. Localized density fluctuations in the vicinity of the plasma cutoff surfaces modulate the plasma reflections, yielding a 2D image of electron density fluctuations. Details are presented of the receiver down conversion electronics that generate the in-phase (I) and quadrature (Q) reflectometer signals from which 2D density fluctuation data are obtained. Also presented are details on the control system and backplane used to manage the electronics as well as an introduction to the computer based control program.

  9. Fast DRR generation for 2D to 3D registration on GPUs

    SciTech Connect

    Tornai, Gabor Janos; Cserey, Gyoergy

    2012-08-15

    Purpose: The generation of digitally reconstructed radiographs (DRRs) is the most time consuming step on the CPU in intensity based two-dimensional x-ray to three-dimensional (CT or 3D rotational x-ray) medical image registration, which has application in several image guided interventions. This work presents optimized DRR rendering on graphical processor units (GPUs) and compares performance achievable on four commercially available devices. Methods: A ray-cast based DRR rendering was implemented for a 512 × 512 × 72 CT volume. The block size parameter was optimized for four different GPUs for a region of interest (ROI) of 400 × 225 pixels with different sampling ratios (1.1%-9.1% and 100%). Performance was statistically evaluated and compared for the four GPUs. The method and the block size dependence were validated on the latest GPU for several parameter settings with a public gold standard dataset (512 × 512 × 825 CT) for registration purposes. Results: Depending on the GPU, the full ROI is rendered in 2.7-5.2 ms. If a sampling ratio of 1.1%-9.1% is applied, execution time is in the range of 0.3-7.3 ms. On all GPUs, the mean of the execution time increased linearly with respect to the number of pixels if sampling was used. Conclusions: The presented results outperform other results from the literature. This indicates that automatic 2D to 3D registration, which typically requires a couple of hundred DRR renderings to converge, can be performed quasi on-line, in less than a second or, depending on the application and hardware, in less than a couple of seconds. Accordingly, a whole new field of applications is opened for image guided interventions, where the registration is continuously performed to match the real-time x-ray.
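
    A CPU-only, parallel-beam toy version of DRR generation, assuming SciPy for the volume rotation; the paper's renderer is a perspective ray caster on the GPU, so this sketch only illustrates the principle, not the benchmarked implementation.

```python
import numpy as np
from scipy import ndimage

def parallel_beam_drr(volume, angle_deg, axis_pair=(0, 2)):
    """Toy DRR: rotate the CT volume about one axis and sum along the beam
    direction. A parallel-beam, CPU stand-in for a perspective ray caster."""
    rotated = ndimage.rotate(volume, angle_deg, axes=axis_pair, reshape=False, order=1)
    return rotated.sum(axis=axis_pair[1])

# e.g. a synthetic 72-slice volume rendered at 5-degree steps
vol = np.random.rand(72, 128, 128)
drrs = [parallel_beam_drr(vol, a) for a in range(0, 180, 5)]
print(drrs[0].shape)
```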

  10. Local image registration by adaptive filtering.

    PubMed

    Caner, Gulcin; Tekalp, A Murat; Sharma, Gaurav; Heinzelman, Wendi

    2006-10-01

    We propose a new adaptive filtering framework for local image registration, which compensates for the effect of local distortions/displacements without explicitly estimating a distortion/displacement field. To this effect, we formulate local image registration as a two-dimensional (2-D) system identification problem with spatially varying system parameters. We utilize a 2-D adaptive filtering framework to identify the locally varying system parameters, where a new block adaptive filtering scheme is introduced. We discuss the conditions under which the adaptive filter coefficients conform to a local displacement vector at each pixel. Experimental results demonstrate that the proposed 2-D adaptive filtering framework is very successful in modeling and compensation of both local distortions, such as Stirmark attacks, and local motion, such as in the presence of a parallax field. In particular, we show that the proposed method can provide image registration to: a) enable reliable detection of watermarks following a Stirmark attack in nonblind detection scenarios, b) compensate for lens distortions, and c) align multiview images with nonparametric local motion.

  11. Image Registration: A Necessary Evil

    NASA Technical Reports Server (NTRS)

    Bell, James; McLachlan, Blair; Hermstad, Dexter; Trosin, Jeff; George, Michael W. (Technical Monitor)

    1995-01-01

    Registration of test and reference images is a key component of nearly all PSP data reduction techniques. This is done to ensure that a test image pixel viewing a particular point on the model is ratioed by the reference image pixel which views the same point. Typically registration is needed to account for model motion due to differing airloads when the wind-off and wind-on images are taken. Registration is also necessary when two cameras are used for simultaneous acquisition of data from a dual-frequency paint. This presentation will discuss the advantages and disadvantages of several different image registration techniques. In order to do so, it is necessary to propose both an accuracy requirement for image registration and a means for measuring the accuracy of a particular technique. High contrast regions in the unregistered images are most sensitive to registration errors, and it is proposed that these regions be used to establish the error limits for registration. Once this is done, the actual registration error can be determined by locating corresponding points on the test and reference images, and determining how well a particular registration technique matches them. An example of this procedure is shown for three transforms used to register images of a semispan model. Thirty control points were located on the model. A subset of the points was used to determine the coefficients of each registration transform, and the error with which each transform aligned the remaining points was determined. The results indicate the general superiority of a third-order polynomial over other candidate transforms, as well as showing how registration accuracy varies with the number of control points. Finally, it is proposed that image registration may eventually be done away with completely. As more accurate image resection techniques and more detailed model surface grids become available, it will be possible to map raw image data onto the model surface accurately. Intensity
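
    A sketch of the evaluation procedure suggested above, assuming a third-order 2D polynomial warp fitted by least squares to a subset of control points and scored on the held-out points; the function names are mine.

```python
import numpy as np

def poly_terms(x, y, order=3):
    """All monomials x**i * y**j with i + j <= order (third order by default)."""
    return np.column_stack([x**i * y**j
                            for i in range(order + 1)
                            for j in range(order + 1 - i)])

def fit_poly_transform(src, dst, order=3):
    """Least-squares fit of a 2D polynomial warp mapping reference-image
    control points (src) onto test-image control points (dst)."""
    A = poly_terms(src[:, 0], src[:, 1], order)
    coef_x, *_ = np.linalg.lstsq(A, dst[:, 0], rcond=None)
    coef_y, *_ = np.linalg.lstsq(A, dst[:, 1], rcond=None)
    return coef_x, coef_y

def apply_poly_transform(pts, coef_x, coef_y, order=3):
    A = poly_terms(pts[:, 0], pts[:, 1], order)
    return np.column_stack([A @ coef_x, A @ coef_y])

def holdout_error(src, dst, n_fit, order=3):
    """Fit on the first n_fit control points, report residuals on the rest."""
    cx, cy = fit_poly_transform(src[:n_fit], dst[:n_fit], order)
    pred = apply_poly_transform(src[n_fit:], cx, cy, order)
    return np.linalg.norm(pred - dst[n_fit:], axis=1)
```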

  12. Efficient feature-based 2D/3D registration of transesophageal echocardiography to x-ray fluoroscopy for cardiac interventions

    NASA Astrophysics Data System (ADS)

    Hatt, Charles R.; Speidel, Michael A.; Raval, Amish N.

    2014-03-01

    We present a novel 2D/3D registration algorithm for fusion between transesophageal echocardiography (TEE) and X-ray fluoroscopy (XRF). The TEE probe is modeled as a subset of 3D gradient and intensity point features, which facilitates efficient 3D-to-2D perspective projection. A novel cost function, based on a combination of intensity and edge features, evaluates the registration cost value without the need for time-consuming generation of digitally reconstructed radiographs (DRRs). Validation experiments were performed with simulations and phantom data. For simulations, in silico XRF images of a TEE probe were generated in a number of different pose configurations using a previously acquired CT image. Random misregistrations were applied and our method was used to recover the TEE probe pose and compare the result to the ground truth. Phantom experiments were performed by attaching fiducial markers externally to a TEE probe, imaging the probe with an interventional cardiac angiographic x-ray system, and comparing the pose estimated from the external markers to that estimated from the TEE probe using our algorithm. Simulations found a 3D target registration error of 1.08 (1.92) mm for biplane (monoplane) geometries, while the phantom experiment found a 2D target registration error of 0.69 mm. For phantom experiments, we demonstrated a monoplane tracking frame rate of 1.38 fps. The proposed feature-based registration method is computationally efficient, resulting in near real-time, accurate image-based registration between TEE and XRF.

  13. Accurate positioning for head and neck cancer patients using 2D and 3D image guidance

    PubMed Central

    Kang, Hyejoo; Lovelock, Dale M.; Yorke, Ellen D.; Kriminiski, Sergey; Lee, Nancy; Amols, Howard I.

    2011-01-01

    Our goal is to determine an optimized image-guided setup by comparing setup errors determined by two-dimensional (2D) and three-dimensional (3D) image guidance for head and neck cancer (HNC) patients immobilized by customized thermoplastic masks. Nine patients received weekly imaging sessions, for a total of 54, throughout treatment. Patients were first set up by matching lasers to surface marks (initial) and then translationally corrected using manual registration of orthogonal kilovoltage (kV) radiographs with DRRs (2D-2D) on bony anatomy. A kV cone beam CT (kVCBCT) was acquired and manually registered to the simulation CT using only translations (3D-3D) on the same bony anatomy to determine further translational corrections. After treatment, a second set of kVCBCT was acquired to assess intrafractional motion. Averaged over all sessions, 2D-2D registration led to translational corrections from initial setup of 3.5 ± 2.2 (range 0–8) mm. The addition of 3D-3D registration resulted in only small incremental adjustment (0.8 ± 1.5 mm). We retrospectively calculated patient setup rotation errors using an automatic rigid-body algorithm with 6 degrees of freedom (DoF) on regions of interest (ROI) of in-field bony anatomy (mainly the C2 vertebral body). Small rotations were determined for most of the imaging sessions; however, occasionally rotations > 3° were observed. The calculated intrafractional motion with automatic registration was < 3.5 mm for eight patients, and < 2° for all patients. We conclude that daily manual 2D-2D registration on radiographs reduces positioning errors for mask-immobilized HNC patients in most cases, and is easily implemented. 3D-3D registration adds little improvement over 2D-2D registration without correcting rotational errors. We also conclude that thermoplastic masks are effective for patient immobilization. PMID:21330971

  14. Known-component 3D-2D registration for quality assurance of spine surgery pedicle screw placement

    PubMed Central

    Uneri, A; De Silva, T; Stayman, JW; Kleinszig, G; Vogt, S; Khanna, AJ; Gokaslan, ZL; Wolinsky, J-P; Siewerdsen, JH

    2015-01-01

    Purpose A 3D-2D image registration method is presented that exploits knowledge of interventional devices (e.g., K-wires or spine screws – referred to as “known components”) to extend the functionality of intraoperative radiography/fluoroscopy by providing quantitative measurement and quality assurance (QA) of the surgical product. Methods The known-component registration (KC-Reg) algorithm uses robust 3D-2D registration combined with 3D component models of surgical devices known to be present in intraoperative 2D radiographs. Component models were investigated that vary in fidelity from simple parametric models (e.g., approximation of a screw as a simple cylinder, referred to as “parametrically-known” component [pKC] registration) to precise models based on device-specific CAD drawings (referred to as “exactly-known” component [eKC] registration). 3D-2D registration from three intraoperative radiographs was solved using the covariance matrix adaptation evolution strategy (CMA-ES) to maximize image-gradient similarity, relating device placement relative to 3D preoperative CT of the patient. Spine phantom and cadaver studies were conducted to evaluate registration accuracy and demonstrate QA of the surgical product by verification of the type of devices delivered and conformance within the “acceptance window” of the spinal pedicle. Results Pedicle screws were successfully registered to radiographs acquired from a mobile C-arm, providing TRE 1–4 mm and <5° using simple parametric (pKC) models, further improved to <1 mm and <1° using eKC registration. Using advanced pKC models, screws that did not match the device models specified in the surgical plan were detected with an accuracy of >99%. Visualization of registered devices relative to surgical planning and the pedicle acceptance window provided potentially valuable QA of the surgical product and reliable detection of pedicle screw breach. Conclusions 3D-2D registration combined with 3D models
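
    A minimal sketch of the image-gradient similarity maximized by KC-Reg, written here as the mean normalized cross-correlation of the x- and y-gradients of a projected component model (DRR) and the radiograph; the CMA-ES search over component pose parameters is assumed to call this as its objective and is not shown.

```python
import numpy as np

def gradient_correlation(drr, radiograph):
    """Gradient correlation: mean of the normalized cross-correlations between
    the x- and y-gradients of the rendered projection and the radiograph."""
    def ncc(a, b):
        a = a - a.mean()
        b = b - b.mean()
        return (a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
    gy1, gx1 = np.gradient(drr.astype(float))
    gy2, gx2 = np.gradient(radiograph.astype(float))
    return 0.5 * (ncc(gx1, gx2) + ncc(gy1, gy2))
```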

  15. Known-component 3D-2D registration for quality assurance of spine surgery pedicle screw placement.

    PubMed

    Uneri, A; De Silva, T; Stayman, J W; Kleinszig, G; Vogt, S; Khanna, A J; Gokaslan, Z L; Wolinsky, J-P; Siewerdsen, J H

    2015-10-21

    A 3D-2D image registration method is presented that exploits knowledge of interventional devices (e.g. K-wires or spine screws-referred to as 'known components') to extend the functionality of intraoperative radiography/fluoroscopy by providing quantitative measurement and quality assurance (QA) of the surgical product. The known-component registration (KC-Reg) algorithm uses robust 3D-2D registration combined with 3D component models of surgical devices known to be present in intraoperative 2D radiographs. Component models were investigated that vary in fidelity from simple parametric models (e.g. approximation of a screw as a simple cylinder, referred to as 'parametrically-known' component [pKC] registration) to precise models based on device-specific CAD drawings (referred to as 'exactly-known' component [eKC] registration). 3D-2D registration from three intraoperative radiographs was solved using the covariance matrix adaptation evolution strategy (CMA-ES) to maximize image-gradient similarity, relating device placement relative to 3D preoperative CT of the patient. Spine phantom and cadaver studies were conducted to evaluate registration accuracy and demonstrate QA of the surgical product by verification of the type of devices delivered and conformance within the 'acceptance window' of the spinal pedicle. Pedicle screws were successfully registered to radiographs acquired from a mobile C-arm, providing TRE 1-4 mm and  <5° using simple parametric (pKC) models, further improved to  <1 mm and  <1° using eKC registration. Using advanced pKC models, screws that did not match the device models specified in the surgical plan were detected with an accuracy of  >99%. Visualization of registered devices relative to surgical planning and the pedicle acceptance window provided potentially valuable QA of the surgical product and reliable detection of pedicle screw breach. 3D-2D registration combined with 3D models of known surgical devices offers a

  16. Known-component 3D-2D registration for quality assurance of spine surgery pedicle screw placement

    NASA Astrophysics Data System (ADS)

    Uneri, A.; De Silva, T.; Stayman, J. W.; Kleinszig, G.; Vogt, S.; Khanna, A. J.; Gokaslan, Z. L.; Wolinsky, J.-P.; Siewerdsen, J. H.

    2015-10-01

    A 3D-2D image registration method is presented that exploits knowledge of interventional devices (e.g. K-wires or spine screws—referred to as ‘known components’) to extend the functionality of intraoperative radiography/fluoroscopy by providing quantitative measurement and quality assurance (QA) of the surgical product. The known-component registration (KC-Reg) algorithm uses robust 3D-2D registration combined with 3D component models of surgical devices known to be present in intraoperative 2D radiographs. Component models were investigated that vary in fidelity from simple parametric models (e.g. approximation of a screw as a simple cylinder, referred to as ‘parametrically-known’ component [pKC] registration) to precise models based on device-specific CAD drawings (referred to as ‘exactly-known’ component [eKC] registration). 3D-2D registration from three intraoperative radiographs was solved using the covariance matrix adaptation evolution strategy (CMA-ES) to maximize image-gradient similarity, relating device placement relative to 3D preoperative CT of the patient. Spine phantom and cadaver studies were conducted to evaluate registration accuracy and demonstrate QA of the surgical product by verification of the type of devices delivered and conformance within the ‘acceptance window’ of the spinal pedicle. Pedicle screws were successfully registered to radiographs acquired from a mobile C-arm, providing TRE 1-4 mm and  <5° using simple parametric (pKC) models, further improved to  <1 mm and  <1° using eKC registration. Using advanced pKC models, screws that did not match the device models specified in the surgical plan were detected with an accuracy of  >99%. Visualization of registered devices relative to surgical planning and the pedicle acceptance window provided potentially valuable QA of the surgical product and reliable detection of pedicle screw breach. 3D-2D registration combined with 3D models of known surgical

  17. Image Segmentation, Registration, Compression, and Matching

    NASA Technical Reports Server (NTRS)

    Yadegar, Jacob; Wei, Hai; Yadegar, Joseph; Ray, Nilanjan; Zabuawala, Sakina

    2011-01-01

    A novel computational framework was developed for 2D affine-invariant matching exploiting a parameter space. Named the affine invariant parameter space (AIPS), the technique can be applied to many image-processing and computer-vision problems, including image registration, template matching, and object tracking from image sequences. The AIPS is formed by the parameters in an affine combination of a set of feature points in the image plane. In cases where the entire image can be assumed to have undergone a single affine transformation, the new AIPS match metric and matching framework become very effective (compared with the state-of-the-art methods at the time of this reporting). No knowledge about scaling or any other transformation parameters needs to be known a priori to apply the AIPS framework. An automated suite of software tools has been created to provide accurate image segmentation (for data cleaning) and high-quality 2D image and 3D surface registration (for fusing multi-resolution terrain, image, and map data). These tools are capable of supporting existing GIS toolkits already in the marketplace, and will also be usable in a stand-alone fashion. The toolkit applies novel algorithmic approaches for image segmentation, feature extraction, and registration of 2D imagery and 3D surface data, which supports first-pass, batched, fully automatic feature extraction (for segmentation) and registration. A hierarchical and adaptive approach is taken for achieving automatic feature extraction, segmentation, and registration. Surface registration is the process of aligning two (or more) data sets to a common coordinate system, during which the transformation between their different coordinate systems is determined. Also developed here is a novel volumetric surface modeling and compression technique that provides both quality-guaranteed mesh surface approximations and compaction of the model sizes by efficiently coding the geometry and connectivity

  18. In-die photomask registration and overlay metrology with PROVE using 2D correlation methods

    NASA Astrophysics Data System (ADS)

    Seidel, D.; Arnz, M.; Beyer, D.

    2011-11-01

    According to the ITRS roadmap, the semiconductor industry drives 193nm lithography to its limits, using techniques like double exposure, double patterning, mask-source optimization and inverse lithography. For photomask metrology this translates to full in-die measurement capability for registration and critical dimension together with challenging specifications for repeatability and accuracy. In particular, overlay becomes more and more critical and must be ensured on every die. For this, Carl Zeiss SMS has developed the next generation photomask registration and overlay metrology tool PROVE® which serves the 32nm node and below and which is already well established in the market. PROVE® features highly stable hardware components for the stage and environmental control. To ensure in-die measurement capability, sophisticated image analysis methods based on 2D correlations have been developed. In this paper we demonstrate the in-die capability of PROVE® and present corresponding measurement results for short-term and long-term measurements as well as the attainable accuracy for feature sizes down to 85nm using different illumination modes and mask types. Standard measurement methods based on threshold criteria are compared with the new 2D correlation methods to demonstrate the performance gain of the latter. In addition, mask-to-mask overlay results of typical box-in-frame structures down to 200nm feature size are presented. It is shown that from overlay measurements a reproducibility budget can be derived that takes into account stage, image analysis and global effects like mask loading and environmental control. The parts of the budget are quantified from measurement results to identify critical error contributions and to focus on the corresponding improvement strategies.

  19. Remapping of digital subtraction angiography on a standard fluoroscopy system using 2D-3D registration

    NASA Astrophysics Data System (ADS)

    Alhrishy, Mazen G.; Varnavas, Andreas; Guyot, Alexis; Carrell, Tom; King, Andrew; Penney, Graeme

    2015-03-01

    Fluoroscopy-guided endovascular interventions are being performed for increasingly complex cases with longer screening times. However, X-ray is much better at visualizing interventional devices and dense structures compared to vasculature. To visualise vasculature, angiography screening is essential but requires the use of iodinated contrast medium (ICM), which is nephrotoxic. Acute kidney injury is the main life-threatening complication of ICM. Digital subtraction angiography (DSA) is also often a major contributor to overall patient radiation dose (81% reported). Furthermore, a DSA image is only valid for the current interventional view and not for the new view once the C-arm is moved. In this paper, we propose the use of 2D-3D image registration between intraoperative images and the preoperative CT volume to facilitate DSA remapping using a standard fluoroscopy system. This allows repeated ICM-free DSA and has the potential to enable a reduction in ICM usage and radiation dose. Experiments were carried out using 9 clinical datasets. In total, 41 DSA images were remapped. For each dataset, the maximum and averaged remapping accuracy errors were calculated and presented. Numerical results showed an overall averaged error of 2.50 mm, with 7 patients scoring averaged errors < 3 mm and 2 patients < 6 mm.

  20. Registration of interferometric SAR images

    NASA Technical Reports Server (NTRS)

    Lin, Qian; Vesecky, John F.; Zebker, Howard A.

    1992-01-01

    Interferometric synthetic aperture radar (INSAR) is a new way of performing topography mapping. Among the factors critical to mapping accuracy is the registration of the complex SAR images from repeated orbits. A new algorithm for registering interferometric SAR images is presented. A new figure of merit, the average fluctuation function of the phase difference image, is proposed to evaluate the fringe pattern quality. The process of adjusting the registration parameters according to the fringe pattern quality is optimized through a downhill simplex minimization algorithm. The results of applying the proposed algorithm to register two pairs of Seasat SAR images with a short baseline (75 m) and a long baseline (500 m) are shown. It is found that the average fluctuation function is a very stable measure of fringe pattern quality allowing very accurate registration.
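
    A rough sketch of the optimization loop, assuming a mean absolute neighbour-to-neighbour phase change of the interferogram as a stand-in for the paper's average fluctuation function, and SciPy's Nelder-Mead implementation of the downhill simplex search over a two-parameter shift.

```python
import numpy as np
from scipy import ndimage, optimize

def fringe_fluctuation(shift, master, slave):
    """Figure of merit for fringe quality: mean absolute phase change between
    neighbouring pixels of the interferogram (a stand-in for the paper's
    average fluctuation function; lower means cleaner fringes)."""
    dy, dx = shift
    re = ndimage.shift(slave.real, (dy, dx), order=1)
    im = ndimage.shift(slave.imag, (dy, dx), order=1)
    phase = np.angle(master * np.conj(re + 1j * im))
    return np.abs(np.diff(phase, axis=0)).mean() + np.abs(np.diff(phase, axis=1)).mean()

def register_insar(master, slave, guess=(0.0, 0.0)):
    """Downhill simplex (Nelder-Mead) search over the co-registration shift."""
    res = optimize.minimize(fringe_fluctuation, guess, args=(master, slave),
                            method='Nelder-Mead')
    return res.x
```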

  1. 2D-3D registration for brain radiation therapy using a 3D CBCT and a single limited field-of-view 2D kV radiograph

    NASA Astrophysics Data System (ADS)

    Munbodh, R.; Moseley, D. J.

    2014-03-01

    We report results of an intensity-based 2D-3D rigid registration framework for patient positioning and monitoring during brain radiotherapy. We evaluated two intensity-based similarity measures, the Pearson Correlation Coefficient (ICC) and Maximum Likelihood with Gaussian noise (MLG) derived from the statistics of transmission images. A useful image frequency band was identified from the bone-to-no-bone ratio. Validation was performed on gold-standard data consisting of 3D kV CBCT scans and 2D kV radiographs of an anthropomorphic head phantom acquired at 23 different poses with parameter variations along six degrees of freedom. At each pose, a single limited field of view kV radiograph was registered to the reference CBCT. The ground truth was determined from markers affixed to the phantom and visible in the CBCT images. The mean (and standard deviation) of the absolute errors in recovering each of the six transformation parameters along the x, y and z axes for ICC were φx: 0.08(0.04)°, φy: 0.10(0.09)°, φz: 0.03(0.03)°, tx: 0.13(0.11) mm, ty: 0.08(0.06) mm and tz: 0.44(0.23) mm. For MLG, the corresponding results were φx: 0.10(0.04)°, φy: 0.10(0.09)°, φz: 0.05(0.07)°, tx: 0.11(0.13) mm, ty: 0.05(0.05) mm and tz: 0.44(0.31) mm. It is feasible to accurately estimate all six transformation parameters from a 3D CBCT of the head and a single 2D kV radiograph within an intensity-based registration framework that incorporates the physics of transmission images.

  2. 3D-2D registration in mobile radiographs: algorithm development and preliminary clinical evaluation

    NASA Astrophysics Data System (ADS)

    Otake, Yoshito; Wang, Adam S.; Uneri, Ali; Kleinszig, Gerhard; Vogt, Sebastian; Aygun, Nafi; Lo, Sheng-fu L.; Wolinsky, Jean-Paul; Gokaslan, Ziya L.; Siewerdsen, Jeffrey H.

    2015-03-01

    An image-based 3D-2D registration method is presented using radiographs acquired in the uncalibrated, unconstrained geometry of mobile radiography. The approach extends a previous method for six degree-of-freedom (DOF) registration in C-arm fluoroscopy (namely ‘LevelCheck’) to solve the 9-DOF estimate of geometry in which the position of the source and detector are unconstrained. The method was implemented using a gradient correlation similarity metric and stochastic derivative-free optimization on a GPU. Development and evaluation were conducted in three steps. First, simulation studies were performed that involved a CT scan of an anthropomorphic body phantom and 1000 randomly generated digitally reconstructed radiographs in posterior-anterior and lateral views. A median projection distance error (PDE) of 0.007 mm was achieved with 9-DOF registration compared to 0.767 mm for 6-DOF. Second, cadaver studies were conducted using mobile radiographs acquired in three anatomical regions (thorax, abdomen and pelvis) and three levels of source-detector distance (~800, ~1000 and ~1200 mm). The 9-DOF method achieved a median PDE of 0.49 mm (compared to 2.53 mm for the 6-DOF method) and demonstrated robustness in the unconstrained imaging geometry. Finally, a retrospective clinical study was conducted with intraoperative radiographs of the spine exhibiting real anatomical deformation and image content mismatch (e.g. interventional devices in the radiograph that were not in the CT), demonstrating a PDE = 1.1 mm for the 9-DOF approach. Average computation time was 48.5 s, involving 687 701 function evaluations on average, compared to 18.2 s for the 6-DOF method. Despite the greater computational load, the 9-DOF method may offer a valuable tool for target localization (e.g. decision support in level counting) as well as safety and quality assurance checks at the conclusion of a procedure (e.g. overlay of planning data on the radiograph for verification of

  3. 3D–2D registration in mobile radiographs: algorithm development and preliminary clinical evaluation.

    PubMed

    Otake, Yoshito; Wang, Adam S; Uneri, Ali; Kleinszig, Gerhard; Vogt, Sebastian; Aygun, Nafi; Lo, Sheng-fu L; Wolinsky, Jean-Paul; Gokaslan, Ziya L; Siewerdsen, Jeffrey H

    2015-03-07

    An image-based 3D-2D registration method is presented using radiographs acquired in the uncalibrated, unconstrained geometry of mobile radiography. The approach extends a previous method for six degree-of-freedom (DOF) registration in C-arm fluoroscopy (namely 'LevelCheck') to solve the 9-DOF estimate of geometry in which the position of the source and detector are unconstrained. The method was implemented using a gradient correlation similarity metric and stochastic derivative-free optimization on a GPU. Development and evaluation were conducted in three steps. First, simulation studies were performed that involved a CT scan of an anthropomorphic body phantom and 1000 randomly generated digitally reconstructed radiographs in posterior-anterior and lateral views. A median projection distance error (PDE) of 0.007 mm was achieved with 9-DOF registration compared to 0.767 mm for 6-DOF. Second, cadaver studies were conducted using mobile radiographs acquired in three anatomical regions (thorax, abdomen and pelvis) and three levels of source-detector distance (~800, ~1000 and ~1200 mm). The 9-DOF method achieved a median PDE of 0.49 mm (compared to 2.53 mm for the 6-DOF method) and demonstrated robustness in the unconstrained imaging geometry. Finally, a retrospective clinical study was conducted with intraoperative radiographs of the spine exhibiting real anatomical deformation and image content mismatch (e.g. interventional devices in the radiograph that were not in the CT), demonstrating a PDE = 1.1 mm for the 9-DOF approach. Average computation time was 48.5 s, involving 687 701 function evaluations on average, compared to 18.2 s for the 6-DOF method. Despite the greater computational load, the 9-DOF method may offer a valuable tool for target localization (e.g. decision support in level counting) as well as safety and quality assurance checks at the conclusion of a procedure (e.g. overlay of planning data on the radiograph for verification of the surgical product

  4. 3D–2D registration in mobile radiographs: algorithm development and preliminary clinical evaluation

    PubMed Central

    Otake, Yoshito; Wang, Adam S; Uneri, Ali; Kleinszig, Gerhard; Vogt, Sebastian; Aygun, Nafi; Lo, Sheng-fu L; Wolinsky, Jean-Paul; Gokaslan, Ziya L; Siewerdsen, Jeffrey H

    2015-01-01

    An image-based 3D–2D registration method is presented using radiographs acquired in the uncalibrated, unconstrained geometry of mobile radiography. The approach extends a previous method for six degree-of-freedom (DOF) registration in C-arm fluoroscopy (namely ‘LevelCheck’) to solve the 9-DOF estimate of geometry in which the position of the source and detector are unconstrained. The method was implemented using a gradient correlation similarity metric and stochastic derivative-free optimization on a GPU. Development and evaluation were conducted in three steps. First, simulation studies were performed that involved a CT scan of an anthropomorphic body phantom and 1000 randomly generated digitally reconstructed radiographs in posterior–anterior and lateral views. A median projection distance error (PDE) of 0.007 mm was achieved with 9-DOF registration compared to 0.767 mm for 6-DOF. Second, cadaver studies were conducted using mobile radiographs acquired in three anatomical regions (thorax, abdomen and pelvis) and three levels of source-detector distance (~800, ~1000 and ~1200 mm). The 9-DOF method achieved a median PDE of 0.49 mm (compared to 2.53 mm for the 6-DOF method) and demonstrated robustness in the unconstrained imaging geometry. Finally, a retrospective clinical study was conducted with intraoperative radiographs of the spine exhibiting real anatomical deformation and image content mismatch (e.g. interventional devices in the radiograph that were not in the CT), demonstrating a PDE = 1.1 mm for the 9-DOF approach. Average computation time was 48.5 s, involving 687 701 function evaluations on average, compared to 18.2 s for the 6-DOF method. Despite the greater computational load, the 9-DOF method may offer a valuable tool for target localization (e.g. decision support in level counting) as well as safety and quality assurance checks at the conclusion of a procedure (e.g. overlay of planning data on the radiograph for verification of the surgical

  5. Voxel similarity measures for automated image registration

    NASA Astrophysics Data System (ADS)

    Hill, Derek L.; Studholme, Colin; Hawkes, David J.

    1994-09-01

    We present the concept of the feature space sequence: 2D distributions of voxel features of two images generated at registration and a sequence of misregistrations. We provide an explanation of the structure seen in these images. Feature space sequences have been generated for a pair of MR image volumes identical apart from the addition of Gaussian noise to one, MR image volumes with and without Gadolinium enhancement, MR and PET-FDG image volumes and MR and CT image volumes, all of the head. The structure seen in the feature space sequences was used to devise two new measures of similarity which in turn were used to produce plots of cost versus misregistration for the 6 degrees of freedom of rigid body motion. One of these, the third order moment of the feature space histogram, was used to register the MR image volumes with and without Gadolinium enhancement. These techniques have the potential for registration accuracy to within a small fraction of a voxel or resolution element and therefore interpolation errors in image transformation can be the dominant source of error in subtracted images. We present a method for removing these errors using sinc interpolation and show how interpolation errors can be reduced by over two orders of magnitude.
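
    A small sketch of the feature-space idea, assuming the feature space is the 2D joint intensity histogram of the two volumes at a trial alignment; the "third order moment" is taken here as the sum of cubed normalized bin counts, which is only one possible reading of the measure described above.

```python
import numpy as np

def feature_space_histogram(img_a, img_b, bins=64):
    """2D joint intensity histogram of two (trial-aligned) image volumes:
    one element of the 'feature space sequence' examined in the paper."""
    h, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    return h / h.sum()

def third_order_moment(hist):
    """Sum of cubed normalized bin counts, used as a misregistration cost."""
    return (hist ** 3).sum()
```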

  6. Registration Of SAR Images With Multisensor Images

    NASA Technical Reports Server (NTRS)

    Evans, Diane L.; Burnette, Charles F.; Van Zyl, Jakob J.

    1993-01-01

    Semiautomated technique intended primarily to facilitate registration of polarimetric synthetic-aperture-radar (SAR) images with other images of the same or partly overlapping terrain while preserving the polarization information conveyed by the SAR data. The technique is generally applicable in the sense that one or both of the images to be registered with each other may be generated by polarimetric or nonpolarimetric SAR, infrared radiometry, conventional photography, or any other applicable sensing method.

  7. Canny edge-based deformable image registration

    NASA Astrophysics Data System (ADS)

    Kearney, Vasant; Huang, Yihui; Mao, Weihua; Yuan, Baohong; Tang, Liping

    2017-02-01

    This work focuses on developing a 2D Canny edge-based deformable image registration (Canny DIR) algorithm to register in vivo white light images taken at various time points. This method uses a sparse interpolation deformation algorithm to sparsely register regions of the image with strong edge information. A stability criterion is enforced which removes regions of edges that do not deform in a smooth uniform manner. Using a synthetic mouse surface ground truth model, the accuracy of the Canny DIR algorithm was evaluated under axial rotation in the presence of deformation. The accuracy was also tested using fluorescent dye injections, which were then used for gamma analysis to establish a second ground truth. The results indicate that the Canny DIR algorithm performs better than rigid registration, intensity corrected Demons, and distinctive features for all evaluation metrics and ground truth scenarios. In conclusion, Canny DIR performs well in the presence of the unique lighting and shading variations associated with white-light-based image registration.

  8. A faster method for 3D/2D medical image registration—a simulation study

    NASA Astrophysics Data System (ADS)

    Birkfellner, Wolfgang; Wirth, Joachim; Burgstaller, Wolfgang; Baumann, Bernard; Staedele, Harald; Hammer, Beat; Claudius Gellrich, Niels; Jacob, Augustinus Ludwig; Regazzoni, Pietro; Messmer, Peter

    2003-08-01

    3D/2D patient-to-computed-tomography (CT) registration is a method to determine a transformation that maps two coordinate systems by comparing a projection image rendered from CT to a real projection image. Iterative variation of the CT's position between rendering steps finally leads to exact registration. Applications include exact patient positioning in radiation therapy, calibration of surgical robots, and pose estimation in computer-aided surgery. One of the problems associated with 3D/2D registration is the fact that finding a registration includes solving a minimization problem in six degrees of freedom (dof) in motion. This results in considerable time requirements since for each iteration step at least one volume rendering has to be computed. We show that by choosing an appropriate world coordinate system and by applying a 2D/2D registration method in each iteration step, the number of iterations can be greatly reduced from n^6 to n^5. Here, n is the number of discrete variations around a given coordinate. Depending on the configuration of the optimization algorithm, this reduces the total number of iterations necessary to at least 1/3 of its original value. The method was implemented and extensively tested on simulated x-ray images of a tibia, a pelvis and a skull base. When using one projective image and a discrete full parameter space search for solving the optimization problem, average accuracy was found to be 1.0 ± 0.6° and 4.1 ± 1.9 mm for a registration in six parameters, and 1.0 ± 0.7° and 4.2 ± 1.6 mm when using the 5 + 1 dof method described in this paper. Time requirements were reduced by a factor of 3.1. We conclude that this hardware-independent optimization of 3D/2D registration is a step towards increasing the acceptance of this promising method for a wide range of clinical applications.
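
    The n^6-to-n^5 claim is just a counting argument over the discrete parameter grid; a short illustration with an assumed n = 10 samples per degree of freedom:

```python
# Illustrative count of cost-function evaluations (volume renderings) for a
# discrete full parameter-space search with n samples per degree of freedom.
n = 10
full_6dof = n ** 6      # 1,000,000 renderings for the standard 3D/2D search
reduced_5dof = n ** 5   # 100,000 when one in-plane pair is resolved by a fast
                        # 2D/2D registration at each iteration step
print(full_6dof, reduced_5dof, full_6dof / reduced_5dof)
```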

  9. Medical image registration using sparse coding of image patches.

    PubMed

    Afzali, Maryam; Ghaffari, Aboozar; Fatemizadeh, Emad; Soltanian-Zadeh, Hamid

    2016-06-01

    Image registration is a basic task in medical image processing applications like group analysis and atlas construction. The similarity measure is a critical ingredient of image registration. Intensity distortion of medical images is not considered in most previous similarity measures. Therefore, in the presence of bias field distortions, they do not generate an acceptable registration. In this paper, we propose a sparse-based similarity measure for mono-modal images that considers non-stationary intensity and spatially-varying distortions. The main idea behind this measure is that the aligned image is constructed by an analysis dictionary trained using the image patches. For this purpose, we use "Analysis K-SVD" to train the dictionary and find the sparse coefficients. We utilize image patches to construct the analysis dictionary and then we employ the proposed sparse similarity measure to find a non-rigid transformation using free form deformation (FFD). Experimental results show that the proposed approach is able to robustly register 2D and 3D images in both simulated and real cases. The proposed method outperforms other state-of-the-art similarity measures and decreases the transformation error compared to the previous methods. Even in the presence of bias field distortion, the proposed method aligns images without any preprocessing.

  10. Fast 3D fluid registration of brain magnetic resonance images

    NASA Astrophysics Data System (ADS)

    Leporé, Natasha; Chou, Yi-Yu; Lopez, Oscar L.; Aizenstein, Howard J.; Becker, James T.; Toga, Arthur W.; Thompson, Paul M.

    2008-03-01

    Fluid registration is widely used in medical imaging to track anatomical changes, to correct image distortions, and to integrate multi-modality data. Fluid mappings guarantee that the template image deforms smoothly into the target, without tearing or folding, even when large deformations are required for accurate matching. Here we implemented an intensity-based fluid registration algorithm, accelerated by using a filter designed by Bro-Nielsen and Gramkow. We validated the algorithm on 2D and 3D geometric phantoms using the mean square difference between the final registered image and target as a measure of the accuracy of the registration. In tests on phantom images with different levels of overlap, varying amounts of Gaussian noise, and different intensity gradients, the fluid method outperformed a more commonly used elastic registration method, both in terms of accuracy and in avoiding topological errors during deformation. We also studied the effect of varying the viscosity coefficients in the viscous fluid equation, to optimize registration accuracy. Finally, we applied the fluid registration algorithm to a dataset of 2D binary corpus callosum images and 3D volumetric brain MRIs from 14 healthy individuals to assess its accuracy and robustness.

  11. Image registration method for medical image sequences

    DOEpatents

    Gee, Timothy F.; Goddard, James S.

    2013-03-26

    Image registration of low contrast image sequences is provided. In one aspect, a desired region of an image is automatically segmented and only the desired region is registered. Active contours and adaptive thresholding of intensity or edge information may be used to segment the desired regions. A transform function is defined to register the segmented region, and sub-pixel information may be determined using one or more interpolation methods.

  12. Framework for 2D-3D image fusion of infrared thermography with preoperative MRI.

    PubMed

    Hoffmann, Nico; Weidner, Florian; Urban, Peter; Meyer, Tobias; Schnabel, Christian; Radev, Yordan; Schackert, Gabriele; Petersohn, Uwe; Koch, Edmund; Gumhold, Stefan; Steiner, Gerald; Kirsch, Matthias

    2017-01-23

    Multimodal medical image fusion combines information of one or more images in order to improve the diagnostic value. While previous applications mainly focus on merging images from computed tomography, magnetic resonance imaging (MRI), ultrasound and single-photon emission computed tomography, we propose a novel approach for the registration and fusion of preoperative 3D MRI with intraoperative 2D infrared thermography. Image-guided neurosurgeries are based on neuronavigation systems, which further allow us to track the position and orientation of arbitrary cameras. In this way, we are able to relate the 2D coordinate system of the infrared camera with the 3D MRI coordinate system. The registered image data are then combined by calibration-based image fusion in order to map our intraoperative 2D thermographic images onto the respective brain surface recovered from preoperative MRI. In extensive accuracy measurements, we found that the proposed framework achieves a mean accuracy of 2.46 mm.

  13. Evaluating Similarity Measures for Brain Image Registration.

    PubMed

    Razlighi, Q R; Kehtarnavaz, N; Yousefi, S

    2013-10-01

    Evaluation of similarity measures for image registration is a challenging problem due to its complex interaction with the underlying optimization, regularization, image type and modality. We propose a single performance metric, named robustness, as part of a new evaluation method which quantifies the effectiveness of similarity measures for brain image registration while eliminating the effects of the other parts of the registration process. We show empirically that similarity measures with higher robustness are more effective in registering degraded images and are also more successful in performing intermodal image registration. Further, we introduce a new similarity measure, called normalized spatial mutual information, for 3D brain image registration whose robustness is shown to be much higher than the existing ones. Consequently, it tolerates greater image degradation and provides more consistent outcomes for intermodal brain image registration.
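
    For reference, the conventional histogram-based normalized mutual information looks as sketched below; the paper's normalized spatial mutual information additionally incorporates spatial information and is not reproduced here.

```python
import numpy as np

def normalized_mutual_information(img_a, img_b, bins=64):
    """Histogram-based NMI = (H(A) + H(B)) / H(A, B)."""
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    def entropy(p):
        p = p[p > 0]
        return -(p * np.log(p)).sum()
    return (entropy(px) + entropy(py)) / entropy(pxy.ravel())
```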

  14. Measurement of complex joint trajectories using slice-to-volume 2D/3D registration and cine MR

    NASA Astrophysics Data System (ADS)

    Bloch, C.; Figl, M.; Gendrin, C.; Weber, C.; Unger, E.; Aldrian, S.; Birkfellner, W.

    2010-02-01

    A method for studying the in vivo kinematics of complex joints is presented. It is based on automatic fusion of single-slice cine MR images capturing the dynamics with a static MR volume. With the joint at rest, the 3D scan is taken. In these data, the anatomical compartments are identified and segmented, resulting in a 3D volume of each individual part. In each of the cine MR images the joint parts are segmented and their pose and position are derived using a 2D/3D slice-to-volume registration to the volumes. The method is tested on the carpal joint because of its complexity and the small but complex motion of its compartments. For a first study, a human cadaver hand was scanned and the method was evaluated with artificially generated slice images. Starting from random initial positions of about 5 mm translational and 12° rotational deviation, 70 to 90% of the registrations converged successfully to a deviation better than 0.5 mm and 5°. First evaluations using real data from a cine MR scan were promising. The feasibility of the method was demonstrated. However, we experienced difficulties with the segmentation of the cine MR images. We therefore plan to examine different parameters for the image acquisition in future studies.

  15. Practical pseudo-3D registration for large tomographic images

    NASA Astrophysics Data System (ADS)

    Liu, Xuan; Laperre, Kjell; Sasov, Alexander

    2014-09-01

    Image registration is a powerful tool in various tomographic applications. Our main focus is on microCT applications in which samples/animals can be scanned multiple times under different conditions or at different time points. For this purpose, a registration tool capable of handling fairly large volumes has been developed, using a novel pseudo-3D method to achieve fast and interactive registration with simultaneous 3D visualization. To reduce computation complexity in 3D registration, we decompose it into several 2D registrations, which are applied to the orthogonal views (transaxial, sagittal and coronal) sequentially and iteratively. After registration in each view, the next view is retrieved with the new transformation matrix for registration. This reduces the computation complexity significantly. For rigid transform, we only need to search for 3 parameters (2 shifts, 1 rotation) in each of the 3 orthogonal views instead of 6 (3 shifts, 3 rotations) for full 3D volume. In addition, the amount of voxels involved is also significantly reduced. For the proposed pseudo-3D method, image-based registration is employed, with Sum of Square Difference (SSD) as the similarity measure. The searching engine is Powell's conjugate direction method. In this paper, only rigid transform is used. However, it can be extended to affine transform by adding scaling and possibly shearing to the transform model. We have noticed that more information can be used in the 2D registration if Maximum Intensity Projections (MIP) or Parallel Projections (PP) is used instead of the orthogonal views. Also, other similarity measures, such as covariance or mutual information, can be easily incorporated. The initial evaluation on microCT data shows very promising results. Two application examples are shown: dental samples before and after treatment and structural changes in materials before and after compression. Evaluation on registration accuracy between pseudo-3D method and true 3D method has
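
    A minimal sketch of one such per-view step, assuming SSD as the similarity measure and Powell's method as stated above (the surrounding pseudo-3D iteration, MIP generation and visualization are omitted; names are illustrative):

        import numpy as np
        from scipy.ndimage import affine_transform
        from scipy.optimize import minimize

        def ssd_cost(params, fixed, moving):
            """SSD between the fixed view and the moving view after a rigid 2D
            transform (rotation about the image centre plus a translation)."""
            angle, ty, tx = params
            c, s = np.cos(angle), np.sin(angle)
            rot = np.array([[c, -s], [s, c]])
            centre = (np.array(moving.shape) - 1) / 2.0
            offset = centre - rot @ centre + np.array([ty, tx])
            warped = affine_transform(moving, rot, offset=offset, order=1)
            return np.mean((fixed - warped) ** 2)

        def register_view(fixed, moving):
            """Powell search over (angle, ty, tx) for one orthogonal view."""
            res = minimize(ssd_cost, x0=np.zeros(3), args=(fixed, moving),
                           method="Powell")
            return res.x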

  16. Automated 2D-3D registration of a radiograph and a cone beam CT using line-segment enhancement

    SciTech Connect

    Munbodh, Reshma; Jaffray, David A.; Moseley, Douglas J.; Chen Zhe; Knisely, Jonathan P.S.; Cathier, Pascal; Duncan, James S.

    2006-05-15

    The objective of this study was to develop a fully automated two-dimensional (2D)-three-dimensional (3D) registration framework to quantify setup deviations in prostate radiation therapy from cone beam CT (CBCT) data and a single AP radiograph. A kilovoltage CBCT image and kilovoltage AP radiograph of an anthropomorphic phantom of the pelvis were acquired at 14 accurately known positions. The shifts in the phantom position were subsequently estimated by registering digitally reconstructed radiographs (DRRs) from the 3D CBCT scan to the AP radiographs through the correlation of enhanced linear image features mainly representing bony ridges. Linear features were enhanced by filtering the images with "sticks," short line segments whose orientation is varied to achieve the maximum projection value at every pixel in the image. The mean (and standard deviations) of the absolute errors in estimating translations along the three orthogonal axes in millimeters were 0.134 (0.096) AP (out-of-plane), 0.021 (0.023) ML and 0.020 (0.020) SI. The corresponding errors for rotations in degrees were 0.011 (0.009) AP, 0.029 (0.016) ML (out-of-plane), and 0.030 (0.028) SI (out-of-plane). Preliminary results with megavoltage patient data have also been reported. The results suggest that it may be possible to enhance anatomic features that are common to DRRs from a CBCT image and a single AP radiograph of the pelvis for use in a completely automated and accurate 2D-3D registration framework for setup verification in prostate radiotherapy. This technique is theoretically applicable to other rigid bony structures such as the cranial vault or skull base and piecewise rigid structures such as the spine.
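
    A simplified version of stick filtering, not necessarily the authors' exact kernel design, can be written as the maximum over convolutions with short oriented line segments:

        import numpy as np
        from scipy.ndimage import convolve

        def sticks_filter(image, length=9, n_angles=8):
            """Enhance line-like structures: convolve with short line segments
            ('sticks') at several orientations and keep the maximum response."""
            response = np.full(image.shape, -np.inf)
            half = length // 2
            for theta in np.linspace(0, np.pi, n_angles, endpoint=False):
                kernel = np.zeros((length, length))
                for r in range(-half, half + 1):       # rasterise one stick
                    y = int(round(half + r * np.sin(theta)))
                    x = int(round(half + r * np.cos(theta)))
                    kernel[y, x] = 1.0
                kernel /= kernel.sum()
                response = np.maximum(response, convolve(image.astype(float), kernel))
            return response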

  17. A method of 2D/3D registration of a statistical mouse atlas with a planar X-ray projection and an optical photo

    PubMed Central

    Wang, Hongkai; Stout, David B; Chatziioannou, Arion F

    2013-01-01

    The development of sophisticated and high throughput whole body small animal imaging technologies has created a need for improved image analysis and increased automation. The registration of a digital mouse atlas to individual images is a prerequisite for automated organ segmentation and uptake quantification. This paper presents a fully-automatic method for registering a statistical mouse atlas with individual subjects based on an anterior-posterior X-ray projection and a lateral optical photo of the mouse silhouette. The mouse atlas was trained as a statistical shape model based on 83 organ-segmented micro-CT images. For registration, a hierarchical approach is applied which first registers high contrast organs, and then estimates low contrast organs based on the registered high contrast organs. To register the high contrast organs, a 2D-registration-back-projection strategy is used that deforms the 3D atlas based on the 2D registrations of the atlas projections. For validation, this method was evaluated using 55 subjects of preclinical mouse studies. The results showed that this method can compensate for moderate variations of animal postures and organ anatomy. Two different metrics, the Dice coefficient and the average surface distance, were used to assess the registration accuracy of major organs. The Dice coefficients vary from 0.31±0.16 for the spleen to 0.88±0.03 for the whole body, and the average surface distance varies from 0.54±0.06 mm for the lungs to 0.85±0.10 mm for the skin. The method was compared with a direct 3D deformation optimization (without 2D-registration-back-projection) and a single-subject atlas registration (instead of using the statistical atlas). The comparison revealed that the 2D-registration-back-projection strategy significantly improved the registration accuracy, and the use of the statistical mouse atlas led to more plausible organ shapes than the single-subject atlas. This method was also tested with shoulder xenograft

  18. Image Registration for Stability Testing of MEMS

    NASA Technical Reports Server (NTRS)

    Memarsadeghi, Nargess; LeMoigne, Jacqueline; Blake, Peter N.; Morey, Peter A.; Landsman, Wayne B.; Chambers, Victor J.; Moseley, Samuel H.

    2011-01-01

    Image registration, or alignment of two or more images covering the same scenes or objects, is of great interest in many disciplines such as remote sensing, medical imaging, astronomy, and computer vision. In this paper, we introduce a new application of image registration algorithms. We demonstrate how, through a wavelet-based image registration algorithm, engineers can evaluate the stability of Micro-Electro-Mechanical Systems (MEMS). In particular, we applied image registration algorithms to assess the alignment stability of the MicroShutters Subsystem (MSS) of the Near Infrared Spectrograph (NIRSpec) instrument of the James Webb Space Telescope (JWST). This work introduces a new methodology for evaluating the stability of MEMS devices to engineers, as well as a new application of image registration algorithms to computer scientists.

  19. Intensity-based femoral atlas 2D/3D registration using Levenberg-Marquardt optimisation

    NASA Astrophysics Data System (ADS)

    Klima, Ondrej; Kleparnik, Petr; Spanel, Michal; Zemcik, Pavel

    2016-03-01

    The reconstruction of a patient-specific 3D anatomy is the crucial step in computer-aided preoperative planning based on plain X-ray images. In this paper, we propose a robust and fast reconstruction method based on fitting the statistical shape and intensity model of a femoral bone onto a pair of calibrated X-ray images. We formulate the registration as a non-linear least squares problem, allowing for the involvement of Levenberg-Marquardt optimisation. The proposed method has been tested on a set of 96 virtual X-ray images. The reconstruction accuracy was evaluated using the symmetric Hausdorff distance between reconstructed and ground-truth bones. The accuracy of the intensity-based method reached 1.18 +/- 1.57 mm on average, and the registration took 8.76 seconds on average.
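
    A hedged sketch of the optimisation step only: the statistical shape-and-intensity model and DRR generation are assumed to be wrapped in a user-supplied residual function, and SciPy's Levenberg-Marquardt solver does the fitting.

        import numpy as np
        from scipy.optimize import least_squares

        def fit_model(residual_fn, x0):
            """Levenberg-Marquardt fit of shape/pose parameters; residual_fn(x)
            must return the intensity differences between the projected model
            and the calibrated X-ray images (user-supplied, hypothetical)."""
            return least_squares(residual_fn, x0, method="lm").x

        # Toy usage: recover two parameters of a synthetic quadratic "projection".
        observed = 3.0 * np.arange(10) ** 2 + 1.5
        fitted = fit_model(lambda p: p[0] * np.arange(10) ** 2 + p[1] - observed,
                           x0=np.ones(2))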

  20. Evaluation of similarity measures for use in the intensity-based rigid 2D-3D registration for patient positioning in radiotherapy

    SciTech Connect

    Wu Jian; Kim, Minho; Peters, Jorg; Chung, Heeteak; Samant, Sanjiv S.

    2009-12-15

    Purpose: Rigid 2D-3D registration is an alternative to 3D-3D registration for cases where largely bony anatomy can be used for patient positioning in external beam radiation therapy. In this article, the authors evaluated seven similarity measures for use in the intensity-based rigid 2D-3D registration using a variation of Skerl's similarity measure evaluation protocol. Methods: The seven similarity measures are partitioned intensity uniformity, normalized mutual information (NMI), normalized cross correlation (NCC), entropy of the difference image, pattern intensity (PI), gradient correlation (GC), and gradient difference (GD). In contrast to traditional evaluation methods that rely on visual inspection or registration outcomes, the similarity measure evaluation protocol probes the transform parameter space and computes a number of similarity measure properties, which is objective and independent of the optimization method. The protocol variation improves the quantification of the capture range. The authors used this protocol to investigate the effects of the downsampling ratio, the region of interest, and the method of the digitally reconstructed radiograph (DRR) calculation [i.e., the incremental ray-tracing method implemented on a central processing unit (CPU) or the 3D texture rendering method implemented on a graphics processing unit (GPU)] on the performance of the similarity measures. The studies were carried out using both the kilovoltage (kV) and the megavoltage (MV) images of an anthropomorphic cranial phantom and the MV images of a head-and-neck cancer patient. Results: Both the phantom and the patient studies showed that the 2D-3D registration using the GPU-based DRR calculation yielded better robustness, while providing accuracy similar to the CPU-based calculation. The phantom study using kV imaging suggested that NCC has the best accuracy and robustness, but its slow function value change near the global maximum requires a
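
    Two of the listed measures, normalized cross correlation (NCC) and gradient correlation (GC), have simple standard definitions; the sketch below follows those definitions and is not the authors' implementation.

        import numpy as np

        def ncc(a, b):
            """Normalized cross correlation between a DRR and a radiograph."""
            a = (a - a.mean()) / a.std()
            b = (b - b.mean()) / b.std()
            return np.mean(a * b)

        def gradient_correlation(a, b):
            """Average NCC of the vertical and horizontal image gradients."""
            ay, ax = np.gradient(a)
            by, bx = np.gradient(b)
            return 0.5 * (ncc(ay, by) + ncc(ax, bx))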

  1. Digital image registration by correlation techniques.

    NASA Technical Reports Server (NTRS)

    Popp, D. J.; Mccormack, D. S.; Lee, G. M.

    1972-01-01

    This study considers the translation problem associated with digital image registration and develops a means for comparing commonly used correlation techniques. Using suitably defined constraints, an optimum and four suboptimum registration techniques are defined and evaluated. A computational comparison is made and Gaussian image statistics are used to compare the selected techniques in terms of radial position location error.

  2. Research relative to automated multisensor image registration

    NASA Technical Reports Server (NTRS)

    Kanal, L. N.

    1983-01-01

    The basic approaches to image registration are surveyed. Three image models of the subpixel problem are presented. A variety of approaches to subpixel analysis are presented using these models.

  3. Development of a piecewise linear omnidirectional 3D image registration method.

    PubMed

    Bae, Hyunsoo; Kang, Wonjin; Lee, SukGyu; Kim, Youngwoo

    2016-12-01

    This paper proposes a new piecewise linear omnidirectional image registration method. The proposed method segments an image captured by multiple cameras into 2D segments defined by feature points of the image and then stitches each segment geometrically by considering the inclination of the segment in the 3D space. Depending on the intended use of image registration, the proposed method can be used to improve image registration accuracy or reduce the computation time in image registration because the trade-off between the computation time and image registration accuracy can be controlled for. In general, nonlinear image registration methods have been used in 3D omnidirectional image registration processes to reduce image distortion by camera lenses. The proposed method depends on a linear transformation process for omnidirectional image registration, and therefore it can enhance the effectiveness of the geometry recognition process, increase image registration accuracy by increasing the number of cameras or feature points of each image, increase the image registration speed by reducing the number of cameras or feature points of each image, and provide simultaneous information on shapes and colors of captured objects.

  4. Development of a piecewise linear omnidirectional 3D image registration method

    NASA Astrophysics Data System (ADS)

    Bae, Hyunsoo; Kang, Wonjin; Lee, SukGyu; Kim, Youngwoo

    2016-12-01

    This paper proposes a new piecewise linear omnidirectional image registration method. The proposed method segments an image captured by multiple cameras into 2D segments defined by feature points of the image and then stitches each segment geometrically by considering the inclination of the segment in the 3D space. Depending on the intended use of image registration, the proposed method can be used to improve image registration accuracy or reduce the computation time in image registration because the trade-off between the computation time and image registration accuracy can be controlled for. In general, nonlinear image registration methods have been used in 3D omnidirectional image registration processes to reduce image distortion by camera lenses. The proposed method depends on a linear transformation process for omnidirectional image registration, and therefore it can enhance the effectiveness of the geometry recognition process, increase image registration accuracy by increasing the number of cameras or feature points of each image, increase the image registration speed by reducing the number of cameras or feature points of each image, and provide simultaneous information on shapes and colors of captured objects.

  5. Contrast-Based 3D/2D Registration of the Left Atrium: Fast versus Consistent.

    PubMed

    Hoffmann, Matthias; Kowalewski, Christopher; Maier, Andreas; Kurzidim, Klaus; Strobel, Norbert; Hornegger, Joachim

    2016-01-01

    For augmented fluoroscopy during cardiac ablation, a preoperatively acquired 3D model of a patient's left atrium (LA) can be registered to X-ray images recorded during a contrast agent (CA) injection. An automatic registration method that works also for small amounts of CA is desired. We propose two similarity measures: The first focuses on edges of the patient anatomy. The second computes a contrast agent distribution estimate (CADE) inside the 3D model and rates its consistency with the CA as seen in biplane fluoroscopic images. Moreover, temporal filtering on the obtained registration results of a sequence is applied using a Markov chain framework. Evaluation was performed on 11 well-contrasted clinical angiographic sequences and 10 additional sequences with less CA. For well-contrasted sequences, the error for all 73 frames was 7.9 ± 6.3 mm and it dropped to 4.6 ± 4.0 mm when registering to an automatically selected, well enhanced frame in each sequence. Temporal filtering reduced the error for all frames from 7.9 ± 6.3 mm to 5.7 ± 4.6 mm. The error was typically higher if less CA was used. A combination of both similarity measures outperforms a previously proposed similarity measure. The mean accuracy for well contrasted sequences is in the range of other proposed manual registration methods.

  6. Contrast-Based 3D/2D Registration of the Left Atrium: Fast versus Consistent

    PubMed Central

    Kowalewski, Christopher; Kurzidim, Klaus; Strobel, Norbert; Hornegger, Joachim

    2016-01-01

    For augmented fluoroscopy during cardiac ablation, a preoperatively acquired 3D model of a patient's left atrium (LA) can be registered to X-ray images recorded during a contrast agent (CA) injection. An automatic registration method that works also for small amounts of CA is desired. We propose two similarity measures: The first focuses on edges of the patient anatomy. The second computes a contrast agent distribution estimate (CADE) inside the 3D model and rates its consistency with the CA as seen in biplane fluoroscopic images. Moreover, temporal filtering on the obtained registration results of a sequence is applied using a Markov chain framework. Evaluation was performed on 11 well-contrasted clinical angiographic sequences and 10 additional sequences with less CA. For well-contrasted sequences, the error for all 73 frames was 7.9 ± 6.3 mm and it dropped to 4.6 ± 4.0 mm when registering to an automatically selected, well enhanced frame in each sequence. Temporal filtering reduced the error for all frames from 7.9 ± 6.3 mm to 5.7 ± 4.6 mm. The error was typically higher if less CA was used. A combination of both similarity measures outperforms a previously proposed similarity measure. The mean accuracy for well contrasted sequences is in the range of other proposed manual registration methods. PMID:27051412

  7. Edge-based correlation image registration for multispectral imaging

    DOEpatents

    Nandy, Prabal

    2009-11-17

    Registration information for images of a common target obtained from a plurality of different spectral bands can be obtained by combining edge detection and phase correlation. The images are edge-filtered, and pairs of the edge-filtered images are then phase correlated to produce phase correlation images. The registration information can be determined based on these phase correlation images.
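
    A minimal sketch of the edge-filter-then-phase-correlate idea, using a Sobel gradient magnitude as the edge filter (an assumption; the patent does not prescribe a specific edge detector):

        import numpy as np
        from scipy.ndimage import sobel

        def edge_phase_correlation(img_a, img_b):
            """Phase-correlate Sobel gradient-magnitude images of two bands;
            the peak location gives the (row, col) translation between them."""
            def edges(img):
                return np.hypot(sobel(img.astype(float), axis=0),
                                sobel(img.astype(float), axis=1))
            fa, fb = np.fft.fft2(edges(img_a)), np.fft.fft2(edges(img_b))
            cross = fa * np.conj(fb)
            surface = np.fft.ifft2(cross / (np.abs(cross) + 1e-12)).real
            shift = np.unravel_index(np.argmax(surface), surface.shape)
            # Shifts past the midpoint wrap around to negative displacements.
            return tuple(s - n if s > n // 2 else s
                         for s, n in zip(shift, surface.shape))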

  8. A 2D 3D registration with low dose radiographic system for in vivo kinematic studies.

    PubMed

    Jerbi, T; Burdin, V; Stindel, E; Roux, C

    2011-01-01

    The knowledge of the poses and positions of the knee bones and prostheses is of great interest in orthopedic and biomechanical applications. In this context, we use an ultra-low-dose bi-planar radiographic system called EOS to acquire two radiographs of the studied bones in each position. In this paper, we develop a new method for 2D 3D registration based on the frequency domain to determine the poses and positions during quasi-static motion analysis for healthy and prosthetic knees. Data from two healthy knees and four knees with unicompartmental prostheses performing three different poses (full extension, 30° and 60° of flexion) were used in this work. The results we obtained are in agreement with clinical accuracy requirements and with the accuracy reported in previous studies.

  9. Automatic registration and mosaicking of conservation images

    NASA Astrophysics Data System (ADS)

    Conover, Damon M.; Delaney, John K.; Loew, Murray H.

    2013-05-01

    As high-resolution conservation images, acquired using various imaging modalities, become more widely available, it is increasingly important to achieve accurate registration between the images. Accurate registration allows information unavailable in any one image to be compiled from several images and then used to provide a better understanding of how a painting was constructed. We have developed an algorithm that solves several important conservation problems: 1) registration and mosaicking of multiple X-ray films, ultraviolet images, and infrared reflectograms to a color reference image at high spatial-resolution (200 to 500 dpi) of paintings (both panel and canvas) and of works on paper, 2) registration of the images within visible and infrared multispectral reflectance and luminescence image cubes, and 3) mosaicking of hyperspectral image cubes (400 to 2500 nm). The registration/mosaicking algorithm corrects for several kinds of distortion, small rotation and scale errors, and keystone effects between the images. Thus images acquired with different cameras, illumination, and geometries can be registered/mosaicked. This automatic algorithm for registering/mosaicking multimodal conservation images is expected to be a valuable tool for conservators attempting to answer questions regarding the creation and preservation history of paintings. For example, an analysis of the reflectance spectra obtained from the sub-pixel registered multispectral image cubes can be used to separate, map, and identify artist materials in situ. And, by comparing the corresponding images in the X-ray, visible, and infrared regions, conservators can obtain a deeper understanding of compositional changes.

  10. Photorealistic image synthesis and camera validation from 2D images

    NASA Astrophysics Data System (ADS)

    Santos Ferrer, Juan C.; González Chévere, David; Manian, Vidya

    2014-06-01

    This paper presents a new 3D scene reconstruction technique using the Unity 3D game engine. The method allows us to reconstruct the shape of simple and more complex objects from multiple 2D images, including infrared and digital images for indoor scenes and digital images only for outdoor scenes, and then add the reconstructed object to a simulated scene created in Unity 3D; these scenes are then validated against real-world scenes. The method uses different camera settings and explores different properties in the reconstruction of the scenes, including light, color, texture, shape, and viewpoint. To achieve the highest possible resolution, partial textures were extracted from visible surfaces. To recover the 3D shapes and depth of simple objects that can be represented by geometric bodies, their geometric characteristics were used. To estimate the depth of more complex objects, triangulation was used; for this, the intrinsic and extrinsic parameters were calculated using geometric camera calibration. The methods above were implemented in Matlab. The technique also lets us simulate short videos by reconstructing a sequence of scenes separated by small intervals of time. To measure the quality of the reconstructed images and video scenes, the Fast Low Band Model (FLBM) metric from the Video Quality Measurement (VQM) software was used; low-bandwidth perception-based features include edges and motion.

  11. A Multistage Approach for Image Registration.

    PubMed

    Bowen, Francis; Hu, Jianghai; Du, Eliza Yingzi

    2016-09-01

    Successful image registration is an important step for object recognition, target detection, remote sensing, multimodal content fusion, scene blending, and disaster assessment and management. The geometric and photometric variations between images adversely affect the ability of an algorithm to estimate the transformation parameters that relate the two images. Local deformations, lighting conditions, object obstructions, and perspective differences all contribute to the challenges faced by traditional registration techniques. In this paper, a novel multistage registration approach is proposed that is resilient to viewpoint differences, image content variations, and lighting conditions. Robust registration is realized through the utilization of a novel region descriptor that couples the spatial and texture characteristics of invariant feature points. The proposed region descriptor is exploited in a multistage approach. A multistage process allows the utilization of the graph-based descriptor in many scenarios, thus allowing the algorithm to be applied to a broader set of images. Each successive stage of the registration technique is evaluated through an effective similarity metric which determines the subsequent action. The registration of aerial and street-view images from pre- and post-disaster scenes provides strong evidence that the proposed method estimates more accurate global transformation parameters than traditional feature-based methods. Experimental results show the robustness and accuracy of the proposed multistage image registration methodology.

  12. 2-D-3-D frequency registration using a low-dose radiographic system for knee motion estimation.

    PubMed

    Jerbi, Taha; Burdin, Valerie; Leboucher, Julien; Stindel, Eric; Roux, Christian

    2013-03-01

    In this paper, a new method is presented to study the feasibility of the pose and position estimation of bone structures using a low-dose radiographic system, EOS (designed by the EOS Imaging company). This method is based on a 2-D-3-D registration of EOS bi-planar X-ray images with an EOS 3-D reconstruction. This technique is relevant to such an application thanks to the ability of the EOS system to simultaneously acquire frontal and sagittal radiographs, and also to produce a 3-D surface reconstruction with its attached software. In this paper, the pose and position of a bone in the radiographs are estimated through the link between 3-D and 2-D data. This relationship is established in the frequency domain using the Fourier central slice theorem. To estimate the pose and position of the bone, we define a distance between the 3-D data and the radiographs, and use an iterative optimization approach to converge toward the best estimate. In this paper, we give the mathematical details of the method. We also show the experimental protocol and the results, which validate our approach.
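
    The frequency-domain link can be illustrated with the parallel-beam form of the Fourier central slice theorem, a simplification of the bi-planar EOS geometry used in the paper: the 2D FFT of an axis-aligned projection equals the corresponding central plane of the volume's 3D FFT.

        import numpy as np

        # Central-slice check for an axis-aligned parallel projection: the 2D
        # FFT of the projection along z equals the kz = 0 plane of the 3D FFT.
        rng = np.random.default_rng(0)
        vol = rng.random((32, 32, 32))

        proj = vol.sum(axis=2)                      # parallel projection along z
        central_slice = np.fft.fftn(vol)[:, :, 0]   # kz = 0 plane of the 3D FFT

        print(np.allclose(np.fft.fft2(proj), central_slice))   # True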

  13. Improved 2D/3D registration robustness using local spatial information

    NASA Astrophysics Data System (ADS)

    De Momi, Elena; Eckman, Kort; Jaramaz, Branislav; DiGioia, Anthony, III

    2006-03-01

    Xalign is a tool designed to measure implant orientation after joint arthroplasty by co-registering a projection of an implant model and a digitally reconstructed radiograph of the patient's anatomy with a post-operative x-ray. A mutual information based registration method is used to automate alignment. When using basic mutual information, the presence of local maxima can result in misregistration. To increase the robustness of registration, our research is aimed at improving the similarity function by modifying the information measure and incorporating local spatial information. A test dataset with known ground-truth parameters was created to evaluate the performance of this measure. A synthetic radiograph was generated first from a preoperative pelvic CT scan to act as the gold standard. The voxel weights used to generate the image were then modified and new images were generated with the CT rigidly transformed. The roll, pitch and yaw angles span a range of -10/+10 degrees, while x, y and z translations range from -10mm to +10mm. These images were compared with the reference image. The proposed cost function identified the correct pose in all tests and did not exhibit any local maxima which would slow or prevent locating the global maximum.

  14. Nonrigid image registration with crystal dislocation energy.

    PubMed

    Luo, Yishan; Chung, Albert C S

    2013-01-01

    The goal of nonrigid image registration is to find a suitable transformation such that the transformed moving image becomes similar to the reference image. The image registration problem can also be treated as an optimization problem, which tries to minimize an objective energy function that measures the differences between two involved images. In this paper, we consider image matching as the process of aligning object boundaries in two different images. The registration energy function can be defined based on the total energy associated with the object boundaries. The optimal transformation is obtained by finding the equilibrium state when the total energy is minimized, which indicates the object boundaries find their correspondences and stop deforming. We make an analogy between the above processes with the dislocation system in physics. The object boundaries are viewed as dislocations (line defects) in crystal. Then the well-developed dislocation energy is used to derive the energy assigned to object boundaries in images. The newly derived registration energy function takes the global gradient information of the entire image into consideration, and produces an orientation-dependent and long-range interaction between two images to drive the registration process. This property of interaction endows the new registration framework with both fast convergence rate and high registration accuracy. Moreover, the new energy function can be adapted to realize symmetric diffeomorphic transformation so as to ensure one-to-one matching between subjects. In this paper, the superiority of the new method is theoretically proven, experimentally tested and compared with the state-of-the-art SyN method. Experimental results with 3-D magnetic resonance brain images demonstrate that the proposed method outperforms the compared methods in terms of both registration accuracy and computation time.

  15. Non-rigid registration of medical images based on estimation of deformation states

    NASA Astrophysics Data System (ADS)

    Marami, Bahram; Sirouspour, Shahin; Capson, David W.

    2014-11-01

    A unified framework for automatic non-rigid 3D-3D and 3D-2D registration of medical images with static and dynamic deformations is proposed in this paper. The problem of non-rigid image registration is approached as a classical state estimation problem using a generic deformation model for the soft tissue. The registration technique employs a dynamic linear elastic continuum mechanics model of the tissue deformation, which is discretized using the finite element method. In the proposed method, the registration is achieved through a Kalman-like filtering process, which incorporates information from the deformation model and a vector of observation prediction errors computed from an intensity-based similarity/distance metric between images. With this formulation, single and multiple-modality, 3D-3D and 3D-2D image registration problems can all be treated within the same framework. The performance of the proposed registration technique was evaluated in a number of different registration scenarios. First, 3D magnetic resonance (MR) images of uncompressed and compressed breast tissue were co-registered. 3D MR images of the uncompressed breast tissue were also registered to a sequence of simulated 2D interventional MR images of the compressed breast. Finally, the registration algorithm was employed to dynamically track a target sub-volume inside the breast tissue during the process of the biopsy needle insertion based on registering pre-insertion 3D MR images to a sequence of real-time simulated 2D interventional MR images. Registration results indicate that the proposed method can be effectively employed for the registration of medical images in image-guided procedures, such as breast biopsy in which the tissue undergoes static and dynamic deformations.

  16. Non-rigid registration of medical images based on estimation of deformation states.

    PubMed

    Marami, Bahram; Sirouspour, Shahin; Capson, David W

    2014-11-21

    A unified framework for automatic non-rigid 3D-3D and 3D-2D registration of medical images with static and dynamic deformations is proposed in this paper. The problem of non-rigid image registration is approached as a classical state estimation problem using a generic deformation model for the soft tissue. The registration technique employs a dynamic linear elastic continuum mechanics model of the tissue deformation, which is discretized using the finite element method. In the proposed method, the registration is achieved through a Kalman-like filtering process, which incorporates information from the deformation model and a vector of observation prediction errors computed from an intensity-based similarity/distance metric between images. With this formulation, single and multiple-modality, 3D-3D and 3D-2D image registration problems can all be treated within the same framework. The performance of the proposed registration technique was evaluated in a number of different registration scenarios. First, 3D magnetic resonance (MR) images of uncompressed and compressed breast tissue were co-registered. 3D MR images of the uncompressed breast tissue were also registered to a sequence of simulated 2D interventional MR images of the compressed breast. Finally, the registration algorithm was employed to dynamically track a target sub-volume inside the breast tissue during the process of the biopsy needle insertion based on registering pre-insertion 3D MR images to a sequence of real-time simulated 2D interventional MR images. Registration results indicate that the proposed method can be effectively employed for the registration of medical images in image-guided procedures, such as breast biopsy in which the tissue undergoes static and dynamic deformations.

  17. Deformable Medical Image Registration: A Survey

    PubMed Central

    Sotiras, Aristeidis; Davatzikos, Christos; Paragios, Nikos

    2013-01-01

    Deformable image registration is a fundamental task in medical image processing. Among its most important applications, one may cite: i) multi-modality fusion, where information acquired by different imaging devices or protocols is fused to facilitate diagnosis and treatment planning; ii) longitudinal studies, where temporal structural or anatomical changes are investigated; and iii) population modeling and statistical atlases used to study normal anatomical variability. In this paper, we attempt to give an overview of deformable registration methods, putting emphasis on the most recent advances in the domain. Additional emphasis has been given to techniques applied to medical images. In order to study image registration methods in depth, their main components are identified and studied independently. The most recent techniques are presented in a systematic fashion. The contribution of this paper is to provide an extensive account of registration techniques in a systematic manner. PMID:23739795

  18. Automated Registration Of Images From Multiple Sensors

    NASA Technical Reports Server (NTRS)

    Rignot, Eric J. M.; Kwok, Ronald; Curlander, John C.; Pang, Shirley S. N.

    1994-01-01

    Images of terrain scanned in common by multiple Earth-orbiting remote sensors registered automatically with each other and, where possible, on geographic coordinate grid. Simulated image of terrain viewed by sensor computed from ancillary data, viewing geometry, and mathematical model of physics of imaging. In proposed registration algorithm, simulated and actual sensor images matched by area-correlation technique.

  19. Effects of Spatial Resolution on Image Registration

    PubMed Central

    Zhao, Can; Carass, Aaron; Jog, Amod; Prince, Jerry L.

    2016-01-01

    This paper presents a theoretical analysis of the effect of spatial resolution on image registration. Based on the assumption of additive Gaussian noise on the images, the mean and variance of the distribution of the sum of squared differences (SSD) were estimated. Using these estimates, we evaluate a distance between the SSD distributions of aligned images and non-aligned images. The experimental results show that by matching the resolutions of the moving and fixed images one can get a better image registration result. The results agree with our theoretical analysis of SSD, but also suggest that it may be valid for mutual information as well. PMID:27773960

  20. Image registration techniques for multimodal sensors

    NASA Astrophysics Data System (ADS)

    Altinalev, Tevfik; Cetin, Enis A.; Yardimci, Yasemin C.

    2002-08-01

    Image registration refers to the problem of spatially aligning two or more images. A challenging problem in this area is the registration of images obtained by different types of sensors. In general such images have different gray level characteristics and commonly used techniques such as those based on area correlations cannot be applied directly. On the other hand, contours representing the region boundaries are preserved in most cases. Therefore, contour based registration techniques are applicable to multimodal sensors. In this paper, various registration techniques based on subband decomposition and projection along x and y directions are introduced. The effect of binarization is investigated. Unknown translation and scaling parameters are computed using cross-correlation methods over the projections. Performance of the algorithms is compared.
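
    A sketch of the projection-based translation estimate described above, cross-correlating the projections along x and y via the FFT (binarization, subband decomposition and scale estimation are omitted):

        import numpy as np

        def shift_from_projections(img_a, img_b):
            """Estimate (dy, dx) translation by cross-correlating the row and
            column sums (projections along x and y) of two images via the FFT."""
            def shift_1d(p, q):
                p, q = p - p.mean(), q - q.mean()
                corr = np.fft.ifft(np.fft.fft(p) * np.conj(np.fft.fft(q))).real
                k = int(np.argmax(corr))
                return k - len(p) if k > len(p) // 2 else k
            dy = shift_1d(img_a.sum(axis=1), img_b.sum(axis=1))  # y projections
            dx = shift_1d(img_a.sum(axis=0), img_b.sum(axis=0))  # x projections
            return dy, dx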

  1. Available information in 2D motional Stark effect imaging.

    PubMed

    Creese, Mathew; Howard, John

    2010-10-01

    Recent advances in imaging techniques have allowed the extension of the standard polarimetric 1D motional Stark effect (MSE) diagnostic to 2D imaging of the internal magnetic field of fusion devices [J. Howard, Plasma Phys. Controlled Fusion 50, 125003 (2008)]. This development is met with the challenge of identifying and extracting the new information, which can then be used to increase the accuracy of plasma equilibrium and current density profile determinations. This paper develops a 2D analysis of the projected MSE polarization orientation and Doppler phase shift. It is found that, for a standard viewing position, the 2D MSE imaging system captures sufficient information to allow imaging of the internal vertical magnetic field component B(Z)(r,z) in a tokamak.

  2. Sparse radar imaging using 2D compressed sensing

    NASA Astrophysics Data System (ADS)

    Hou, Qingkai; Liu, Yang; Chen, Zengping; Su, Shaoying

    2014-10-01

    Radar imaging is an ill-posed linear inverse problem and compressed sensing (CS) has been proved to have tremendous potential in this field. This paper surveys the theory of radar imaging and concludes that ISAR image formation can be expressed mathematically as a 2D sparse decomposition problem. Based on CS, we propose a novel measuring strategy for ISAR imaging radar and utilize random sub-sampling in both the range and azimuth dimensions, which reduces the amount of sampled data tremendously. To handle the 2D reconstruction problem, the ordinary solution is to convert the 2D problem into 1D using the Kronecker product, which sharply increases the dictionary size and computational cost. In this paper, we introduce the 2D-SL0 algorithm into the image reconstruction. It is shown that 2D-SL0 achieves results equivalent to other 1D reconstruction methods, while reducing computational complexity and memory usage significantly. Moreover, we present simulation results that demonstrate the effectiveness and feasibility of our method.

  3. Register cardiac fiber orientations from 3D DTI volume to 2D ultrasound image of rat hearts

    NASA Astrophysics Data System (ADS)

    Qin, Xulei; Wang, Silun; Shen, Ming; Zhang, Xiaodong; Lerakis, Stamatios; Wagner, Mary B.; Fei, Baowei

    2015-03-01

    Two-dimensional (2D) ultrasound or echocardiography is one of the most widely used examinations for the diagnosis of cardiac diseases. However, it only supplies the geometric and structural information of the myocardium. In order to supply more detailed microstructure information of the myocardium, this paper proposes a registration method to map cardiac fiber orientations from a three-dimensional (3D) magnetic resonance diffusion tensor imaging (MR-DTI) volume to the 2D ultrasound image. It utilizes a 2D/3D intensity-based registration procedure including rigid, log-demons, and affine transformations to search for the most similar slice in the template volume. After registration, the cardiac fiber orientations are mapped to the 2D ultrasound image via fiber relocations and reorientations. This method was validated on six images of rat hearts ex vivo. The evaluation results indicated that the final Dice similarity coefficient (DSC) exceeded 90% after geometric registration, and the inclination angle errors (IAE) between the mapped fiber orientations and the gold standards were less than 15 degrees. This method may provide a practical tool for cardiologists to examine cardiac fiber orientations on ultrasound images and has the potential to supply additional information for the diagnosis of cardiac diseases.

  4. Onboard Image Registration from Invariant Features

    NASA Technical Reports Server (NTRS)

    Wang, Yi; Ng, Justin; Garay, Michael J.; Burl, Michael C

    2008-01-01

    This paper describes a feature-based image registration technique that is potentially well-suited for onboard deployment. The overall goal is to provide a fast, robust method for dynamically combining observations from multiple platforms into sensors webs that respond quickly to short-lived events and provide rich observations of objects that evolve in space and time. The approach, which has enjoyed considerable success in mainstream computer vision applications, uses invariant SIFT descriptors extracted at image interest points together with the RANSAC algorithm to robustly estimate transformation parameters that relate one image to another. Experimental results for two satellite image registration tasks are presented: (1) automatic registration of images from the MODIS instrument on Terra to the MODIS instrument on Aqua and (2) automatic stabilization of a multi-day sequence of GOES-West images collected during the October 2007 Southern California wildfires.
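
    A rough equivalent of the SIFT-plus-RANSAC pipeline using OpenCV, assuming OpenCV 4.4 or later with SIFT available, grayscale inputs, and a homography as the transformation model (the abstract only states that transformation parameters are estimated):

        import cv2
        import numpy as np

        def register_pair(img_ref, img_mov):
            """Match SIFT keypoints between two grayscale scenes and estimate a
            homography with RANSAC (requires OpenCV >= 4.4 with SIFT included)."""
            sift = cv2.SIFT_create()
            kp1, des1 = sift.detectAndCompute(img_ref, None)
            kp2, des2 = sift.detectAndCompute(img_mov, None)
            matcher = cv2.BFMatcher(cv2.NORM_L2)
            matches = [m for m, n in matcher.knnMatch(des1, des2, k=2)
                       if m.distance < 0.75 * n.distance]      # Lowe ratio test
            src = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
            dst = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
            H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
            return H     # maps points in img_mov to img_ref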

  5. Non-rigid registration of tomographic images with Fourier transforms

    NASA Astrophysics Data System (ADS)

    Osorio, Ar; Isoardi, Ra; Mato, G.

    2007-11-01

    Spatial image registration of deformable body parts such as thorax and abdomen has important medical applications, but at the same time, it represents an important computational challenge. In this work we propose an automatic algorithm to perform non-rigid registration of tomographic images using a non-rigid model based on Fourier transforms. As a measure of similarity, we use the correlation coefficient, finding that the optimal order of the transformation is n = 3 (36 parameters). We apply this method to a digital phantom and to 7 pairs of patient images corresponding to clinical CT scans. The preliminary results indicate a fairly good agreement according to medical experts, with an average registration error of 2 mm for the case of clinical images. For 2D images (dimensions 512×512), the average running time for the algorithm is 15 seconds using a standard personal computer. Summarizing, we find that intra-modality registration of the abdomen can be achieved with acceptable accuracy for slight deformations and can be extended to 3D with a reasonable execution time.

  6. Bidirectional elastic image registration using B-spline affine transformation.

    PubMed

    Gu, Suicheng; Meng, Xin; Sciurba, Frank C; Ma, Hongxia; Leader, Joseph; Kaminski, Naftali; Gur, David; Pu, Jiantao

    2014-06-01

    A registration scheme termed as B-spline affine transformation (BSAT) is presented in this study to elastically align two images. We define an affine transformation instead of the traditional translation at each control point. Mathematically, BSAT is a generalized form of the affine transformation and the traditional B-spline transformation (BST). In order to improve the performance of the iterative closest point (ICP) method in registering two homologous shapes but with large deformation, a bidirectional instead of the traditional unidirectional objective/cost function is proposed. In implementation, the objective function is formulated as a sparse linear equation problem, and a sub-division strategy is used to achieve a reasonable efficiency in registration. The performance of the developed scheme was assessed using both two-dimensional (2D) synthesized dataset and three-dimensional (3D) volumetric computed tomography (CT) data. Our experiments showed that the proposed B-spline affine model could obtain reasonable registration accuracy.

  7. Bidirectional Elastic Image Registration Using B-Spline Affine Transformation

    PubMed Central

    Gu, Suicheng; Meng, Xin; Sciurba, Frank C.; Wang, Chen; Kaminski, Naftali; Pu, Jiantao

    2014-01-01

    A registration scheme termed as B-spline affine transformation (BSAT) is presented in this study to elastically align two images. We define an affine transformation instead of the traditional translation at each control point. Mathematically, BSAT is a generalized form of the affine transformation and the traditional B-Spline transformation (BST). In order to improve the performance of the iterative closest point (ICP) method in registering two homologous shapes but with large deformation, a bi-directional instead of the traditional unidirectional objective / cost function is proposed. In implementation, the objective function is formulated as a sparse linear equation problem, and a sub-division strategy is used to achieve a reasonable efficiency in registration. The performance of the developed scheme was assessed using both two-dimensional (2D) synthesized dataset and three-dimensional (3D) volumetric computed tomography (CT) data. Our experiments showed that the proposed B-spline affine model could obtain reasonable registration accuracy. PMID:24530210

  8. 2D Orthogonal Locality Preserving Projection for Image Denoising.

    PubMed

    Shikkenawis, Gitam; Mitra, Suman K

    2016-01-01

    Sparse representations using transform-domain techniques are widely used for better interpretation of the raw data. Orthogonal locality preserving projection (OLPP) is a linear technique that tries to preserve local structure of data in the transform domain as well. Vectorized nature of OLPP requires high-dimensional data to be converted to vector format, hence may lose spatial neighborhood information of raw data. On the other hand, processing 2D data directly, not only preserves spatial information, but also improves the computational efficiency considerably. The 2D OLPP is expected to learn the transformation from 2D data itself. This paper derives mathematical foundation for 2D OLPP. The proposed technique is used for image denoising task. Recent state-of-the-art approaches for image denoising work on two major hypotheses, i.e., non-local self-similarity and sparse linear approximations of the data. Locality preserving nature of the proposed approach automatically takes care of self-similarity present in the image while inferring sparse basis. A global basis is adequate for the entire image. The proposed approach outperforms several state-of-the-art image denoising approaches for gray-scale, color, and texture images.

  9. Dual-projection 3D-2D registration for surgical guidance: preclinical evaluation of performance and minimum angular separation

    NASA Astrophysics Data System (ADS)

    Uneri, A.; Otake, Y.; Wang, A. S.; Kleinszig, G.; Vogt, S.; Gallia, G. L.; Rigamonti, D.; Wolinsky, J.-P.; Gokaslan, Ziya L.; Khanna, A. J.; Siewerdsen, J. H.

    2014-03-01

    An algorithm for 3D-2D registration of CT and x-ray projections has been developed using dual projection views to provide 3D localization with accuracy exceeding that of conventional tracking systems. The registration framework employs a normalized gradient information (NGI) similarity metric and covariance matrix adaptation evolution strategy (CMAES) to solve for the patient pose in 6 degrees of freedom. Registration performance was evaluated in anthropomorphic head and chest phantoms, as well as a human torso cadaver, using C-arm projection views acquired at angular separations (Δθ) ranging from 0° to 178°. Registration accuracy was assessed in terms of target registration error (TRE) and compared to that of an electromagnetic tracker. Studies evaluated the influence of C-arm magnification, x-ray dose, and preoperative CT slice thickness on registration accuracy and the minimum angular separation required to achieve TRE ~2 mm. The results indicate that Δθ as small as 10-20° is adequate to achieve TRE <2 mm with 95% confidence, comparable or superior to that of commercial trackers. The method allows direct registration of preoperative CT and planning data to intraoperative fluoroscopy, providing 3D localization free from conventional limitations associated with external fiducial markers, stereotactic frames, trackers, and manual registration. The studies support potential application to percutaneous spine procedures and intracranial neurosurgery.

  10. TU-B-19A-01: Image Registration II: TG132-Quality Assurance for Image Registration

    SciTech Connect

    Brock, K; Mutic, S

    2014-06-15

    AAPM Task Group 132 was charged with reviewing the current approaches and solutions for image registration in radiotherapy and providing recommendations for quality assurance and quality control of these clinical processes. As the results of image registration are always used as the input of another process for planning or delivery, it is important for the user to understand and document the uncertainty associated with the algorithm in general and with the result of a specific registration. The recommendations of this task group, which at the time of abstract submission are currently being reviewed by the AAPM, include the following components. The user should understand the basic image registration techniques and methods of visualizing image fusion. The disclosure of the basic components of the image registration by commercial vendors is critical in this respect. The physicist should perform end-to-end tests of imaging, registration, and planning/treatment systems if image registration is performed on a stand-alone system. A comprehensive commissioning process should be performed and documented by the physicist prior to clinical use of the system. As documentation is important to the safe implementation of this process, a request and report system should be integrated into the clinical workflow. Finally, a patient-specific QA practice should be established for efficient evaluation of image registration results. The implementation of these recommendations will be described and illustrated during this educational session. Learning Objectives: Highlight the importance of understanding the image registration techniques used in the clinic. Describe the end-to-end tests needed for stand-alone registration systems. Illustrate a comprehensive commissioning program using both phantom data and clinical images. Describe a request and report system to ensure communication and documentation. Demonstrate a clinically efficient patient QA practice for efficient evaluation of image

  11. Medical image registration using fuzzy theory.

    PubMed

    Pan, Meisen; Tang, Jingtian; Xiong, Qi

    2012-01-01

    Mutual information (MI)-based registration, which uses MI as the similarity measure, is a representative method in medical image registration. It has excellent robustness and accuracy, but with the disadvantages of a large amount of computation and a long processing time. In this paper, the centroid is obtained by computing the medical image moments. By applying fuzzy c-means clustering, the coordinates of the medical image are divided into two clusters to fit a straight line, and the rotation angles of the reference and floating images are computed, respectively. Thereby, the initial values for registering the images are determined. When searching for the optimal geometric transformation parameters, we put forward two new concepts, fuzzy distance and fuzzy signal-to-noise ratio (FSNR), and we select FSNR as the similarity measure between the reference and floating images. In the experiments, the Simplex method is chosen for multi-parameter optimisation. The experimental results show that the proposed method has a simple implementation, low computational cost, fast registration and good registration accuracy. Moreover, it can effectively avoid becoming trapped in local optima. It is suited to both mono-modality and multi-modality image registration.

  12. Real-time 2-D temperature imaging using ultrasound.

    PubMed

    Liu, Dalong; Ebbini, Emad S

    2010-01-01

    We have previously introduced methods for noninvasive estimation of temperature change using diagnostic ultrasound. The basic principle was validated both in vitro and in vivo by several groups worldwide. Some limitations remain, however, that have prevented these methods from being adopted in monitoring and guidance of minimally invasive thermal therapies, e.g., RF ablation and high-intensity-focused ultrasound (HIFU). In this letter, we present first results from a real-time system for 2-D imaging of temperature change using pulse-echo ultrasound. The front end of the system is a commercially available scanner equipped with a research interface, which allows the control of imaging sequence and access to the RF data in real time. A high-frame-rate 2-D RF acquisition mode, M2D, is used to capture the transients of tissue motion/deformations in response to pulsed HIFU. The M2D RF data is streamlined to the back end of the system, where a 2-D temperature imaging algorithm based on speckle tracking is implemented on a graphics processing unit. The real-time images of temperature change are computed on the same spatial and temporal grid of the M2D RF data, i.e., no decimation. Verification of the algorithm was performed by monitoring localized HIFU-induced heating of a tissue-mimicking elastography phantom. These results clearly demonstrate the repeatability and sensitivity of the algorithm. Furthermore, we present in vitro results demonstrating the possible use of this algorithm for imaging changes in tissue parameters due to HIFU-induced lesions. These results clearly demonstrate the value of the real-time data streaming and processing in monitoring, and guidance of minimally invasive thermotherapy.

  13. Reflectance and fluorescence hyperspectral elastic image registration

    NASA Astrophysics Data System (ADS)

    Lange, Holger; Baker, Ross; Hakansson, Johan; Gustafsson, Ulf P.

    2004-05-01

    Science and Technology International (STI) presents a novel multi-modal elastic image registration approach for a new hyperspectral medical imaging modality. STI's HyperSpectral Diagnostic Imaging (HSDI) cervical instrument is used for the early detection of uterine cervical cancer. A Computer-Aided-Diagnostic (CAD) system is being developed to aid the physician with the diagnosis of pre-cancerous and cancerous tissue regions. The CAD system uses the fusion of multiple data sources to optimize its performance. The key enabling technology for the data fusion is image registration. The difficulty lies in the image registration of fluorescence and reflectance hyperspectral data due to the occurrence of soft tissue movement and the limited resemblance of these types of imagery. The presented approach is based on embedding a reflectance image in the fluorescence hyperspectral imagery. Having a reflectance image in both data sets resolves the resemblance problem and thereby enables the use of elastic image registration algorithms required to compensate for soft tissue movements. Several methods of embedding the reflectance image in the fluorescence hyperspectral imagery are described. Initial experiments with human subject data are presented where a reflectance image is embedded in the fluorescence hyperspectral imagery.

  14. Mid-space-independent deformable image registration.

    PubMed

    Aganj, Iman; Iglesias, Juan Eugenio; Reuter, Martin; Sabuncu, Mert Rory; Fischl, Bruce

    2017-02-24

    Aligning images in a mid-space is a common approach to ensuring that deformable image registration is symmetric - that it does not depend on the arbitrary ordering of the input images. The results are, however, generally dependent on the mathematical definition of the mid-space. In particular, the set of possible solutions is typically restricted by the constraints that are enforced on the transformations to prevent the mid-space from drifting too far from the native image spaces. The use of an implicit atlas has been proposed as an approach to mid-space image registration. In this work, we show that when the atlas is aligned to each image in the native image space, the data term of implicit-atlas-based deformable registration is inherently independent of the mid-space. In addition, we show that the regularization term can be reformulated independently of the mid-space as well. We derive a new symmetric cost function that only depends on the transformation morphing the images to each other, rather than to the atlas. This eliminates the need for anti-drift constraints, thereby expanding the space of allowable deformations. We provide an implementation scheme for the proposed framework, and validate it through diffeomorphic registration experiments on brain magnetic resonance images.

  15. Image registration for DSA quality enhancement.

    PubMed

    Buzug, T M; Weese, J

    1998-01-01

    A generalized framework for histogram-based similarity measures is presented and applied to the image-enhancement task in digital subtraction angiography (DSA). The class of differentiable, strictly convex weighting functions is identified as suitable weightings of histograms for measuring the degree of clustering that goes along with registration. With respect to computation time, the energy similarity measure is the function of choice for the registration of mask and contrast image prior to subtraction. The robustness of the energy measure is studied for geometrical image distortions like rotation and scaling. Additionally, it is investigated how the histogram binning and inhomogeneous motion inside the templates influence the quality of the similarity measure. Finally, the registration success for the automated procedure is compared with the manually shift-corrected image pair of the head.
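
    A small sketch of an energy-type histogram measure in the spirit described above (the paper's general weighting-function framework is not reproduced): the sum of squared joint-histogram probabilities grows as the joint histogram becomes more clustered, i.e. as mask and contrast image come into alignment.

      import numpy as np

      def energy_similarity(img_a, img_b, bins=64):
          # Joint grey-value histogram of the two (equally sized) images.
          h, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
          p = h / h.sum()
          # "Energy" of the joint histogram: maximal when the histogram is
          # concentrated in a few bins, i.e. when the images are well registered.
          return float(np.sum(p ** 2))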

  16. SAR image registration based on Susan algorithm

    NASA Astrophysics Data System (ADS)

    Wang, Chun-bo; Fu, Shao-hua; Wei, Zhong-yi

    2011-10-01

    Synthetic Aperture Radar (SAR) is an active remote sensing system that can be installed on aircraft, satellites and other platforms, and offers day-and-night, all-weather imaging capability. How to process SAR data and extract information reasonably and efficiently is an important problem, and the geometric correction of SAR images in particular is a bottleneck impeding the application of SAR. This paper first introduces image registration and the Susan algorithm, then describes the process of SAR image registration based on the Susan algorithm, and finally presents experimental results of SAR image registration. The experiments show that this method is effective and applicable in terms of both computation time and accuracy.
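
    For orientation, a bare-bones SUSAN corner response is sketched below; parameters and names are illustrative, and the matching of the detected points between SAR images, which completes the registration, is not reproduced. Each pixel's USAN area (the count of similarly bright pixels inside a circular mask around the nucleus) is compared against a geometric threshold, and small areas mark corners.

      import numpy as np

      def susan_corner_response(img, radius=3, t=27.0, frac=0.5):
          # Offsets of the circular mask around the central pixel (the nucleus).
          ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
          sel = (xs ** 2 + ys ** 2) <= radius ** 2
          offsets = np.column_stack([ys[sel], xs[sel]])
          g = frac * len(offsets)                  # geometric threshold
          resp = np.zeros(img.shape, dtype=float)
          H, W = img.shape
          for y in range(radius, H - radius):
              for x in range(radius, W - radius):
                  nucleus = float(img[y, x])
                  area = 0.0
                  for dy, dx in offsets:
                      area += np.exp(-(((float(img[y + dy, x + dx]) - nucleus) / t) ** 6))
                  if area < g:
                      resp[y, x] = g - area        # larger value = stronger corner
          return resp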

  17. [3D display of sequential 2D medical images].

    PubMed

    Lu, Yisong; Chen, Yazhu

    2003-12-01

    This paper gives a detailed review of current 3D display methods for sequential 2D medical images and of new developments in 3D medical image display. True 3D display, surface rendering, volume rendering, 3D texture mapping and distributed collaborative rendering are discussed in depth. Different 3D display methods are presented for two kinds of medical applications: real-time navigation systems and high-fidelity diagnosis in computer-aided surgery.

  18. The image registration of multi-band images by geometrical optics

    NASA Astrophysics Data System (ADS)

    Yan, Yung-Jhe; Chiang, Hou-Chi; Tsai, Yu-Hsiang; Huang, Ting-Wei; Mang, Ou-Yang

    2015-09-01

    Image fusion combines two or more images into one. The fusion of multi-band spectral images has many applications, such as thermal systems, remote sensing and medical treatment. The images are taken with different imaging sensors; when the sensors acquire images simultaneously through different optical paths, they are located at different positions, which makes image registration more difficult because the images differ in field of view (FOV), resolution and viewing angle. It is therefore important to establish the relationship between a viewpoint in one image and the corresponding viewpoint in the other. In this paper, we focus on the problem of image registration for two non-pinhole sensors. The affine transformation between the 2-D image and the 3-D real world is derived from the geometrical optics of the sensors; in other words, the geometric affine transformation between the two images is derived from the intrinsic and extrinsic parameters of the two sensors. Using this affine transformation, the overlap of the FOVs of the two images can be calculated and the two images resampled to the same resolution. Finally, we construct the image registration model from this mapping function. It merges images from different imaging sensors that absorb different wavebands of the electromagnetic spectrum at different positions at the same time.
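
    The sketch below illustrates the general idea with the standard plane-induced homography between two pinhole cameras; it is only a simplification of the paper's derivation, which explicitly treats non-pinhole sensors, and all symbols (intrinsics K_a and K_b, relative pose R and t, plane normal n at distance d) are assumptions made for the example.

      import numpy as np

      def plane_induced_homography(K_a, K_b, R, t, n, d):
          # Pixel mapping from sensor A to sensor B for scene points on the plane
          # n.X = d expressed in A's camera frame; for distant, nearly
          # fronto-parallel scenes this reduces to an affine transformation.
          return K_b @ (R - np.outer(t, n) / d) @ np.linalg.inv(K_a)

      def map_pixels(H, xs, ys):
          # Apply the homography to pixel coordinates in homogeneous form.
          pts = np.stack([xs, ys, np.ones_like(xs, dtype=float)])
          q = H @ pts
          return q[0] / q[2], q[1] / q[2]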

  19. Building 3D scenes from 2D image sequences

    NASA Astrophysics Data System (ADS)

    Cristea, Paul D.

    2006-05-01

    Sequences of 2D images, taken by a single moving video receptor, can be fused to generate a 3D representation. This dynamic stereopsis exists in birds and reptiles, whereas the static binocular stereopsis is common in mammals, including humans. Most multimedia computer vision systems for stereo image capture, transmission, processing, storage and retrieval are based on the concept of binocularity. As a consequence, their main goal is to acquire, conserve and enhance pairs of 2D images able to generate a 3D visual perception in a human observer. Stereo vision in birds is based on the fusion of images captured by each eye, with previously acquired and memorized images from the same eye. The process goes on simultaneously and conjointly for both eyes and generates an almost complete all-around visual field. As a consequence, the baseline distance is no longer fixed, as in the case of binocular 3D view, but adjustable in accordance with the distance to the object of main interest, allowing a controllable depth effect. Moreover, the synthesized 3D scene can have a better resolution than each individual 2D image in the sequence. Compression of 3D scenes can be achieved, and stereo transmissions with lower bandwidth requirements can be developed.

  20. An accurate registration technique for distorted images

    NASA Technical Reports Server (NTRS)

    Delapena, Michele; Shaw, Richard A.; Linde, Peter; Dravins, Dainis

    1990-01-01

    Accurate registration of International Ultraviolet Explorer (IUE) images is crucial because the variability of the geometrical distortions that are introduced by the SEC-Vidicon cameras ensures that raw science images are never perfectly aligned with the Intensity Transfer Functions (ITFs) (i.e., graded floodlamp exposures that are used to linearize and normalize the camera response). A technique for precisely registering IUE images which uses a cross correlation of the fixed pattern that exists in all raw IUE images is described.

  1. Targeted fluorescence imaging enhanced by 2D materials: a comparison between 2D MoS2 and graphene oxide.

    PubMed

    Xie, Donghao; Ji, Ding-Kun; Zhang, Yue; Cao, Jun; Zheng, Hu; Liu, Lin; Zang, Yi; Li, Jia; Chen, Guo-Rong; James, Tony D; He, Xiao-Peng

    2016-08-04

    Here we demonstrate that 2D MoS2 can enhance the receptor-targeting and imaging ability of a fluorophore-labelled ligand. The 2D MoS2 has an enhanced working concentration range when compared with graphene oxide, resulting in the improved imaging of both cell and tissue samples.

  2. Segmentation by surface-to-image registration

    NASA Astrophysics Data System (ADS)

    Xie, Zhiyong; Tamez-Pena, Jose; Gieseg, Michael; Liachenko, Serguei; Dhamija, Shantanu; Chiao, Ping

    2006-03-01

    This paper presents a new image segmentation algorithm using surface-to-image registration. The algorithm employs multi-level transformations and multi-resolution image representations to progressively register atlas surfaces (modeling anatomical structures) to subject images based on weighted external forces in which weights and forces are determined by gradients and local intensity profiles obtained from images. The algorithm is designed to prevent atlas surfaces converging to unintended strong edges or leaking out of structures of interest through weak edges where the image contrast is low. Segmentation of bone structures on MR images of rat knees analyzed in this manner performs comparably to technical experts using a semi-automatic tool.

  3. Software for Automated Image-to-Image Co-registration

    NASA Technical Reports Server (NTRS)

    Benkelman, Cody A.; Hughes, Heidi

    2007-01-01

    The project objectives are: a) Develop software to fine-tune image-to-image co-registration, presuming images are orthorectified prior to input; b) Create a reusable software development kit (SDK) to enable incorporation of these tools into other software; d) Provide automated testing for quantitative analysis; and e) Develop software that applies multiple techniques to achieve subpixel precision in the co-registration of image pairs.

  4. Groupwise Image Registration Guided by a Dynamic Digraph of Images.

    PubMed

    Tang, Zhenyu; Fan, Yong

    2016-04-01

    For groupwise image registration, graph theoretic methods have been adopted for discovering the manifold of images to be registered so that accurate registration of images to a group center image can be achieved by aligning similar images that are linked by the shortest graph paths. However, the image similarity measures adopted to build a graph of images in the extant methods are essentially pairwise measures, not effective for capturing the groupwise similarity among multiple images. To overcome this problem, we present a groupwise image similarity measure that is built on sparse coding for characterizing image similarity among all input images and build a directed graph (digraph) of images so that similar images are connected by the shortest paths of the digraph. Following the shortest paths determined according to the digraph, images are registered to a group center image in an iterative manner by decomposing a large anatomical deformation field required to register an image to the group center image into a series of small ones between similar images. During the iterative image registration, the digraph of images evolves dynamically at each iteration step to pursue an accurate estimation of the image manifold. Moreover, an adaptive dictionary strategy is adopted in the groupwise image similarity measure to ensure fast convergence of the iterative registration procedure. The proposed method has been validated based on both simulated and real brain images, and experiment results have demonstrated that our method was more effective for learning the manifold of input images and achieved higher registration accuracy than state-of-the-art groupwise image registration methods.
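
    A minimal sketch of the graph part of such a scheme, assuming a precomputed groupwise similarity matrix; the sparse-coding similarity and the deformable registrations themselves are not reproduced, and networkx plus all names are illustrative choices. Each image is registered to the group centre by composing small deformations along the shortest path of a dissimilarity-weighted digraph.

      import networkx as nx

      def registration_paths(similarity, center):
          # similarity: n x n groupwise similarity matrix with values in [0, 1];
          # center: index of the group-centre image.
          n = similarity.shape[0]
          g = nx.DiGraph()
          for i in range(n):
              for j in range(n):
                  if i != j:
                      g.add_edge(i, j, weight=1.0 - similarity[i, j])
          # The shortest (most similar) chain of intermediate images from each
          # image to the centre; registering along it decomposes one large
          # deformation into a series of small ones.
          return {i: nx.shortest_path(g, i, center, weight="weight")
                  for i in range(n)}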

  5. Interactive 2D to 3D stereoscopic image synthesis

    NASA Astrophysics Data System (ADS)

    Feldman, Mark H.; Lipton, Lenny

    2005-03-01

    Advances in stereoscopic display technologies, graphics card devices, and digital imaging algorithms have opened up new possibilities in synthesizing stereoscopic images. The power of today's DirectX/OpenGL-optimized graphics cards, together with new and creative imaging tools found in software products such as Adobe Photoshop, provides a powerful environment for converting planar drawings and photographs into stereoscopic images. The basis for such a creative process is the focus of this paper. This article presents a novel technique, which uses advanced imaging features and custom Windows-based software that utilizes the DirectX 9 API to provide the user with an interactive stereo image synthesizer. By creating an accurate and interactive world scene with movable and flexible depth-map-altered textured surfaces, and perspective stereoscopic cameras with both visible frustums and zero-parallax planes, a user can precisely model a virtual three-dimensional representation of a real-world scene. Current versions of Adobe Photoshop provide a creative user with a rich assortment of tools needed to highlight elements of a 2D image, simulate hidden areas, and creatively shape them for a 3D scene representation. The technique described has been implemented as a Photoshop plug-in and thus allows for a seamless transition of these 2D image elements into 3D surfaces, which are subsequently rendered to create stereoscopic views.

  6. Quantifying Therapeutic and Diagnostic Efficacy in 2D Microvascular Images

    NASA Technical Reports Server (NTRS)

    Parsons-Wingerter, Patricia; Vickerman, Mary B.; Keith, Patricia A.

    2009-01-01

    VESGEN is a newly automated, user-interactive program that maps and quantifies the effects of vascular therapeutics and regulators on microvascular form and function. VESGEN analyzes two-dimensional, black and white vascular images by measuring important vessel morphology parameters. This software guides the user through each required step of the analysis process via a concise graphical user interface (GUI). Primary applications of the VESGEN code are 2D vascular images acquired as clinical diagnostic images of the human retina and as experimental studies of the effects of vascular regulators and therapeutics on vessel remodeling.

  7. Automated image registration for FDOPA PET studies

    NASA Astrophysics Data System (ADS)

    Lin, Kang-Ping; Huang, Sung-Cheng; Yu, Dan-Chu; Melega, William; Barrio, Jorge R.; Phelps, Michael E.

    1996-12-01

    In this study, various image registration methods are investigated for their suitability for registration of L-6-[18F]-fluoro-DOPA (FDOPA) PET images. Five different optimization criteria, including sum of absolute difference (SAD), mean square difference (MSD), cross-correlation coefficient (CC), standard deviation of pixel ratio (SDPR), and stochastic sign change (SSC), were implemented, and Powell's algorithm was used to optimize the criteria. The optimization criteria were calculated either unidirectionally (i.e. only evaluating the criteria for comparing the resliced image 1 with the original image 2) or bidirectionally (i.e. averaging the criteria for comparing the resliced image 1 with the original image 2 and those for comparing the resliced image 2 with the original image 1). Monkey FDOPA images taken at various known orientations were used to evaluate the accuracy of the different methods. A set of human FDOPA dynamic images was used to investigate the ability of the methods to correct for subject movement. It was found that a large improvement in performance resulted when bidirectional rather than unidirectional criteria were used. Overall, the SAD, MSD and SDPR methods were found to be comparable in performance and suitable for registering FDOPA images. The MSD method gave more adequate results for frame-to-frame image registration for correcting subject movement during a dynamic FDOPA study. The utility of the registration method is further demonstrated by registering FDOPA images in monkeys before and after amphetamine injection to reveal more clearly the changes in the spatial distribution of FDOPA due to the drug intervention.
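
    The criteria themselves are simple; a sketch follows, including the bidirectional form that the study found clearly superior (names are illustrative, and the Powell search over the rigid-body parameters is omitted).

      import numpy as np

      def sad(a, b):
          # Sum of absolute differences over the overlapping voxels.
          return float(np.abs(a - b).sum())

      def msd(a, b):
          # Mean squared difference.
          return float(np.mean((a - b) ** 2))

      def bidirectional(criterion, resliced_1, img2, resliced_2, img1):
          # Average the criterion over both directions: image 1 resliced into the
          # space of image 2, and image 2 resliced into the space of image 1.
          return 0.5 * (criterion(resliced_1, img2) + criterion(resliced_2, img1))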

  8. Volumetric elasticity imaging with a 2-D CMUT array.

    PubMed

    Fisher, Ted G; Hall, Timothy J; Panda, Satchi; Richards, Michael S; Barbone, Paul E; Jiang, Jingfeng; Resnick, Jeff; Barnes, Steve

    2010-06-01

    This article reports the use of a two-dimensional (2-D) capacitive micro-machined ultrasound transducer (CMUT) to acquire radio-frequency (RF) echo data from relatively large volumes of a simple ultrasound phantom to compare three-dimensional (3-D) elasticity imaging methods. Typical 2-D motion tracking for elasticity image formation was compared with three different methods of 3-D motion tracking, with sum-squared difference (SSD) used as the similarity measure. Differences among the algorithms were the degree to which they tracked elevational motion: not at all (2-D search), planar search, combination of multiple planes and plane independent guided search. The cross-correlation between the predeformation and motion-compensated postdeformation RF echo fields was used to quantify motion tracking accuracy. The lesion contrast-to-noise ratio was used to quantify image quality. Tracking accuracy and strain image quality generally improved with increased tracking sophistication. When used as input for a 3-D modulus reconstruction, high quality 3-D displacement estimates yielded accurate and low noise modulus reconstruction.

  9. Volumetric Elasticity Imaging with a 2D CMUT Array

    PubMed Central

    Fisher, Ted G.; Hall, Timothy J.; Panda, Satchi; Richards, Michael S.; Barbone, Paul E.; Jiang, Jingfeng; Resnick, Jeff; Barnes, Steve

    2010-01-01

    This paper reports the use of a two-dimensional (2D) capacitive micro-machined ultrasound transducer (CMUT) to acquire radio frequency (RF) echo data from relatively large volumes of a simple ultrasound phantom to compare 3D elasticity imaging methods. Typical 2D motion tracking for elasticity image formation was compared to three different methods of 3D motion tracking, with sum-squared difference (SSD) used as the similarity measure. Differences among the algorithms were the degree to which they tracked elevational motion: not at all (2D search), planar search, combination of multiple planes, and plane independent guided search. The cross correlation between the pre-deformation and motion-compensated post-deformation RF echo fields was used to quantify motion tracking accuracy. The lesion contrast-to-noise ratio was used to quantify image quality. Tracking accuracy and strain image quality generally improved with increased tracking sophistication. When used as input for a 3D modulus reconstruction, high quality 3D displacement estimates yielded accurate and low noise modulus reconstruction. PMID:20510188

  10. Registration of 2D point sets by complex translation and rotation operations.

    PubMed

    Sahin, Ismet

    2010-01-01

    Alignment of two sets of two-dimensional vectors (2D points) constitutes an important problem in medical imaging, remote sensing, and computer vision. We assume that the points in one set, called the transformed set, are constructed by translating and rotating the points in the other set, called the original set. The points in both sets are represented by complex numbers. To translate and then rotate a point, we add a complex constant and then multiply by a complex exponential, respectively. We construct a cost function that measures the least-squares difference between the given transformed set and the original set transformed with the current optimization parameters. We implement the Newton-Raphson optimization algorithm with polynomial line search to minimize this cost function. Simulation results with multiple datasets demonstrate that the proposed method aligns two sets efficiently and reliably.
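
    For matched point pairs the least-squares problem described above also has a closed form, sketched below for orientation; the paper instead minimises the same cost iteratively with Newton-Raphson and a polynomial line search.

      import numpy as np

      def align_complex(z_orig, z_trans):
          # Model: z_trans ~ exp(1j*theta) * z_orig + t, points as complex numbers.
          za = z_orig - z_orig.mean()
          zb = z_trans - z_trans.mean()
          theta = np.angle(np.sum(np.conj(za) * zb))                 # rotation
          t = z_trans.mean() - np.exp(1j * theta) * z_orig.mean()    # translation
          return theta, t

      def cost(theta, t, z_orig, z_trans):
          # Least-squares cost minimised by the registration.
          return float(np.sum(np.abs(np.exp(1j * theta) * z_orig + t - z_trans) ** 2))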

  11. Volume Calculation of Venous Thrombosis Using 2D Ultrasound Images.

    PubMed

    Dhibi, M; Puentes, J; Bressollette, L; Guias, B; Solaiman, B

    2005-01-01

    Venous thrombosis screening exams use 2D ultrasound images, from which medical experts obtain a rough idea of the appearance of the thrombosis and infer an approximate volume. Such estimation is essential for following the evolution of the thrombosis. This paper proposes a method to calculate venous thrombosis volume from non-parallel 2D ultrasound images, taking advantage of a priori knowledge about the thrombosis shape. An interactive ellipse-fitting contour segmentation extracts the 2D thrombosis contours. A Delaunay triangulation is then applied to the set of 2D segmented contours positioned in 3D, and to the area that each contour defines, to obtain a global 3D surface reconstruction of the thrombosis with a dense triangulation inside the contours. Volume is calculated from the obtained surface and contour triangulation using a maximum unit normal component approach. Preliminary results obtained on 3 plastic phantoms and 3 in vitro venous thromboses, as well as one in vivo case, are presented and discussed. Volume estimation errors below 4.5% for the plastic phantoms and 3.5% for the in vitro venous thromboses were obtained.
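
    Once a closed, consistently oriented triangle surface is available, its volume follows from the divergence theorem; the sketch below uses the standard signed-tetrahedron form rather than the paper's maximum-unit-normal-component variant, so it is only indicative.

      import numpy as np

      def mesh_volume(vertices, faces):
          # vertices: (n, 3) array of points; faces: (m, 3) vertex-index triples.
          tri = vertices[faces]                                     # (m, 3, 3)
          signed = np.einsum("ij,ij->i", tri[:, 0],
                             np.cross(tri[:, 1], tri[:, 2])) / 6.0  # per-face tetrahedra
          return float(abs(signed.sum()))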

  12. Validation for 2D/3D registration II: The comparison of intensity- and gradient-based merit functions using a new gold standard data set

    SciTech Connect

    Gendrin, Christelle; Markelj, Primoz; Pawiro, Supriyanto Ardjo; Spoerk, Jakob; Bloch, Christoph; Weber, Christoph; Figl, Michael; Bergmann, Helmar; Birkfellner, Wolfgang; Likar, Bostjan; Pernus, Franjo

    2011-03-15

    Purpose: A new gold standard data set for validation of 2D/3D registration based on a porcine cadaver head with attached fiducial markers was presented in the first part of this article. The advantage of this new phantom is the large amount of soft tissue, which simulates realistic conditions for registration. This article tests the performance of intensity- and gradient-based algorithms for 2D/3D registration using the new phantom data set. Methods: Intensity-based methods with four merit functions, namely, cross correlation, rank correlation, correlation ratio, and mutual information (MI), and two gradient-based algorithms, the backprojection gradient-based (BGB) registration method and the reconstruction gradient-based (RGB) registration method, were compared. Four volumes consisting of CBCT with two fields of view, 64 slice multidetector CT, and magnetic resonance-T1 weighted images were registered to a pair of kV x-ray images and a pair of MV images. A standardized evaluation methodology was employed. Targets were evenly spread over the volumes and 250 starting positions of the 3D volumes with initial displacements of up to 25 mm from the gold standard position were calculated. After the registration, the displacement from the gold standard was retrieved and the root mean square (RMS), mean, and standard deviation mean target registration errors (mTREs) over 250 registrations were derived. Additionally, the following merit properties were computed: Accuracy, capture range, number of minima, risk of nonconvergence, and distinctiveness of optimum for better comparison of the robustness of each merit. Results: Among the merit functions used for the intensity-based method, MI reached the best accuracy with an RMS mTRE down to 1.30 mm. Furthermore, it was the only merit function that could accurately register the CT to the kV x rays with the presence of tissue deformation. As for the gradient-based methods, BGB and RGB methods achieved subvoxel accuracy (RMS m

  13. A survey of medical image registration - under review.

    PubMed

    Viergever, Max A; Maintz, J B Antoine; Klein, Stefan; Murphy, Keelin; Staring, Marius; Pluim, Josien P W

    2016-10-01

    A retrospective view on the past two decades of the field of medical image registration is presented, guided by the article "A survey of medical image registration" (Maintz and Viergever, 1998). It shows that the classification of the field introduced in that article is still usable, although some modifications to do justice to advances in the field would be due. The main changes over the last twenty years are the shift from extrinsic to intrinsic registration, the primacy of intensity-based registration, the breakthrough of nonlinear registration, the progress of inter-subject registration, and the availability of generic image registration software packages. Two problems that were called urgent already 20 years ago, are even more urgent nowadays: Validation of registration methods, and translation of results of image registration research to clinical practice. It may be concluded that the field of medical image registration has evolved, but still is in need of further development in various aspects.

  14. A 2D histogram representation of images for pooling

    NASA Astrophysics Data System (ADS)

    Yu, Xinnan; Zhang, Yu-Jin

    2011-03-01

    Designing a suitable image representation is one of the most fundamental issues of computer vision. There are three steps in the popular Bag-of-Words image representation: feature extraction, coding and pooling. In the final step, current methods degrade an M x K encoded feature matrix to a K-dimensional vector (histogram), where M is the number of features and K is the size of the codebook: information is lost dramatically here. In this paper, a novel pooling method based on a 2-D histogram representation is proposed to retain more information from the encoded image features. This pooling method can be easily incorporated into state-of-the-art computer vision frameworks. Experiments show that our approach improves on current pooling methods and can achieve satisfactory image classification and image re-ranking performance even with a small codebook and an inexpensive linear SVM.

  15. Bayesian 2D Current Reconstruction from Magnetic Images

    NASA Astrophysics Data System (ADS)

    Clement, Colin B.; Bierbaum, Matthew K.; Nowack, Katja; Sethna, James P.

    We employ a Bayesian image reconstruction scheme to recover 2D currents from magnetic flux imaged with scanning SQUIDs (Superconducting Quantum Interferometric Devices). Magnetic flux imaging is a versatile tool to locally probe currents and magnetic moments, however present reconstruction methods sacrifice resolution due to numerical instability. Using state-of-the-art blind deconvolution techniques we recover the currents, point-spread function and height of the SQUID loop by optimizing the probability of measuring an image. We obtain uncertainties on these quantities by sampling reconstructions. This generative modeling technique could be used to develop calibration protocols for scanning SQUIDs, to diagnose systematic noise in the imaging process, and can be applied to many tools beyond scanning SQUIDs.

  16. Spatially weighted mutual information image registration for image guided radiation therapy

    SciTech Connect

    Park, Samuel B.; Rhee, Frank C.; Monroe, James I.; Sohn, Jason W.

    2010-09-15

    Purpose: To develop a new metric for image registration that incorporates the (sub)pixelwise differential importance along spatial location and to demonstrate its application for image guided radiation therapy (IGRT). Methods: It is well known that rigid-body image registration with mutual information is dependent on the size and location of the image subset on which the alignment analysis is based [the designated region of interest (ROI)]. Therefore, careful review and manual adjustments of the resulting registration are frequently necessary. Although there have been some investigations of weighted mutual information (WMI), these efforts could not apply the differential importance to a particular spatial location since WMI only applies the weight to the joint histogram space. The authors developed the spatially weighted mutual information (SWMI) metric by incorporating an adaptable weight function with spatial localization into mutual information. SWMI enables the user to apply the selected transform to medically "important" areas such as tumors and critical structures, so that SWMI neither is dominated by nor neglects the neighboring structures. Since SWMI can be utilized with any weight function form, the authors present two examples of weight functions for IGRT application: a Gaussian-shaped weight function (GW) applied to a user-defined location and a structures-of-interest (SOI) based weight function. An image registration example using a synthesized 2D image is presented to illustrate the efficacy of SWMI. The convergence and feasibility of the registration method as applied to clinical imaging is illustrated by fusing a prostate treatment planning CT with a clinical cone beam CT (CBCT) image set acquired for patient alignment. Forty-one trials are run to test the speed of convergence. The authors also applied SWMI registration using two types of weight functions to two head and neck cases and a prostate case with clinically acquired CBCT/MVCT image sets. The
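
    One plausible reading of the construction is sketched below, assuming that the spatial weight enters the joint-histogram accumulation; the authors' exact formulation may differ, and the Gaussian or SOI weight w is supplied by the user.

      import numpy as np

      def spatially_weighted_mi(a, b, w, bins=64):
          # Joint histogram in which each pixel pair contributes in proportion to
          # its spatial weight w (e.g. a Gaussian centred on the tumour, or a
          # mask over structures of interest).
          h, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins, weights=w.ravel())
          p = h / h.sum()
          px = p.sum(axis=1, keepdims=True)
          py = p.sum(axis=0, keepdims=True)
          nz = p > 0
          return float(np.sum(p[nz] * np.log(p[nz] / (px @ py)[nz])))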

  17. Image registration for luminescent paint applications

    NASA Technical Reports Server (NTRS)

    Bell, James H.; Mclachlan, Blair G.

    1993-01-01

    The use of pressure sensitive luminescent paints is a viable technique for the measurement of surface pressure on wind tunnel models. This technique requires data reduction of images obtained under known as well as test conditions and spatial transformation of the images. A general transform which registers images to subpixel accuracy is presented and the general characteristics of transforms for image registration and their derivation are discussed. Image resection and its applications are described. The mapping of pressure data to the three dimensional model surface for small wind tunnel models to a spatial accuracy of 0.5 percent of the model length is demonstrated.

  18. Image registration using binary boundary maps

    NASA Technical Reports Server (NTRS)

    Andrus, J. F.; Campbell, C. W.; Jayroe, R. R.

    1978-01-01

    A registration technique that matches binary boundary maps extracted from the raw data, rather than matching the data themselves, is considerably faster than other techniques. Boundary maps, which are digital representations of regions where image amplitudes change significantly, typically represent a data compression of 60 to 70 percent. The maps allow average products to be computed with addition rather than multiplication, further reducing computation time.
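
    A toy version of the idea (not the original implementation): boundary maps are obtained by thresholding the gradient magnitude, and because the maps are binary the match score for each candidate shift is a simple count.

      import numpy as np

      def boundary_map(img, thresh):
          # 1 where the image amplitude changes significantly, 0 elsewhere.
          gy, gx = np.gradient(img.astype(float))
          return (np.hypot(gx, gy) > thresh).astype(np.uint8)

      def best_shift(map_ref, map_new, max_shift=10):
          # Exhaustive search over integer shifts; the score needs only counting.
          best, best_score = (0, 0), -1
          for dy in range(-max_shift, max_shift + 1):
              for dx in range(-max_shift, max_shift + 1):
                  shifted = np.roll(np.roll(map_new, dy, axis=0), dx, axis=1)
                  score = int(np.count_nonzero(map_ref & shifted))
                  if score > best_score:
                      best, best_score = (dy, dx), score
          return best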

  19. Microwave Imaging with Infrared 2-D Lock-in Amplifier

    NASA Astrophysics Data System (ADS)

    Chiyo, Noritaka; Arai, Mizuki; Tanaka, Yasuhiro; Nishikata, Atsuhiro; Maeno, Takashi

    We have developed a 3-D electromagnetic field measurement system using a 2-D lock-in amplifier. This system uses an amplitude-modulated electromagnetic wave source to heat a resistive screen. The very small change of temperature on a screen illuminated with the modulated electromagnetic wave is measured using an infrared thermographic camera. In this paper, we attempt to apply our system to microwave imaging. By placing conductor patches in front of the resistive screen and illuminating them with microwaves, the shape of each conductor was clearly observed as a temperature-difference image of the screen. In this way, the conductor pattern inside a non-contact IC card could be visualized. Moreover, we could observe temperature-difference images reflecting the shape of a konnyaku (a gelatinous food made from devil's-tongue starch) or a dried fishbone, both non-conducting materials resembling the human body. These results prove that our method is applicable to microwave see-through imaging.

  20. Deep Adaptive Log-Demons: Diffeomorphic Image Registration with Very Large Deformations

    PubMed Central

    Zhao, Liya; Jia, Kebin

    2015-01-01

    This paper proposes a new framework for capturing large and complex deformations in image registration. Traditionally, this challenging problem relies first on a preregistration, usually an affine matrix containing rotation, scale, and translation, and afterwards on a nonrigid transformation. In the preregistration, the directly calculated affine matrix, obtained from limited pixel information, may misregister when large biases exist, severely misleading the subsequent registration. To address this problem for two-dimensional (2D) images, the two-layer deep adaptive registration framework proposed in this paper first classifies the rotation parameter with multilayer convolutional neural networks (CNNs) and then identifies the scale and translation parameters separately. For three-dimensional (3D) images, the affine matrix is located through feature correspondences obtained by triplanar 2D CNNs. Deformation removal is then performed iteratively through preregistration and demons registration. Compared with state-of-the-art registration frameworks, our method achieves more accurate registration results on both synthetic and real datasets. In addition, principal component analysis (PCA) is combined with Pearson and Spearman correlation to form new similarity measures for 2D and 3D registration. Experimental results also show faster convergence. PMID:26120356

  1. Video Image Stabilization and Registration

    NASA Technical Reports Server (NTRS)

    Hathaway, David H. (Inventor); Meyer, Paul J. (Inventor)

    2002-01-01

    A method of stabilizing and registering a video image in multiple video fields of a video sequence provides accurate determination of the image change in magnification, rotation and translation between video fields, so that the video fields may be accurately corrected for these changes in the image in the video sequence. In a described embodiment, a key area of a key video field is selected which contains an image which it is desired to stabilize in a video sequence. The key area is subdivided into nested pixel blocks and the translation of each of the pixel blocks from the key video field to a new video field is determined as a precursor to determining change in magnification, rotation and translation of the image from the key video field to the new video field.

  2. Video Image Stabilization and Registration

    NASA Technical Reports Server (NTRS)

    Hathaway, David H. (Inventor); Meyer, Paul J. (Inventor)

    2003-01-01

    A method of stabilizing and registering a video image in multiple video fields of a video sequence provides accurate determination of the image change in magnification, rotation and translation between video fields, so that the video fields may be accurately corrected for these changes in the image in the video sequence. In a described embodiment, a key area of a key video field is selected which contains an image which it is desired to stabilize in a video sequence. The key area is subdivided into nested pixel blocks and the translation of each of the pixel blocks from the key video field to a new video field is determined as a precursor to determining change in magnification, rotation and translation of the image from the key video field to the new video field.

  3. Landsat image registration for agricultural applications

    NASA Technical Reports Server (NTRS)

    Wolfe, R. H., Jr.; Juday, R. D.; Wacker, A. G.; Kaneko, T.

    1982-01-01

    An image registration system has been developed at the NASA Johnson Space Center (JSC) to spatially align multi-temporal Landsat acquisitions for use in agriculture and forestry research. Working in conjunction with the Master Data Processor (MDP) at the Goddard Space Flight Center, it functionally replaces the long-standing LACIE Registration Processor as JSC's data supplier. The system represents an expansion of the techniques developed for the MDP and LACIE Registration Processor, and it utilizes the experience gained in an IBM/JSC effort evaluating the performance of the latter. These techniques are discussed in detail. Several tests were developed to evaluate the registration performance of the system. The results indicate that 1/15-pixel accuracy (about 4m for Landsat MSS) is achievable in ideal circumstances, sub-pixel accuracy (often to 0.2 pixel or better) was attained on a representative set of U.S. acquisitions, and a success rate commensurate with the LACIE Registration Processor was realized. The system has been employed in a production mode on U.S. and foreign data, and a performance similar to the earlier tests has been noted.

  4. Geometric assessment of image quality using digital image registration techniques

    NASA Technical Reports Server (NTRS)

    Tisdale, G. E.

    1976-01-01

    Image registration techniques were developed to perform a geometric quality assessment of multispectral and multitemporal image pairs. Based upon LANDSAT tapes, accuracies to a small fraction of a pixel were demonstrated. Because it is insensitive to the choice of registration areas, the technique is well suited to performance in an automatic system. It may be implemented at megapixel-per-second rates using a commercial minicomputer in combination with a special purpose digital preprocessor.

  5. Image Appraisal for 2D and 3D Electromagnetic Inversion

    SciTech Connect

    Alumbaugh, D.L.; Newman, G.A.

    1999-01-28

    Linearized methods are presented for appraising image resolution and parameter accuracy in images generated with two and three dimensional non-linear electromagnetic inversion schemes. When direct matrix inversion is employed, the model resolution and posterior model covariance matrices can be directly calculated. A method to examine how the horizontal and vertical resolution varies spatially within the electromagnetic property image is developed by examining the columns of the model resolution matrix. Plotting the square root of the diagonal of the model covariance matrix yields an estimate of how errors in the inversion process such as data noise and incorrect a priori assumptions about the imaged model map into parameter error. This type of image is shown to be useful in analyzing spatial variations in the image sensitivity to the data. A method is analyzed for statistically estimating the model covariance matrix when the conjugate gradient method is employed rather than a direct inversion technique (for example in 3D inversion). A method for calculating individual columns of the model resolution matrix using the conjugate gradient method is also developed. Examples of the image analysis techniques are provided on 2D and 3D synthetic cross well EM data sets, as well as a field data set collected at the Lost Hills Oil Field in Central California.
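
    For reference, linearized appraisal of this kind typically rests on expressions of the following general form (standard formulas, stated here as an assumption since the paper's exact regularisation may differ), with J the Jacobian (sensitivity) matrix, C_d the data covariance and C_m the prior model covariance:

      R = (J^T C_d^{-1} J + C_m^{-1})^{-1} J^T C_d^{-1} J,
      \qquad
      \tilde{C}_m = (J^T C_d^{-1} J + C_m^{-1})^{-1}.

    Columns of the model resolution matrix R show how each parameter is spatially averaged by the experiment, and the square roots of the diagonal of the posterior covariance \tilde{C}_m indicate how data noise and prior assumptions map into parameter error, which is what the images described above display.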

  6. 2-D Drift Velocities from the IMAGE EUV Plasmaspheric Imager

    NASA Technical Reports Server (NTRS)

    Gallagher, D.; Adrian, M.

    2007-01-01

    The IMAGE Mission extreme ultraviolet imager (EUV) observes He+ plasmaspheric ions throughout the inner magnetosphere. Limited by ionizing radiation and viewing close to the Sun, images of the He+ distribution are available every 10 minutes for many hours as the spacecraft passes through apogee in its highly elliptical orbit. As a consistent constituent at about 15%, He+ is an excellent surrogate for monitoring all of the processes that control the dynamics of plasmaspheric plasma. In particular, the motion of He+ transverse to the ambient magnetic field is a direct indication of convective electric fields. The analysis of boundary motions has already yielded new insights into the electrodynamic coupling processes taking place between energetic magnetospheric plasmas and the ionosphere. Yet to be fulfilled, however, is the original promise that global EUV images of the plasmasphere might yield two-dimensional pictures of meso-scale to macro-scale electric fields in the inner magnetosphere. This work details the technique and initial application of an IMAGE EUV analysis that appears capable of following thermal plasma motion on a global basis.

  7. NGMIX: Gaussian mixture models for 2D images

    NASA Astrophysics Data System (ADS)

    Sheldon, Erin

    2015-08-01

    NGMIX implements Gaussian mixture models for 2D images. Both the PSF profile and the galaxy are modeled using mixtures of Gaussians. Convolutions are thus performed analytically, resulting in fast model generation as compared to methods that perform the convolution in Fourier space. For the galaxy model, NGMIX supports exponential disks and de Vaucouleurs and Sérsic profiles; these are implemented approximately as a sum of Gaussians using the fits from Hogg & Lang (2013). Additionally, any number of Gaussians can be fit, either completely free or constrained to be cocentric and co-elliptical.

  8. Registration of plantar pressure images.

    PubMed

    Oliveira, Francisco P M; Tavares, João Manuel R S

    2012-01-01

    In this work, five computational methodologies to register plantar pressure images are compared: (1) the first methodology is based on matching the external contours of the feet; (2) the second uses the phase correlation technique; (3) the third addresses the direct maximization of cross-correlation using the Fourier transform; (4) the fourth minimizes the sum of squared differences using the Fourier transform; and (5) the fifth methodology iteratively optimizes an intensity (dis)similarity measure based on Powell's method. The accuracy and robustness of the five methodologies were assessed by using images from three common plantar pressure acquisition devices: a Footscan system, an EMED system, and a light reflection system. Using the residual error as a measure of accuracy, all methodologies revealed to be very accurate even in the presence of noise. The most accurate was the methodology based on the iterative optimization, when the mean squared error was minimized. It achieved a residual error inferior to 0.01 mm and 0.6 mm for non-noisy and noisy images, respectively. On the other hand, the methodology based on image contour matching was the fastest, but its accuracy was the lowest.
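
    As an illustration of methodology (2), a bare integer-pixel phase-correlation shift estimator is sketched below; the implementations compared in the paper differ in detail.

      import numpy as np

      def phase_correlation(a, b):
          # Translation between two equally sized images from the peak of the
          # inverse FFT of the normalised cross-power spectrum.
          Fa, Fb = np.fft.fft2(a), np.fft.fft2(b)
          cross = Fa * np.conj(Fb)
          corr = np.fft.ifft2(cross / (np.abs(cross) + 1e-12)).real
          dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
          # Wrap shifts larger than half the image size to negative offsets.
          if dy > a.shape[0] // 2:
              dy -= a.shape[0]
          if dx > a.shape[1] // 2:
              dx -= a.shape[1]
          return int(dy), int(dx)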

  9. Robust patella motion tracking using intensity-based 2D-3D registration on dynamic bi-plane fluoroscopy: towards quantitative assessment in MPFL reconstruction surgery

    NASA Astrophysics Data System (ADS)

    Otake, Yoshito; Esnault, Matthieu; Grupp, Robert; Kosugi, Shinichi; Sato, Yoshinobu

    2016-03-01

    The determination of in vivo motion of multiple bones using dynamic fluoroscopic images and computed tomography (CT) is useful for post-operative assessment of orthopaedic surgeries such as medial patellofemoral ligament reconstruction. We propose a robust method to measure the 3D motion of multiple rigid objects with high accuracy using a series of bi-plane fluoroscopic images and a multi-resolution, intensity-based, 2D-3D registration. A Covariance Matrix Adaptation Evolution Strategy (CMA-ES) optimizer was used with a gradient correlation similarity metric. Four approaches to register three rigid objects (femur, tibia-fibula and patella) were implemented: 1) an individual-bone approach registering one bone at a time, each with optimization of a six-degree-of-freedom (6DOF) parameter set, 2) a sequential approach registering one bone at a time but using the previous bone results as the background in DRR generation, 3) a simultaneous approach registering all the bones together (18DOF) and 4) a combination of the sequential and the simultaneous approaches. These approaches were compared in experiments using simulated images generated from the CT of a healthy volunteer and measured fluoroscopic images. Over the 120 simulated frames of motion, the simultaneous approach showed improved registration accuracy compared to the individual approach, with less than 0.68 mm root-mean-square error (RMSE) for translation and less than 1.12° RMSE for rotation. A robustness evaluation conducted with 45 trials of randomly perturbed initializations showed that the sequential approach improved robustness significantly (74% success rate) compared to the individual-bone approach (34% success) for patella registration (femur and tibia-fibula registration had a 100% success rate with each approach).

  10. Video Image Stabilization and Registration (VISAR) Software

    NASA Technical Reports Server (NTRS)

    1999-01-01

    Two scientists at NASA Marshall Space Flight Center, atmospheric scientist Paul Meyer (left) and solar physicist Dr. David Hathaway, have developed promising new software, called Video Image Stabilization and Registration (VISAR), that may help law enforcement agencies to catch criminals by improving the quality of video recorded at crime scenes. VISAR stabilizes camera motion in the horizontal and vertical as well as rotation and zoom effects; produces clearer images of moving objects; smoothes jagged edges; enhances still images; and reduces video noise or snow. VISAR could also have applications in medical and meteorological imaging. It could steady images of ultrasounds, which are infamous for their grainy, blurred quality. It would be especially useful for tornadoes, tracking whirling objects and helping to determine the tornado's wind speed. This image shows two scientists reviewing an enhanced video image of a license plate taken from a moving automobile.

  11. Joint 2D and 3D phase processing for quantitative susceptibility mapping: application to 2D echo-planar imaging.

    PubMed

    Wei, Hongjiang; Zhang, Yuyao; Gibbs, Eric; Chen, Nan-Kuei; Wang, Nian; Liu, Chunlei

    2017-04-01

    Quantitative susceptibility mapping (QSM) measures tissue magnetic susceptibility and typically relies on time-consuming three-dimensional (3D) gradient-echo (GRE) MRI. Recent studies have shown that two-dimensional (2D) multi-slice gradient-echo echo-planar imaging (GRE-EPI), which is commonly used in functional MRI (fMRI) and other dynamic imaging techniques, can also be used to produce data suitable for QSM with much shorter scan times. However, the production of high-quality QSM maps is difficult because data obtained by 2D multi-slice scans often have phase inconsistencies across adjacent slices and strong susceptibility field gradients near air-tissue interfaces. To address these challenges in 2D EPI-based QSM studies, we present a new data processing procedure that integrates 2D and 3D phase processing. First, 2D Laplacian-based phase unwrapping and 2D background phase removal are performed to reduce phase inconsistencies between slices and remove in-plane harmonic components of the background phase. This is followed by 3D background phase removal for the through-plane harmonic components. The proposed phase processing was evaluated with 2D EPI data obtained from healthy volunteers, and compared against conventional 3D phase processing using the same 2D EPI datasets. Our QSM results were also compared with QSM values from time-consuming 3D GRE data, which were taken as ground truth. The experimental results show that this new 2D EPI-based QSM technique can produce quantitative susceptibility measures that are comparable with those of 3D GRE-based QSM across different brain regions (e.g. subcortical iron-rich gray matter, cortical gray and white matter). This new 2D EPI QSM reconstruction method is implemented within STI Suite, which is a comprehensive shareware for susceptibility imaging and quantification.
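
    The 2D Laplacian-based unwrapping step referred to above is commonly written as

      \phi \;\approx\; \nabla^{-2}\!\left[\cos\phi_w \,\nabla^{2}\sin\phi_w \;-\; \sin\phi_w \,\nabla^{2}\cos\phi_w\right],

    where \phi_w is the wrapped phase of a slice and the Laplacian and its inverse are evaluated slice-by-slice with fast transforms; this is the standard relation rather than a restatement of the authors' full pipeline, which wraps 2D and 3D background-field removal around it.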

  12. 2-D fluorescence lifetime imaging using a time-gated image intensifier

    NASA Astrophysics Data System (ADS)

    Dowling, K.; Hyde, S. C. W.; Dainty, J. C.; French, P. M. W.; Hares, J. D.

    1997-02-01

    We report a 2-D fluorescence lifetime imaging system based on a time-gated image intensifier and a Cr:LiSAF regenerative amplifier. We have demonstrated 185 ps temporal resolution. The deleterious effects of optical scattering are demonstrated.

  13. In vivo kinematic study of the tarsal joints complex based on fluoroscopic 3D-2D registration technique.

    PubMed

    Chen Wang, MD; Geng, Xiang; Wang, Shaobai; Xin Ma, MD; Xu Wang, MD; Jiazhang Huang, MD; Chao Zhang, MD; Li Chen, MS; Yang, Junsheng; Wang, Kan

    2016-09-01

    The tarsal bones articulate with each other and demonstrate complicated kinematic characteristics. The in vivo motions of these tarsal joints during normal gait are still unclear. Seven healthy subjects were recruited and fourteen feet in total were tested in the current study. Three-dimensional models of the tarsal bones were first created using CT scanning. Corresponding local 3D coordinate systems of each tarsal bone were subsequently established for 6DOF motion decompositions. The fluoroscopy system captured lateral fluoroscopic images of the targeted tarsal region whilst the subject was walking. Seven key pose images during the stance phase were selected and 3D to 2D bone model registrations were performed on each image to determine joint positions. The 6DOF motions of each tarsal joint during gait were then obtained by connecting these positions together. The TNJ (talo-navicular joint) exhibited the largest ROMs (ranges of motion) in all rotational directions, with 7.39±2.75° of dorsi/plantarflexion, 21.12±4.68° of inversion/eversion, and 16.11±4.44° of internal/external rotation. From heel strike to midstance, the TNJ, STJ (subtalar joint), and CCJ (calcaneo-cuboid joint) were associated with 5.97°, 5.04°, and 3.93° of dorsiflexion; 15.46°, 8.21°, and 5.82° of eversion; and 9.75°, 7.6°, and 4.99° of external rotation, respectively. Likewise, from midstance to heel off, the TNJ, STJ, and CCJ were associated with 6.39°, 6.19°, and 4.47° of plantarflexion; 18.57°, 11.86°, and 6.32° of inversion; and 13.95°, 9.66°, and 7.58° of internal rotation, respectively. In conclusion, among the tarsal joints, the TNJ exhibited the greatest rotational mobility. Synchronous and homodromous rotational motions were detected for the TNJ, STJ, and CCJ during the stance phase.

  14. Automated landmark-guided deformable image registration.

    PubMed

    Kearney, Vasant; Chen, Susie; Gu, Xuejun; Chiu, Tsuicheng; Liu, Honghuan; Jiang, Lan; Wang, Jing; Yordy, John; Nedzi, Lucien; Mao, Weihua

    2015-01-07

    The purpose of this work is to develop an automated landmark-guided deformable image registration (LDIR) algorithm between the planning CT and daily cone-beam CT (CBCT) with low image quality. This method uses an automated landmark generation algorithm in conjunction with a local small volume gradient matching search engine to map corresponding landmarks between the CBCT and the planning CT. The landmarks act as stabilizing control points in the following Demons deformable image registration. LDIR is implemented on graphics processing units (GPUs) for parallel computation to achieve ultra fast calculation. The accuracy of the LDIR algorithm has been evaluated on a synthetic case in the presence of different noise levels and data of six head and neck cancer patients. The results indicate that LDIR performed better than rigid registration, Demons, and intensity corrected Demons for all similarity metrics used. In conclusion, LDIR achieves high accuracy in the presence of multimodality intensity mismatch and CBCT noise contamination, while simultaneously preserving high computational efficiency.

  15. Automated landmark-guided deformable image registration

    NASA Astrophysics Data System (ADS)

    Kearney, Vasant; Chen, Susie; Gu, Xuejun; Chiu, Tsuicheng; Liu, Honghuan; Jiang, Lan; Wang, Jing; Yordy, John; Nedzi, Lucien; Mao, Weihua

    2015-01-01

    The purpose of this work is to develop an automated landmark-guided deformable image registration (LDIR) algorithm between the planning CT and daily cone-beam CT (CBCT) with low image quality. This method uses an automated landmark generation algorithm in conjunction with a local small volume gradient matching search engine to map corresponding landmarks between the CBCT and the planning CT. The landmarks act as stabilizing control points in the following Demons deformable image registration. LDIR is implemented on graphics processing units (GPUs) for parallel computation to achieve ultra fast calculation. The accuracy of the LDIR algorithm has been evaluated on a synthetic case in the presence of different noise levels and data of six head and neck cancer patients. The results indicate that LDIR performed better than rigid registration, Demons, and intensity corrected Demons for all similarity metrics used. In conclusion, LDIR achieves high accuracy in the presence of multimodality intensity mismatch and CBCT noise contamination, while simultaneously preserving high computational efficiency.

  16. Surface driven biomechanical breast image registration

    NASA Astrophysics Data System (ADS)

    Eiben, Björn; Vavourakis, Vasileios; Hipwell, John H.; Kabus, Sven; Lorenz, Cristian; Buelow, Thomas; Williams, Norman R.; Keshtgar, M.; Hawkes, David J.

    2016-03-01

    Biomechanical modelling enables large deformation simulations of breast tissues under different loading conditions to be performed. Such simulations can be utilised to transform prone Magnetic Resonance (MR) images into a different patient position, such as upright or supine. We present a novel integration of biomechanical modelling with a surface registration algorithm which optimises the unknown material parameters of a biomechanical model and performs a subsequent regularised surface alignment. This allows deformations induced by effects other than gravity, such as those due to contact of the breast and MR coil, to be reversed. Correction displacements are applied to the biomechanical model enabling transformation of the original pre-surgical images to the corresponding target position. The algorithm is evaluated for the prone-to-supine case using prone MR images and the skin outline of supine Computed Tomography (CT) scans for three patients. A mean target registration error (TRE) of 10.9 mm for internal structures is achieved. For the prone-to-upright scenario, an optical 3D surface scan of one patient is used as a registration target and the nipple distances after alignment between the transformed MRI and the surface are 10.1 mm and 6.3 mm respectively.

  17. An object-oriented framework for medical image registration, fusion, and visualization.

    PubMed

    Zhu, Yang-Ming; Cochoff, Steven M

    2006-06-01

    An object-oriented framework for image registration, fusion, and visualization was developed based on the classic model-view-controller paradigm. The framework employs many design patterns to facilitate legacy code reuse, manage software complexity, and enhance the maintainability and portability of the framework. Three sample applications built atop this framework are presented to show its effectiveness: the first is for volume image grouping and re-sampling, the second is for 2D registration and fusion, and the last is for visualization of single images as well as registered volume images.

  18. Tracking of deformable target in 2D ultrasound images

    NASA Astrophysics Data System (ADS)

    Royer, Lucas; Marchal, Maud; Le Bras, Anthony; Dardenne, Guillaume; Krupa, Alexandre

    2015-03-01

    In this paper, we propose a novel approach for automatically tracking deformable target within 2D ultrasound images. Our approach uses only dense information combined with a physically-based model and has therefore the advantage of not using any fiducial marker nor a priori knowledge on the anatomical environment. The physical model is represented by a mass-spring damper system driven by different types of forces where the external forces are obtained by maximizing image similarity metric between a reference target and a deformed target across the time. This deformation is represented by a parametric warping model where the optimal parameters are estimated from the intensity variation. This warping function is well-suited to represent localized deformations in the ultrasound images because it directly links the forces applied on each mass with the motion of all the pixels in its vicinity. The internal forces constrain the deformation to physically plausible motions, and reduce the sensitivity to the speckle noise. The approach was validated on simulated and real data, both for rigid and free-form motions of soft tissues. The results are very promising since the deformable target could be tracked with a good accuracy for both types of motion. Our approach opens novel possibilities for computer-assisted interventions where deformable organs are involved and could be used as a new tool for interactive tracking of soft tissues in ultrasound images.

  19. Video Image Stabilization and Registration (VISAR) Software

    NASA Technical Reports Server (NTRS)

    1999-01-01

    Two scientists at NASA's Marshall Space Flight Center, atmospheric scientist Paul Meyer and solar physicist Dr. David Hathaway, developed promising new software, called Video Image Stabilization and Registration (VISAR), which is illustrated in this Quick Time movie. VISAR is a computer algorithm that stabilizes camera motion in the horizontal and vertical directions, as well as rotation and zoom effects, producing clearer images of moving objects; it smoothes jagged edges, enhances still images, and reduces video noise or snow. It could steady images of ultrasounds, which are infamous for their grainy, blurred quality. VISAR could also have applications in law enforcement, medical, and meteorological imaging. The software can be used for defense applications by improving reconnaissance video imagery made by military vehicles, aircraft, and ships traveling in harsh, rugged environments.

  20. Video Image Stabilization and Registration (VISAR) Software

    NASA Technical Reports Server (NTRS)

    1999-01-01

    Two scientists at NASA's Marshall Space Flight Center, atmospheric scientist Paul Meyer and solar physicist Dr. David Hathaway, developed promising new software, called Video Image Stabilization and Registration (VISAR), which is illustrated in this Quick Time movie. VISAR is a computer algorithm that stabilizes camera motion in the horizontal and vertical directions, as well as rotation and zoom effects, producing clearer images of moving objects; it smoothes jagged edges, enhances still images, and reduces video noise or snow. It could steady images of ultrasounds, which are infamous for their grainy, blurred quality. VISAR could also have applications in law enforcement, medical, and meteorological imaging. The software can be used for defense applications by improving reconnaissance video imagery made by military vehicles, aircraft, and ships traveling in harsh, rugged environments.

  1. A scanning-mode 2D shear wave imaging (s2D-SWI) system for ultrasound elastography.

    PubMed

    Qiu, Weibao; Wang, Congzhi; Li, Yongchuan; Zhou, Juan; Yang, Ge; Xiao, Yang; Feng, Ge; Jin, Qiaofeng; Mu, Peitian; Qian, Ming; Zheng, Hairong

    2015-09-01

    Ultrasound elastography is widely used for the non-invasive measurement of tissue elasticity properties. Shear wave imaging (SWI) is a quantitative method for assessing tissue stiffness. SWI has been demonstrated to be less operator dependent than quasi-static elastography, and has the ability to acquire quantitative elasticity information in contrast with acoustic radiation force impulse (ARFI) imaging. However, traditional SWI implementations cannot acquire two-dimensional (2D) quantitative images of the tissue elasticity distribution. This study proposes and evaluates a scanning-mode 2D SWI (s2D-SWI) system. The hardware and image processing algorithms are presented in detail. Programmable devices are used to support flexible control of the system and the image processing algorithms. An analytic-signal-based cross-correlation method and a Radon-transformation-based shear wave speed determination method are proposed, both of which can be implemented using parallel computation. Imaging tests on tissue-mimicking phantoms, as well as in vitro and in vivo, were conducted to demonstrate the performance of the proposed system. The s2D-SWI system represents a new choice for the quantitative mapping of tissue elasticity, and has great potential for implementation in commercial ultrasound scanners.
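
    The paper's shear wave speed estimator is based on analytic-signal cross-correlation and a Radon transformation; as a simplified illustration of the underlying idea only, the sketch below estimates shear wave speed from the time delay between axial-displacement profiles at two lateral positions. All names and parameter values are hypothetical and this is not the authors' implementation.

```python
import numpy as np

def shear_wave_speed(disp_a, disp_b, dx, fs):
    """Estimate shear wave speed from axial-displacement time profiles at two
    lateral positions separated by dx (metres), sampled at fs (Hz).
    The delay between the profiles is taken from the peak of their
    cross-correlation; speed = dx / delay."""
    disp_a = disp_a - disp_a.mean()
    disp_b = disp_b - disp_b.mean()
    xcorr = np.correlate(disp_b, disp_a, mode="full")
    lag = np.argmax(xcorr) - (len(disp_a) - 1)   # samples by which b lags a
    delay = lag / fs
    return dx / delay if delay != 0 else np.inf

# Hypothetical usage with a synthetic pulse travelling at 2 m/s.
fs, dx, speed = 10_000.0, 5e-3, 2.0
t = np.arange(0, 0.02, 1 / fs)
pulse = lambda tt: np.exp(-((tt - 0.005) ** 2) / (2 * 0.0005 ** 2))
a, b = pulse(t), pulse(t - dx / speed)
print(shear_wave_speed(a, b, dx, fs))   # ~2.0
```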

  2. [The meteorological satellite spectral image registration based on Fourier-Mellin transform].

    PubMed

    Wang, Liang; Liu, Rong; Zhang, Li; Duan, Fu-Qing; Lü, Ke

    2013-03-01

    The meteorological satellite spectral image is an effective tool for research in meteorological science and environmental remote sensing. Image registration is the basis for the application of meteorological satellite spectral image data. In order to realize the registration of the satellite image and the template image, a new registration method based on the Fourier-Mellin transform is presented in this paper. Firstly, we use global coastline vector map data to build a landmark template, which serves as a reference for the meteorological satellite spectral image registration. Secondly, we select cloud-free infrared sub-images according to the cloud channel data, and extract the edges of the infrared image with the Sobel operator. Finally, the affine transform model parameters between the landmark template and the satellite image are determined by the Fourier-Mellin transform, and thus the registration is realized. The proposed method is based on curve matching in essence. It needs no feature point extraction, and can greatly simplify the registration process. The experimental results using the infrared spectral data of the FY-2D meteorological satellite show that the method is robust and can reach a high speed and high accuracy.
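
    A Fourier-Mellin registration pipeline typically recovers rotation and scale in log-polar space and translation by phase correlation. The sketch below shows only the phase-correlation (translation) step as a minimal illustration; it is not the paper's implementation, and the function name and test values are hypothetical.

```python
import numpy as np

def phase_correlation(img_ref, img_mov):
    """Estimate the integer-pixel translation between two same-sized images by
    phase correlation (the translation step of a Fourier-Mellin pipeline).
    Returns (dy, dx) such that shifting img_mov by (dy, dx) aligns it to img_ref."""
    F_ref = np.fft.fft2(img_ref)
    F_mov = np.fft.fft2(img_mov)
    cross_power = F_ref * np.conj(F_mov)
    cross_power /= np.abs(cross_power) + 1e-12
    corr = np.fft.ifft2(cross_power).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap shifts larger than half the image size to negative values.
    if dy > img_ref.shape[0] // 2:
        dy -= img_ref.shape[0]
    if dx > img_ref.shape[1] // 2:
        dx -= img_ref.shape[1]
    return dy, dx

# Hypothetical usage: recover a known circular shift.
rng = np.random.default_rng(0)
ref = rng.random((128, 128))
mov = np.roll(ref, shift=(-7, 12), axis=(0, 1))
print(phase_correlation(ref, mov))   # (7, -12): the shift that re-aligns mov with ref
```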

  3. Automated 3D-2D registration of X-ray microcomputed tomography with histological sections for dental implants in bone using chamfer matching and simulated annealing.

    PubMed

    Becker, Kathrin; Stauber, Martin; Schwarz, Frank; Beißbarth, Tim

    2015-09-01

    We propose a novel 3D-2D registration approach for micro-computed tomography (μCT) and histology (HI), constructed for dental implant biopsies, that finds the position and normal vector of the oblique slice from μCT that corresponds to HI. During image pre-processing, the implants and the bone tissue are segmented using a combination of thresholding, morphological filters and component labeling. After this, chamfer matching is employed to register the implant edges, and fine registration of the bone tissues is achieved using simulated annealing. The method was tested on n=10 biopsies, obtained at 20 weeks after non-submerged healing in the canine mandible. The specimens were scanned with μCT 100 and processed for hard tissue sectioning. After registration, we assessed the agreement of bone-to-implant contact (BIC) using automated and manual measurements. Statistical analysis was conducted to test the agreement of the BIC measurements in the registered samples. Registration was successful for all specimens and agreement of the respective binary images was high (median: 0.90, 1st-3rd quartile: 0.89-0.91). Direct comparison of BIC showed that both automated (median 0.82, 1st-3rd quartile: 0.75-0.85) and manual (median 0.61, 1st-3rd quartile: 0.52-0.67) measurements from μCT were significantly positively correlated with HI (median 0.65, 1st-3rd quartile: 0.59-0.72) (manual: R²=0.87, automated: R²=0.75, p<0.001). The method yields promising results and μCT may become a valid alternative for assessing osseointegration in three dimensions.
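
    As a minimal illustration of chamfer matching (the implant-edge registration step mentioned above), the sketch below scores how well a moving set of edge pixels fits a fixed set using a Euclidean distance transform. The scoring function and test shapes are hypothetical; the paper's full pipeline additionally optimizes the bone registration with simulated annealing.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def chamfer_score(edges_fixed, edges_moving):
    """Mean distance from each edge pixel of the moving image to the nearest
    edge pixel of the fixed image (lower is better).  Both inputs are boolean
    2D arrays of the same shape."""
    # Distance of every pixel to the nearest fixed-image edge pixel.
    dist_to_fixed = distance_transform_edt(~edges_fixed)
    return dist_to_fixed[edges_moving].mean()

# Hypothetical usage: a square outline versus a slightly shifted copy.
fixed = np.zeros((64, 64), dtype=bool)
fixed[20:40, 20] = fixed[20:40, 39] = True
fixed[20, 20:40] = fixed[39, 20:40] = True
moving = np.roll(fixed, shift=(2, 1), axis=(0, 1))
print(chamfer_score(fixed, moving))   # small positive value
```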

  4. Direct Image-To Registration Using Mobile Sensor Data

    NASA Astrophysics Data System (ADS)

    Kehl, C.; Buckley, S. J.; Gawthorpe, R. L.; Viola, I.; Howell, J. A.

    2016-06-01

    Adding supplementary texture and 2D image-based annotations to 3D surface models is a useful next step for domain specialists to make use of photorealistic products of laser scanning and photogrammetry. This requires a registration between the new camera imagery and the model geometry to be solved, which can be a time-consuming task without appropriate automation. The increasing availability of photorealistic models, coupled with the proliferation of mobile devices, gives users the possibility to complement their models in real time. Modern mobile devices deliver digital photographs of increasing quality, as well as on-board sensor data, which can be used as input for practical and automatic camera registration procedures. Their familiar user interface also improves manual registration procedures. This paper introduces a fully automatic pose estimation method using the on-board sensor data for initial exterior orientation, and feature matching between an acquired photograph and a synthesised rendering of the orientated 3D scene as input for fine alignment. The paper also introduces a user-friendly manual camera registration and pose estimation interface for mobile devices, based on existing surface geometry and numerical optimisation methods. The article further assesses the automatic algorithm's accuracy compared to traditional methods, and the impact of computational and environmental parameters. Experiments using urban and geological case studies show a significant sensitivity of the automatic procedure to the quality of the initial mobile sensor values. Changing natural lighting conditions remain a challenge for automatic pose estimation techniques, although progress is presented here. Finally, the automatically-registered mobile images are used as the basis for adding user annotations to the input textured model.

  5. Respiratory motion compensation for simultaneous PET/MR based on a 3D-2D registration of strongly undersampled radial MR data: a simulation study

    NASA Astrophysics Data System (ADS)

    Rank, Christopher M.; Heußer, Thorsten; Flach, Barbara; Brehm, Marcus; Kachelrieß, Marc

    2015-03-01

    We propose a new method for PET/MR respiratory motion compensation, which is based on a 3D-2D registration of strongly undersampled MR data and a) runs in parallel with the PET acquisition, b) can be interlaced with clinical MR sequences, and c) requires less than one minute of the total MR acquisition time per bed position. In our simulation study, we applied a 3D encoded radial stack-of-stars sampling scheme with 160 radial spokes per slice and an acquisition time of 38 s. Gated 4D MR images were reconstructed using a 4D iterative reconstruction algorithm. Based on these images, motion vector fields were estimated using our newly-developed 3D-2D registration framework. A 4D PET volume of a patient with eight hot lesions in the lungs and upper abdomen was simulated and MoCo 4D PET images were reconstructed based on the motion vector fields derived from MR. For evaluation, average SUVmean values of the artificial lesions were determined for a 3D, a gated 4D, a MoCo 4D and a reference (with ten-fold measurement time) gated 4D reconstruction. Compared to the reference, 3D reconstructions yielded an underestimation of SUVmean values due to motion blurring. In contrast, gated 4D reconstructions showed the highest variation of SUVmean due to low statistics. MoCo 4D reconstructions were only slightly affected by these two sources of uncertainty resulting in a significant visual and quantitative improvement in terms of SUVmean values. Whereas temporal resolution was comparable to the gated 4D images, signal-to-noise ratio and contrast-to-noise ratio were close to the 3D reconstructions.

  6. Group-wise feature-based registration of CT and ultrasound images of spine

    NASA Astrophysics Data System (ADS)

    Rasoulian, Abtin; Mousavi, Parvin; Hedjazi Moghari, Mehdi; Foroughi, Pezhman; Abolmaesumi, Purang

    2010-02-01

    Registration of pre-operative CT and freehand intra-operative ultrasound of the lumbar spine could aid surgeons in spinal needle injection, a common procedure for pain management. Patients are always in a supine position during the CT scan, and in the prone or sitting position during the intervention. This leads to a difference in the spinal curvature between the two imaging modalities, which means a single rigid registration cannot be used for all of the lumbar vertebrae. In this work, a method for group-wise registration of pre-operative CT and intra-operative freehand 2-D ultrasound images of the lumbar spine is presented. The approach utilizes a point-based registration technique based on the unscented Kalman filter, taking as input segmented vertebrae surfaces in both CT and ultrasound data. Ultrasound images are automatically segmented using a dynamic programming approach, while the CT images are semi-automatically segmented using thresholding. Since the curvature of the spine is different between the pre-operative and the intra-operative data, the registration approach is designed to simultaneously align individual groups of points segmented from each vertebra in the two imaging modalities. A biomechanical model is used to constrain the vertebrae transformation parameters during the registration and to ensure convergence. The mean target registration error achieved for individual vertebrae on five spine phantoms generated from CT data of patients is 2.47 mm, with a standard deviation of 1.14 mm.

  7. Automatic geometric rectification for patient registration in image-guided spinal surgery

    NASA Astrophysics Data System (ADS)

    Cai, Yunliang; Olson, Jonathan D.; Fan, Xiaoyao; Evans, Linton T.; Paulsen, Keith D.; Roberts, David W.; Mirza, Sohail K.; Lollis, S. Scott; Ji, Songbai

    2016-03-01

    Accurate and efficient patient registration is crucial for the success of image-guidance in open spinal surgery. Recently, we have established the feasibility of using intraoperative stereovision (iSV) to perform patient registration with respect to preoperative CT (pCT) in human subjects undergoing spinal surgery. Although a desired accuracy was achieved, the method required manual segmentation and placement of feature points on reconstructed iSV and pCT surfaces. In this study, we present an improved registration pipeline to eliminate these manual operations. Specifically, automatic geometric rectification was performed on spines extracted from pCT and iSV into pose-invariant shapes using a nonlinear principal component analysis (NLPCA). Rectified spines were obtained by projecting the reconstructed 3D surfaces into an anatomically determined orientation. Two-dimensional projection images were then created with image intensity values encoding feature "height" in the dorsal-ventral direction. Registration between the 2D depth maps yielded an initial point-wise correspondence between the 3D surfaces. A refined registration was achieved using an iterative closest point (ICP) algorithm. The technique was successfully applied to two explanted and one live porcine spines. The computational cost of the registration pipeline was less than 1 min, with an average target registration error (TRE) less than 2.2 mm in the laminae area. These results suggest the potential for the pose-invariant, rectification-based registration technique for clinical application in human subjects in the future.
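
    The refinement step above uses an iterative closest point (ICP) algorithm. Below is a generic, minimal point-to-point ICP sketch (nearest-neighbour matching with a KD-tree plus an SVD-based rigid fit), assuming SciPy is available; it is not the authors' implementation and the function names are hypothetical. In the context above, src and dst would correspond to the reconstructed iSV and pCT surface point sets after the depth-map-based initialisation.

```python
import numpy as np
from scipy.spatial import cKDTree

def best_rigid_transform(src, dst):
    """Least-squares rigid transform (R, t) mapping src points onto dst points
    (both N x 3), via the SVD / orthogonal Procrustes solution."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # avoid reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = c_dst - R @ c_src
    return R, t

def icp(src, dst, n_iter=30):
    """Minimal point-to-point ICP: iteratively match each source point to its
    nearest target point and re-estimate the rigid transform."""
    tree = cKDTree(dst)
    cur = src.copy()
    for _ in range(n_iter):
        _, idx = tree.query(cur)
        R, t = best_rigid_transform(cur, dst[idx])
        cur = cur @ R.T + t
    # Overall transform mapping the original src points to their final position.
    return best_rigid_transform(src, cur)
```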

  8. 2D and 3D visualization methods of endoscopic panoramic bladder images

    NASA Astrophysics Data System (ADS)

    Behrens, Alexander; Heisterklaus, Iris; Müller, Yannick; Stehle, Thomas; Gross, Sebastian; Aach, Til

    2011-03-01

    While several mosaicking algorithms have been developed to compose endoscopic images of the internal urinary bladder wall into panoramic images, the quantitative evaluation of these output images in terms of geometrical distortions has often not been discussed. However, the visualization of the distortion level is highly desired for an objective image-based medical diagnosis. Thus, we present in this paper a method to create quality maps from the characteristics of the transformation parameters that were applied to the endoscopic images during the registration process of the mosaicking algorithm. For a global first-view impression, the quality maps are laid over the panoramic image and highlight image regions in pseudo-colors according to their local distortions. This illustration then helps surgeons to identify geometrically distorted structures easily in the panoramic image, which allows more objective medical interpretation of tumor tissue in shape and size. Aside from introducing quality maps in 2-D, we also discuss a visualization method to map panoramic images onto a 3-D spherical bladder model. Reference points are manually selected by the surgeon in the panoramic image and the 3-D model. Then the panoramic image is mapped by the Hammer-Aitoff equal-area projection onto the 3-D surface using texture mapping. Finally the textured bladder model can be freely moved in a virtual environment for inspection. Using a two-hemisphere bladder representation, references between panoramic image regions and their corresponding space coordinates within the bladder model are reconstructed. This additional spatial 3-D information thus assists the surgeon in navigation, documentation, as well as surgical planning.
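
    The texture mapping step uses the Hammer-Aitoff equal-area projection. For reference, a minimal sketch of the standard forward Hammer projection (latitude/longitude to plane coordinates) is given below; variable names and the sample grid are illustrative only.

```python
import numpy as np

def hammer_projection(lat, lon):
    """Hammer-Aitoff equal-area forward projection.
    lat, lon in radians (lat in [-pi/2, pi/2], lon in [-pi, pi]); returns (x, y)."""
    denom = np.sqrt(1.0 + np.cos(lat) * np.cos(lon / 2.0))
    x = 2.0 * np.sqrt(2.0) * np.cos(lat) * np.sin(lon / 2.0) / denom
    y = np.sqrt(2.0) * np.sin(lat) / denom
    return x, y

# Hypothetical usage: project a grid of sphere coordinates for texture mapping.
lat = np.linspace(-np.pi / 2, np.pi / 2, 5)
lon = np.linspace(-np.pi, np.pi, 9)
LON, LAT = np.meshgrid(lon, lat)
X, Y = hammer_projection(LAT, LON)
```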

  9. Intensity-based 3D/2D registration for percutaneous intervention of major aorto-pulmonary collateral arteries

    NASA Astrophysics Data System (ADS)

    Couet, Julien; Rivest-Henault, David; Miro, Joaquim; Lapierre, Chantal; Duong, Luc; Cheriet, Mohamed

    2012-02-01

    Percutaneous cardiac interventions rely mainly on the experience of the cardiologist to safely navigate inside soft tissue vessels under X-ray angiography guidance. Additional navigation guidance tools might contribute to improving the reliability and safety of percutaneous procedures. This study focuses on major aorto-pulmonary collateral arteries (MAPCAs), which are pediatric structures. We present a fully automatic intensity-based 3D/2D registration method that accurately maps pre-operatively acquired 3D tomographic vascular data of a newborn patient over intra-operatively acquired angiograms. The 3D pose of the tomographic dataset is evaluated by comparing the angiograms with simulated X-ray projections, computed from the pre-operative dataset with a proposed splatting-based projection technique. The rigid 3D pose is updated via a transformation matrix usually defined with respect to the C-arm acquisition system reference frame, but it can also be defined with respect to the projection plane's local reference frame. The optimization of the transformation is driven by two algorithms: first the hill-climbing local search, and second a proposed variant, the dense hill climbing. The latter makes the search space denser by considering combinations of the registration parameters instead of neighboring solutions only. Although this study focused on the registration of pediatric structures, the same procedure could be applied to any cardiovascular structures involving CT scans and X-ray angiography. Our preliminary results are promising: an accurate (3D TRE 0.265 ± 0.647 mm) and robust (99% success rate) biplane registration of the aorta and MAPCAs can be obtained from an initial displacement of up to 20 mm and 20° within a reasonable amount of time (13.7 seconds).
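
    The pose optimization above is driven by a hill-climbing local search. The sketch below is a generic coordinate-wise hill-climbing loop over a parameter vector with a user-supplied cost function (in the paper this would be the dissimilarity between angiograms and simulated X-ray projections); it is a minimal illustration, not the authors' implementation, and the step-size schedule is hypothetical.

```python
import numpy as np

def hill_climbing(cost, x0, step=1.0, shrink=0.5, min_step=1e-3):
    """Minimal hill-climbing local search: try +/- step along each parameter,
    keep the best improving move, and shrink the step when no move improves.
    `cost` maps a parameter vector (e.g. 3 rotations + 3 translations of the
    3D dataset pose) to a scalar dissimilarity."""
    x = np.asarray(x0, dtype=float)
    fx = cost(x)
    while step > min_step:
        best_x, best_f = x, fx
        for i in range(len(x)):
            for delta in (+step, -step):
                cand = x.copy()
                cand[i] += delta
                f = cost(cand)
                if f < best_f:
                    best_x, best_f = cand, f
        if best_f < fx:
            x, fx = best_x, best_f      # accept the best neighbour
        else:
            step *= shrink              # no improvement: refine the step size
    return x, fx

# Hypothetical usage with a toy quadratic cost.
print(hill_climbing(lambda p: np.sum((p - np.array([2.0, -1.0])) ** 2), x0=[0.0, 0.0]))
```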

  10. Tool for computer-assisted geo-spatial registration and truthing of images of differing dimensionality and resolution

    NASA Astrophysics Data System (ADS)

    Williams, Bradford D.; Amphay, Sengvieng A.; Stansbery, Stacey; Hulsey, Donald R.

    2000-07-01

    The Air Force Research Lab, Advanced Guidance Division, AFRL/MNG located at Eglin AFB has expanded the capabilities of its Modular Algorithm Concept Evaluation Tool (MACET) for autonomous target acquisition (ATA) analysis to include an imagery truth editor for simultaneously displaying and working with multiple images of differing dimensionality and resolution. To support multi-sensor truthing, the MACET Truth Editor performs computer-assisted geo-spatial registration between multiple 2D images, or between 2D images and 3D images. The input images of overlapping scenes may be obtained from various sensor types (visible, passive infrared, laser radar (ladar), etc.) and taken at different sensor locations and orientations. Registration of 3D to 2D and 2D to 2D imagery pixels is made to a reference 3D coordinate system using `hints' provided by an analyst. Hints may include some combinations of the following to reach an approximate solution to the registration problem: marking of common points in each image, marking of horizon lines in 2D images, entry of imagery sensor characteristics (FOV, FPA layout, etc.), and entry of relative sensor location and orientation. The MACET Truth Editor has a consistent user interface that allows registration hints to be entered and truthing operations to be performed graphically.

  11. Video Image Stabilization and Registration (VISAR) Software

    NASA Technical Reports Server (NTRS)

    1999-01-01

    Two scientists at NASA's Marshall Space Flight Center, atmospheric scientist Paul Meyer and solar physicist Dr. David Hathaway, developed promising new software, called Video Image Stabilization and Registration (VISAR). VISAR may help law enforcement agencies catch criminals by improving the quality of video recorded at crime scenes. In this photograph, the single frame at left, taken at night, was brightened in order to enhance details and reduce noise or snow. To further overcome the video defects in one frame, law enforcement officials can use VISAR software to add information from multiple frames to reveal a person. Images from less than a second of videotape were added together to create the clarified image at right. VISAR stabilizes camera motion in the horizontal and vertical directions, as well as rotation and zoom effects, producing clearer images of moving objects; it smoothes jagged edges, enhances still images, and reduces video noise or snow. VISAR could also have applications in medical and meteorological imaging. It could steady images of ultrasounds, which are infamous for their grainy, blurred quality. The software can be used for defense applications by improving reconnaissance video imagery made by military vehicles, aircraft, and ships traveling in harsh, rugged environments.

  12. Medical image registration using machine learning-based interest point detector

    NASA Astrophysics Data System (ADS)

    Sergeev, Sergey; Zhao, Yang; Linguraru, Marius George; Okada, Kazunori

    2012-02-01

    This paper presents a feature-based image registration framework which exploits a novel machine learning (ML)-based interest point detection (IPD) algorithm for feature selection and correspondence detection. We use a feed-forward neural network (NN) with back-propagation as our base ML detector. Literature on ML-based IPD is scarce and, to the best of our knowledge, no previous research has addressed a feature selection strategy for IPD purposes with a cross-validation (CV) detectability measure. Our target application is the registration of clinical abdominal CT scans with abnormal anatomies. We evaluated the correspondence detection performance of the proposed ML-based detector against two well-known IPD algorithms: SIFT and SURF. The proposed method is capable of performing affine and rigid registrations of 2D and 3D CT images, demonstrating more than two times better accuracy in correspondence detection than SIFT and SURF. The registration accuracy has been validated manually using identified landmark points. Our experimental results show an improvement of 18.92% in 3D image registration quality compared with the affine transformation image registration method from the standard ITK affine registration toolkit.

  13. Image Registration for Targeted MRI-guided Transperineal Prostate Biopsy

    PubMed Central

    Fedorov, Andriy; Tuncali, Kemal; Fennessy, Fiona M.; Tokuda, Junichi; Hata, Nobuhiko; Wells, William M.; Kikinis, Ron; Tempany, Clare M.

    2012-01-01

    Purpose To develop and evaluate image registration methodology for automated re-identification of tumor-suspicious foci from pre-procedural MR exams during MR-guided transperineal prostate core biopsy. Materials and Methods A hierarchical approach for automated registration between planning and intra-procedural T2-weighted prostate MRI was developed and evaluated on the images acquired during 10 consecutive MR-guided biopsies. Registration accuracy was quantified at image-based landmarks and by evaluating spatial overlap for the manually segmented prostate and sub-structures. Registration reliability was evaluated by simulating initial mis-registration and analyzing the convergence behavior. Registration precision was characterized at the planned biopsy targets. Results The total computation time was compatible with a clinical setting, being at most 2 minutes. Deformable registration led to a significant improvement in spatial overlap of the prostate and peripheral zone contours compared to both rigid and affine registration. Average in-slice landmark registration error was 1.3±0.5 mm. Experiments simulating initial mis-registration resulted in an estimated average capture range of 6 mm and an average in-slice registration precision of ±0.3 mm. Conclusion Our registration approach requires minimum user interaction and is compatible with the time constraints of our interventional clinical workflow. The initial evaluation shows acceptable accuracy, reliability and consistency of the method. PMID:22645031

  14. Effects of Image Contrast on Functional MRI Image Registration

    PubMed Central

    Gonzalez-Castillo, Javier; Duthie, Kristen N.; Saad, Ziad S.; Chu, Carlton; Bandettini, Peter A.; Luh, Wen-Ming

    2012-01-01

    Lack of tissue contrast and inhomogeneous bias fields from multi-channel coils have the potential to degrade the output of registration algorithms, and consequently to degrade group analysis and any attempt to accurately localize brain function. Non-invasive ways to improve tissue contrast in fMRI images include the use of low flip angles (FAs) well below the Ernst angle and longer repetition times (TR). Techniques to correct intensity inhomogeneity are also available in most mainstream fMRI data analysis packages, but are not used as part of the pre-processing pipeline in many studies. In this work, we use a combination of real data and simulations to show that simple-to-implement acquisition/pre-processing techniques can significantly improve the outcome of both functional-to-functional and anatomical-to-functional image registrations. We also emphasize the need for tissue contrast in EPI images in order to appropriately evaluate the quality of the alignment. In particular, we show that the use of low FAs (e.g., θ≤40°), when physiological noise considerations permit such an approach, significantly improves accuracy, consistency and stability of registration for data acquired at relatively short TRs (TR≤2s). Moreover, we also show that the application of bias correction techniques significantly improves alignment both for array-coil data (known to contain high intensity inhomogeneity) and for birdcage-coil data. Finally, improvements in alignment derived from the use of the first infinite-TR volumes (ITVs) as targets for registration are also demonstrated. For the purpose of quantitatively evaluating the different scenarios, two novel metrics were developed: Mean Voxel Distance (MVD) to evaluate registration consistency, and Deviation of Mean Voxel Distance (dMVD) to evaluate registration stability across successive alignment attempts. PMID:23128074

  15. Registration of Optical Data with High-Resolution SAR Data: a New Image Registration Solution

    NASA Astrophysics Data System (ADS)

    Bahr, T.; Jin, X.

    2013-04-01

    Accurate image-to-image registration is critical for many image processing workflows, including georeferencing, change detection, data fusion, image mosaicking, DEM extraction and 3D modeling. Users need a solution that generates tie points accurately and geometrically aligns the images automatically. To meet these requirements we developed the Hybrid Powered Auto-Registration Engine (HyPARE). HyPARE combines all available spatial reference information with a number of image registration approaches to improve the accuracy, performance, and automation of tie point generation and image registration. We demonstrate this approach by registering a Pléiades-1a image with a TerraSAR-X SpotLight image of Hannover, Germany. Registering images with different modalities is a known challenging problem; e.g. manual tie point collection is prone to error. The registration engine allows tie points to be generated automatically, using an optimized mutual-information-based matching method. It produces more accurate results than traditional correlation-based measures. In this example the resulting tie points are well distributed across the overlapping areas, even though the images have significant local feature differences.
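
    The matching step mentioned above is mutual-information based. As a minimal illustration, the sketch below estimates the mutual information of two image patches from their joint intensity histogram; it is not HyPARE's implementation, and the bin count and test data are hypothetical.

```python
import numpy as np

def mutual_information(img_a, img_b, bins=32):
    """Mutual information (in nats) between two same-sized image patches,
    estimated from their joint intensity histogram."""
    hist_2d, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = hist_2d / hist_2d.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0                         # avoid log(0)
    return np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz]))

# Hypothetical usage: MI of a patch with itself is high, with unrelated noise it is low.
rng = np.random.default_rng(1)
patch = rng.random((64, 64))
print(mutual_information(patch, patch), mutual_information(patch, rng.random((64, 64))))
```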

  16. Visualization of Deformable Image Registration Quality using Local Image Dissimilarity.

    PubMed

    Schlachter, Matthias; Fechter, Tobias; Jurisic, Miro; Schimek-Jasch, Tanja; Oehlke, Oliver; Adebahr, Sonja; Birkfellner, Wolfgang; Nestle, Ursula; Buhler, Katja

    2016-04-29

    Deformable image registration (DIR) has the potential to improve modern radiotherapy in many aspects, including volume definition, treatment planning and image-guided adaptive radiotherapy. Studies have shown its possible clinical benefits. However, measuring DIR accuracy is difficult without known ground truth, but necessary before integration in the radiotherapy workflow. Visual assessment is an important step towards clinical acceptance. We propose a visualization framework which supports the exploration and the assessment of DIR accuracy. It offers different interaction and visualization features for exploration of candidate regions to simplify the process of visual assessment. The visualization is based on voxel-wise comparison of local image patches for which dissimilarity measures are computed and visualized to indicate locally the registration results. We performed an evaluation with three radiation oncologists to demonstrate the viability of our approach. In the evaluation, lung regions were rated by the participants with regards to their visual accuracy and compared to the registration error measured with expert defined landmarks. Regions rated as "accepted" had an average registration error of 1.8 mm, with the highest single landmark error being 3.3 mm. Additionally, survey results show that the proposed visualizations support a fast and intuitive investigation of DIR accuracy, and are suitable for finding even small errors.

  17. Visualization of Deformable Image Registration Quality Using Local Image Dissimilarity.

    PubMed

    Schlachter, Matthias; Fechter, Tobias; Jurisic, Miro; Schimek-Jasch, Tanja; Oehlke, Oliver; Adebahr, Sonja; Birkfellner, Wolfgang; Nestle, Ursula; Buhler, Katja

    2016-10-01

    Deformable image registration (DIR) has the potential to improve modern radiotherapy in many aspects, including volume definition, treatment planning and image-guided adaptive radiotherapy. Studies have shown its possible clinical benefits. However, measuring DIR accuracy is difficult without known ground truth, but necessary before integration in the radiotherapy workflow. Visual assessment is an important step towards clinical acceptance. We propose a visualization framework which supports the exploration and the assessment of DIR accuracy. It offers different interaction and visualization features for exploration of candidate regions to simplify the process of visual assessment. The visualization is based on voxel-wise comparison of local image patches for which dissimilarity measures are computed and visualized to indicate locally the registration results. We performed an evaluation with three radiation oncologists to demonstrate the viability of our approach. In the evaluation, lung regions were rated by the participants with regards to their visual accuracy and compared to the registration error measured with expert defined landmarks. Regions rated as "accepted" had an average registration error of 1.8 mm, with the highest single landmark error being 3.3 mm. Additionally, survey results show that the proposed visualizations support a fast and intuitive investigation of DIR accuracy, and are suitable for finding even small errors.

  18. Progressive piecewise registration of orthophotos and airborne scanner images

    NASA Astrophysics Data System (ADS)

    Chen, Lin-Chi; Yang, T. T.

    1994-08-01

    From the image-to-image registration point of view, we propose a scheme to iteratively register airborne multi-spectral imagery onto its counterpart, i.e., orthographic photography. The required registration control point pairs are automatically augmented first. Then a local registration procedure is applied according to the generated registration control point pairs. The coordinate transformation uses a thin-plate spline function. Through a consistency check, if the disparities between the reference image and the transformed airborne multi-spectral image are too large to accept, the next iteration is performed. During the second iteration, some of the best matched feature points used in the consistency check of the first iteration are appended to the existing registration control points. This iteration procedure continues until the disparities are small enough. Experimental results indicate that the output image attains an excellent geometrical similarity with respect to the reference image. The rms of the disparities is less than 0.5 pixels.
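
    The coordinate transformation above is a thin-plate spline. The sketch below fits a 2D thin-plate-spline mapping from control-point pairs and warps query coordinates, assuming SciPy 1.7+ (RBFInterpolator); the control-point values are hypothetical and this only illustrates the transformation, not the paper's full iterative registration scheme.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Control-point pairs: locations in the airborne image and the matching
# locations in the reference orthophoto (hypothetical values).
src = np.array([[10.0, 10.0], [10.0, 90.0], [90.0, 10.0], [90.0, 90.0], [50.0, 50.0]])
dst = np.array([[12.0,  9.0], [11.0, 92.0], [88.0, 11.0], [91.0, 93.0], [52.0, 49.0]])

# One 2D thin-plate-spline interpolator producing both output coordinates.
tps = RBFInterpolator(src, dst, kernel="thin_plate_spline")

# Warp an arbitrary set of pixel coordinates from the source into the reference frame.
query = np.array([[30.0, 40.0], [70.0, 20.0]])
print(tps(query))
```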

  19. Biomechanical model as a registration tool for image-guided neurosurgery: evaluation against BSpline registration

    PubMed Central

    Mostayed, Ahmed; Garlapati, Revanth Reddy; Joldes, Grand Roman; Wittek, Adam; Roy, Aditi; Kikinis, Ron; Warfield, Simon K.; Miller, Karol

    2013-01-01

    In this paper we evaluate the accuracy of warping of neuro-images using brain deformation predicted by means of a patient-specific biomechanical model against registration using a BSpline-based free-form deformation algorithm. Unlike the BSpline algorithm, biomechanics-based registration does not require an intra-operative MR image, which is very expensive and cumbersome to acquire. Only sparse intra-operative data on the brain surface is sufficient to compute deformation for the whole brain. In this contribution the deformation fields obtained from both methods are qualitatively compared and overlaps of Canny edges extracted from the images are examined. We define an edge-based Hausdorff distance metric to quantitatively evaluate the accuracy of registration for these two algorithms. The qualitative and quantitative evaluations indicate that our biomechanics-based registration algorithm, despite using much less input data, has at least as high registration accuracy as that of the BSpline algorithm. PMID:23771299
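
    The quantitative evaluation above uses an edge-based Hausdorff distance. A minimal sketch of a symmetric Hausdorff distance between two sets of edge-point coordinates, using SciPy's directed_hausdorff, is shown below; the exact metric used in the paper may differ (e.g. a percentile-based variant), so this is only illustrative.

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def symmetric_hausdorff(edges_a, edges_b):
    """Symmetric Hausdorff distance between two point sets (N x D arrays),
    e.g. the coordinates of Canny edge voxels from two registered images."""
    d_ab = directed_hausdorff(edges_a, edges_b)[0]
    d_ba = directed_hausdorff(edges_b, edges_a)[0]
    return max(d_ab, d_ba)

# Hypothetical usage with two small 2D point sets.
a = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
b = np.array([[0.1, 0.0], [1.0, 0.2], [0.0, 1.5]])
print(symmetric_hausdorff(a, b))   # 0.5
```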

  20. Constrained non-rigid registration for whole body image registration: method and validation

    NASA Astrophysics Data System (ADS)

    Li, Xia; Yankeelov, Thomas E.; Peterson, Todd E.; Gore, John C.; Dawant, Benoit M.

    2007-03-01

    3D intra- and inter-subject registration of image volumes is important for tasks that include measurements and quantification of temporal/longitudinal changes, atlas-based segmentation, deriving population averages, or voxel and tensor-based morphometry. A number of methods have been proposed to tackle this problem but few of them have focused on the problem of registering whole body image volumes acquired either from humans or small animals. These image volumes typically contain a large number of articulated structures, which makes registration more difficult than the registration of head images, to which the vast majority of registration algorithms have been applied. To solve this problem, we have previously proposed an approach, which initializes an intensity-based non-rigid registration algorithm with a point based registration technique [1, 2]. In this paper, we introduce new constraints into our non-rigid registration algorithm to prevent the bones from being deformed inaccurately. Results we have obtained show that the new constrained algorithm leads to better registration results than the previous one.

  1. Registration of clinical volumes to beams-eye-view images for real-time tracking

    SciTech Connect

    Bryant, Jonathan H.; Rottmann, Joerg; Lewis, John H.; Mishra, Pankaj; Berbeco, Ross I.; Keall, Paul J.

    2014-12-15

    Purpose: The authors combine the registration of 2D beam’s eye view (BEV) images and 3D planning computed tomography (CT) images, with relative, markerless tumor tracking to provide automatic absolute tracking of physician defined volumes such as the gross tumor volume (GTV). Methods: During treatment of lung SBRT cases, BEV images were continuously acquired with an electronic portal imaging device (EPID) operating in cine mode. For absolute registration of physician-defined volumes, an intensity based 2D/3D registration to the planning CT was performed using the end-of-exhale (EoE) phase of the four dimensional computed tomography (4DCT). The volume was converted from Hounsfield units into electron density by a calibration curve and digitally reconstructed radiographs (DRRs) were generated for each beam geometry. Using normalized cross correlation between the DRR and an EoE BEV image, the best in-plane rigid transformation was found. The transformation was applied to physician-defined contours in the planning CT, mapping them into the EPID image domain. A robust multiregion method of relative markerless lung tumor tracking quantified deviations from the EoE position. Results: The success of 2D/3D registration was demonstrated at the EoE breathing phase. By registering at this phase and then employing a separate technique for relative tracking, the authors are able to successfully track target volumes in the BEV images throughout the entire treatment delivery. Conclusions: Through the combination of EPID/4DCT registration and relative tracking, a necessary step toward the clinical implementation of BEV tracking has been completed. The knowledge of tumor volumes relative to the treatment field is important for future applications like real-time motion management, adaptive radiotherapy, and delivered dose calculations.
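
    The in-plane transform above is selected by maximizing normalized cross correlation between the DRR and the end-of-exhale BEV image. Below is a minimal sketch of zero-mean normalized cross-correlation for two same-sized images; it is illustrative only, not the authors' implementation, and in practice the score would be evaluated over a set of candidate in-plane shifts/rotations of the DRR, keeping the transform with the highest score.

```python
import numpy as np

def normalized_cross_correlation(drr, bev):
    """Zero-mean normalized cross-correlation between two same-sized images;
    1.0 indicates a perfect (linear) intensity match."""
    a = drr - drr.mean()
    b = bev - bev.mean()
    denom = np.sqrt((a ** 2).sum() * (b ** 2).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0
```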

  2. Registration of clinical volumes to beams-eye-view images for real-time tracking

    PubMed Central

    Bryant, Jonathan H.; Rottmann, Joerg; Lewis, John H.; Mishra, Pankaj; Keall, Paul J.; Berbeco, Ross I.

    2014-01-01

    Purpose: The authors combine the registration of 2D beam’s eye view (BEV) images and 3D planning computed tomography (CT) images, with relative, markerless tumor tracking to provide automatic absolute tracking of physician defined volumes such as the gross tumor volume (GTV). Methods: During treatment of lung SBRT cases, BEV images were continuously acquired with an electronic portal imaging device (EPID) operating in cine mode. For absolute registration of physician-defined volumes, an intensity based 2D/3D registration to the planning CT was performed using the end-of-exhale (EoE) phase of the four dimensional computed tomography (4DCT). The volume was converted from Hounsfield units into electron density by a calibration curve and digitally reconstructed radiographs (DRRs) were generated for each beam geometry. Using normalized cross correlation between the DRR and an EoE BEV image, the best in-plane rigid transformation was found. The transformation was applied to physician-defined contours in the planning CT, mapping them into the EPID image domain. A robust multiregion method of relative markerless lung tumor tracking quantified deviations from the EoE position. Results: The success of 2D/3D registration was demonstrated at the EoE breathing phase. By registering at this phase and then employing a separate technique for relative tracking, the authors are able to successfully track target volumes in the BEV images throughout the entire treatment delivery. Conclusions: Through the combination of EPID/4DCT registration and relative tracking, a necessary step toward the clinical implementation of BEV tracking has been completed. The knowledge of tumor volumes relative to the treatment field is important for future applications like real-time motion management, adaptive radiotherapy, and delivered dose calculations. PMID:25471950

  3. 2D ERT imaging of tracer dispersion in laboratory experiments

    NASA Astrophysics Data System (ADS)

    Lekmine, G.; Pessel, M.; Auradou, H.

    2009-12-01

    Cross-borehole electrical resistivity tomography is a method often used to follow the invasion process of pollutants. The aim of this work is to test experimentally the electrode arrays and inversion processes used to obtain a spatial representation of tracer propagation in porous media. Experiments were conducted in a plexiglass container with glass beads of 166 microns in diameter. The height of the container is 275 mm, its width 85 mm and its thickness 10 mm. Twenty-one equally spaced electrodes are placed along each of the lateral sides of the porous medium; these electrodes are used to perform the electrical measurements. The device is lit from behind and a video camera records the fluid propagation. The tracer (i.e. the pollutant) is a water solution containing a known amount of dye together with NaCl (0.5 g/l up to 1.5 g/l). The medium is first saturated with a water solution containing a slight concentration of NaCl so that its density is smaller than the tracer's. An upward flow is then established: the denser fluid is injected at the bottom over the full width of the medium. In this way, the flow is stabilized by gravity, avoiding the development of unstable fingers. Still, the fluids are miscible and a mixing front develops during the flow; in the present study, the interest is to estimate the 2D tracer front dispersion by both optical and electrical imaging. The comparison of the two techniques allows the ability of the inversion process to quantify the solute transport to be studied. A sensitivity analysis is carried out in order to determine the best measurement sequence to monitor the evolution of the tracer front through the entire volume of the medium. Hence, each time step consists of the same set of 190 transverse dipole-dipole measurements, lasting 5 minutes between the first and the last measurement. At the laboratory scale, the experimental design affects the measurements through edge effects: most of these artefacts can be partially suppressed by using

  4. SAR/LANDSAT image registration study

    NASA Technical Reports Server (NTRS)

    Murphrey, S. W. (Principal Investigator)

    1978-01-01

    The author has identified the following significant results. Temporal registration of synthetic aperture radar data with LANDSAT-MSS data is both feasible (from a technical standpoint) and useful (from an information-content viewpoint). The greatest difficulty in registering aircraft SAR data to corrected LANDSAT-MSS data is control-point location. The differences in SAR and MSS data impact the selection of features that will serve as good control points. The SAR and MSS data are unsuitable for automatic computer correlation of digital control-point data. The gray-level data cannot be compared by the computer because of the different response characteristics of the MSS and SAR images.

  5. a New Approach for Optical and SAR Satellite Image Registration

    NASA Astrophysics Data System (ADS)

    Merkle, N.; Müller, R.; Schwind, P.; Palubinskas, G.; Reinartz, P.

    2015-03-01

    Over the last few years, several research studies have shown the high geometric accuracy of high-resolution radar satellites like TerraSAR-X. Due to this fact, the impact of high-resolution SAR images on image registration has increased. One aim of high-accuracy image registration is the improvement of the absolute geometric accuracy of optical images by using SAR images as references. High-accuracy image registration is required for different remote sensing applications and is an ongoing research topic. The registration of images acquired by different sensor types, like optical and SAR images, is a challenging task. In our work, a novel approach is proposed, which is a combination of the classical feature-based and intensity-based registration approaches. In the first step of the method, spatial features, here roundabouts, are detected in the optical image. In the second step, the detected features are used to generate SAR-like roundabout templates. In the third step, the templates are matched with the corresponding parts of the SAR image by using an intensity-based matching process. The proposed method is tested for a pair of TerraSAR-X and QuickBird images and a pair of TerraSAR-X and WorldView-2 images of a suburban area. The results show that the proposed method offers an alternative approach compared to the common optical and SAR image registration methods and that it can be used for the geometric accuracy improvement of optical images.

  6. Diffeomorphic Image Registration of Diffusion MRI Using Spherical Harmonics

    PubMed Central

    Geng, Xiujuan; Ross, Thomas J.; Gu, Hong; Shin, Wanyong; Zhan, Wang; Chao, Yi-Ping; Lin, Ching-Po; Schuff, Norbert; Yang, Yihong

    2013-01-01

    Non-rigid registration of diffusion MRI is crucial for group analyses and building white matter and fiber tract atlases. Most current diffusion MRI registration techniques are limited to the alignment of diffusion tensor imaging (DTI) data. We propose a novel diffeomorphic registration method for high angular resolution diffusion images by mapping their orientation distribution functions (ODFs). ODFs can be reconstructed using q-ball imaging (QBI) techniques and represented by spherical harmonics (SHs) to resolve intra-voxel fiber crossings. The registration is based on optimizing a diffeomorphic demons cost function. Unlike scalar images, deforming ODF maps requires ODF reorientation to maintain consistency with the local fiber orientations. Our method simultaneously reorients the ODFs by computing a Wigner rotation matrix at each voxel, and applies it to the SH coefficients during registration. Rotation of the coefficients avoids the estimation of principal directions, which has no analytical solution and is time consuming. The proposed method was validated on both simulated and real data sets with various metrics, which include the distance between the estimated and simulated transformation fields, the standard deviation of the general fractional anisotropy and the directional consistency of the deformed and reference images. The registration performance using SHs with different maximum orders was compared using these metrics. Results show that the diffeomorphic registration improved the affine alignment, and that registration using higher-order SHs further improved the registration accuracy by reducing the shape difference and improving the directional consistency of the registered and reference ODF maps. PMID:21134814

  7. Evaluation of various deformable image registration algorithms for thoracic images.

    PubMed

    Kadoya, Noriyuki; Fujita, Yukio; Katsuta, Yoshiyuki; Dobashi, Suguru; Takeda, Ken; Kishi, Kazuma; Kubozono, Masaki; Umezawa, Rei; Sugawara, Toshiyuki; Matsushita, Haruo; Jingu, Keiichi

    2014-01-01

    We evaluated the accuracy of one commercially available and three publicly available deformable image registration (DIR) algorithms for thoracic four-dimensional (4D) computed tomography (CT) images. Five patients with esophagus cancer were studied. Datasets of the five patients were provided by DIR-lab (dir-lab.com) and consisted of thoracic 4D CT images and a coordinate list of anatomical landmarks that had been manually identified. Expert landmark correspondence was used for evaluating DIR spatial accuracy. First, the manually measured displacement vector field (mDVF) was obtained from the coordinate list of anatomical landmarks. Then the automatically calculated displacement vector field (aDVF) was calculated using the following four DIR algorithms: B-spline implemented in Velocity AI (Velocity Medical, Atlanta, GA, USA), free-form deformation (FFD), Horn-Schunck optical flow (OF) and Demons in DIRART of MATLAB software. Registration error is defined as the difference between mDVF and aDVF. The mean 3D registration errors were 2.7 ± 0.8 mm for B-spline, 3.6 ± 1.0 mm for FFD, 2.4 ± 0.9 mm for OF and 2.4 ± 1.2 mm for Demons. The results showed that reasonable accuracy was achieved by B-spline, OF and Demons, and that these algorithms have the potential to be used for 4D dose calculation, automatic image segmentation and 4D CT ventilation imaging in patients with thoracic cancer. However, for all algorithms, the accuracy might be improved by using optimized parameter settings. Furthermore, for B-spline in Velocity AI, the 3D registration error was small for displacements of less than ∼10 mm, indicating that this software may be useful in this range of displacements.
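
    Registration error above is defined as the difference between the manually measured (mDVF) and automatically calculated (aDVF) displacement vector fields at the landmark positions. A minimal sketch of the corresponding mean 3D error computation is shown below; the voxel size and landmark values are hypothetical.

```python
import numpy as np

def mean_3d_registration_error(mdvf, advf, voxel_size=(1.0, 1.0, 1.0)):
    """Mean Euclidean error (in mm) between manually measured (mDVF) and
    automatically calculated (aDVF) landmark displacement vectors.
    Both arrays have shape (n_landmarks, 3), in voxel units."""
    diff_mm = (np.asarray(mdvf) - np.asarray(advf)) * np.asarray(voxel_size)
    errors = np.linalg.norm(diff_mm, axis=1)
    return errors.mean(), errors.std()

# Hypothetical usage with three landmarks.
mdvf = np.array([[0.0, 2.0, 5.0], [1.0, 1.0, 3.0], [0.0, 0.0, 8.0]])
advf = np.array([[0.5, 2.0, 4.0], [1.0, 2.0, 3.0], [0.0, 0.5, 7.0]])
print(mean_3d_registration_error(mdvf, advf, voxel_size=(0.97, 0.97, 2.5)))
```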

  8. A volumetric model-based 2D to 3D registration method for measuring kinematics of natural knees with single-plane fluoroscopy

    SciTech Connect

    Tsai, Tsung-Yuan; Lu, Tung-Wu; Chen, Chung-Ming; Kuo, Mei-Ying; Hsu, Horng-Chaung

    2010-03-15

    Purpose: Accurate measurement of the three-dimensional (3D) rigid body and surface kinematics of the natural human knee is essential for many clinical applications. Existing techniques are limited either in their accuracy or in the realism of the experimental evaluation of their measurement errors. The purposes of the study were to develop a volumetric model-based 2D to 3D registration method, called the weighted edge-matching score (WEMS) method, for measuring natural knee kinematics with single-plane fluoroscopy; to determine the measurement errors experimentally; and to compare its performance with that of the pattern intensity (PI) and gradient difference (GD) methods. Methods: The WEMS method gives higher priority to matching of longer edges of the digitally reconstructed radiograph and fluoroscopic images. The measurement errors of the methods were evaluated based on a human cadaveric knee at 11 flexion positions. Results: The accuracy of the WEMS method was determined experimentally to be less than 0.77 mm for the in-plane translations, 3.06 mm for the out-of-plane translation, and 1.13 deg. for all rotations, which is better than that of the PI and GD methods. Conclusions: A new volumetric model-based 2D to 3D registration method has been developed for measuring 3D in vivo kinematics of natural knee joints with single-plane fluoroscopy. With the equipment used in the current study, the accuracy of the WEMS method is considered acceptable for the measurement of the 3D kinematics of the natural knee in clinical applications.

  9. Shearlet Features for Registration of Remotely Sensed Multitemporal Images

    NASA Technical Reports Server (NTRS)

    Murphy, James M.; Le Moigne, Jacqueline

    2015-01-01

    We investigate the role of anisotropic feature extraction methods for automatic image registration of remotely sensed multitemporal images. Building on the classical use of wavelets in image registration, we develop an algorithm based on shearlets, a mathematical generalization of wavelets that offers increased directional sensitivity. Initial experimental results on LANDSAT images are presented, which indicate superior performance of the shearlet algorithm when compared to classical wavelet algorithms.

  10. [Design of the 2D-FFT image reconstruction software based on Matlab].

    PubMed

    Xu, Hong-yu; Wang, Hong-zhi

    2008-09-01

    This paper presents a Matlab implementation of the 2D-FFT image reconstruction algorithm for magnetic resonance imaging, packaged as a universal COM component that the Windows system can identify. This makes it possible to separate the 2D-FFT image reconstruction algorithm from closed commercial magnetic resonance imaging systems, enabling initial data processing before reconstruction, which is important for improving image quality, diagnostic value and image post-processing.
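
    The core of 2D-FFT MR image reconstruction is an inverse 2D FFT of the acquired k-space. The sketch below shows the idea in NumPy (the paper's own implementation is in Matlab, wrapped as a COM component); the centring convention and test data are illustrative assumptions.

```python
import numpy as np

def reconstruct_2dfft(kspace):
    """Basic 2D-FFT MR reconstruction: inverse 2D FFT of k-space data stored
    with the DC sample at the array centre; returns the magnitude image."""
    return np.abs(np.fft.ifft2(np.fft.ifftshift(kspace)))

# Hypothetical usage: round-trip a synthetic image through centred k-space.
img = np.zeros((128, 128))
img[48:80, 40:88] = 1.0
kspace = np.fft.fftshift(np.fft.fft2(img))
print(np.allclose(reconstruct_2dfft(kspace), img))   # True
```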

  11. Bi-sided integral imaging with 2D/3D convertibility using scattering polarizer.

    PubMed

    Yeom, Jiwoon; Hong, Keehoon; Park, Soon-gi; Hong, Jisoo; Min, Sung-Wook; Lee, Byoungho

    2013-12-16

    We propose a two-dimensional (2D) and three-dimensional (3D) convertible bi-sided integral imaging system. The proposed system uses the polarization state of the projected light to switch its operation mode between 2D and 3D. By using an optical module composed of two scattering polarizers and one linear polarizer, the proposed integral imaging system simultaneously provides 3D images with 2D background images for observers located at the front and the rear of the system. The occlusion effect between 2D images and 3D images is realized by using a compensation mask for the 2D images and the elemental images. The principle of the proposed system is verified experimentally.

  12. Research Issues in Image Registration for Remote Sensing

    NASA Technical Reports Server (NTRS)

    Eastman, Roger D.; LeMoigne, Jacqueline; Netanyahu, Nathan S.

    2007-01-01

    Image registration is an important element in data processing for remote sensing with many applications and a wide range of solutions. Despite considerable investigation the field has not settled on a definitive solution for most applications and a number of questions remain open. This article looks at selected research issues by surveying the experience of operational satellite teams, application-specific requirements for Earth science, and our experiments in the evaluation of image registration algorithms with emphasis on the comparison of algorithms for subpixel accuracy. We conclude that remote sensing applications put particular demands on image registration algorithms to take into account domain-specific knowledge of geometric transformations and image content.

  13. High-accuracy registration of intraoperative CT imaging

    NASA Astrophysics Data System (ADS)

    Oentoro, A.; Ellis, R. E.

    2010-02-01

    Image-guided interventions using intraoperative 3D imaging can be less cumbersome than systems dependent on preoperative images, especially by needing neither potentially invasive image-to-patient registration nor a lengthy process of segmenting and generating a 3D surface model. In this study, a method for computer-assisted surgery using direct navigation on intraoperative imaging is presented. In this system the registration step of a navigated procedure was divided into two stages: preoperative calibration of images to a ceiling-mounted optical tracking system, and intraoperative tracking during acquisition of the 3D medical image volume. The preoperative stage used a custom-made multi-modal calibrator that could be optically tracked and also contained fiducial spheres for radiological detection; a robust registration algorithm was used to compensate for the very high false-detection rate that was due to the high physical density of the optical light-emitting diodes. Intraoperatively, a tracking device was attached to plastic bone models that were also instrumented with radio-opaque spheres; a calibrated pointer was used to contact the latter spheres as a validation of the registration. Experiments showed that the fiducial registration error of the preoperative calibration stage was approximately 0.1 mm. The target registration error in the validation stage was approximately 1.2 mm. This study suggests that direct registration, coupled with procedure-specific graphical rendering, is potentially a highly accurate means of performing image-guided interventions in a fast, simple manner.

  14. Comparison and evaluation of retrospective intermodality image registration techniques

    NASA Astrophysics Data System (ADS)

    West, Jay B.; Fitzpatrick, J. Michael; Wang, Matthew Y.; Dawant, Benoit M.; Maurer, Calvin R., Jr.; Kessler, Robert M.; Maciunas, Robert J.; Barillot, Christian; Lemoine, Didier; Collignon, Andre M. F.; Maes, Frederik; Suetens, Paul; Vandermeulen, Dirk; van den Elsen, Petra A.; Hemler, Paul F.; Napel, Sandy; Sumanaweera, Thilaka S.; Harkness, Beth A.; Hill, Derek L.; Studholme, Colin; Malandain, Gregoire; Pennec, Xavier; Noz, Marilyn E.; Maguire, Gerald Q., Jr.; Pollack, Michael; Pelizzari, Charles A.; Robb, Richard A.; Hanson, Dennis P.; Woods, Roger P.

    1996-04-01

    All retrospective image registration methods have attached to them some intrinsic estimate of registration error. However, this estimate of accuracy may not always be a good indicator of the distance between actual and estimated positions of targets within the cranial cavity. This paper describes a project whose principal goal is to use a prospective method based on fiducial markers as a 'gold standard' to perform an objective, blinded evaluation of the accuracy of several retrospective image-to-image registration techniques. Image volumes of three modalities -- CT, MR, and PET -- were taken of patients undergoing neurosurgery at Vanderbilt University Medical Center. These volumes had all traces of the fiducial markers removed, and were provided to project collaborators outside Vanderbilt, who then performed retrospective registrations on the volumes, calculating transformations from CT to MR and/or from PET to MR, and communicated their transformations to Vanderbilt where the accuracy of each registration was evaluated. In this evaluation the accuracy is measured at multiple 'regions of interest,' i.e. areas in the brain which would commonly be areas of neurological interest. A region is defined in the MR image and its centroid C is determined. Then the prospective registration is used to obtain the corresponding point C' in CT or PET. To this point the retrospective registration is then applied, producing C'' in MR. Statistics are gathered on the target registration error (TRE), which is the disparity between the original point C and its corresponding point C''. A second goal of the project is to evaluate the importance of correcting geometrical distortion in MR images, by comparing the retrospective TRE in the rectified images, i.e., those which have had the distortion correction applied, with that of the same images before rectification. This paper presents preliminary results of this study along with a brief description of each registration technique and an

  15. Intra-operative 2-D ultrasound and dynamic 3-D aortic model registration for magnetic navigation of transcatheter aortic valve implantation.

    PubMed

    Luo, Zhe; Cai, Junfeng; Peters, Terry M; Gu, Lixu

    2013-11-01

    We propose a navigation system for transcatheter aortic valve implantation that employs a magnetic tracking system (MTS) along with a dynamic aortic model and intra-operative ultrasound (US) images. This work is motivated by the desire of our cardiology and cardiac surgical colleagues to minimize or eliminate the use of radiation in the interventional suite or operating room. The dynamic 3-D aortic model is constructed from a preoperative 4-D computed tomography dataset and is animated in synchrony with the real-time electrocardiograph input of the patient; preoperative planning is then performed to determine the target position of the aortic valve prosthesis. The contours of the aortic root are extracted automatically from short-axis US images in real time to register the 2-D intra-operative US image to the preoperative dynamic aortic model. The augmented MTS guides the interventionist during positioning and deployment of the aortic valve prosthesis to the target. The results of the aortic root segmentation algorithm demonstrate an error of 0.92±0.85 mm with a computational time of 36.13±6.26 ms. The navigation approach was validated in porcine studies, yielding fiducial localization errors, target registration errors, deployment distance, and tilting errors of 3.02±0.39 mm, 3.31±1.55 mm, 3.23±0.94 mm, and 5.85±3.06°, respectively.

  16. Web-based interactive 2D/3D medical image processing and visualization software.

    PubMed

    Mahmoudi, Seyyed Ehsan; Akhondi-Asl, Alireza; Rahmani, Roohollah; Faghih-Roohi, Shahrooz; Taimouri, Vahid; Sabouri, Ahmad; Soltanian-Zadeh, Hamid

    2010-05-01

    There are many medical image processing software tools available for research and diagnosis purposes. However, most of these tools are available only as local applications. This limits the accessibility of the software to a specific machine, and thus the data and processing power of that application are not available to other workstations. Further, there are operating system and processing power limitations which prevent such applications from running on every type of workstation. By developing web-based tools, it is possible for users to access the medical image processing functionalities wherever the internet is available. In this paper, we introduce a pure web-based, interactive, extendable, 2D and 3D medical image processing and visualization application that requires no client installation. Our software uses a four-layered design consisting of an algorithm layer, web-user-interface layer, server communication layer, and wrapper layer. To match the extendibility of current local medical image processing software, each layer is highly independent of the other layers. A wide range of medical image preprocessing, registration, and segmentation methods are implemented using open source libraries. Desktop-like user interaction is provided by using AJAX technology in the web-user-interface. For the visualization functionality of the software, the VRML standard is used to provide 3D features over the web. Integration of these technologies has allowed implementation of our purely web-based software with high functionality without requiring powerful computational resources on the client side. The user-interface is designed such that the users can select appropriate parameters for practical research and clinical studies.

  17. Automatic nonrigid registration of whole body CT mice images.

    PubMed

    Li, Xia; Yankeelov, Thomas E; Peterson, Todd E; Gore, John C; Dawant, Benoit M

    2008-04-01

    Three-dimensional intra- and intersubject registration of image volumes is important for tasks that include quantification of temporal/longitudinal changes, atlas-based segmentation, computing population averages, or voxel and tensor-based morphometry. While a number of methods have been proposed to address this problem, few have focused on the problem of registering whole body image volumes acquired either from humans or small animals. These image volumes typically contain a large number of articulated structures, which makes registration more difficult than the registration of head images, to which the majority of registration algorithms have been applied. This article presents a new method for the automatic registration of whole body computed tomography (CT) volumes, which consists of two main steps. Skeletons are first brought into approximate correspondence with a robust point-based method. Transformations so obtained are refined with an intensity-based nonrigid registration algorithm that includes spatial adaptation of the transformation's stiffness. The approach has been applied to whole body CT images of mice, to CT images of the human upper torso, and to human head and neck CT images. To validate the authors' method on soft tissue structures, which are difficult to see in CT images, the authors use coregistered magnetic resonance images. They demonstrate that the approach they propose can successfully register image volumes even when these volumes are very different in size and shape or if they have been acquired with the subjects in different positions.

  18. Automatic localization of vertebral levels in x-ray fluoroscopy using 3D-2D registration: a tool to reduce wrong-site surgery.

    PubMed

    Otake, Y; Schafer, S; Stayman, J W; Zbijewski, W; Kleinszig, G; Graumann, R; Khanna, A J; Siewerdsen, J H

    2012-09-07

    Surgical targeting of the incorrect vertebral level (wrong-level surgery) is among the more common wrong-site surgical errors, attributed primarily to the lack of uniquely identifiable radiographic landmarks in the mid-thoracic spine. The conventional localization method involves manual counting of vertebral bodies under fluoroscopy, is prone to human error and carries additional time and dose. We propose an image registration and visualization system (referred to as LevelCheck), for decision support in spine surgery by automatically labeling vertebral levels in fluoroscopy using a GPU-accelerated, intensity-based 3D-2D (namely CT-to-fluoroscopy) registration. A gradient information (GI) similarity metric and a CMA-ES optimizer were chosen due to their robustness and inherent suitability for parallelization. Simulation studies involved ten patient CT datasets from which 50 000 simulated fluoroscopic images were generated from C-arm poses selected to approximate the C-arm operator and positioning variability. Physical experiments used an anthropomorphic chest phantom imaged under real fluoroscopy. The registration accuracy was evaluated as the mean projection distance (mPD) between the estimated and true center of vertebral levels. Trials were defined as successful if the estimated position was within the projection of the vertebral body (namely mPD <5 mm). Simulation studies showed a success rate of 99.998% (1 failure in 50 000 trials) and computation time of 4.7 s on a midrange GPU. Analysis of failure modes identified cases of false local optima in the search space arising from longitudinal periodicity in vertebral structures. Physical experiments demonstrated the robustness of the algorithm against quantum noise and x-ray scatter. The ability to automatically localize target anatomy in fluoroscopy in near-real-time could be valuable in reducing the occurrence of wrong-site surgery while helping to reduce radiation exposure. The method is applicable beyond the
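    As an aside, a common formulation of a gradient-information (GI) similarity between a DRR and a fluoroscopic image weights the alignment of local gradient directions by the smaller of the two gradient magnitudes. The sketch below is an illustrative stand-in for such a metric, not the LevelCheck implementation itself.

        import numpy as np

        def gradient_information(fixed, moving, eps=1e-12):
            gy_f, gx_f = np.gradient(fixed.astype(float))
            gy_m, gx_m = np.gradient(moving.astype(float))
            mag_f = np.hypot(gx_f, gy_f)
            mag_m = np.hypot(gx_m, gy_m)
            cos_a = (gx_f * gx_m + gy_f * gy_m) / (mag_f * mag_m + eps)
            # Angle weighting w = (cos(2*alpha) + 1) / 2 favors parallel or
            # anti-parallel gradients; weight by the smaller gradient magnitude.
            w = (np.cos(2.0 * np.arccos(np.clip(cos_a, -1.0, 1.0))) + 1.0) / 2.0
            return float(np.sum(w * np.minimum(mag_f, mag_m)))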

  19. Automatic localization of vertebral levels in x-ray fluoroscopy using 3D-2D registration: a tool to reduce wrong-site surgery

    NASA Astrophysics Data System (ADS)

    Otake, Y.; Schafer, S.; Stayman, J. W.; Zbijewski, W.; Kleinszig, G.; Graumann, R.; Khanna, A. J.; Siewerdsen, J. H.

    2012-09-01

    Surgical targeting of the incorrect vertebral level (wrong-level surgery) is among the more common wrong-site surgical errors, attributed primarily to the lack of uniquely identifiable radiographic landmarks in the mid-thoracic spine. The conventional localization method involves manual counting of vertebral bodies under fluoroscopy, is prone to human error and carries additional time and dose. We propose an image registration and visualization system (referred to as LevelCheck), for decision support in spine surgery by automatically labeling vertebral levels in fluoroscopy using a GPU-accelerated, intensity-based 3D-2D (namely CT-to-fluoroscopy) registration. A gradient information (GI) similarity metric and a CMA-ES optimizer were chosen due to their robustness and inherent suitability for parallelization. Simulation studies involved ten patient CT datasets from which 50 000 simulated fluoroscopic images were generated from C-arm poses selected to approximate the C-arm operator and positioning variability. Physical experiments used an anthropomorphic chest phantom imaged under real fluoroscopy. The registration accuracy was evaluated as the mean projection distance (mPD) between the estimated and true center of vertebral levels. Trials were defined as successful if the estimated position was within the projection of the vertebral body (namely mPD <5 mm). Simulation studies showed a success rate of 99.998% (1 failure in 50 000 trials) and computation time of 4.7 s on a midrange GPU. Analysis of failure modes identified cases of false local optima in the search space arising from longitudinal periodicity in vertebral structures. Physical experiments demonstrated the robustness of the algorithm against quantum noise and x-ray scatter. The ability to automatically localize target anatomy in fluoroscopy in near-real-time could be valuable in reducing the occurrence of wrong-site surgery while helping to reduce radiation exposure. The method is applicable beyond

  20. Reduction of multi-fragment fractures of the distal radius using atlas-based 2D/3D registration

    NASA Astrophysics Data System (ADS)

    Gong, Ren Hui; Stewart, James; Abolmaesumi, Purang

    2009-02-01

    We describe a method to guide the surgical fixation of distal radius fractures. The method registers the fracture fragments to a volumetric intensity-based statistical anatomical atlas of the distal radius, reconstructed from human cadavers and patient data, using a few intra-operative X-ray fluoroscopy images of the fracture. No pre-operative Computed Tomography (CT) images are required, hence radiation exposure to patients is substantially reduced. Intra-operatively, each bone fragment is roughly segmented from the X-ray images by a surgeon, and a corresponding segmentation volume is created from the back-projections of the 2D segmentations. An optimization procedure positions each segmentation volume at the appropriate pose on the atlas, while simultaneously deforming the atlas such that the overlap of the 2D projection of the atlas with individual fragments in the segmented regions is maximized. Our simulation results show that this method can accurately identify the pose of large fragments using only two X-ray views, but for small fragments, more than two X-rays may be needed. The method does not assume any prior knowledge about the shape of the bone and the number of fragments, thus it is also potentially suitable for the fixation of other types of multi-fragment fractures.

  1. INTER-GROUP IMAGE REGISTRATION BY HIERARCHICAL GRAPH SHRINKAGE.

    PubMed

    Ying, Shihui; Wu, Guorong; Liao, Shu; Shen, Dinggang

    2013-12-31

    In this paper, we propose a novel inter-group image registration method to register different groups of images (e.g., young and elderly brains) simultaneously. Specifically, we use a hierarchical two-level graph to model the distribution of the entire set of images on the manifold, with the intra-graph representing the image distribution in each group and the inter-graph describing the relationship between the two groups. The procedure of inter-group registration is then formulated as a dynamic evolution of graph shrinkage. The advantage of our method is that the topology of the entire image distribution is explored to guide the image registration. In this way, each image coordinates with its neighboring images on the manifold to deform towards the population center, by following the deformation pathway simultaneously optimized within the graph. Our proposed method has also been compared with other state-of-the-art inter-group registration methods, where our method achieves better registration results in terms of registration accuracy and robustness.

  2. Optimal atlas construction through hierarchical image registration

    NASA Astrophysics Data System (ADS)

    Grevera, George J.; Udupa, Jayaram K.; Odhner, Dewey; Torigian, Drew A.

    2016-03-01

    Atlases (digital or otherwise) are common in medicine. However, there is no standard framework for creating them from medical images. One traditional approach is to pick a representative subject and then proceed to label structures/regions of interest in this image. Another is to create a "mean" or average subject. Atlases may also contain more than a single representative (e.g., the Visible Human contains both a male and a female data set). Other criteria besides gender may be used as well, and the atlas may contain many examples for a given criterion. In this work, we propose that atlases be created in an optimal manner using a well-established graph-theoretic approach based on a minimum spanning tree (or, more generally, a collection of them). The resulting atlases may contain many examples for a given criterion. In fact, our framework allows for the addition of new subjects to the atlas to allow it to evolve over time. Furthermore, one can apply segmentation methods to the graph (e.g., graph-cut, fuzzy connectedness, or cluster analysis) which allow it to be separated into "sub-atlases" as it evolves. We demonstrate our method by applying it to 50 3D CT data sets of the chest region, and by comparing it to a number of traditional methods of rigid registration using measures such as Mean Squared Difference, Mattes Mutual Information, and Correlation. Our results demonstrate that optimal atlases can be constructed in this manner and outperform other methods of construction using freely available software.
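    A minimal sketch of the graph-theoretic idea described above: build a fully connected graph whose edge weights are pairwise dissimilarities between subjects (here, simply the mean squared difference between already-aligned images) and extract a minimum spanning tree to organize the subjects into an atlas. The image arrays and the choice of metric are placeholders; the registration steps of the paper are omitted.

        import numpy as np
        from scipy.sparse.csgraph import minimum_spanning_tree

        def atlas_mst(images):
            n = len(images)
            w = np.zeros((n, n))
            for i in range(n):
                for j in range(i + 1, n):
                    w[i, j] = np.mean((images[i] - images[j]) ** 2)  # MSD edge weight
            mst = minimum_spanning_tree(w)       # sparse matrix of the kept edges
            return np.transpose(mst.nonzero())   # list of (i, j) tree edges

        rng = np.random.default_rng(0)
        imgs = [rng.random((32, 32)) for _ in range(5)]
        print(atlas_mst(imgs))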

  3. Nonrigid brain MR image registration using uniform spherical region descriptor.

    PubMed

    Liao, Shu; Chung, Albert C S

    2012-01-01

    There are two main issues that make nonrigid image registration a challenging task. First, voxel intensity similarity may not necessarily be equivalent to anatomical similarity in the image correspondence searching process. Second, during the imaging process, some interferences such as unexpected rotations of input volumes and monotonic gray-level bias fields can adversely affect the registration quality. In this paper, a new feature-based nonrigid image registration method is proposed. The proposed method is based on a new type of image feature, namely, the uniform spherical region descriptor (USRD), as a signature for each voxel. The USRD is rotation and monotonic gray-level transformation invariant and can be efficiently calculated. The registration process is therefore formulated as a feature matching problem. The USRD feature is integrated with the Markov random field labeling framework, in which an energy function is defined for registration. The energy function is then optimized by the α-expansion algorithm. The proposed method has been compared with five state-of-the-art registration approaches on both the simulated and real 3-D databases obtained from the BrainWeb and Internet Brain Segmentation Repository, respectively. Experimental results demonstrate that the proposed method can achieve high registration accuracy and reliable robustness behavior.

  4. 2D Images Recorded With a Single-Sided Magnetic Particle Imaging Scanner.

    PubMed

    Grafe, Ksenija; von Gladiss, Anselm; Bringout, Gael; Ahlborg, Mandy; Buzug, Thorsten M

    2016-04-01

    Magnetic Particle Imaging is a new medical imaging modality, which detects superparamagnetic iron oxide nanoparticles. The particles are excited by magnetic fields. Most scanners have a tube-like measurement field and therefore, both the field of view and the object size are limited. A single-sided scanner has the advantage that the object is not limited in size; only the penetration depth is limited. A single-sided scanner prototype for 1D imaging was presented in 2009. Simulations have been published for a 2D single-sided scanner and first 1D measurements have been carried out. In this paper, the first 2D single-sided scanner prototype is presented and the first calibration-based reconstruction results of measured 2D phantoms are shown. The field free point is moved on a Lissajous trajectory inside a 30 × 30 mm² area. Images of phantoms with a maximal distance of 10 mm perpendicular to the scanner surface have been reconstructed. Different cylindrically shaped holes of phantoms have been filled with 6.28 μl of undiluted Resovist. After the measurement and image reconstruction of the phantoms, particle volumes could be distinguished at distances of 2 mm and 6 mm in the vertical and horizontal directions, respectively.

  5. Bivariate gamma distributions for image registration and change detection.

    PubMed

    Chatelain, Florent; Tourneret, Jean-Yves; Inglada, Jordi; Ferrari, André

    2007-07-01

    This paper evaluates the potential interest of using bivariate gamma distributions for image registration and change detection. The first part of this paper studies estimators for the parameters of bivariate gamma distributions based on the maximum likelihood principle and the method of moments. The performance of both methods is compared in terms of estimated mean square errors and theoretical asymptotic variances. The mutual information is a classical similarity measure which can be used for image registration or change detection. The second part of the paper studies some properties of the mutual information for bivariate gamma distributions. Image registration and change detection techniques based on bivariate gamma distributions are finally investigated. Simulation results conducted on synthetic and real data are very encouraging. Bivariate gamma distributions are good candidates for developing new image registration algorithms and new change detectors.
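    As a small illustration of the moment-based estimation mentioned above, the sketch below estimates the shape and scale of a gamma marginal by the method of moments and the correlation of a bivariate gamma sample empirically; the paper's full bivariate maximum-likelihood estimator is not reproduced here.

        import numpy as np

        def gamma_mom(x):
            """Method-of-moments estimates of gamma shape k and scale theta."""
            m, v = np.mean(x), np.var(x)
            return m * m / v, v / m          # k = mean^2 / var, theta = var / mean

        rng = np.random.default_rng(1)
        x = rng.gamma(shape=3.0, scale=2.0, size=100000)
        # 0.5*x ~ Gamma(3, 1); adding an independent Gamma(3, 1) gives a correlated
        # channel y ~ Gamma(6, 1), i.e., a simple bivariate gamma pair.
        y = 0.5 * x + rng.gamma(shape=3.0, scale=1.0, size=100000)
        print(gamma_mom(x), gamma_mom(y), np.corrcoef(x, y)[0, 1])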

  6. Registration of Heat Capacity Mapping Mission day and night images

    NASA Technical Reports Server (NTRS)

    Watson, K.; Hummer-Miller, S.; Sawatzky, D. L. (Principal Investigator)

    1982-01-01

    Neither iterative registration, using drainage intersection maps for control, nor cross correlation techniques were satisfactory in registering day and night HCMM imagery. A procedure was developed which registers the image pairs by selecting control points and mapping the night thermal image to the daytime thermal and reflectance images using an affine transformation on a 1300 by 1100 pixel image. The resulting image registration is accurate to better than two pixels (RMS) and does not exhibit the significant misregistration that was noted in the temperature-difference and thermal-inertia products supplied by NASA. The affine transformation was determined using simple matrix arithmetic, a step that can be performed rapidly on a minicomputer.
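    A sketch of determining a 2D affine transform from manually selected control points by simple least-squares matrix arithmetic, of the kind the abstract describes; the control-point coordinates below are made up for illustration.

        import numpy as np

        def fit_affine(src, dst):
            """src, dst: (N, 2) arrays of matched control points (N >= 3).
            Returns the 3x2 matrix A such that dst ~= [x, y, 1] @ A."""
            src = np.asarray(src, float); dst = np.asarray(dst, float)
            X = np.hstack([src, np.ones((len(src), 1))])
            A, *_ = np.linalg.lstsq(X, dst, rcond=None)
            return A

        src = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
        dst = src @ np.array([[1.02, 0.01], [-0.03, 0.98]]) + [5.0, -2.0]
        print(fit_affine(src, dst))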

  7. MR to CT Registration of Brains using Image Synthesis.

    PubMed

    Roy, Snehashis; Carass, Aaron; Jog, Amod; Prince, Jerry L; Lee, Junghoon

    2014-03-21

    Computed tomography (CT) is the standard imaging modality for patient dose calculation for radiation therapy. Magnetic resonance (MR) imaging (MRI) is used along with CT to identify brain structures due to its superior soft tissue contrast. Registration of MR and CT is necessary for accurate delineation of the tumor and other structures, and is critical in radiotherapy planning. Mutual information (MI) or its variants are typically used as a similarity metric to register MRI to CT. However, unlike CT, MRI intensity does not have an accepted calibrated intensity scale. Therefore, MI-based MR-CT registration may vary from scan to scan as MI depends on the joint histogram of the images. In this paper, we propose a fully automatic framework for MR-CT registration by synthesizing a synthetic CT image from MRI using a co-registered pair of MR and CT images as an atlas. Patches of the subject MRI are matched to the atlas and the synthetic CT patches are estimated in a probabilistic framework. The synthetic CT is registered to the original CT using a deformable registration and the computed deformation is applied to the MRI. In contrast to most existing methods, we do not need any manual intervention such as picking landmarks or regions of interest. The proposed method was validated on ten brain cancer patient cases, showing 25% improvement in MI and correlation between MR and CT images after registration compared to state-of-the-art registration methods.

  8. MR to CT registration of brains using image synthesis

    NASA Astrophysics Data System (ADS)

    Roy, Snehashis; Carass, Aaron; Jog, Amod; Prince, Jerry L.; Lee, Junghoon

    2014-03-01

    Computed tomography (CT) is the preferred imaging modality for patient dose calculation for radiation therapy. Magnetic resonance (MR) imaging (MRI) is used along with CT to identify brain structures due to its superior soft tissue contrast. Registration of MR and CT is necessary for accurate delineation of the tumor and other structures, and is critical in radiotherapy planning. Mutual information (MI) or its variants are typically used as a similarity metric to register MRI to CT. However, unlike CT, MRI intensity does not have an accepted calibrated intensity scale. Therefore, MI-based MR-CT registration may vary from scan to scan as MI depends on the joint histogram of the images. In this paper, we propose a fully automatic framework for MR-CT registration by synthesizing a synthetic CT image from MRI using a co-registered pair of MR and CT images as an atlas. Patches of the subject MRI are matched to the atlas and the synthetic CT patches are estimated in a probabilistic framework. The synthetic CT is registered to the original CT using a deformable registration and the computed deformation is applied to the MRI. In contrast to most existing methods, we do not need any manual intervention such as picking landmarks or regions of interest. The proposed method was validated on ten brain cancer patient cases, showing 25% improvement in MI and correlation between MR and CT images after registration compared to state-of-the-art registration methods.

  9. Infrared thermal facial image sequence registration analysis and verification

    NASA Astrophysics Data System (ADS)

    Chen, Chieh-Li; Jian, Bo-Lin

    2015-03-01

    To study the emotional responses of subjects to the International Affective Picture System (IAPS), the infrared thermal facial image sequence is preprocessed for registration before further analysis, so that the variance caused by minor and irregular subject movements is reduced. Without affecting subject comfort and while inducing minimal harm, this study proposes an infrared thermal facial image sequence registration process that reduces the deviations caused by the unconscious head shaking of the subjects. A fixed image for registration is produced through the localization of the centroid of the eye region as well as image translation and rotation processes. The thermal image sequence is then automatically registered using the proposed two-stage genetic algorithm. The deviation before and after image registration is demonstrated by image quality indices. The results show that the infrared thermal image sequence registration process proposed in this study is effective in localizing facial images accurately, which will be beneficial to the correlation analysis of psychological information related to the facial area.

  10. Avoiding Stair-Step Artifacts in Image Registration for GOES-R Navigation and Registration Assessment

    NASA Technical Reports Server (NTRS)

    Grycewicz, Thomas J.; Tan, Bin; Isaacson, Peter J.; De Luccia, Frank J.; Dellomo, John

    2016-01-01

    In developing software for independent verification and validation (IVV) of the Image Navigation and Registration (INR) capability for the Geostationary Operational Environmental Satellite R Series (GOES-R) Advanced Baseline Imager (ABI), we have encountered an image registration artifact which limits the accuracy of image offset estimation at the subpixel scale using image correlation. Where the two images to be registered have the same pixel size, subpixel image registration preferentially selects registration values where the image pixel boundaries are close to lined up. Because of the shape of a curve plotting input displacement to estimated offset, we call this a stair-step artifact. When one image is at a higher resolution than the other, the stair-step artifact is minimized by correlating at the higher resolution. For validating ABI image navigation, GOES-R images are correlated with Landsat-based ground truth maps. To create the ground truth map, the Landsat image is first transformed to the perspective seen from the GOES-R satellite, and then is scaled to an appropriate pixel size. Minimizing processing time motivates choosing the map pixels to be the same size as the GOES-R pixels. At this pixel size image processing of the shift estimate is efficient, but the stair-step artifact is present. If the map pixel is very small, stair-step is not a problem, but image correlation is computation-intensive. This paper describes simulation-based selection of the scale for truth maps for registering GOES-R ABI images.
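    The sketch below shows the kind of subpixel shift estimator at issue: the integer peak of an FFT-based cross-correlation is refined by a parabolic fit. When both signals share the same sample grid, such estimators tend to be biased toward integer offsets (the stair-step effect described above); correlating against a finer-resolution truth map reduces the bias. This is an illustrative 1D analogue, not the GOES-R IVV code.

        import numpy as np

        def subpixel_shift_1d(a, b):
            """Return r such that a is approximately np.roll(b, round(r))."""
            corr = np.fft.ifft(np.fft.fft(a) * np.conj(np.fft.fft(b))).real
            k = int(np.argmax(corr))
            y0, y1, y2 = corr[(k - 1) % len(a)], corr[k], corr[(k + 1) % len(a)]
            frac = 0.5 * (y0 - y2) / (y0 - 2.0 * y1 + y2)   # parabolic peak refinement
            shift = k + frac
            return shift - len(a) if shift > len(a) / 2 else shift

        a = np.sin(np.linspace(0.0, 8.0 * np.pi, 256))
        b = np.roll(a, 5)
        print(subpixel_shift_1d(a, b))   # close to -5: a equals b rolled back by 5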

  11. SU-E-J-137: Image Registration Tool for Patient Setup in Korea Heavy Ion Medical Accelerator Center

    SciTech Connect

    Kim, M; Suh, T; Cho, W; Jung, W

    2015-06-15

    Purpose: A potential validation tool for compensating patient positioning error was developed using 2D/3D and 3D/3D image registration. Methods: For 2D/3D registration, digitally reconstructed radiography (DRR) and three-dimensional computed tomography (3D-CT) images were applied. The ray-casting algorithm is the most straightforward method for generating DRRs. We adopted the traditional ray-casting method, which finds the intersections of a ray with all objects (voxels of the 3D-CT volume) in the scene. The similarity between the extracted DRR and the orthogonal image was measured using a normalized mutual information method. Two orthogonal images were acquired from a CyberKnife system from the anterior-posterior (AP) and right lateral (RL) views. The 3D-CT and two orthogonal images of an anthropomorphic phantom and a head and neck cancer patient were used in this study. For 3D/3D registration, planning CT and in-room CT images were applied. After registration, the translation and rotation factors were calculated to position a couch that is movable in six dimensions. Results: Registration accuracies with average errors of 2.12 mm ± 0.50 mm for translations and 1.23° ± 0.40° for rotations were acquired by 2D/3D registration using an anthropomorphic Alderson-Rando phantom. In addition, registration accuracies with average errors of 0.90 mm ± 0.30 mm for translations and 1.00° ± 0.2° for rotations were acquired using CT image sets. Conclusion: We demonstrated that this validation tool could compensate for patient positioning error. In addition, this research could be a fundamental step toward compensating for patient positioning error at the first Korea heavy-ion medical accelerator treatment center.
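    A minimal sketch of the normalized mutual information similarity used to compare a DRR with an orthogonal projection image, computed from a joint intensity histogram. The bin count and the particular NMI definition, (H(A) + H(B)) / H(A, B), are illustrative choices and not necessarily those of the system described above.

        import numpy as np

        def normalized_mutual_information(a, b, bins=64):
            hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
            p_ab = hist / hist.sum()                   # joint probability
            p_a, p_b = p_ab.sum(axis=1), p_ab.sum(axis=0)
            entropy = lambda p: -np.sum(p[p > 0] * np.log(p[p > 0]))
            return (entropy(p_a) + entropy(p_b)) / entropy(p_ab.ravel())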

  12. Deformable image registration between pathological images and MR image via an optical macro image.

    PubMed

    Ohnishi, Takashi; Nakamura, Yuka; Tanaka, Toru; Tanaka, Takuya; Hashimoto, Noriaki; Haneishi, Hideaki; Batchelor, Tracy T; Gerstner, Elizabeth R; Taylor, Jennie W; Snuderl, Matija; Yagi, Yukako

    2016-10-01

    Computed tomography (CT) and magnetic resonance (MR) imaging have been widely used for visualizing the inside of the human body. However, in many cases, pathological diagnosis is conducted through a biopsy or resection of an organ to evaluate the condition of tissues as a definitive diagnosis. To provide more advanced information on a CT or MR image, it is necessary to reveal the relationship between tissue information and image signals. We propose a registration scheme for a set of pathological (PT) images of divided specimens and a 3D-MR image by reference to an optical macro image (OM image) captured by an optical camera. We conducted a fundamental study using a resected human brain after the death of a brain cancer patient. We constructed two kinds of registration processes, using the OM image as the base for both registrations, to obtain conversion parameters between the PT and MR images. The aligned PT images had shapes similar to the OM image. On the other hand, the extracted cross-sectional MR image was similar to the OM image. From these resultant conversion parameters, the corresponding region on the PT image could be searched and displayed when an arbitrary pixel on the MR image was selected. The relationship between the PT and MR images of the whole brain can be analyzed using the proposed method. We confirmed that the same regions between the PT and MR images could be searched and displayed using the resultant information obtained by the proposed method. In terms of the accuracy of the proposed method, the TREs were 0.56 ± 0.39 mm and 0.87 ± 0.42 mm. We can analyze the relationship between tissue information and MR signals using the proposed method.

  13. Deformable image registration between pathological images and MR image via an optical macro image

    PubMed Central

    Ohnishi, Takashi; Nakamura, Yuka; Tanaka, Toru; Tanaka, Takuya; Hashimoto, Noriaki; Haneishi, Hideaki; Batchelor, Tracy T.; Gerstner, Elizabeth R.; Taylor, Jennie W.; Snuderl, Matija; Yagi, Yukako

    2016-01-01

    Computed tomography (CT) and magnetic resonance (MR) imaging have been widely used for visualizing the inside of the human body. However, in many cases, pathological diagnosis is conducted through a biopsy or resection of an organ to evaluate the condition of tissues as a definitive diagnosis. To provide more advanced information on a CT or MR image, it is necessary to reveal the relationship between tissue information and image signals. We propose a registration scheme for a set of pathological (PT) images of divided specimens and a 3D-MR image by reference to an optical macro image (OM image) captured by an optical camera. We conducted a fundamental study using a resected human brain after the death of a brain cancer patient. We constructed two kinds of registration processes, using the OM image as the base for both registrations, to obtain conversion parameters between the PT and MR images. The aligned PT images had shapes similar to the OM image. On the other hand, the extracted cross-sectional MR image was similar to the OM image. From these resultant conversion parameters, the corresponding region on the PT image could be searched and displayed when an arbitrary pixel on the MR image was selected. The relationship between the PT and MR images of the whole brain can be analyzed using the proposed method. We confirmed that the same regions between the PT and MR images could be searched and displayed using the resultant information obtained by the proposed method. In terms of the accuracy of the proposed method, the TREs were 0.56 ± 0.39 mm and 0.87 ± 0.42 mm. We can analyze the relationship between tissue information and MR signals using the proposed method. PMID:27613662

  14. On Limits of Embedding in 3D Images Based on 2D Watson's Model

    NASA Astrophysics Data System (ADS)

    Kavehvash, Zahra; Ghaemmaghami, Shahrokh

    We extend the Watson image quality metric to 3D images through the concept of integral imaging. In Watson's model, perceptual thresholds for changes to the DCT coefficients of a 2D image are given for information hiding. These thresholds are estimated such that the resulting distortion in the 2D image remains undetectable by the human eye. In this paper, the same perceptual thresholds are estimated for a 3D scene in the integral imaging method. These thresholds are obtained based on Watson's model using the relation between the 2D elemental images and the resulting 3D image. The proposed model is evaluated through subjective tests in a typical image steganography scheme.

  15. Nonrigid registration of dynamic medical imaging data using nD + t B-splines and a groupwise optimization approach.

    PubMed

    Metz, C T; Klein, S; Schaap, M; van Walsum, T; Niessen, W J

    2011-04-01

    A registration method for motion estimation in dynamic medical imaging data is proposed. Registration is performed directly on the dynamic image, thus avoiding a bias towards a specifically chosen reference time point. Both spatial and temporal smoothness of the transformations are taken into account. Optionally, cyclic motion can be imposed, which can be useful for visualization (viewing the segmentation sequentially) or model building purposes. The method is based on a 3D (2D+time) or 4D (3D+time) free-form B-spline deformation model, a similarity metric that minimizes the intensity variances over time and constrained optimization using a stochastic gradient descent method with adaptive step size estimation. The method was quantitatively compared with existing registration techniques on synthetic data and 3D+t computed tomography data of the lungs. This showed subvoxel accuracy while delivering smooth transformations, and high consistency of the registration results. Furthermore, the accuracy of semi-automatic derivation of left ventricular volume curves from 3D+t computed tomography angiography data of the heart was evaluated. On average, the deviation from the curves derived from the manual annotations was approximately 3%. The potential of the method for other imaging modalities was shown on 2D+t ultrasound and 2D+t magnetic resonance images. The software is publicly available as an extension to the registration package elastix.
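    A sketch of the groupwise similarity idea described above: at each voxel the metric penalizes the intensity variance across time points, so minimizing it pulls all frames toward a common, implicit reference. The B-spline transformation model and the stochastic gradient descent optimizer are omitted; frames is assumed to be an array of already-resampled time points.

        import numpy as np

        def variance_over_time_metric(frames):
            """frames: array of shape (T, ...) holding the resampled time points."""
            frames = np.asarray(frames, dtype=float)
            return float(np.mean(np.var(frames, axis=0)))  # mean temporal variance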

  16. Multifractal analysis of 2D gray soil images

    NASA Astrophysics Data System (ADS)

    González-Torres, Ivan; Losada, Juan Carlos; Heck, Richard; Tarquis, Ana M.

    2015-04-01

    Soil structure, understood as the spatial arrangement of soil pores, is one of the key factors in soil modelling processes. Geometric properties of individual pores and interpretations of their morphological parameters can be estimated from thin sections or 3D Computed Tomography images (Tarquis et al., 2003), but there is no satisfactory method to binarize these images and quantify the complexity of their spatial arrangement (Tarquis et al., 2008; Tarquis et al., 2009; Baveye et al., 2010). The objective of this work was to apply a multifractal technique, the singularity (α) and f(α) spectra, to quantify it without applying any threshold (González-Torres, 2014). Intact soil samples were collected from four horizons of an Argisol, formed on the Tertiary Barreiras group of formations in Pernambuco state, Brazil (Itapirema Experimental Station). The natural vegetation of the region is tropical, coastal rainforest. From each horizon, showing different porosities and spatial arrangements, three adjacent samples were taken, giving a set of twelve samples. The intact soil samples were imaged using an EVS (now GE Medical, London, Canada) MS-8 MicroCT scanner with 45 μm pixel-1 resolution (256x256 pixels). Though some samples required paring to fit the 64 mm diameter imaging tubes, field orientation was maintained. References Baveye, P.C., M. Laba, W. Otten, L. Bouckaert, P. Dello, R.R. Goswami, D. Grinev, A. Houston, Yaoping Hu, Jianli Liu, S. Mooney, R. Pajor, S. Sleutel, A. Tarquis, Wei Wang, Qiao Wei, Mehmet Sezgin. Observer-dependent variability of the thresholding step in the quantitative analysis of soil images and X-ray microtomography data. Geoderma, 157, 51-63, 2010. González-Torres, Iván. Theory and application of multifractal analysis methods in images for the study of soil structure. Master thesis, UPM, 2014. Tarquis, A.M., R.J. Heck, J.B. Grau, J. Fabregat, M.E. Sanchez and J.M. Antón. Influence of Thresholding in Mass and Entropy Dimension of 3-D

  17. The agreement between 3D, standard 2D and triplane 2D speckle tracking: effects of image quality and 3D volume rate.

    PubMed

    Trache, Tudor; Stöbe, Stephan; Tarr, Adrienn; Pfeiffer, Dietrich; Hagendorff, Andreas

    2014-12-01

    Comparison of 3D and 2D speckle tracking performed on standard 2D and triplane 2D datasets of normal and pathological left ventricular (LV) wall-motion patterns, with a focus on the effect that 3D volume rate (3DVR), image quality and tracking artifacts have on the agreement between 2D and 3D speckle tracking. 37 patients with normal LV function and 18 patients with ischaemic wall-motion abnormalities underwent 2D and 3D echocardiography, followed by offline speckle tracking measurements. The values of 3D global, regional and segmental strain were compared with the standard 2D and triplane 2D strain values. Correlation analysis with the LV ejection fraction (LVEF) was also performed. The 3D and 2D global strain values correlated well in both normally and abnormally contracting hearts, though systematic differences between the two methods were observed. Of the 3D strain parameters, the area strain showed the best correlation with the LVEF. The numerical agreement of the 3D and 2D analyses varied significantly with the volume rate and image quality of the 3D datasets. The highest correlation between 2D and 3D peak systolic strain values was found between 3D area strain and standard 2D longitudinal strain. Regional wall-motion abnormalities were similarly detected by 2D and 3D speckle tracking. 2D speckle tracking of triplane datasets showed results similar to those of conventional 2D datasets. 2D and 3D speckle tracking similarly detect normal and pathological wall-motion patterns. Limited image quality has a significant impact on the agreement between 3D and 2D numerical strain values.

  18. Comparison of similarity measures for rigid-body CT/Dual X-ray image registrations.

    PubMed

    Kim, Jinkoo; Li, Shidong; Pradhan, Deepak; Hammoud, Rabih; Chen, Qing; Yin, Fang-Fang; Zhao, Yang; Kim, Jae Ho; Movsas, Benjamin

    2007-08-01

    A set of experiments was conducted to evaluate six similarity measures for intensity-based rigid-body 3D/2D image registration. A similarity measure is an index that quantifies the similarity between a digitally reconstructed radiograph (DRR) and an x-ray planar image. The registration is accomplished by maximizing the sum of the similarity measures between biplane x-ray images and the corresponding DRRs in an iterative fashion. We evaluated the accuracy and attraction ranges of registrations using six different similarity measures in phantom experiments for the head, thorax, and pelvis. The images were acquired using the Varian Medical Systems On-Board Imager. Our results indicated that normalized cross correlation and entropy of difference showed a wide attraction range (62 deg and 83 mm mean attraction range, omega(mean)), but the worst accuracy (4.2 mm maximum error, e(max)). The gradient-based similarity measures, gradient correlation and gradient difference, and the pattern intensity showed sub-millimeter accuracy, but narrow attraction ranges (omega(mean) = 29 deg, 31 mm). Mutual information was in between these two groups (e(max) = 2.5 mm, omega(mean) = 48 deg, 52 mm). On data from 120 x-ray pairs from eight prostate patients in an IRB-approved study, the gradient difference showed the best accuracy. In clinical applications, registrations starting with mutual information followed by the gradient difference may provide the best accuracy and the most robustness.
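    As a small illustration, the sketch below computes one of the similarity measures compared above, normalized cross correlation between a DRR and an x-ray projection; the other measures (gradient correlation, gradient difference, pattern intensity, mutual information, entropy of difference) follow analogous per-pixel patterns.

        import numpy as np

        def normalized_cross_correlation(drr, xray):
            a = drr.astype(float) - drr.mean()
            b = xray.astype(float) - xray.mean()
            return float(np.sum(a * b) / (np.sqrt(np.sum(a * a) * np.sum(b * b)) + 1e-12))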

  19. Restricted surface matching: a new registration method for medical images

    NASA Astrophysics Data System (ADS)

    Gong, JianXing; Zamorano, Lucia J.; Jiang, Zhaowei; Nolte, Lutz P.; Diaz, Fernando

    1998-06-01

    Since its introduction to neurological surgery in the early 1980s, computer assisted surgery (CAS) with and without robotic navigation has been applied to several medical fields. The issue common to all CAS systems is registration between pre-operative 3D image modalities (for example, CT, MRI, or PET) and the 3D image reference of the patient in the operating room. At Wayne State University, a new method for medical image registration has been introduced that differs from traditional fiducial point registration and surface registration. We call it restricted surface matching (RSM). The method is fast, convenient, accurate and robust, and it combines the advantages of the two registration methods mentioned above. Because a penalty function is introduced into its cost function to restrict the search, it is called 'RSM'. The surface of a 3D image modality is pre-operatively extracted using segmentation techniques, and a distance map is created from this surface. The surface of the other 3D reference is represented by a cloud of 3D points. At least three rough landmarks are used to keep the registration from straying far from the global minimum. The local minimum issue is addressed by the restriction in the cost function and a larger number of random starting points. The accuracy of matching is achieved by gradually releasing the restriction and limiting the influence of outliers. It needs only about half a minute to find the global minimum (for 256 X 256 X 56 images) on a Sun SPARCstation 10.
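    A sketch of the restricted-surface-matching cost described above: a distance map is precomputed from the segmented surface of the 3D image, and a candidate rigid pose is scored by summing the sampled distances at the transformed point cloud plus a penalty on the residuals of a few rough landmarks. The weighting, the schedule for releasing the restriction, and the optimizer are illustrative guesses rather than the authors' implementation.

        import numpy as np
        from scipy.ndimage import distance_transform_edt, map_coordinates

        def rsm_cost(R, t, points, dist_map, lm_points, lm_targets, penalty=1.0):
            """Score a rigid pose (R, t); points and landmarks are in voxel coordinates."""
            p = points @ R.T + t                                       # transformed cloud (N, 3)
            surf_term = map_coordinates(dist_map, p.T, order=1).sum()  # distance-to-surface term
            lm = lm_points @ R.T + t
            lm_term = np.sum((lm - lm_targets) ** 2)                   # landmark penalty
            return surf_term + penalty * lm_term

        # dist_map holds, for every voxel, the distance to the segmented surface:
        # dist_map = distance_transform_edt(~surface_mask)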

  20. Symmetric Biomechanically Guided Prone-to-Supine Breast Image Registration.

    PubMed

    Eiben, Björn; Vavourakis, Vasileios; Hipwell, John H; Kabus, Sven; Buelow, Thomas; Lorenz, Cristian; Mertzanidou, Thomy; Reis, Sara; Williams, Norman R; Keshtgar, Mohammed; Hawkes, David J

    2016-01-01

    Prone-to-supine breast image registration has potential application in the fields of surgical and radiotherapy planning, image guided interventions, and multi-modal cancer diagnosis, staging, and therapy response prediction. However, breast image registration of three dimensional images acquired in different patient positions is a challenging problem, due to the large deformations induced in the soft breast tissue by the change in gravity loading. We present a symmetric, biomechanical simulation based registration framework which aligns the images in a central, virtually unloaded configuration. The breast tissue is modelled as a neo-Hookean material and gravity is considered as the main source of deformation in the original images. In addition to gravity, our framework successively applies image-derived forces directly into the unloading simulation in place of a subsequent image registration step. This results in a biomechanically constrained deformation. Using a finite difference scheme avoids an explicit meshing step and enables simulations to be performed directly in the image space. The explicit time integration scheme allows the motion at the interface between chest and breast to be constrained along the chest wall. The feasibility and accuracy of the approach presented here were assessed by measuring the target registration error (TRE) using a numerical phantom with known ground truth deformations, nine clinical prone MRI and supine CT image pairs, one clinical prone-supine CT image pair and four prone-supine MRI image pairs. The registration reduced the mean TRE for the numerical phantom experiment from initially 19.3 to 0.9 mm and the combined mean TRE for all fourteen clinical data sets from 69.7 to 5.6 mm.

  1. Cross contrast multi-channel image registration using image synthesis for MR brain images.

    PubMed

    Chen, Min; Carass, Aaron; Jog, Amod; Lee, Junghoon; Roy, Snehashis; Prince, Jerry L

    2017-02-01

    Multi-modal deformable registration is important for many medical image analysis tasks such as atlas alignment, image fusion, and distortion correction. Whereas a conventional method would register images with different modalities using modality-independent features or information theoretic metrics such as mutual information, this paper presents a new framework that addresses the problem using a two-channel registration algorithm capable of using mono-modal similarity measures such as sum of squared differences or cross-correlation. To make it possible to use these same-modality measures, image synthesis is used to create proxy images for the opposite modality as well as intensity-normalized images from each of the two available images. The new deformable registration framework was evaluated by performing intra-subject deformation recovery, intra-subject boundary alignment, and inter-subject label transfer experiments using multi-contrast magnetic resonance brain imaging data. Three different multi-channel registration algorithms were evaluated, revealing that the framework is robust to the multi-channel deformable registration algorithm that is used. With a single exception, all results demonstrated improvements when compared against single-channel registrations using the same algorithm with mutual information.

  2. A 2-D imaging heat-flux gauge

    SciTech Connect

    Noel, B.W.; Borella, H.M. ); Beshears, D.L.; Sartory, W.K.; Tobin, K.W.; Williams, R.K. ); Turley, W.D. . Santa Barbara Operations)

    1991-07-01

    This report describes a new leadless two-dimensional imaging optical heat-flux gauge. The gauge is made by depositing arrays of thermographic-phosphor (TP) spots onto the faces of a polymethylpentene insulator. In the first section of the report, we describe several gauge configurations and their prototype realizations. A satisfactory configuration is an array of right triangles on each face that overlay to form squares when the gauge is viewed normal to the surface. The next section of the report treats the thermal conductivity of TPs. We set up an experiment using a comparative longitudinal heat-flow apparatus to measure the previously unknown thermal conductivity of these materials. The thermal conductivity of one TP, Y2O3:Eu, is 0.0137 W/cm·K over the temperature range from about 300 to 360 K. The theories underlying the time response of TP gauges and the imaging characteristics are discussed in the next section. Then we discuss several laboratory experiments to (1) demonstrate that the TP heat-flux gauge can be used in imaging applications; (2) obtain a quantum yield that enumerates what typical optical output signal amplitudes can be obtained from TP heat-flux gauges; and (3) determine whether LANL-designed intensified video cameras have sufficient sensitivity to acquire images from the heat-flux gauges. We obtained positive results from all the measurements. Throughout the text, we note limitations, areas where improvements are needed, and where further research is necessary. 12 refs., 25 figs., 4 tabs.

  3. 3-D Deep Penetration Photoacoustic Imaging with a 2-D CMUT Array.

    PubMed

    Ma, Te-Jen; Kothapalli, Sri Rajasekhar; Vaithilingam, Srikant; Oralkan, Omer; Kamaya, Aya; Wygant, Ira O; Zhuang, Xuefeng; Gambhir, Sanjiv S; Jeffrey, R Brooke; Khuri-Yakub, Butrus T

    2010-10-11

    In this work, we demonstrate 3-D photoacoustic imaging of optically absorbing targets embedded as deep as 5 cm inside a highly scattering background medium using a 2-D capacitive micromachined ultrasonic transducer (CMUT) array with a center frequency of 5.5 MHz. 3-D volumetric images and 2-D maximum intensity projection images are presented to show the objects imaged at different depths. Due to the close proximity of the CMUT to the integrated frontend circuits, the CMUT array imaging system has a low noise floor. This makes the CMUT a promising technology for deep tissue photoacoustic imaging.

  4. Estimating the joint statistics of images using nonparametric windows with application to registration using mutual information.

    PubMed

    Dowson, Nicholas; Kadir, Timor; Bowden, Richard

    2008-10-01

    Recently, the Nonparametric (NP) Windows method has been proposed to estimate the statistics of real 1D and 2D signals. NP Windows is accurate because it is equivalent to sampling images at a high (infinite) resolution for an assumed interpolation model. First, this paper extends the proposed approach to consider joint distributions of image pairs. Second, Green's Theorem is used to simplify the previous NP Windows algorithm. Finally, a resolution-aware NP Windows algorithm is proposed to improve robustness to relative scaling between an image pair. Comparative testing of 2D image registration was performed using translation-only and affine transformations. Although more expensive than other methods, NP Windows frequently demonstrated superior performance for bias (distance between ground truth and global maximum) and frequency of convergence. Unlike other methods, the number of samples and the number of bins have little effect on NP Windows, and the prior selection of a kernel is not required.

  5. Semi-automatic elastic registration on thyroid gland ultrasonic image

    NASA Astrophysics Data System (ADS)

    Xu, Xia; Zhong, Yue; Luo, Yan; Li, Deyu; Lin, Jiangli; Wang, Tianfu

    2007-12-01

    Knowledge of in vivo thyroid volume has both diagnostic and therapeutic importance and could lead to a more precise quantification of the absolute activity contained in the thyroid gland. However, the shape of the thyroid gland is irregular and its volume is difficult to calculate. For precise estimation of thyroid volume by ultrasound imaging, this paper presents a novel semiautomatic minutiae matching method for thyroid gland ultrasound images by means of a thin-plate spline model. Registration consists of four basic steps: feature detection, feature matching, mapping function design, and image transformation and resampling. Because of the connectivity of the thyroid gland boundary, we choose an active contour model as the feature detector and radial lines from centroid points for feature matching. The proposed approach has been used in thyroid gland ultrasound image registration. Registration results on the thyroid gland ultrasound images of 18 healthy adults show that this method consumes less time and effort and is more objective than algorithms in which landmarks are selected manually.
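    A sketch of the thin-plate spline mapping underlying the method described above: matched boundary minutiae from the two ultrasound images define a smooth 2D warp that can then be evaluated at any point of the moving image. The landmark coordinates are made up for illustration, and SciPy's RBFInterpolator is used as a stand-in for the paper's TPS model.

        import numpy as np
        from scipy.interpolate import RBFInterpolator

        src = np.array([[10.0, 12.0], [40.0, 15.0], [35.0, 50.0], [12.0, 45.0], [25.0, 30.0]])
        dst = src + np.array([[1.5, -0.5], [2.0, 0.5], [0.5, 1.5], [-1.0, 1.0], [0.5, 0.5]])

        tps = RBFInterpolator(src, dst, kernel="thin_plate_spline")  # 2D TPS warp
        query = np.array([[20.0, 25.0], [30.0, 40.0]])
        print(tps(query))   # warped positions of arbitrary points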

  6. Registration and 3D visualization of large microscopy images

    NASA Astrophysics Data System (ADS)

    Mosaliganti, Kishore; Pan, Tony; Sharp, Richard; Ridgway, Randall; Iyengar, Srivathsan; Gulacy, Alexandra; Wenzel, Pamela; de Bruin, Alain; Machiraju, Raghu; Huang, Kun; Leone, Gustavo; Saltz, Joel

    2006-03-01

    Inactivation of the retinoblastoma gene in mouse embryos causes tissue infiltrations into critical sections of the placenta, which has been shown to affect fetal survivability. Our collaborators in cancer genetics are extremely interested in examining the three dimensional nature of these infiltrations given a stack of two dimensional light microscopy images. Three sets of wildtype and mutant placentas were sectioned serially and digitized using a commercial light microscopy scanner. Each individual placenta dataset consisted of approximately 1000 images totaling 700 GB in size, which were registered into a volumetric dataset using the National Library of Medicine's (NIH/NLM) Insight Segmentation and Registration Toolkit (ITK). This paper describes our method for image registration to aid in volume visualization of tissue level intermixing for both wildtype and Rb- specimens. The registration process faces many challenges arising from the large image sizes, damage during sectioning, staining gradients both within and across sections, and background noise. These issues limit the direct application of standard registration techniques due to frequent convergence to local solutions. In this work, we develop a mixture of automated and semi-automated enhancements with ground-truth validation for the mutual information-based registration algorithm. Our final volume renderings clearly show tissue intermixing differences between wildtype and Rb- specimens which are not obvious prior to registration.

  7. Evaluating the utility of 3D TRUS image information in guiding intra-procedure registration for motion compensation

    NASA Astrophysics Data System (ADS)

    De Silva, Tharindu; Cool, Derek W.; Romagnoli, Cesare; Fenster, Aaron; Ward, Aaron D.

    2014-03-01

    In targeted 3D transrectal ultrasound (TRUS)-guided biopsy, patient and prostate movement during the procedure can cause target misalignments that hinder accurate sampling of pre-planned suspicious tissue locations. Multiple solutions have been proposed for motion compensation via registration of intra-procedural TRUS images to a baseline 3D TRUS image acquired at the beginning of the biopsy procedure. While 2D TRUS images are widely used for intra-procedural guidance, some solutions utilize richer intra-procedural images such as bi- or multi-planar TRUS or 3D TRUS, acquired by specialized probes. In this work, we measured the impact of such richer intra-procedural imaging on motion compensation accuracy, to evaluate the tradeoff between cost and complexity of intra-procedural imaging versus improved motion compensation. We acquired baseline and intra-procedural 3D TRUS images from 29 patients at standard sextant-template biopsy locations. We used the planes extracted from the 3D intra-procedural scans to simulate 2D and 3D information available in different clinically relevant scenarios for registration. The registration accuracy was evaluated by calculating the target registration error (TRE) using manually identified homologous fiducial markers (micro-calcifications). Our results indicate that TRE improves gradually when the number of intra-procedural imaging planes used in registration is increased. Full 3D TRUS information helps the registration algorithm to robustly converge to more accurate solutions. These results can also inform the design of a fail-safe workflow during motion compensation in a system using a tracked 2D TRUS probe, by prescribing rotational acquisitions that can be performed quickly and easily by the physician immediately prior to needle targeting.

  8. Diffeomorphic Registration of Images with Variable Contrast Enhancement

    PubMed Central

    Janssens, Guillaume; Jacques, Laurent; Orban de Xivry, Jonathan; Geets, Xavier; Macq, Benoit

    2011-01-01

    Nonrigid image registration is widely used to estimate tissue deformations in highly deformable anatomies. Among the existing methods, nonparametric registration algorithms such as optical flow, or Demons, usually have the advantage of being fast and easy to use. Recently, a diffeomorphic version of the Demons algorithm was proposed. This provides the advantage of producing invertible displacement fields, which is a necessary condition for these to be physical. However, such methods are based on the matching of intensities and are not suitable for registering images with different contrast enhancement. In such cases, a registration method based on the local phase like the Morphons has to be used. In this paper, a diffeomorphic version of the Morphons registration method is proposed and compared to conventional Morphons, Demons, and diffeomorphic Demons. The method is validated in the context of radiotherapy for lung cancer patients on several 4D respiratory-correlated CT scans of the thorax with and without variable contrast enhancement. PMID:21197460

  9. Robust methods for automatic image-to-world registration in cone-beam CT interventional guidance

    SciTech Connect

    Dang, H.; Otake, Y.; Schafer, S.; Stayman, J. W.; Kleinszig, G.; Siewerdsen, J. H.

    2012-10-15

    Purpose: Real-time surgical navigation relies on accurate image-to-world registration to align the coordinate systems of the image and patient. Conventional manual registration can present a workflow bottleneck and is prone to manual error and intraoperator variability. This work reports alternative means of automatic image-to-world registration, each method involving an automatic registration marker (ARM) used in conjunction with C-arm cone-beam CT (CBCT). The first involves a Known-Model registration method in which the ARM is a predefined tool, and the second is a Free-Form method in which the ARM is freely configurable. Methods: Studies were performed using a prototype C-arm for CBCT and a surgical tracking system. A simple ARM was designed with markers comprising a tungsten sphere within infrared reflectors to permit detection of markers in both x-ray projections and by an infrared tracker. The Known-Model method exercised a predefined specification of the ARM in combination with 3D-2D registration to estimate the transformation that yields the optimal match between the forward projection of the ARM and the measured projection images. The Free-Form method localizes markers individually in projection data by a robust Hough transform approach extended from previous work, backprojected to 3D image coordinates based on C-arm geometric calibration. Image-domain point sets were transformed to world coordinates by rigid-body point-based registration. The robustness and registration accuracy of each method were tested in comparison to manual registration across a range of body sites (head, thorax, and abdomen) of interest in CBCT-guided surgery, including cases with interventional tools in the radiographic scene. Results: The automatic methods exhibited similar target registration error (TRE) and were comparable or superior to manual registration for placement of the ARM within approximately 200 mm of the C-arm isocenter. Marker localization in projection data was robust across all
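    A minimal sketch of the rigid-body point-based registration step mentioned above: given matched 3D marker positions in image and world (tracker) coordinates, a least-squares rotation and translation are recovered with an SVD. This is the standard textbook closed-form solution, not necessarily the exact routine used by the authors.

        import numpy as np

        def rigid_point_registration(img_pts, world_pts):
            """Return R, t minimizing sum ||R @ p_i + t - q_i||^2 over matched points."""
            p, q = np.asarray(img_pts, float), np.asarray(world_pts, float)
            pc, qc = p.mean(axis=0), q.mean(axis=0)
            H = (p - pc).T @ (q - qc)
            U, _, Vt = np.linalg.svd(H)
            D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])  # guard against reflection
            R = Vt.T @ D @ U.T
            t = qc - R @ pc
            return R, t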

  10. Image panoramic mosaicing with global and local registration

    NASA Astrophysics Data System (ADS)

    Li, Qi; Ji, Zhen; Zhang, Jihong

    2001-09-01

    This paper presents techniques for constructing full-view panoramic mosaics from sequences of images. The goal of this work is to remove the restriction to pure panning motion. The best reference block is critical to the robustness and performance of the block-matching method; it is automatically selected in the high-frequency image, which typically contains plentiful visible features. In order to reduce accumulated registration errors, global registration using the phase-correlation matching method with rotation adjustment is applied to the whole sequence of images, which results in an optimal image mosaic resolving translational or rotational motion. Local registration using the Levenberg-Marquardt iterative non-linear minimization algorithm is applied to compensate for small amounts of motion parallax introduced by translations of the camera and other unmodeled distortions, by minimizing the residual discrepancy after global registration. Accumulated misregistration errors may cause a visible gap between two images; a smoothing filter derived from Marr's computer vision theory is introduced to remove this visible artifact. By combining both global and local registration, together with artifact smoothing, the quality of the image mosaics is significantly improved, thereby enabling the creation of full-view panoramic mosaics with hand-held cameras.
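
    The global registration step above relies on phase correlation; a minimal NumPy sketch of that building block (without the rotation adjustment described in the paper) might look as follows.

        import numpy as np

        def phase_correlation_shift(img_a, img_b):
            """Estimate the integer translation of img_b relative to img_a
            from the peak of the normalized cross-power spectrum."""
            Fa, Fb = np.fft.fft2(img_a), np.fft.fft2(img_b)
            cross_power = Fa * np.conj(Fb)
            cross_power /= np.abs(cross_power) + 1e-12            # keep phase information only
            corr = np.abs(np.fft.ifft2(cross_power))
            dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
            if dy > img_a.shape[0] // 2:                          # wrap large shifts to negative values
                dy -= img_a.shape[0]
            if dx > img_a.shape[1] // 2:
                dx -= img_a.shape[1]
            return dy, dx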

  11. Aircraft target identification based on 2D ISAR images using multiresolution analysis wavelet

    NASA Astrophysics Data System (ADS)

    Fu, Qiang; Xiao, Huaitie; Hu, Xiangjiang

    2001-09-01

    The formation of 2D ISAR images for radar target identification holds much promise for additional distinguishability between targets. Since an image contains important information at a wide range of scales, and this information is often independent from one scale to another, wavelet analysis provides a method of identifying the spatial frequency content of an image and the local regions within the image where those spatial frequencies exist. In this paper, a multiresolution analysis wavelet method based on 2D ISAR images is proposed for aircraft radar target identification with wide-band, high range resolution radar. The proposed method is performed in three steps: first, radar backscatter signals are processed into 2D ISAR images; then, Mallat's wavelet algorithm is used to decompose the images; finally, a three-layer perceptron neural network is used as the classifier. Experimental results demonstrate the feasibility of using multiresolution analysis wavelets for target identification.
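
    A compact sketch of the multiresolution decomposition step, using the PyWavelets package rather than the authors' own implementation; the ISAR image is a placeholder array and the wavelet choice is illustrative.

        import numpy as np
        import pywt

        isar_image = np.abs(np.random.randn(128, 128))   # placeholder for a 2D ISAR magnitude image

        # coeffs[0] is the coarsest approximation; coeffs[1:] hold (horizontal,
        # vertical, diagonal) detail bands at successively finer scales.
        coeffs = pywt.wavedec2(isar_image, wavelet="db2", level=3)
        approximation = coeffs[0]
        detail_energy = [sum(float((band ** 2).sum()) for band in level) for level in coeffs[1:]]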

  12. HipMatch: an object-oriented cross-platform program for accurate determination of cup orientation using 2D-3D registration of single standard X-ray radiograph and a CT volume.

    PubMed

    Zheng, Guoyan; Zhang, Xuan; Steppacher, Simon D; Murphy, Stephen B; Siebenrock, Klaus A; Tannast, Moritz

    2009-09-01

    The widely used procedure of evaluating cup orientation following total hip arthroplasty using a single standard anteroposterior (AP) radiograph is known to be inaccurate, largely due to the wide variability in individual pelvic orientation relative to the X-ray plate. 2D-3D image registration methods have been introduced for an accurate determination of the post-operative cup alignment with respect to an anatomical reference extracted from the CT data. Although encouraging results have been reported, their extensive usage in clinical routine is still limited. This may be explained by their requirement of a CAD model of the prosthesis, which is often difficult to obtain from the manufacturer due to proprietary issues, and by their requirement of either multiple radiographs or a radiograph-specific calibration, neither of which is available for most retrospective studies. To address these issues, we developed and validated an object-oriented cross-platform program called "HipMatch" in which a hybrid 2D-3D registration scheme, combining an iterative landmark-to-ray registration with a 2D-3D intensity-based registration, was implemented to estimate a rigid transformation between a pre-operative CT volume and the post-operative X-ray radiograph for a precise estimation of cup alignment. No CAD model of the prosthesis is required. Quantitative and qualitative results evaluated on cadaveric and clinical datasets are given, which indicate the robustness and accuracy of the program. HipMatch is written in the object-oriented programming language C++ using the cross-platform software Qt (TrollTech, Oslo, Norway), VTK, and Coin3D, and is portable to any platform.

  13. Biomechanical based image registration for head and neck radiation treatment

    NASA Astrophysics Data System (ADS)

    Al-Mayah, Adil; Moseley, Joanne; Hunter, Shannon; Velec, Mike; Chau, Lily; Breen, Stephen; Brock, Kristy

    2010-02-01

    Deformable image registration of four head and neck cancer patients was conducted using a biomechanics-based model. Patient-specific 3D finite element models were developed using CT and cone-beam CT image data of the planning session and a radiation treatment session. The model consists of seven vertebrae (C1 to C7), the mandible, larynx, left and right parotid glands, tumor, and body. Different combinations of boundary conditions are applied in the model in order to find the configuration with the minimum registration error. Each vertebra in the planning session is individually aligned with its correspondence in the treatment session. Rigid alignment is used for each individual vertebra and for the mandible, since deformation is not expected in the bones. In addition, the effect of morphological differences in the external body between the two imaging sessions is investigated. The accuracy of the registration is evaluated using the tumor and the left and right parotid glands by comparing the calculated Dice similarity index of these structures following deformation with their true surfaces defined in the image of the second session. The registration improves when the vertebrae and mandible are aligned in the two sessions, with the highest Dice indices of 0.86 ± 0.08, 0.84 ± 0.11, and 0.89 ± 0.04 for the tumor, left, and right parotid glands, respectively. The accuracy of the center-of-mass location of the tumor and parotid glands is also improved by deformable image registration, where the errors decrease from 4.0 ± 1.1, 3.4 ± 1.5, and 3.8 ± 0.9 mm using rigid registration to 2.3 ± 1.0, 2.5 ± 0.8, and 2.0 ± 0.9 mm using deformable image registration when alignment of the vertebrae and mandible is conducted in addition to the surface projection of the body.
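
    The Dice similarity index used for this evaluation is straightforward to compute; a minimal sketch for two binary masks (e.g. a deformed structure and its true delineation in the treatment-session image) follows, with names chosen for illustration.

        import numpy as np

        def dice_index(mask_a, mask_b):
            """Dice similarity coefficient between two boolean volumes of equal shape."""
            a, b = mask_a.astype(bool), mask_b.astype(bool)
            intersection = np.logical_and(a, b).sum()
            return 2.0 * intersection / float(a.sum() + b.sum())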

  14. SAR image registration based on SIFT and MSA

    NASA Astrophysics Data System (ADS)

    Yi, Zhaoxiang; Zhang, Xiongmei; Mu, Xiaodong; Wang, Kui; Song, Jianshe

    2014-02-01

    To address the problem of SAR image registration, an image registration method based on the Scale Invariant Feature Transform (SIFT) and Multi-Scale Autoconvolution (MSA) is proposed. After extracting the SIFT descriptors and the MSA affine-invariant moments of the region around each keypoint, a feature fusion method based on canonical correlation analysis (CCA) is employed to fuse them into a new descriptor. After the control points are roughly matched, the distance and gray-level correlation around the roughly matched points are combined to build a similarity matrix, and the singular value decomposition (SVD) method is employed to achieve precise image registration. Finally, the affine transformation parameters are obtained and the images are registered. Experimental results show that the proposed method outperforms the SIFT method and achieves high accuracy at the sub-pixel level.
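
    For orientation, the plain SIFT matching baseline that the paper compares against can be sketched with OpenCV as below (file names are placeholders); the MSA moments and the CCA-based descriptor fusion of the proposed method are not shown.

        import cv2

        img1 = cv2.imread("sar_reference.png", cv2.IMREAD_GRAYSCALE)   # placeholder file names
        img2 = cv2.imread("sar_sensed.png", cv2.IMREAD_GRAYSCALE)

        sift = cv2.SIFT_create()
        kp1, des1 = sift.detectAndCompute(img1, None)
        kp2, des2 = sift.detectAndCompute(img2, None)

        matcher = cv2.BFMatcher(cv2.NORM_L2)
        matches = matcher.knnMatch(des1, des2, k=2)
        good = [m for m, n in matches if m.distance < 0.75 * n.distance]   # Lowe's ratio test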

  15. Registration of multitemporal aerial optical images using line features

    NASA Astrophysics Data System (ADS)

    Zhao, Chenyang; Goshtasby, A. Ardeshir

    2016-07-01

    Registration of multitemporal images is generally considered difficult because scene changes can occur between the times the images are obtained. Since the changes are mostly radiometric in nature, features are needed that are insensitive to radiometric differences between the images. Lines are geometric features that represent straight edges of rigid man-made structures. Because such structures rarely change over time, lines represent stable geometric features that can be used to register multitemporal remote sensing images. An algorithm to establish correspondence between lines in two images of a planar scene is introduced and formulas to relate the parameters of a homography transformation to the parameters of corresponding lines in images are derived. Results of the proposed image registration on various multitemporal images are presented and discussed.

  16. Longitudinal image registration with non-uniform appearance change.

    PubMed

    Csapo, Istvan; Davis, Brad; Shi, Yundi; Sanchez, Mar; Styner, Martin; Niethammer, Marc

    2012-01-01

    Longitudinal imaging studies are frequently used to investigate temporal changes in brain morphology. Image intensity may also change over time, for example when studying brain maturation. However, such intensity changes are not accounted for in image similarity measures for standard image registration methods. Hence, (i) local similarity measures, (ii) methods estimating intensity transformations between images, and (iii) metamorphosis approaches have been developed to either achieve robustness with respect to intensity changes or to simultaneously capture spatial and intensity changes. For these methods, longitudinal intensity changes are not explicitly modeled and images are treated as independent static samples. Here, we propose a model-based image similarity measure for longitudinal image registration in the presence of spatially non-uniform intensity change.

  17. Improved elastic medical image registration using mutual information

    NASA Astrophysics Data System (ADS)

    Ens, Konstantin; Schumacher, Hanno; Franz, Astrid; Fischer, Bernd

    2007-03-01

    One of the forward-looking areas of medical image processing is the development of fast and exact algorithms for image registration. By combining multi-modal images, we are able to compensate for the disadvantages of one imaging modality with the advantages of another. For instance, a Computed Tomography (CT) image containing the anatomy can be combined with the metabolic information of a Positron Emission Tomography (PET) image. It is quite conceivable that a patient will not have the same position in both imaging systems. Furthermore, some regions, for instance in the abdomen, can vary in shape and position due to different filling of the rectum. Therefore a multi-modal image registration is needed, which calculates a deformation field for one image in order to maximize the similarity between the two images, as described by a so-called distance measure. In this work, we present a method to adapt a multi-modal distance measure, here mutual information (MI), with weighting masks. These masks are used to enhance relevant image structures and to suppress image regions which would otherwise disturb the registration process. The performance of our method is tested on phantom data and real medical images.
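
    As a rough sketch of the underlying similarity measure, the following NumPy code estimates mutual information from a joint intensity histogram and accepts an optional binary mask, a simplified stand-in for the weighting masks described above; names and the bin count are illustrative.

        import numpy as np

        def mutual_information(img_a, img_b, bins=64, mask=None):
            """MI between two spatially aligned images, optionally restricted to a mask."""
            a, b = img_a.ravel(), img_b.ravel()
            if mask is not None:
                keep = mask.ravel() > 0
                a, b = a[keep], b[keep]
            joint, _, _ = np.histogram2d(a, b, bins=bins)
            p_ab = joint / joint.sum()
            p_a, p_b = p_ab.sum(axis=1), p_ab.sum(axis=0)
            nz = p_ab > 0
            return float((p_ab[nz] * np.log(p_ab[nz] / np.outer(p_a, p_b)[nz])).sum())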

  18. Multimodal registration of remotely sensed images based on Jeffrey's divergence

    NASA Astrophysics Data System (ADS)

    Xu, Xiaocong; Li, Xia; Liu, Xiaoping; Shen, Huanfeng; Shi, Qian

    2016-12-01

    Entropy-based measures (e.g., mutual information, also known as Kullback-Leibler divergence), which quantify the similarity between two signals, are widely used as similarity measures for image registration. Although they are proven superior to many classical statistical measures, entropy-based measures such as mutual information may fail to yield the optimum registration if the multimodal image pair has an insufficient scene overlap region. To overcome this challenge, we propose using the symmetric form of the Kullback-Leibler divergence, namely Jeffrey's divergence, as the similarity measure in practical multimodal image registration tasks. A mathematical analysis was performed to investigate the causes of the limitation of mutual information when dealing with image pairs with insufficient scene overlap. Experimental registrations of SPOT, Landsat TM, and ALOS PALSAR images and DEM data were carried out to compare the performance of Jeffrey's divergence and mutual information. Results indicate that Jeffrey's divergence is capable of providing a larger feasible search space, which is favorable for exploring optimum transformation parameters over a larger range. This superiority of Jeffrey's divergence was further confirmed by a series of paradigms. Thus, the proposed model is more applicable for registering image pairs that are greatly misaligned or have an insufficient scene overlap region.
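
    Jeffrey's divergence is simply the symmetrized Kullback-Leibler divergence, J(P, Q) = KL(P||Q) + KL(Q||P). A minimal sketch for two discretized distributions (e.g. joint intensity histograms) is given below; the epsilon regularization is added only to avoid division by zero.

        import numpy as np

        def jeffreys_divergence(p, q, eps=1e-12):
            """Symmetric KL divergence between two histograms/distributions."""
            p = np.asarray(p, dtype=float) + eps
            q = np.asarray(q, dtype=float) + eps
            p, q = p / p.sum(), q / q.sum()
            return float(np.sum(p * np.log(p / q)) + np.sum(q * np.log(q / p)))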

  19. Automatic image registration performance for two different CBCT systems; variation with imaging dose

    NASA Astrophysics Data System (ADS)

    Barber, J.; Sykes, J. R.; Holloway, L.; Thwaites, D. I.

    2014-03-01

    The performance of an automatic image registration algorithm was compared on image sets collected with two commercial CBCT systems, and the relationship with imaging dose was explored. CBCT images of a CIRS Virtually Human Male Pelvis phantom (VHMP) were collected on Varian TrueBeam/OBI and Elekta Synergy/XVI linear accelerators, across a range of mAs settings. Each CBCT image was registered 100 times, with random initial offsets introduced. Image registration was performed using the grey value correlation ratio algorithm in the Elekta XVI software, to a mask of the prostate volume with a 5 mm expansion. Residual registration errors were calculated after correcting for the initial introduced phantom set-up error. Registration performance with the OBI images was similar to that of XVI. There was a clear dependence on imaging dose for the XVI images, with residual errors increasing below 4 mGy. It was not possible to acquire images with doses lower than ~5 mGy with the OBI system, and no evidence of reduced performance was observed at this dose. Registration failures (maximum target registration error > 3.6 mm on the surface of a 30 mm sphere) occurred in 5% to 9% of registrations, except for the lowest dose XVI scan (31%). The uncertainty in automatic image registration with both OBI and XVI images was found to be adequate for clinical use within a normal range of acquisition settings.

  20. PCA-based groupwise image registration for quantitative MRI.

    PubMed

    Huizinga, W; Poot, D H J; Guyader, J-M; Klaassen, R; Coolen, B F; van Kranenburg, M; van Geuns, R J M; Uitterdijk, A; Polfliet, M; Vandemeulebroucke, J; Leemans, A; Niessen, W J; Klein, S

    2016-04-01

    Quantitative magnetic resonance imaging (qMRI) is a technique for estimating quantitative tissue properties, such as the T1 and T2 relaxation times, apparent diffusion coefficient (ADC), and various perfusion measures. This estimation is achieved by acquiring multiple images with different acquisition parameters (or at multiple time points after injection of a contrast agent) and by fitting a qMRI signal model to the image intensities. Image registration is often necessary to compensate for misalignments due to subject motion and/or geometric distortions caused by the acquisition. However, large differences in image appearance make accurate image registration challenging. In this work, we propose a groupwise image registration method for compensating misalignment in qMRI. The groupwise formulation of the method eliminates the requirement of choosing a reference image, thus avoiding a registration bias. The method minimizes a cost function that is based on principal component analysis (PCA), exploiting the fact that intensity changes in qMRI can be described by a low-dimensional signal model, but not requiring knowledge on the specific acquisition model. The method was evaluated on 4D CT data of the lungs, and both real and synthetic images of five different qMRI applications: T1 mapping in a porcine heart, combined T1 and T2 mapping in carotid arteries, ADC mapping in the abdomen, diffusion tensor mapping in the brain, and dynamic contrast-enhanced mapping in the abdomen. Each application is based on a different acquisition model. The method is compared to a mutual information-based pairwise registration method and four other state-of-the-art groupwise registration methods. Registration accuracy is evaluated in terms of the precision of the estimated qMRI parameters, overlap of segmented structures, distance between corresponding landmarks, and smoothness of the deformation. In all qMRI applications the proposed method performed better than or equally well as
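
    One way such a PCA-based criterion can be formed is sketched below: intensities sampled at the same spatial points across the group are collected into a matrix, and the cost is the correlation-matrix variance left unexplained by a low-order model. This is a simplified illustration of the idea, not necessarily the exact metric of the paper; the function name and the model order are assumptions.

        import numpy as np

        def pca_groupwise_dissimilarity(intensity_matrix, model_order=3):
            """intensity_matrix: (num_samples, num_images) intensities sampled at the same
            points across the group after applying the current transforms. Returns the sum
            of the trailing eigenvalues of the correlation matrix: a well-aligned qMRI series
            should be explained by a low-dimensional signal model."""
            corr = np.corrcoef(intensity_matrix, rowvar=False)     # (num_images, num_images)
            eigvals = np.sort(np.linalg.eigvalsh(corr))[::-1]      # descending eigenvalues
            return float(eigvals[model_order:].sum())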

  1. Manifold learning based registration algorithms applied to multimodal images.

    PubMed

    Azampour, Mohammad Farid; Ghaffari, Aboozar; Hamidinekoo, Azam; Fatemizadeh, Emad

    2014-01-01

    Manifold learning algorithms have been proposed for use in image processing because of their ability to preserve data structure while reducing dimensionality and exposing that structure in the lower-dimensional space. Multi-modal images share the same structure and can be registered as mono-modal images if only structural information is represented. As a result, manifold learning is able to transform multi-modal images into mono-modal ones, after which the registration can be performed with mono-modal methods. Based on this application, novel similarity measures for multi-modal images are proposed in this paper, in which Laplacian eigenmaps are employed as the manifold learning algorithm, and they are tested on rigid registration of PET/MR images. Results show the feasibility of using manifold learning as a way of calculating the similarity between multi-modal images.

  2. Intraoperative ultrasound to stereocamera registration using interventional photoacoustic imaging

    NASA Astrophysics Data System (ADS)

    Vyas, Saurabh; Su, Steven; Kim, Robert; Kuo, Nathanael; Taylor, Russell H.; Kang, Jin U.; Boctor, Emad M.

    2012-02-01

    There are approximately 6000 hospitals in the United States, of which approximately 5400 employ minimally invasive surgical robots for a variety of procedures. Furthermore, 95% of these robots require extensive registration before they can be fitted into the operating room. These "registrations" are performed by surgical navigation systems, which allow the surgical tools, the robot, and the surgeon to be synchronized together, hence operating in concert. The most common surgical navigation modalities include electromagnetic (EM) tracking and optical tracking. Currently, these navigation systems are large, intrusive, come with a steep learning curve, require sacrifices on the part of the attending medical staff, and are quite expensive (since they require several components). Recently, photoacoustic (PA) imaging has become a practical and promising new medical imaging technology. PA imaging requires only the equipment standard with most modern ultrasound (US) imaging systems, along with a common laser source. In this paper, we demonstrate that, given a PA imaging system as well as a stereocamera (SC), the registration between the US image of a particular anatomy and the SC image of the same anatomy can be obtained with reliable accuracy. In our experiments, we collected data for N = 80 trials of sample 3D US and SC coordinates. We then computed the registration between the SC and the US coordinates. Upon validation, the mean error and standard deviation between the predicted sample coordinates and the corresponding ground truth coordinates were found to be 3.33 mm and 2.20 mm, respectively.

  3. Simultaneous registration of multiple images: similarity metrics and efficient optimization.

    PubMed

    Wachinger, Christian; Navab, Nassir

    2013-05-01

    We address the alignment of a group of images with simultaneous registration. To this end, we provide further insights into a recently introduced framework for multivariate similarity measures, referred to as accumulated pair-wise estimates (APE), and derive efficient optimization methods for it. More specifically, we show a strict mathematical deduction of APE from a maximum-likelihood framework and establish a connection to the congealing framework. This is only possible after an extension of the congealing framework with neighborhood information. Moreover, we address the increased computational complexity of simultaneous registration by deriving efficient gradient-based optimization strategies for APE: Gauss-Newton and efficient second-order minimization (ESM). In addition to SSD, we show how intrinsically non-squared similarity measures can be used in this least-squares optimization framework. The fundamental assumption of ESM, the approximation of the perfectly aligned moving image through the fixed image, limits its application to monomodal registration. We therefore incorporate recently proposed structural representations of images, which allow us to perform multimodal registration with ESM. Finally, we evaluate the performance of the optimization strategies with respect to the similarity measures, leading to very good results for ESM. The extension to multimodal registration is in this context very interesting because it offers further possibilities for evaluation, thanks to publicly available datasets with ground-truth alignment.

  4. Regional manifold learning for deformable registration of brain MR images.

    PubMed

    Ye, Dong Hye; Hamm, Jihun; Kwon, Dongjin; Davatzikos, Christos; Pohl, Kilian M

    2012-01-01

    We propose a method for deformable registration based on learning the manifolds of individual brain regions. Recent publications on registration of medical images advocate the use of manifold learning in order to confine the search space to anatomically plausible deformations. Existing methods construct manifolds based on a single metric over the entire image domain and thus frequently miss regional brain variations. We address this issue by first learning manifolds for specific regions and then computing region-specific deformations from these manifolds. We then determine deformations for the entire image domain by learning the global manifold in such a way that it preserves the region-specific deformations. We evaluate the accuracy of our method by applying it to the LPBA40 dataset and measuring the overlap of the deformed segmentations. The results show a significant improvement in registration accuracy on cortical regions compared to other state-of-the-art methods.

  5. Stochastic optimization with randomized smoothing for image registration.

    PubMed

    Sun, Wei; Poot, Dirk H J; Smal, Ihor; Yang, Xuan; Niessen, Wiro J; Klein, Stefan

    2017-01-01

    Image registration is typically formulated as an optimization process, which aims to find the optimal transformation parameters of a given transformation model by minimizing a cost function. Local minima may exist in the optimization landscape, which could hamper the optimization process. To eliminate local minima, smoothing the cost function would be desirable. In this paper, we investigate the use of a randomized smoothing (RS) technique for stochastic gradient descent (SGD) optimization, to effectively smooth the cost function. In this approach, Gaussian noise is added to the transformation parameters prior to computing the cost function gradient in each iteration of the SGD optimizer. The approach is suitable for both rigid and nonrigid registrations. Experiments on synthetic images, cell images, public CT lung data, and public MR brain data demonstrate the effectiveness of the novel RS technique in terms of registration accuracy and robustness.
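
    The core idea, adding Gaussian noise to the transformation parameters before each gradient evaluation, can be sketched in a few lines; grad_fn stands for a user-supplied function returning the cost gradient at given parameters, and all names are illustrative.

        import numpy as np

        def rs_sgd_step(params, grad_fn, lr, sigma, num_samples=1, rng=np.random):
            """One SGD step with randomized smoothing of the registration cost function."""
            grads = [grad_fn(params + rng.normal(0.0, sigma, size=params.shape))
                     for _ in range(num_samples)]
            return params - lr * np.mean(grads, axis=0)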

  6. Multi-Image Registration for an Enhanced Vision System

    NASA Technical Reports Server (NTRS)

    Hines, Glenn; Rahman, Zia-Ur; Jobson, Daniel; Woodell, Glenn

    2002-01-01

    An Enhanced Vision System (EVS) utilizing multi-sensor image fusion is currently under development at the NASA Langley Research Center. The EVS will provide enhanced images of the flight environment to assist pilots in poor visibility conditions. Multi-spectral images obtained from a short wave infrared (SWIR), a long wave infrared (LWIR), and a color visible band CCD camera, are enhanced and fused using the Retinex algorithm. The images from the different sensors do not have a uniform data structure: the three sensors not only operate at different wavelengths, but they also have different spatial resolutions, optical fields of view (FOV), and bore-sighting inaccuracies. Thus, in order to perform image fusion, the images must first be co-registered. Image registration is the task of aligning images taken at different times, from different sensors, or from different viewpoints, so that all corresponding points in the images match. In this paper, we present two methods for registering multiple multi-spectral images. The first method performs registration using sensor specifications to match the FOVs and resolutions directly through image resampling. In the second method, registration is obtained through geometric correction based on a spatial transformation defined by user selected control points and regression analysis.

  7. Deformable image registration for tissues with large displacements.

    PubMed

    Huang, Xishi; Ren, Jing; Abdalbari, Anwar; Green, Mark

    2017-01-01

    Image registration for internal organs and soft tissues is considered extremely challenging due to organ shifts and tissue deformation caused by patients' movements such as respiration and repositioning. In our previous work, we proposed a fast registration method for deformable tissues with small rotations. We extend our method to deformable registration of soft tissues with large displacements. We analyzed the deformation field of the liver by decomposing the deformation into shift, rotation, and pure deformation components and concluded that in many clinical cases, the liver deformation contains large rotations and small deformations. This analysis justified the use of linear elastic theory in our image registration method. We also proposed a region-based neuro-fuzzy transformation model to seamlessly stitch together local affine and local rigid models in different regions. We have performed the experiments on a liver MRI image set and showed the effectiveness of the proposed registration method. We have also compared the performance of the proposed method with the previous method on tissues with large rotations and showed that the proposed method outperformed the previous method when dealing with the combination of pure deformation and large rotations. Validation results show that we can achieve a target registration error of [Formula: see text] and an average centerline distance error of [Formula: see text]. The proposed technique has the potential to significantly improve registration capabilities and the quality of intraoperative image guidance. To the best of our knowledge, this is the first time that the complex displacement of the liver is explicitly separated into local pure deformation and rigid motion.

  8. Large Deformation Diffeomorphic Metric Mapping Registration of Reconstructed 3D Histological Section Images and in vivo MR Images

    PubMed Central

    Ceritoglu, Can; Wang, Lei; Selemon, Lynn D.; Csernansky, John G.; Miller, Michael I.; Ratnanather, J. Tilak

    2009-01-01

    Our current understanding of neuroanatomical abnormalities in neuropsychiatric diseases is based largely on magnetic resonance imaging (MRI) and post mortem histological analyses of the brain. Further advances in elucidating altered brain structure in these human conditions might emerge from combining MRI and histological methods. We propose a multistage method for registering 3D volumes reconstructed from histological sections to corresponding in vivo MRI volumes from the same subjects: (1) manual segmentation of white matter (WM), gray matter (GM) and cerebrospinal fluid (CSF) compartments in histological sections, (2) alignment of consecutive histological sections using 2D rigid transformation to construct a 3D histological image volume from the aligned sections, (3) registration of reconstructed 3D histological volumes to the corresponding 3D MRI volumes using 3D affine transformation, (4) intensity normalization of images via histogram matching, and (5) registration of the volumes via intensity based large deformation diffeomorphic metric (LDDMM) image matching algorithm. Here we demonstrate the utility of our method in the transfer of cytoarchitectonic information from histological sections to identify regions of interest in MRI scans of nine adult macaque brains for morphometric analyses. LDDMM improved the accuracy of the registration via decreased distances between GM/CSF surfaces after LDDMM (0.39 ± 0.13 mm) compared to distances after affine registration (0.76 ± 0.41 mm). Similarly, WM/GM distances decreased to 0.28 ± 0.16 mm after LDDMM compared to 0.54 ± 0.39 mm after affine registration. The multistage registration method may find broad application for mapping histologically based information, for example, receptor distributions, gene expression, onto MRI volumes. PMID:20577633

  9. 3-D Reconstruction From 2-D Radiographic Images and Its Application to Clinical Veterinary Medicine

    NASA Astrophysics Data System (ADS)

    Hamamoto, Kazuhiko; Sato, Motoyoshi

    3D imaging techniques are very important and indispensable in diagnosis. The mainstream approach is one in which a 3D image is reconstructed from a set of slice images, as in X-ray CT and MRI. However, these systems require large installation space and high costs. On the other hand, a low-cost and small 3D imaging system is needed in clinical veterinary medicine, for example for diagnosis in an X-ray car or in a pasture area. We propose a novel 3D imaging technique using 2-D X-ray radiographic images. This system can be realized much more cheaply than X-ray CT and makes it possible to obtain 3D images in an X-ray car or with portable X-ray equipment. In this paper, a 3D visualization technique from 2-D radiographic images is proposed and several reconstructions are shown. These reconstructions are evaluated by veterinarians.

  10. The Direct Registration of LIDAR Point Clouds and High Resolution Image Based on Linear Feature by Introducing AN Unknown Parameter

    NASA Astrophysics Data System (ADS)

    Chunjing, Y.; Guang, G.

    2012-07-01

    The registration between optical images and point clouds is the first task when the combination of these two datasets is concerned. Due to the discrete nature of the point clouds, and to the 2D-3D transformation in particular, a tie-point based registration strategy, as commonly adopted in image-to-image registration, is hard to use directly in this scenario. A derived collinearity equation describing the mapping between an image point and a ground point is used as the mathematical model for registration, with the point in the LiDAR space expressed in its parametric form. Such a mapping can be viewed as the mathematical model which registers the image pixels to the point clouds. This model is not only suitable for single-image registration but is also applicable to multiple consecutive images. We also studied the scale problem in image and point cloud registration, where the scale problem is defined as the optimal correspondence between the image resolution and the density of the point clouds. The test dataset includes DMC images and point clouds acquired by the Leica ALS50 II over an area in Henan Province, China. The main contributions of the paper are: (1) a derived collinearity equation is introduced by which a ground point is expressed in its parametric form, which makes it possible to replace point features by linear features, hence avoiding the problem that it is almost impossible to find a point in the point cloud which accurately corresponds to a point in the image space; (2) the least squares method is used to calculate the registration transformation parameters and the unknown parameter λ at the same time; (3) the scale problem is analyzed semi-quantitatively and, to the authors' best knowledge, it is the first time in the literature that the scale problem is clearly defined and analyzed semi-quantitatively in the context of LiDAR data processing.

  11. Cross Correlation versus Normalized Mutual Information on Image Registration

    NASA Technical Reports Server (NTRS)

    Tan, Bin; Tilton, James C.; Lin, Guoqing

    2016-01-01

    This is the first study to quantitatively assess and compare the cross correlation and normalized mutual information methods used to register images at subpixel scale. The study shows that the normalized mutual information method is less sensitive than cross correlation to unaligned edges caused by spectral response differences. This characteristic makes normalized mutual information a better candidate for band-to-band registration. Improved band-to-band registration in the data from satellite-borne instruments will result in improved retrievals of key science measurements such as cloud properties, vegetation, snow, and fire.
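
    For reference, the two similarity scores being compared can be written compactly as below for a pair of already-extracted, equally sized image patches. The subpixel search itself (evaluating the score over a grid of fractional shifts) is omitted, and the bin count is illustrative.

        import numpy as np

        def normalized_cross_correlation(a, b):
            """Zero-mean normalized cross-correlation between two aligned patches."""
            a = (a - a.mean()) / (a.std() + 1e-12)
            b = (b - b.mean()) / (b.std() + 1e-12)
            return float((a * b).mean())

        def normalized_mutual_information(a, b, bins=64):
            """NMI = (H(A) + H(B)) / H(A, B), estimated from a joint histogram."""
            joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
            p = joint / joint.sum()
            pa, pb = p.sum(axis=1), p.sum(axis=0)
            entropy = lambda x: -float(np.sum(x[x > 0] * np.log(x[x > 0])))
            return (entropy(pa) + entropy(pb)) / entropy(p)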

  12. Analysis of deformable image registration accuracy using computational modeling

    SciTech Connect

    Zhong Hualiang; Kim, Jinkoo; Chetty, Indrin J.

    2010-03-15

    Computer aided modeling of anatomic deformation, allowing various techniques and protocols in radiation therapy to be systematically verified and studied, has become increasingly attractive. In this study the potential issues in deformable image registration (DIR) were analyzed based on two numerical phantoms: One, a synthesized, low intensity gradient prostate image, and the other a lung patient's CT image data set. Each phantom was modeled with region-specific material parameters with its deformation solved using a finite element method. The resultant displacements were used to construct a benchmark to quantify the displacement errors of the Demons and B-Spline-based registrations. The results show that the accuracy of these registration algorithms depends on the chosen parameters, the selection of which is closely associated with the intensity gradients of the underlying images. For the Demons algorithm, both single resolution (SR) and multiresolution (MR) registrations required approximately 300 iterations to reach an accuracy of 1.4 mm mean error in the lung patient's CT image (and 0.7 mm mean error averaged in the lung only). For the low gradient prostate phantom, these algorithms (both SR and MR) required at least 1600 iterations to reduce their mean errors to 2 mm. For the B-Spline algorithms, best performance (mean errors of 1.9 mm for SR and 1.6 mm for MR, respectively) on the low gradient prostate was achieved using five grid nodes in each direction. Adding more grid nodes resulted in larger errors. For the lung patient's CT data set, the B-Spline registrations required ten grid nodes in each direction for highest accuracy (1.4 mm for SR and 1.5 mm for MR). The numbers of iterations or grid nodes required for optimal registrations depended on the intensity gradients of the underlying images. In summary, the performance of the Demons and B-Spline registrations have been quantitatively evaluated using numerical phantoms. The results show that parameter

  13. Towards local estimation of emphysema progression using image registration

    NASA Astrophysics Data System (ADS)

    Staring, M.; Bakker, M. E.; Shamonin, D. P.; Stolk, J.; Reiber, J. H. C.; Stoel, B. C.

    2009-02-01

    Progression measurement of emphysema is required to evaluate the health condition of a patient and the effect of drugs. To locally estimate progression we use image registration, which allows for volume correction using the determinant of the Jacobian of the transformation. We introduce an adaptation of the so-called sponge model that circumvents its constant-mass assumption. Preliminary results from CT scans of a lung phantom and from CT data sets of three patients suggest that image registration may be a suitable method to locally estimate emphysema progression.
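
    The volume correction mentioned above uses the determinant of the Jacobian of the transformation; a generic NumPy sketch for a dense 3D displacement field is shown below. The shape and spacing conventions are assumptions, and values below 1 indicate local volume loss.

        import numpy as np

        def jacobian_determinant(disp, spacing=(1.0, 1.0, 1.0)):
            """Pointwise det(J) of the mapping x -> x + u(x).
            disp: displacement field with shape (3, Z, Y, X); spacing: voxel size per axis."""
            grads = [np.gradient(disp[i], *spacing) for i in range(3)]   # du_i / dx_j
            J = np.empty(disp.shape[1:] + (3, 3))
            for i in range(3):
                for j in range(3):
                    J[..., i, j] = grads[i][j] + (1.0 if i == j else 0.0)
            return np.linalg.det(J)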

  14. Scope and applications of translation invariant wavelets to image registration

    NASA Technical Reports Server (NTRS)

    Chettri, Samir; LeMoigne, Jacqueline; Campbell, William

    1997-01-01

    The first part of this article introduces the notion of translation invariance in wavelets and discusses several wavelets that have this property. The second part discusses the possible applications of such wavelets to image registration. In the case of registration of affinely transformed images, we would conclude that the notion of translation invariance is not really necessary. What is needed is affine invariance and one way to do this is via the method of moment invariants. Wavelets or, in general, pyramid processing can then be combined with the method of moment invariants to reduce the computational load.

  15. Vascular image registration techniques: A living review.

    PubMed

    Matl, Stefan; Brosig, Richard; Baust, Maximilian; Navab, Nassir; Demirci, Stefanie

    2017-01-01

    Registration of vascular structures is crucial for preoperative planning, intraoperative navigation, and follow-up assessment. Typical applications include, but are not limited to, Trans-catheter Aortic Valve Implantation and monitoring of tumor vasculature or aneurysm growth. In order to achieve the aforementioned goals, a large number of registration algorithms have been developed. With this review paper we provide a comprehensive overview of the plethora of existing techniques, with a particular focus on suitable classification criteria such as the involved modalities or the employed optimization methods. However, we wish to go beyond a static literature review, which is naturally doomed to be outdated after a certain period of time due to research progress. We augment this review paper with an extendable and interactive database in order to obtain a living review whose currency goes beyond that of a printed paper. All papers in this database are labeled with one or multiple tags according to 13 carefully defined categories. The classification of all entries can then be visualized as one or multiple trees which are presented via a web-based interactive app (http://livingreview.in.tum.de), allowing the user to choose a unique perspective for literature review. In addition, the user can search the underlying database for specific tags or publications related to vessel registration. Many applications of this framework are conceivable, including its use for getting a general overview of the topic or its utilization by physicians for deciding on the best-suited algorithm for a specific application.

  16. Agile multi-scale decompositions for automatic image registration

    NASA Astrophysics Data System (ADS)

    Murphy, James M.; Leija, Omar Navarro; Le Moigne, Jacqueline

    2016-05-01

    In recent works, the first and third authors developed an automatic image registration algorithm based on a multiscale hybrid image decomposition with anisotropic shearlets and isotropic wavelets. This prototype showed strong performance, improving robustness over registration with wavelets alone. However, this method imposed a strict hierarchy on the order in which shearlet and wavelet features were used in the registration process, and also involved an unintegrated mixture of MATLAB and C code. In this paper, we introduce a more agile model for generating features, in which a flexible and user-guided mix of shearlet and wavelet features are computed. Compared to the previous prototype, this method introduces a flexibility to the order in which shearlet and wavelet features are used in the registration process. Moreover, the present algorithm is now fully coded in C, making it more efficient and portable than the mixed MATLAB and C prototype. We demonstrate the versatility and computational efficiency of this approach by performing registration experiments with the fully-integrated C algorithm. In particular, meaningful timing studies can now be performed, to give a concrete analysis of the computational costs of the flexible feature extraction. Examples of synthetically warped and real multi-modal images are analyzed.

  17. Methods for 2-D and 3-D Endobronchial Ultrasound Image Segmentation.

    PubMed

    Zang, Xiaonan; Bascom, Rebecca; Gilbert, Christopher; Toth, Jennifer; Higgins, William

    2016-07-01

    Endobronchial ultrasound (EBUS) is now commonly used for cancer-staging bronchoscopy. Unfortunately, EBUS is challenging to use and interpreting EBUS video sequences is difficult. Other ultrasound imaging domains, hampered by related difficulties, have benefited from computer-based image-segmentation methods. Yet, so far, no such methods have been proposed for EBUS. We propose image-segmentation methods for 2-D EBUS frames and 3-D EBUS sequences. Our 2-D method adapts the fast-marching level-set process, anisotropic diffusion, and region growing to the problem of segmenting 2-D EBUS frames. Our 3-D method builds upon the 2-D method while also incorporating the geodesic level-set process for segmenting EBUS sequences. Tests with lung-cancer patient data showed that the methods ran fully automatically for nearly 80% of test cases. For the remaining cases, the only user interaction required was the selection of a seed point. When compared to ground-truth segmentations, the 2-D method achieved an overall Dice index of 90.0% ± 4.9%, while the 3-D method achieved an overall Dice index of 83.9% ± 6.0%. In addition, the computation time (2-D, 0.070 s/frame; 3-D, 0.088 s/frame) was two orders of magnitude faster than interactive contour definition. Finally, we demonstrate the potential of the methods for EBUS localization in a multimodal image-guided bronchoscopy system.

  18. Piecewise-diffeomorphic image registration: application to the motion estimation between 3D CT lung images with sliding conditions.

    PubMed

    Risser, Laurent; Vialard, François-Xavier; Baluwala, Habib Y; Schnabel, Julia A

    2013-02-01

    In this paper, we propose a new strategy for modelling sliding conditions when registering 3D images in a piecewise-diffeomorphic framework. More specifically, our main contribution is the development of a mathematical formalism to perform Large Deformation Diffeomorphic Metric Mapping registration with sliding conditions. We also show how to adapt this formalism to the LogDemons diffeomorphic registration framework. We finally show how to apply this strategy to estimate the respiratory motion between 3D CT pulmonary images. Quantitative tests are performed on 2D and 3D synthetic images, as well as on real 3D lung images from the MICCAI EMPIRE10 challenge. Results show that our strategy estimates accurate mappings of entire 3D thoracic image volumes that exhibit a sliding motion, as opposed to conventional registration methods which are not capable of capturing discontinuous deformations at the thoracic cage boundary. They also show that although the deformations are not smooth across the location of sliding conditions, they are almost always invertible in the whole image domain. This would be helpful for radiotherapy planning and delivery.

  19. High Speed Method for in Situ Multispectral Image Registration

    SciTech Connect

    Perrine, Kenneth A.; Lamarche, Brian L.; Hopkins, Derek F.; Budge, Scott E.; Opresko, Lee; Wiley, H. S.; Sowa, Marianne B.

    2007-01-29

    Multispectral confocal spinning disk microscopy provides a high-resolution method for real-time live cell imaging. However, optical distortions and the physical misalignments introduced by the use of multiple acquisition cameras can obscure spatial information contained in the captured images. In this manuscript, we describe a multispectral method for real-time image registration whereby the image from one camera is warped onto the image from a second camera via a polynomial correction. This method provides a real-time pixel-for-pixel match between images obtained over physically distinct optical paths. Using an in situ calibration method, the polynomial is characterized by a set of coefficients using a least squares solver. Error analysis demonstrates that optimal performance results from the use of cubic polynomials. High-speed evaluation of the warp is then performed through forward differencing with fixed-point data types. Image reconstruction errors are reduced through bilinear interpolation. The registration techniques described here allow for successful registration of multispectral images in real time (exceeding 15 frames/sec) and have broad applicability to imaging methods requiring pixel matching over multiple data channels.

  20. Reducing uncertainties in volumetric image based deformable organ registration.

    PubMed

    Liang, J; Yan, D

    2003-08-01

    Applying volumetric image feedback in radiotherapy requires image-based deformable organ registration. The foundation of this registration is the ability to track subvolume displacement in organs of interest. Subvolume displacement can be calculated by applying a biomechanics model and the finite element method to human organs manifested on multiple volumetric images. The calculation accuracy, however, is highly dependent on the determination of the corresponding organ boundary points. Lacking sufficient information for such determination, uncertainties are inevitable, thus diminishing the registration accuracy. In this paper, a method of consumed-energy minimization was developed to reduce these uncertainties. Starting from an initial selection of organ boundary point correspondences on the volumetric image sets, the subvolume displacement and stress distribution of the whole organ are calculated, and the energy consumed due to the subvolume displacements is computed accordingly. The corresponding positions of the initially selected boundary points are then iteratively optimized to minimize the consumed energy under geometry and stress constraints. In this study, a rectal wall delineated from a patient CT image was artificially deformed using a computer simulation and utilized to test the optimization. Subvolume displacements calculated based on the optimized boundary point correspondences were compared to the true displacements, and the calculation accuracy was thereby evaluated. Results demonstrate that a significant improvement in the accuracy of the deformable organ registration can be achieved by applying the consumed-energy minimization in the organ deformation calculation.

  1. Morphological Feature Extraction for Automatic Registration of Multispectral Images

    NASA Technical Reports Server (NTRS)

    Plaza, Antonio; LeMoigne, Jacqueline; Netanyahu, Nathan S.

    2007-01-01

    The task of image registration can be divided into two major components, i.e., the extraction of control points or features from images, and the search among the extracted features for the matching pairs that represent the same feature in the images to be matched. Manual extraction of control features can be subjective and extremely time consuming, and often results in few usable points. On the other hand, automated feature extraction allows using invariant target features such as edges, corners, and line intersections as relevant landmarks for registration purposes. In this paper, we present an extension of a recently developed morphological approach for automatic extraction of landmark chips and corresponding windows in a fully unsupervised manner for the registration of multispectral images. Once a set of chip-window pairs is obtained, a (hierarchical) robust feature matching procedure, based on a multiresolution overcomplete wavelet decomposition scheme, is used for registration purposes. The proposed method is validated on a pair of remotely sensed scenes acquired by the Advanced Land Imager (ALI) multispectral instrument and the Hyperion hyperspectral instrument aboard NASA's Earth Observing-1 satellite.

  2. Diffusion Tensor Image Registration Using Hybrid Connectivity and Tensor Features

    PubMed Central

    Wang, Qian; Yap, Pew-Thian; Wu, Guorong; Shen, Dinggang

    2014-01-01

    Most existing diffusion tensor imaging (DTI) registration methods estimate structural correspondences based on voxelwise matching of tensors. The rich connectivity information that is given by DTI, however, is often neglected. In this article, we propose to integrate complementary information given by connectivity features and tensor features for improved registration accuracy. To utilize connectivity information, we place multiple anchors representing different brain anatomies in the image space, and define the connectivity features for each voxel as the geodesic distances from all anchors to the voxel under consideration. The geodesic distance, which is computed in relation to the tensor field, encapsulates information of brain connectivity. We also extract tensor features for every voxel to reflect the local statistics of tensors in its neighborhood. We then combine both connectivity features and tensor features for registration of tensor images. From the images, landmarks are selected automatically and their correspondences are determined based on their connectivity and tensor feature vectors. The deformation field that deforms one tensor image to the other is iteratively estimated and optimized according to the landmarks and their associated correspondences. Experimental results show that, by using connectivity features and tensor features simultaneously, registration accuracy is increased substantially compared with the cases using either type of features alone. PMID:24293159

  3. Line-Based Registration of Panoramic Images and LiDAR Point Clouds for Mobile Mapping

    PubMed Central

    Cui, Tingting; Ji, Shunping; Shan, Jie; Gong, Jianya; Liu, Kejian

    2016-01-01

    For multi-sensor integrated systems, such as the mobile mapping system (MMS), data fusion at sensor-level, i.e., the 2D-3D registration between an optical camera and LiDAR, is a prerequisite for higher level fusion and further applications. This paper proposes a line-based registration method for panoramic images and a LiDAR point cloud collected by a MMS. We first introduce the system configuration and specification, including the coordinate systems of the MMS, the 3D LiDAR scanners, and the two panoramic camera models. We then establish the line-based transformation model for the panoramic camera. Finally, the proposed registration method is evaluated for two types of camera models by visual inspection and quantitative comparison. The results demonstrate that the line-based registration method can significantly improve the alignment of the panoramic image and the LiDAR datasets under either the ideal spherical or the rigorous panoramic camera model, with the latter being more reliable. PMID:28042855

  4. Line-Based Registration of Panoramic Images and LiDAR Point Clouds for Mobile Mapping.

    PubMed

    Cui, Tingting; Ji, Shunping; Shan, Jie; Gong, Jianya; Liu, Kejian

    2016-12-31

    For multi-sensor integrated systems, such as the mobile mapping system (MMS), data fusion at sensor-level, i.e., the 2D-3D registration between an optical camera and LiDAR, is a prerequisite for higher level fusion and further applications. This paper proposes a line-based registration method for panoramic images and a LiDAR point cloud collected by a MMS. We first introduce the system configuration and specification, including the coordinate systems of the MMS, the 3D LiDAR scanners, and the two panoramic camera models. We then establish the line-based transformation model for the panoramic camera. Finally, the proposed registration method is evaluated for two types of camera models by visual inspection and quantitative comparison. The results demonstrate that the line-based registration method can significantly improve the alignment of the panoramic image and the LiDAR datasets under either the ideal spherical or the rigorous panoramic camera model, with the latter being more reliable.

  5. Automatic image-to-world registration based on x-ray projections in cone-beam CT-guided interventions.

    PubMed

    Hamming, N M; Daly, M J; Irish, J C; Siewerdsen, J H

    2009-05-01

    Intraoperative imaging offers a means to account for morphological changes occurring during the procedure and to resolve geometric uncertainties via integration with a surgical navigation system. Such integration requires registration of the image and world reference frames, conventionally a time-consuming, error-prone manual process. This work presents a method of automatic image-to-world registration of intraoperative cone-beam computed tomography (CBCT) and an optical tracking system. Multimodality (MM) markers consisting of an infrared (IR) reflective sphere with a 2 mm tungsten sphere (BB) placed precisely at the center were designed to permit automatic detection in both the image and tracking (world) reference frames. Image localization is performed by intensity thresholding and pattern matching directly in the 2D projections acquired in each CBCT scan, with 3D image coordinates computed using backprojection and accounting for the C-arm geometric calibration. The IR tracking system localized the MM markers in the world reference frame, and the image-to-world registration was computed by rigid point matching of the image and tracker point sets. The accuracy and reproducibility of the automatic registration technique were compared to conventional (manual) registration using a variety of marker configurations suitable to neurosurgery (markers fixed to the cranium) and head and neck surgery (markers suspended on a subcranial frame). The automatic technique exhibited subvoxel marker localization accuracy (< 0.8 mm) for all marker configurations. The fiducial registration error of the automatic technique was (0.35 ± 0.01) mm, compared to (0.64 ± 0.07) mm for the manual technique, indicating improved accuracy and reproducibility. The target registration error (TRE) averaged over all configurations was 1.14 mm for the automatic technique, compared to 1.29 mm for the manual technique, although the difference was not statistically significant (p = 0.3). A statistically significant

  6. Landsat image registration - A study of system parameters

    NASA Technical Reports Server (NTRS)

    Wacker, A. G.; Juday, R. D.; Wolfe, R. H., Jr.

    1984-01-01

    Some applications of Landsat data, particularly agricultural and forestry applications, require the ability to geometrically superimpose or register data acquired at different times and possibly by different satellites. An experimental investigation relating to a registration processor used by the Johnson Space Center for this purpose is the subject of this paper. Correlation of small subareas of images is at the heart of this registration processor, and the manner in which various system parameters affect the correlation process is the prime area of investigation. Parameters investigated include preprocessing methods, methods for detecting successful correlations, fitting a surface to the correlation patch, the fraction of pixels designated as edge pixels in edge detection, and local versus global generation of edge images. A suboptimum search procedure is used to find a good parameter set for this registration processor.

  7. Towards real-time registration of 4D ultrasound images.

    PubMed

    Foroughi, Pezhman; Abolmaesumi, Purang; Hashtrudi-Zaad, Keyvan

    2006-01-01

    In this paper, we demonstrate a method for fast registration of sequences of 3D liver images, which could be used for future real-time applications. In our method, every image is elastically registered to a so-called fixed ultrasound image, exploiting the information from previous registrations. A few feature points are automatically selected and tracked inside the images, while the deformation of the other points is extrapolated with respect to the tracked points using a fast free-form approach. The main intended application of the proposed method is real-time tracking of tumors for radiosurgery. The algorithm is evaluated on both naturally and artificially deformed images. Experimental results show that, at around 85 percent accuracy, the tracking process is completed very close to real time.

  8. Automated Image Registration Using Morphological Region of Interest Feature Extraction

    NASA Technical Reports Server (NTRS)

    Plaza, Antonio; LeMoigne, Jacqueline; Netanyahu, Nathan S.

    2005-01-01

    With the recent explosion in the amount of remotely sensed imagery and the corresponding interest in temporal change detection and modeling, image registration has become increasingly important as a necessary first step in the integration of multi-temporal and multi-sensor data for applications such as the analysis of seasonal and annual global climate changes, as well as land use/cover changes. The task of image registration can be divided into two major components: (1) the extraction of control points or features from images; and (2) the search among the extracted features for the matching pairs that represent the same feature in the images to be matched. Manual control feature extraction can be subjective and extremely time-consuming, and often results in few usable points. Automated feature extraction is a solution to this problem, where desired target features are invariant, and represent evenly distributed landmarks such as edges, corners and line intersections. In this paper, we develop a novel automated registration approach based on the following steps. First, a mathematical morphology (MM)-based method is used to obtain a scale-orientation morphological profile at each image pixel. Next, a spectral dissimilarity metric such as the spectral information divergence is applied for automated extraction of landmark chips, followed by an initial approximate matching. This initial condition is then refined using a hierarchical robust feature matching (RFM) procedure. Experimental results reveal that the proposed registration technique offers a robust solution in the presence of seasonal changes and other interfering factors. Keywords: automated image registration, multi-temporal imagery, mathematical morphology, robust feature matching.

  9. A translational registration system for LANDSAT image segments

    NASA Technical Reports Server (NTRS)

    Parada, N. D. J. (Principal Investigator); Erthal, G. J.; Velasco, F. R. D.; Mascarenhas, N. D. D.

    1983-01-01

    The use of satellite images obtained from various dates is essential for crop forecast systems. In order to make possible a multitemporal analysis, it is necessary that images belonging to each acquisition have pixel-wise correspondence. A system developed to obtain, register and record image segments from LANDSAT images in computer compatible tapes is described. The translational registration of the segments is performed by correlating image edges in different acquisitions. The system was constructed for the Burroughs B6800 computer in ALGOL language.

  10. The Insight ToolKit image registration framework

    PubMed Central

    Avants, Brian B.; Tustison, Nicholas J.; Stauffer, Michael; Song, Gang; Wu, Baohua; Gee, James C.

    2014-01-01

    Publicly available scientific resources help establish evaluation standards, provide a platform for teaching and improve reproducibility. Version 4 of the Insight ToolKit (ITK4) seeks to establish new standards in publicly available image registration methodology. ITK4 makes several advances in comparison to previous versions of ITK. ITK4 supports both multivariate images and objective functions; it also unifies high-dimensional (deformation field) and low-dimensional (affine) transformations with metrics that are reusable across transform types and with composite transforms that allow arbitrary series of geometric mappings to be chained together seamlessly. Metrics and optimizers take advantage of multi-core resources, when available. Furthermore, ITK4 reduces the parameter optimization burden via principled heuristics that automatically set scaling across disparate parameter types (rotations vs. translations). A related approach also constrains step sizes for gradient-based optimizers. The result is that tuning for different metrics and/or image pairs is rarely necessary, allowing the researcher to focus more easily on the design and comparison of registration strategies. In total, the ITK4 contribution is intended as a structure to support reproducible research practices, to provide a more extensive foundation against which to evaluate new work in image registration, and to give application-level programmers a broad suite of tools on which to build. Finally, we contextualize this work with a reference registration evaluation study with application to pediatric brain labeling. PMID:24817849
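
    The automatic parameter scaling and composable-transform features described above can be exercised from Python through SimpleITK, the simplified wrapping of ITK. The configuration below (Mattes mutual information, gradient descent, physical-shift scale estimation) is one plausible setup for illustration, not a prescription from the paper; the file names are placeholders.

        import SimpleITK as sitk

        fixed = sitk.ReadImage("fixed.nii.gz", sitk.sitkFloat32)    # hypothetical file names
        moving = sitk.ReadImage("moving.nii.gz", sitk.sitkFloat32)

        reg = sitk.ImageRegistrationMethod()
        reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
        reg.SetOptimizerAsGradientDescent(learningRate=1.0, numberOfIterations=200)
        reg.SetOptimizerScalesFromPhysicalShift()   # automatic scaling of rotations vs. translations
        reg.SetInterpolator(sitk.sitkLinear)

        initial = sitk.CenteredTransformInitializer(
            fixed, moving, sitk.Euler3DTransform(),
            sitk.CenteredTransformInitializerFilter.GEOMETRY)
        reg.SetInitialTransform(initial, inPlace=False)

        transform = reg.Execute(fixed, moving)
        print(reg.GetMetricValue(), reg.GetOptimizerIteration())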

  11. Registration of multimodal brain images: some experimental results

    NASA Astrophysics Data System (ADS)

    Chen, Hua-mei; Varshney, Pramod K.

    2002-03-01

    The joint histogram of two images is required to uniquely determine the mutual information (MI) between them. It has been pointed out that, under certain conditions, existing joint histogram estimation algorithms like partial volume interpolation (PVI) and linear interpolation may result in different types of artifact patterns in the MI-based registration function by introducing spurious maxima. As a result, the artifacts may hamper the global optimization process and limit registration accuracy. In this paper we present an extensive study of interpolation-induced artifacts using simulated brain images and show that similar artifact patterns also exist when other intensity interpolation algorithms like cubic convolution interpolation and cubic B-spline interpolation are used. A new joint histogram estimation scheme named generalized partial volume estimation (GPVE) is proposed to eliminate the artifacts. A kernel function is involved in the proposed scheme, and when the 1st order B-spline is chosen as the kernel function, it is equivalent to the PVI. A clinical brain image database furnished by Vanderbilt University is used to compare the accuracy of our algorithm with that of PVI. Our experimental results show that the use of higher order kernels can effectively remove the artifacts and, in cases where the MI-based registration result suffers from the artifacts, registration accuracy can be improved significantly.
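
    For reference, mutual information computed from a joint histogram can be sketched as below. This uses simple binning of already-interpolated intensities; it does not implement the generalized partial volume estimation proposed in the record, which distributes fractional counts through a kernel.

        import numpy as np

        def mutual_information(img_a, img_b, bins=64):
            """MI between two aligned images, estimated from their joint histogram."""
            joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
            pxy = joint / joint.sum()
            px = pxy.sum(axis=1, keepdims=True)
            py = pxy.sum(axis=0, keepdims=True)
            nz = pxy > 0                                   # avoid log(0)
            return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())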

  12. A Block-matching based technique for the analysis of 2D gel images.

    PubMed

    Freire, Ana; Seoane, José A; Rodríguez, Alvaro; Ruiz-Romero, Cristina; López-Campos, Guillermo; Dorado, Julián

    2010-01-01

    Research at the protein level is a useful practice in personalized medicine. More specifically, 2D gel images obtained after the electrophoresis process can lead to an accurate diagnosis. Several computational approaches try to help clinicians establish the correspondence between pairs of proteins across multiple 2D gel images. Most of them align a patient image to a reference image. In this work, an approach based on block-matching techniques is developed. Its main characteristic is that it does not need to perform the whole alignment between two images, since it considers each protein separately. A comparison with other published methods is presented. It can be concluded that this method works over a broad range of proteomic images, even those with a high level of difficulty.

  13. Warped document image correction method based on heterogeneous registration strategies

    NASA Astrophysics Data System (ADS)

    Tong, Lijing; Zhan, Guoliang; Peng, Quanyao; Li, Yang; Li, Yifan

    2013-03-01

    With the popularity of digital cameras and the demand for digitized document images, using digital cameras to digitize document images has become an irresistible trend. However, warping of the document surface seriously degrades the quality of Optical Character Recognition (OCR). To improve the visual quality and the OCR rate of warped document images, this paper proposes a warped document image correction method based on heterogeneous registration strategies. The method mosaics two warped images of the same document taken from different viewpoints. Firstly, two feature points are selected from one image. Then the two feature points are registered in the other image based on heterogeneous registration strategies. Finally, the two images are mosaicked, and the best mosaicked image is selected according to the OCR results. As a result, for the best mosaicked image, the distortions are mostly removed and the OCR results are improved markedly. Experimental results show that the proposed method can resolve the issue of warped document image correction more effectively.

  14. 3D surface reconstruction of apples from 2D NIR images

    NASA Astrophysics Data System (ADS)

    Zhu, Bin; Jiang, Lu; Cheng, Xuemei; Tao, Yang

    2005-11-01

    Machine vision methods are widely used in apple defect detection and quality grading applications. Currently, 2D near-infrared (NIR) imaging of apples is often used to detect apple defects because the image intensity of defects is different from normal apple parts. However, a drawback of this method is that the apple calyx also exhibits similar image intensity to the apple defects. Since an apple calyx often appears in the NIR image, the false alarm rate is high with the 2D NIR imaging method. In this paper, a 2D NIR imaging method is extended to a 3D reconstruction so that the apple calyx can be differentiated from apple defects according to their different 3D depth information. The Lambertian model is used to evaluate the reflectance map of the apple surface, and then Pentland's Shape-From-Shading (SFS) method is applied to reconstruct the 3D surface information of the apple based on Fast Fourier Transform (FFT). Pentland's method is directly derived from human perception properties, making it close to the way human eyes recover 3D information from a 2D scene. In addition, the FFT reduces the computation time significantly. The reconstructed 3D apple surface maps are shown in the results, and different depths of apple calyx and defects are obtained correctly.

  15. Multiresolution image representation using combined 2-D and 1-D directional filter banks.

    PubMed

    Tanaka, Yuichi; Ikehara, Masaaki; Nguyen, Truong Q

    2009-02-01

    In this paper, effective multiresolution image representations using a combination of 2-D filter bank (FB) and directional wavelet transform (WT) are presented. The proposed methods yield simple implementation and low computation costs compared to previous 1-D and 2-D FB combinations or adaptive directional WT methods. Furthermore, they are nonredundant transforms and realize quad-tree like multiresolution representations. In applications on nonlinear approximation, image coding, and denoising, the proposed filter banks show visual quality improvements and have higher PSNR than the conventional separable WT or the contourlet.

  16. 2-D nonlinear IIR-filters for image processing - An exploratory analysis

    NASA Technical Reports Server (NTRS)

    Bauer, P. H.; Sartori, M.

    1991-01-01

    A new nonlinear IIR filter structure is introduced and its deterministic properties are analyzed. It is shown to be better suited for image processing applications than its linear shift-invariant counterpart. The new structure is obtained from causality inversion of a 2D quarter-plane causal linear filter with respect to the two directions of propagation. It is demonstrated that, by using this design, a nonlinear 2D lowpass filter can be constructed which is capable of effectively suppressing Gaussian or impulse noise without destroying important image information.

  17. Dense image registration through MRFs and efficient linear programming.

    PubMed

    Glocker, Ben; Komodakis, Nikos; Tziritas, Georgios; Navab, Nassir; Paragios, Nikos

    2008-12-01

    In this paper, we introduce a novel and efficient approach to dense image registration, which does not require a derivative of the employed cost function. In such a context, the registration problem is formulated using a discrete Markov random field objective function. First, towards dimensionality reduction on the variables we assume that the dense deformation field can be expressed using a small number of control points (registration grid) and an interpolation strategy. Then, the registration cost is expressed using a discrete sum over image costs (using an arbitrary similarity measure) projected on the control points, and a smoothness term that penalizes local deviations on the deformation field according to a neighborhood system on the grid. Towards a discrete approach, the search space is quantized resulting in a fully discrete model. In order to account for large deformations and produce results on a high resolution level, a multi-scale incremental approach is considered where the optimal solution is iteratively updated. This is done through successive morphings of the source towards the target image. Efficient linear programming using the primal dual principles is considered to recover the lowest potential of the cost function. Very promising results using synthetic data with known deformations and real data demonstrate the potentials of our approach.

  18. Finite-Dimensional Lie Algebras for Fast Diffeomorphic Image Registration.

    PubMed

    Zhang, Miaomiao; Fletcher, P Thomas

    2015-01-01

    This paper presents a fast geodesic shooting algorithm for diffeomorphic image registration. We first introduce a novel finite-dimensional Lie algebra structure on the space of bandlimited velocity fields. We then show that this space can effectively represent initial velocities for diffeomorphic image registration at much lower dimensions than typically used, with little to no loss in registration accuracy. We then leverage the fact that the geodesic evolution equations, as well as the adjoint Jacobi field equations needed for gradient descent methods, can be computed entirely in this finite-dimensional Lie algebra. The result is a geodesic shooting method for large deformation metric mapping (LDDMM) that is dramatically faster and less memory intensive than state-of-the-art methods. We demonstrate the effectiveness of our model to register 3D brain images and compare its registration accuracy, run-time, and memory consumption with leading LDDMM methods. We also show how our algorithm breaks through the prohibitive time and memory requirements of diffeomorphic atlas building.

  19. Nanohole-array-based device for 2D snapshot multispectral imaging.

    PubMed

    Najiminaini, Mohamadreza; Vasefi, Fartash; Kaminska, Bozena; Carson, Jeffrey J L

    2013-01-01

    We present a two-dimensional (2D) snapshot multispectral imager that utilizes the optical transmission characteristics of nanohole arrays (NHAs) in a gold film to resolve a mixture of input colors into multiple spectral bands. The multispectral device consists of blocks of NHAs, wherein each NHA has a unique periodicity that results in transmission resonances and minima in the visible and near-infrared regions. The multispectral device was illuminated over a wide spectral range, and the transmission was spectrally unmixed using a least-squares estimation algorithm. A NHA-based multispectral imaging system was built and tested in both reflection and transmission modes. The NHA-based multispectral imager was capable of extracting 2D multispectral images representative of four independent bands within the spectral range of 662 nm to 832 nm for a variety of targets. The multispectral device can potentially be integrated into a variety of imaging sensor systems.
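
    The least-squares unmixing step referred to above can be sketched as follows, assuming a calibration matrix of per-band NHA transmission responses is available; the variable names and the per-pixel formulation are illustrative.

        import numpy as np

        def unmix_pixel(measured, response):
            """Estimate band abundances for one pixel.

            measured : (M,) transmission readings from M NHA blocks
            response : (M, K) calibration matrix; column k = response of each block to band k
            """
            abundances, *_ = np.linalg.lstsq(response, measured, rcond=None)
            return np.clip(abundances, 0.0, None)          # negative abundances are non-physical

        def unmix_cube(cube, response):
            """Apply the same least-squares unmixing to an (H, W, M) measurement cube."""
            h, w, m = cube.shape
            flat = cube.reshape(-1, m).T                    # (M, H*W)
            est, *_ = np.linalg.lstsq(response, flat, rcond=None)
            return np.clip(est.T.reshape(h, w, -1), 0.0, None)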

  20. Video image stabilization and registration--plus

    NASA Technical Reports Server (NTRS)

    Hathaway, David H. (Inventor)

    2009-01-01

    A method of stabilizing a video image displayed in multiple video fields of a video sequence includes the steps of: subdividing a selected area of a first video field into nested pixel blocks; determining horizontal and vertical translation of each of the pixel blocks in each of the pixel block subdivision levels from the first video field to a second video field; and determining translation of the image from the first video field to the second video field by determining a change in magnification of the image from the first video field to the second video field in each of horizontal and vertical directions, and determining shear of the image from the first video field to the second video field in each of the horizontal and vertical directions.

  1. 3D reconstruction of a carotid bifurcation from 2D transversal ultrasound images.

    PubMed

    Yeom, Eunseop; Nam, Kweon-Ho; Jin, Changzhu; Paeng, Dong-Guk; Lee, Sang-Joon

    2014-12-01

    Visualizing and analyzing the morphological structure of carotid bifurcations are important for understanding the etiology of carotid atherosclerosis, which is a major cause of stroke and transient ischemic attack. For delineation of vasculature in the carotid artery, ultrasound examinations have been widely employed because they are noninvasive and free of ionizing radiation. However, conventional 2D ultrasound imaging has technical limitations in observing the complicated 3D shapes and asymmetric vasodilation of bifurcations. This study proposes image-processing techniques for better 3D reconstruction of a carotid bifurcation in a rat by using 2D cross-sectional ultrasound images. A high-resolution ultrasound imaging system with a probe centered at 40 MHz was employed to obtain 2D transverse images. The lumen boundaries in each transverse ultrasound image were detected using three different techniques: ellipse fitting, correlation mapping to visualize the decorrelation of blood flow, and ellipse fitting on the correlation map. When the results are compared, the third technique provides relatively good boundary extraction. The incomplete boundaries of the arterial lumen caused by acoustic artifacts are somewhat resolved by adopting the correlation mapping, and the distortion in boundary detection near the bifurcation apex is largely reduced by the ellipse-fitting technique. The 3D lumen geometry of a carotid artery was obtained by volumetric rendering of several 2D slices. To depict the 3D vasodilatation of the carotid bifurcation, lumen geometries at the contraction and expansion states were simultaneously rendered at various view angles. The present 3D reconstruction methods would be useful for efficient extraction and construction of the 3D lumen geometries of carotid bifurcations from 2D ultrasound images.
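
    The ellipse-fitting step can be illustrated with OpenCV's least-squares ellipse fit applied to the largest contour of a candidate lumen mask. This sketch assumes a binary mask is already available (for example from thresholding or from the correlation map) and is not the authors' pipeline.

        import cv2
        import numpy as np

        def fit_lumen_ellipse(mask):
            """Fit an ellipse to the largest contour of a binary lumen mask (uint8, 0/255)."""
            # OpenCV 4.x signature: returns (contours, hierarchy)
            contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
            if not contours:
                return None
            contour = max(contours, key=cv2.contourArea)
            if len(contour) < 5:                            # fitEllipse needs at least 5 points
                return None
            (cx, cy), (major, minor), angle = cv2.fitEllipse(contour)
            return {"center": (cx, cy), "axes": (major, minor), "angle_deg": angle}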

  2. Demons deformable registration for cone-beam CT guidance: registration of pre- and intra-operative images

    NASA Astrophysics Data System (ADS)

    Nithiananthan, S.; Brock, K. K.; Daly, M. J.; Chan, H.; Irish, J. C.; Siewerdsen, J. H.

    2010-02-01

    High-quality intraoperative 3D imaging systems such as cone-beam CT (CBCT) hold considerable promise for image-guided surgical procedures in the head and neck. With a large amount of preoperative imaging and planning information available in addition to the intraoperative images, it becomes desirable to integrate all sources of imaging information within the same anatomical frame of reference using deformable image registration. Fast intensity-based algorithms are available which can perform deformable image registration within a period of time short enough for intraoperative use. However, CBCT images often contain voxel intensity inaccuracies which can hinder registration accuracy, for example due to x-ray scatter, truncation, and/or erroneous scaling normalization within the 3D reconstruction algorithm. In this work, we present a method of integrating an iterative intensity matching step within the operation of a multi-scale Demons registration algorithm. Registration accuracy was evaluated in a cadaver model and showed that a conventional Demons implementation (with either no intensity match or a single histogram match) introduced anatomical distortion and degradation in target registration error (TRE). The iterative intensity matching procedure, on the other hand, provided robust registration across a broad range of intensity inaccuracies.
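
    As background, the classic Thirion demons update that such algorithms build on can be sketched as below. This single-scale 2D version with Gaussian field smoothing is illustrative only; it omits the multi-scale scheme and the iterative intensity matching that are the subject of the record.

        import numpy as np
        from scipy.ndimage import gaussian_filter, map_coordinates

        def demons_step(fixed, moving, u, smooth_sigma=2.0):
            """One demons iteration for a 2D displacement field u of shape (2, H, W)."""
            grid = np.indices(fixed.shape).astype(float)
            warped = map_coordinates(moving, grid + u, order=1, mode="nearest")
            diff = warped - fixed
            gy, gx = np.gradient(fixed)                    # gradients along rows, columns
            denom = gx ** 2 + gy ** 2 + diff ** 2
            denom[denom == 0] = 1.0
            u[0] -= diff * gy / denom                      # demons force, row component
            u[1] -= diff * gx / denom                      # demons force, column component
            u[0] = gaussian_filter(u[0], smooth_sigma)     # regularize the field
            u[1] = gaussian_filter(u[1], smooth_sigma)
            return u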

  3. Elastic image registration via rigid object motion induced deformation

    NASA Astrophysics Data System (ADS)

    Zheng, Xiaofen; Udupa, Jayaram K.; Hirsch, Bruce E.

    2011-03-01

    In this paper, we estimate the deformations induced on soft tissues by the rigid independent movements of hard objects and create an admixture of rigid and elastic adaptive image registration transformations. By automatically segmenting and independently estimating the movement of rigid objects in 3D images, we can maintain rigidity in bones and hard tissues while appropriately deforming soft tissues. We tested our algorithms on 20 pairs of 3D MRI datasets pertaining to a kinematic study of the flexibility of the ankle complex of normal feet as well as ankles affected by abnormalities in foot architecture and ligament injuries. The results show that elastic image registration via rigid object-induced deformation outperforms purely rigid and purely nonrigid approaches.

  4. Simultaneous reconstruction of the activity image and registration of the CT image in TOF-PET

    NASA Astrophysics Data System (ADS)

    Rezaei, Ahmadreza; Michel, Christian; Casey, Michael E.; Nuyts, Johan

    2016-02-01

    Previously, maximum-likelihood methods have been proposed to jointly estimate the activity image and the attenuation image or the attenuation sinogram from time-of-flight (TOF) positron emission tomography (PET) data. In this contribution, we propose a method that addresses the possible alignment problem of the TOF-PET emission data and the computed tomography (CT) attenuation data, by combining reconstruction and registration. The method, called MLRR, iteratively reconstructs the activity image while registering the available CT-based attenuation image, so that the pair of activity and attenuation images maximise the likelihood of the TOF emission sinogram. The algorithm is slow to converge, but some acceleration could be achieved by using Nesterov’s momentum method and by applying a multi-resolution scheme for the non-rigid displacement estimation. The latter also helps to avoid local optima, although convergence to the global optimum cannot be guaranteed. The results are evaluated on 2D and 3D simulations as well as a respiratory gated clinical scan. Our experiments indicate that the proposed method is able to correct for possible misalignment of the CT-based attenuation image, and is therefore a very promising approach to suppressing attenuation artefacts in clinical PET/CT. When applied to respiratory gated data of a patient scan, it produced deformations that are compatible with breathing motion and which reduced the well known attenuation artefact near the dome of the liver. Since the method makes use of the energy-converted CT attenuation image, the scale problem of joint reconstruction is automatically solved.

  5. SU-E-J-237: Image Feature Based DRR and Portal Image Registration

    SciTech Connect

    Wang, X; Chang, J

    2014-06-01

    Purpose: Two-dimensional (2D) matching of the kV X-ray and digitally reconstructed radiography (DRR) images is an important setup technique for image-guided radiotherapy (IGRT). In our clinics, mutual information based methods are used for this purpose on commercial linear accelerators, but they often require manual corrections. This work demonstrates the feasibility of using feature-based image transforms to register kV and DRR images. Methods: The scale invariant feature transform (SIFT) method was implemented to detect matching image details (or key points) between the kV and DRR images. These key points represent high image intensity gradients, and thus scale invariant features. Due to the poor contrast of our kV images, direct application of the SIFT method yielded many detection errors. To assist the finding of key points, the center coordinates of the kV and DRR images were read from the DICOM header, and the two groups of key points with similar relative positions to their corresponding centers were paired up. Using these points, a rigid transform (with scaling, horizontal and vertical shifts) was estimated. We also artificially introduced vertical and horizontal shifts to test the accuracy of our registration method on anterior-posterior (AP) and lateral pelvic images. Results: The results provided a satisfactory overlay of the transformed kV image onto the DRR image. The introduced vs. detected shifts were fit by linear regression. In the AP image experiments, linear regression analysis showed slopes of 1.15 and 0.98 with R2 of 0.89 and 0.99 for the horizontal and vertical shifts, respectively. The corresponding results for the lateral image shifts are slopes of 1.2 and 1.3 with R2 of 0.72 and 0.82. Conclusion: This work provides an alternative technique for kV to DRR alignment. Further improvements in the estimation accuracy and image contrast tolerance are underway.
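
    A generic version of the feature-matching step can be sketched with OpenCV's SIFT implementation and a partial affine (rigid plus scale) estimate. The sketch below does not include the center-coordinate pairing heuristic described in the abstract; function and variable names are illustrative.

        import cv2
        import numpy as np

        def match_kv_to_drr(kv_img, drr_img):
            """Estimate a similarity transform (scale, rotation, shifts) from kV to DRR."""
            sift = cv2.SIFT_create()
            kp1, des1 = sift.detectAndCompute(kv_img, None)
            kp2, des2 = sift.detectAndCompute(drr_img, None)
            matcher = cv2.BFMatcher(cv2.NORM_L2)
            matches = matcher.knnMatch(des1, des2, k=2)
            good = [m[0] for m in matches
                    if len(m) == 2 and m[0].distance < 0.75 * m[1].distance]  # Lowe ratio test
            src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
            dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
            M, inliers = cv2.estimateAffinePartial2D(src, dst, method=cv2.RANSAC)
            return M   # 2x3 matrix: scaling/rotation plus horizontal and vertical shifts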

  6. Elastic registration of multiphase CT images of liver

    NASA Astrophysics Data System (ADS)

    Heldmann, Stefan; Zidowitz, Stephan

    2009-02-01

    In this work we present a novel approach for elastic image registration of multi-phase contrast enhanced CT images of the liver. A problem in registration of multiphase CT is that the images contain similar but complementary structures. In our application each image shows a different part of the vessel system, e.g., portal venous, hepatic venous, arterial, or biliary vessels. Portal, arterial and biliary vessels run in parallel and abut on each other, forming the so-called portal triad, while hepatic veins run independently. Naive registration will tend to align complementary vessels. Our new approach is based on minimizing a cost function consisting of a distance measure and a regularizer. For the distance we use the recently proposed normalized gradient field measure that focuses on the alignment of edges. For the regularizer we use the linear elastic potential. The key feature of our approach is an additional penalty term using segmentations of the different vessel systems in the images to avoid overlaps of complementary structures. We successfully demonstrate our new method on real data examples.
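
    The normalized gradient field distance mentioned above measures edge alignment rather than intensity agreement. A minimal sketch of the measure for two aligned 3D volumes is given below, purely as an illustration of the distance term, not of the authors' full cost function; the edge parameter eps is a tuning constant.

        import numpy as np

        def ngf_distance(fixed, moving, eps=1e-2):
            """Normalized gradient field distance between two aligned 3D images."""
            gf = np.stack(np.gradient(fixed))
            gm = np.stack(np.gradient(moving))
            nf = np.sqrt((gf ** 2).sum(axis=0) + eps ** 2)
            nm = np.sqrt((gm ** 2).sum(axis=0) + eps ** 2)
            cos2 = ((gf * gm).sum(axis=0) / (nf * nm)) ** 2
            return float((1.0 - cos2).sum())   # zero when all gradients are parallel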

  7. The ANACONDA algorithm for deformable image registration in radiotherapy

    SciTech Connect

    Weistrand, Ola; Svensson, Stina

    2015-01-15

    Purpose: The purpose of this work was to describe a versatile algorithm for deformable image registration with applications in radiotherapy and to validate it on thoracic 4DCT data as well as CT/cone beam CT (CBCT) data. Methods: The ANAtomically CONstrained Deformation Algorithm (ANACONDA) combines image information (i.e., intensities) with anatomical information as provided by contoured image sets. The registration problem is formulated as a nonlinear optimization problem and solved with an in-house developed solver tailored to this problem. The objective function, which is minimized during optimization, is a linear combination of four nonlinear terms: 1. an image similarity term; 2. a grid regularization term, which aims at keeping the deformed image grid smooth and invertible; 3. a shape-based regularization term, which works to keep the deformation anatomically reasonable when regions of interest are present in the reference image; and 4. a penalty term, which is added to the optimization problem when controlling structures are used and is aimed at deforming the selected structure in the reference image to the corresponding structure in the target image. Results: To validate ANACONDA, the authors have used 16 publicly available thoracic 4DCT data sets for which target registration errors from several algorithms have been reported in the literature. On average for the 16 data sets, the target registration error is 1.17 ± 0.87 mm, the Dice similarity coefficient is 0.98 for the two lungs, and image similarity, measured by the correlation coefficient, is 0.95. The authors have also validated ANACONDA using two pelvic cases and one head and neck case with planning CT and daily acquired CBCT. Each image has been contoured by a physician (radiation oncologist) or experienced radiation therapist. The results are an improvement with respect to rigid registration. However, for the head and neck case, the sample set is too small to show statistical significance. Conclusions: ANACONDA

  8. Quantification of local changes in myocardial motion by diffeomorphic registration via currents: application to paced hypertrophic obstructive cardiomyopathy in 2D echocardiographic sequences.

    PubMed

    Duchateau, Nicolas; Giraldeau, Geneviève; Gabrielli, Luigi; Fernández-Armenta, Juan; Penela, Diego; Evertz, Reinder; Mont, Lluis; Brugada, Josep; Berruezo, Antonio; Sitges, Marta; Bijnens, Bart H

    2015-01-01

    Time-to-peak measurements and single-parameter observations are cumbersome and often confusing for quantifying local changes in myocardial function. Recent spatiotemporal normalization techniques can provide a global picture of myocardial motion and strain patterns and overcome some of these limitations. Despite these advances, the quantification of pattern changes remains descriptive, which limits their relevance for longitudinal studies. Our paper provides a new perspective on the longitudinal analysis of myocardial motion. Non-rigid registration (diffeomorphic registration via currents) is used to match pairs of patterns, and pattern changes are inferred from the registration output. Scalability is added to the different components of the input patterns in order to tune the contributions of the spatial, temporal and magnitude dimensions to the data changes of interest for our application. The technique is illustrated on 2D echocardiographic sequences from 15 patients with hypertrophic obstructive cardiomyopathy. These patients underwent biventricular pacing, which aims at provoking mechanical dyssynchrony to reduce left ventricular outflow tract (LVOT) obstruction. We demonstrate that our method can automatically quantify timing and magnitude changes in myocardial motion between baseline (non-paced) and 1-year follow-up (pacing on), resulting in a more robust analysis of complex patterns and subtle changes. Our method helps confirm that the reduction of the LVOT pressure gradient actually comes from the induction of the type of dyssynchrony that was expected.

  9. 2D electron temperature diagnostic using soft x-ray imaging technique

    SciTech Connect

    Nishimura, K.; Sanpei, A.; Tanaka, H.; Ishii, G.; Kodera, R.; Ueba, R.; Himura, H.; Masamune, S.; Ohdachi, S.; Mizuguchi, N.

    2014-03-15

    We have developed a two-dimensional (2D) electron temperature (Te) diagnostic system for thermal structure studies in a low-aspect-ratio reversed field pinch (RFP). The system consists of a soft x-ray (SXR) camera with two pinholes for two kinds of absorber foils, combined with a high-speed camera. Two SXR images with almost the same viewing area are formed through different absorber foils on a single micro-channel plate (MCP). A 2D Te image can then be obtained by calculating the intensity ratio for each element of the images. We have succeeded in distinguishing the Te image in the quasi-single helicity (QSH) state from that in the multi-helicity (MH) RFP state, where the former is characterized by a concentrated magnetic fluctuation spectrum and the latter by a broad spectrum of edge magnetic fluctuations.

  10. Accelerated nonrigid intensity-based image registration using importance sampling.

    PubMed

    Bhagalia, Roshni; Fessler, Jeffrey A; Kim, Boklye

    2009-08-01

    Nonrigid image registration methods using intensity-based similarity metrics are becoming increasingly common tools to estimate many types of deformations. Nonrigid warps can be very flexible with a large number of parameters and gradient optimization schemes are widely used to estimate them. However, for large datasets, the computation of the gradient of the similarity metric with respect to these many parameters becomes very time consuming. Using a small random subset of image voxels to approximate the gradient can reduce computation time. This work focuses on the use of importance sampling to reduce the variance of this gradient approximation. The proposed importance sampling framework is based on an edge-dependent adaptive sampling distribution designed for use with intensity-based registration algorithms. We compare the performance of registration based on stochastic approximations with and without importance sampling to that using deterministic gradient descent. Empirical results, on simulated magnetic resonance brain data and real computed tomography inhale-exhale lung data from eight subjects, show that a combination of stochastic approximation methods and importance sampling accelerates the registration process while preserving accuracy.
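
    The edge-dependent sampling idea can be sketched as follows: voxels are drawn with probability proportional to a mixture of a uniform term and the gradient magnitude of the image, and only those voxels enter the stochastic gradient of the similarity metric. The mixing weight and function names below are illustrative, not the authors' parameterization.

        import numpy as np

        def edge_weighted_sample(image, n_samples, uniform_frac=0.2, rng=None):
            """Draw voxel indices with probability proportional to local edge strength."""
            rng = rng or np.random.default_rng()
            grad_mag = np.sqrt(sum(g ** 2 for g in np.gradient(image.astype(float))))
            p_edge = grad_mag.ravel() + 1e-12
            p_edge /= p_edge.sum()
            p_unif = np.full(p_edge.size, 1.0 / p_edge.size)
            p = uniform_frac * p_unif + (1.0 - uniform_frac) * p_edge
            flat_idx = rng.choice(p.size, size=n_samples, replace=False, p=p)
            # Return voxel coordinates and their sampling probabilities; the latter are
            # needed to weight each voxel's metric-gradient contribution by 1/p so the
            # stochastic gradient estimate remains approximately unbiased.
            return np.unravel_index(flat_idx, image.shape), p[flat_idx]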

  11. Real-time 2D Imaging of Thermal and Mechanical Tissue Response to Focused Ultrasound

    NASA Astrophysics Data System (ADS)

    Liu, Dalong; Ebbini, Emad S.

    2010-03-01

    An integrated system capable of performing high frame-rate two-dimensional (2D) temperature imaging in real time has been developed. The system consists of a SonixRP ultrasound scanner and a custom-built data processing unit connected by Gigabit Ethernet (GbE). The SonixRP scanner, which serves as the frontend of the integrated system, provides the flexibility to control the beam sequence and access the radio frequency (RF) data in real time through its research interface. The RF data is then streamed to the backend of the system through GbE, where the data is processed using a 2D temperature estimation algorithm running on a general-purpose graphics processing unit (GPU). Using this system, we have developed a 2D high frame-rate imaging mode, M2D, for imaging the mechanical and thermal tissue response to subtherapeutic HIFU beams. In this paper, we present results from imaging subtherapeutic HIFU beams in in vitro porcine heart before and after lesion formation. The results demonstrate the feasibility of imaging tissue parameter changes due to HIFU-induced lesions.

  12. Detection of Leptomeningeal Metastasis by Contrast-Enhanced 3D T1-SPACE: Comparison with 2D FLAIR and Contrast-Enhanced 2D T1-Weighted Images

    PubMed Central

    Gil, Bomi; Hwang, Eo-Jin; Lee, Song; Jang, Jinhee; Jung, So-Lyung; Ahn, Kook-Jin; Kim, Bum-soo

    2016-01-01

    Introduction: To compare the diagnostic accuracy of contrast-enhanced 3D (three-dimensional) T1-weighted sampling perfection with application-optimized contrasts using different flip angle evolutions (T1-SPACE), 2D fluid attenuated inversion recovery (FLAIR) images and 2D contrast-enhanced T1-weighted images in the detection of leptomeningeal metastasis, without invasive procedures such as CSF tapping. Materials and Methods: Three groups of patients were included retrospectively over a 9-month period (from 2013-04-01 to 2013-12-31): group 1, patients with positive malignant cells in CSF cytology (n = 22); group 2, stroke patients with steno-occlusion in the ICA or MCA (n = 16); and group 3, patients with negative results on MRI, whose symptoms were dizziness or headache (n = 25). A total of 63 sets of MR images were separately collected and randomly arranged: (1) CE 3D T1-SPACE; (2) 2D FLAIR; and (3) CE T1-GRE, using a 3-Tesla MR system. A faculty neuroradiologist with 8 years of experience and a 2nd grade trainee in radiology reviewed each MR image set, blinded to the results of CSF cytology, and coded their observations as positive or negative for leptomeningeal metastasis. The CSF cytology result was considered the gold standard. Sensitivity and specificity of each MR image set were calculated. Diagnostic accuracy was compared using McNemar’s test. A Cohen's kappa analysis was performed to assess inter-observer agreement. Results: Diagnostic accuracy was not different between 3D T1-SPACE and CSF cytology for both raters. However, the accuracy of 2D FLAIR and 2D contrast-enhanced T1-weighted GRE was inconsistent between the two raters. The kappa statistics were 0.657 (3D T1-SPACE), 0.420 (2D FLAIR), and 0.160 (2D contrast-enhanced T1-weighted GRE). The 3D T1-SPACE images showed the highest inter-observer agreement between the raters. Conclusions: Compared to 2D FLAIR and 2D contrast-enhanced T1-weighted GRE, contrast-enhanced 3D T1-SPACE showed a better detection rate of
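
    The accuracy and agreement measures reported above can be reproduced from rater decisions with a few lines. The sketch below assumes binary arrays of rater calls and CSF-cytology ground truth, and uses scikit-learn's cohen_kappa_score for the inter-observer statistic; the array names are placeholders.

        import numpy as np
        from sklearn.metrics import cohen_kappa_score

        def sensitivity_specificity(calls, truth):
            """Per-sequence sensitivity and specificity against the cytology gold standard."""
            calls, truth = np.asarray(calls, bool), np.asarray(truth, bool)
            tp = np.sum(calls & truth)
            tn = np.sum(~calls & ~truth)
            fp = np.sum(calls & ~truth)
            fn = np.sum(~calls & truth)
            return tp / (tp + fn), tn / (tn + fp)

        # Inter-observer agreement between the two raters for one sequence (hypothetical arrays):
        # kappa = cohen_kappa_score(rater1_calls, rater2_calls)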

  13. a Novel Image Registration Algorithm for SAR and Optical Images Based on Virtual Points

    NASA Astrophysics Data System (ADS)

    Ai, C.; Feng, T.; Wang, J.; Zhang, S.

    2013-07-01

    Optical images are rich in spectral information, while SAR instruments can work both day and night and obtain images through fog and clouds. Combining these two types of complementary images offers great advantages for image interpretation. Image registration is an inevitable and critical problem for applications of multi-source remote sensing images, such as image fusion, pattern recognition and change detection. However, the different characteristics of SAR and optical images, which arise from differences in imaging mechanisms and from speckle noise in SAR images, bring great challenges to multi-source image registration. Therefore, a novel image registration algorithm based on virtual points, derived from corresponding region features, is proposed in this paper. Firstly, image classification methods are adopted to extract closed regions from the SAR and optical images respectively. Secondly, corresponding region features are matched by constructing a cost function with rotation-invariant region descriptors such as area, perimeter, and the lengths of the major and minor axes. Thirdly, virtual points derived from the corresponding region features, such as the centroids, endpoints and cross points of the major and minor axes, are used to calculate initial registration parameters. Finally, the parameters are corrected by an iterative calculation, which is terminated when the overlap of corresponding region features reaches its maximum. In the experiment, WorldView-2 and Radarsat-2 images, with 0.5 m and 4.7 m spatial resolution respectively, obtained in August 2010 in Suzhou, are used to test the registration method. It is shown that the multi-source image registration algorithm presented above is effective, and the accuracy of registration reaches pixel level.

  14. Combining 2D synchrosqueezed wave packet transform with optimization for crystal image analysis

    NASA Astrophysics Data System (ADS)

    Lu, Jianfeng; Wirth, Benedikt; Yang, Haizhao

    2016-04-01

    We develop a variational optimization method for crystal analysis in atomic resolution images, which uses information from a 2D synchrosqueezed transform (SST) as input. The synchrosqueezed transform is applied to extract initial information from atomic crystal images: crystal defects, rotations and the gradient of elastic deformation. The deformation gradient estimate is then improved outside the identified defect region via a variational approach, to obtain more robust results agreeing better with the physical constraints. The variational model is optimized by a nonlinear projected conjugate gradient method. Both examples of images from computer simulations and imaging experiments are analyzed, with results demonstrating the effectiveness of the proposed method.

  15. Snapshot 2D tomography via coded aperture x-ray scatter imaging

    PubMed Central

    MacCabe, Kenneth P.; Holmgren, Andrew D.; Tornai, Martin P.; Brady, David J.

    2015-01-01

    This paper describes a fan beam coded aperture x-ray scatter imaging system which acquires a tomographic image from each snapshot. This technique exploits cylindrical symmetry of the scattering cross section to avoid the scanning motion typically required by projection tomography. We use a coded aperture with a harmonic dependence to determine range, and a shift code to determine cross-range. Here we use a forward-scatter configuration to image 2D objects and use serial exposures to acquire tomographic video of motion within a plane. Our reconstruction algorithm also estimates the angular dependence of the scattered radiance, a step toward materials imaging and identification. PMID:23842254

  16. Nonlinear image registration with bidirectional metric and reciprocal regularization

    PubMed Central

    Ying, Shihui; Li, Dan; Xiao, Bin; Peng, Yaxin; Du, Shaoyi; Xu, Meifeng

    2017-01-01

    Nonlinear registration is an important technique to align two different images and is widely applied in medical image analysis. In this paper, we develop a novel nonlinear registration framework based on the diffeomorphic demons, where a reciprocal regularizer is introduced to ensure that the deformation between two images is an exact diffeomorphism. In detail, first, we adopt a bidirectional metric to improve the symmetry of the energy functional, whose variables are two reciprocal deformations. Secondly, we relax these two deformations into two independent variables and introduce a reciprocal regularizer to ensure that the deformations form an exact diffeomorphism. Then, we utilize an alternating iterative strategy to decouple the model into two minimizing subproblems, where a new closed form for the approximate velocity of the deformation is calculated. Finally, we compare our proposed algorithm on two data sets of real brain MR images with two related, conventional methods. The results validate that our proposed method improves the accuracy and robustness of registration, and that the obtained bidirectional deformations are actually reciprocal. PMID:28231342

  17. Nonlinear image registration with bidirectional metric and reciprocal regularization.

    PubMed

    Ying, Shihui; Li, Dan; Xiao, Bin; Peng, Yaxin; Du, Shaoyi; Xu, Meifeng

    2017-01-01

    Nonlinear registration is an important technique to align two different images and is widely applied in medical image analysis. In this paper, we develop a novel nonlinear registration framework based on the diffeomorphic demons, where a reciprocal regularizer is introduced to ensure that the deformation between two images is an exact diffeomorphism. In detail, first, we adopt a bidirectional metric to improve the symmetry of the energy functional, whose variables are two reciprocal deformations. Secondly, we relax these two deformations into two independent variables and introduce a reciprocal regularizer to ensure that the deformations form an exact diffeomorphism. Then, we utilize an alternating iterative strategy to decouple the model into two minimizing subproblems, where a new closed form for the approximate velocity of the deformation is calculated. Finally, we compare our proposed algorithm on two data sets of real brain MR images with two related, conventional methods. The results validate that our proposed method improves the accuracy and robustness of registration, and that the obtained bidirectional deformations are actually reciprocal.

  18. Estimation of lung lobar sliding using image registration

    NASA Astrophysics Data System (ADS)

    Amelon, Ryan; Cao, Kunlin; Reinhardt, Joseph M.; Christensen, Gary E.; Raghavan, Madhavan

    2012-03-01

    MOTIVATION: The lobes of the lungs slide relative to each other during breathing. Quantifying lobar sliding can aid in better understanding lung function, better modeling of lung dynamics, and a better understanding of the limits of image registration performance near fissures. We have developed a method to estimate lobar sliding in the lung from image registration of CT scans. METHODS: Six human lungs were analyzed using CT scans spanning functional residual capacity (FRC) to total lung capacity (TLC). The lung lobes were segmented and registered on a lobe-by-lobe basis. The displacement fields from the independent lobe registrations were then combined into a single image. This technique allows for displacement discontinuity at lobar boundaries. The displacement field was then analyzed as a continuum by forming finite elements from the voxel grid of the FRC image. Elements at a discontinuity will appear to have undergone significantly elevated 'shear stretch' compared to those within the parenchyma. Shear stretch is shown to be a good measure of sliding magnitude in this context. RESULTS: The sliding map clearly delineated the fissures of the lung. The fissure between the right upper and right lower lobes showed the greatest sliding in all subjects while the fissure between the right upper and right middle lobe showed the least sliding.
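
    The 'shear stretch' measure can be illustrated directly from a voxel displacement field: form the deformation gradient F = I + du/dX with finite differences, compute the right Cauchy-Green tensor C = F^T F, and read its off-diagonal (shear) components. The sketch below uses a simple off-diagonal norm as the shear indicator; the authors' exact finite-element formulation may differ.

        import numpy as np

        def shear_stretch_map(u, spacing=(1.0, 1.0, 1.0)):
            """u: displacement field of shape (3, Z, Y, X). Returns a per-voxel shear measure."""
            grads = [np.gradient(u[i], *spacing) for i in range(3)]   # grads[i][j] = du_i/dx_j
            zyx = u.shape[1:]
            F = np.zeros((3, 3) + zyx)
            for i in range(3):
                for j in range(3):
                    F[i, j] = grads[i][j] + (1.0 if i == j else 0.0)
            C = np.einsum('ki...,kj...->ij...', F, F)                 # right Cauchy-Green tensor
            off = C[0, 1] ** 2 + C[0, 2] ** 2 + C[1, 2] ** 2
            return np.sqrt(off)    # large values flag sliding discontinuities (fissures)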

  19. Translation and Rotation Invariant Multiscale Image Registration

    DTIC Science & Technology

    2002-03-01

    wavelet transform. We extend this work by creating a new multiscale transform to register two images with translation or rotation differences, independent... continuous wavelet transform to mimic the two-dimensional redundant discrete wavelet transform. This allows us to obtain multiple subbands at various scales... while maintaining the desirable properties of the redundant discrete wavelet transform. Whereas the discrete wavelet transform produces results only

  20. Robustness and Accuracy of Feature-Based Single Image 2-D–3-D Registration Without Correspondences for Image-Guided Intervention

    PubMed Central

    Armand, Mehran; Otake, Yoshito; Yau, Wai-Pan; Cheung, Paul Y. S.; Hu, Yong; Taylor, Russell H.

    2015-01-01

    2-D-to-3-D registration is critical and fundamental in image-guided interventions. It can be achieved from a single image using paired point correspondences between the object and the image. The common assumption that such correspondences can readily be established does not necessarily hold for image-guided interventions. Intraoperative image clutter and an imperfect feature extraction method may introduce false detections and, due to the physics of X-ray imaging, the 2-D image point features may be indistinguishable from each other and/or obscured by anatomy, also causing false detection of the point features. These issues create difficulties in establishing correspondences between image features and 3-D data points. In this paper, we propose an accurate, robust, and fast method to accomplish 2-D-3-D registration using a single image without the need for establishing paired correspondences in the presence of false detections. We formulate 2-D-3-D registration as a maximum likelihood estimation problem, which is then solved by coupling expectation maximization with particle swarm optimization. The proposed method was evaluated in a phantom and a cadaver study. In the phantom study, it achieved subdegree rotation errors and submillimeter in-plane (X-Y plane) translation errors. In both studies, it outperformed the state-of-the-art methods that do not use paired correspondences and achieved the same accuracy as a state-of-the-art globally optimal method that uses correct paired correspondences. PMID:23955696

  1. A comparison of 2D and 3D digital image correlation for a membrane under inflation

    PubMed Central

    Murienne, Barbara J.; Nguyen, Thao D.

    2015-01-01

    Three-dimensional (3D) digital image correlation (DIC) is becoming widely used to characterize the behavior of structures undergoing 3D deformations. However, the use of 3D-DIC can be challenging under certain conditions, such as high magnification, and therefore small depth of field, or a highly controlled environment with limited access for two angled cameras. The purpose of this study is to compare 2D-DIC and 3D-DIC for the same inflation experiment and evaluate whether 2D-DIC can be used when conditions discourage the use of a stereo-vision system. A latex membrane was inflated vertically to 5.41 kPa (reference pressure), then to 7.87 kPa (deformed pressure). A two-camera stereo-vision system acquired top-down images of the membrane, while a single camera system simultaneously recorded images of the membrane in profile. 2D-DIC and 3D-DIC were used to calculate horizontal (in the membrane plane) and vertical (out of the membrane plane) displacements, and meridional strain. Under static conditions, the baseline uncertainties in horizontal displacement and strain were smaller for 3D-DIC than 2D-DIC. However, the opposite was observed for the vertical displacement, for which 2D-DIC had a smaller baseline uncertainty. The baseline absolute errors in vertical displacement and strain were similar for both DIC methods, but the error was larger for 2D-DIC than 3D-DIC for the horizontal displacement. Under inflation, the variability in the measurements was larger than under static conditions for both DIC methods. 2D-DIC showed a smaller variability in displacements than 3D-DIC, especially for the vertical displacement, but a similar strain uncertainty. The absolute differences in the average displacements and strain between 3D-DIC and 2D-DIC were in the range of the 3D-DIC variability. Those findings suggest that 2D-DIC might be used as an alternative to 3D-DIC to study the inflation response of materials under certain conditions. PMID:26543296

  2. A comparison of 2D and 3D digital image correlation for a membrane under inflation.

    PubMed

    Murienne, Barbara J; Nguyen, Thao D

    2016-02-01

    Three-dimensional (3D) digital image correlation (DIC) is becoming widely used to characterize the behavior of structures undergoing 3D deformations. However, the use of 3D-DIC can be challenging under certain conditions, such as high magnification, and therefore small depth of field, or a highly controlled environment with limited access for two angled cameras. The purpose of this study is to compare 2D-DIC and 3D-DIC for the same inflation experiment and evaluate whether 2D-DIC can be used when conditions discourage the use of a stereo-vision system. A latex membrane was inflated vertically to 5.41 kPa (reference pressure), then to 7.87 kPa (deformed pressure). A two-camera stereo-vision system acquired top-down images of the membrane, while a single camera system simultaneously recorded images of the membrane in profile. 2D-DIC and 3D-DIC were used to calculate horizontal (in the membrane plane) and vertical (out of the membrane plane) displacements, and meridional strain. Under static conditions, the baseline uncertainties in horizontal displacement and strain were smaller for 3D-DIC than 2D-DIC. However, the opposite was observed for the vertical displacement, for which 2D-DIC had a smaller baseline uncertainty. The baseline absolute errors in vertical displacement and strain were similar for both DIC methods, but the error was larger for 2D-DIC than 3D-DIC for the horizontal displacement. Under inflation, the variability in the measurements was larger than under static conditions for both DIC methods. 2D-DIC showed a smaller variability in displacements than 3D-DIC, especially for the vertical displacement, but a similar strain uncertainty. The absolute differences in the average displacements and strain between 3D-DIC and 2D-DIC were in the range of the 3D-DIC variability. Those findings suggest that 2D-DIC might be used as an alternative to 3D-DIC to study the inflation response of materials under certain conditions.

  3. A comparison of 2D and 3D digital image correlation for a membrane under inflation

    NASA Astrophysics Data System (ADS)

    Murienne, Barbara J.; Nguyen, Thao D.

    2016-02-01

    Three-dimensional (3D) digital image correlation (DIC) is becoming widely used to characterize the behavior of structures undergoing 3D deformations. However, the use of 3D-DIC can be challenging under certain conditions, such as high magnification, and therefore small depth of field, or a highly controlled environment with limited access for two angled cameras. The purpose of this study is to compare 2D-DIC and 3D-DIC for the same inflation experiment and evaluate whether 2D-DIC can be used when conditions discourage the use of a stereo-vision system. A latex membrane was inflated vertically to 5.41 kPa (reference pressure), then to 7.87 kPa (deformed pressure). A two-camera stereo-vision system acquired top-down images of the membrane, while a single camera system simultaneously recorded images of the membrane in profile. 2D-DIC and 3D-DIC were used to calculate horizontal (in the membrane plane) and vertical (out of the membrane plane) displacements, and meridional strain. Under static conditions, the baseline uncertainties in horizontal displacement and strain were smaller for 3D-DIC than 2D-DIC. However, the opposite was observed for the vertical displacement, for which 2D-DIC had a smaller baseline uncertainty. The baseline absolute errors in vertical displacement and strain were similar for both DIC methods, but the error was larger for 2D-DIC than 3D-DIC for the horizontal displacement. Under inflation, the variability in the measurements was larger than under static conditions for both DIC methods. 2D-DIC showed a smaller variability in displacements than 3D-DIC, especially for the vertical displacement, but a similar strain uncertainty. The absolute differences in the average displacements and strain between 3D-DIC and 2D-DIC were in the range of the 3D-DIC variability. Those findings suggest that 2D-DIC might be used as an alternative to 3D-DIC to study the inflation response of materials under certain conditions.

  4. The Ultrasonic Measurement of Crystallographic Orientation for Imaging Anisotropic Components with 2d Arrays

    NASA Astrophysics Data System (ADS)

    Lane, C. J. L.; Dunhill, A. K.; Drinkwater, B. W.; Wilcox, P. D.

    2011-06-01

    Single crystal components are used widely in the gas-turbine industry. However, these components are elastically anisotropic, which causes difficulties when performing NDE inspections with ultrasound. Recently, an ultrasonic algorithm for a 2D array has been corrected to perform reliable volumetric inspection of single crystals. For the algorithm to be implemented, the crystallographic orientation of the components must be known. This paper therefore develops and reviews crystallographic orientation methods using 2D ultrasonic arrays. The methods under examination are based on the anisotropic propagation of surface and bulk waves, and an image-based orientation method is also considered.

  5. Evaluation of registration strategies for multi-modality images of rat brain slices

    NASA Astrophysics Data System (ADS)

    Palm, Christoph; Vieten, Andrea; Salber, Dagmar; Pietrzyk, Uwe

    2009-05-01

    In neuroscience, small-animal studies frequently involve dealing with series of images from multiple modalities such as histology and autoradiography. The consistent and bias-free restacking of multi-modality image series is obligatory as a starting point for subsequent non-rigid registration procedures and for quantitative comparisons with positron emission tomography (PET) and other in vivo data. Up to now, a stack whose 2D slices are consistent, even without cross-validation against an inherent 3D modality, is frequently presumed to be close to the true morphology because the contours of anatomical structures appear smooth. However, in multi-modality stacks consistency is difficult to assess. In this work, consistency is defined in terms of the smoothness of neighboring slices within a single modality and between different modalities. Registration bias denotes the distortion of the registered stack in comparison to the true 3D morphology and shape. Based on these metrics, different restacking strategies for multi-modality rat brain slices are experimentally evaluated. Experiments based on MRI-simulated and real dual-tracer autoradiograms reveal a clear bias of the restacked volume despite quantitatively high consistency and qualitatively smooth brain structures. However, different registration strategies yield different inter-consistency metrics. If no genuine 3D modality is available, the use of the so-called SOP (slice-order preferred) or MOSOP (modality-and-slice-order preferred) strategy is recommended.

  6. Adaptive registration of diffusion tensor images on lie groups

    NASA Astrophysics Data System (ADS)

    Liu, Wei; Chen, LeiTing; Cai, HongBin; Qiu, Hang; Fei, Nanxi

    2016-08-01

    With diffusion tensor imaging (DTI), more exquisite information on tissue microstructure is provided for medical image processing. In this paper, we present a locally adaptive topology preserving method for DTI registration on Lie groups. The method aims to obtain more plausible diffeomorphisms for spatial transformations via accurate approximation for the local tangent space on the Lie group manifold. In order to capture an exact geometric structure of the Lie group, the local linear approximation is efficiently optimized by using the adaptive selection of the local neighborhood sizes on the given set of data points. Furthermore, numerical comparative experiments are conducted on both synthetic data and real DTI data to demonstrate that the proposed method yields a higher degree of topology preservation on a dense deformation tensor field while improving the registration accuracy.

  7. Error analysis of two methods for range-images registration

    NASA Astrophysics Data System (ADS)

    Liu, Xiaoli; Yin, Yongkai; Li, Ameng; He, Dong; Peng, Xiang

    2010-08-01

    With the improvements in range image registration techniques, this paper focuses on error analysis of two registration methods widely applied in industrial metrology, covering algorithm comparison, matching error, computational complexity and application areas. One method is iterative closest points (ICP), with which accurate matching results with small errors can be achieved. However, some limitations hinder its application in automatic and fast metrology. The other method is based on landmarks. We also present an algorithm for registering multiple range images with non-coding landmarks, including automatic landmark identification and sub-pixel location, 3D rigid motion, point pattern matching, and global iterative optimization. The registration results of the two methods are illustrated and a thorough error analysis is performed.
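
    The core numerical step shared by both approaches discussed above is a least-squares rigid fit: the landmark method applies it once to matched marker pairs, while ICP repeats it after each nearest-neighbour matching pass. Below is a minimal, hedged sketch of that idea in Python; the SVD-based fit and the toy point sets are illustrative assumptions, not the implementation evaluated in the paper.

```python
import numpy as np
from scipy.spatial import cKDTree

def rigid_from_correspondences(src, dst):
    """Least-squares rotation R and translation t with dst ~ R @ src + t (Procrustes/SVD)."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)            # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # guard against reflections
    R = Vt.T @ D @ U.T
    t = dst_c - R @ src_c
    return R, t

def icp(src, dst, n_iter=30):
    """Basic ICP: alternate nearest-neighbour matching and rigid refitting."""
    tree = cKDTree(dst)
    cur = src.copy()
    R_total, t_total = np.eye(3), np.zeros(3)
    for _ in range(n_iter):
        _, idx = tree.query(cur)                    # closest points in dst
        R, t = rigid_from_correspondences(cur, dst[idx])
        cur = cur @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total

# Toy demonstration with a known transform (illustrative data only).
rng = np.random.default_rng(0)
src = rng.uniform(-1.0, 1.0, size=(500, 3))
angle = np.deg2rad(10.0)
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([0.05, -0.02, 0.10])
dst = src @ R_true.T + t_true
R_est, t_est = icp(src, dst)
```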

  8. Concepts for on-board satellite image registration, volume 1

    NASA Technical Reports Server (NTRS)

    Ruedger, W. H.; Daluge, D. R.; Aanstoos, J. V.

    1980-01-01

    The NASA-NEEDS program goals present a requirement for on-board signal processing to achieve user-compatible, information-adaptive data acquisition. One very specific area of interest is the preprocessing required to register imaging sensor data which have been distorted by anomalies in subsatellite-point position and/or attitude control. The concepts and considerations involved in using state-of-the-art positioning systems such as the Global Positioning System (GPS) in concert with state-of-the-art attitude stabilization and/or determination systems to provide the required registration accuracy are discussed, with emphasis on assessing the accuracy with which a given image picture element can be located and identified, determining the algorithms required to augment the registration procedure, and evaluating the technology impact of performing these procedures on board the satellite.

  9. 2D Doppler backscattering using synthetic aperture microwave imaging of MAST edge plasmas

    NASA Astrophysics Data System (ADS)

    Thomas, D. A.; Brunner, K. J.; Freethy, S. J.; Huang, B. K.; Shevchenko, V. F.; Vann, R. G. L.

    2016-02-01

    Doppler backscattering (DBS) is already established as a powerful diagnostic; its extension to 2D enables imaging of turbulence characteristics from an extended region of the cut-off surface. The Synthetic Aperture Microwave Imaging (SAMI) diagnostic has conducted proof-of-principle 2D DBS experiments of MAST edge plasma. SAMI actively probes the plasma edge using a wide (±40° vertical and horizontal) and tuneable (10-34.5 GHz) beam. The Doppler backscattered signal is digitised in vector form using an array of eight Vivaldi PCB antennas. This allows the receiving array to be focused in any direction within the field of view simultaneously to an angular range of 6-24° FWHM at 10-34.5 GHz. This capability is unique to SAMI and is a novel way of conducting DBS experiments. In this paper the feasibility of conducting 2D DBS experiments is explored. Initial observations of phenomena previously measured by conventional DBS experiments are presented; such as momentum injection from neutral beams and an abrupt change in power and turbulence velocity coinciding with the onset of H-mode. In addition, being able to carry out 2D DBS imaging allows a measurement of magnetic pitch angle to be made; preliminary results are presented. Capabilities gained through steering a beam using a phased array and the limitations of this technique are discussed.

  10. New applications for the touchscreen in 2D and 3D medical imaging workstations

    NASA Astrophysics Data System (ADS)

    Hinckley, Ken; Goble, John C.; Pausch, Randy; Kassell, Neal F.

    1995-04-01

    We present a new interface technique which augments a 3D user interface based on the physical manipulation of tools, or props, with a touchscreen. This hybrid interface intuitively and seamlessly combines 3D input with more traditional 2D input in the same user interface. Example 2D interface tasks of interest include selecting patient images from a database, browsing through axial, coronal, and sagittal image slices, or adjusting image center and window parameters. Note the facility with which a touchscreen can be used: the surgeon can move in 3D using the props, and then, without having to put the props down, the surgeon can reach out and touch the screen to perform 2D tasks. Based on previous work by Sears, we provide touchscreen users with visual feedback in the form of a small cursor which appears above the finger, allowing targets much smaller than the finger itself to be selected. Based on our informal user observations to date, this touchscreen stabilization algorithm allows targets as small as 1.08 mm X 1.08 mm to be selected by novices, and makes possible selection of targets as small as 0.27 mm X 0.27 mm after some training. Based on implemented prototype systems, we suggest that touchscreens offer not only intuitive 2D input which is well accepted by physicians, but that touchscreens also offer fast and accurate input which blends well with 3D interaction techniques.

  11. A positioning QA procedure for 2D/2D (kV/MV) and 3D/3D (CT/CBCT) image matching for radiotherapy patient setup.

    PubMed

    Guan, Huaiqun; Hammoud, Rabih; Yin, Fang-Fang

    2009-10-06

    A positioning QA procedure for Varian's 2D/2D (kV/MV) and 3D/3D (planCT/CBCT) matching was developed. The procedure was to check: (1) the coincidence of on-board imager (OBI), portal imager (PI), and cone beam CT (CBCT)'s isocenters (digital graticules) to a linac's isocenter (to a pre-specified accuracy); (2) that the positioning difference detected by 2D/2D (kV/MV) and 3D/3D (planCT/CBCT) matching can be reliably transferred to couch motion. A cube phantom with a 2 mm metal ball (bb) at the center was used. The bb was used to define the isocenter. Two additional bbs were placed on two phantom surfaces in order to define a spatial location of 1.5 cm anterior, 1.5 cm inferior, and 1.5 cm right from the isocenter. An axial scan of the phantom was acquired from a multislice CT simulator. The phantom was set at the linac's isocenter (lasers); either AP MV/R Lat kV images or CBCT images were taken for 2D/2D or 3D/3D matching, respectively. For 2D/2D, the accuracy of each device's isocenter was obtained by checking the distance between the central bb and the digital graticule. Then the central bb in orthogonal DRRs was manually moved to overlay the off-axis bbs in kV/MV images. For 3D/3D, CBCT was first matched to planCT to check the isocenter difference between the two CTs. Manual shifts were then made by moving CBCT such that the point defined by the two off-axis bbs overlaid the central bb in planCT. (PlanCT cannot be moved in the current version of OBI 1.4.) The manual shifts were then applied to remotely move the couch. The room laser was used to check the accuracy of the couch movement. For Trilogy (or Ix-21) linacs, the coincidence of the imager and linac isocenters was better than 1 mm (or 1.5 mm). The couch shift accuracy was better than 2 mm.

  12. Nonrigid Registration of Brain Tumor Resection MR Images Based on Joint Saliency Map and Keypoint Clustering

    PubMed Central

    Gu, Zhijun; Qin, Binjie

    2009-01-01

    This paper proposes a novel global-to-local nonrigid brain MR image registration to compensate for the brain shift and the unmatchable outliers caused by the tumor resection. The mutual information between the corresponding salient structures, which are enhanced by the joint saliency map (JSM), is maximized to achieve a global rigid registration of the two images. Being detected and clustered at the paired contiguous matching areas in the globally registered images, the paired pools of DoG keypoints in combination with the JSM provide a useful cluster-to-cluster correspondence to guide the local control-point correspondence detection and the outlier keypoint rejection. Lastly, a quasi-inverse consistent deformation is smoothly approximated to locally register the brain images by mapping the clustered control points with compact-support radial basis functions. The 2D implementation of the method can model the brain shift in brain tumor resection MR images, though the theory holds for the 3D case. PMID:22303173

  13. A fast rigid-registration method of inferior limb X-ray image and 3D CT images for TKA surgery

    NASA Astrophysics Data System (ADS)

    Ito, Fumihito; O. D. A, Prima; Uwano, Ikuko; Ito, Kenzo

    2010-03-01

    In this paper, we propose a fast rigid-registration method of inferior limb X-ray films (two-dimensional Computed Radiography (CR) images) and three-dimensional Computed Tomography (CT) images for Total Knee Arthroplasty (TKA) surgery planning. The position of each bone, such as the femur and tibia (shin bone), in the X-ray film and 3D CT images is slightly different, and we must pay attention to how the two different images are used, since the X-ray film image is captured in the standing position and the 3D CT is captured in the decubitus (face-up) position. Though conventional registration mainly uses a cross-correlation function between the two images and utilizes optimization techniques, it requires enormous calculation time and is difficult to use in interactive operations. In order to solve these problems, we calculate the center line (bone axis) of the femur and tibia automatically, and we use these axes as initial positions for the registration. We evaluate our registration method using three patients' image data, and we compare our proposed method with a conventional registration that uses the downhill simplex algorithm. The downhill simplex method is an optimization algorithm that requires only function evaluations and does not need the calculation of derivatives. Our registration method is more effective than the downhill simplex method in computational time and stability of convergence. We have developed an implant simulation system on a personal computer in order to support the surgeon in preoperative planning of TKA. Our registration method is implemented in the simulation system, and the user can manipulate 2D/3D translucent templates of implant components on the X-ray film and 3D CT images.
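
    For readers unfamiliar with the baseline the authors compare against, the sketch below illustrates how a downhill simplex (Nelder-Mead) optimizer can drive an in-plane rigid registration by maximizing normalized cross-correlation. It is a simplified, assumed setup (synthetic images, a 2D rigid transform, SciPy's optimizer), not the paper's X-ray/CT pipeline.

```python
import numpy as np
from scipy.ndimage import rotate, shift
from scipy.optimize import minimize

def ncc(a, b):
    """Normalized cross-correlation between two same-size images."""
    a = (a - a.mean()) / (a.std() + 1e-12)
    b = (b - b.mean()) / (b.std() + 1e-12)
    return float((a * b).mean())

def cost(params, fixed, moving):
    """Negative NCC after applying an in-plane rigid transform (tx, ty, angle in degrees)."""
    tx, ty, angle = params
    warped = rotate(moving, angle, reshape=False, order=1)
    warped = shift(warped, (ty, tx), order=1)
    return -ncc(fixed, warped)

# Illustrative data: a synthetic reference image and a displaced copy of it.
rng = np.random.default_rng(1)
fixed = rng.random((128, 128))
moving = shift(rotate(fixed, -3.0, reshape=False, order=1), (-4.0, 2.5), order=1)

# Downhill simplex (Nelder-Mead) needs only cost evaluations, no derivatives.
res = minimize(cost, x0=np.zeros(3), args=(fixed, moving),
               method="Nelder-Mead", options={"xatol": 1e-3, "fatol": 1e-6})
print("estimated (tx, ty, angle):", res.x)
```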

  14. Synthetic aperture radar/LANDSAT MSS image registration

    NASA Technical Reports Server (NTRS)

    Maurer, H. E. (Editor); Oberholtzer, J. D. (Editor); Anuta, P. E. (Editor)

    1979-01-01

    Algorithms and procedures necessary to merge aircraft synthetic aperture radar (SAR) and LANDSAT multispectral scanner (MSS) imagery were determined. The design of a SAR/LANDSAT data merging system was developed. Aircraft SAR images were registered to the corresponding LANDSAT MSS scenes and were the subject of experimental investigations. Results indicate that the registration of SAR imagery with LANDSAT MSS imagery is feasible from a technical viewpoint, and useful from an information-content viewpoint.

  15. Fast Threshold image segmentation based on 2D Fuzzy Fisher and Random Local Optimized QPSO.

    PubMed

    Zhang, Chunming; Xie, Yongchun; Liu, Da; Wang, Li

    2016-10-26

    In this paper, a real-time segmentation method that separates the target signal from the navigation image is proposed. In the approach and docking stage, the navigation image is composed of target and non-target signals, which are, respectively, bright spots and the space vehicle itself. Since the non-target signal forms the main part of the navigation image, the traditional entropy-related and Otsu-related criteria yield inadequate segmentation, while the plain 2D Fisher criterion causes over-segmentation; all of these methods fall short in this kind of case. To guarantee a precise image segmentation, a revised 2D fuzzy Fisher criterion is proposed in this paper to make a trade-off between positioning target regions and retaining fuzzy target boundaries. First, to reduce redundant computations in finding the threshold pair, the 2D fuzzy Fisher criterion is evaluated with an integral image obtained by simplifying the corresponding fuzzy domains. Then, to speed up convergence, a random orthogonal component is added to the quasi-optimum particle to enhance its local searching capacity in each iteration. Experimental results show the method's capability for fast, precise segmentation.
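
    The paper's 2D fuzzy Fisher criterion is not reproduced here, but the integral-image idea it relies on can be sketched: build a 2D histogram over (gray level, local neighborhood mean), take cumulative sums along both axes, and then read off the probability mass of any candidate threshold block in constant time. Bin counts, the neighborhood size, and the histogram axes below are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def histogram_2d(image, bins=64):
    """2D histogram of (pixel gray level, local 3x3 neighborhood mean), both quantized."""
    g = np.round((bins - 1) * (image - image.min()) / (np.ptp(image) + 1e-12)).astype(int)
    m = uniform_filter(image, size=3)
    m = np.round((bins - 1) * (m - m.min()) / (np.ptp(m) + 1e-12)).astype(int)
    hist = np.zeros((bins, bins))
    np.add.at(hist, (g.ravel(), m.ravel()), 1)
    return hist / hist.sum()

def integral_table(hist):
    """Summed-area table: cumulative sums along both histogram axes."""
    return hist.cumsum(axis=0).cumsum(axis=1)

def region_mass(sat, s, t):
    """Probability mass of the block [0..s, 0..t] in O(1) via the precomputed table."""
    return sat[s, t]

# Example: evaluate every candidate threshold pair cheaply using the table.
rng = np.random.default_rng(2)
img = rng.random((256, 256))
hist = histogram_2d(img)
sat = integral_table(hist)
masses = np.array([[region_mass(sat, s, t) for t in range(hist.shape[1])]
                   for s in range(hist.shape[0])])
```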

  16. A wavelet relational fuzzy C-means algorithm for 2D gel image segmentation.

    PubMed

    Rashwan, Shaheera; Faheem, Mohamed Talaat; Sarhan, Amany; Youssef, Bayumy A B

    2013-01-01

    One of the most famous algorithms that appeared in the area of image segmentation is the Fuzzy C-Means (FCM) algorithm. This algorithm has been used in many applications such as data analysis, pattern recognition, and image segmentation. It has the advantage of producing high-quality segmentation compared to other available algorithms, and many modifications have been made to it to improve its segmentation quality. The segmentation algorithm proposed in this paper is based on the Fuzzy C-Means algorithm, adding the relational fuzzy notion and the wavelet transform so as to enhance its performance, especially for 2D gel images. Both proposed modifications aim to minimize the oversegmentation error incurred by previous algorithms. The experimental results of comparing both the Fuzzy C-Means (FCM) and the Wavelet Fuzzy C-Means (WFCM) to the proposed algorithm on real 2D gel images acquired from human leukemias, HL-60 cell lines, and fetal alcohol syndrome (FAS) demonstrate the improvement achieved by the proposed algorithm in overcoming the segmentation error. In addition, we investigate the effect of denoising on the three algorithms. This investigation shows that denoising the 2D gel image before segmentation can improve (in most of the cases) the quality of the segmentation.
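
    As background for the modifications described above, the following is a minimal sketch of the standard FCM iteration on pixel intensities (not the proposed wavelet relational variant): alternate between computing membership-weighted cluster centers and updating the fuzzy memberships. The fuzzifier m, the 1-D intensity feature, and the synthetic data are assumptions.

```python
import numpy as np

def fuzzy_c_means(values, n_clusters=3, m=2.0, n_iter=100, tol=1e-5, seed=0):
    """Standard FCM on a 1-D feature (e.g. pixel intensities flattened from an image)."""
    rng = np.random.default_rng(seed)
    x = values.reshape(-1, 1).astype(float)
    # Random initial membership matrix U (n_samples x n_clusters), rows sum to 1.
    u = rng.random((x.shape[0], n_clusters))
    u /= u.sum(axis=1, keepdims=True)
    for _ in range(n_iter):
        um = u ** m
        centers = (um.T @ x) / um.sum(axis=0)[:, None]       # membership-weighted centers
        dist = np.abs(x - centers.T) + 1e-12                  # distance to each center
        inv = dist ** (-2.0 / (m - 1.0))
        u_new = inv / inv.sum(axis=1, keepdims=True)          # membership update
        if np.abs(u_new - u).max() < tol:
            u = u_new
            break
        u = u_new
    return centers.ravel(), u

# Usage on synthetic bimodal intensities: hard labels are the arg-max memberships.
rng = np.random.default_rng(3)
intensities = np.concatenate([rng.normal(0.2, 0.05, 5000), rng.normal(0.7, 0.05, 5000)])
centers, u = fuzzy_c_means(intensities, n_clusters=2)
labels = u.argmax(axis=1)
```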

  17. A quantitative damage imaging technique based on enhanced CCRTM for composite plates using 2D scan

    NASA Astrophysics Data System (ADS)

    He, Jiaze; Yuan, Fuh-Gwo

    2016-10-01

    A two-dimensional (2D) non-contact areal scan system was developed to image and quantify impact damage in a composite plate using an enhanced zero-lag cross-correlation reverse-time migration (E-CCRTM) technique. The system comprises a single piezoelectric (PZT) wafer mounted on the composite plate and a laser Doppler vibrometer (LDV) that scans a region in the vicinity of the PZT to capture the scattered wavefield. The proposed damage imaging technique takes into account the amplitude, phase, geometric spreading, and all of the frequency content of the Lamb waves propagating in the plate; thus, a reflectivity coefficient of the delamination is calculated and can potentially be related to damage severity. Comparisons are made in terms of damage imaging quality between 2D areal scans and 1D line scans, as well as between the proposed and existing imaging conditions. The experimental results show that the 2D E-CCRTM performs robustly when imaging and quantifying impact damage in large-scale composites using a single PZT actuator with a nearby areal scan using LDV.
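
    The enhanced imaging condition (E-CCRTM) itself is not reproduced here; the sketch below only shows the basic zero-lag cross-correlation imaging condition that reverse-time migration builds on, i.e. summing the product of the source and back-propagated receiver wavefields over time, with an assumed source-autocorrelation normalization. The wavefield arrays are placeholders.

```python
import numpy as np

def zero_lag_cc_image(source_field, receiver_field):
    """
    Basic zero-lag cross-correlation imaging condition:
        I(x) = sum_t S(x, t) * R(x, t),
    here normalized by the source autocorrelation to reduce geometric-spreading bias.
    Both inputs have shape (nt, ny, nx).
    """
    image = np.sum(source_field * receiver_field, axis=0)
    norm = np.sum(source_field * source_field, axis=0) + 1e-12
    return image / norm

# Placeholder wavefields standing in for the forward-modelled excitation and the
# time-reversed LDV scan data (illustrative shapes only).
nt, ny, nx = 400, 64, 64
rng = np.random.default_rng(4)
S = rng.standard_normal((nt, ny, nx))
R = rng.standard_normal((nt, ny, nx))
damage_map = zero_lag_cc_image(S, R)
```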

  18. Validation of 3D ultrasound: CT registration of prostate images

    NASA Astrophysics Data System (ADS)

    Firle, Evelyn A.; Wesarg, Stefan; Karangelis, Grigoris; Dold, Christian

    2003-05-01

    Worldwide, 20% of men are expected to develop prostate cancer at some point in their lives. In addition to surgery, the traditional treatment for cancer, radiation treatment is becoming more popular. The most interesting radiation treatment for prostate cancer is the brachytherapy procedure. For the safe delivery of that therapy, imaging is critically important. In several cases where a CT device is available, a combination of the information provided by CT and 3D ultrasound (U/S) images offers advantages in recognizing the borders of the lesion and delineating the region of treatment. For these applications the CT and U/S scans should be registered and fused in a multi-modal dataset. The purpose of the present development is a registration tool (registration, fusion and validation) for available CT volumes with 3D U/S images of the same anatomical region, i.e. the prostate. The combination of these two imaging modalities interlinks the advantages of high-resolution CT imaging and low-cost real-time U/S imaging and offers a multi-modality imaging environment for further target and anatomy delineation. This tool has been integrated into the visualization software "InViVo", which has been developed over several years at Fraunhofer IGD in Darmstadt.

  19. 2D and 3D MALDI-imaging: conceptual strategies for visualization and data mining.

    PubMed

    Thiele, Herbert; Heldmann, Stefan; Trede, Dennis; Strehlow, Jan; Wirtz, Stefan; Dreher, Wolfgang; Berger, Judith; Oetjen, Janina; Kobarg, Jan Hendrik; Fischer, Bernd; Maass, Peter

    2014-01-01

    registration techniques. Different strategies for automatic serial image registration applied to MS datasets are outlined in detail. The third image modality is histology driven, i.e. a digital scan of the histological stained slices in high-resolution. After fusion of reconstructed scan images and MRI the slice-related coordinates of the mass spectra can be propagated into 3D-space. After image registration of scan images and histological stained images, the anatomical information from histology is fused with the mass spectra from MALDI-MSI. As a result of the described pipeline we have a set of 3 dimensional images representing the same anatomies, i.e. the reconstructed slice scans, the spectral images as well as corresponding clustering results, and the acquired MRI. Great emphasis is put on the fact that the co-registered MRI providing anatomical details improves the interpretation of 3D MALDI images. The ability to relate mass spectrometry derived molecular information with in vivo and in vitro imaging has potentially important implications. This article is part of a Special Issue entitled: Computational Proteomics in the Post-Identification Era. Guest Editors: Martin Eisenacher and Christian Stephan.

  20. Preparation of 2D sequences of corneal images for 3D model building.

    PubMed

    Elbita, Abdulhakim; Qahwaji, Rami; Ipson, Stanley; Sharif, Mhd Saeed; Ghanchi, Faruque

    2014-04-01

    A confocal microscope provides a sequence of images, at incremental depths, of the various corneal layers and structures. From these, medical practitioners can extract clinical information on the state of health of the patient's cornea. In this work we address problems associated with capturing and processing these images, including blurring, non-uniform illumination and noise, as well as the displacement of images laterally and in the anterior-posterior direction caused by subject movement. The latter may cause some of the captured images to be out of sequence in terms of depth. In this paper we introduce automated algorithms for classification, reordering, registration and segmentation to solve these problems. The successful implementation of these algorithms could open the door for another interesting development, which is the 3D modelling of these sequences.

  1. Comparison of spatiotemporal interpolators for 4D image reconstruction from 2D transesophageal ultrasound

    NASA Astrophysics Data System (ADS)

    Haak, Alexander; van Stralen, Marijn; van Burken, Gerard; Klein, Stefan; Pluim, Josien P. W.; de Jong, Nico; van der Steen, Antonius F. W.; Bosch, Johan G.

    2012-03-01

    For electrophysiology intervention monitoring, we intend to reconstruct 4D ultrasound (US) of structures in the beating heart from 2D transesophageal US by scanplane rotation. The image acquisition is continuous but unsynchronized to the heart rate, which results in a sparsely and irregularly sampled dataset, and a spatiotemporal interpolation method is desired. Previously, we showed the potential of normalized convolution (NC) for interpolating such datasets. We explored 4D interpolation by three different methods: NC, nearest neighbor (NN), and temporal binning followed by linear interpolation (LTB). The test datasets were derived by slicing three 4D echocardiography datasets at random rotation angles (θ, range: 0-180°) and random normalized cardiac phases (τ, range: 0-1). Four different distributions of rotated 2D images with 600, 900, 1350, and 1800 2D input images were created from all TEE sets. A 2D Gaussian kernel was used for NC and optimal kernel sizes (σθ and στ) were found by performing an exhaustive search. The RMS gray value error (RMSE) of the reconstructed images was computed for all interpolation methods. The estimated optimal kernels were in the range of σθ = 3.24-3.69°, στ = 0.045-0.048; σθ = 2.79°, στ = 0.031-0.038; σθ = 2.34°, στ = 0.023-0.026; and σθ = 1.89°, στ = 0.021-0.023 for 600, 900, 1350, and 1800 input images, respectively. We showed that NC outperforms NN and LTB. For a small number of input images the advantage of NC is more pronounced.
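
    A minimal sketch of normalized convolution as used above for sparsely, irregularly sampled data: convolve both the zero-filled samples and their certainty map with the same Gaussian applicability kernel and take the ratio. The grid resolution, sampling density, and kernel widths are illustrative, not the paper's, and wrap-around of the rotational axis is ignored for brevity.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def normalized_convolution(values, certainty, sigma):
    """
    Normalized convolution: NC = (c * s convolved with g) / (c convolved with g),
    where s are the sparsely sampled values (zero where missing), c is the certainty
    map (1 where a sample exists, 0 elsewhere) and g is a Gaussian applicability kernel.
    """
    num = gaussian_filter(values * certainty, sigma)
    den = gaussian_filter(certainty, sigma) + 1e-12
    return num / den

# Illustrative sparse sampling of a smooth function on a (theta, tau) grid.
theta = np.linspace(0, np.pi, 180)
tau = np.linspace(0, 1, 100)
truth = np.sin(theta)[:, None] * np.cos(2 * np.pi * tau)[None, :]

rng = np.random.default_rng(5)
certainty = (rng.random(truth.shape) < 0.1).astype(float)   # ~10% of samples observed
samples = truth * certainty
recon = normalized_convolution(samples, certainty, sigma=(3.0, 2.0))
rmse = np.sqrt(np.mean((recon - truth) ** 2))
```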

  2. Regional lung function and mechanics using image registration

    NASA Astrophysics Data System (ADS)

    Ding, Kai

    The main function of the respiratory system is gas exchange. Since many disease or injury conditions can cause biomechanical or material property changes that can alter lung function, there is a great interest in measuring regional lung function and mechanics. In this thesis, we present a technique that uses multiple respiratory-gated CT images of the lung acquired at different levels of inflation with both breath-hold static scans and retrospectively reconstructed 4D dynamic scans, along with non-rigid 3D image registration, to make local estimates of lung tissue function and mechanics. We validate our technique using anatomical landmarks and functional Xe-CT estimated specific ventilation. The major contributions of this thesis include: (1) developing the registration-derived regional expansion estimation approach in breath-hold static scans and dynamic 4DCT scans, (2) developing a method to quantify lobar sliding from the image registration derived displacement field, (3) developing a method for measurement of radiation-induced pulmonary function change following a course of radiation therapy (RT), (4) developing and validating different ventilation measures in 4DCT. The ability of our technique to estimate regional lung mechanics and function as a surrogate of Xe-CT ventilation imaging for the entire lung from quickly and easily obtained respiratory-gated images is a significant contribution to functional lung imaging because of the potential increase in resolution and large reductions in imaging time, radiation, and contrast agent exposure. Our technique may be useful to detect and follow the progression of lung diseases such as COPD, may be useful as a tool during RT planning, may be useful for tracking the progression of toxicity to nearby normal tissue during RT, and can be used to evaluate the effectiveness of a treatment post-therapy.
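
    One common registration-derived regional expansion measure referred to above is the determinant of the Jacobian of the estimated transform. The sketch below computes it from a dense displacement field with finite differences; the field, voxel spacing, and axis convention are assumptions, not the thesis implementation.

```python
import numpy as np

def jacobian_determinant(disp, spacing=(1.0, 1.0, 1.0)):
    """
    Determinant of the Jacobian of the transform x -> x + u(x) for a 3-D
    displacement field disp with shape (3, nz, ny, nx).  Values > 1 indicate
    local expansion (inhalation), values < 1 local contraction.
    """
    grads = np.empty((3, 3) + disp.shape[1:])
    for i in range(3):            # component of u
        for j in range(3):        # derivative direction
            grads[i, j] = np.gradient(disp[i], spacing[j], axis=j)
    jac = grads + np.eye(3)[:, :, None, None, None]   # dT_i/dx_j = delta_ij + du_i/dx_j
    # Move the 3x3 matrix axes last so np.linalg.det broadcasts over voxels.
    return np.linalg.det(np.moveaxis(jac, (0, 1), (-2, -1)))

# Synthetic uniform 5% expansion along the first axis: det(J) should be ~1.05 everywhere.
nz, ny, nx = 32, 32, 32
z = np.arange(nz, dtype=float)
disp = np.zeros((3, nz, ny, nx))
disp[0] = 0.05 * z[:, None, None]          # u_z = 0.05 * z
detj = jacobian_determinant(disp)
```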

  3. Explicit B-spline regularization in diffeomorphic image registration

    PubMed Central

    Tustison, Nicholas J.; Avants, Brian B.

    2013-01-01

    Diffeomorphic mappings are central to image registration due largely to their topological properties and success in providing biologically plausible solutions to deformation and morphological estimation problems. Popular diffeomorphic image registration algorithms include those characterized by time-varying and constant velocity fields, and symmetrical considerations. Prior information in the form of regularization is used to enforce transform plausibility taking the form of physics-based constraints or through some approximation thereof, e.g., Gaussian smoothing of the vector fields [a la Thirion's Demons (Thirion, 1998)]. In the context of the original Demons' framework, the so-called directly manipulated free-form deformation (DMFFD) (Tustison et al., 2009) can be viewed as a smoothing alternative in which explicit regularization is achieved through fast B-spline approximation. This characterization can be used to provide B-spline “flavored” diffeomorphic image registration solutions with several advantages. Implementation is open source and available through the Insight Toolkit and our Advanced Normalization Tools (ANTs) repository. A thorough comparative evaluation with the well-known SyN algorithm (Avants et al., 2008), implemented within the same framework, and its B-spline analog is performed using open labeled brain data and open source evaluation tools. PMID:24409140

  4. Explicit B-spline regularization in diffeomorphic image registration.

    PubMed

    Tustison, Nicholas J; Avants, Brian B

    2013-01-01

    Diffeomorphic mappings are central to image registration due largely to their topological properties and success in providing biologically plausible solutions to deformation and morphological estimation problems. Popular diffeomorphic image registration algorithms include those characterized by time-varying and constant velocity fields, and symmetrical considerations. Prior information in the form of regularization is used to enforce transform plausibility taking the form of physics-based constraints or through some approximation thereof, e.g., Gaussian smoothing of the vector fields [a la Thirion's Demons (Thirion, 1998)]. In the context of the original Demons' framework, the so-called directly manipulated free-form deformation (DMFFD) (Tustison et al., 2009) can be viewed as a smoothing alternative in which explicit regularization is achieved through fast B-spline approximation. This characterization can be used to provide B-spline "flavored" diffeomorphic image registration solutions with several advantages. Implementation is open source and available through the Insight Toolkit and our Advanced Normalization Tools (ANTs) repository. A thorough comparative evaluation with the well-known SyN algorithm (Avants et al., 2008), implemented within the same framework, and its B-spline analog is performed using open labeled brain data and open source evaluation tools.
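
    The sketch below illustrates, in simplified form, the idea behind the B-spline "flavored" regularization described in the two records above: replace the Demons-style Gaussian smoothing of a dense update field with an explicit least-squares B-spline approximation. It is not the DMFFD implementation available in ANTs; the grid size, smoothing factor, and use of SciPy's smoothing splines are assumptions.

```python
import numpy as np
from scipy.interpolate import RectBivariateSpline
from scipy.ndimage import gaussian_filter

def bspline_smooth_field(update, smoothing=None):
    """
    Approximate each component of a dense 2-D update field (2, ny, nx) with a
    least-squares smoothing B-spline and re-evaluate it on the full grid --
    an explicit B-spline regularizer standing in for Gaussian smoothing.
    """
    ny, nx = update.shape[1:]
    if smoothing is None:
        smoothing = float(ny * nx)          # FITPACK heuristic: s on the order of #points
    y, x = np.arange(ny, dtype=float), np.arange(nx, dtype=float)
    smoothed = np.empty_like(update)
    for c in range(2):
        spline = RectBivariateSpline(y, x, update[c], s=smoothing)
        smoothed[c] = spline(y, x)
    return smoothed

# Noisy synthetic update field; compare against the classic Gaussian regularizer.
rng = np.random.default_rng(6)
update = rng.standard_normal((2, 32, 32))
bspline_reg = bspline_smooth_field(update)
gaussian_reg = gaussian_filter(update, sigma=(0, 2.0, 2.0))
```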

  5. Hierarchical segmentation-assisted multimodal registration for MR brain images.

    PubMed

    Lu, Huanxiang; Beisteiner, Roland; Nolte, Lutz-Peter; Reyes, Mauricio

    2013-04-01

    Information theory-based metrics such as mutual information (MI) are widely used as similarity measures for multimodal registration. Nevertheless, such metrics may lead to matching ambiguity for non-rigid registration. Moreover, maximization of MI alone does not necessarily produce an optimal solution. In this paper, we propose a segmentation-assisted similarity metric based on point-wise mutual information (PMI). This similarity metric, termed SPMI, enhances the registration accuracy by considering tissue classification probabilities as prior information, which is generated from an expectation maximization (EM) algorithm. Diffeomorphic demons is then adopted as the registration model and is optimized in a hierarchical framework (H-SPMI) based on different levels of anatomical structure as prior knowledge. The proposed method is evaluated using Brainweb synthetic data and clinical fMRI images. Both qualitative and quantitative assessment were performed, as well as a sensitivity analysis to the segmentation error. Compared to the pure intensity-based approaches which only maximize mutual information, we show that the proposed algorithm provides significantly better accuracy on both synthetic and clinical data.
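
    The SPMI metric itself is not reproduced here, but its building block, point-wise mutual information, can be sketched directly from a joint histogram: PMI(a, b) = log(p(a, b) / (p(a) p(b))), looked up per voxel pair. The binning and the synthetic image pair below are assumptions.

```python
import numpy as np

def pointwise_mutual_information(fixed, moving, bins=32):
    """
    Point-wise mutual information PMI(a, b) = log( p(a, b) / (p(a) p(b)) ),
    evaluated for every voxel pair by table lookup into the joint histogram.
    """
    f = np.digitize(fixed.ravel(), np.histogram_bin_edges(fixed, bins)[1:-1])
    m = np.digitize(moving.ravel(), np.histogram_bin_edges(moving, bins)[1:-1])
    joint = np.zeros((bins, bins))
    np.add.at(joint, (f, m), 1)
    joint /= joint.sum()
    pf, pm = joint.sum(axis=1), joint.sum(axis=0)
    pmi_table = np.log((joint + 1e-12) / (pf[:, None] * pm[None, :] + 1e-12))
    return pmi_table[f, m].reshape(fixed.shape)   # per-voxel PMI map

# Two correlated synthetic "modalities" of the same anatomy (illustrative only).
rng = np.random.default_rng(7)
anatomy = rng.random((64, 64))
fixed = anatomy + 0.05 * rng.standard_normal(anatomy.shape)
moving = np.exp(anatomy) + 0.05 * rng.standard_normal(anatomy.shape)
pmi_map = pointwise_mutual_information(fixed, moving)
mi_estimate = pmi_map.mean()   # averaging PMI over voxels approximates MI
```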

  6. Physical Constraint Finite Element Model for Medical Image Registration

    PubMed Central

    Zhang, Jingya; Wang, Jiajun; Wang, Xiuying; Gao, Xin; Feng, Dagan

    2015-01-01

    Because they are derived from a linear assumption, most elastic-body-based non-rigid image registration algorithms face challenges for soft tissues with complex nonlinear behavior and large deformations. To take into account the geometric nonlinearity of soft tissues, we propose a registration algorithm on the basis of the Newtonian differential equation. The material behavior of soft tissues is modeled as St. Venant-Kirchhoff elasticity, and the nonlinearity of the continuum is represented by the quadratic term of the deformation gradient in the Green-St. Venant strain. In our algorithm, the elastic force is formulated as the derivative of the deformation energy with respect to the nodal displacement vectors of the finite element; the external force is determined by the registration similarity gradient flow, which drives the floating image to deform toward the equilibrium condition. We compared our approach to three other models: 1) the conventional linear elastic finite element model (FEM); 2) the dynamic elastic FEM; 3) the robust block matching (RBM) method. The registration accuracy was measured using three similarities: MSD (Mean Square Difference), NC (Normalized Correlation) and NMI (Normalized Mutual Information), and was also measured using the mean and maximum distance between the ground-truth seeds and the corresponding ones after registration. We validated our method on 60 image pairs, including 30 medical image pairs with artificial deformation and 30 clinical image pairs covering both chest chemotherapy treatment in different periods and brain MRI normalization. Our method achieved a distance error of 0.320±0.138 mm in the x direction and 0.326±0.111 mm in the y direction, MSD of 41.96±13.74, NC of 0.9958±0.0019, and NMI of 1.2962±0.0114 for images with large artificial deformations; and average NC of 0.9622±0.008 and NMI of 1.2764±0.0089 for the real clinical cases. Student's t-test demonstrated that our model statistically outperformed the other methods in comparison (p
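
    The finite element formulation above rests on the St. Venant-Kirchhoff strain-energy density evaluated from the Green-St. Venant strain. The sketch below computes that density for a single deformation gradient; the Lamé parameters and the example gradient are placeholders, not values from the paper.

```python
import numpy as np

def svk_energy_density(F, lam=1.0, mu=0.5):
    """
    St. Venant-Kirchhoff strain-energy density
        W(E) = (lambda / 2) * tr(E)^2 + mu * tr(E @ E),
    with the Green-St. Venant strain E = 0.5 * (F^T F - I), which contains the
    quadratic (geometrically nonlinear) term of the deformation gradient F.
    """
    E = 0.5 * (F.T @ F - np.eye(F.shape[0]))
    return 0.5 * lam * np.trace(E) ** 2 + mu * np.trace(E @ E)

# Example: 2-D deformation gradient for a 10% stretch in x plus a small shear.
F = np.array([[1.10, 0.05],
              [0.00, 1.00]])
W = svk_energy_density(F)
```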

  7. Shadow scanning lens-free microscopy with tomographic reconstruction of 2D images

    NASA Astrophysics Data System (ADS)

    Manturov, Alexey O.; Blushtein, Eugeny A.; Morev, Vladislav S.

    2016-04-01

    Shadow Scanning Lens-free Microscopy (SSLM) is a possible method for optical imaging that can potentially achieve high spatial resolution. In the present work we discuss SSLM and analyse the resolution limit imposed by light scattering from the edge of a scanning imaging system that uses the shadow from a moving knife edge or wire to collect sets of tomographic projection data of two-dimensional objects. The results of a numerical estimation of the SSLM resolution for the reconstruction of a 2D object image are presented. An experimental SSLM setup with a wire scanning element was developed. The developed device works in the UV band and shows a spatial resolution of about 90 nm.

  8. Image Quality Improvement in Adaptive Optics Scanning Laser Ophthalmoscopy Assisted Capillary Visualization Using B-spline-based Elastic Image Registration

    PubMed Central

    Uji, Akihito; Ooto, Sotaro; Hangai, Masanori; Arichika, Shigeta; Yoshimura, Nagahisa

    2013-01-01

    Purpose To investigate the effect of B-spline-based elastic image registration on adaptive optics scanning laser ophthalmoscopy (AO-SLO)-assisted capillary visualization. Methods AO-SLO videos were acquired from parafoveal areas in the eyes of healthy subjects and patients with various diseases. After nonlinear image registration, the image quality of capillary images constructed from AO-SLO videos using motion contrast enhancement was compared before and after B-spline-based elastic (nonlinear) image registration performed using ImageJ. For objective comparison of image quality, contrast-to-noise ratios (CNRs) for vessel images were calculated. For subjective comparison, experienced ophthalmologists ranked images on a 5-point scale. Results All AO-SLO videos were successfully stabilized by elastic image registration. CNR was significantly higher in capillary images stabilized by elastic image registration than in those stabilized without registration. The average ratio of CNR in images with elastic image registration to CNR in images without elastic image registration was 2.10 ± 1.73, with no significant difference in the ratio between patients and healthy subjects. Improvement of image quality was also supported by expert comparison. Conclusions Use of B-spline-based elastic image registration in AO-SLO-assisted capillary visualization was effective for enhancing image quality both objectively and subjectively. PMID:24265796

  9. Advances and challenges in deformable image registration: From image fusion to complex motion modelling.

    PubMed

    Schnabel, Julia A; Heinrich, Mattias P; Papież, Bartłomiej W; Brady, Sir J Michael

    2016-10-01

    Over the past 20 years, the field of medical image registration has significantly advanced from multi-modal image fusion to highly non-linear, deformable image registration for a wide range of medical applications and imaging modalities, involving the compensation and analysis of physiological organ motion or of tissue changes due to growth or disease patterns. While the original focus of image registration has predominantly been on correcting for rigid-body motion of brain image volumes acquired at different scanning sessions, often with different modalities, the advent of dedicated longitudinal and cross-sectional brain studies soon necessitated the development of more sophisticated methods that are able to detect and measure local structural or functional changes, or group differences. Moving outside of the brain, cine imaging and dynamic imaging required the development of deformable image registration to directly measure or compensate for local tissue motion. Since then, deformable image registration has become a general enabling technology. In this work we will present our own contributions to the state-of-the-art in deformable multi-modal fusion and complex motion modelling, and then discuss remaining challenges and provide future perspectives to the field.

  10. Development of a novel 2D color map for interactive segmentation of histological images

    PubMed Central

    Chaudry, Qaiser; Sharma, Yachna; Raza, Syed H.; Wang, May D.

    2016-01-01

    We present a color segmentation approach based on a two-dimensional color map derived from the input image. Pathologists stain tissue biopsies with various colored dyes to see the expression of biomarkers. In these images, because of color variation due to inconsistencies in experimental procedures and lighting conditions, the segmentation used to analyze biological features is usually ad hoc. Many algorithms, like K-means, use a single metric to segment the image into different color classes and rarely provide users with powerful color control. Our interactive 2D color map segmentation technique, based on human color perception and the color distribution of the input image, enables user control without noticeable delay. Our methodology works for different staining types and different types of cancer tissue images. Our proposed method's results show good accuracy with low response and computational times, making it a feasible method for user-interactive applications involving segmentation of histological images.

  11. Occluded target viewing and identification high-resolution 2D imaging laser radar

    NASA Astrophysics Data System (ADS)

    Grasso, Robert J.; Dippel, George F.; Cecchetti, Kristen D.; Wikman, John C.; Drouin, David P.; Egbert, Paul I.

    2007-09-01

    BAE SYSTEMS has developed a high-resolution 2D imaging laser radar (LADAR) system that has proven its ability to detect and identify hard targets in occluded environments, through battlefield obscurants, and through naturally occurring image-degrading atmospheres. Limitations of passive infrared imaging for target identification using medium wavelength infrared (MWIR) and long wavelength infrared (LWIR) atmospheric windows are well known. Of particular concern is that as wavelength increases the aperture must be increased to maintain resolution, hence driving apertures to become very large for long-range identification; this is impractical because of size, weight, and optics cost. Conversely, at smaller apertures and with large f-numbers, images may become photon starved, requiring long integration times. Here, images are most susceptible to distortion from atmospheric turbulence, platform vibration, or both. Additionally, long-range identification using passive thermal imaging is clutter limited, arising from objects in close proximity to the target object.

  12. Multimodality imaging combination in small animal via point-based registration

    NASA Astrophysics Data System (ADS)

    Yang, C. C.; Wu, T. H.; Lin, M. H.; Huang, Y. H.; Guo, W. Y.; Chen, C. L.; Wang, T. C.; Yin, W. H.; Lee, J. S.

    2006-12-01

    We present a system of image co-registration for small animal studies. Marker-based registration is chosen because of its considerable advantage that the fiducial feature is independent of imaging modality. We also experimented with different scanning protocols and different fiducial marker sizes to improve registration accuracy. Co-registration was conducted using a rat phantom fixed in a stereotactic frame. Overall, the co-registration accuracy was at the sub-millimeter level and close to the intrinsic system error. Therefore, we conclude that the system is an accurate co-registration method to be used in small animal studies.

  13. MO-DE-202-02: Advances in Image Registration and Reconstruction for Image-Guided Neurosurgery.

    PubMed

    Siewerdsen, J

    2016-06-01

    At least three major trends in surgical intervention have emerged over the last decade: a move toward more minimally invasive (or non-invasive) approach to the surgical target; the development of high-precision treatment delivery techniques; and the increasing role of multi-modality intraoperative imaging in support of such procedures. This symposium includes invited presentations on recent advances in each of these areas and the emerging role for medical physics research in the development and translation of high-precision interventional techniques. The four speakers are: (1) Keyvan Farahani, "Image-guided focused ultrasound surgery and therapy" (2) Jeffrey H. Siewerdsen, "Advances in image registration and reconstruction for image-guided neurosurgery" (3) Tina Kapur, "Image-guided surgery and interventions in the advanced multimodality image-guided operating (AMIGO) suite" (4) Raj Shekhar, "Multimodality image-guided interventions: Multimodality for the rest of us" Learning Objectives: 1. Understand the principles and applications of HIFU in surgical ablation. 2. Learn about recent advances in 3D-2D and 3D deformable image registration in support of surgical safety and precision. 3. Learn about recent advances in model-based 3D image reconstruction in application to intraoperative 3D imaging. 4. Understand the multi-modality imaging technologies and clinical applications investigated in the AMIGO suite. 5. Understand the emerging need and techniques to implement multi-modality image guidance in surgical applications such as neurosurgery, orthopaedic surgery, vascular surgery, and interventional radiology. Research supported by the NIH and Siemens Healthcare.; J. Siewerdsen; Grant Support - National Institutes of Health; Grant Support - Siemens Healthcare; Grant Support - Carestream Health; Advisory Board - Carestream Health; Licensing Agreement - Carestream Health; Licensing Agreement - Elekta Oncology.; T. Kapur, P41EB015898; R. Shekhar, Funding: R42CA137886 and

  14. Ultrasound calibration using intensity-based image registration: for application in cardiac catheterization procedures

    NASA Astrophysics Data System (ADS)

    Ma, Y. L.; Rhode, K. S.; Gao, G.; King, A. P.; Chinchapatnam, P.; Schaeffter, T.; Hawkes, D. J.; Razavi, R.; Penney, G. P.

    2008-03-01

    We present a novel method to calibrate a 3D ultrasound probe which has a 2D transducer array. By optically tracking a calibrated 3D probe we are able to produce extended field of view 3D ultrasound images. Tracking also enables us to register our ultrasound images to other tracked and calibrated surgical instruments or to other tracked and calibrated imaging devices. Our method applies rigid intensity-based image registration to three or more ultrasound images. These images can either be of a simple phantom, or could potentially be images of the patient. In this latter case we would have an automated calibration system which required no phantom, no image segmentation and was optimized to the patient's ultrasound characteristics, i.e. speed of sound. We have carried out experiments using a simple calibration phantom and with ultrasound images of a volunteer's liver. Results are compared to an independent gold standard. These showed our method to be accurate to 1.43 mm using the phantom images and 1.56 mm using the liver data, which is slightly better than the traditional point-based calibration method (1.7 mm in our experiments).

  15. TU-A-19A-01: Image Registration I: Deformable Image Registration, Contour Propagation and Dose Mapping: 101 and 201

    SciTech Connect

    Kessler, M

    2014-06-15

    Deformable image registration, contour propagation and dose mapping have become common, possibly essential tools for modern image-guided radiation therapy. Historically, these tools have been largely developed at academic medical centers and used in a rather limited and well controlled fashion. Today these tools are available to the radiotherapy community at large, both as stand-alone applications and as integrated components of both treatment planning and treatment delivery systems. Unfortunately, the details of how these tools work and their limitations are not generally documented or described by the vendors that provide them. Although a result may "look right", determining whether unphysical deformations have occurred is crucial. Because of this, understanding how and when to use, and not use, these tools to support everyday clinical decisions is far from straightforward. The goal of this session will be to present both the theory (basic and advanced) and the practical clinical use of deformable image registration, contour propagation and dose mapping. To the extent possible, the "secret sauce" that different vendors use to produce reasonable/acceptable results will be described. A detailed explanation of the possible sources of errors and actual examples of these will be presented. Knowing the underlying principles of the process and understanding the confounding factors will help the practicing medical physicist make better decisions using the tools available. Learning Objectives: Understand the basic (101) and advanced (201) principles of deformable image registration, contour propagation and dose mapping. Understand the sources and impact of errors in registration and data mapping and the methods for evaluating the performance of these tools. Understand the clinical use and value of these tools, especially when used as a "black box".

  16. A one-bit approach for image registration

    NASA Astrophysics Data System (ADS)

    Nguyen, An Hung; Pickering, Mark; Lambert, Andrew

    2015-02-01

    Motion estimation or optic flow computation for automatic navigation and obstacle avoidance programs running on Unmanned Aerial Vehicles (UAVs) is a challenging task. These challenges come from the requirements of real-time processing speed and small light-weight image processing hardware with very limited resources (especially memory space) embedded on the UAVs. Solutions towards both simplifying computation and saving hardware resources have recently received much interest. This paper presents an approach for image registration using binary images which addresses these two requirements. This approach uses translational information between two corresponding patches of binary images to estimate global motion. These low bit-resolution images require a very small amount of memory space to store them and allow simple logic operations such as XOR and AND to be used instead of more complex computations such as subtractions and multiplications.
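
    A minimal sketch of the one-bit matching idea described above: binarize both frames, then score candidate integer translations of a central patch by the XOR (Hamming) mismatch count. The median threshold, search range, and synthetic frames are assumptions, not the authors' hardware implementation.

```python
import numpy as np

def binarize(image):
    """One-bit representation: threshold at the median gray level."""
    return (image > np.median(image)).astype(np.uint8)

def best_shift(ref_bits, cur_bits, max_shift=8):
    """
    Exhaustively test integer translations of a central patch; the XOR of two
    binary patches marks mismatching pixels, so its count is the matching cost.
    """
    h, w = ref_bits.shape
    patch = ref_bits[max_shift:h - max_shift, max_shift:w - max_shift]
    best, best_cost = (0, 0), np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            cand = cur_bits[max_shift + dy:h - max_shift + dy,
                            max_shift + dx:w - max_shift + dx]
            cost = np.count_nonzero(np.bitwise_xor(patch, cand))   # Hamming distance
            if cost < best_cost:
                best, best_cost = (dy, dx), cost
    return best

# Illustrative frames: the second is the first translated by (3, -2) pixels.
rng = np.random.default_rng(8)
frame0 = rng.random((128, 128))
frame1 = np.roll(frame0, shift=(3, -2), axis=(0, 1))
dy, dx = best_shift(binarize(frame0), binarize(frame1))
```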

  17. A computationally efficient method for automatic registration of orthogonal x-ray images with volumetric CT data.

    PubMed

    Chen, Xin; Varley, Martin R; Shark, Lik-Kwan; Shentall, Glyn S; Kirby, Mike C

    2008-02-21

    The paper presents a computationally efficient 3D-2D image registration algorithm for automatic pre-treatment validation in radiotherapy. The novel aspects of the algorithm include (a) a hybrid cost function based on partial digitally reconstructed radiographs (DRRs) generated along projected anatomical contours and a level set term for similarity measurement; and (b) a fast search method based on parabola fitting and sensitivity-based search order. Using CT and orthogonal x-ray images from a skull and a pelvis phantom, the proposed algorithm is compared with the conventional ray-casting full DRR based registration method. Not only is the algorithm shown to be computationally more efficient with registration time being reduced by a factor of 8, but also the algorithm is shown to offer 50% higher capture range allowing the initial patient displacement up to 15 mm (measured by mean target registration error). For the simulated data, high registration accuracy with average errors of 0.53 mm +/- 0.12 mm for translation and 0.61 +/- 0.29 degrees for rotation within the capture range has been achieved. For the tested phantom data, the algorithm has also shown to be robust without being affected by artificial markers in the image.

  18. A computationally efficient method for automatic registration of orthogonal x-ray images with volumetric CT data

    NASA Astrophysics Data System (ADS)

    Chen, Xin; Varley, Martin R.; Shark, Lik-Kwan; Shentall, Glyn S.; Kirby, Mike C.

    2008-02-01

    The paper presents a computationally efficient 3D-2D image registration algorithm for automatic pre-treatment validation in radiotherapy. The novel aspects of the algorithm include (a) a hybrid cost function based on partial digitally reconstructed radiographs (DRRs) generated along projected anatomical contours and a level set term for similarity measurement; and (b) a fast search method based on parabola fitting and sensitivity-based search order. Using CT and orthogonal x-ray images from a skull and a pelvis phantom, the proposed algorithm is compared with the conventional ray-casting full DRR based registration method. Not only is the algorithm shown to be computationally more efficient with registration time being reduced by a factor of 8, but also the algorithm is shown to offer 50% higher capture range allowing the initial patient displacement up to 15 mm (measured by mean target registration error). For the simulated data, high registration accuracy with average errors of 0.53 mm ± 0.12 mm for translation and 0.61° ± 0.29° for rotation within the capture range has been achieved. For the tested phantom data, the algorithm has also shown to be robust without being affected by artificial markers in the image.
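
    The parabola-fitting search mentioned in both records above can be sketched in one step: evaluate the cost at three parameter values, fit a quadratic, and jump to its vertex. The cost profile and sample points below are illustrative assumptions, not the paper's DRR-based cost function.

```python
import numpy as np

def parabola_minimum(x, f):
    """
    Fit f(x) = a x^2 + b x + c through three samples and return the vertex
    x* = -b / (2 a); fall back to the best sample if the fit is not convex.
    """
    a, b, c = np.polyfit(x, f, 2)
    if a <= 0:
        return x[int(np.argmin(f))]
    return -b / (2.0 * a)

# Example: two parabola-fit steps on a smooth 1-D cost along one transform parameter.
cost = lambda t: (t - 1.3) ** 2 + 0.2 * np.cos(3 * t)     # illustrative cost profile
samples = np.array([-1.0, 0.0, 2.0])
t_star = parabola_minimum(samples, cost(samples))
refine = np.array([t_star - 0.5, t_star, t_star + 0.5])
t_refined = parabola_minimum(refine, cost(refine))
```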

  19. Towards the clinical integration of an image-guided navigation system for percutaneous liver tumor ablation using freehand 2D ultrasound images.

    PubMed

    Spinczyk, Dominik

    2015-01-01

    Primary and metastatic liver tumors constitute a significant challenge for contemporary medicine. Several improvements are currently being developed and implemented to advance image navigation systems for percutaneous liver focal lesion ablation in clinical applications at the diagnosis, planning and intervention stages. First, the automatic generation of an anatomically accurate parametric model of the preoperative patient liver was proposed, in addition to a method to visually evaluate it and make manual corrections. Second, a marker was designed to facilitate rigid registration between the model of the preoperative patient liver and the patient during treatment. A specific approach was implemented and tested for rigid mapping by continuously tracking a set of uniquely identified markers and by accounting for breathing motion, facilitating the determination of the optimal breathing phase for needle insertion into the liver tissue. Third, to overcome the challenge of tracking the absolute position of the planned target point, an intra-operative ultrasound (US) system was integrated based on the Public Software Library for UltraSound and the OpenIGTLink protocol, which tracks breathing motion in a 2D time sequence of US images. Additionally, to improve the visibility of liver focal lesions, an approach to determine spatio-temporal correspondence between the US sequence and the 4D computed tomography (CT) examination was developed, implemented and tested. The proposed anatomical model processing, the rigid registration approach, and the implemented US tracking and fusion method were tested on 20 anonymized CT datasets and in 10 clinical cases, respectively. The presented methodology can be applied with any older 2D US system, of the kind currently common in clinical practice.

  20. Using image synthesis for multi-channel registration of different image modalities.

    PubMed

    Chen, Min; Jog, Amod; Carass, Aaron; Prince, Jerry L

    2015-02-21

    This paper presents a multi-channel approach for performing registration between magnetic resonance (MR) images with different modalities. In general, a multi-channel registration cannot be used when the moving and target images do not have analogous modalities. In this work, we address this limitation by using a random forest regression technique to synthesize the missing modalities from the available ones. This allows a single channel registration between two different modalities to be converted into a multi-channel registration with two mono-modal channels. To validate our approach, two openly available registration algorithms and five cost functions were used to compare the label transfer accuracy of the registration with (and without) our multi-channel synthesis approach. Our results show that the proposed method produced statistically significant improvements in registration accuracy (at an α level of 0.001) for both algorithms and all cost functions when compared to a standard multi-modal registration using the same algorithms with mutual information.

  1. MIND Demons for MR-to-CT Deformable Image Registration In Image-Guided Spine Surgery

    PubMed Central

    Reaungamornrat, S.; De Silva, T.; Uneri, A.; Wolinsky, J.-P.; Khanna, A. J.; Kleinszig, G.; Vogt, S.; Prince, J. L.; Siewerdsen, J. H.

    2016-01-01

    Purpose Localization of target anatomy and critical structures defined in preoperative MR images can be achieved by means of multi-modality deformable registration to intraoperative CT. We propose a symmetric diffeomorphic deformable registration algorithm incorporating a modality independent neighborhood descriptor (MIND) and a robust Huber metric for MR-to-CT registration. Method The method, called MIND Demons, solves for the deformation field between two images by optimizing an energy functional that incorporates both the forward and inverse deformations, smoothness on the velocity fields and the diffeomorphisms, a modality-insensitive similarity function suitable to multi-modality images, and constraints on geodesics in Lagrangian coordinates. Direct optimization (without relying on an exponential map of stationary velocity fields used in conventional diffeomorphic Demons) is carried out using a Gauss-Newton method for fast convergence. Registration performance and sensitivity to registration parameters were analyzed in simulation, in phantom experiments, and clinical studies emulating application in image-guided spine surgery, and results were compared to conventional mutual information (MI) free-form deformation (FFD), local MI (LMI) FFD, and normalized MI (NMI) Demons. Result The method yielded sub-voxel invertibility (0.006 mm) and nonsingular spatial Jacobians with capability to preserve local orientation and topology. It demonstrated improved registration accuracy in comparison to the reference methods, with mean target registration error (TRE) of 1.5 mm compared to 10.9, 2.3, and 4.6 mm for MI FFD, LMI FFD, and NMI Demons methods, respectively. Validation in clinical studies demonstrated realistic deformation with sub-voxel TRE in cases of cervical, thoracic, and lumbar spine. Conclusions A modality-independent deformable registration method has been developed to estimate a viscoelastic diffeomorphic map between preoperative MR and intraoperative CT. The

  2. MIND Demons for MR-to-CT deformable image registration in image-guided spine surgery

    NASA Astrophysics Data System (ADS)

    Reaungamornrat, S.; De Silva, T.; Uneri, A.; Wolinsky, J.-P.; Khanna, A. J.; Kleinszig, G.; Vogt, S.; Prince, J. L.; Siewerdsen, J. H.

    2016-03-01

    Purpose: Localization of target anatomy and critical structures defined in preoperative MR images can be achieved by means of multi-modality deformable registration to intraoperative CT. We propose a symmetric diffeomorphic deformable registration algorithm incorporating a modality independent neighborhood descriptor (MIND) and a robust Huber metric for MR-to-CT registration. Method: The method, called MIND Demons, solves for the deformation field between two images by optimizing an energy functional that incorporates both the forward and inverse deformations, smoothness on the velocity fields and the diffeomorphisms, a modality-insensitive similarity function suitable to multi-modality images, and constraints on geodesics in Lagrangian coordinates. Direct optimization (without relying on an exponential map of stationary velocity fields used in conventional diffeomorphic Demons) is carried out using a Gauss-Newton method for fast convergence. Registration performance and sensitivity to registration parameters were analyzed in simulation, in phantom experiments, and clinical studies emulating application in image-guided spine surgery, and results were compared to conventional mutual information (MI) free-form deformation (FFD), local MI (LMI) FFD, and normalized MI (NMI) Demons. Result: The method yielded sub-voxel invertibility (0.006 mm) and nonsingular spatial Jacobians with capability to preserve local orientation and topology. It demonstrated improved registration accuracy in comparison to the reference methods, with mean target registration error (TRE) of 1.5 mm compared to 10.9, 2.3, and 4.6 mm for MI FFD, LMI FFD, and NMI Demons methods, respectively. Validation in clinical studies demonstrated realistic deformation with sub-voxel TRE in cases of cervical, thoracic, and lumbar spine. Conclusions: A modality-independent deformable registration method has been developed to estimate a
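
    The full MIND Demons energy described in the two records above is not reproduced here, but the robust Huber metric it uses for the data term is simple to sketch: quadratic for small residuals and linear for large ones, so gross mismatches (e.g. structures present in only one modality) do not dominate. The threshold delta and the residual map below are placeholders.

```python
import numpy as np

def huber(residual, delta=0.1):
    """
    Huber penalty: 0.5 * r^2 for |r| <= delta, and delta * (|r| - 0.5 * delta)
    otherwise, so large residuals are penalized linearly rather than quadratically.
    """
    r = np.abs(residual)
    quadratic = 0.5 * r ** 2
    linear = delta * (r - 0.5 * delta)
    return np.where(r <= delta, quadratic, linear)

# Example: residuals between two descriptor/intensity maps (illustrative arrays).
rng = np.random.default_rng(9)
residual = rng.standard_normal((64, 64)) * 0.05
residual[30:34, 30:34] = 2.0            # simulated gross mismatch (outlier region)
data_term = huber(residual).sum()
```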

  3. A review of biomechanically informed breast image registration

    NASA Astrophysics Data System (ADS)

    Hipwell, John H.; Vavourakis, Vasileios; Han, Lianghao; Mertzanidou, Thomy; Eiben, Björn; Hawkes, David J.

    2016-01-01

    Breast radiology encompasses the full range of imaging modalities from routine imaging via x-ray mammography, magnetic resonance imaging and ultrasound (both two- and three-dimensional), to more recent technologies such as digital breast tomosynthesis, and dedicated breast imaging systems for positron emission mammography and ultrasound tomography. In addition new and experimental modalities, such as Photoacoustics, Near Infrared Spectroscopy and Electrical Impedance Tomography etc, are emerging. The breast is a highly deformable structure however, and this greatly complicates visual comparison of imaging modalities for the purposes of breast screening, cancer diagnosis (including image guided biopsy), tumour staging, treatment monitoring, surgical planning and simulation of the effects of surgery and wound healing etc. Due primarily to the challenges posed by these gross, non-rigid deformations, development of automated methods which enable registration, and hence fusion, of information within and across breast imaging modalities, and between the images and the physical space of the breast during interventions, remains an active research field which has yet to translate suitable methods into clinical practice. This review describes current research in the field of breast biomechanical modelling and identifies relevant publications where the resulting models have been incorporated into breast image registration and simulation algorithms. Despite these developments there remain a number of issues that limit clinical application of biomechanical modelling. These include the accuracy of constitutive modelling, implementation of representative boundary conditions, failure to meet clinically acceptable levels of computational cost, challenges associated with automating patient-specific model generation (i.e. robust image segmentation and mesh generation) and the complexity of applying biomechanical modelling methods in routine clinical practice.

  4. Preliminary work of real-time ultrasound imaging system for 2-D array transducer.

    PubMed

    Li, Xu; Yang, Jiali; Ding, Mingyue; Yuchi, Ming

    2015-01-01

    Ultrasound (US) has emerged as a non-invasive imaging modality that can provide anatomical structure information in real time. To enable the experimental analysis of new 2-D array ultrasound beamforming methods, a pre-beamformed parallel raw data acquisition system was developed for 3-D data capture with a 2-D array transducer. The transducer interconnection adopted the row-column addressing (RCA) scheme, in which the columns and rows were activated sequentially for transmit and receive events, respectively. The DAQ system captured the raw data in parallel, and the digitized data were fed through a field-programmable gate array (FPGA) to implement the pre-beamforming. Finally, 3-D images were reconstructed in real time through the devised platform.

  5. Image Pretreatment Tools II: Normalization Techniques for 2-DE and 2-D DIGE.

    PubMed

    Robotti, Elisa; Marengo, Emilio; Quasso, Fabio

    2016-01-01

    Gel electrophoresis is usually applied to identify different protein expression profiles in biological samples (e.g., control vs. pathological, control vs. treated). Information about the effect to be investigated (a pathology, a drug, a ripening effect, etc.) is, however, generally confounded with experimental variability, which is quite large in 2-DE and may arise from small variations in sample preparation, reagents, sample loading, electrophoretic conditions, staining, and image acquisition. Obtaining valid quantitative estimates of protein abundances in each map before the differential analysis is therefore fundamental for providing robust candidate biomarkers. Normalization procedures are applied to reduce experimental noise and make the images comparable, improving the accuracy of differential analysis. They can, however, strongly influence the final results and must therefore be applied with care. Here, the most widespread normalization procedures are described for both 2-DE and 2-D Difference Gel Electrophoresis (2-D DIGE) maps.
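
    As a concrete illustration, one of the simplest and most common procedures is total-spot-volume normalization, sketched below in Python. This is a generic example, not a reproduction of any specific protocol from the chapter; the function name and data layout are assumptions.

        import numpy as np

        def total_volume_normalize(spot_volumes):
            """Normalize each gel's spot volumes by that gel's total spot volume.

            spot_volumes: 2-D array, rows = gels/maps, columns = matched spots.
            Returns volumes expressed as fractions of each gel's total intensity,
            making maps acquired under different loading/staining more comparable.
            """
            spot_volumes = np.asarray(spot_volumes, dtype=float)
            totals = spot_volumes.sum(axis=1, keepdims=True)
            return spot_volumes / totals

        # Example: two maps of the same three spots with different overall staining.
        maps = [[100.0, 50.0, 50.0],
                [200.0, 100.0, 100.0]]
        print(total_volume_normalize(maps))  # identical normalized profiles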

  6. Robust image registration using adaptive coherent point drift method

    NASA Astrophysics Data System (ADS)

    Yang, Lijuan; Tian, Zheng; Zhao, Wei; Wen, Jinhuan; Yan, Weidong

    2016-04-01

    The coherent point drift (CPD) method is a powerful registration tool built on the Gaussian mixture model (GMM). However, it considers only the global spatial structure of the point sets, without additional attribute information. The equal mixing coefficients and the manually set weight parameter in the GMM make the CPD method less robust to outliers and less flexible. An adaptive CPD method is proposed that automatically determines the mixing parameters by embedding local attribute information of the features into the construction of the GMM. In addition, the weight parameter is treated as an unknown and is automatically determined within the expectation-maximization algorithm. In image registration applications, a block-divided salient image disk extraction method is designed to detect sparse salient image features, and local self-similarity is used as attribute information to describe the local neighborhood structure of each feature. Experimental results on optical and remote sensing images show that the proposed method significantly improves matching performance.
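
    For reference, the point-set likelihood that standard CPD optimizes is a Gaussian mixture with a uniform outlier term (this is the textbook CPD formulation, not an equation quoted from the paper):

        p(\mathbf{x}_n) = w\,\frac{1}{N} + (1 - w)\sum_{m=1}^{M} \frac{1}{M}\,\mathcal{N}\!\left(\mathbf{x}_n \mid \mathbf{y}_m, \sigma^2 \mathbf{I}\right)

    In the standard method the outlier weight w is set manually and the mixing coefficients are fixed at 1/M; the adaptive variant described above replaces the 1/M terms with attribute-driven mixing parameters and estimates w within the EM iterations.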

  7. Fluid Registration of Diffusion Tensor Images Using Information Theory

    PubMed Central

    Chiang, Ming-Chang; Leow, Alex D.; Klunder, Andrea D.; Dutton, Rebecca A.; Barysheva, Marina; Rose, Stephen E.; McMahon, Katie L.; de Zubicaray, Greig I.; Toga, Arthur W.; Thompson, Paul M.

    2008-01-01

    We apply an information-theoretic cost metric, the symmetrized Kullback-Leibler (sKL) divergence, or J-divergence, to fluid registration of diffusion tensor images. The difference between diffusion tensors is quantified based on the sKL-divergence of their associated probability density functions (PDFs). Three-dimensional DTI data from 34 subjects were fluidly registered to an optimized target image. To allow large image deformations but preserve image topology, we regularized the flow with a large-deformation diffeomorphic mapping based on the kinematics of a Navier-Stokes fluid. A driving force was developed to minimize the J-divergence between the deforming source and target diffusion functions, while reorienting the flowing tensors to preserve fiber topography. In initial experiments, we showed that the sKL-divergence based on full diffusion PDFs is adaptable to higher-order diffusion models, such as high angular resolution diffusion imaging (HARDI). The sKL-divergence was sensitive to subtle differences between two diffusivity profiles, showing promise for nonlinear registration applications and multisubject statistical analysis of HARDI data. PMID:18390342
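
    For reference, the symmetrized Kullback-Leibler (sKL) divergence between densities p and q is defined as (standard definition, not quoted from the paper):

        J(p, q) = \tfrac{1}{2}\,\mathrm{KL}(p \,\|\, q) + \tfrac{1}{2}\,\mathrm{KL}(q \,\|\, p)
                = \tfrac{1}{2}\int \bigl(p(x) - q(x)\bigr)\,\log\frac{p(x)}{q(x)}\,dx

    In the single-tensor case, where the diffusion PDFs are zero-mean Gaussians with covariances proportional to the tensors, this reduces to the closed form

        J = \tfrac{1}{4}\left[\operatorname{tr}\!\left(\Sigma_1^{-1}\Sigma_2\right) + \operatorname{tr}\!\left(\Sigma_2^{-1}\Sigma_1\right) - 2d\right],

    with d the dimension (d = 3 for DTI).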

  8. Iris-based cyclotorsional image alignment method for wavefront registration.

    PubMed

    Chernyak, Dimitri A

    2005-12-01

    In refractive surgery, especially wavefront-guided refractive surgery, correct registration of the treatment to the cornea is of paramount importance. The specificity of the custom ablation formula requires that the ablation be applied to the cornea only when it has been precisely aligned with the mapped area. If, however, the eye has rotated between measurement and ablation, and this cyclotorsion is not compensated for, the rotational misalignment could impair the effectiveness of the refractive surgery. To achieve precise registration, a noninvasive method for torsional rotational alignment of the captured wavefront image to the patient's eyes at surgery has been developed. This method applies a common coordinate system to the wavefront and the eye. Video cameras on the laser and wavefront devices precisely establish the spatial relationship between the optics of the eye and the natural features of the iris, enabling the surgeon to identify and compensate for cyclotorsional eye motion, whatever its cause.

  9. Image compression and encryption scheme based on 2D compressive sensing and fractional Mellin transform

    NASA Astrophysics Data System (ADS)

    Zhou, Nanrun; Li, Haolin; Wang, Di; Pan, Shumin; Zhou, Zhihong

    2015-05-01

    Most existing image encryption techniques either bear security risks because they rely on linear transforms or suffer data expansion when nonlinear transformations are adopted directly. To overcome these difficulties, a novel image compression-encryption scheme is proposed that combines 2D compressive sensing with the nonlinear fractional Mellin transform. In this scheme, the original image is measured by measurement matrices in two directions to achieve compression and encryption simultaneously, and the resulting image is then re-encrypted by the nonlinear fractional Mellin transform. The measurement matrices are controlled by a chaotic map. The Newton Smoothed l0 Norm (NSL0) algorithm is adopted to obtain the decrypted image. Simulation results verify the validity and reliability of this scheme.
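
    A minimal Python sketch of the 2-D compressive measurement step is given below. The chaotic control of the measurement matrices is illustrated with a logistic map purely as a stand-in (the abstract does not specify the map), and the fractional Mellin re-encryption and NSL0 reconstruction steps are not shown.

        import numpy as np

        def logistic_sequence(x0, r, n):
            """Chaotic logistic-map sequence used here as a stand-in key stream."""
            seq = np.empty(n)
            x = x0
            for i in range(n):
                x = r * x * (1.0 - x)
                seq[i] = x
            return seq

        def chaotic_measurement_matrix(m, n, x0, r=3.99):
            """Key-dependent measurement matrix built from the chaotic sequence."""
            vals = logistic_sequence(x0, r, m * n)
            return (2.0 * vals.reshape(m, n) - 1.0) / np.sqrt(m)  # zero-mean, scaled

        # 2-D compressive sampling: measure the image along rows and columns.
        img = np.random.rand(64, 64)               # stand-in for the plaintext image
        Phi1 = chaotic_measurement_matrix(32, 64, x0=0.37)
        Phi2 = chaotic_measurement_matrix(32, 64, x0=0.71)
        Y = Phi1 @ img @ Phi2.T                    # compressed, key-dependent data
        print(Y.shape)                             # (32, 32): 4x fewer samples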

  10. Filters in 2D and 3D Cardiac SPECT Image Processing

    PubMed Central

    Ploussi, Agapi; Synefia, Stella

    2014-01-01

    Nuclear cardiac imaging is a noninvasive, sensitive method providing information on cardiac structure and physiology. Single photon emission tomography (SPECT) evaluates myocardial perfusion, viability, and function and is widely used in clinical routine. The quality of the tomographic image is key to accurate diagnosis. Image filtering, a mathematical processing step, compensates for loss of detail in an image while reducing image noise; it can improve image resolution and limit image degradation. SPECT images are then reconstructed either analytically, by filtered back projection (FBP), or iteratively, by algebraic methods. The aim of this study is to review filters in cardiac 2D, 3D, and 4D SPECT applications and how they affect image quality, which mirrors the diagnostic accuracy of SPECT images. Several filters, including the Hanning, Butterworth, and Parzen filters, were evaluated in combination with the two reconstruction methods as well as with a dedicated MATLAB program. Results showed that for both 3D and 4D cardiac SPECT the Butterworth filter, over a range of critical frequencies and orders, produced the best results. Of the two reconstruction methods, the iterative one may be more appropriate for cardiac SPECT, since it improves lesion detectability through a significant improvement in image contrast. PMID:24804144

  11. Filters in 2D and 3D Cardiac SPECT Image Processing.

    PubMed

    Lyra, Maria; Ploussi, Agapi; Rouchota, Maritina; Synefia, Stella

    2014-01-01

    Nuclear cardiac imaging is a noninvasive, sensitive method providing information on cardiac structure and physiology. Single photon emission tomography (SPECT) evaluates myocardial perfusion, viability, and function and is widely used in clinical routine. The quality of the tomographic image is key to accurate diagnosis. Image filtering, a mathematical processing step, compensates for loss of detail in an image while reducing image noise; it can improve image resolution and limit image degradation. SPECT images are then reconstructed either analytically, by filtered back projection (FBP), or iteratively, by algebraic methods. The aim of this study is to review filters in cardiac 2D, 3D, and 4D SPECT applications and how they affect image quality, which mirrors the diagnostic accuracy of SPECT images. Several filters, including the Hanning, Butterworth, and Parzen filters, were evaluated in combination with the two reconstruction methods as well as with a dedicated MATLAB program. Results showed that for both 3D and 4D cardiac SPECT the Butterworth filter, over a range of critical frequencies and orders, produced the best results. Of the two reconstruction methods, the iterative one may be more appropriate for cardiac SPECT, since it improves lesion detectability through a significant improvement in image contrast.
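
    The abstracts above do not state the filter definition; for reference, a common form of the Butterworth low-pass window used in SPECT reconstruction is (conventions differ on whether the square root appears in the denominator):

        B(f) = \frac{1}{\sqrt{1 + \left(f / f_c\right)^{2n}}}

    where f_c is the critical (cutoff) frequency and n is the order; a higher order gives a sharper roll-off, and a lower critical frequency gives stronger smoothing.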

  12. 2D-pattern matching image and video compression: theory, algorithms, and experiments.

    PubMed

    Alzina, Marc; Szpankowski, Wojciech; Grama, Ananth

    2002-01-01

    In this paper, we propose a lossy data compression framework based on an approximate two-dimensional (2D) pattern matching (2D-PMC) extension of the Lempel-Ziv (1977, 1978) lossless scheme. This framework forms the basis upon which higher level schemes relying on differential coding, frequency domain techniques, prediction, and other methods can be built. We apply our pattern matching framework to image and video compression and report on theoretical and experimental results. Theoretically, we show that the fixed database model used for video compression leads to suboptimal but computationally efficient performance. The compression ratio of this model is shown to tend to the generalized entropy. For image compression, we use a growing database model for which we provide an approximate analysis. The implementation of 2D-PMC is a challenging problem from the algorithmic point of view. We use a range of techniques and data structures such as k-d trees, generalized run length coding, adaptive arithmetic coding, and variable and adaptive maximum distortion level to achieve good compression ratios at high compression speeds. We demonstrate bit rates in the range of 0.25-0.5 bpp for high-quality images and data rates in the range of 0.15-0.5 Mbps for a baseline video compression scheme that does not use any prediction or interpolation. We also demonstrate that this asymmetric compression scheme is capable of extremely fast decompression making it particularly suitable for networked multimedia applications.
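
    A toy Python sketch of the growing-database, approximate-matching idea is shown below. It uses brute-force search and a fixed mean-squared-error budget, whereas the paper employs k-d trees, run-length and arithmetic coding, and variable/adaptive distortion levels, so this illustrates the principle only; all names are assumptions.

        import numpy as np

        def find_approx_match(block, database, max_distortion):
            """Return the index of the first database block within the distortion
            budget (mean squared error), or None if no acceptable match exists."""
            for idx, cand in enumerate(database):
                if np.mean((block - cand) ** 2) <= max_distortion:
                    return idx
            return None

        def pmc_encode(image, block=8, max_distortion=25.0):
            """Toy 2-D pattern-matching encoder: emit ('match', index) or ('literal', block)."""
            database, stream = [], []
            h, w = image.shape
            for i in range(0, h - block + 1, block):
                for j in range(0, w - block + 1, block):
                    b = image[i:i + block, j:j + block].astype(float)
                    m = find_approx_match(b, database, max_distortion)
                    if m is None:
                        database.append(b)
                        stream.append(('literal', b))
                    else:
                        stream.append(('match', m))
            return stream

        img = (np.random.rand(32, 32) * 255).astype(np.uint8)
        print(len(pmc_encode(img)))  # number of encoded blocks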

  13. JetCurry: Modeling 3D geometry of AGN jets from 2D images

    NASA Astrophysics Data System (ADS)

    Kosak, Katie; Li, KunYang; Avachat, Sayali S.; Perlman, Eric S.

    2017-02-01

    Written in Python, JetCurry models the 3D geometry of jets from 2-D images. JetCurry requires NumPy and SciPy and incorporates emcee (ascl:1303.002) and AstroPy (ascl:1304.002), and optionally uses VPython. From a defined initial part of the jet that serves as a reference point, JetCurry finds the position of highest flux within a bin of data in the image matrix and fits along the x axis for the general location of the bends in the jet. Spline fitting is used to smooth the resulting jet stream.
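
    A simplified Python sketch of the peak-tracing and spline-smoothing steps is shown below; the function names, bin width, and use of SciPy's UnivariateSpline are illustrative assumptions rather than JetCurry's actual implementation.

        import numpy as np
        from scipy.interpolate import UnivariateSpline

        def trace_jet(image, x_start, x_end, bin_width=4):
            """Within successive x bins, take the position of the brightest pixel
            as the jet locus (toy version of the peak-tracing step)."""
            xs, ys = [], []
            for x0 in range(x_start, x_end, bin_width):
                strip = image[:, x0:x0 + bin_width]
                y_peak, x_off = np.unravel_index(np.argmax(strip), strip.shape)
                xs.append(x0 + x_off)
                ys.append(y_peak)
            return np.array(xs), np.array(ys)

        # Smooth the traced stream with a spline, as the abstract describes.
        img = np.random.rand(128, 256)             # stand-in for a jet image
        xs, ys = trace_jet(img, x_start=10, x_end=240)
        spline = UnivariateSpline(xs, ys, s=len(xs))
        smooth_y = spline(xs)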

  14. 2D-CELL: image processing software for extraction and analysis of 2-dimensional cellular structures

    NASA Astrophysics Data System (ADS)

    Righetti, F.; Telley, H.; Leibling, Th. M.; Mocellin, A.

    1992-01-01

    2D-CELL is a software package for processing and analyzing photographic images of cellular structures in a largely interactive way. Starting from a binary digitized image, the programs extract the line network (skeleton) of the structure and determine the graph representation that best models it. Provision is made for manually correcting defects such as incorrect node positions or dangling bonds. A suitable algorithm then retrieves the polygonal contours that define individual cells; local boundary curvatures are neglected for simplicity. Using elementary analytical geometry relations, a range of metric and topological parameters describing the population is then computed, organized into statistical distributions, and graphically displayed.
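
    As an example of the elementary analytical geometry relations mentioned above, a short Python sketch computing per-cell metric parameters from a polygonal contour (area by the shoelace formula, perimeter, and number of sides) follows; the function is illustrative and not part of the 2D-CELL package.

        import numpy as np

        def polygon_metrics(vertices):
            """Area (shoelace formula), perimeter, and number of sides of a cell polygon."""
            v = np.asarray(vertices, dtype=float)
            x, y = v[:, 0], v[:, 1]
            area = 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))
            perimeter = np.sum(np.linalg.norm(np.roll(v, -1, axis=0) - v, axis=1))
            return area, perimeter, len(v)

        # A unit-square cell: area 1.0, perimeter 4.0, 4 sides.
        print(polygon_metrics([(0, 0), (1, 0), (1, 1), (0, 1)]))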

  15. JetCurry: Modeling 3D geometry of AGN jets from 2D images

    NASA Astrophysics Data System (ADS)

    Li, Kunyang; Kosak, Katie; Avachat, Sayali S.; Perlman, Eric S.

    2017-02-01

    Written in Python, JetCurry models the 3D geometry of AGN jets from 2-D images. JetCurry requires NumPy and SciPy and incorporates emcee (ascl:1303.002) and AstroPy (ascl:1304.002), and optionally uses VPython. From a defined initial part of the jet that serves as a reference point, JetCurry finds the position of highest flux within a bin of data in the image matrix and fits along the x axis for the general location of the bends in the jet. Spline fitting is used to smooth the resulting jet stream.

  16. Image Registration Through The Exploitation Of Perspective Invariant Graphs

    NASA Astrophysics Data System (ADS)

    Gilmore, John F.

    1983-10-01

    This paper describes two new techniques of image registration as applied to scenes consisting of natural terrain. The first technique is a syntactic pattern recognition approach which combines the spatial relationships of a point pattern with point classifications to accurately perform image registration. In this approach, a preprocessor analyzes each image in order to identify points of interest and to classify these points based on statistical features. A classified graph possessing perspective-invariant properties is created and is converted into a classification-based grammar string. A local match analysis is performed and the best global match is constructed. A probability-of-match metric is computed in order to evaluate match confidence. The second technique described is an isomorphic graph matching approach called Mean Neighbors (MN). An MN graph is constructed from a given point pattern, taking into account the elliptical projections of real-world scenes onto a two-dimensional surface. This approach exploits the spatial relationships of the given points of interest but neglects the point classifications used in syntactic processing. A projective, perspective-invariant graph is constructed for both the reference and sensed images and a mapping of the coincidence edges occurs. A probability-of-match metric is used to evaluate the confidence of the best mapping.

  17. 2D image classification for 3D anatomy localization: employing deep convolutional neural networks

    NASA Astrophysics Data System (ADS)

    de Vos, Bob D.; Wolterink, Jelmer M.; de Jong, Pim A.; Viergever, Max A.; Išgum, Ivana

    2016-03-01

    Localization of anatomical regions of interest (ROIs) is a preprocessing step in many medical image analysis tasks. While trivial for humans, it is complex for automatic methods. Classic machine learning approaches face the challenge of hand-crafting features to describe differences between ROIs and background. Deep convolutional neural networks (CNNs) alleviate this by automatically learning hierarchical feature representations from raw images. We employ this trait to detect anatomical ROIs in 2D image slices in order to localize them in 3D. In 100 low-dose, non-contrast-enhanced, non-ECG-synchronized screening chest CT scans, a reference standard was defined by manually delineating rectangular bounding boxes around three anatomical ROIs: the heart, aortic arch, and descending aorta. Every anatomical ROI was automatically identified using a combination of three CNNs, each analyzing one orthogonal image plane. While each single CNN predicted the presence or absence of a specific ROI in the given plane, the combination of their results provided a 3D bounding box around it. Classification performance of each CNN, expressed as area under the receiver operating characteristic curve, was at least 0.988. Additionally, the performance of ROI localization was evaluated. Median Dice scores for automatically determined bounding boxes around the heart, aortic arch, and descending aorta were 0.89, 0.70, and 0.85, respectively. The results demonstrate that accurate automatic 3D localization of anatomical structures by CNN-based 2D image classification is feasible.
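
    The aggregation of the three per-plane classifiers into a 3-D bounding box can be illustrated with the short Python sketch below; the thresholding rule and function names are assumptions for illustration, and the CNNs themselves are not shown.

        import numpy as np

        def bounding_box_from_slice_scores(axial, coronal, sagittal, threshold=0.5):
            """Combine per-slice presence probabilities from three orthogonal-plane
            classifiers into a 3-D bounding box (z, y, x extents).

            axial[i]    = P(ROI present in axial slice i)    -> z extent
            coronal[j]  = P(ROI present in coronal slice j)  -> y extent
            sagittal[k] = P(ROI present in sagittal slice k) -> x extent
            """
            def extent(scores):
                idx = np.where(np.asarray(scores) >= threshold)[0]
                return (int(idx.min()), int(idx.max())) if idx.size else None
            return extent(axial), extent(coronal), extent(sagittal)

        # Toy example: the ROI spans slices 2-4 in every direction.
        probs = [0.1, 0.2, 0.9, 0.95, 0.8, 0.3]
        print(bounding_box_from_slice_scores(probs, probs, probs))  # ((2, 4), (2, 4), (2, 4))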

  18. Breast density measurement: 3D cone beam computed tomography (CBCT) images versus 2D digital mammograms

    NASA Astrophysics Data System (ADS)

    Han, Tao; Lai, Chao-Jen; Chen, Lingyun; Liu, Xinming; Shen, Youtao; Zhong, Yuncheng; Ge, Shuaiping; Yi, Ying; Wang, Tianpeng; Yang, Wei T.; Shaw, Chris C.

    2009-02-01

    Breast density has been recognized as one of the major risk factors for breast cancer. However, breast density is currently estimated using mammograms, which are intrinsically 2D in nature and cannot accurately represent the real breast anatomy. In this study, a novel technique for measuring breast density based on the segmentation of 3D cone beam CT (CBCT) images was developed, and the results were compared to those obtained from 2D digital mammograms. Sixteen mastectomy breast specimens were imaged with a bench-top flat-panel-based CBCT system. The reconstructed 3D CT images were corrected for cupping artifacts and then filtered to reduce the noise level, followed by threshold-based segmentation to separate the dense tissue from the adipose tissue. For each breast specimen, the volumes of the dense tissue structures and of the entire breast were computed and used to calculate the volumetric breast density. BI-RADS categories were derived from the measured breast densities and compared with those estimated from conventional digital mammograms. The results show that in 10 of 16 cases the BI-RADS categories derived from the CBCT images were lower than those derived from the mammograms by one category. Thus, breasts considered dense in mammographic examinations may not be considered dense in the CBCT images. This result indicates that the relation between breast cancer risk and true (volumetric) breast density needs further investigation.
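
    A minimal Python sketch of the volumetric density computation described above (global threshold inside a breast mask, density as the dense-to-total voxel ratio) follows; threshold selection, cupping correction, and noise filtering are assumed to have been done beforehand, and the function name and toy data are illustrative.

        import numpy as np

        def volumetric_breast_density(ct_volume, breast_mask, dense_threshold):
            """Fraction of breast voxels classified as fibroglandular by a global threshold.

            ct_volume       : 3-D array of reconstructed (corrected, denoised) voxel values
            breast_mask     : boolean 3-D array marking voxels inside the breast
            dense_threshold : voxel value above which tissue is counted as dense
            """
            breast_voxels = ct_volume[breast_mask]
            dense_voxels = breast_voxels > dense_threshold
            return dense_voxels.sum() / breast_voxels.size

        # Toy volume: about 30% of in-breast voxels above the threshold -> density ~0.3.
        rng = np.random.default_rng(0)
        vol = rng.random((64, 64, 64))
        mask = np.ones_like(vol, dtype=bool)
        print(round(volumetric_breast_density(vol, mask, dense_threshold=0.7), 2))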

  19. Three modality image registration of brain SPECT/CT and MR images for quantitative analysis of dopamine transporter imaging

    NASA Astrophysics Data System (ADS)

    Yamaguchi, Yuzuho; Takeda, Yuta; Hara, Takeshi; Zhou, Xiangrong; Matsusako, Masaki; Tanaka, Yuki; Hosoya, Kazuhiko; Nihei, Tsutomu; Katafuchi, Tetsuro; Fujita, Hiroshi

    2016-03-01

    Important features of Parkinson's disease (PD) are the degeneration and loss of dopamine neurons in the corpus striatum. 123I-FP-CIT can visualize the activity of these dopamine neurons. The activity ratio of background to corpus striatum is used for the diagnosis of PD and Dementia with Lewy Bodies (DLB). The specific activity can be observed in the corpus striatum on SPECT images, but the location and shape of the corpus striatum are often lost on SPECT images alone because of the low uptake. In contrast, MR images can visualize the location of the corpus striatum. The purpose of this study was to realize a quantitative image analysis of the SPECT images by using an image registration technique with brain MR images that can determine the region of the corpus striatum. In this study, image fusion was used to combine SPECT and MR images via an intervening CT image acquired with the SPECT/CT scanner. Mutual information (MI) between the CT and MR images was used for the registration. Six SPECT/CT and four MR scans of phantom materials were acquired at different orientations. Of the resulting image registrations, 16 of 24 combinations were registered within 1.3 mm. When the approach was applied to 32 clinical SPECT/CT and MR cases, all cases were registered within 0.86 mm. In conclusion, our registration method has potential for superimposing MR images on SPECT images.
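
    A small Python sketch of the mutual information similarity used for the CT-MR registration step is given below; it computes MI from a joint histogram of two already-resampled images, which is the standard definition rather than the specific implementation used in the study.

        import numpy as np

        def mutual_information(image_a, image_b, bins=32):
            """Mutual information of two aligned images from their joint histogram,
            the kind of similarity measure maximized during multi-modality registration."""
            joint, _, _ = np.histogram2d(image_a.ravel(), image_b.ravel(), bins=bins)
            pxy = joint / joint.sum()
            px = pxy.sum(axis=1, keepdims=True)
            py = pxy.sum(axis=0, keepdims=True)
            nz = pxy > 0
            return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

        # MI is higher for a well-aligned pair than for a misaligned (shifted) pair.
        a = np.random.rand(128, 128)
        print(mutual_information(a, a), mutual_information(a, np.roll(a, 20, axis=0)))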

  20. Image compression-encryption scheme based on hyper-chaotic system and 2D compressive sensing

    NASA Astrophysics Data System (ADS)

    Zhou, Nanrun; Pan, Shumin; Cheng, Shan; Zhou, Zhihong

    2016-08-01

    Most image encryption algorithms based on low-dimensional chaotic systems bear security risks and suffer data expansion when nonlinear transformations are adopted directly. To overcome these weaknesses and reduce the possible transmission burden, an efficient image compression-encryption scheme based on a hyper-chaotic system and 2D compressive sensing is proposed. The original image is measured by measurement matrices in two directions to achieve compression and encryption simultaneously, and the resulting image is then re-encrypted by a cycle shift operation controlled by a hyper-chaotic system. The cycle shift operation can change the pixel values efficiently. As a nonlinear encryption system, the proposed cryptosystem decreases the volume of data to be transmitted and simplifies key distribution. Simulation results verify the validity and reliability of the proposed algorithm, with acceptable compression and security performance.
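
    A toy Python sketch of the cycle shift re-encryption step is shown below; the key stream here is a plain list standing in for the hyper-chaotic sequence, and the row-wise shifting rule is an assumption for illustration.

        import numpy as np

        def chaotic_cycle_shift(matrix, key_stream):
            """Cyclically shift each row of the measured data by an amount taken from
            a (hyper-)chaotic key stream; a toy stand-in for the re-encryption step."""
            out = matrix.copy()
            n_cols = matrix.shape[1]
            for i, k in enumerate(key_stream[: matrix.shape[0]]):
                out[i] = np.roll(out[i], int(k * n_cols) % n_cols)
            return out

        # The shift is trivially inverted with the same key stream, as decryption requires.
        Y = np.arange(12, dtype=float).reshape(3, 4)   # stand-in for measured data
        key = [0.31, 0.77, 0.52]                       # stand-in for the chaotic sequence
        C = chaotic_cycle_shift(Y, key)
        print(C)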

  1. Electron Microscopy: From 2D to 3D Images with Special Reference to Muscle

    PubMed Central

    2015-01-01

    This is a brief and necessarily very sketchy presentation of the evolution in electron microscopy (EM) imaging that was driven by the necessity of extracting 3-D views from the essentially 2-D images produced by the electron beam. The lens design of the standard transmission electron microscope has not been greatly altered since its inception. However, technical advances in specimen preparation, image coll