Science.gov

Sample records for 2d image registration

  1. Real-time SPECT and 2D ultrasound image registration.

    PubMed

    Bucki, Marek; Chassat, Fabrice; Galdames, Francisco; Asahi, Takeshi; Pizarro, Daniel; Lobo, Gabriel

    2007-01-01

    In this paper we present a technique for fully automatic, real-time 3D SPECT (Single Photon Emission Computed Tomography) and 2D ultrasound image registration. We use this technique in the context of kidney lesion diagnosis. Our registration algorithm allows a physician to perform an ultrasound exam after a SPECT image has been acquired and see in real time the registration of both modalities. An automatic segmentation algorithm has been implemented in order to display in 3D the positions of the acquired US images with respect to the organs. PMID:18044572

  2. 2D/3D Image Registration using Regression Learning

    PubMed Central

    Chou, Chen-Rui; Frederick, Brandon; Mageras, Gig; Chang, Sha; Pizer, Stephen

    2013-01-01

    In computer vision and image analysis, image registration between 2D projections and a 3D image that achieves high accuracy and near real-time computation is challenging. In this paper, we propose a novel method that can rapidly detect an object’s 3D rigid motion or deformation from a 2D projection image or a small set thereof. The method is called CLARET (Correction via Limited-Angle Residues in External Beam Therapy) and consists of two stages: shape space and regression learning, followed by registration. In the registration stage, linear operators are used to iteratively estimate the motion/deformation parameters based on the current intensity residue between the target projection(s) and the digitally reconstructed radiograph(s) (DRRs) of the estimated 3D image. The method determines the linear operators via a two-step learning process. First, it builds a low-order parametric model of the image region’s motion/deformation shape space from its prior 3D images. Second, using learning-time samples produced from the 3D images, it formulates the relationships between the model parameters and the co-varying 2D projection intensity residues by multi-scale linear regressions. The calculated multi-scale regression matrices yield the coarse-to-fine linear operators used in estimating the model parameters from the 2D projection intensity residues in the registration. The method’s application to Image-guided Radiation Therapy (IGRT) requires only a few seconds and yields good results in localizing a tumor under rigid motion in the head and neck and under respiratory deformation in the lung, using one treatment-time imaging 2D projection or a small set thereof. PMID:24058278
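
    The core regression-learning idea can be illustrated with a small numerical sketch (not the authors' code): learn a linear operator that maps 2D intensity residues to corrections of a low-dimensional parameter vector, then apply it iteratively during registration. The random "projector" J, sample counts, and all variable names below are illustrative assumptions; a real implementation would use DRRs of the prior 3D images rather than a linear toy model.

      import numpy as np

      rng = np.random.default_rng(0)
      n_params, n_pixels, n_samples = 3, 50, 200

      # hypothetical linear "projector": intensity residue caused by a parameter offset
      J = rng.normal(size=(n_pixels, n_params))

      # learning stage: sample parameter offsets, record the residues they produce,
      # and fit a regression matrix R that maps residues back to parameter corrections
      P_train = rng.uniform(-1.0, 1.0, size=(n_samples, n_params))
      residues = P_train @ J.T                                  # training-time 2D residues
      R, *_ = np.linalg.lstsq(residues, P_train, rcond=None)    # shape (n_pixels, n_params)

      # registration stage: iteratively correct the estimate using the residue between
      # the target projection and the projection of the current estimate
      p_true = np.array([0.8, -0.5, 0.3])
      target = J @ p_true
      p_est = np.zeros(n_params)
      for _ in range(5):
          residue = target - J @ p_est               # current intensity residue
          p_est = p_est + residue @ R                # linear correction from regression
      print("estimated:", np.round(p_est, 3), "true:", p_true)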

  3. 3D-2D registration of cerebral angiograms: a method and evaluation on clinical images.

    PubMed

    Mitrovic, Uroš; Špiclin, Žiga; Likar, Boštjan; Pernuš, Franjo

    2013-08-01

    Endovascular image-guided interventions (EIGI) involve navigation of a catheter through the vasculature followed by application of treatment at the site of anomaly using live 2D projection images for guidance. 3D images acquired prior to EIGI are used to quantify the vascular anomaly and plan the intervention. If fused with the information of live 2D images they can also facilitate navigation and treatment. For this purpose 3D-2D image registration is required. Although several 3D-2D registration methods for EIGI achieve registration accuracy below 1 mm, their clinical application is still limited by insufficient robustness or reliability. In this paper, we propose a 3D-2D registration method based on matching a 3D vasculature model to intensity gradients of live 2D images. To objectively validate 3D-2D registration methods, we acquired a clinical image database of 10 patients undergoing cerebral EIGI and established "gold standard" registrations by aligning fiducial markers in 3D and 2D images. The proposed method had mean registration accuracy below 0.65 mm, which was comparable to tested state-of-the-art methods, and execution time below 1 s. With the highest rate of successful registrations and the highest capture range the proposed method was the most robust and thus a good candidate for application in EIGI. PMID:23649179

  4. Automatic Masking for Robust 3D-2D Image Registration in Image-Guided Spine Surgery

    PubMed Central

    Ketcha, M. D.; De Silva, T.; Uneri, A.; Kleinszig, G.; Vogt, S.; Wolinsky, J.-P.; Siewerdsen, J. H.

    2016-01-01

    During spinal neurosurgery, patient-specific information, planning, and annotation such as vertebral labels can be mapped from preoperative 3D CT to intraoperative 2D radiographs via image-based 3D-2D registration. Such registration has been shown to provide a potentially valuable means of decision support in target localization as well as quality assurance of the surgical product. However, robust registration can be challenged by mismatch in image content between the preoperative CT and intraoperative radiographs, arising, for example, from anatomical deformation or the presence of surgical tools within the radiograph. In this work, we develop and evaluate methods for automatically mitigating the effect of content mismatch by leveraging the surgical planning data to assign greater weight to anatomical regions known to be reliable for registration and vital to the surgical task while removing problematic regions that are highly deformable or often occluded by surgical tools. We investigated two approaches to assigning variable weight (i.e., "masking") to image content and/or the similarity metric: (1) masking the preoperative 3D CT ("volumetric masking"); and (2) masking within the 2D similarity metric calculation ("projection masking"). The accuracy of registration was evaluated in terms of projection distance error (PDE) in 61 cases selected from an IRB-approved clinical study. The best performing of the masking techniques was found to reduce the rate of gross failure (PDE > 20 mm) from 11.48% to 5.57% in this challenging retrospective data set. These approaches provided robustness to content mismatch and eliminated distinct failure modes of registration. Such improvement was gained without additional workflow and has motivated incorporation of the masking methods within a system under development for prospective clinical studies.
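
    As a toy illustration of why masking the similarity computation helps under content mismatch, the sketch below computes normalized cross-correlation only over pixels flagged as reliable; a simulated "surgical tool" corrupts one corner of the radiograph and is excluded by the mask. This is an assumption-laden stand-in for the idea of projection masking, not the study's actual gradient-based similarity metric or data.

      import numpy as np

      def masked_ncc(drr, radiograph, mask):
          """Normalized cross-correlation computed only over unmasked pixels."""
          a = drr[mask].astype(float)
          b = radiograph[mask].astype(float)
          a -= a.mean()
          b -= b.mean()
          denom = np.linalg.norm(a) * np.linalg.norm(b)
          return float(a @ b / denom) if denom > 0 else 0.0

      # toy example: a bright "surgical tool" corrupts one corner of the radiograph
      rng = np.random.default_rng(1)
      drr = rng.normal(size=(64, 64))
      radiograph = drr + 0.1 * rng.normal(size=(64, 64))
      radiograph[:20, :20] += 5.0                      # content mismatch (tool)
      mask = np.ones((64, 64), dtype=bool)
      mask[:20, :20] = False                           # "projection mask" excludes it
      print("unmasked NCC:", masked_ncc(drr, radiograph, np.ones_like(mask)))
      print("masked NCC  :", masked_ncc(drr, radiograph, mask))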

  5. Image-based RSA: Roentgen stereophotogrammetric analysis based on 2D-3D image registration.

    PubMed

    de Bruin, P W; Kaptein, B L; Stoel, B C; Reiber, J H C; Rozing, P M; Valstar, E R

    2008-01-01

    Image-based Roentgen stereophotogrammetric analysis (IBRSA) integrates 2D-3D image registration and conventional RSA. Instead of radiopaque RSA bone markers, IBRSA uses 3D CT data, from which digitally reconstructed radiographs (DRRs) are generated. Using 2D-3D image registration, the 3D pose of the CT is iteratively adjusted such that the generated DRRs resemble the 2D RSA images as closely as possible, according to an image matching metric. Effectively, by registering all 2D follow-up moments to the same 3D CT, the CT volume functions as common ground. In two experiments, using RSA and using a micromanipulator as gold standard, IBRSA has been validated on cadaveric and sawbone scapula radiographs, and good matching results have been achieved. The accuracy was |μ| < 0.083 mm for translations and |μ| < 0.023° for rotations. The precision σ in the x-, y-, and z-directions was 0.090, 0.077, and 0.220 mm for translations and 0.155°, 0.243°, and 0.074° for rotations. Our results show that the accuracy and precision of in vitro IBRSA, performed under ideal laboratory conditions, are lower than in vitro standard RSA but higher than in vivo standard RSA. Because IBRSA does not require radiopaque markers, it adds functionality to the RSA method by opening new directions and possibilities for research, such as dynamic analyses using fluoroscopy on subjects without markers and computer navigation applications. PMID:17706656

  6. A faster method for 3D/2D medical image registration--a simulation study.

    PubMed

    Birkfellner, Wolfgang; Wirth, Joachim; Burgstaller, Wolfgang; Baumann, Bernard; Staedele, Harald; Hammer, Beat; Gellrich, Niels Claudius; Jacob, Augustinus Ludwig; Regazzoni, Pietro; Messmer, Peter

    2003-08-21

    3D/2D patient-to-computed-tomography (CT) registration is a method to determine a transformation that maps two coordinate systems by comparing a projection image rendered from CT to a real projection image. Iterative variation of the CT's position between rendering steps finally leads to exact registration. Applications include exact patient positioning in radiation therapy, calibration of surgical robots, and pose estimation in computer-aided surgery. One of the problems associated with 3D/2D registration is the fact that finding a registration includes solving a minimization problem in six degrees of freedom (dof) of motion. This results in considerable time requirements since for each iteration step at least one volume rendering has to be computed. We show that by choosing an appropriate world coordinate system and by applying a 2D/2D registration method in each iteration step, the number of iterations can be grossly reduced from n^6 to n^5. Here, n is the number of discrete variations around a given coordinate. Depending on the configuration of the optimization algorithm, this reduces the total number of iterations necessary to at least 1/3 of its original value. The method was implemented and extensively tested on simulated x-ray images of a tibia, a pelvis and a skull base. When using one projective image and a discrete full parameter space search for solving the optimization problem, average accuracy was found to be 1.0° ± 0.6° and 4.1 ± 1.9 mm for a registration in six parameters, and 1.0° ± 0.7° and 4.2 ± 1.6 mm when using the 5 + 1 dof method described in this paper. Time requirements were reduced by a factor of 3.1. We conclude that this hardware-independent optimization of 3D/2D registration is a step towards increasing the acceptance of this promising method for a wide number of clinical applications. PMID:12974581
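
    The iteration-count argument is simple arithmetic; the snippet below just evaluates it for an assumed number of discrete steps per degree of freedom to show the scale of the saving (the value of n is illustrative).

      # an assumed number of discrete steps per degree of freedom
      n = 8
      print("full 6-dof search :", n ** 6)    # 262,144 renderings
      print("5 + 1 dof search  :", n ** 5)    #  32,768 renderings
      print("reduction factor  :", n ** 6 // n ** 5)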

  7. Progressive attenuation fields: Fast 2D-3D image registration without precomputation

    SciTech Connect

    Rohlfing, Torsten; Russakoff, Daniel B.; Denzler, Joachim; Mori, Kensaku; Maurer, Calvin R. Jr.

    2005-09-15

    Computation of digitally reconstructed radiograph (DRR) images is the rate-limiting step in most current intensity-based algorithms for the registration of three-dimensional (3D) images to two-dimensional (2D) projection images. This paper introduces and evaluates the progressive attenuation field (PAF), which is a new method to speed up DRR computation. A PAF is closely related to an attenuation field (AF). A major difference is that a PAF is constructed on the fly as the registration proceeds; it does not require any precomputation time, nor does it make any prior assumptions of the patient pose or limit the permissible range of patient motion. A PAF effectively acts as a cache memory for projection values once they are computed, rather than as a lookup table for precomputed projections like standard AFs. We use a cylindrical attenuation field parametrization, which is better suited for many medical applications of 2D-3D registration than the usual two-plane parametrization. The computed attenuation values are stored in a hash table for time-efficient storage and access. Using clinical gold-standard spine image data sets from five patients, we demonstrate consistent speedups of intensity-based 2D-3D image registration using PAF DRRs by a factor of 10 over conventional ray casting DRRs with no decrease of registration accuracy or robustness.
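
    A minimal sketch of the caching idea follows. It is not the published implementation and uses a simplified ray parametrization (quantized origin and direction rather than the cylindrical parametrization described above), but it shows how attenuation values are computed once on demand and then served from a hash table with no precomputation.

      import numpy as np

      class ProgressiveAttenuationField:
          """Toy cache of ray attenuation values, filled on demand (no precomputation)."""
          def __init__(self, volume, step=0.5):
              self.volume, self.step = volume, step
              self.cache = {}                      # hash table: quantized ray -> value

          def _integrate(self, origin, direction, n_steps=128):
              # crude nearest-neighbour ray marching through the volume
              pts = origin + np.outer(np.arange(n_steps) * self.step, direction)
              idx = np.round(pts).astype(int)
              ok = np.all((idx >= 0) & (idx < np.array(self.volume.shape)), axis=1)
              return float(self.volume[tuple(idx[ok].T)].sum()) if ok.any() else 0.0

          def attenuation(self, origin, direction, quant=0.01):
              key = tuple(np.round(np.concatenate([origin, direction]) / quant).astype(int))
              if key not in self.cache:                    # compute once, then reuse
                  self.cache[key] = self._integrate(np.asarray(origin, float),
                                                    np.asarray(direction, float))
              return self.cache[key]

      vol = np.random.default_rng(2).random((32, 32, 32))
      paf = ProgressiveAttenuationField(vol)
      ray = ([0.0, 16.0, 16.0], [1.0, 0.0, 0.0])
      paf.attenuation(*ray)                          # computed and stored
      print("cached rays:", len(paf.cache))
      print("reused value:", paf.attenuation(*ray))  # retrieved from the hash table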

  8. A new gold-standard dataset for 2D/3D image registration evaluation

    NASA Astrophysics Data System (ADS)

    Pawiro, Supriyanto; Markelj, Primoz; Gendrin, Christelle; Figl, Michael; Stock, Markus; Bloch, Christoph; Weber, Christoph; Unger, Ewald; Nöbauer, Iris; Kainberger, Franz; Bergmeister, Helga; Georg, Dietmar; Bergmann, Helmar; Birkfellner, Wolfgang

    2010-02-01

    In this paper, we propose a new gold standard data set for the validation of 2D/3D image registration algorithms for image guided radiotherapy. A gold standard data set was calculated using a pig head with attached fiducial markers. We used several imaging modalities common in diagnostic imaging or radiotherapy, which include 64-slice computed tomography (CT), magnetic resonance imaging (MRI) using T1, T2 and proton density (PD) sequences, and cone beam CT (CBCT) imaging data. Radiographic data were acquired using kilovoltage (kV) and megavoltage (MV) imaging techniques. The image information reflects both anatomy and reliable fiducial marker information, and improves over existing data sets in the level of anatomical detail and image data quality. The markers in the three-dimensional (3D) and two-dimensional (2D) images were segmented using Analyze 9.0 (AnalyzeDirect, Inc) and in-house software. The projection distance errors (PDE) and the expected target registration errors (TRE) over all the image data sets were found to be less than 1.7 mm and 1.3 mm, respectively. The gold standard data set, obtained with state-of-the-art imaging technology, has the potential to improve the validation of 2D/3D registration algorithms for image guided therapy.

  9. Nonrigid 2D registration of fluoroscopic coronary artery image sequence with layered motion

    NASA Astrophysics Data System (ADS)

    Park, Taewoo; Jung, Hoyup; Yun, Il Dong

    2016-03-01

    We present a new method for nonrigid registration of coronary artery models with layered motion information. A 2D nonrigid registration method is proposed that brings layered motion information into correspondence with fluoroscopic angiograms. The registered model is overlaid on top of interventional angiograms to provide surgical assistance during image-guided chronic total occlusion procedures. The proposed methodology is divided into two parts: layered structure alignment and local nonrigid registration. In the first part, an inpainting method is used to estimate a layered rigid transformation that aligns the layered motion information. In the second part, a nonrigid registration method is implemented and used to compensate for any local shape discrepancy. Experimental evaluation conducted on a set of 7 fluoroscopic angiograms showed a reduced target registration error, demonstrating the effectiveness of the proposed method over a single-layered approach.

  10. Simultaneous 3D–2D image registration and C-arm calibration: Application to endovascular image-guided interventions

    SciTech Connect

    Mitrović, Uroš; Pernuš, Franjo; Likar, Boštjan; Špiclin, Žiga

    2015-11-15

    Purpose: Three-dimensional to two-dimensional (3D–2D) image registration is a key to fusion and simultaneous visualization of valuable information contained in 3D pre-interventional and 2D intra-interventional images with the final goal of image guidance of a procedure. In this paper, the authors focus on 3D–2D image registration within the context of intracranial endovascular image-guided interventions (EIGIs), where the 3D and 2D images are generally acquired with the same C-arm system. The accuracy and robustness of any 3D–2D registration method, to be used in a clinical setting, is influenced by (1) the method itself, (2) uncertainty of initial pose of the 3D image from which registration starts, (3) uncertainty of C-arm’s geometry and pose, and (4) the number of 2D intra-interventional images used for registration, which is generally one and at most two. The study of these influences requires rigorous and objective validation of any 3D–2D registration method against a highly accurate reference or “gold standard” registration, performed on clinical image datasets acquired in the context of the intervention. Methods: The registration process is split into two sequential, i.e., initial and final, registration stages. The initial stage is either machine-based or template matching. The latter aims to reduce possibly large in-plane translation errors by matching a projection of the 3D vessel model and 2D image. In the final registration stage, four state-of-the-art intrinsic image-based 3D–2D registration methods, which involve simultaneous refinement of rigid-body and C-arm parameters, are evaluated. For objective validation, the authors acquired an image database of 15 patients undergoing cerebral EIGI, for which accurate gold standard registrations were established by fiducial marker coregistration. Results: Based on target registration error, the obtained success rates of 3D to a single 2D image registration after initial machine-based and

  11. 3D/2D image registration using weighted histogram of gradient directions

    NASA Astrophysics Data System (ADS)

    Ghafurian, Soheil; Hacihaliloglu, Ilker; Metaxas, Dimitris N.; Tan, Virak; Li, Kang

    2015-03-01

    Three dimensional (3D) to two dimensional (2D) image registration is crucial in many medical applications such as image-guided evaluation of musculoskeletal disorders. One of the key problems is to estimate the 3D CT-reconstructed bone model positions (translation and rotation) which maximize the similarity between the digitally reconstructed radiographs (DRRs) and the 2D fluoroscopic images using a registration method. This problem is computationally intensive due to a large search space and the complicated DRR generation process. Also, finding a similarity measure which converges to the global optimum instead of local optima adds to the challenge. To circumvent these issues, most existing registration methods need a manual initialization, which requires user interaction and is prone to human error. In this paper, we introduce a novel feature-based registration method using the weighted histogram of gradient directions of images. This method simplifies the computation by searching the parameter space (rotation and translation) sequentially rather than simultaneously. In our numeric simulation experiments, the proposed registration algorithm was able to achieve sub-millimeter and sub-degree accuracies. Moreover, our method is robust to the initial guess. It can tolerate up to ±90° rotation offset from the global optimal solution, which minimizes the need for human interaction to initialize the algorithm.
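
    A small sketch of the central descriptor is given below, assuming nothing beyond NumPy: the gradient-direction histogram of an image, with each pixel weighted by its gradient magnitude. The bin count, the toy image, and the circular-shift search are illustrative choices, not the authors' parameters.

      import numpy as np

      def weighted_gradient_direction_histogram(image, n_bins=36):
          """Histogram of gradient directions, each pixel weighted by gradient magnitude."""
          gy, gx = np.gradient(image.astype(float))
          magnitude = np.hypot(gx, gy)
          direction = np.arctan2(gy, gx)                      # in [-pi, pi]
          hist, edges = np.histogram(direction, bins=n_bins,
                                     range=(-np.pi, np.pi), weights=magnitude)
          return hist / (hist.sum() + 1e-12), edges

      # toy usage: an in-plane rotation of the image shows up as a circular shift of
      # its direction histogram, which is what makes a sequential search feasible
      rng = np.random.default_rng(3)
      _, xx = np.mgrid[0:128, 0:128]
      img = np.sin(xx / 6.0) + 0.05 * rng.random((128, 128))  # mostly horizontal gradients
      h_ref, _ = weighted_gradient_direction_histogram(img)
      h_rot, _ = weighted_gradient_direction_histogram(np.rot90(img))
      best = max(range(len(h_ref)), key=lambda k: float(np.dot(h_ref, np.roll(h_rot, k))))
      print("best circular shift:", best, "bins of", 360 // len(h_ref), "degrees")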

  12. 2D Ultrasound and 3D MR Image Registration of the Prostate for Brachytherapy Surgical Navigation

    PubMed Central

    Zhang, Shihui; Jiang, Shan; Yang, Zhiyong; Liu, Ranlu

    2015-01-01

    Two-dimensional (2D) ultrasound (US) images are widely used in minimally invasive prostate procedures for their noninvasive nature and convenience. However, the poor quality of US images makes them difficult to use as a guiding utility. To overcome this limitation, we propose a multimodality image guided navigation module that registers 2D US images with magnetic resonance imaging (MRI) based on high quality preoperative models. A 2-step spatial registration method is used to complete the procedure, which combines manual alignment and a rapid mutual information (MI) optimization algorithm. In addition, a 3-dimensional (3D) reconstruction model of the prostate with surrounding organs is employed in combination with the registered images to conduct the navigation. Registration accuracy is measured by calculating the target registration error (TRE). The results show that the error between the US and preoperative MR images of a polyvinyl alcohol hydrogel model phantom is 1.37 ± 0.14 mm, with a similar performance being observed in patient experiments. PMID:26448009
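
    Mutual information is the similarity term named above; a compact histogram-based sketch is shown below. The joint-histogram bin count and the toy US/MR images are assumptions for illustration, not the module's actual configuration.

      import numpy as np

      def mutual_information(a, b, bins=32):
          """Histogram-based mutual information between two equally sized images."""
          joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
          pxy = joint / joint.sum()
          px = pxy.sum(axis=1, keepdims=True)
          py = pxy.sum(axis=0, keepdims=True)
          nz = pxy > 0
          return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

      # toy usage: MI is higher for an aligned pair than for a scrambled pair,
      # even when the intensity relationship between the modalities is nonlinear
      rng = np.random.default_rng(4)
      us = rng.random((64, 64))
      mr = np.sqrt(us) + 0.05 * rng.random((64, 64))
      print("aligned   MI:", round(mutual_information(us, mr), 3))
      print("scrambled MI:", round(mutual_information(us, rng.permutation(mr.ravel()).reshape(64, 64)), 3))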

  13. Voxel-based 2-D/3-D registration of fluoroscopy images and CT scans for image-guided surgery.

    PubMed

    Weese, J; Penney, G P; Desmedt, P; Buzug, T M; Hill, D L; Hawkes, D J

    1997-12-01

    Registration of intraoperative fluoroscopy images with preoperative three-dimensional (3-D) CT images can be used for several purposes in image-guided surgery. On the one hand, it can be used to display the position of surgical instruments, which are being tracked by a localizer, in the preoperative CT scan. On the other hand, the registration result can be used to project preoperative planning information or important anatomical structures visible in the CT image onto the fluoroscopy image. For this registration task, a novel voxel-based method in combination with a new similarity measure (pattern intensity) has been developed. The basic concept of the method is explained using the example of two-dimensional (2-D)/3-D registration of a vertebra in an X-ray fluoroscopy image with a 3-D CT image. The registration method is described, and the results for a spine phantom are presented and discussed. Registration has been carried out repeatedly with different starting estimates to study the capture range. Information about registration accuracy has been obtained by comparing the registration results with a highly accurate "ground-truth" registration, which has been derived from fiducial markers attached to the phantom prior to imaging. In addition, registration results for different vertebrae have been compared. The results show that the rotation parameters and the shifts parallel to the projection plane can be accurately determined from a single projection. Because of the projection geometry, the accuracy of the height above the projection plane is significantly lower. PMID:11020832
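
    Pattern intensity is commonly written as a sum, over pixels and over neighbours within a radius r, of sigma^2 / (sigma^2 + (difference of difference-image values)^2), so that a structure-free difference image between the fluoroscopy image and the DRR scores high. The sketch below follows that common formulation with an assumed radius and sigma; it is not the original implementation.

      import numpy as np

      def pattern_intensity(diff, radius=3, sigma=10.0):
          """Pattern-intensity measure on a difference image (higher = fewer structures)."""
          offsets = [(dy, dx) for dy in range(-radius, radius + 1)
                     for dx in range(-radius, radius + 1)
                     if 0 < dy * dy + dx * dx <= radius * radius]
          total = 0.0
          for dy, dx in offsets:
              shifted = np.roll(np.roll(diff, dy, axis=0), dx, axis=1)
              d = diff - shifted
              total += np.sum(sigma ** 2 / (sigma ** 2 + d ** 2))
          return total

      # toy usage: the measure drops when residual structure (an edge) remains in the
      # difference image, i.e. when the DRR and the fluoroscopy image are misaligned
      rng = np.random.default_rng(5)
      aligned_diff = rng.normal(scale=2.0, size=(64, 64))          # noise only
      misaligned_diff = aligned_diff.copy()
      misaligned_diff[:, 30:34] += 50.0                            # leftover edge
      print("aligned   :", round(pattern_intensity(aligned_diff), 1))
      print("misaligned:", round(pattern_intensity(misaligned_diff), 1))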

  14. 2D imaging and 3D sensing data acquisition and mutual registration for painting conservation

    NASA Astrophysics Data System (ADS)

    Fontana, Raffaella; Gambino, Maria Chiara; Greco, Marinella; Marras, Luciano; Pampaloni, Enrico M.; Pelagotti, Anna; Pezzati, Luca; Poggi, Pasquale

    2005-01-01

    We describe the application of 2D and 3D data acquisition and mutual registration to the conservation of paintings. RGB color image acquisition, IR and UV fluorescence imaging, together with the more recent hyperspectral imaging (32 bands) are among the most useful techniques in this field. They generally are meant to provide information on the painting materials, on the employed techniques and on the object state of conservation. However, only when the various images are perfectly registered on each other and on the 3D model, no ambiguity is possible and safe conclusions may be drawn. We present the integration of 2D and 3D measurements carried out on two different paintings: "Madonna of the Yarnwinder" by Leonardo da Vinci, and "Portrait of Lionello d'Este", by Pisanello, both painted in the XV century.

  15. 2D imaging and 3D sensing data acquisition and mutual registration for painting conservation

    NASA Astrophysics Data System (ADS)

    Fontana, Raffaella; Gambino, Maria Chiara; Greco, Marinella; Marras, Luciano; Pampaloni, Enrico M.; Pelagotti, Anna; Pezzati, Luca; Poggi, Pasquale

    2004-12-01

    We describe the application of 2D and 3D data acquisition and mutual registration to the conservation of paintings. RGB color image acquisition, IR and UV fluorescence imaging, together with the more recent hyperspectral imaging (32 bands) are among the most useful techniques in this field. They generally are meant to provide information on the painting materials, on the employed techniques and on the object state of conservation. However, only when the various images are perfectly registered on each other and on the 3D model, no ambiguity is possible and safe conclusions may be drawn. We present the integration of 2D and 3D measurements carried out on two different paintings: "Madonna of the Yarnwinder" by Leonardo da Vinci, and "Portrait of Lionello d'Este", by Pisanello, both painted in the XV century.

  16. GPU accelerated generation of digitally reconstructed radiographs for 2-D/3-D image registration.

    PubMed

    Dorgham, Osama M; Laycock, Stephen D; Fisher, Mark H

    2012-09-01

    Recent advances in programming languages for graphics processing units (GPUs) provide developers with a convenient way of implementing applications which can be executed on the CPU and GPU interchangeably. GPUs are becoming relatively cheap, powerful, and widely available hardware components, which can be used to perform intensive calculations. The last decade of hardware performance developments shows that GPU-based computation is progressing significantly faster than CPU-based computation, particularly if one considers the execution of highly parallelisable algorithms. Future predictions illustrate that this trend is likely to continue. In this paper, we introduce a way of accelerating 2-D/3-D image registration by developing a hybrid system which executes on the CPU and utilizes the GPU for parallelizing the generation of digitally reconstructed radiographs (DRRs). Based on the advancements of the GPU over the CPU, it is timely to exploit the benefits of many-core GPU technology by developing algorithms for DRR generation. Although some previous work has investigated the rendering of DRRs using the GPU, this paper investigates approximations which reduce the computational overhead while still maintaining a quality consistent with that needed for 2-D/3-D registration with sufficient accuracy to be clinically acceptable in certain applications of radiation oncology. Furthermore, by comparing implementations of 2-D/3-D registration on the CPU and GPU, we investigate current performance and propose an optimal framework for PC implementations addressing the rigid registration problem. Using this framework, we are able to render DRR images from a 256×256×133 CT volume in ~24 ms using an NVidia GeForce 8800 GTX and in ~2 ms using NVidia GeForce GTX 580. In addition to applications requiring fast automatic patient setup, these levels of performance suggest image-guided radiation therapy at video frame rates is technically feasible using relatively low cost PC
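
    For orientation, the sketch below generates a DRR the slow, simplified way: parallel-beam line integrals through a rotated toy volume on the CPU, using SciPy. The paper's contribution is precisely to replace this kind of computation with approximated, GPU-parallel perspective ray casting; the volume, angle, and function name here are illustrative assumptions.

      import numpy as np
      from scipy import ndimage

      def drr_parallel(volume, angle_deg):
          """Parallel-beam DRR: rotate the CT volume about its z-axis and integrate
          attenuation along y (a simplified stand-in for perspective ray casting)."""
          rotated = ndimage.rotate(volume, angle_deg, axes=(1, 2), reshape=False, order=1)
          return rotated.sum(axis=1)        # line integrals -> 2D projection image

      # toy CT volume: a bright ellipsoid inside air
      z, y, x = np.mgrid[-32:32, -32:32, -32:32]
      ct = ((z / 20.0) ** 2 + (y / 12.0) ** 2 + (x / 18.0) ** 2 < 1.0).astype(np.float32)
      drr0 = drr_parallel(ct, 0.0)
      drr30 = drr_parallel(ct, 30.0)
      print("DRR shape:", drr0.shape, "max path lengths:", drr0.max(), drr30.max())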

  17. 3D-2D Deformable Image Registration Using Feature-Based Nonuniform Meshes

    PubMed Central

    Guo, Xiaohu; Cai, Yiqi; Yang, Yin; Wang, Jing; Jia, Xun

    2016-01-01

    By using prior information of planning CT images and feature-based nonuniform meshes, this paper demonstrates that volumetric images can be efficiently registered with a very small portion of 2D projection images of a Cone-Beam Computed Tomography (CBCT) scan. After a density field is computed based on the extracted feature edges from planning CT images, nonuniform tetrahedral meshes will be automatically generated to better characterize the image features according to the density field; that is, finer meshes are generated for features. The displacement vector fields (DVFs) are specified at the mesh vertices to drive the deformation of original CT images. Digitally reconstructed radiographs (DRRs) of the deformed anatomy are generated and compared with corresponding 2D projections. DVFs are optimized to minimize the objective function, which includes the differences between DRRs and projections as well as a regularity term. To further accelerate the above 3D-2D registration, a procedure to obtain good initial deformations by deforming the volume surface to match the 2D body boundary on the projections has been developed. This complete method is evaluated quantitatively by using several digital phantoms and data from head and neck cancer patients. The feature-based nonuniform meshing method leads to better results than either uniform orthogonal grid or uniform tetrahedral meshes. PMID:27019849

  18. Rigid 2D/3D registration of intraoperative digital x-ray images and preoperative CT and MR images

    NASA Astrophysics Data System (ADS)

    Tomazevic, Dejan; Likar, Bostjan; Pernus, Franjo

    2002-05-01

    This paper describes a novel approach to register 3D computed tomography (CT) or magnetic resonance (MR) images to a set of 2D X-ray images. Such a registration may be a valuable tool for intraoperative determination of the precise position and orientation of some anatomy of interest, defined in preoperative images. The registration is based solely on the information present in 2D and 3D images. It does not require fiducial markers, X-ray image segmentation, or construction of digitally reconstructed radiographs. The originality of the approach is in using normals to bone surfaces, preoperatively defined in 3D MR or CT data, and gradients of intraoperative X-ray images, which are back-projected towards the X-ray source. The registration is then concerned with finding that rigid transformation of a CT or MR volume, which provides the best match between surface normals and back projected gradients, considering their amplitudes and orientations. The method is tested on a lumbar spine phantom. Gold standard registration is obtained by fiducial markers attached to the phantom. Volumes of interest, containing single vertebrae, are registered to different pairs of X-ray images from different starting positions, chosen randomly and uniformly around the gold standard position. Target registration errors and rotation errors are on the order of 0.3 mm and 0.35 degrees for the CT to X-ray registration and 1.3 mm and 1.5 degrees for MR to X-ray registration. The registration is shown to be fast and accurate.

  19. Location constraint based 2D-3D registration of fluoroscopic images and CT volumes for image-guided EP procedures

    NASA Astrophysics Data System (ADS)

    Liao, Rui; Xu, Ning; Sun, Yiyong

    2008-03-01

    Presentation of detailed anatomical structures via 3D Computed Tomographic (CT) volumes helps visualization and navigation in electrophysiology (EP) procedures. Registration of the CT volume with the online fluoroscopy, however, is a challenging task for EP applications due to the lack of discernable features in fluoroscopic images. In this paper, we propose to use the coronary sinus (CS) catheter in bi-plane fluoroscopic images and the coronary sinus in the CT volume as a location constraint to accomplish 2D-3D registration. Two automatic registration algorithms are proposed in this study, and their performances are investigated on both simulated and real data. It is shown that compared to registration using mono-plane fluoroscopy, registration using bi-plane images results in substantially higher accuracy in 3D and enhanced robustness. In addition, compared to registering the projection of the CS to the 2D CS catheter, it is more desirable to reconstruct a 3D CS catheter from the bi-plane fluoroscopy and then perform a 3D-3D registration between the CS and the reconstructed CS catheter. Quantitative validation based on simulation and visual inspection on real data demonstrates the feasibility of the proposed workflow in EP procedures.

  20. Registration of 2D x-ray images to 3D MRI by generating pseudo-CT data

    NASA Astrophysics Data System (ADS)

    van der Bom, M. J.; Pluim, J. P. W.; Gounis, M. J.; van de Kraats, E. B.; Sprinkhuizen, S. M.; Timmer, J.; Homan, R.; Bartels, L. W.

    2011-02-01

    Spatial and soft tissue information provided by magnetic resonance imaging can be very valuable during image-guided procedures, where usually only real-time two-dimensional (2D) x-ray images are available. Registration of 2D x-ray images to three-dimensional (3D) magnetic resonance imaging (MRI) data, acquired prior to the procedure, can provide optimal information to guide the procedure. However, registering x-ray images to MRI data is not a trivial task because of their fundamental difference in tissue contrast. This paper presents a technique that generates pseudo-computed tomography (CT) data from multi-spectral MRI acquisitions which is sufficiently similar to real CT data to enable registration of x-ray to MRI with comparable accuracy as registration of x-ray to CT. The method is based on a k-nearest-neighbors (kNN)-regression strategy which labels voxels of MRI data with CT Hounsfield Units. The regression method uses multi-spectral MRI intensities and intensity gradients as features to discriminate between various tissue types. The efficacy of using pseudo-CT data for registration of x-ray to MRI was tested on ex vivo animal data. 2D-3D registration experiments using CT and pseudo-CT data of multiple subjects were performed with a commonly used 2D-3D registration algorithm. On average, the median target registration error for registration of two x-ray images to MRI data was approximately 1 mm larger than for x-ray to CT registration. The authors have shown that pseudo-CT data generated from multi-spectral MRI facilitate registration of MRI to x-ray images. From the experiments it could be concluded that the accuracy achieved was comparable to that of registering x-ray images to CT data.
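
    A minimal sketch of the kNN-regression step is shown below using scikit-learn: per-voxel multi-spectral MR intensities plus a gradient feature are mapped to Hounsfield Units. The feature set, training data, and MR-to-HU relationship are synthetic stand-ins, not the study's data or exact regressor settings.

      import numpy as np
      from sklearn.neighbors import KNeighborsRegressor

      rng = np.random.default_rng(6)

      # hypothetical training data: per-voxel multi-spectral MR intensities plus an
      # intensity-gradient feature, labelled with the CT Hounsfield Unit of that voxel
      n_vox = 5000
      mr_features = rng.random((n_vox, 4))                 # T1, T2, PD, |gradient|
      hu_labels = (-1000 + 2000 * mr_features[:, 0]        # toy MR-to-HU relationship
                   + 100 * rng.normal(size=n_vox))

      knn = KNeighborsRegressor(n_neighbors=5, weights="distance")
      knn.fit(mr_features, hu_labels)

      # "pseudo-CT": predict HU values for the voxels of a new MR acquisition
      new_mr = rng.random((10, 4))
      pseudo_ct_hu = knn.predict(new_mr)
      print(np.round(pseudo_ct_hu, 1))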

  1. Clinical Assessment of 2D/3D Registration Accuracy in 4 Major Anatomic Sites Using On-Board 2D Kilovoltage Images for 6D Patient Setup

    PubMed Central

    Li, Guang; Yang, T. Jonathan; Furtado, Hugo; Birkfellner, Wolfgang; Ballangrud, Åse; Powell, Simon N.; Mechalakos, James

    2015-01-01

    To provide a comprehensive assessment of patient setup accuracy in 6 degrees of freedom (DOFs) using 2-dimensional/3-dimensional (2D/3D) image registration with on-board 2-dimensional kilovoltage (OB-2DkV) radiographic images, we evaluated cranial, head and neck (HN), and thoracic and abdominal sites under clinical conditions. A fast 2D/3D image registration method using a graphics processing unit (GPU) was modified for registration between OB-2DkV and 3D simulation computed tomography (simCT) images, with 3D/3D registration as the gold standard for 6DOF alignment. In 2D/3D registration, body roll rotation was obtained solely by matching orthogonal OB-2DkV images with a series of digitally reconstructed radiographs (DRRs) from simCT with a small rotational increment along the gantry rotation axis. The window/level adjustments for optimal visualization of the bone in OB-2DkV and DRRs were performed prior to registration. Ideal patient alignment at the isocenter was calculated and used as an initial registration position. In 3D/3D registration, cone-beam CT (CBCT) was aligned to simCT on bony structures using a bone density filter in 6DOF. Included in this retrospective study were 37 patients treated in 55 fractions with frameless stereotactic radiosurgery or stereotactic body radiotherapy for cranial and paraspinal cancer. A cranial phantom was used to serve as a control. In all cases, CBCT images were acquired for patient setup with subsequent OB-2DkV verification. It was found that the accuracy of the 2D/3D registration was 0.0 ± 0.5 mm and 0.1° ± 0.4° in the phantom. In patients, accuracy was site dependent owing to deformation of the anatomy: 0.2 ± 1.6 mm and −0.4° ± 1.2° on average for each dimension for the cranial site, 0.7 ± 1.6 mm and 0.3° ± 1.3° for HN, 0.7 ± 2.0 mm and −0.7° ± 1.1° for the thorax, and 1.1 ± 2.6 mm and −0.5° ± 1.9° for the abdomen. Anatomical deformation and presence of soft tissue in 2D/3D registration affect the consistency with

  2. Clinical Assessment of 2D/3D Registration Accuracy in 4 Major Anatomic Sites Using On-Board 2D Kilovoltage Images for 6D Patient Setup.

    PubMed

    Li, Guang; Yang, T Jonathan; Furtado, Hugo; Birkfellner, Wolfgang; Ballangrud, Åse; Powell, Simon N; Mechalakos, James

    2015-06-01

    To provide a comprehensive assessment of patient setup accuracy in 6 degrees of freedom (DOFs) using 2-dimensional/3-dimensional (2D/3D) image registration with on-board 2-dimensional kilovoltage (OB-2DkV) radiographic images, we evaluated cranial, head and neck (HN), and thoracic and abdominal sites under clinical conditions. A fast 2D/3D image registration method using a graphics processing unit (GPU) was modified for registration between OB-2DkV and 3D simulation computed tomography (simCT) images, with 3D/3D registration as the gold standard for 6DOF alignment. In 2D/3D registration, body roll rotation was obtained solely by matching orthogonal OB-2DkV images with a series of digitally reconstructed radiographs (DRRs) from simCT with a small rotational increment along the gantry rotation axis. The window/level adjustments for optimal visualization of the bone in OB-2DkV and DRRs were performed prior to registration. Ideal patient alignment at the isocenter was calculated and used as an initial registration position. In 3D/3D registration, cone-beam CT (CBCT) was aligned to simCT on bony structures using a bone density filter in 6DOF. Included in this retrospective study were 37 patients treated in 55 fractions with frameless stereotactic radiosurgery or stereotactic body radiotherapy for cranial and paraspinal cancer. A cranial phantom was used to serve as a control. In all cases, CBCT images were acquired for patient setup with subsequent OB-2DkV verification. It was found that the accuracy of the 2D/3D registration was 0.0 ± 0.5 mm and 0.1° ± 0.4° in the phantom. In patients, accuracy was site dependent owing to deformation of the anatomy: 0.2 ± 1.6 mm and -0.4° ± 1.2° on average for each dimension for the cranial site, 0.7 ± 1.6 mm and 0.3° ± 1.3° for HN, 0.7 ± 2.0 mm and -0.7° ± 1.1° for the thorax, and 1.1 ± 2.6 mm and -0.5° ± 1.9° for the abdomen. Anatomical deformation and presence of soft tissue in 2D/3D registration affect the consistency with

  3. Ultrasound 2D Strain Estimator Based on Image Registration for Ultrasound Elastography

    PubMed Central

    Yang, Xiaofeng; Torres, Mylin; Kirkpatrick, Stephanie; Curran, Walter J.; Liu, Tian

    2015-01-01

    In this paper, we present a new approach to calculate 2D strain through the registration of the pre- and post-compression (deformation) B-mode image sequences based on an intensity-based non-rigid registration algorithm (INRA). Compared with the most commonly used cross-correlation (CC) method, our approach is not constrained to any particular set of directions, and can overcome displacement estimation errors introduced by incoherent motion and variations in the signal under high compression. This INRA method was tested using phantom and in vivo data. The robustness of our approach was demonstrated in the axial direction as well as the lateral direction where the standard CC method frequently fails. In addition, our approach copes well under large compression (over 6%). In the phantom study, we computed the strain image under various compressions and calculated the signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR). The SNR and CNR values of the INRA method were much higher than those calculated from the CC-based method. Furthermore, the clinical feasibility of our approach was demonstrated with the in vivo data from patients with arm lymphedema. PMID:25914492
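
    Once registration has produced a displacement field, strain follows from its spatial derivatives; the sketch below computes engineering strains with finite differences for an assumed uniform axial compression. The field, grid size, and 2% compression value are illustrative assumptions, not the INRA output.

      import numpy as np

      def strain_from_displacement(ux, uy, spacing=1.0):
          """Engineering strains from a 2D displacement field (ux, uy) on a regular grid."""
          duy_dy, duy_dx = np.gradient(uy, spacing)
          dux_dy, dux_dx = np.gradient(ux, spacing)
          exx = dux_dx                       # lateral normal strain
          eyy = duy_dy                       # axial normal strain
          exy = 0.5 * (dux_dy + duy_dx)      # shear strain
          return exx, eyy, exy

      # toy displacement field from a uniform 2% axial compression of a 64 x 64 region
      yy, xx = np.mgrid[0:64, 0:64].astype(float)
      ux = np.zeros_like(xx)
      uy = -0.02 * yy                        # displacement grows with depth
      exx, eyy, exy = strain_from_displacement(ux, uy)
      print("mean axial strain:", round(eyy.mean(), 4))   # approx. -0.02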

  4. Ultrasound 2D strain estimator based on image registration for ultrasound elastography

    NASA Astrophysics Data System (ADS)

    Yang, Xiaofeng; Torres, Mylin; Kirkpatrick, Stephanie; Curran, Walter J.; Liu, Tian

    2014-03-01

    In this paper, we present a new approach to calculate 2D strain through the registration of the pre- and post-compression (deformation) B-mode image sequences based on an intensity-based non-rigid registration algorithm (INRA). Compared with the most commonly used cross-correlation (CC) method, our approach is not constrained to any particular set of directions, and can overcome displacement estimation errors introduced by incoherent motion and variations in the signal under high compression. This INRA method was tested using phantom and in vivo data. The robustness of our approach was demonstrated in the axial direction as well as the lateral direction where the standard CC method frequently fails. In addition, our approach copes well under large compression (over 6%). In the phantom study, we computed the strain image under various compressions and calculated the signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR). The SNR and CNR values of the INRA method were much higher than those calculated from the CC-based method. Furthermore, the clinical feasibility of our approach was demonstrated with the in vivo data from patients with arm lymphedema.

  5. A 2D to 3D ultrasound image registration algorithm for robotically assisted laparoscopic radical prostatectomy

    NASA Astrophysics Data System (ADS)

    Esteghamatian, Mehdi; Pautler, Stephen E.; McKenzie, Charles A.; Peters, Terry M.

    2011-03-01

    Robotically assisted laparoscopic radical prostatectomy (RARP) is an effective approach to resect the diseased organ, with stereoscopic views of the targeted tissue improving the dexterity of the surgeons. However, since the laparoscopic view acquires only the surface image of the tissue, the underlying distribution of the cancer within the organ is not observed, making it difficult to make informed decisions on surgical margins and sparing of neurovascular bundles. One option to address this problem is to exploit registration to integrate the laparoscopic view with images of pre-operatively acquired dynamic contrast enhanced (DCE) MRI that can demonstrate the regions of malignant tissue within the prostate. Such a view potentially allows the surgeon to visualize the location of the malignancy with respect to the surrounding neurovascular structures, permitting a tissue-sparing strategy to be formulated directly based on the observed tumour distribution. If the tumour is close to the capsule, it may be determined that the adjacent neurovascular bundle (NVB) needs to be sacrificed within the surgical margin to ensure that any erupted tumour was resected. On the other hand, if the cancer is sufficiently far from the capsule, one or both NVBs may be spared. However, in order to realize such image integration, the pre-operative image needs to be fused with the laparoscopic view of the prostate. During the initial stages of the operation, the prostate must be tracked in real time so that the pre-operative MR image remains aligned with the patient coordinate system. In this study, we propose and investigate a novel 2D to 3D ultrasound image registration algorithm to track the prostate motion with an accuracy of 2.68 ± 1.31 mm.

  6. Towards real-time 2D/3D registration for organ motion monitoring in image-guided radiation therapy

    NASA Astrophysics Data System (ADS)

    Gendrin, C.; Spoerk, J.; Bloch, C.; Pawiro, S. A.; Weber, C.; Figl, M.; Markelj, P.; Pernus, F.; Georg, D.; Bergmann, H.; Birkfellner, W.

    2010-02-01

    Nowadays, radiation therapy systems incorporate kV imaging units which allow for the real-time acquisition of intra-fractional X-ray images of the patient with high detail and contrast. An application of this technology is tumor motion monitoring during irradiation. For tumor tracking, implanted markers or position sensors are used, which requires an intervention. 2D/3D intensity based registration is an alternative, non-invasive method, but the procedure must be accelerated to the update rate of the device, which lies in the range of 5 Hz. In this paper we investigate fast 2D/3D registration of CT to a single kV X-ray image using a new porcine reference phantom with seven implanted fiducial markers. Several parameters influencing the speed and accuracy of the registrations are investigated. First, four intensity based merit functions, namely Cross-Correlation, Rank Correlation, Mutual Information and Correlation Ratio, are compared. Secondly, wobbled splatting and ray casting rendering techniques are implemented on the GPU and the influence of each algorithm on the performance of 2D/3D registration is evaluated. Rendering times for a single DRR of 20 ms were achieved. Different thresholds of the CT volume were also examined for rendering to find the setting that achieves the best possible correspondence with the X-ray images. Fast registrations below 4 s became possible with an in-plane accuracy down to 0.8 mm.

  7. Recovering 3D tumor locations from 2D bioluminescence images and registration with CT images

    NASA Astrophysics Data System (ADS)

    Huang, Xiaolei; Metaxas, Dimitris N.; Menon, Lata G.; Mayer-Kuckuk, Philipp; Bertino, Joseph R.; Banerjee, Debabrata

    2006-02-01

    In this paper, we introduce a novel and efficient algorithm for reconstructing the 3D locations of tumor sites from a set of 2D bioluminescence images which are taken by the same camera, with the object rotated by a small angle between acquisitions. Our approach requires a much simpler setup than those using multiple cameras, and the algorithmic steps in our framework are efficient and robust enough to facilitate its use in analyzing the repeated imaging of the same animal transplanted with gene marked cells. In order to visualize the structure of the tumor in 3D, we also co-register the BLI-reconstructed crude structure with the detailed anatomical structure extracted from high-resolution microCT on a single platform. We present our method using both phantom studies and real studies on small animals.

  8. 2D-3D registration for prostate radiation therapy based on a statistical model of transmission images

    SciTech Connect

    Munbodh, Reshma; Tagare, Hemant D.; Chen Zhe; Jaffray, David A.; Moseley, Douglas J.; Knisely, Jonathan P. S.; Duncan, James S.

    2009-10-15

    Purpose: In external beam radiation therapy of pelvic sites, patient setup errors can be quantified by registering 2D projection radiographs acquired during treatment to a 3D planning computed tomography (CT) scan. We present a 2D-3D registration framework based on a statistical model of the intensity values in the two imaging modalities. Methods: The model assumes that intensity values in projection radiographs are independently but not identically distributed due to the nonstationary nature of photon counting noise. Two probability distributions are considered for the intensity values: Poisson and Gaussian. Using maximum likelihood estimation, two similarity measures, maximum likelihood with a Poisson distribution (MLP) and maximum likelihood with a Gaussian distribution (MLG), are derived. Further, we investigate the merit of the model-based registration approach for data obtained with current imaging equipment and doses by comparing the performance of the derived similarity measures to that of the Pearson correlation coefficient (ICC) on accurately collected data of an anthropomorphic phantom of the pelvis and on patient data. Results: Registration accuracy was similar for all three similarity measures and surpassed current clinical requirements of 3 mm for pelvic sites. For pose determination experiments with a kilovoltage (kV) cone-beam CT (CBCT) and kV projection radiographs of the phantom in the anterior-posterior (AP) view, registration accuracies were 0.42 mm (MLP), 0.29 mm (MLG), and 0.29 mm (ICC). For kV CBCT and megavoltage (MV) AP portal images of the same phantom, registration accuracies were 1.15 mm (MLP), 0.90 mm (MLG), and 0.69 mm (ICC). Registration of a kV CT and MV AP portal images of a patient was successful in all instances. Conclusions: The results indicate that high registration accuracy is achievable with multiple methods including methods that are based on a statistical model of a 3D CT and 2D projection images.
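
    The two model-based similarity measures reduce to log-likelihoods of the observed radiograph given expected intensities from the DRR. The sketch below writes them out under the simplifying assumption that DRR values map directly to expected photon counts; the variable names and toy data are illustrative, not the study's calibration.

      import numpy as np
      from scipy.special import gammaln

      def loglik_poisson(expected, observed):
          """Poisson log-likelihood of observed counts given expected counts."""
          lam = np.clip(expected, 1e-6, None)
          return float(np.sum(observed * np.log(lam) - lam - gammaln(observed + 1.0)))

      def loglik_gaussian(expected, observed):
          """Gaussian log-likelihood with intensity-dependent variance (var = expected)."""
          var = np.clip(expected, 1e-6, None)
          return float(np.sum(-0.5 * np.log(2 * np.pi * var)
                              - 0.5 * (observed - expected) ** 2 / var))

      # toy usage: the well-aligned DRR explains the radiograph better than a shifted one
      rng = np.random.default_rng(7)
      expected_counts = 50.0 + 200.0 * rng.random((64, 64))        # DRR -> expected counts
      radiograph = rng.poisson(expected_counts).astype(float)      # noisy measurement
      shifted = np.roll(expected_counts, 5, axis=1)                # misaligned DRR
      print("Poisson  LL aligned vs shifted:",
            round(loglik_poisson(expected_counts, radiograph)),
            round(loglik_poisson(shifted, radiograph)))
      print("Gaussian LL aligned vs shifted:",
            round(loglik_gaussian(expected_counts, radiograph)),
            round(loglik_gaussian(shifted, radiograph)))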

  9. Registration of 2D to 3D joint images using phase-based mutual information

    NASA Astrophysics Data System (ADS)

    Dalvi, Rupin; Abugharbieh, Rafeef; Pickering, Mark; Scarvell, Jennie; Smith, Paul

    2007-03-01

    Registration of two dimensional to three dimensional orthopaedic medical image data has important applications particularly in the area of image guided surgery and sports medicine. Fluoroscopy to computer tomography (CT) registration is an important case, wherein digitally reconstructed radiographs derived from the CT data are registered to the fluoroscopy data. Traditional registration metrics such as intensity-based mutual information (MI) typically work well but often suffer from gross misregistration errors when the image to be registered contains a partial view of the anatomy visible in the target image. Phase-based MI provides a robust alternative similarity measure which, in addition to possessing the general robustness and noise immunity that MI provides, also employs local phase information in the registration process which makes it less susceptible to the aforementioned errors. In this paper, we propose using the complex wavelet transform for computing image phase information and incorporating that into a phase-based MI measure for image registration. Tests on a CT volume and 6 fluoroscopy images of the knee are presented. The femur and the tibia in the CT volume were individually registered to the fluoroscopy images using intensity-based MI, gradient-based MI and phase-based MI. Errors in the coordinates of fiducials present in the bone structures were used to assess the accuracy of the different registration schemes. Quantitative results demonstrate that the performance of intensity-based MI was the worst. Gradient-based MI performed slightly better, while phase-based MI results were the best consistently producing the lowest errors.

  10. Robust initialization of 2D-3D image registration using the projection-slice theorem and phase correlation

    SciTech Connect

    Bom, M. J. van der; Bartels, L. W.; Gounis, M. J.; Homan, R.; Timmer, J.; Viergever, M. A.; Pluim, J. P. W.

    2010-04-15

    Purpose: The image registration literature comprises many methods for 2D-3D registration for which accuracy has been established in a variety of applications. However, clinical application is limited by a small capture range. Initial offsets outside the capture range of a registration method will not converge to a successful registration. Previously reported capture ranges, defined as the 95% success range, are in the order of 4-11 mm mean target registration error. In this article, a relatively computationally inexpensive and robust estimation method is proposed with the objective to enlarge the capture range. Methods: The method uses the projection-slice theorem in combination with phase correlation in order to estimate the transform parameters, which provides an initialization of the subsequent registration procedure. Results: The feasibility of the method was evaluated by experiments using digitally reconstructed radiographs generated from in vivo 3D-RX data. With these experiments it was shown that the projection-slice theorem provides successful estimates of the rotational transform parameters for perspective projections and in case of translational offsets. The method was further tested on ex vivo ovine x-ray data. In 95% of the cases, the method yielded successful estimates for initial mean target registration errors up to 19.5 mm. Finally, the method was evaluated as an initialization method for an intensity-based 2D-3D registration method. The uninitialized and initialized registration experiments had success rates of 28.8% and 68.6%, respectively. Conclusions: The authors have shown that the initialization method based on the projection-slice theorem and phase correlation yields adequate initializations for existing registration methods, thereby substantially enlarging the capture range of these methods.
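
    Phase correlation itself is standard: the normalized cross-power spectrum of two images has a sharp inverse-FFT peak at their translational offset. The sketch below recovers an integer 2D shift that way; it omits the projection-slice step the authors use to handle rotations and perspective, and all toy values are assumptions.

      import numpy as np

      def phase_correlation_shift(fixed, moving):
          """Estimate the integer (dy, dx) offset by which `moving` is translated
          relative to `fixed`, via the peak of the normalized cross-power spectrum."""
          F = np.fft.fft2(fixed)
          M = np.fft.fft2(moving)
          cross_power = np.conj(F) * M
          cross_power /= np.abs(cross_power) + 1e-12
          corr = np.fft.ifft2(cross_power).real
          dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
          dy, dx = int(dy), int(dx)
          h, w = fixed.shape
          # wrap shifts larger than half the image size to negative offsets
          return (dy - h if dy > h // 2 else dy, dx - w if dx > w // 2 else dx)

      # toy usage: recover a known circular translation
      rng = np.random.default_rng(8)
      ref = rng.random((128, 128))
      true_shift = (17, -9)
      mov = np.roll(np.roll(ref, true_shift[0], axis=0), true_shift[1], axis=1)
      print("estimated:", phase_correlation_shift(ref, mov), "true:", true_shift)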

  11. Curve-based 2D-3D registration of coronary vessels for image guided procedure

    NASA Astrophysics Data System (ADS)

    Duong, Luc; Liao, Rui; Sundar, Hari; Tailhades, Benoit; Meyer, Andreas; Xu, Chenyang

    2009-02-01

    A 3D roadmap provided by pre-operative volumetric data that is aligned with fluoroscopy helps visualization and navigation in Interventional Cardiology (IC), especially when contrast agent injection, used to highlight coronary vessels, cannot be applied systematically during the whole procedure, or when there is low visibility in fluoroscopy for partially or totally occluded vessels. The main contribution of this work is to register pre-operative volumetric data with intraoperative fluoroscopy for specific vessel(s) occurring during the procedure, even without contrast agent injection, to provide a useful 3D roadmap. In addition, this study incorporates automatic ECG gating for cardiac motion. Respiratory motion is identified by rigid body registration of the vessels. The coronary vessels are first segmented from a multislice computed tomography (MSCT) volume and corresponding vessel segments are identified on a single gated 2D fluoroscopic frame. Registration can be explicitly constrained using one or multiple branches of a contrast-enhanced vessel tree or the outline of the guide wire used to navigate during the procedure. Finally, the alignment problem is solved by the Iterative Closest Point (ICP) algorithm. To be computationally efficient, a distance transform is computed from the 2D identification of each vessel such that the distance is zero on the centerline of the vessel and increases away from the centerline. Quantitative results were obtained by comparing the registration of random poses and a ground truth alignment for 5 datasets. We conclude that the proposed method is promising for accurate 2D-3D registration, even for difficult cases of occluded vessels without injection of contrast agent.

  12. Self-calibration of cone-beam CT geometry using 3D–2D image registration

    PubMed Central

    Ouadah, S; Stayman, J W; Gang, G J; Ehtiati, T; Siewerdsen, J H

    2016-01-01

    Robotic C-arms are capable of complex orbits that can increase field of view, reduce artifacts, improve image quality, and/or reduce dose; however, it can be challenging to obtain accurate, reproducible geometric calibration required for image reconstruction for such complex orbits. This work presents a method for geometric calibration for an arbitrary source-detector orbit by registering 2D projection data to a previously acquired 3D image. It also yields a method by which calibration of simple circular orbits can be improved. The registration uses a normalized gradient information similarity metric and the covariance matrix adaptation-evolution strategy optimizer for robustness against local minima and changes in image content. The resulting transformation provides a ‘self-calibration’ of system geometry. The algorithm was tested in phantom studies using both a cone-beam CT (CBCT) test-bench and a robotic C-arm (Artis Zeego, Siemens Healthcare) for circular and non-circular orbits. Self-calibration performance was evaluated in terms of the full-width at half-maximum (FWHM) of the point spread function in CBCT reconstructions, the reprojection error (RPE) of steel ball bearings placed on each phantom, and the overall quality and presence of artifacts in CBCT images. In all cases, self-calibration improved the FWHM—e.g. on the CBCT bench, FWHM = 0.86 mm for conventional calibration compared to 0.65 mm for self-calibration (p < 0.001). Similar improvements were measured in RPE—e.g. on the robotic C-arm, RPE = 0.73 mm for conventional calibration compared to 0.55 mm for self-calibration (p < 0.001). Visible improvement was evident in CBCT reconstructions using self-calibration, particularly about high-contrast, high-frequency objects (e.g. temporal bone air cells and a surgical needle). The results indicate that self-calibration can improve even upon systems with presumably accurate geometric calibration and is applicable to situations where conventional

  13. Self-calibration of cone-beam CT geometry using 3D-2D image registration.

    PubMed

    Ouadah, S; Stayman, J W; Gang, G J; Ehtiati, T; Siewerdsen, J H

    2016-04-01

    Robotic C-arms are capable of complex orbits that can increase field of view, reduce artifacts, improve image quality, and/or reduce dose; however, it can be challenging to obtain accurate, reproducible geometric calibration required for image reconstruction for such complex orbits. This work presents a method for geometric calibration for an arbitrary source-detector orbit by registering 2D projection data to a previously acquired 3D image. It also yields a method by which calibration of simple circular orbits can be improved. The registration uses a normalized gradient information similarity metric and the covariance matrix adaptation-evolution strategy optimizer for robustness against local minima and changes in image content. The resulting transformation provides a 'self-calibration' of system geometry. The algorithm was tested in phantom studies using both a cone-beam CT (CBCT) test-bench and a robotic C-arm (Artis Zeego, Siemens Healthcare) for circular and non-circular orbits. Self-calibration performance was evaluated in terms of the full-width at half-maximum (FWHM) of the point spread function in CBCT reconstructions, the reprojection error (RPE) of steel ball bearings placed on each phantom, and the overall quality and presence of artifacts in CBCT images. In all cases, self-calibration improved the FWHM-e.g. on the CBCT bench, FWHM  =  0.86 mm for conventional calibration compared to 0.65 mm for self-calibration (p  <  0.001). Similar improvements were measured in RPE-e.g. on the robotic C-arm, RPE  =  0.73 mm for conventional calibration compared to 0.55 mm for self-calibration (p  <  0.001). Visible improvement was evident in CBCT reconstructions using self-calibration, particularly about high-contrast, high-frequency objects (e.g. temporal bone air cells and a surgical needle). The results indicate that self-calibration can improve even upon systems with presumably accurate geometric calibration and is

  14. Self-calibration of cone-beam CT geometry using 3D-2D image registration

    NASA Astrophysics Data System (ADS)

    Ouadah, S.; Stayman, J. W.; Gang, G. J.; Ehtiati, T.; Siewerdsen, J. H.

    2016-04-01

    Robotic C-arms are capable of complex orbits that can increase field of view, reduce artifacts, improve image quality, and/or reduce dose; however, it can be challenging to obtain accurate, reproducible geometric calibration required for image reconstruction for such complex orbits. This work presents a method for geometric calibration for an arbitrary source-detector orbit by registering 2D projection data to a previously acquired 3D image. It also yields a method by which calibration of simple circular orbits can be improved. The registration uses a normalized gradient information similarity metric and the covariance matrix adaptation-evolution strategy optimizer for robustness against local minima and changes in image content. The resulting transformation provides a ‘self-calibration’ of system geometry. The algorithm was tested in phantom studies using both a cone-beam CT (CBCT) test-bench and a robotic C-arm (Artis Zeego, Siemens Healthcare) for circular and non-circular orbits. Self-calibration performance was evaluated in terms of the full-width at half-maximum (FWHM) of the point spread function in CBCT reconstructions, the reprojection error (RPE) of steel ball bearings placed on each phantom, and the overall quality and presence of artifacts in CBCT images. In all cases, self-calibration improved the FWHM—e.g. on the CBCT bench, FWHM  =  0.86 mm for conventional calibration compared to 0.65 mm for self-calibration (p  <  0.001). Similar improvements were measured in RPE—e.g. on the robotic C-arm, RPE  =  0.73 mm for conventional calibration compared to 0.55 mm for self-calibration (p  <  0.001). Visible improvement was evident in CBCT reconstructions using self-calibration, particularly about high-contrast, high-frequency objects (e.g. temporal bone air cells and a surgical needle). The results indicate that self-calibration can improve even upon systems with presumably accurate geometric calibration and is
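
    The records above describe driving the self-calibration with a normalized gradient information (NGI) similarity metric. As an illustration only (not the authors' code), the sketch below computes a gradient-information-style similarity between a measured projection and a DRR, normalized by the self-similarity of the fixed image; the function names and the Pluim-style angle weighting are our own assumptions.

```python
import numpy as np


def gradient_information(fixed, moving, eps=1e-8):
    """Angle-weighted minimum-gradient-magnitude similarity (higher is better)."""
    gfy, gfx = np.gradient(fixed.astype(float))
    gmy, gmx = np.gradient(moving.astype(float))
    mag_f = np.hypot(gfx, gfy)
    mag_m = np.hypot(gmx, gmy)
    # Angle between the gradient vectors of the two images at each pixel.
    cos_a = (gfx * gmx + gfy * gmy) / (mag_f * mag_m + eps)
    weight = 0.5 * (np.cos(2.0 * np.arccos(np.clip(cos_a, -1.0, 1.0))) + 1.0)
    return np.sum(weight * np.minimum(mag_f, mag_m))


def normalized_gradient_information(fixed, moving):
    """Normalize so a perfect match scores ~1 regardless of image scale."""
    return gradient_information(fixed, moving) / gradient_information(fixed, fixed)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    projection = rng.random((128, 128))
    print(normalized_gradient_information(projection, projection))                 # ~1.0
    print(normalized_gradient_information(projection, np.roll(projection, 5, 0)))  # lower
```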

  15. Assessing 3D tunnel position in ACL reconstruction using a novel single image 3D-2D registration

    NASA Astrophysics Data System (ADS)

    Kang, X.; Yau, W. P.; Otake, Y.; Cheung, P. Y. S.; Hu, Y.; Taylor, R. H.

    2012-02-01

    The routinely used procedure for evaluating tunnel positions following anterior cruciate ligament (ACL) reconstructions based on standard X-ray images is known to pose difficulties in terms of obtaining accurate measures, especially in providing three-dimensional tunnel positions. This is largely due to the variability in individual knee joint pose relative to X-ray plates. Accurate results were reported using postoperative CT. However, its extensive usage in clinical routine is hampered by its major requirement of having CT scans of individual patients, which are not available for most ACL reconstructions. These difficulties are addressed through the proposed method, which aligns a knee model to X-ray images using our novel single-image 3D-2D registration method and then estimates the 3D tunnel position. In the proposed method, the alignment is achieved by using a novel contour-based 3D-2D registration method wherein image contours are treated as a set of oriented points. However, instead of using some form of orientation weighting function and multiplying it with a distance function, we formulate the 3D-2D registration as a probability density estimation using a mixture of von Mises-Fisher-Gaussian (vMFG) distributions and solve it through an expectation maximization (EM) algorithm. Compared with the ground-truth established from postoperative CT, our registration method in an experiment using a plastic phantom showed accurate results, with errors of (-0.43° ± 1.19°, 0.45° ± 2.17°, 0.23° ± 1.05°) and (0.03 ± 0.55, -0.03 ± 0.54, -2.73 ± 1.64) mm. The entry point of the ACL tunnel, one of the key measurements, was obtained with a distance error of 0.53 ± 0.30 mm.

  16. Known-component 3D-2D registration for image guidance and quality assurance in spine surgery pedicle screw placement

    NASA Astrophysics Data System (ADS)

    Uneri, A.; Stayman, J. W.; De Silva, T.; Wang, A. S.; Kleinszig, G.; Vogt, S.; Khanna, A. J.; Wolinsky, J.-P.; Gokaslan, Z. L.; Siewerdsen, J. H.

    2015-03-01

    Purpose. To extend the functionality of radiographic/fluoroscopic imaging systems already within the standard spine surgery workflow to: 1) provide guidance of surgical devices analogous to an external tracking system; and 2) provide intraoperative quality assurance (QA) of the surgical product. Methods. Using fast, robust 3D-2D registration in combination with 3D models of known components (surgical devices), the 3D pose determination was solved to relate known components to 2D projection images and 3D preoperative CT in near-real-time. Exact and parametric models of the components were used as input to the algorithm to evaluate the effects of model fidelity. The proposed algorithm employs the covariance matrix adaptation evolution strategy (CMA-ES) to maximize gradient correlation (GC) between measured projections and simulated forward projections of components. Geometric accuracy was evaluated in a spine phantom in terms of target registration error at the tool tip (TREx) and angular deviation (TREΦ) from the planned trajectory. Results. Transpedicle surgical devices (probe tool and spine screws) were successfully guided with TREx < 2 mm and TREΦ < 0.5° given projection views separated by at least 30° (easily accommodated on a mobile C-arm). QA of the surgical product based on 3D-2D registration demonstrated the detection of pedicle screw breach with TREx < 1 mm, with a trend of improved accuracy correlated with the fidelity of the component model employed. Conclusions. 3D-2D registration combined with 3D models of known surgical components provides a novel method for near-real-time guidance and quality assurance using a mobile C-arm without external trackers or fiducial markers. Ongoing work includes determination of optimal views based on component shape and trajectory, improved robustness to anatomical deformation, and expanded preclinical testing in spine and intracranial surgeries.
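
    As a rough illustration of the optimization loop described above, the sketch below uses the open-source `cma` package (an assumption; the record does not specify an implementation) to recover a pose by maximizing gradient correlation between a target image and a transformed template. The toy `render` function stands in for the forward projection (DRR) of the known component model.

```python
import numpy as np
import cma                      # pip install cma
from scipy import ndimage


def gradient_correlation(a, b, eps=1e-8):
    """Mean NCC of the x- and y-gradient images (higher is better)."""
    def ncc(x, y):
        x = x - x.mean()
        y = y - y.mean()
        return np.sum(x * y) / (np.sqrt(np.sum(x**2) * np.sum(y**2)) + eps)
    ay, ax = np.gradient(a)
    by, bx = np.gradient(b)
    return 0.5 * (ncc(ax, bx) + ncc(ay, by))


def render(template, pose):
    """Toy stand-in for a DRR of the component at pose (tx, ty, angle in degrees)."""
    tx, ty, angle = pose
    out = ndimage.rotate(template, angle, reshape=False, order=1)
    return ndimage.shift(out, (ty, tx), order=1)


template = np.zeros((96, 96))
template[40:56, 20:76] = 1.0                    # toy "known component" silhouette
true_pose = (6.0, -4.0, 10.0)
measured = render(template, true_pose)          # toy "measured projection"


def cost(pose):
    # CMA-ES minimizes, so return the negative similarity.
    return -gradient_correlation(measured, render(template, pose))


es = cma.CMAEvolutionStrategy([0.0, 0.0, 0.0], 5.0, {"verbose": -9})
es.optimize(cost)
print("estimated pose:", np.round(es.result.xbest, 2), " true pose:", true_pose)
```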

  17. Development of fast patient position verification software using 2D-3D image registration and its clinical experience.

    PubMed

    Mori, Shinichiro; Kumagai, Motoki; Miki, Kentaro; Fukuhara, Riki; Haneishi, Hideaki

    2015-09-01

    To improve treatment workflow, we developed a graphics processing unit (GPU)-based patient position verification software application and integrated it into carbon-ion scanning beam treatment. Here, we evaluated the basic performance of the software. The algorithm provides 2D/3D registration matching using CT and orthogonal X-ray flat panel detector (FPD) images. The participants were 53 patients with tumors of the head and neck, prostate, or lung receiving carbon-ion beam treatment. 2D/3D-ITchi-Gime (ITG) calculation accuracy was evaluated in terms of computation time and registration accuracy. Registration calculation was determined using the similarity measurement metrics gradient difference (GD), normalized mutual information (NMI), zero-mean normalized cross-correlation (ZNCC), and their combination. Registration accuracy was dependent on the particular metric used. Representative examples were determined to have target registration error (TRE) = 0.45 ± 0.23 mm and angular error (AE) = 0.35 ± 0.18° with ZNCC + GD for a head and neck tumor; TRE = 0.12 ± 0.07 mm and AE = 0.16 ± 0.07° with ZNCC for a pelvic tumor; and TRE = 1.19 ± 0.78 mm and AE = 0.83 ± 0.61° with ZNCC for a lung tumor. Calculation time was less than 7.26 s. The new registration software has been successfully installed and implemented in our treatment process. We expect that it will improve both treatment workflow and treatment accuracy. PMID:26081313

  18. Development of fast patient position verification software using 2D-3D image registration and its clinical experience

    PubMed Central

    Mori, Shinichiro; Kumagai, Motoki; Miki, Kentaro; Fukuhara, Riki; Haneishi, Hideaki

    2015-01-01

    To improve treatment workflow, we developed a graphics processing unit (GPU)-based patient position verification software application and integrated it into carbon-ion scanning beam treatment. Here, we evaluated the basic performance of the software. The algorithm provides 2D/3D registration matching using CT and orthogonal X-ray flat panel detector (FPD) images. The participants were 53 patients with tumors of the head and neck, prostate, or lung receiving carbon-ion beam treatment. 2D/3D-ITchi-Gime (ITG) calculation accuracy was evaluated in terms of computation time and registration accuracy. Registration calculation was determined using the similarity measurement metrics gradient difference (GD), normalized mutual information (NMI), zero-mean normalized cross-correlation (ZNCC), and their combination. Registration accuracy was dependent on the particular metric used. Representative examples were determined to have target registration error (TRE) = 0.45 ± 0.23 mm and angular error (AE) = 0.35 ± 0.18° with ZNCC + GD for a head and neck tumor; TRE = 0.12 ± 0.07 mm and AE = 0.16 ± 0.07° with ZNCC for a pelvic tumor; and TRE = 1.19 ± 0.78 mm and AE = 0.83 ± 0.61° with ZNCC for a lung tumor. Calculation time was less than 7.26 s. The new registration software has been successfully installed and implemented in our treatment process. We expect that it will improve both treatment workflow and treatment accuracy. PMID:26081313
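
    For illustration, the following sketch shows two of the similarity metrics named above, zero-mean normalized cross-correlation (ZNCC) and a simple gradient difference (GD) term, together with an additive combination analogous to the "ZNCC + GD" variant. The exact GD formulation and the combination weight are assumptions, not the authors' implementation.

```python
import numpy as np


def zncc(a, b, eps=1e-8):
    """Zero-mean normalized cross-correlation between two images."""
    a = a.astype(float).ravel()
    b = b.astype(float).ravel()
    a -= a.mean()
    b -= b.mean()
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + eps))


def gradient_difference(a, b, eps=1e-8):
    """Penney-style GD term: large when the difference image has few strong edges."""
    a = a.astype(float)
    b = b.astype(float)
    gy, gx = np.gradient(a - b)
    sy = np.var(np.gradient(a)[0]) + eps
    sx = np.var(np.gradient(a)[1]) + eps
    return float(np.sum(sx / (sx + gx**2)) + np.sum(sy / (sy + gy**2)))


def combined_score(fpd_image, drr, weight=1.0):
    """Additive combination analogous to the 'ZNCC + GD' variant above."""
    return zncc(fpd_image, drr) + weight * gradient_difference(fpd_image, drr) / fpd_image.size
```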

  19. Auto-masked 2D/3D image registration and its validation with clinical cone-beam computed tomography

    NASA Astrophysics Data System (ADS)

    Steininger, P.; Neuner, M.; Weichenberger, H.; Sharp, G. C.; Winey, B.; Kametriser, G.; Sedlmayer, F.; Deutschmann, H.

    2012-07-01

    Image-guided alignment procedures in radiotherapy aim at minimizing discrepancies between the planned and the real patient setup. For that purpose, we developed a 2D/3D approach which rigidly registers a computed tomography (CT) with two x-rays by maximizing the agreement in pixel intensity between the x-rays and the corresponding reconstructed radiographs from the CT. Moreover, the algorithm selects regions of interest (masks) in the x-rays based on 3D segmentations from the pre-planning stage. For validation, orthogonal x-ray pairs from different viewing directions of 80 pelvic cone-beam CT (CBCT) raw data sets were used. The 2D/3D results were compared to corresponding standard 3D/3D CBCT-to-CT alignments. Outcome over 8400 2D/3D experiments showed that parametric errors in root mean square were <0.18° (rotations) and <0.73 mm (translations), respectively, using rank correlation as intensity metric. This corresponds to a mean target registration error, related to the voxels of the lesser pelvis, of <2 mm in 94.1% of the cases. From the results we conclude that 2D/3D registration based on sequentially acquired orthogonal x-rays of the pelvis is a viable alternative to CBCT-based approaches if rigid alignment on bony anatomy is sufficient, no volumetric intra-interventional data set is required and the expected error range fits the individual treatment prescription.

  20. Auto-masked 2D/3D image registration and its validation with clinical cone-beam computed tomography.

    PubMed

    Steininger, P; Neuner, M; Weichenberger, H; Sharp, G C; Winey, B; Kametriser, G; Sedlmayer, F; Deutschmann, H

    2012-07-01

    Image-guided alignment procedures in radiotherapy aim at minimizing discrepancies between the planned and the real patient setup. For that purpose, we developed a 2D/3D approach which rigidly registers a computed tomography (CT) with two x-rays by maximizing the agreement in pixel intensity between the x-rays and the corresponding reconstructed radiographs from the CT. Moreover, the algorithm selects regions of interest (masks) in the x-rays based on 3D segmentations from the pre-planning stage. For validation, orthogonal x-ray pairs from different viewing directions of 80 pelvic cone-beam CT (CBCT) raw data sets were used. The 2D/3D results were compared to corresponding standard 3D/3D CBCT-to-CT alignments. Outcome over 8400 2D/3D experiments showed that parametric errors in root mean square were <0.18° (rotations) and <0.73 mm (translations), respectively, using rank correlation as intensity metric. This corresponds to a mean target registration error, related to the voxels of the lesser pelvis, of <2 mm in 94.1% of the cases. From the results we conclude that 2D/3D registration based on sequentially acquired orthogonal x-rays of the pelvis is a viable alternative to CBCT-based approaches if rigid alignment on bony anatomy is sufficient, no volumetric intra-interventional data set is required and the expected error range fits the individual treatment prescription. PMID:22705709
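
    The rank-correlation intensity metric mentioned above can be sketched as a Spearman correlation of pixel intensities between the x-ray and the DRR, optionally restricted to the auto-generated mask. This is our own minimal formulation, not the published code.

```python
import numpy as np
from scipy.stats import spearmanr


def rank_correlation(xray, drr, mask=None):
    """Spearman rank correlation of pixel intensities (optionally inside a boolean mask)."""
    a = xray.ravel() if mask is None else xray[mask]
    b = drr.ravel() if mask is None else drr[mask]
    rho, _ = spearmanr(a, b)
    return float(rho)
```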

  1. Registration of 2D C-Arm and 3D CT Images for a C-Arm Image-Assisted Navigation System for Spinal Surgery.

    PubMed

    Chang, Chih-Ju; Lin, Geng-Li; Tse, Alex; Chu, Hong-Yu; Tseng, Ching-Shiow

    2015-01-01

    C-Arm image-assisted surgical navigation system has been broadly applied to spinal surgery. However, accurate path planning on the C-Arm AP-view image is difficult. This research studies 2D-3D image registration methods to obtain the optimum transformation matrix between C-Arm and CT image frames. Through the transformation matrix, the surgical path planned on preoperative CT images can be transformed and displayed on the C-Arm images for surgical guidance. The positions of surgical instruments will also be displayed on both CT and C-Arm in the real time. Five similarity measure methods of 2D-3D image registration including Normalized Cross-Correlation, Gradient Correlation, Pattern Intensity, Gradient Difference Correlation, and Mutual Information combined with three optimization methods including Powell's method, Downhill simplex algorithm, and genetic algorithm are applied to evaluate their performance in converge range, efficiency, and accuracy. Experimental results show that the combination of Normalized Cross-Correlation measure method with Downhill simplex algorithm obtains maximum correlation and similarity in C-Arm and Digital Reconstructed Radiograph (DRR) images. Spine saw bones are used in the experiment to evaluate 2D-3D image registration accuracy. The average error in displacement is 0.22 mm. The success rate is approximately 90% and average registration time takes 16 seconds. PMID:27018859
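
    A minimal sketch of the best-performing pairing reported above, normalized cross-correlation driven by the downhill simplex (Nelder-Mead) optimizer, is shown below on a toy 2D alignment problem. In the actual system the moving image would be a DRR regenerated from the CT at each candidate pose; the toy images and parameterization here are assumptions.

```python
import numpy as np
from scipy import ndimage
from scipy.optimize import minimize


def ncc(a, b, eps=1e-8):
    a = a - a.mean()
    b = b - b.mean()
    return np.sum(a * b) / (np.sqrt(np.sum(a**2) * np.sum(b**2)) + eps)


def transform(image, params):
    """In-plane rigid transform (tx, ty, rotation in degrees) of a 2D image."""
    tx, ty, angle = params
    out = ndimage.rotate(image, angle, reshape=False, order=1)
    return ndimage.shift(out, (ty, tx), order=1)


rng = np.random.default_rng(1)
c_arm_image = ndimage.gaussian_filter(rng.random((64, 64)), 3.0)   # toy fixed image
drr = transform(c_arm_image, (3.0, -2.0, 5.0))                     # toy misaligned "DRR"

# Downhill simplex (Nelder-Mead) maximizes NCC by minimizing its negative.
result = minimize(lambda p: -ncc(c_arm_image, transform(drr, p)),
                  x0=np.zeros(3), method="Nelder-Mead")
print("recovered correction:", np.round(result.x, 2))
```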

  2. Robust 3D-2D image registration: application to spine interventions and vertebral labeling in the presence of anatomical deformation

    NASA Astrophysics Data System (ADS)

    Otake, Yoshito; Wang, Adam S.; Webster Stayman, J.; Uneri, Ali; Kleinszig, Gerhard; Vogt, Sebastian; Khanna, A. Jay; Gokaslan, Ziya L.; Siewerdsen, Jeffrey H.

    2013-12-01

    We present a framework for robustly estimating registration between a 3D volume image and a 2D projection image and evaluate its precision and robustness in spine interventions for vertebral localization in the presence of anatomical deformation. The framework employs a normalized gradient information similarity metric and multi-start covariance matrix adaptation evolution strategy optimization with local-restarts, which provided improved robustness against deformation and content mismatch. The parallelized implementation allowed orders-of-magnitude acceleration in computation time and improved the robustness of registration via multi-start global optimization. Experiments involved a cadaver specimen and two CT datasets (supine and prone) and 36 C-arm fluoroscopy images acquired with the specimen in four positions (supine, prone, supine with lordosis, prone with kyphosis), three regions (thoracic, abdominal, and lumbar), and three levels of geometric magnification (1.7, 2.0, 2.4). Registration accuracy was evaluated in terms of projection distance error (PDE) between the estimated and true target points in the projection image, including 14,400 random trials (200 trials on the 72 registration scenarios) with initialization error up to ±200 mm and ±10°. The resulting median PDE was better than 0.1 mm in all cases, depending somewhat on the resolution of input CT and fluoroscopy images. The cadaver experiments illustrated the tradeoff between robustness and computation time, yielding a success rate of 99.993% in vertebral labeling (with ‘success’ defined as PDE <5 mm) using 1,718,664 ± 96,582 function evaluations computed in 54.0 ± 3.5 s on a mid-range GPU (nVidia, GeForce GTX690). Parameters yielding a faster search (e.g., fewer multi-starts) reduced robustness under conditions of large deformation and poor initialization (99.535% success for the same data registered in 13.1 s), but given good initialization (e.g., ±5 mm, assuming a robust initial

  3. Robust 3D–2D image registration: application to spine interventions and vertebral labeling in the presence of anatomical deformation

    PubMed Central

    Otake, Yoshito; Wang, Adam S; Stayman, J Webster; Uneri, Ali; Kleinszig, Gerhard; Vogt, Sebastian; Khanna, A Jay; Gokaslan, Ziya L; Siewerdsen, Jeffrey H

    2016-01-01

    We present a framework for robustly estimating registration between a 3D volume image and a 2D projection image and evaluate its precision and robustness in spine interventions for vertebral localization in the presence of anatomical deformation. The framework employs a normalized gradient information similarity metric and multi-start covariance matrix adaptation evolution strategy optimization with local-restarts, which provided improved robustness against deformation and content mismatch. The parallelized implementation allowed orders-of-magnitude acceleration in computation time and improved the robustness of registration via multi-start global optimization. Experiments involved a cadaver specimen and two CT datasets (supine and prone) and 36 C-arm fluoroscopy images acquired with the specimen in four positions (supine, prone, supine with lordosis, prone with kyphosis), three regions (thoracic, abdominal, and lumbar), and three levels of geometric magnification (1.7, 2.0, 2.4). Registration accuracy was evaluated in terms of projection distance error (PDE) between the estimated and true target points in the projection image, including 14,400 random trials (200 trials on the 72 registration scenarios) with initialization error up to ±200 mm and ±10°. The resulting median PDE was better than 0.1 mm in all cases, depending somewhat on the resolution of input CT and fluoroscopy images. The cadaver experiments illustrated the tradeoff between robustness and computation time, yielding a success rate of 99.993% in vertebral labeling (with ‘success’ defined as PDE <5 mm) using 1,718,664 ± 96,582 function evaluations computed in 54.0 ± 3.5 s on a mid-range GPU (nVidia, GeForce GTX690). Parameters yielding a faster search (e.g., fewer multi-starts) reduced robustness under conditions of large deformation and poor initialization (99.535% success for the same data registered in 13.1 s), but given good initialization (e.g., ±5 mm, assuming a robust initial run) the
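
    Projection distance error (PDE), the accuracy measure used in this study, can be sketched as the in-plane distance between a 3D target point projected with the estimated pose and with the ground-truth pose. The helper below is our own illustration, not the authors' evaluation code.

```python
import numpy as np


def project(point_3d, projection_matrix):
    """Pinhole projection of a 3D point with a 3x4 matrix; returns 2D detector coordinates."""
    p = projection_matrix @ np.append(point_3d, 1.0)
    return p[:2] / p[2]


def projection_distance_error(target_3d, P_estimated, P_true, pixel_size_mm=1.0):
    """In-plane distance (mm) between the target projected with the estimated and true poses."""
    return pixel_size_mm * float(np.linalg.norm(
        project(target_3d, P_estimated) - project(target_3d, P_true)))
```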

  4. Significant acceleration of 2D-3D registration-based fusion of ultrasound and x-ray images by mesh-based DRR rendering

    NASA Astrophysics Data System (ADS)

    Kaiser, Markus; John, Matthias; Borsdorf, Anja; Mountney, Peter; Ionasec, Razvan; Nöttling, Alois; Kiefer, Philipp; Seeburger, Jörg; Neumuth, Thomas

    2013-03-01

    For transcatheter-based minimally invasive procedures in structural heart disease, ultrasound and X-ray are the two enabling imaging modalities. A live fusion of both real-time modalities can potentially improve the workflow and the catheter navigation by combining the excellent instrument imaging of X-ray with the high-quality soft tissue imaging of ultrasound. A recently published approach to fuse X-ray fluoroscopy with trans-esophageal echo (TEE) registers the ultrasound probe to X-ray images by a 2D-3D registration method which inherently provides a registration of ultrasound images to X-ray images. In this paper, we significantly accelerate the 2D-3D registration method in this context. The main novelty is to generate the projection images (DRR) of the 3D object not via volume ray-casting but instead via a fast rendering of triangular meshes. This is possible because, in the setting of TEE/X-ray fusion, the 3D geometry of the ultrasound probe is known in advance and its main components can be described by triangular meshes. We show that the new approach can achieve a speedup factor of up to 65 and does not affect the registration accuracy when used in conjunction with the gradient correlation similarity measure. The improvement is independent of the underlying registration optimizer. Based on the results, a TEE/X-ray fusion could be performed with a higher frame rate and a shorter time lag towards real-time registration performance. The approach could potentially accelerate other applications of 2D-3D registration, e.g. the registration of implant models with X-ray images.

  5. 3D reconstruction of 2D fluorescence histology images and registration with in vivo MR images: application in a rodent stroke model.

    PubMed

    Stille, Maik; Smith, Edward J; Crum, William R; Modo, Michel

    2013-09-30

    To validate and add value to non-invasive imaging techniques, the corresponding histology is required to establish biological correlates. We present an efficient, semi-automated image-processing pipeline that uses immunohistochemically stained sections to reconstruct a 3D brain volume from 2D histological images before registering these with the corresponding 3D in vivo magnetic resonance images (MRI). A multistep registration procedure that first aligns the "global" volume by using the centre of mass and then applies a rigid and affine alignment based on signal intensities is described. This technique was applied to a training set of three rat brain volumes before being validated on three normal brains. Application of the approach to register "abnormal" images from a rat model of stroke allowed the neurobiological correlates of the variations in the hyper-intense MRI signal intensity caused by infarction to be investigated. For evaluation, the corresponding anatomical landmarks in MR and histology were defined to measure the registration accuracy. A registration error of 0.249 mm (approximately one in-plane voxel dimension) was evident in healthy rat brains and of 0.323 mm in a rodent model of stroke. The proposed reconstruction and registration pipeline allowed for the precise analysis of non-invasive MRI and corresponding microstructural histological features in 3D. We were thus able to interrogate histology to deduce the cause of MRI signal variations in the lesion cavity and the peri-infarct area. PMID:23816399
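
    The first, "global" alignment step described above can be illustrated by matching intensity centres of mass, which yields the translation used to initialize the subsequent rigid and affine intensity-based registration. This is a simplified sketch; the function name and voxel-size handling are assumptions.

```python
import numpy as np
from scipy import ndimage


def centre_of_mass_offset(histology_volume, mr_volume, voxel_size_mm=(1.0, 1.0, 1.0)):
    """Translation (mm) that moves the histology centre of mass onto the MR centre of mass."""
    com_hist = np.array(ndimage.center_of_mass(histology_volume))
    com_mr = np.array(ndimage.center_of_mass(mr_volume))
    return (com_mr - com_hist) * np.asarray(voxel_size_mm)
```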

  6. Automatic localization of target vertebrae in spine surgery using fast CT-to-fluoroscopy (3D-2D) image registration

    NASA Astrophysics Data System (ADS)

    Otake, Y.; Schafer, S.; Stayman, J. W.; Zbijewski, W.; Kleinszig, G.; Graumann, R.; Khanna, A. J.; Siewerdsen, J. H.

    2012-02-01

    Localization of target vertebrae is an essential step in minimally invasive spine surgery, with conventional methods relying on "level counting" - i.e., manual counting of vertebrae under fluoroscopy starting from readily identifiable anatomy (e.g., the sacrum). The approach requires an undesirable amount of radiation and time and is prone to counting errors due to the similar appearance of vertebrae in projection images; wrong-level surgery occurs in 1 of every ~3000 cases. This paper proposes a method to automatically localize target vertebrae in x-ray projections using 3D-2D registration between preoperative CT (in which vertebrae are preoperatively labeled) and intraoperative fluoroscopy. The registration uses an intensity-based approach with a gradient-based similarity metric and the CMA-ES algorithm for optimization. Digitally reconstructed radiographs (DRRs) and a robust similarity metric are computed on the GPU to accelerate the process. Evaluation in clinical CT data included 5,000 PA and LAT projections randomly perturbed to simulate human variability in the setup of a mobile intraoperative C-arm. The method demonstrated 100% success for the PA view (projection error: 0.42mm) and 99.8% success for the LAT view (projection error: 0.37mm). Initial implementation on the GPU provided automatic target localization within about 3 s, with further improvement underway via multi-GPU implementation. The ability to automatically label vertebrae in fluoroscopy promises to streamline surgical workflow, improve patient safety, and reduce wrong-site surgeries, especially in large patients for whom manual methods are time-consuming and error-prone.

  7. Position tracking of moving liver lesion based on real-time registration between 2D ultrasound and 3D preoperative images

    SciTech Connect

    Weon, Chijun; Hyun Nam, Woo; Lee, Duhgoon; Ra, Jong Beom; Lee, Jae Young

    2015-01-15

    Purpose: Registration between 2D ultrasound (US) and 3D preoperative magnetic resonance (MR) (or computed tomography, CT) images has been studied recently for US-guided intervention. However, the existing techniques have some limits, either in the registration speed or the performance. The purpose of this work is to develop a real-time and fully automatic registration system between two intermodal images of the liver, and subsequently an indirect lesion positioning/tracking algorithm based on the registration result, for image-guided interventions. Methods: The proposed position tracking system consists of three stages. In the preoperative stage, the authors acquire several 3D preoperative MR (or CT) images at different respiratory phases. Based on the transformations obtained from nonrigid registration of the acquired 3D images, they then generate a 4D preoperative image along the respiratory phase. In the intraoperative preparatory stage, they properly attach a 3D US transducer to the patient’s body and fix its pose using a holding mechanism. They then acquire a couple of respiratory-controlled 3D US images. Via the rigid registration of these US images to the 3D preoperative images in the 4D image, the pose information of the fixed-pose 3D US transducer is determined with respect to the preoperative image coordinates. As feature(s) to use for the rigid registration, they may choose either internal liver vessels or the inferior vena cava. Since the latter is especially useful in patients with a diffuse liver disease, the authors newly propose using it. In the intraoperative real-time stage, they acquire 2D US images in real-time from the fixed-pose transducer. For each US image, they select candidates for its corresponding 2D preoperative slice from the 4D preoperative MR (or CT) image, based on the predetermined pose information of the transducer. The correct corresponding image is then found among those candidates via real-time 2D registration based on a

  8. Registration of 2D cardiac images to real-time 3D ultrasound volumes for 3D stress echocardiography

    NASA Astrophysics Data System (ADS)

    Leung, K. Y. Esther; van Stralen, Marijn; Voormolen, Marco M.; van Burken, Gerard; Nemes, Attila; ten Cate, Folkert J.; Geleijnse, Marcel L.; de Jong, Nico; van der Steen, Antonius F. W.; Reiber, Johan H. C.; Bosch, Johan G.

    2006-03-01

    Three-dimensional (3D) stress echocardiography is a novel technique for diagnosing cardiac dysfunction, by comparing wall motion of the left ventricle under different stages of stress. For quantitative comparison of this motion, it is essential to register the ultrasound data. We propose an intensity based rigid registration method to retrieve two-dimensional (2D) four-chamber (4C), two-chamber, and short-axis planes from the 3D data set acquired in the stress stage, using manually selected 2D planes in the rest stage as reference. The algorithm uses the Nelder-Mead simplex optimization to find the optimal transformation of one uniform scaling, three rotation, and three translation parameters. We compared registration using the SAD, SSD, and NCC metrics, performed on four resolution levels of a Gaussian pyramid. The registration's effectiveness was assessed by comparing the 3D positions of the registered apex and mitral valve midpoints and 4C direction with the manually selected results. The registration was tested on data from 20 patients. Best results were found using the NCC metric on data downsampled with factor two: mean registration errors were 8.1mm, 5.4mm, and 8.0° in the apex position, mitral valve position, and 4C direction respectively. The errors were close to the interobserver (7.1mm, 3.8mm, 7.4°) and intraobserver variability (5.2mm, 3.3mm, 7.0°), and better than the error before registration (9.4mm, 9.0mm, 9.9°). We demonstrated that the registration algorithm visually and quantitatively improves the alignment of rest and stress data sets, performing similar to manual alignment. This will improve automated analysis in 3D stress echocardiography.
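
    A minimal sketch of the Gaussian pyramid used for the coarse-to-fine search is given below; each level is the previous one low-pass filtered and downsampled by a factor of two. The smoothing sigma and the number of levels are assumptions rather than the study's exact settings.

```python
import numpy as np
from scipy import ndimage


def gaussian_pyramid(image, levels=4, sigma=1.0):
    """Coarse-to-fine pyramid: each level is smoothed and downsampled by a factor of two."""
    pyramid = [np.asarray(image, dtype=float)]
    for _ in range(levels - 1):
        smoothed = ndimage.gaussian_filter(pyramid[-1], sigma)
        pyramid.append(smoothed[::2, ::2])
    return pyramid  # pyramid[0] is full resolution, pyramid[-1] the coarsest


# Typical use: run the simplex search on pyramid[-1] first, then pass the recovered
# transform (with translations scaled by two) as the initial guess at the next finer level.
```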

  9. A comparison of the 3D kinematic measurements obtained by single-plane 2D-3D image registration and RSA.

    PubMed

    Muhit, Abdullah A; Pickering, Mark R; Ward, Tom; Scarvell, Jennie M; Smith, Paul N

    2010-01-01

    3D computed tomography (CT) to single-plane 2D fluoroscopy registration is an emerging technology for many clinical applications such as kinematic analysis of human joints and image-guided surgery. However, previous registration approaches have suffered from the inaccuracy of determining precise motion parameters for out-of-plane movements. In this paper we compare kinematic measurements obtained by a new 2D-3D registration algorithm with measurements provided by the gold standard Roentgen Stereo Analysis (RSA). In particular, we are interested in the out-of-plane translation and rotations which are difficult to measure precisely using a single plane approach. Our experimental results show that the standard deviation of the error for out-of-plane translation is 0.42 mm which compares favourably to RSA. It is also evident that our approach produces very similar flexion/extension, abduction/adduction and external knee rotation angles when compared to RSA. PMID:21097358

  10. Real-time intensity based 2D/3D registration using kV-MV image pairs for tumor motion tracking in image guided radiotherapy

    NASA Astrophysics Data System (ADS)

    Furtado, H.; Steiner, E.; Stock, M.; Georg, D.; Birkfellner, W.

    2014-03-01

    Intra-fractional respiratory motion during radiotherapy is one of the main sources of uncertainty in dose application, creating the need to extend the margins of the planning target volume (PTV). Real-time tumor motion tracking by 2D/3D registration using on-board kilo-voltage (kV) imaging can lead to a reduction of the PTV. One limitation of this technique when using one projection image is the inability to resolve motion along the imaging beam axis. We present a retrospective patient study to investigate the impact of paired portal mega-voltage (MV) and kV images on registration accuracy. We used data from eighteen patients suffering from non-small-cell lung cancer undergoing regular treatment at our center. For each patient we acquired a planning CT and sequences of kV and MV images during treatment. Our evaluation consisted of comparing the accuracy of motion tracking in 6 degrees of freedom (DOF) using the anterior-posterior (AP) kV sequence or the sequence of kV-MV image pairs. We use graphics processing unit rendering for real-time performance. Motion along the cranial-caudal direction could accurately be extracted when using only the kV sequence, but in the AP direction we obtained large errors. When using kV-MV pairs, the average error was reduced from 3.3 mm to 1.8 mm and the motion along AP was successfully extracted. The mean registration time was 190 ± 35 ms. Our evaluation shows that using kV-MV image pairs leads to improved motion extraction in 6 DOF. Therefore, this approach is suitable for accurate, real-time tumor motion tracking with a conventional LINAC.
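
    The benefit of adding a second, roughly orthogonal MV view can be illustrated with standard linear triangulation: given two projection matrices and the target's 2D detections, its 3D position, including the component along the kV beam axis, is fully determined. The sketch below is generic projective geometry, not the paper's 2D/3D registration code.

```python
import numpy as np


def triangulate(P_kv, P_mv, uv_kv, uv_mv):
    """Least-squares 3D point from two 3x4 projection matrices and the two 2D detections."""
    rows = []
    for P, (u, v) in ((P_kv, uv_kv), (P_mv, uv_mv)):
        rows.append(u * P[2] - P[0])
        rows.append(v * P[2] - P[1])
    _, _, vt = np.linalg.svd(np.vstack(rows))
    X = vt[-1]
    return X[:3] / X[3]   # homogeneous -> Euclidean 3D position
```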

  11. High-performance GPU-based rendering for real-time, rigid 2D/3D-image registration and motion prediction in radiation oncology

    PubMed Central

    Spoerk, Jakob; Gendrin, Christelle; Weber, Christoph; Figl, Michael; Pawiro, Supriyanto Ardjo; Furtado, Hugo; Fabri, Daniella; Bloch, Christoph; Bergmann, Helmar; Gröller, Eduard; Birkfellner, Wolfgang

    2012-01-01

    A common problem in image-guided radiation therapy (IGRT) of lung cancer as well as other malignant diseases is the compensation of periodic and aperiodic motion during dose delivery. Modern systems for image-guided radiation oncology allow for the acquisition of cone-beam computed tomography data in the treatment room as well as the acquisition of planar radiographs during the treatment. A mid-term research goal is the compensation of tumor target volume motion by 2D/3D registration. In 2D/3D registration, spatial information on organ location is derived by an iterative comparison of perspective volume renderings, so-called digitally rendered radiographs (DRR) from computed tomography volume data, and planar reference x-rays. Currently, this rendering process is very time consuming, and real-time registration, which should at least provide data on organ position in less than a second, has not come into existence. We present two GPU-based rendering algorithms which generate a DRR of 512 × 512 pixels size from a CT dataset of 53 MB size at a pace of almost 100 Hz. This rendering rate is feasible by applying a number of algorithmic simplifications which range from alternative volume-driven rendering approaches – namely so-called wobbled splatting – to sub-sampling of the DRR-image by means of specialized raycasting techniques. Furthermore, general purpose graphics processing unit (GPGPU) programming paradigms were consequently utilized. Rendering quality and performance as well as the influence on the quality and performance of the overall registration process were measured and analyzed in detail. The results show that both methods are competitive and pave the way for fast motion compensation by rigid and possibly even non-rigid 2D/3D registration and, beyond that, adaptive filtering of motion models in IGRT. PMID:21782399
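
    The operation these GPU renderers accelerate is the DRR itself: line integrals of attenuation through the CT volume. The CPU sketch below uses a parallel-beam approximation (rotate the volume to the viewing angle, then sum along one axis), which is a deliberate simplification of the perspective rendering described above.

```python
import numpy as np
from scipy import ndimage


def drr_parallel(ct_volume, view_angle_deg, voxel_size_mm=1.0):
    """Parallel-beam DRR: rotate the (z, y, x) volume about z, then integrate along y."""
    rotated = ndimage.rotate(ct_volume, view_angle_deg, axes=(1, 2),
                             reshape=False, order=1)
    return rotated.sum(axis=1) * voxel_size_mm  # 2D map of line integrals


if __name__ == "__main__":
    vol = np.zeros((32, 64, 64))
    vol[10:22, 20:44, 20:44] = 0.02          # toy attenuation block
    print(drr_parallel(vol, 30.0).shape)     # (32, 64)
```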

  12. Fully automated 2D-3D registration and verification.

    PubMed

    Varnavas, Andreas; Carrell, Tom; Penney, Graeme

    2015-12-01

    Clinical application of 2D-3D registration technology often requires a significant amount of human interaction during initialisation and result verification. This is one of the main barriers to more widespread clinical use of this technology. We propose novel techniques for automated initial pose estimation of the 3D data and verification of the registration result, and show how these techniques can be combined to enable fully automated 2D-3D registration, particularly in the case of a vertebra based system. The initialisation method is based on preoperative computation of 2D templates over a wide range of 3D poses. These templates are used to apply the Generalised Hough Transform to the intraoperative 2D image and the sought 3D pose is selected with the combined use of the generated accumulator arrays and a Gradient Difference Similarity Measure. On the verification side, two algorithms are proposed: one using normalised features based on the similarity value and the other based on the pose agreement between multiple vertebra based registrations. The proposed methods are employed here for CT to fluoroscopy registration and are trained and tested with data from 31 clinical procedures with 417 low dose, i.e. low quality, high noise interventional fluoroscopy images. When similarity value based verification is used, the fully automated system achieves a 95.73% correct registration rate, whereas a no registration result is produced for the remaining 4.27% of cases (i.e. incorrect registration rate is 0%). The system also automatically detects input images outside its operating range. PMID:26387052

  13. Efficient framework for deformable 2D-3D registration

    NASA Astrophysics Data System (ADS)

    Fluck, Oliver; Aharon, Shmuel; Khamene, Ali

    2008-03-01

    Using 2D-3D registration it is possible to extract the body transformation between the coordinate systems of X-ray and volumetric CT images. Our initial motivation is the improvement of accuracy of external beam radiation therapy, an effective method for treating cancer, where CT data play a central role in radiation treatment planning. Rigid body transformation is used to compute the correct patient setup. The drawback of such approaches is that the rigidity assumption on the imaged object is not valid for most of the patient cases, mainly due to respiratory motion. In the present work, we address this limitation by proposing a flexible framework for deformable 2D-3D registration consisting of a learning phase incorporating 4D CT data sets and hardware accelerated free form DRR generation, 2D motion computation, and 2D-3D back projection.

  14. 3D-2D image registration for target localization in spine surgery: investigation of similarity metrics providing robustness to content mismatch.

    PubMed

    De Silva, T; Uneri, A; Ketcha, M D; Reaungamornrat, S; Kleinszig, G; Vogt, S; Aygun, N; Lo, S-F; Wolinsky, J-P; Siewerdsen, J H

    2016-04-21

    In image-guided spine surgery, robust three-dimensional to two-dimensional (3D-2D) registration of preoperative computed tomography (CT) and intraoperative radiographs can be challenged by the image content mismatch associated with the presence of surgical instrumentation and implants as well as soft-tissue resection or deformation. This work investigates image similarity metrics in 3D-2D registration offering improved robustness against mismatch, thereby improving performance and reducing or eliminating the need for manual masking. The performance of four gradient-based image similarity metrics (gradient information (GI), gradient correlation (GC), gradient information with linear scaling (GS), and gradient orientation (GO)) with a multi-start optimization strategy was evaluated in an institutional review board-approved retrospective clinical study using 51 preoperative CT images and 115 intraoperative mobile radiographs. Registrations were tested with and without polygonal masks as a function of the number of multistarts employed during optimization. Registration accuracy was evaluated in terms of the projection distance error (PDE) and assessment of failure modes (PDE  >  30 mm) that could impede reliable vertebral level localization. With manual polygonal masking and 200 multistarts, the GC and GO metrics exhibited robust performance with 0% gross failures and median PDE  <  6.4 mm (±4.4 mm interquartile range (IQR)) and a median runtime of 84 s (plus upwards of 1-2 min for manual masking). Excluding manual polygonal masks and decreasing the number of multistarts to 50 caused the GC-based registration to fail at a rate of  >14%; however, GO maintained robustness with a 0% gross failure rate. Overall, the GI, GC, and GS metrics were susceptible to registration errors associated with content mismatch, but GO provided robust registration (median PDE  =  5.5 mm, 2.6 mm IQR) without manual masking and with an improved

  15. 3D-2D image registration for target localization in spine surgery: investigation of similarity metrics providing robustness to content mismatch

    NASA Astrophysics Data System (ADS)

    De Silva, T.; Uneri, A.; Ketcha, M. D.; Reaungamornrat, S.; Kleinszig, G.; Vogt, S.; Aygun, N.; Lo, S.-F.; Wolinsky, J.-P.; Siewerdsen, J. H.

    2016-04-01

    In image-guided spine surgery, robust three-dimensional to two-dimensional (3D-2D) registration of preoperative computed tomography (CT) and intraoperative radiographs can be challenged by the image content mismatch associated with the presence of surgical instrumentation and implants as well as soft-tissue resection or deformation. This work investigates image similarity metrics in 3D-2D registration offering improved robustness against mismatch, thereby improving performance and reducing or eliminating the need for manual masking. The performance of four gradient-based image similarity metrics (gradient information (GI), gradient correlation (GC), gradient information with linear scaling (GS), and gradient orientation (GO)) with a multi-start optimization strategy was evaluated in an institutional review board-approved retrospective clinical study using 51 preoperative CT images and 115 intraoperative mobile radiographs. Registrations were tested with and without polygonal masks as a function of the number of multistarts employed during optimization. Registration accuracy was evaluated in terms of the projection distance error (PDE) and assessment of failure modes (PDE  >  30 mm) that could impede reliable vertebral level localization. With manual polygonal masking and 200 multistarts, the GC and GO metrics exhibited robust performance with 0% gross failures and median PDE  <  6.4 mm (±4.4 mm interquartile range (IQR)) and a median runtime of 84 s (plus upwards of 1-2 min for manual masking). Excluding manual polygonal masks and decreasing the number of multistarts to 50 caused the GC-based registration to fail at a rate of  >14%; however, GO maintained robustness with a 0% gross failure rate. Overall, the GI, GC, and GS metrics were susceptible to registration errors associated with content mismatch, but GO provided robust registration (median PDE  =  5.5 mm, 2.6 mm IQR) without manual masking and with an improved

  16. 3D–2D image registration for target localization in spine surgery: investigation of similarity metrics providing robustness to content mismatch

    PubMed Central

    De Silva, T; Uneri, A; Ketcha, M D; Reaungamornrat, S; Kleinszig, G; Vogt, S; Aygun, N; Lo, S-F; Wolinsky, J-P; Siewerdsen, J H

    2016-01-01

    In image-guided spine surgery, robust three-dimensional to two-dimensional (3D–2D) registration of preoperative computed tomography (CT) and intraoperative radiographs can be challenged by the image content mismatch associated with the presence of surgical instrumentation and implants as well as soft-tissue resection or deformation. This work investigates image similarity metrics in 3D–2D registration offering improved robustness against mismatch, thereby improving performance and reducing or eliminating the need for manual masking. The performance of four gradient-based image similarity metrics (gradient information (GI), gradient correlation (GC), gradient information with linear scaling (GS), and gradient orientation (GO)) with a multi-start optimization strategy was evaluated in an institutional review board-approved retrospective clinical study using 51 preoperative CT images and 115 intraoperative mobile radiographs. Registrations were tested with and without polygonal masks as a function of the number of multistarts employed during optimization. Registration accuracy was evaluated in terms of the projection distance error (PDE) and assessment of failure modes (PDE > 30 mm) that could impede reliable vertebral level localization. With manual polygonal masking and 200 multistarts, the GC and GO metrics exhibited robust performance with 0% gross failures and median PDE < 6.4 mm (±4.4 mm interquartile range (IQR)) and a median runtime of 84 s (plus upwards of 1–2 min for manual masking). Excluding manual polygonal masks and decreasing the number of multistarts to 50 caused the GC-based registration to fail at a rate of >14%; however, GO maintained robustness with a 0% gross failure rate. Overall, the GI, GC, and GS metrics were susceptible to registration errors associated with content mismatch, but GO provided robust registration (median PDE = 5.5 mm, 2.6 mm IQR) without manual masking and with an improved runtime (29.3 s). The GO metric improved the
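
    A rough sketch of a gradient-orientation (GO) style similarity is shown below: it scores only the agreement in gradient direction between radiograph and DRR, ignoring magnitude, which is what confers robustness to content mismatch. The magnitude threshold and angular weighting are our own assumptions, not the authors' exact metric.

```python
import numpy as np


def gradient_orientation_similarity(fixed, moving, mag_threshold=1e-3):
    """Score agreement in gradient *direction* only, over pixels with usable gradients."""
    fy, fx = np.gradient(fixed.astype(float))
    my, mx = np.gradient(moving.astype(float))
    mag_f, mag_m = np.hypot(fx, fy), np.hypot(mx, my)
    valid = (mag_f > mag_threshold) & (mag_m > mag_threshold)
    cos_a = (fx * mx + fy * my)[valid] / (mag_f * mag_m)[valid]
    # |cos| so parallel and anti-parallel edges both count as orientation agreement.
    return float(np.mean(np.abs(np.clip(cos_a, -1.0, 1.0))))
```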

  17. Intraoperative Image-based Multiview 2D/3D Registration for Image-Guided Orthopaedic Surgery: Incorporation of Fiducial-Based C-Arm Tracking and GPU-Acceleration

    PubMed Central

    Armand, Mehran; Armiger, Robert S.; Kutzer, Michael D.; Basafa, Ehsan; Kazanzides, Peter; Taylor, Russell H.

    2012-01-01

    Intraoperative patient registration may significantly affect the outcome of image-guided surgery (IGS). Image-based registration approaches have several advantages over the currently dominant point-based direct contact methods and are used in some industry solutions in image-guided radiation therapy with fixed X-ray gantries. However, technical challenges including geometric calibration and computational cost have precluded their use with mobile C-arms for IGS. We propose a 2D/3D registration framework for intraoperative patient registration using a conventional mobile X-ray imager combining fiducial-based C-arm tracking and graphics processing unit (GPU)-acceleration. The two-stage framework 1) acquires X-ray images and estimates relative pose between the images using a custom-made in-image fiducial, and 2) estimates the patient pose using intensity-based 2D/3D registration. Experimental validations using a publicly available gold standard dataset, a plastic bone phantom and cadaveric specimens have been conducted. The mean target registration error (mTRE) was 0.34 ± 0.04 mm (success rate: 100%, registration time: 14.2 s) for the phantom with two images 90° apart, and 0.99 ± 0.41 mm (81%, 16.3 s) for the cadaveric specimen with images 58.5° apart. The experimental results showed the feasibility of the proposed registration framework as a practical alternative for IGS routines. PMID:22113773

  18. A frequency-based approach to locate common structure for 2D-3D intensity-based registration of setup images in prostate radiotherapy

    SciTech Connect

    Munbodh, Reshma; Chen Zhe; Jaffray, David A.; Moseley, Douglas J.; Knisely, Jonathan P. S.; Duncan, James S.

    2007-07-15

    In many radiotherapy clinics, geometric uncertainties in the delivery of 3D conformal radiation therapy and intensity modulated radiation therapy of the prostate are reduced by aligning the patient's bony anatomy in the planning 3D CT to corresponding bony anatomy in 2D portal images acquired before every treatment fraction. In this paper, we seek to determine if there is a frequency band within the portal images and the digitally reconstructed radiographs (DRRs) of the planning CT in which bony anatomy predominates over non-bony anatomy such that portal images and DRRs can be suitably filtered to achieve high registration accuracy in an automated 2D-3D single portal intensity-based registration framework. Two similarity measures, mutual information and the Pearson correlation coefficient, were tested on carefully collected gold-standard data consisting of a kilovoltage cone-beam CT (CBCT) and megavoltage portal images in the anterior-posterior (AP) view of an anthropomorphic phantom acquired under clinical conditions at known poses, and on patient data. It was found that filtering the portal images and DRRs during the registration considerably improved registration performance. Without filtering, the registration did not always converge, while with filtering it always converged to an accurate solution. For the pose-determination experiments conducted on the anthropomorphic phantom with the correlation coefficient, the mean (and standard deviation) of the absolute errors in recovering each of the six transformation parameters were θx: 0.18 (0.19)°, θy: 0.04 (0.04)°, θz: 0.04 (0.02)°, tx: 0.14 (0.15) mm, ty: 0.09 (0.05) mm, and tz: 0.49 (0.40) mm. The mutual information-based registration with filtered images also resulted in similarly small errors. For the patient data, visual inspection of the superimposed registered images showed that they were correctly aligned in all instances. The results presented in this
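
    The filtering strategy described above can be sketched as a difference-of-Gaussians band-pass applied to both the portal image and the DRR before computing the Pearson correlation, so that the similarity is driven by bony edges rather than low-frequency soft-tissue content. The cutoff sigmas below are placeholders, not the frequency band identified in the paper.

```python
import numpy as np
from scipy import ndimage


def bandpass(image, sigma_low=1.0, sigma_high=8.0):
    """Difference-of-Gaussians band-pass: keep structure between the two cutoffs."""
    img = image.astype(float)
    return ndimage.gaussian_filter(img, sigma_low) - ndimage.gaussian_filter(img, sigma_high)


def filtered_correlation(portal_image, drr, **band):
    """Pearson correlation of the band-pass filtered portal image and DRR."""
    a = bandpass(portal_image, **band).ravel()
    b = bandpass(drr, **band).ravel()
    return float(np.corrcoef(a, b)[0, 1])
```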

  19. Image fusion of Ultrasound Computer Tomography volumes with X-ray mammograms using a biomechanical model based 2D/3D registration.

    PubMed

    Hopp, T; Duric, N; Ruiter, N V

    2015-03-01

    Ultrasound Computer Tomography (USCT) is a promising breast imaging modality under development. Comparison to a standard method like mammography is essential for further development. Due to significant differences in image dimensionality and compression state of the breast, correlating USCT images and X-ray mammograms is challenging. In this paper we present a 2D/3D registration method to improve the spatial correspondence and allow direct comparison of the images. It is based on biomechanical modeling of the breast and simulation of the mammographic compression. We investigate the effect of including patient-specific material parameters estimated automatically from USCT images. The method was systematically evaluated using numerical phantoms and in-vivo data. The average registration accuracy using the automated registration was 11.9mm. Based on the registered images a method for analysis of the diagnostic value of the USCT images was developed and initially applied to analyze sound speed and attenuation images based on X-ray mammograms as ground truth. Combining sound speed and attenuation allows differentiating lesions from surrounding tissue. Overlaying this information on mammograms, combines quantitative and morphological information for multimodal diagnosis. PMID:25456144

  20. A novel approach for a 2D/3D image registration routine for medical tool navigation in minimally invasive vascular interventions.

    PubMed

    Schwerter, Michael; Lietzmann, Florian; Schad, Lothar R

    2016-09-01

    Minimally invasive interventions are frequently aided by 2D projective image guidance. To facilitate the navigation of medical tools within the patient, information from preoperative 3D images can supplement interventional data. This work describes a novel approach to perform a 3D CT data registration to a single interventional native fluoroscopic frame. The goal of this procedure is to recover and visualize a current 2D interventional tool position in its corresponding 3D dataset. A dedicated routine was developed and tested on a phantom. The 3D position of a guidewire inserted into the phantom could successfully be reconstructed for varying 2D image acquisition geometries. The scope of the routine includes projecting the CT data into the plane of the fluoroscopy. A subsequent registration of the real and virtual projections is performed with an accuracy within the range of 1.16±0.17mm for fixed landmarks. The interventional tool is extracted from the fluoroscopy and matched to the corresponding part of the projected and transformed arterial vasculature. A root mean square error of up to 0.56mm for matched point pairs is reached. The desired 3D view is provided by backprojecting the matched guidewire through the CT array. Due to its potential to reduce patient dose and treatment times, the proposed routine has the capability of reducing patient stress at lower overall treatment costs. PMID:27157275

  1. A computerized framework for monitoring four-dimensional dose distributions during stereotactic body radiation therapy using a portal dose image-based 2D/3D registration approach.

    PubMed

    Nakamoto, Takahiro; Arimura, Hidetaka; Nakamura, Katsumasa; Shioyama, Yoshiyuki; Mizoguchi, Asumi; Hirose, Taka-Aki; Honda, Hiroshi; Umezu, Yoshiyuki; Nakamura, Yasuhiko; Hirata, Hideki

    2015-03-01

    A computerized framework for monitoring four-dimensional (4D) dose distributions during stereotactic body radiation therapy based on a portal dose image (PDI)-based 2D/3D registration approach has been proposed in this study. Using the PDI-based registration approach, simulated 4D "treatment" CT images were derived from the deformation of 3D planning CT images so that a 2D planning PDI could be similar to a 2D dynamic clinical PDI at a breathing phase. The planning PDI was calculated by applying a dose calculation algorithm (a pencil beam convolution algorithm) to the geometry of the planning CT image and a virtual water equivalent phantom. The dynamic clinical PDIs were estimated from electronic portal imaging device (EPID) dynamic images including breathing phase data obtained during a treatment. The parameters of the affine transformation matrix were optimized based on an objective function and a gamma pass rate using a Levenberg-Marquardt (LM) algorithm. The proposed framework was applied to the EPID dynamic images of ten lung cancer patients, which included 183 frames (mean: 18.3 per patient). The 4D dose distributions during the treatment time were successfully obtained by applying the dose calculation algorithm to the simulated 4D "treatment" CT images. The mean±standard deviation (SD) of the percentage errors between the prescribed dose and the estimated dose at an isocenter for all cases was 3.25±4.43%. The maximum error for the ten cases was 14.67% (prescribed dose: 1.50Gy, estimated dose: 1.72Gy), and the minimum error was 0.00%. The proposed framework could be feasible for monitoring the 4D dose distribution and dose errors within a patient's body during treatment. PMID:25592290

  2. Interactive initialization of 2D/3D rigid registration

    SciTech Connect

    Gong, Ren Hui; Güler, Özgür; Kürklüoglu, Mustafa; Lovejoy, John; Yaniv, Ziv

    2013-12-15

    Purpose: Registration is one of the key technical components in an image-guided navigation system. A large number of 2D/3D registration algorithms have been previously proposed, but have not been able to transition into clinical practice. The authors identify the primary reason for the lack of adoption with the prerequisite for a sufficiently accurate initial transformation, mean target registration error of about 10 mm or less. In this paper, the authors present two interactive initialization approaches that provide the desired accuracy for x-ray/MR and x-ray/CT registration in the operating room setting. Methods: The authors have developed two interactive registration methods based on visual alignment of a preoperative image, MR, or CT to intraoperative x-rays. In the first approach, the operator uses a gesture based interface to align a volume rendering of the preoperative image to multiple x-rays. The second approach uses a tracked tool available as part of a navigation system. Preoperatively, a virtual replica of the tool is positioned next to the anatomical structures visible in the volumetric data. Intraoperatively, the physical tool is positioned in a similar manner and subsequently used to align a volume rendering to the x-ray images using an augmented reality (AR) approach. Both methods were assessed using three publicly available reference data sets for 2D/3D registration evaluation. Results: In the authors' experiments, the authors show that for x-ray/MR registration, the gesture based method resulted in a mean target registration error (mTRE) of 9.3 ± 5.0 mm with an average interaction time of 146.3 ± 73.0 s, and the AR-based method had mTREs of 7.2 ± 3.2 mm with interaction times of 44 ± 32 s. For x-ray/CT registration, the gesture based method resulted in a mTRE of 7.4 ± 5.0 mm with an average interaction time of 132.1 ± 66.4 s, and the AR-based method had mTREs of 8.3 ± 5.0 mm with interaction times of 58 ± 52 s. Conclusions: Based on the

  3. SU-E-J-13: Six Degree of Freedom Image Fusion Accuracy for Cranial Target Localization On the Varian Edge Stereotactic Radiosurgery System: Comparison Between 2D/3D and KV CBCT Image Registration

    SciTech Connect

    Xu, H; Song, K; Chetty, I; Kim, J; Wen, N

    2015-06-15

    Purpose: To determine the 6-degree-of-freedom systematic deviations between 2D/3D and CBCT image registration with various imaging setups and fusion algorithms on the Varian Edge Linac. Methods: An anthropomorphic head phantom with radio-opaque targets embedded was scanned with CT slice thicknesses of 0.8, 1, 2, and 3mm. The 6 DOF systematic errors were assessed by comparing 2D/3D (kV/MV with CT) with 3D/3D (CBCT with CT) image registrations with different offset positions, similarity measures, image filters, and CBCT slice thicknesses (1 and 2 mm). The 2D/3D registration accuracy of 51 fractions for 26 cranial SRS patients was also evaluated by analyzing 2D/3D pre-treatment verification taken after 3D/3D image registrations. Results: The systematic deviations of 2D/3D image registration using kV-kV, MV-kV, and MV-MV image pairs were within ±0.3mm and ±0.3° for translations and rotations with 95% confidence interval (CI) for a reference CT with 0.8 mm slice thickness. No significant difference (P>0.05) in target localization was observed between 0.8mm, 1mm, and 2mm CT slice thicknesses with CBCT slice thicknesses of 1mm and 2mm. With 3mm CT slice thickness, both 2D/3D and 3D/3D registrations performed less accurately in the longitudinal direction than with thinner CT slice thicknesses (0.60±0.12mm and 0.63±0.07mm off, respectively). Using a content filter and using the pattern intensity similarity measure instead of mutual information improved the 2D/3D registration accuracy significantly (P=0.02 and P=0.01, respectively). For the patient study, means and standard deviations of residual errors were 0.09±0.32mm, −0.22±0.51mm and −0.07±0.32mm in VRT, LNG and LAT directions, respectively, and 0.12°±0.46°, −0.12°±0.39° and 0.06°±0.28° in RTN, PITCH, and ROLL directions, respectively. 95% CIs of translational and rotational deviations were comparable to those in the phantom study. Conclusion: 2D/3D image registration provided on the Varian Edge radiosurgery, 6 DOF

  4. FIRE: an open-software suite for real-time 2D/3D image registration for image guided radiotherapy research

    NASA Astrophysics Data System (ADS)

    Furtado, H.; Gendrin, C.; Spoerk, J.; Steiner, E.; Underwood, T.; Kuenzler, T.; Georg, D.; Birkfellner, W.

    2016-03-01

    Radiotherapy treatments have changed at a tremendously rapid pace. The dose delivered to the tumor has escalated while organs at risk (OARs) are better spared. The impact of tumor motion during dose delivery has become greater due to very steep dose gradients. Intra-fractional tumor motion has to be managed adequately to reduce errors in dose delivery. For tumors with large motion, such as tumors in the lung, tracking is an approach that can reduce positional uncertainty. Tumor tracking approaches range from purely image-intensity-based techniques to motion estimation based on surrogate tracking. Research efforts are often based on custom-designed software platforms, which take considerable time and effort to develop. To address this challenge we have developed an open software platform especially focused on tumor motion management. FLIRT is a freely available open-source software platform. The core method for tumor tracking is purely intensity-based 2D/3D registration. The platform is written in C++ using the Qt framework for the user interface. The performance-critical methods are implemented on the graphics processor using the CUDA extension. One registration can be as fast as 90 ms (11 Hz). This is suitable for tracking tumors moving due to respiration (~0.3 Hz) or heartbeat (~1 Hz). Apart from focusing on high performance, the platform is designed to be flexible and easy to use. Current use cases include tracking feasibility studies, patient positioning, and method validation. Such a framework has the potential to enable the research community to rapidly perform patient studies or try new methods.

  5. 2D/3D registration algorithm for lung brachytherapy

    SciTech Connect

    Zvonarev, P. S.; Farrell, T. J.; Hunter, R.; Wierzbicki, M.; Hayward, J. E.; Sur, R. K.

    2013-02-15

    Purpose: A 2D/3D registration algorithm is proposed for registering orthogonal x-ray images with a diagnostic CT volume for high dose rate (HDR) lung brachytherapy. Methods: The algorithm utilizes a rigid registration model based on a pixel/voxel intensity matching approach. To achieve accurate registration, a robust similarity measure combining normalized mutual information, image gradient, and intensity difference was developed. The algorithm was validated using a simple body and anthropomorphic phantoms. Transfer catheters were placed inside the phantoms to simulate the unique image features observed during treatment. The algorithm sensitivity to various degrees of initial misregistration and to the presence of foreign objects, such as ECG leads, was evaluated. Results: The mean registration error was 2.2 and 1.9 mm for the simple body and anthropomorphic phantoms, respectively. The error was comparable to the interoperator catheter digitization error of 1.6 mm. Preliminary analysis of data acquired from four patients indicated a mean registration error of 4.2 mm. Conclusions: Results obtained using the proposed algorithm are clinically acceptable especially considering the complications normally encountered when imaging during lung HDR brachytherapy.
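
    The robust similarity measure described above blends normalized mutual information, an image-gradient term, and an intensity-difference term. The sketch below is a minimal illustration of such a combination, assuming equal weights and simple per-pixel definitions; the exact formulation and weighting used in the paper are not specified here.

```python
import numpy as np

def normalized_mutual_information(a, b, bins=64):
    """NMI = (H(A) + H(B)) / H(A, B), computed from a joint histogram."""
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = hist / hist.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    hx = -np.sum(px[px > 0] * np.log(px[px > 0]))
    hy = -np.sum(py[py > 0] * np.log(py[py > 0]))
    hxy = -np.sum(pxy[pxy > 0] * np.log(pxy[pxy > 0]))
    return (hx + hy) / hxy

def gradient_correlation(a, b):
    """Mean Pearson correlation of the horizontal and vertical gradient images."""
    terms = []
    for axis in (0, 1):
        ga, gb = np.gradient(a, axis=axis), np.gradient(b, axis=axis)
        terms.append(np.corrcoef(ga.ravel(), gb.ravel())[0, 1])
    return float(np.mean(terms))

def combined_similarity(xray, drr, weights=(1.0, 1.0, 1.0)):
    """Hypothetical combination of NMI, gradient, and intensity terms (higher is better)."""
    intensity_term = -np.mean((xray - drr) ** 2)
    return (weights[0] * normalized_mutual_information(xray, drr)
            + weights[1] * gradient_correlation(xray, drr)
            + weights[2] * intensity_term)
```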

  6. Kinematic Analysis of Healthy Hips during Weight-Bearing Activities by 3D-to-2D Model-to-Image Registration Technique

    PubMed Central

    Hara, Daisuke; Nakashima, Yasuharu; Hamai, Satoshi; Higaki, Hidehiko; Ikebe, Satoru; Shimoto, Takeshi; Hirata, Masanobu; Kanazawa, Masayuki; Kohno, Yusuke; Iwamoto, Yukihide

    2014-01-01

    Dynamic hip kinematics during weight-bearing activities were analyzed for six healthy subjects. Continuous X-ray images of gait, chair-rising, squatting, and twisting were taken using a flat-panel X-ray detector. Digitally reconstructed radiographic images were used for the 3D-to-2D model-to-image registration technique. The root-mean-square errors associated with tracking the pelvis and femur were less than 0.3 mm and 0.3° for translations and rotations, respectively. For gait, chair-rising, and squatting, the maximum hip flexion angles averaged 29.6°, 81.3°, and 102.4°, respectively. The pelvis was tilted anteriorly by around 4.4° on average during the full gait cycle. For chair-rising and squatting, the maximum absolute values of anterior/posterior pelvic tilt averaged 12.4°/11.7° and 10.7°/10.8°, respectively. Hip flexion peaked partway through the movement due to further anterior pelvic tilt during both chair-rising and squatting. For twisting, the maximum absolute value of hip internal/external rotation averaged 29.2°/30.7°. This study revealed activity-dependent kinematics of healthy hip joints with coordinated pelvic and femoral dynamic movements. Kinematic data during activities of daily living may provide important insight for evaluating the kinematics of pathological and reconstructed hips. PMID:25506056

  7. Model-based measurement of food portion size for image-based dietary assessment using 3D/2D registration

    NASA Astrophysics Data System (ADS)

    Chen, Hsin-Chen; Jia, Wenyan; Yue, Yaofeng; Li, Zhaoxin; Sun, Yung-Nien; Fernstrom, John D.; Sun, Mingui

    2013-10-01

    Dietary assessment is important in health maintenance and intervention in many chronic conditions, such as obesity, diabetes, and cardiovascular disease. However, there is currently a lack of convenient methods for measuring the volume of food (portion size) in real-life settings. We present a computational method to estimate food volume from a single photographic image of food contained on a typical dining plate. First, we calculate the food location with respect to a 3D camera coordinate system using the plate as a scale reference. Then, the food is segmented automatically from the background in the image. Adaptive thresholding and snake modeling are implemented based on several image features, such as color contrast, regional color homogeneity, and curve bending degree. Next, a 3D model representing the general shape of the food (e.g., a cylinder, a sphere, etc.) is selected from a pre-constructed shape model library. The position, orientation, and scale of the selected shape model are determined by registering the projected 3D model and the food contour in the image, where the properties of the reference are used as constraints. Experimental results using various realistically shaped foods with known volumes demonstrated satisfactory performance of our image-based food volume measurement method, even when the 3D geometric surface of the food is not completely represented in the input image.

  8. Oriented Gaussian mixture models for nonrigid 2D/3D coronary artery registration.

    PubMed

    Baka, N; Metz, C T; Schultz, C J; van Geuns, R-J; Niessen, W J; van Walsum, T

    2014-05-01

    2D/3D registration of patient vasculature from preinterventional computed tomography angiography (CTA) to interventional X-ray angiography is of interest to improve guidance in percutaneous coronary interventions. In this paper we present a novel feature based 2D/3D registration framework, that is based on probabilistic point correspondences, and show its usefulness on aligning 3D coronary artery centerlines derived from CTA images with their 2D projection derived from interventional X-ray angiography. The registration framework is an extension of the Gaussian mixture model (GMM) based point-set registration to the 2D/3D setting, with a modified distance metric. We also propose a way to incorporate orientation in the registration, and show its added value for artery registration on patient datasets as well as in simulation experiments. The oriented GMM registration achieved a median accuracy of 1.06 mm, with a convergence rate of 81% for nonrigid vessel centerline registration on 12 patient datasets, using a statistical shape model. The method thereby outperformed the iterative closest point algorithm, the GMM registration without orientation, and two recently published methods on 2D/3D coronary artery registration. PMID:24770908
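
    As an illustration of the idea of GMM-based 2D/3D point-set registration, the sketch below projects 3D centerline points through a simple pinhole model and scores a rigid pose by a Gaussian-mixture likelihood of the 2D vessel points. The camera model, the isotropic kernel width, and the optimizer call are illustrative assumptions; the paper's orientation term and statistical shape model are not reproduced.

```python
import numpy as np
from scipy.optimize import minimize

def project(points3d, rx, ry, rz, t, focal=1000.0):
    """Rotate (Euler angles), translate, and pinhole-project 3D points to 2D."""
    cx, sx = np.cos(rx), np.sin(rx)
    cy, sy = np.cos(ry), np.sin(ry)
    cz, sz = np.cos(rz), np.sin(rz)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    p = points3d @ (Rz @ Ry @ Rx).T + t
    return focal * p[:, :2] / p[:, 2:3]          # assumes points lie in front of the source

def gmm_cost(params, pts3d, pts2d, sigma=2.0):
    """Negative log-likelihood of 2D points under a GMM centred on projected 3D points."""
    proj = project(pts3d, params[0], params[1], params[2], params[3:6])
    d2 = ((pts2d[:, None, :] - proj[None, :, :]) ** 2).sum(axis=-1)
    lik = np.exp(-d2 / (2.0 * sigma ** 2)).mean(axis=1)
    return -np.log(lik + 1e-12).sum()

# Illustrative usage (centerline3d and vessel2d are hypothetical point arrays):
# result = minimize(gmm_cost, np.zeros(6), args=(centerline3d, vessel2d), method="Powell")
```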

  9. Device and methods for "gold standard" registration of clinical 3D and 2D cerebral angiograms

    NASA Astrophysics Data System (ADS)

    Madan, Hennadii; Likar, Boštjan; Pernuš, Franjo; Špiclin, Žiga

    2015-03-01

    Translation of novel and existing 3D-2D image registration methods into clinical image-guidance systems is limited by the lack of objective validation on clinical image datasets. The main reason is that, besides the calibration of the 2D imaging system, a reference or "gold standard" registration is very difficult to obtain on clinical image datasets. In the context of cerebral endovascular image-guided interventions (EIGIs), we present a calibration device in the form of a headband with integrated fiducial markers and, secondly, propose an automated pipeline comprising 3D and 2D image processing, analysis, and annotation steps, the result of which is a retrospective calibration of the 2D imaging system and an optimal, i.e., "gold standard", registration of 3D and 2D images. The device and methods were used to create the "gold standard" on 15 datasets of 3D and 2D cerebral angiograms, where each dataset was acquired on a patient undergoing EIGI for either aneurysm coiling or embolization of an arteriovenous malformation. The use of the device integrated seamlessly into the clinical workflow of EIGI, while the automated pipeline eliminated all manual input and interactive image processing, analysis, or annotation. In this way, the time to obtain the "gold standard" was reduced from 30 minutes to less than one minute, and the "gold standard" 3D-2D registration on all 15 datasets of cerebral angiograms was obtained with sub-0.1 mm accuracy.

  10. Reconstruction of 3D lung models from 2D planning data sets for Hodgkin's lymphoma patients using combined deformable image registration and navigator channels

    SciTech Connect

    Ng, Angela; Nguyen, Thao-Nguyen; Moseley, Joanne L.; Hodgson, David C.; Sharpe, Michael B.; Brock, Kristy K.

    2010-03-15

    Purpose: Late complications (cardiac toxicities, secondary lung, and breast cancer) remain a significant concern in the radiation treatment of Hodgkin's lymphoma (HL). To address this issue, predictive dose-risk models could potentially be used to estimate radiotherapy-related late toxicities. This study investigates the use of deformable image registration (DIR) and navigator channels (NCs) to reconstruct 3D lung models from 2D radiographic planning images, in order to retrospectively calculate the treatment dose exposure to HL patients treated with 2D planning, who are now experiencing late effects. Methods: Three-dimensional planning CT images of 52 current HL patients were acquired. Twelve image sets were used to construct a male and a female population lung model. Twenty-three "Reference" images were used to generate lung deformation adaptation templates, constructed by deforming the population model into each patient-specific lung geometry using a biomechanical-based DIR algorithm, MORFEUS. Seventeen "Test" patients were used to test the accuracy of the reconstruction technique by adapting existing templates using 2D digitally reconstructed radiographs. The adaptation process included three steps. First, a Reference patient was matched to a Test patient by thorax measurements. Second, four NCs (small regions of interest) were placed on the lung boundary to calculate 1D differences in lung edges. Third, the Reference lung model was adapted to the Test patient's lung using the 1D edge differences. The Reference-adapted Test model was then compared to the 3D lung contours of the actual Test patient by computing their percentage volume overlap (POL) and Dice coefficient. Results: The average percentage overlapping volumes and Dice coefficients, expressed as percentages, between the adapted and actual Test models were found to be 89.2 ± 3.9% (Right lung = 88.8%; Left lung = 89.6%) and 89.3 ± 2.7% (Right = 88.5%; Left = 90.2%), respectively. Paired T-tests demonstrated that the

  11. Non-Iterative Rigid 2D/3D Point-Set Registration Using Semidefinite Programming

    NASA Astrophysics Data System (ADS)

    Khoo, Yuehaw; Kapoor, Ankur

    2016-07-01

    We describe a convex programming framework for pose estimation in 2D/3D point-set registration with unknown point correspondences. We give two mixed-integer nonlinear program (MINP) formulations of the 2D/3D registration problem when there are multiple 2D images, and propose convex relaxations for both of the MINPs to semidefinite programs (SDP) that can be solved efficiently by interior point methods. Our approach to the 2D/3D registration problem is non-iterative in nature as we jointly solve for pose and correspondence. Furthermore, these convex programs can readily incorporate feature descriptors of points to enhance registration results. We prove that the convex programs exactly recover the solution to the original nonconvex 2D/3D registration problem under noiseless condition. We apply these formulations to the registration of 3D models of coronary vessels to their 2D projections obtained from multiple intra-operative fluoroscopic images. For this application, we experimentally corroborate the exact recovery property in the absence of noise and further demonstrate robustness of the convex programs in the presence of noise.

  12. Image Registration Workshop Proceedings

    NASA Technical Reports Server (NTRS)

    LeMoigne, Jacqueline (Editor)

    1997-01-01

    Automatic image registration has often been considered a preliminary step for higher-level processing, such as object recognition or data fusion. But with the unprecedented amounts of data which are being, and will continue to be, generated by newly developed sensors, automatic image registration has itself become an important research topic. This workshop presents a collection of very high quality work, grouped into four main areas: (1) theoretical aspects of image registration; (2) applications to satellite imagery; (3) applications to medical imagery; and (4) image registration for computer vision research.

  13. Self-calibration of cone-beam CT geometry using 3D-2D image registration: development and application to tasked-based imaging with a robotic C-arm

    NASA Astrophysics Data System (ADS)

    Ouadah, S.; Stayman, J. W.; Gang, G.; Uneri, A.; Ehtiati, T.; Siewerdsen, J. H.

    2015-03-01

    Purpose: Robotic C-arm systems are capable of general noncircular orbits whose trajectories can be driven by the particular imaging task. However, obtaining accurate calibrations for reconstruction in such geometries can be a challenging problem. This work proposes a method to perform a unique geometric calibration of an arbitrary C-arm orbit by registering 2D projections to a previously acquired 3D image to determine the transformation parameters representing the system geometry. Methods: Experiments involved a cone-beam CT (CBCT) bench system, a robotic C-arm, and three phantoms. A robust 3D-2D registration process was used to compute the 9 degree-of-freedom (DOF) transformation between each projection and an existing 3D image by maximizing normalized gradient information with a digitally reconstructed radiograph (DRR) of the 3D volume. The quality of the resulting "self-calibration" was evaluated in terms of its agreement with an established calibration method using a BB phantom, as well as the image quality of the resulting CBCT reconstruction. Results: The self-calibration yielded CBCT images without significant difference in spatial resolution from the standard ("true") calibration method (p-value >0.05 for all three phantoms), and the differences between CBCT images reconstructed using the "self" and "true" calibrations were on the order of 10⁻³ mm⁻¹. The maximum error in magnification was 3.2%, and back-projection ray placement was within 0.5 mm. Conclusion: The proposed geometric "self" calibration provides a means for 3D imaging on general noncircular orbits in CBCT systems for which a geometric calibration is either not available or not reproducible. The method forms the basis of advanced "task-based" 3D imaging methods now in development for robotic C-arms.

  14. 3D-2D registration of cerebral angiograms based on vessel directions and intensity gradients

    NASA Astrophysics Data System (ADS)

    Mitrovic, Uroš; Špiclin, Žiga; Štern, Darko; Markelj, Primož; Likar, Boštjan; Miloševic, Zoran; Pernuš, Franjo

    2012-02-01

    Endovascular treatment of cerebral aneurysms and arteriovenous malformations (AVM) involves navigation of a catheter through the femoral artery and vascular system to the site of pathology. Intra-interventional navigation is done under the guidance of one or at most two two-dimensional (2D) X-ray fluoroscopic images or 2D digital subtraction angiograms (DSA). Due to the projective nature of 2D images, the interventionist needs to mentally reconstruct the position of the catheter with respect to the three-dimensional (3D) patient vasculature, which is not a trivial task. By 3D-2D registration of pre-interventional 3D images, such as CTA, MRA, or 3D-DSA, and intra-interventional 2D images, intra-interventional tools such as catheters can be visualized on the 3D model of the patient vasculature, allowing easier and faster navigation. Such navigation may consequently lead to a reduction of the total ionizing dose and delivered contrast medium. In the past, development and evaluation of 3D-2D registration methods for endovascular treatments received considerable attention. The main drawback of these methods is that they have to be initialized rather close to the correct position, as they mostly have a rather small capture range. In this paper, a novel registration method with a higher capture range and success rate is proposed. The proposed method and a state-of-the-art method were tested and evaluated on synthetic and clinical 3D-2D image pairs. The results on both databases indicate that although the proposed method was slightly less accurate, it significantly outperformed the state-of-the-art 3D-2D registration method in terms of robustness, as measured by capture range and success rate.

  15. Validation for 2D/3D registration I: A new gold standard data set

    PubMed Central

    Pawiro, S. A.; Markelj, P.; Pernuš, F.; Gendrin, C.; Figl, M.; Weber, C.; Kainberger, F.; Nöbauer-Huhmann, I.; Bergmeister, H.; Stock, M.; Georg, D.; Bergmann, H.; Birkfellner, W.

    2011-01-01

    Purpose: In this article, the authors propose a new gold standard data set for the validation of two-dimensional/three-dimensional (2D/3D) and 3D/3D image registration algorithms. Methods: A gold standard data set was produced using a fresh cadaver pig head with attached fiducial markers. The authors used several imaging modalities common in diagnostic imaging or radiotherapy, including 64-slice computed tomography (CT), magnetic resonance imaging using T1, T2, and proton density sequences, and cone beam CT imaging data. Radiographic data were acquired using kilovoltage and megavoltage imaging techniques. The image information reflects both anatomy and reliable fiducial marker information and improves over existing data sets in the level of anatomical detail, image data quality, and soft-tissue content. The markers on the 3D and 2D image data were segmented using Analyze 10.0 (AnalyzeDirect, Inc., Kansas City, KS) and in-house software. Results: The projection distance errors and the expected target registration errors over all the image data sets were found to be less than 2.71 and 1.88 mm, respectively. Conclusions: The gold standard data set, obtained with state-of-the-art imaging technology, has the potential to improve the validation of 2D/3D and 3D/3D registration algorithms for image guided therapy. PMID:21520860

  16. Validation for 2D/3D registration I: A new gold standard data set

    SciTech Connect

    Pawiro, S. A.; Markelj, P.; Pernus, F.; Gendrin, C.; Figl, M.; Weber, C.; Kainberger, F.; Noebauer-Huhmann, I.; Bergmeister, H.; Stock, M.; Georg, D.; Bergmann, H.; Birkfellner, W.

    2011-03-15

    Purpose: In this article, the authors propose a new gold standard data set for the validation of two-dimensional/three-dimensional (2D/3D) and 3D/3D image registration algorithms. Methods: A gold standard data set was produced using a fresh cadaver pig head with attached fiducial markers. The authors used several imaging modalities common in diagnostic imaging or radiotherapy, including 64-slice computed tomography (CT), magnetic resonance imaging using T1, T2, and proton density sequences, and cone beam CT imaging data. Radiographic data were acquired using kilovoltage and megavoltage imaging techniques. The image information reflects both anatomy and reliable fiducial marker information and improves over existing data sets in the level of anatomical detail, image data quality, and soft-tissue content. The markers on the 3D and 2D image data were segmented using ANALYZE 10.0 (AnalyzeDirect, Inc., Kansas City, KS) and in-house software. Results: The projection distance errors and the expected target registration errors over all the image data sets were found to be less than 2.71 and 1.88 mm, respectively. Conclusions: The gold standard data set, obtained with state-of-the-art imaging technology, has the potential to improve the validation of 2D/3D and 3D/3D registration algorithms for image guided therapy.

  17. 2D-3D registration of coronary angiograms for cardiac procedure planning and guidance.

    PubMed

    Turgeon, Guy-Anne; Lehmann, Glen; Guiraudon, Gerard; Drangova, Maria; Holdsworth, David; Peters, Terry

    2005-12-01

    We present a completely automated 2D-3D registration technique that accurately maps a patient-specific heart model, created from preoperative images, to the patient's orientation in the operating room. This mapping is based on the registration of preoperatively acquired 3D vascular data with intraoperatively acquired angiograms. Registration using both single and dual-plane angiograms is explored using simulated but realistic datasets that were created from clinical images. Heart deformations and cardiac phase mismatches are taken into account in our validation using a digital 4D human heart model. In an ideal situation where the pre- and intraoperative images were acquired at identical time points within the cardiac cycle, the single-plane and the dual-plane registrations resulted in 3D root-mean-square (rms) errors of 1.60 +/- 0.21 and 0.53 +/- 0.08 mm, respectively. When a 10% timing offset was added between the pre- and the intraoperative acquisitions, the single-plane registration approach resulted in inaccurate registrations in the out-of-plane axis, whereas the dual-plane registration exhibited a 98% success rate with a 3D rms error of 1.33 +/- 0.28 mm. When all potential sources of error were included, namely, the anatomical background, timing offset, and typical errors in the vascular tree reconstruction, the dual-plane registration performed at 94% with an accuracy of 2.19 +/- 0.77 mm. PMID:16475773

  18. Locally adaptive 2D-3D registration using vascular structure model for liver catheterization.

    PubMed

    Kim, Jihye; Lee, Jeongjin; Chung, Jin Wook; Shin, Yeong-Gil

    2016-03-01

    Two-dimensional-three-dimensional (2D-3D) registration between intra-operative 2D digital subtraction angiography (DSA) and pre-operative 3D computed tomography angiography (CTA) can be used for roadmapping purposes. However, through the projection of 3D vessels, incorrect intersections and overlaps between vessels are produced because of the complex vascular structure, which makes it difficult to obtain the correct solution of 2D-3D registration. To overcome these problems, we propose a registration method that selects a suitable part of a 3D vascular structure for a given DSA image and finds the optimized solution to the partial 3D structure. The proposed algorithm can reduce the registration errors because it restricts the range of the 3D vascular structure for the registration by using only the relevant 3D vessels with the given DSA. To search for the appropriate 3D partial structure, we first construct a tree model of the 3D vascular structure and divide it into several subtrees in accordance with the connectivity. Then, the best matched subtree with the given DSA image is selected using the results from the coarse registration between each subtree and the vessels in the DSA image. Finally, a fine registration is conducted to minimize the difference between the selected subtree and the vessels of the DSA image. In experimental results obtained using 10 clinical datasets, the average distance errors in the case of the proposed method were 2.34±1.94mm. The proposed algorithm converges faster and produces more correct results than the conventional method in evaluations on patient datasets. PMID:26824922

  19. 3D–2D registration for surgical guidance: effect of projection view angles on registration accuracy

    PubMed Central

    Uneri, A; Otake, Y; Wang, A S; Kleinszig, G; Vogt, S; Khanna, A J; Siewerdsen, J H

    2016-01-01

    An algorithm for intensity-based 3D–2D registration of CT and x-ray projections is evaluated, specifically using single- or dual-projection views to provide 3D localization. The registration framework employs the gradient information similarity metric and covariance matrix adaptation evolution strategy to solve for the patient pose in six degrees of freedom. Registration performance was evaluated in an anthropomorphic phantom and cadaver, using C-arm projection views acquired at angular separation, Δθ, ranging from ~0°–180° at variable C-arm magnification. Registration accuracy was assessed in terms of 2D projection distance error and 3D target registration error (TRE) and compared to that of an electromagnetic (EM) tracker. The results indicate that angular separation as small as Δθ ~10°–20° achieved TRE <2 mm with 95% confidence, comparable or superior to that of the EM tracker. The method allows direct registration of preoperative CT and planning data to intraoperative fluoroscopy, providing 3D localization free from conventional limitations associated with external fiducial markers, stereotactic frames, trackers and manual registration. PMID:24351769
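
    The gradient information similarity metric referred to above rewards pixels where the fixed projection and the DRR have strong, similarly oriented gradients. A minimal sketch of one common (Pluim-style) formulation is given below; the exact variant, the DRR renderer, and the CMA-ES driver used in the paper are not reproduced, and `render_drr` is a hypothetical placeholder.

```python
import numpy as np

def gradient_information(fixed, moving, eps=1e-8):
    """Angle-weighted minimum gradient magnitude, summed over all pixels."""
    gfy, gfx = np.gradient(fixed)
    gmy, gmx = np.gradient(moving)
    mag_f, mag_m = np.hypot(gfx, gfy), np.hypot(gmx, gmy)
    cos_a = (gfx * gmx + gfy * gmy) / (mag_f * mag_m + eps)
    angle = np.arccos(np.clip(cos_a, -1.0, 1.0))
    weight = (np.cos(2.0 * angle) + 1.0) / 2.0      # favors parallel/antiparallel gradients
    return float(np.sum(weight * np.minimum(mag_f, mag_m)))

# A stochastic optimizer such as CMA-ES (e.g., the third-party `cma` package) would then
# search the 6-DOF pose maximizing gradient_information(xray, render_drr(ct, pose)).
```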

  20. 3D-2D registration for surgical guidance: effect of projection view angles on registration accuracy

    NASA Astrophysics Data System (ADS)

    Uneri, A.; Otake, Y.; Wang, A. S.; Kleinszig, G.; Vogt, S.; Khanna, A. J.; Siewerdsen, J. H.

    2014-01-01

    An algorithm for intensity-based 3D-2D registration of CT and x-ray projections is evaluated, specifically using single- or dual-projection views to provide 3D localization. The registration framework employs the gradient information similarity metric and covariance matrix adaptation evolution strategy to solve for the patient pose in six degrees of freedom. Registration performance was evaluated in an anthropomorphic phantom and cadaver, using C-arm projection views acquired at angular separation, Δθ, ranging from ˜0°-180° at variable C-arm magnification. Registration accuracy was assessed in terms of 2D projection distance error and 3D target registration error (TRE) and compared to that of an electromagnetic (EM) tracker. The results indicate that angular separation as small as Δθ ˜10°-20° achieved TRE <2 mm with 95% confidence, comparable or superior to that of the EM tracker. The method allows direct registration of preoperative CT and planning data to intraoperative fluoroscopy, providing 3D localization free from conventional limitations associated with external fiducial markers, stereotactic frames, trackers and manual registration.

  1. Image registration by parts

    NASA Technical Reports Server (NTRS)

    Chalermwat, Prachya; El-Ghazawi, Tarek; LeMoigne, Jacqueline

    1997-01-01

    In spite of the large number of different image registration techniques, most of these techniques use the correlation operation to match spatial image characteristics. Correlation is known to be one of the most computationally intensive operations and its computational needs grow rapidly with the increase in the image sizes. In this article, we show that, in many cases, it might be sufficient to determine image transformations by considering only one or several parts of the image rather than the entire image, which could result in substantial computational savings. This paper introduces the concept of registration by parts and investigates its viability. It describes alternative techniques for such image registration by parts and presents early empirical results that address the underlying trade-offs.

  2. Robust image registration of biological microscopic images.

    PubMed

    Wang, Ching-Wei; Ka, Shuk-Man; Chen, Ann

    2014-01-01

    Image registration of biological data is challenging because complex deformation problems are common. Deformation effects can arise during individual data preparation processes, involving morphological deformations, stain variations, stain artifacts, rotation, translation, and missing tissues. The combined deformation effects tend to make existing automatic registration methods perform poorly. In our experiments on serial histopathological images, six state-of-the-art image registration techniques, including TrakEM2, SURF + affine transformation, UnwarpJ, bUnwarpJ, CLAHE + bUnwarpJ, and BrainAligner, achieve no greater than 70% averaged accuracy, while the proposed method achieves 91.49% averaged accuracy. The proposed method has also been demonstrated to be significantly better at aligning laser scanning microscope brain images and serial ssTEM images than the benchmark automatic approaches (p < 0.001). The contribution of this study is to introduce a fully automatic, robust, and fast method for 2D image registration. PMID:25116443

  3. Evaluation of low-dose limits in 3D-2D rigid registration for surgical guidance

    NASA Astrophysics Data System (ADS)

    Uneri, A.; Wang, A. S.; Otake, Y.; Kleinszig, G.; Vogt, S.; Khanna, A. J.; Gallia, G. L.; Gokaslan, Z. L.; Siewerdsen, J. H.

    2014-09-01

    An algorithm for intensity-based 3D-2D registration of CT and C-arm fluoroscopy is evaluated for use in surgical guidance, specifically considering the low-dose limits of the fluoroscopic x-ray projections. The registration method is based on a framework using the covariance matrix adaptation evolution strategy (CMA-ES) to identify the 3D patient pose that maximizes the gradient information similarity metric. Registration performance was evaluated in an anthropomorphic head phantom emulating intracranial neurosurgery, using target registration error (TRE) to characterize accuracy and robustness in terms of 95% confidence upper bound in comparison to that of an infrared surgical tracking system. Three clinical scenarios were considered: (1) single-view image + guidance, wherein a single x-ray projection is used for visualization and 3D-2D guidance; (2) dual-view image + guidance, wherein one projection is acquired for visualization, combined with a second (lower-dose) projection acquired at a different C-arm angle for 3D-2D guidance; and (3) dual-view guidance, wherein both projections are acquired at low dose for the purpose of 3D-2D guidance alone (not visualization). In each case, registration accuracy was evaluated as a function of the entrance surface dose associated with the projection view(s). Results indicate that images acquired at a dose as low as 4 μGy (approximately one-tenth the dose of a typical fluoroscopic frame) were sufficient to provide TRE comparable or superior to that of conventional surgical tracking, allowing 3D-2D guidance at a level of dose that is at most 10% greater than conventional fluoroscopy (scenario #2) and potentially reducing the dose to approximately 20% of the level in a conventional fluoroscopically guided procedure (scenario #3).

  4. Staring 2-D hadamard transform spectral imager

    DOEpatents

    Gentry, Stephen M.; Wehlburg, Christine M.; Wehlburg, Joseph C.; Smith, Mark W.; Smith, Jody L.

    2006-02-07

    A staring imaging system inputs a 2D spatial image containing multi-frequency spectral information. This image is encoded in one dimension with a cyclic Hadamard S-matrix. The resulting image is detected with a spatial 2D detector, and a computer applies a Hadamard transform to recover the encoded image.

  5. Automatic digital image registration

    NASA Technical Reports Server (NTRS)

    Goshtasby, A.; Jain, A. K.; Enslin, W. R.

    1982-01-01

    This paper introduces a general procedure for automatic registration of two images which may have translational, rotational, and scaling differences. This procedure involves (1) segmentation of the images, (2) isolation of dominant objects from the images, (3) determination of corresponding objects in the two images, and (4) estimation of transformation parameters using the centers of gravity of objects as control points. An example is given which uses this technique to register two images which have translational, rotational, and scaling differences.
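
    Step (4), estimating the transformation from corresponding object centroids, can be illustrated with a least-squares similarity-transform fit. The sketch below uses the standard SVD (Umeyama-style) solution and assumes the centroid correspondences from steps (1)-(3) are already available; it is not the 1982 paper's own implementation.

```python
import numpy as np

def fit_similarity(src, dst):
    """Least-squares 2D similarity transform (scale s, rotation R, translation t)
    mapping src control points onto dst: x -> s * R @ x + t."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    ps, pd = src - mu_s, dst - mu_d
    U, S, Vt = np.linalg.svd(ps.T @ pd)
    d = np.sign(np.linalg.det(Vt.T @ U.T))          # guard against reflections
    D = np.diag([1.0, d])
    R = Vt.T @ D @ U.T
    s = np.trace(np.diag(S) @ D) / (ps ** 2).sum()
    t = mu_d - s * (R @ mu_s)
    return s, R, t
```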

  6. Automatic pose initialization for accurate 2D/3D registration applied to abdominal aortic aneurysm endovascular repair

    NASA Astrophysics Data System (ADS)

    Miao, Shun; Lucas, Joseph; Liao, Rui

    2012-02-01

    Minimally invasive abdominal aortic aneurysm (AAA) stenting can be greatly facilitated by overlaying the preoperative 3-D model of the abdominal aorta onto the intra-operative 2-D X-ray images. Accurate 2-D/3-D registration in 3-D space makes the 2-D/3-D overlay robust to changes in C-arm angulation. So far, 2-D/3-D registration methods based on simulated X-ray projection images using multiple image planes have been shown to provide satisfactory 3-D registration accuracy. However, one drawback of intensity-based 2-D/3-D registration methods is that the similarity measure is usually highly non-convex and hence the optimizer can easily be trapped in local minima. User interaction is therefore often needed to initialize the position of the 3-D model in order to obtain a successful 2-D/3-D registration. In this paper, a novel 3-D pose initialization technique is proposed as an extension of our previously proposed bi-plane 2-D/3-D registration method for AAA intervention [4]. The proposed method detects vessel bifurcation points and the spine centerline in both 2-D and 3-D images, and utilizes this landmark information to bring the 3-D volume into a 15 mm capture range. The proposed landmark detection method was validated on a real dataset and is shown to provide a good initialization for the 2-D/3-D registration in [4], thus making the workflow fully automatic.

  7. Nonrigid point registration for 2D curves and 3D surfaces and its various applications

    NASA Astrophysics Data System (ADS)

    Wang, Hesheng; Fei, Baowei

    2013-06-01

    A nonrigid B-spline-based point-matching (BPM) method is proposed to match dense surface points. The method solves for both the point correspondence and the nonrigid transformation without feature extraction. The registration method integrates a motion model, which combines a global transformation and a B-spline-based local deformation, into a robust point-matching framework. The point correspondence and deformable transformation are estimated simultaneously by fuzzy correspondence and by a deterministic annealing technique. Prior information about global translation, rotation, and scaling is incorporated into the optimization. A local B-spline motion model decreases the degrees of freedom for optimization and thus enables the registration of a larger number of feature points. The performance of the BPM method has been demonstrated and validated using synthesized 2D and 3D data, mouse MRI, and micro-CT images. The proposed BPM method can be used to register feature point sets, 2D curves, 3D surfaces, and various image data.

  8. Deformable 3D-2D registration for CT and its application to low dose tomographic fluoroscopy

    NASA Astrophysics Data System (ADS)

    Flach, Barbara; Brehm, Marcus; Sawall, Stefan; Kachelrieß, Marc

    2014-12-01

    Many applications in medical imaging include image registration for matching of images from the same or different modalities. In the case of full data sampling, the respective reconstructed images are usually of such good image quality that standard deformable volume-to-volume (3D-3D) registration approaches can be applied. But research in temporally correlated image reconstruction and dose reduction increases the number of cases where rawdata are available from only a few projection angles. Here, deteriorated image quality leads to unacceptable deformable volume-to-volume registration results. Therefore a registration approach is required that is robust against a decreasing number of projections defining the target position. We propose a deformable volume-to-rawdata (3D-2D) registration method that aims at finding a displacement vector field maximizing the alignment of a CT volume and the acquired rawdata based on the sum of squared differences in the rawdata domain. The registration is constrained by a regularization term in accordance with a fluid-based diffusion. Both cost function components, the rawdata fidelity and the regularization term, are optimized in an alternating manner. The matching criterion is optimized by a conjugate gradient descent for nonlinear functions, while the regularization is realized by convolution of the vector fields with Gaussian kernels. We validate the proposed method and compare it to the demons algorithm, a well-known 3D-3D registration method. The comparison is done for a range of 4-60 target projections using datasets from low dose tomographic fluoroscopy as an application example. The results show a high correlation to the ground truth target position without introducing artifacts even in the case of very few projections. In particular the matching in the rawdata domain is improved compared to the 3D-3D registration for the investigated range. The proposed volume-to-rawdata registration increases the robustness regarding sparse

  9. Deformable 3D-2D registration for CT and its application to low dose tomographic fluoroscopy.

    PubMed

    Flach, Barbara; Brehm, Marcus; Sawall, Stefan; Kachelrieß, Marc

    2014-12-21

    Many applications in medical imaging include image registration for matching of images from the same or different modalities. In the case of full data sampling, the respective reconstructed images are usually of such good image quality that standard deformable volume-to-volume (3D-3D) registration approaches can be applied. But research in temporally correlated image reconstruction and dose reduction increases the number of cases where rawdata are available from only a few projection angles. Here, deteriorated image quality leads to unacceptable deformable volume-to-volume registration results. Therefore a registration approach is required that is robust against a decreasing number of projections defining the target position. We propose a deformable volume-to-rawdata (3D-2D) registration method that aims at finding a displacement vector field maximizing the alignment of a CT volume and the acquired rawdata based on the sum of squared differences in the rawdata domain. The registration is constrained by a regularization term in accordance with a fluid-based diffusion. Both cost function components, the rawdata fidelity and the regularization term, are optimized in an alternating manner. The matching criterion is optimized by a conjugate gradient descent for nonlinear functions, while the regularization is realized by convolution of the vector fields with Gaussian kernels. We validate the proposed method and compare it to the demons algorithm, a well-known 3D-3D registration method. The comparison is done for a range of 4-60 target projections using datasets from low dose tomographic fluoroscopy as an application example. The results show a high correlation to the ground truth target position without introducing artifacts even in the case of very few projections. In particular the matching in the rawdata domain is improved compared to the 3D-3D registration for the investigated range. The proposed volume-to-rawdata registration increases the robustness regarding sparse

  10. Evaluation of optimization methods for intensity-based 2D-3D registration in x-ray guided interventions

    NASA Astrophysics Data System (ADS)

    van der Bom, I. M. J.; Klein, S.; Staring, M.; Homan, R.; Bartels, L. W.; Pluim, J. P. W.

    2011-03-01

    The advantage of 2D-3D image registration methods over direct image-to-patient registration is that these methods generally do not require user interaction (such as manual annotations), additional machinery, or additional acquisition of 3D data. A variety of intensity-based similarity measures has been proposed and evaluated for different applications. These studies showed that the registration accuracy and capture range are influenced by the choice of similarity measure. However, the influence of the optimization method on intensity-based 2D-3D image registration has not been investigated. We have compared the registration performance of seven optimization methods in combination with three similarity measures: gradient difference, gradient correlation, and pattern intensity. The optimization methods included in this study were: regular step gradient descent, Nelder-Mead, Powell-Brent, Quasi-Newton, nonlinear conjugate gradient, simultaneous perturbation stochastic approximation, and evolution strategy. Registration experiments were performed on multiple patient data sets that were obtained during cerebral interventions. Various component combinations were evaluated on registration accuracy, capture range, and registration time. The results showed that for the same similarity measure, different registration accuracies and capture ranges were obtained when different optimization methods were used. For gradient difference, the largest capture ranges were obtained with Powell-Brent and simultaneous perturbation stochastic approximation. Gradient correlation and pattern intensity had the largest capture ranges in combination with Powell-Brent, Nelder-Mead, nonlinear conjugate gradient, and Quasi-Newton. The average registration time, expressed in the number of DRRs required for convergence, was the lowest for Powell-Brent. Based on these results, we conclude that Powell-Brent is a reliable optimization method for intensity-based 2D-3D registration of x-ray images to CBCT
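
    The comparison above evaluates different optimizers on the same intensity-based cost. The sketch below shows how such a comparison could be set up for the two strategies that SciPy provides out of the box (Powell and Nelder-Mead); `render_drr` and `similarity` are hypothetical placeholders for a DRR renderer and one of the similarity measures named above.

```python
import numpy as np
from scipy.optimize import minimize

def registration_cost(pose, xray, render_drr, similarity):
    """Negative similarity between the fixed x-ray and a DRR rendered at `pose`."""
    return -similarity(xray, render_drr(pose))

def compare_optimizers(x0, xray, render_drr, similarity):
    """Run the same cost through two local optimizers and report pose, cost,
    and the number of cost evaluations (a proxy for the number of DRRs)."""
    results = {}
    for method in ("Powell", "Nelder-Mead"):
        res = minimize(registration_cost, np.asarray(x0, dtype=float),
                       args=(xray, render_drr, similarity), method=method)
        results[method] = {"pose": res.x, "cost": res.fun, "n_drrs": res.nfev}
    return results
```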

  11. Semiautomated Multimodal Breast Image Registration

    PubMed Central

    Curtis, Charlotte; Frayne, Richard; Fear, Elise

    2012-01-01

    Consideration of information from multiple modalities has been shown to increase diagnostic power in breast imaging. As a result, new techniques such as microwave imaging continue to be developed. Interpreting these novel image modalities is a challenge, requiring comparison to established techniques such as the gold standard, X-ray mammography. However, due to the highly deformable nature of breast tissue, comparison of 3D and 2D modalities is a challenge. To enable this comparison, a registration technique was developed to map features from 2D mammograms to locations in the 3D image space. This technique was developed and tested using magnetic resonance (MR) images as a reference 3D modality, as MR breast imaging is an established technique in clinical practice. The algorithm was validated using a numerical phantom and then successfully tested on twenty-four image pairs. Dice's coefficient was used to measure the external goodness of fit, resulting in an excellent overall average of 0.94. Internal agreement was evaluated by examining internal features in consultation with a radiologist, and subjective assessment concluded that reasonable alignment was achieved. PMID:22481910
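
    Dice's coefficient, used above to quantify the external goodness of fit, is straightforward to compute from two binary masks. A minimal sketch, assuming the registered breast outlines are available as boolean arrays:

```python
import numpy as np

def dice_coefficient(mask_a, mask_b):
    """Dice's coefficient 2|A ∩ B| / (|A| + |B|) for two binary masks."""
    a, b = np.asarray(mask_a, dtype=bool), np.asarray(mask_b, dtype=bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0
```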

  12. Local Metric Learning in 2D/3D Deformable Registration With Application in the Abdomen

    PubMed Central

    Chou, Chen-Rui; Mageras, Gig; Pizer, Stephen

    2015-01-01

    In image-guided radiotherapy (IGRT) of disease sites subject to respiratory motion, soft tissue deformations can affect localization accuracy. We describe the application of a method of 2D/3D deformable registration to soft tissue localization in abdomen. The method, called registration efficiency and accuracy through learning a metric on shape (REALMS), is designed to support real-time IGRT. In a previously developed version of REALMS, the method interpolated 3D deformation parameters for any credible deformation in a deformation space using a single globally-trained Riemannian metric for each parameter. We propose a refinement of the method in which the metric is trained over a particular region of the deformation space, such that interpolation accuracy within that region is improved. We report on the application of the proposed algorithm to IGRT in abdominal disease sites, which is more challenging than in lung because of low intensity contrast and nonrespiratory deformation. We introduce a rigid translation vector to compensate for nonrespiratory deformation, and design a special region-of-interest around fiducial markers implanted near the tumor to produce a more reliable registration. Both synthetic data and actual data tests on abdominal datasets show that the localized approach achieves more accurate 2D/3D deformable registration than the global approach. PMID:24771575

  13. Spacecraft camera image registration

    NASA Technical Reports Server (NTRS)

    Kamel, Ahmed A. (Inventor); Graul, Donald W. (Inventor); Chan, Fred N. T. (Inventor); Gamble, Donald W. (Inventor)

    1987-01-01

    A system for achieving spacecraft camera (1, 2) image registration comprises a portion external to the spacecraft and an image motion compensation system (IMCS) portion onboard the spacecraft. Within the IMCS, a computer (38) calculates an image registration compensation signal (60) which is sent to the scan control loops (84, 88, 94, 98) of the onboard cameras (1, 2). At the location external to the spacecraft, the long-term orbital and attitude perturbations on the spacecraft are modeled. Coefficients (K, A) from this model are periodically sent to the onboard computer (38) by means of a command unit (39). The coefficients (K, A) take into account observations of stars and landmarks made by the spacecraft cameras (1, 2) themselves. The computer (38) takes as inputs the updated coefficients (K, A) plus synchronization information indicating the mirror position (AZ, EL) of each of the spacecraft cameras (1, 2), operating mode, and starting and stopping status of the scan lines generated by these cameras (1, 2), and generates in response thereto the image registration compensation signal (60). The sources of periodic thermal errors on the spacecraft are discussed. The system is checked by calculating measurement residuals, the difference between the landmark and star locations predicted at the external location and the landmark and star locations as measured by the spacecraft cameras (1, 2).

  14. Image registration with uncertainty analysis

    DOEpatents

    Simonson, Katherine M.

    2011-03-22

    In an image registration method, edges are detected in a first image and a second image. A percentage of edge pixels in a subset of the second image that are also edges in the first image shifted by a translation is calculated. A best registration point is calculated based on a maximum percentage of edges matched. In a predefined search region, all registration points other than the best registration point are identified that are not significantly worse than the best registration point according to a predetermined statistical criterion.
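
    The edge-matching idea in this method can be illustrated with a brute-force integer-translation search that scores each shift by the fraction of edge pixels brought into agreement. The edge detector, search window, and the statistical test for registration points that are "not significantly worse" are illustrative assumptions, not the patented procedure itself.

```python
import numpy as np

def edge_map(img, frac=0.2):
    """Crude edge detector: gradient magnitude above a fraction of its maximum."""
    gy, gx = np.gradient(np.asarray(img, dtype=float))
    mag = np.hypot(gx, gy)
    return mag > frac * mag.max()

def best_translation(fixed, moving, search=10):
    """Return the shift maximizing the fraction of moving-image edges that land on
    fixed-image edges, plus the full score map for later uncertainty analysis."""
    ef, em = edge_map(fixed), edge_map(moving)
    scores, best, best_score = {}, (0, 0), -1.0
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            shifted = np.roll(em, (dy, dx), axis=(0, 1))
            score = (shifted & ef).sum() / max(em.sum(), 1)
            scores[(dy, dx)] = score
            if score > best_score:
                best, best_score = (dy, dx), score
    return best, best_score, scores
```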

  15. Image registration under symmetric conditions: novel approach

    NASA Astrophysics Data System (ADS)

    Duraisamy, Prakash; Yousef, Amr; Buckles, Bill; Jackson, Steve

    2015-03-01

    Registering 2D images is one of the important pre-processing steps in many computer vision applications, such as 3D reconstruction and building panoramic images. Contemporary registration algorithms like SIFT (Scale-Invariant Feature Transform) are not very successful in registering images under symmetric conditions and under poor illumination using DoG (difference-of-Gaussian) features. In this paper, we introduce a novel approach for registering images under symmetric conditions.

  16. Efficient implementation of the rank correlation merit function for 2D/3D registration.

    PubMed

    Figl, M; Bloch, C; Gendrin, C; Weber, C; Pawiro, S A; Hummel, J; Markelj, P; Pernus, F; Bergmann, H; Birkfellner, W

    2010-10-01

    A growing number of clinical applications using 2D/3D registration have been presented recently. Usually, a digitally reconstructed radiograph is compared iteratively to an x-ray image of the known projection geometry until a match is achieved, thus providing six degrees of freedom of rigid motion which can be used for patient setup in image-guided radiation therapy or computer-assisted interventions. Recently, stochastic rank correlation, a merit function based on Spearman's rank correlation coefficient, was presented as a merit function especially suitable for 2D/3D registration. The advantage of this measure is its robustness against variations in image histogram content and its wide convergence range. The considerable computational expense of computing an ordered rank list is avoided here by comparing randomly chosen subsets of the DRR and reference x-ray. In this work, we show that it is possible to omit the sorting step and to compute the rank correlation coefficient of the full image content as fast as conventional merit functions. Our evaluation of a well-calibrated cadaver phantom also confirms that rank correlation-type merit functions give the most accurate results if large differences in the histogram content for the DRR and the x-ray image are present. PMID:20844334
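
    Spearman's rank correlation between a DRR and the reference x-ray can be computed on the full images or, in the spirit of stochastic rank correlation, on a random pixel subset. A minimal sketch using SciPy's spearmanr is given below; the fast sorting-free full-image formulation described in the paper is not reproduced here.

```python
import numpy as np
from scipy.stats import spearmanr

def rank_correlation_merit(drr, xray, sample_fraction=1.0, seed=None):
    """Spearman rank correlation of pixel intensities; sample_fraction < 1.0
    evaluates a random subset, as in stochastic rank correlation."""
    rng = np.random.default_rng(seed)
    a, b = np.ravel(drr), np.ravel(xray)
    if sample_fraction < 1.0:
        idx = rng.choice(a.size, size=int(sample_fraction * a.size), replace=False)
        a, b = a[idx], b[idx]
    rho, _ = spearmanr(a, b)
    return rho
```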

  17. Topology-Preserving Rigid Transformation of 2D Digital Images.

    PubMed

    Ngo, Phuc; Passat, Nicolas; Kenmochi, Yukiko; Talbot, Hugues

    2014-02-01

    We provide conditions under which 2D digital images preserve their topological properties under rigid transformations. We consider the two most common digital topology models, namely dual adjacency and well-composedness. This paper leads to the proposal of optimal preprocessing strategies that ensure the topological invariance of images under arbitrary rigid transformations. These results and methods are proved to be valid for various kinds of images (binary, gray-level, label), thus providing generic and efficient tools, which can be used in particular in the context of image registration and warping. PMID:26270925

  18. Tomosynthesis imaging with 2D scanning trajectories

    NASA Astrophysics Data System (ADS)

    Khare, Kedar; Claus, Bernhard E. H.; Eberhard, Jeffrey W.

    2011-03-01

    Tomosynthesis imaging in chest radiography provides volumetric information with the potential for improved diagnostic value when compared to the standard AP or LAT projections. In this paper we explore the image quality benefits of 2D scanning trajectories when coupled with advanced image reconstruction approaches. It is intuitively clear that 2D trajectories provide projection data that is more complete in terms of Radon space filling, when compared with conventional tomosynthesis using a linearly scanned source. Incorporating this additional information for obtaining improved image quality is, however, not a straightforward problem. The typical tomosynthesis reconstruction algorithms are based on direct inversion methods e.g. Filtered Backprojection (FBP) or iterative algorithms that are variants of the Algebraic Reconstruction Technique (ART). The FBP approach is fast and provides high frequency details in the image but at the same time introduces streaking artifacts degrading the image quality. The iterative methods can reduce the image artifacts by using image priors but suffer from a slow convergence rate, thereby producing images lacking high frequency details. In this paper we propose using a fast converging optimal gradient iterative scheme that has advantages of both the FBP and iterative methods in that it produces images with high frequency details while reducing the image artifacts. We show that using favorable 2D scanning trajectories along with the proposed reconstruction method has the advantage of providing improved depth information for structures such as the spine and potentially producing images with more isotropic resolution.

  19. A multicore based parallel image registration method.

    PubMed

    Yang, Lin; Gong, Leiguang; Zhang, Hong; Nosher, John L; Foran, David J

    2009-01-01

    Image registration is a crucial step for many image-assisted clinical applications such as surgery planning and treatment evaluation. In this paper we propose a landmark-based nonlinear image registration algorithm for matching 2D image pairs. The algorithm was shown to be effective and robust under conditions of large deformations. In landmark-based registration, the most important step is establishing the correspondence among the selected landmark points. This usually requires an extensive search, which is often computationally expensive. We introduce a nonregular data partition algorithm using the K-means clustering algorithm to group the landmarks based on the number of available processing cores. This step optimizes memory usage and data transfer. We have tested our method using the IBM Cell Broadband Engine (Cell/B.E.) platform. PMID:19964921
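
    The K-means-based landmark partitioning described above can be sketched in a few lines of NumPy: landmarks are grouped into one cluster per available core so that the correspondence search can be distributed. The cluster count, iteration budget, and initialization are illustrative choices, not the paper's tuned settings.

```python
import numpy as np

def partition_landmarks(landmarks, n_cores, iters=20, seed=0):
    """Plain k-means grouping of 2D landmark points into n_cores clusters."""
    pts = np.asarray(landmarks, dtype=float)
    rng = np.random.default_rng(seed)
    centers = pts[rng.choice(len(pts), size=n_cores, replace=False)]
    for _ in range(iters):
        dists = np.linalg.norm(pts[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        for k in range(n_cores):
            if np.any(labels == k):
                centers[k] = pts[labels == k].mean(axis=0)
    return [pts[labels == k] for k in range(n_cores)]
```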

  20. 2D microwave imaging reflectometer electronics

    SciTech Connect

    Spear, A. G.; Domier, C. W. Hu, X.; Muscatello, C. M.; Ren, X.; Luhmann, N. C.; Tobias, B. J.

    2014-11-15

    A 2D microwave imaging reflectometer system has been developed to visualize electron density fluctuations on the DIII-D tokamak. Simultaneously illuminated at four probe frequencies, large aperture optics image reflections from four density-dependent cutoff surfaces in the plasma over an extended region of the DIII-D plasma. Localized density fluctuations in the vicinity of the plasma cutoff surfaces modulate the plasma reflections, yielding a 2D image of electron density fluctuations. Details are presented of the receiver down conversion electronics that generate the in-phase (I) and quadrature (Q) reflectometer signals from which 2D density fluctuation data are obtained. Also presented are details on the control system and backplane used to manage the electronics as well as an introduction to the computer based control program.

  1. 2D microwave imaging reflectometer electronics

    NASA Astrophysics Data System (ADS)

    Spear, A. G.; Domier, C. W.; Hu, X.; Muscatello, C. M.; Ren, X.; Tobias, B. J.; Luhmann, N. C.

    2014-11-01

    A 2D microwave imaging reflectometer system has been developed to visualize electron density fluctuations on the DIII-D tokamak. Simultaneously illuminated at four probe frequencies, large aperture optics image reflections from four density-dependent cutoff surfaces in the plasma over an extended region of the DIII-D plasma. Localized density fluctuations in the vicinity of the plasma cutoff surfaces modulate the plasma reflections, yielding a 2D image of electron density fluctuations. Details are presented of the receiver down conversion electronics that generate the in-phase (I) and quadrature (Q) reflectometer signals from which 2D density fluctuation data are obtained. Also presented are details on the control system and backplane used to manage the electronics as well as an introduction to the computer based control program.

  2. 2D microwave imaging reflectometer electronics.

    PubMed

    Spear, A G; Domier, C W; Hu, X; Muscatello, C M; Ren, X; Tobias, B J; Luhmann, N C

    2014-11-01

    A 2D microwave imaging reflectometer system has been developed to visualize electron density fluctuations on the DIII-D tokamak. Simultaneously illuminated at four probe frequencies, large aperture optics image reflections from four density-dependent cutoff surfaces in the plasma over an extended region of the DIII-D plasma. Localized density fluctuations in the vicinity of the plasma cutoff surfaces modulate the plasma reflections, yielding a 2D image of electron density fluctuations. Details are presented of the receiver down conversion electronics that generate the in-phase (I) and quadrature (Q) reflectometer signals from which 2D density fluctuation data are obtained. Also presented are details on the control system and backplane used to manage the electronics as well as an introduction to the computer based control program. PMID:25430247

  3. Fast DRR generation for 2D to 3D registration on GPUs

    SciTech Connect

    Tornai, Gabor Janos; Cserey, Gyoergy

    2012-08-15

    Purpose: The generation of digitally reconstructed radiographs (DRRs) is the most time-consuming step on the CPU in intensity-based two-dimensional x-ray to three-dimensional (CT or 3D rotational x-ray) medical image registration, which has application in several image-guided interventions. This work presents optimized DRR rendering on graphical processor units (GPUs) and compares performance achievable on four commercially available devices. Methods: A ray-cast based DRR rendering was implemented for a 512 × 512 × 72 CT volume. The block size parameter was optimized for four different GPUs for a region of interest (ROI) of 400 × 225 pixels with different sampling ratios (1.1%-9.1% and 100%). Performance was statistically evaluated and compared for the four GPUs. The method and the block size dependence were validated on the latest GPU for several parameter settings with a public gold standard dataset (512 × 512 × 825 CT) for registration purposes. Results: Depending on the GPU, the full ROI is rendered in 2.7-5.2 ms. If a sampling ratio of 1.1%-9.1% is applied, execution time is in the range of 0.3-7.3 ms. On all GPUs, the mean of the execution time increased linearly with respect to the number of pixels if sampling was used. Conclusions: The presented results outperform other results from the literature. This indicates that automatic 2D to 3D registration, which typically requires a couple of hundred DRR renderings to converge, can be performed quasi on-line, in less than a second or, depending on the application and hardware, in less than a couple of seconds. Accordingly, a whole new field of applications is opened for image-guided interventions, where the registration is continuously performed to match the real-time x-ray.
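
    As a rough illustration of the Methods step above, the sketch below generates a toy DRR for a parallel-beam geometry with optional pixel subsampling, mirroring the sampling-ratio idea. The published method is a perspective ray-cast implemented on the GPU; this numpy version, and its array shapes, are assumptions made purely for illustration.

        import numpy as np

        def drr_parallel(volume, axis=0, sampling_ratio=1.0, rng=None):
            """Sum attenuation along one axis; optionally evaluate only a random subset of pixels."""
            drr = volume.sum(axis=axis)                       # full line integrals
            if sampling_ratio >= 1.0:
                return drr
            rng = rng or np.random.default_rng(0)
            mask = rng.random(drr.shape) < sampling_ratio     # sparse pixel subset
            sparse = np.zeros_like(drr)
            sparse[mask] = drr[mask]
            return sparse

        ct = np.random.rand(72, 512, 512).astype(np.float32)  # toy 512 x 512 x 72 volume
        print(drr_parallel(ct, axis=0, sampling_ratio=0.05).shape)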

  4. CT image registration in sinogram space

    SciTech Connect

    Mao Weihua; Li Tianfang; Wink, Nicole; Xing Lei

    2007-09-15

    Object displacement in a CT scan is generally reflected in the CT projection data, or sinogram. In this work, the direct relationship between object motion and the change in the CT projection data (sinogram) is investigated, and this knowledge is applied to create a novel algorithm for sinogram registration. Calculated and experimental results demonstrate that the registration technique works well for registering rigid 2D or 3D motion in parallel- and fan-beam samplings. The problem of, and a solution for, 3D sinogram-based registration of metallic fiducials are also addressed. Since the motion is registered before image reconstruction, the presented algorithm is particularly useful when registering images with metal or truncation artifacts. In addition, this algorithm is valuable for dealing with situations where only limited projection data are available, making it appealing for various applications in image-guided radiation therapy.
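
    The direct motion-to-sinogram relationship exploited above can be illustrated for the simplest case. Assuming a 2D rigid translation and parallel-beam sampling, translating the object by (tx, ty) shifts the projection acquired at angle theta along the detector axis by tx*cos(theta) + ty*sin(theta), so the motion can be read from the sinogram before reconstruction; the snippet below merely evaluates that predicted shift.

        import numpy as np

        def sinogram_shift(tx, ty, theta_deg):
            """Predicted detector-coordinate shift of each projection for a translation (tx, ty)."""
            theta = np.deg2rad(np.asarray(theta_deg, dtype=float))
            return tx * np.cos(theta) + ty * np.sin(theta)

        print(sinogram_shift(6.0, -3.0, [0, 45, 90, 135]))    # per-angle shifts in pixels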

  5. Personalized x-ray reconstruction of the proximal femur via a non-rigid 2D-3D registration

    NASA Astrophysics Data System (ADS)

    Yu, Weimin; Zysset, Philippe; Zheng, Guoyan

    2015-03-01

    In this paper we present a new approach for a personalized X-ray reconstruction of the proximal femur via a non-rigid registration of a 3D volumetric template to 2D calibrated C-arm images. The 2D-3D registration is done with a hierarchical two-stage strategy: a global scaled rigid registration stage followed by a regularized deformable b-spline registration stage. In both stages, a set of control points with uniform spacing is placed over the domain of the 3D volumetric template, and the registrations are driven by computing updated positions of these control points, which then allows the 3D volumetric template to be accurately registered to the reference space of the C-arm images. Comprehensive experiments on simulated images, on images of cadaveric femurs and on clinical datasets are designed and conducted to evaluate the performance of the proposed approach. Quantitative and qualitative evaluation results are given, which demonstrate the efficacy of the present approach.

  6. 2D-3D rigid registration to compensate for prostate motion during 3D TRUS-guided biopsy

    NASA Astrophysics Data System (ADS)

    De Silva, Tharindu; Fenster, Aaron; Bax, Jeffrey; Gardi, Lori; Romagnoli, Cesare; Samarabandu, Jagath; Ward, Aaron D.

    2012-02-01

    Prostate biopsy is the clinical standard for prostate cancer diagnosis. To improve the accuracy of targeting suspicious locations, systems have been developed that can plan and record biopsy locations in a 3D TRUS image acquired at the beginning of the procedure. Some systems are designed for maximum compatibility with existing ultrasound equipment and are thus designed around the use of a conventional 2D TRUS probe, using controlled axial rotation of this probe to acquire a 3D TRUS reference image at the start of the biopsy procedure. Prostate motion during the biopsy procedure causes misalignments between the prostate in the live 2D TRUS images and the pre-acquired 3D TRUS image. We present an image-based rigid registration technique that aligns live 2D TRUS images, acquired immediately prior to biopsy needle insertion, with the pre-acquired 3D TRUS image to compensate for this motion. Our method was validated using 33 manually identified intrinsic fiducials in eight subjects and the target registration error was found to be 1.89 mm. We analysed the suitability of two image similarity metrics (normalized cross correlation and mutual information) for this task by plotting these metrics as a function of varying parameters in the six degree-of-freedom transformation space, with the ground truth plane obtained from registration as the starting point for the parameter exploration. We observed a generally convex behaviour of the similarity metrics. This encourages their use for this registration problem, and could assist in the design of a tool for the detection of misalignment, which could trigger the execution of a non-real-time registration, when needed during the procedure.
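
    One of the two similarity metrics analysed above, normalized cross correlation, reduces to a short computation between the live 2D TRUS frame and the corresponding plane resampled from the 3D TRUS image. A minimal numpy sketch follows; the array names are illustrative and mutual information is not shown.

        import numpy as np

        def ncc(fixed, moving):
            """Normalized cross-correlation of two equally sized images, in [-1, 1]."""
            f = fixed - fixed.mean()
            m = moving - moving.mean()
            denom = np.sqrt((f * f).sum() * (m * m).sum())
            return float((f * m).sum() / denom) if denom > 0 else 0.0

        live_2d = np.random.rand(128, 128)                    # toy live TRUS frame
        resliced = np.random.rand(128, 128)                   # toy plane from the 3D volume
        print(ncc(live_2d, resliced))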

  7. Image Registration: A Necessary Evil

    NASA Technical Reports Server (NTRS)

    Bell, James; McLachlan, Blair; Hermstad, Dexter; Trosin, Jeff; George, Michael W. (Technical Monitor)

    1995-01-01

    Registration of test and reference images is a key component of nearly all PSP data reduction techniques. This is done to ensure that a test image pixel viewing a particular point on the model is ratioed by the reference image pixel which views the same point. Typically registration is needed to account for model motion due to differing airloads when the wind-off and wind-on images are taken. Registration is also necessary when two cameras are used for simultaneous acquisition of data from a dual-frequency paint. This presentation will discuss the advantages and disadvantages of several different image registration techniques. In order to do so, it is necessary to propose both an accuracy requirement for image registration and a means for measuring the accuracy of a particular technique. High contrast regions in the unregistered images are most sensitive to registration errors, and it is proposed that these regions be used to establish the error limits for registration. Once this is done, the actual registration error can be determined by locating corresponding points on the test and reference images, and determining how well a particular registration technique matches them. An example of this procedure is shown for three transforms used to register images of a semispan model. Thirty control points were located on the model. A subset of the points were used to determine the coefficients of each registration transform, and the error with which each transform aligned the remaining points was determined. The results indicate the general superiority of a third-order polynomial over other candidate transforms, as well as showing how registration accuracy varies with number of control points. Finally, it is proposed that image registration may eventually be done away with completely. As more accurate image resection techniques and more detailed model surface grids become available, it will be possible to map raw image data onto the model surface accurately. Intensity
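
    The third-order polynomial transform mentioned above can be fitted from matched control points by ordinary least squares, and the held-out points then re-projected to measure registration error. The sketch below is a generic illustration of that procedure; the monomial ordering and function names are assumptions, not the authors' implementation.

        import numpy as np

        def poly3_terms(x, y):
            """All monomials x**i * y**j with i + j <= 3 (10 terms per point)."""
            return np.stack([x**i * y**j for i in range(4) for j in range(4 - i)], axis=1)

        def fit_poly3(src, dst):
            """Least-squares coefficients mapping src (N, 2) control points onto dst (N, 2)."""
            A = poly3_terms(src[:, 0], src[:, 1])
            cx, *_ = np.linalg.lstsq(A, dst[:, 0], rcond=None)
            cy, *_ = np.linalg.lstsq(A, dst[:, 1], rcond=None)
            return cx, cy

        def apply_poly3(coeffs, pts):
            """Warp (N, 2) points with the fitted third-order transform."""
            cx, cy = coeffs
            A = poly3_terms(pts[:, 0], pts[:, 1])
            return np.stack([A @ cx, A @ cy], axis=1)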

  8. Efficient feature-based 2D/3D registration of transesophageal echocardiography to x-ray fluoroscopy for cardiac interventions

    NASA Astrophysics Data System (ADS)

    Hatt, Charles R.; Speidel, Michael A.; Raval, Amish N.

    2014-03-01

    We present a novel 2D/3D registration algorithm for fusion between transesophageal echocardiography (TEE) and X-ray fluoroscopy (XRF). The TEE probe is modeled as a subset of 3D gradient and intensity point features, which facilitates efficient 3D-to-2D perspective projection. A novel cost function, based on a combination of intensity and edge features, evaluates the registration cost value without the need for time-consuming generation of digitally reconstructed radiographs (DRRs). Validation experiments were performed with simulations and phantom data. For simulations, in silico XRF images of a TEE probe were generated in a number of different pose configurations using a previously acquired CT image. Random misregistrations were applied and our method was used to recover the TEE probe pose and compare the result to the ground truth. Phantom experiments were performed by attaching fiducial markers externally to a TEE probe, imaging the probe with an interventional cardiac angiographic x-ray system, and comparing the pose estimated from the external markers to that estimated from the TEE probe using our algorithm. Simulations found a 3D target registration error of 1.08 (1.92) mm for biplane (monoplane) geometries, while the phantom experiment found a 2D target registration error of 0.69 mm. For phantom experiments, we demonstrated a monoplane tracking frame-rate of 1.38 fps. The proposed feature-based registration method is computationally efficient, resulting in near real-time, accurate image-based registration between TEE and XRF.

  9. Known-component 3D-2D registration for quality assurance of spine surgery pedicle screw placement.

    PubMed

    Uneri, A; De Silva, T; Stayman, J W; Kleinszig, G; Vogt, S; Khanna, A J; Gokaslan, Z L; Wolinsky, J-P; Siewerdsen, J H

    2015-10-21

    A 3D-2D image registration method is presented that exploits knowledge of interventional devices (e.g. K-wires or spine screws, referred to as 'known components') to extend the functionality of intraoperative radiography/fluoroscopy by providing quantitative measurement and quality assurance (QA) of the surgical product. The known-component registration (KC-Reg) algorithm uses robust 3D-2D registration combined with 3D component models of surgical devices known to be present in intraoperative 2D radiographs. Component models were investigated that vary in fidelity from simple parametric models (e.g. approximation of a screw as a simple cylinder, referred to as 'parametrically-known' component [pKC] registration) to precise models based on device-specific CAD drawings (referred to as 'exactly-known' component [eKC] registration). 3D-2D registration from three intraoperative radiographs was solved using the covariance matrix adaptation evolution strategy (CMA-ES) to maximize image-gradient similarity, relating device placement relative to 3D preoperative CT of the patient. Spine phantom and cadaver studies were conducted to evaluate registration accuracy and demonstrate QA of the surgical product by verification of the type of devices delivered and conformance within the 'acceptance window' of the spinal pedicle. Pedicle screws were successfully registered to radiographs acquired from a mobile C-arm, providing TRE 1-4 mm and <5° using simple parametric (pKC) models, further improved to <1 mm and <1° using eKC registration. Using advanced pKC models, screws that did not match the device models specified in the surgical plan were detected with an accuracy of >99%. Visualization of registered devices relative to surgical planning and the pedicle acceptance window provided potentially valuable QA of the surgical product and reliable detection of pedicle screw breach. 3D-2D registration combined with 3D models of known surgical devices offers a

  10. Known-component 3D-2D registration for quality assurance of spine surgery pedicle screw placement

    NASA Astrophysics Data System (ADS)

    Uneri, A.; De Silva, T.; Stayman, J. W.; Kleinszig, G.; Vogt, S.; Khanna, A. J.; Gokaslan, Z. L.; Wolinsky, J.-P.; Siewerdsen, J. H.

    2015-10-01

    A 3D-2D image registration method is presented that exploits knowledge of interventional devices (e.g. K-wires or spine screws—referred to as ‘known components’) to extend the functionality of intraoperative radiography/fluoroscopy by providing quantitative measurement and quality assurance (QA) of the surgical product. The known-component registration (KC-Reg) algorithm uses robust 3D-2D registration combined with 3D component models of surgical devices known to be present in intraoperative 2D radiographs. Component models were investigated that vary in fidelity from simple parametric models (e.g. approximation of a screw as a simple cylinder, referred to as ‘parametrically-known’ component [pKC] registration) to precise models based on device-specific CAD drawings (referred to as ‘exactly-known’ component [eKC] registration). 3D-2D registration from three intraoperative radiographs was solved using the covariance matrix adaptation evolution strategy (CMA-ES) to maximize image-gradient similarity, relating device placement relative to 3D preoperative CT of the patient. Spine phantom and cadaver studies were conducted to evaluate registration accuracy and demonstrate QA of the surgical product by verification of the type of devices delivered and conformance within the ‘acceptance window’ of the spinal pedicle. Pedicle screws were successfully registered to radiographs acquired from a mobile C-arm, providing TRE 1-4 mm and <5° using simple parametric (pKC) models, further improved to <1 mm and <1° using eKC registration. Using advanced pKC models, screws that did not match the device models specified in the surgical plan were detected with an accuracy of >99%. Visualization of registered devices relative to surgical planning and the pedicle acceptance window provided potentially valuable QA of the surgical product and reliable detection of pedicle screw breach. 3D-2D registration combined with 3D models of known surgical

  11. 2D and 3D registration methods for dual-energy contrast-enhanced digital breast tomosynthesis

    NASA Astrophysics Data System (ADS)

    Lau, Kristen C.; Roth, Susan; Maidment, Andrew D. A.

    2014-03-01

    Contrast-enhanced digital breast tomosynthesis (CE-DBT) uses an iodinated contrast agent to image the three-dimensional breast vasculature. The University of Pennsylvania is conducting a CE-DBT clinical study in patients with known breast cancers. The breast is compressed continuously and imaged at four time points (one pre-contrast, three post-contrast). A hybrid subtraction scheme is proposed. First, dual-energy (DE) images are obtained by a weighted logarithmic subtraction of the high-energy and low-energy image pairs. Then, post-contrast DE images are subtracted from the pre-contrast DE image. This hybrid temporal subtraction of DE images is performed to analyze iodine uptake, but suffers from motion artifacts. Employing image registration further helps to correct for motion, enhancing the evaluation of vascular kinetics. Registration using ANTS (Advanced Normalization Tools) is performed in an iterative manner. Mutual information optimization first corrects large-scale motions. Normalized cross-correlation optimization then iteratively corrects fine-scale misalignment. Two methods have been evaluated: a 2D method using a slice-by-slice approach, and a 3D method using a volumetric approach to account for out-of-plane breast motion. Our results demonstrate that iterative registration qualitatively improves with each iteration (five iterations total). Motion artifacts near the edge of the breast are corrected effectively and structures within the breast (e.g. blood vessels, surgical clip) are better visualized. Statistical and clinical evaluations of registration accuracy in the CE-DBT images are ongoing.
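
    The two subtraction steps described above can be written compactly. The sketch below shows a weighted logarithmic dual-energy subtraction followed by the temporal subtraction of the pre-contrast DE image; the weight, the epsilon guard, and the function names are assumptions for illustration only.

        import numpy as np

        def dual_energy(high, low, w=0.5, eps=1e-6):
            """Weighted logarithmic subtraction of a high/low-energy image pair."""
            return np.log(high + eps) - w * np.log(low + eps)

        def temporal_iodine(de_pre, de_post):
            """Hybrid temporal subtraction: post-contrast DE minus pre-contrast DE."""
            return de_post - de_pre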

  12. Image Segmentation, Registration, Compression, and Matching

    NASA Technical Reports Server (NTRS)

    Yadegar, Jacob; Wei, Hai; Yadegar, Joseph; Ray, Nilanjan; Zabuawala, Sakina

    2011-01-01

    A novel computational framework was developed for 2D affine-invariant matching exploiting a parameter space. Named the affine invariant parameter space (AIPS), the technique can be applied to many image-processing and computer-vision problems, including image registration, template matching, and object tracking from image sequences. The AIPS is formed by the parameters in an affine combination of a set of feature points in the image plane. In cases where the entire image can be assumed to have undergone a single affine transformation, the new AIPS match metric and matching framework become very effective (compared with the state-of-the-art methods at the time of this reporting). No knowledge about scaling or any other transformation parameters needs to be known a priori to apply the AIPS framework. An automated suite of software tools has been created to provide accurate image segmentation (for data cleaning) and high-quality 2D image and 3D surface registration (for fusing multi-resolution terrain, image, and map data). These tools are capable of supporting existing GIS toolkits already in the marketplace, and will also be usable in a stand-alone fashion. The toolkit applies novel algorithmic approaches for image segmentation, feature extraction, and registration of 2D imagery and 3D surface data, which supports first-pass, batched, fully automatic feature extraction (for segmentation), and registration. A hierarchical and adaptive approach is taken for achieving automatic feature extraction, segmentation, and registration. Surface registration is the process of aligning two (or more) data sets to a common coordinate system, during which the transformation between their different coordinate systems is determined. Also developed here is a novel volumetric surface modeling and compression technique that provides both quality-guaranteed mesh surface approximations and compaction of the model sizes by efficiently coding the geometry and connectivity

  13. Remapping of digital subtraction angiography on a standard fluoroscopy system using 2D-3D registration

    NASA Astrophysics Data System (ADS)

    Alhrishy, Mazen G.; Varnavas, Andreas; Guyot, Alexis; Carrell, Tom; King, Andrew; Penney, Graeme

    2015-03-01

    Fluoroscopy-guided endovascular interventions are being performed for more and more complex cases with longer screening times. However, X-ray is much better at visualizing interventional devices and dense structures than vasculature. To visualise vasculature, angiography screening is essential but requires the use of iodinated contrast medium (ICM), which is nephrotoxic. Acute kidney injury is the main life-threatening complication of ICM. Digital subtraction angiography (DSA) is also often a major contributor to overall patient radiation dose (81% reported). Furthermore, a DSA image is only valid for the current interventional view and not the new view once the C-arm is moved. In this paper, we propose the use of 2D-3D image registration between intraoperative images and the preoperative CT volume to facilitate DSA remapping using a standard fluoroscopy system. This allows repeated ICM-free DSA and has the potential to enable a reduction in ICM usage and radiation dose. Experiments were carried out using 9 clinical datasets. In total, 41 DSA images were remapped. For each dataset, the maximum and averaged remapping accuracy errors were calculated and presented. Numerical results showed an overall averaged error of 2.50 mm, with 7 patients scoring averaged errors < 3 mm and 2 patients < 6 mm.

  14. Registration of interferometric SAR images

    NASA Technical Reports Server (NTRS)

    Lin, Qian; Vesecky, John F.; Zebker, Howard A.

    1992-01-01

    Interferometric synthetic aperture radar (INSAR) is a new way of performing topography mapping. Among the factors critical to mapping accuracy is the registration of the complex SAR images from repeated orbits. A new algorithm for registering interferometric SAR images is presented. A new figure of merit, the average fluctuation function of the phase difference image, is proposed to evaluate the fringe pattern quality. The process of adjusting the registration parameters according to the fringe pattern quality is optimized through a downhill simplex minimization algorithm. The results of applying the proposed algorithm to register two pairs of Seasat SAR images with a short baseline (75 m) and a long baseline (500 m) are shown. It is found that the average fluctuation function is a very stable measure of fringe pattern quality allowing very accurate registration.

  15. Fast multi-core based multimodal registration of 2D cross-sections and 3D datasets

    PubMed Central

    2010-01-01

    Background Solving bioinformatics tasks often requires extensive computational power. Recent trends in processor architecture combine multiple cores into a single chip to improve overall performance. The Cell Broadband Engine (CBE), a heterogeneous multi-core processor, provides power-efficient and cost-effective high-performance computing. One application area is image analysis and visualisation, in particular registration of 2D cross-sections into 3D image datasets. Such techniques can be used to put different image modalities into spatial correspondence, for example, 2D images of histological cuts into morphological 3D frameworks. Results We evaluate the CBE-driven PlayStation 3 as a high performance, cost-effective computing platform by adapting a multimodal alignment procedure to several characteristic hardware properties. The optimisations are based on partitioning, vectorisation, branch reducing and loop unrolling techniques with special attention to 32-bit multiplies and limited local storage on the computing units. We show how a typical image analysis and visualisation problem, the multimodal registration of 2D cross-sections and 3D datasets, benefits from the multi-core based implementation of the alignment algorithm. We discuss several CBE-based optimisation methods and compare our results to standard solutions. More information and the source code are available from http://cbe.ipk-gatersleben.de. Conclusions The results demonstrate that the CBE processor in a PlayStation 3 accelerates computationally intensive multimodal registration, which is of great importance in biological/medical image processing. The PlayStation 3 as a low-cost CBE-based platform offers an efficient option to conventional hardware to solve computational problems in image processing and bioinformatics. PMID:20064262

  16. 3D–2D registration in mobile radiographs: algorithm development and preliminary clinical evaluation

    PubMed Central

    Otake, Yoshito; Wang, Adam S; Uneri, Ali; Kleinszig, Gerhard; Vogt, Sebastian; Aygun, Nafi; Lo, Sheng-fu L; Wolinsky, Jean-Paul; Gokaslan, Ziya L; Siewerdsen, Jeffrey H

    2015-01-01

    An image-based 3D–2D registration method is presented using radiographs acquired in the uncalibrated, unconstrained geometry of mobile radiography. The approach extends a previous method for six degree-of-freedom (DOF) registration in C-arm fluoroscopy (namely ‘LevelCheck’) to solve the 9-DOF estimate of geometry in which the position of the source and detector are unconstrained. The method was implemented using a gradient correlation similarity metric and stochastic derivative-free optimization on a GPU. Development and evaluation were conducted in three steps. First, simulation studies were performed that involved a CT scan of an anthropomorphic body phantom and 1000 randomly generated digitally reconstructed radiographs in posterior–anterior and lateral views. A median projection distance error (PDE) of 0.007 mm was achieved with 9-DOF registration compared to 0.767 mm for 6-DOF. Second, cadaver studies were conducted using mobile radiographs acquired in three anatomical regions (thorax, abdomen and pelvis) and three levels of source-detector distance (~800, ~1000 and ~1200 mm). The 9-DOF method achieved a median PDE of 0.49 mm (compared to 2.53 mm for the 6-DOF method) and demonstrated robustness in the unconstrained imaging geometry. Finally, a retrospective clinical study was conducted with intraoperative radiographs of the spine exhibiting real anatomical deformation and image content mismatch (e.g. interventional devices in the radiograph that were not in the CT), demonstrating a PDE = 1.1 mm for the 9-DOF approach. Average computation time was 48.5 s, involving 687 701 function evaluations on average, compared to 18.2 s for the 6-DOF method. Despite the greater computational load, the 9-DOF method may offer a valuable tool for target localization (e.g. decision support in level counting) as well as safety and quality assurance checks at the conclusion of a procedure (e.g. overlay of planning data on the radiograph for verification of the surgical
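
    The gradient correlation similarity metric named above can be sketched as the normalized cross-correlation of the row- and column-wise gradient images of the DRR and the radiograph, averaged over the two directions. The numpy version below is only an illustration; the published metric and its implementation details may differ.

        import numpy as np

        def _ncc(a, b):
            """Normalized cross-correlation of two equally sized arrays."""
            a = a - a.mean()
            b = b - b.mean()
            d = np.sqrt((a * a).sum() * (b * b).sum())
            return float((a * b).sum() / d) if d > 0 else 0.0

        def gradient_correlation(drr, radiograph):
            """Average NCC of the axis-0 and axis-1 gradient images."""
            g0d, g1d = np.gradient(drr)
            g0r, g1r = np.gradient(radiograph)
            return 0.5 * (_ncc(g0d, g0r) + _ncc(g1d, g1r))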

  17. 3D-2D registration in mobile radiographs: algorithm development and preliminary clinical evaluation

    NASA Astrophysics Data System (ADS)

    Otake, Yoshito; Wang, Adam S.; Uneri, Ali; Kleinszig, Gerhard; Vogt, Sebastian; Aygun, Nafi; Lo, Sheng-fu L.; Wolinsky, Jean-Paul; Gokaslan, Ziya L.; Siewerdsen, Jeffrey H.

    2015-03-01

    An image-based 3D-2D registration method is presented using radiographs acquired in the uncalibrated, unconstrained geometry of mobile radiography. The approach extends a previous method for six degree-of-freedom (DOF) registration in C-arm fluoroscopy (namely ‘LevelCheck’) to solve the 9-DOF estimate of geometry in which the position of the source and detector are unconstrained. The method was implemented using a gradient correlation similarity metric and stochastic derivative-free optimization on a GPU. Development and evaluation were conducted in three steps. First, simulation studies were performed that involved a CT scan of an anthropomorphic body phantom and 1000 randomly generated digitally reconstructed radiographs in posterior-anterior and lateral views. A median projection distance error (PDE) of 0.007 mm was achieved with 9-DOF registration compared to 0.767 mm for 6-DOF. Second, cadaver studies were conducted using mobile radiographs acquired in three anatomical regions (thorax, abdomen and pelvis) and three levels of source-detector distance (~800, ~1000 and ~1200 mm). The 9-DOF method achieved a median PDE of 0.49 mm (compared to 2.53 mm for the 6-DOF method) and demonstrated robustness in the unconstrained imaging geometry. Finally, a retrospective clinical study was conducted with intraoperative radiographs of the spine exhibiting real anatomical deformation and image content mismatch (e.g. interventional devices in the radiograph that were not in the CT), demonstrating a PDE = 1.1 mm for the 9-DOF approach. Average computation time was 48.5 s, involving 687 701 function evaluations on average, compared to 18.2 s for the 6-DOF method. Despite the greater computational load, the 9-DOF method may offer a valuable tool for target localization (e.g. decision support in level counting) as well as safety and quality assurance checks at the conclusion of a procedure (e.g. overlay of planning data on the radiograph for verification of

  18. Finding a Good Feature Detector-Descriptor Combination for the 2D Keypoint-Based Registration of TLS Point Clouds

    NASA Astrophysics Data System (ADS)

    Urban, S.; Weinmann, M.

    2015-08-01

    The automatic and accurate registration of terrestrial laser scanning (TLS) data is a topic of great interest in the domains of city modeling, construction surveying or cultural heritage. While many of the most recent approaches focus on keypoint-based point cloud registration relying on forward-projected 2D keypoints detected in panoramic intensity images, little attention has been paid to the selection of appropriate keypoint detector-descriptor combinations. Instead, keypoints are commonly detected and described by applying well-known methods such as the Scale Invariant Feature Transform (SIFT) or Speeded-Up Robust Features (SURF). In this paper, we present a framework for evaluating the influence of different keypoint detector-descriptor combinations on the results of point cloud registration. For this purpose, we involve five different approaches for extracting local features from the panoramic intensity images and exploit the range information of putative feature correspondences in order to define bearing vectors which, in turn, may be exploited to transfer the task of point cloud registration from the object space to the observation space. With an extensive evaluation of our framework on a standard benchmark TLS dataset, we clearly demonstrate that replacing SIFT and SURF detectors and descriptors by more recent approaches significantly improves point cloud registration in terms of accuracy, efficiency and robustness.
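
    For reference, the baseline detector-descriptor pipeline that the evaluation above compares against newer alternatives looks roughly like the following, assuming an OpenCV build with SIFT available; the ratio-test threshold is a common default, not a value from the paper.

        import cv2

        def match_keypoints(img1, img2, ratio=0.75):
            """SIFT detection and descriptor matching with Lowe's ratio test."""
            sift = cv2.SIFT_create()
            kp1, des1 = sift.detectAndCompute(img1, None)
            kp2, des2 = sift.detectAndCompute(img2, None)
            matcher = cv2.BFMatcher(cv2.NORM_L2)
            matches = matcher.knnMatch(des1, des2, k=2)
            good = [m for m, n in matches if m.distance < ratio * n.distance]
            return kp1, kp2, good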

  19. Registration Of SAR Images With Multisensor Images

    NASA Technical Reports Server (NTRS)

    Evans, Diane L.; Burnette, Charles F.; Van Zyl, Jakob J.

    1993-01-01

    Semiautomated technique intended primarily to facilitate registration of polarimetric synthetic-aperture-radar (SAR) images with other images of the same or partly overlapping terrain while preserving the polarization information conveyed by the SAR data. The technique is generally applicable in the sense that one or both of the images to be registered may be generated by polarimetric or nonpolarimetric SAR, infrared radiometry, conventional photography, or any other applicable sensing method.

  20. Robust 2D/3D registration for fast-flexion motion of the knee joint using hybrid optimization.

    PubMed

    Ohnishi, Takashi; Suzuki, Masahiko; Kobayashi, Tatsuya; Naomoto, Shinji; Sukegawa, Tomoyuki; Nawata, Atsushi; Haneishi, Hideaki

    2013-01-01

    Previously, we proposed a 2D/3D registration method that uses Powell's algorithm to obtain the 3D motion of a knee joint from 3D computed-tomography and bi-plane fluoroscopic images. The 2D/3D registration is performed consecutively and automatically for each frame of the fluoroscopic images. This method starts from the optimum parameters of the previous frame for each frame except for the first one, and it searches for the next set of optimum parameters using Powell's algorithm. However, if the flexion motion of the knee joint is fast, it is likely that Powell's algorithm will provide a mismatch because the initial parameters are far from the correct ones. In this study, we applied a hybrid optimization algorithm (HPS) combining Powell's algorithm with the Nelder-Mead simplex (NM-simplex) algorithm to overcome this problem. The performance of the HPS was compared with that of Powell's algorithm alone, the NM-simplex algorithm alone, the Quasi-Newton algorithm, and a hybrid algorithm combining the Quasi-Newton and NM-simplex algorithms, on five patient data sets, in terms of the root-mean-square error (RMSE), target registration error (TRE), success rate, and processing time. The RMSE, TRE, and success rate of the HPS were better than those of the other optimization algorithms, and the processing time was similar to that of Powell's algorithm alone. PMID:23138929
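
    A hybrid optimisation in the spirit of the HPS described above can be sketched with SciPy: start Powell's method from the previous frame's pose and refine (or rescue) the result with Nelder-Mead. The cost function below is a toy placeholder standing in for the image dissimilarity between the fluoroscopic frames and DRRs of the CT volume.

        import numpy as np
        from scipy.optimize import minimize

        def hybrid_register(cost, x_prev):
            """Powell followed by Nelder-Mead, started from the previous frame's pose."""
            res1 = minimize(cost, x_prev, method="Powell")
            res2 = minimize(cost, res1.x, method="Nelder-Mead")
            return res2.x if res2.fun < res1.fun else res1.x

        # Toy cost standing in for image dissimilarity over 6 pose parameters
        target = np.array([1.0, -2.0, 0.5, 3.0, 0.0, 1.5])
        cost = lambda p: float(np.sum((p - target) ** 2))
        print(hybrid_register(cost, x_prev=np.zeros(6)))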

  1. Medical image registration using sparse coding of image patches.

    PubMed

    Afzali, Maryam; Ghaffari, Aboozar; Fatemizadeh, Emad; Soltanian-Zadeh, Hamid

    2016-06-01

    Image registration is a basic task in medical image processing applications like group analysis and atlas construction. Similarity measure is a critical ingredient of image registration. Intensity distortion of medical images is not considered in most previous similarity measures. Therefore, in the presence of bias field distortions, they do not generate an acceptable registration. In this paper, we propose a sparse based similarity measure for mono-modal images that considers non-stationary intensity and spatially-varying distortions. The main idea behind this measure is that the aligned image is constructed by an analysis dictionary trained using the image patches. For this purpose, we use "Analysis K-SVD" to train the dictionary and find the sparse coefficients. We utilize image patches to construct the analysis dictionary and then we employ the proposed sparse similarity measure to find a non-rigid transformation using free form deformation (FFD). Experimental results show that the proposed approach is able to robustly register 2D and 3D images in both simulated and real cases. The proposed method outperforms other state-of-the-art similarity measures and decreases the transformation error compared to the previous methods. Even in the presence of bias field distortion, the proposed method aligns images without any preprocessing. PMID:27085311

  2. Fast 3D fluid registration of brain magnetic resonance images

    NASA Astrophysics Data System (ADS)

    Leporé, Natasha; Chou, Yi-Yu; Lopez, Oscar L.; Aizenstein, Howard J.; Becker, James T.; Toga, Arthur W.; Thompson, Paul M.

    2008-03-01

    Fluid registration is widely used in medical imaging to track anatomical changes, to correct image distortions, and to integrate multi-modality data. Fluid mappings guarantee that the template image deforms smoothly into the target, without tearing or folding, even when large deformations are required for accurate matching. Here we implemented an intensity-based fluid registration algorithm, accelerated by using a filter designed by Bro-Nielsen and Gramkow. We validated the algorithm on 2D and 3D geometric phantoms using the mean square difference between the final registered image and target as a measure of the accuracy of the registration. In tests on phantom images with different levels of overlap, varying amounts of Gaussian noise, and different intensity gradients, the fluid method outperformed a more commonly used elastic registration method, both in terms of accuracy and in avoiding topological errors during deformation. We also studied the effect of varying the viscosity coefficients in the viscous fluid equation, to optimize registration accuracy. Finally, we applied the fluid registration algorithm to a dataset of 2D binary corpus callosum images and 3D volumetric brain MRIs from 14 healthy individuals to assess its accuracy and robustness.

  3. Fast voxel-based 2D/3D registration algorithm using a volume rendering method based on the shear-warp factorization

    NASA Astrophysics Data System (ADS)

    Weese, Juergen; Goecke, Roland; Penney, Graeme P.; Desmedt, Paul; Buzug, Thorsten M.; Schumann, Heidrun

    1999-05-01

    2D/3D registration makes it possible to use pre-operative CT scans for navigation purposes during X-ray fluoroscopy guided interventions. We present a fast voxel-based method for this registration task, which uses a recently introduced similarity measure (pattern intensity). This measure is especially suitable for 2D/3D registration, because it is robust with respect to structures such as a stent visible in the X-ray fluoroscopy image but not in the CT scan. The method uses only a part of the CT scan for the generation of digitally reconstructed radiographs (DRRs) to accelerate their computation. Nevertheless, computation time is crucial for intra-operative application and a further speed-up is required, because numerous DRRs must be computed. For that reason, the suitability of different volume rendering methods for 2D/3D registration has been investigated. A method based on the shear-warp factorization of the viewing transformation turned out to be especially suitable and builds the basis of the registration algorithm. The algorithm has been applied to images of a spine phantom and to clinical images. For comparison, registration results have been calculated using ray-casting. The shear-warp factorization based rendering method accelerates registration by a factor of up to seven compared to ray-casting without degrading registration accuracy. Using a vertebra as feature for registration, computation time is in the range of 3-4s (Sun UltraSparc, 300 MHz) which is acceptable for intra-operative application.

  4. Image registration method for medical image sequences

    DOEpatents

    Gee, Timothy F.; Goddard, James S.

    2013-03-26

    Image registration of low contrast image sequences is provided. In one aspect, a desired region of an image is automatically segmented and only the desired region is registered. Active contours and adaptive thresholding of intensity or edge information may be used to segment the desired regions. A transform function is defined to register the segmented region, and sub-pixel information may be determined using one or more interpolation methods.

  5. Automatic intensity-based 3D-to-2D registration of CT volume and dual-energy digital radiography for the detection of cardiac calcification

    NASA Astrophysics Data System (ADS)

    Chen, Xiang; Gilkeson, Robert; Fei, Baowei

    2007-03-01

    We are investigating three-dimensional (3D) to two-dimensional (2D) registration methods for computed tomography (CT) and dual-energy digital radiography (DR) for the detection of coronary artery calcification. CT is an established tool for the diagnosis of coronary artery diseases (CADs). Dual-energy digital radiography could be a cost-effective alternative for screening coronary artery calcification. In order to utilize CT as the "gold standard" to evaluate the ability of DR images for the detection and localization of calcium, we developed an automatic intensity-based 3D-to-2D registration method for 3D CT volumes and 2D DR images. To generate digitally reconstructed radiographs (DRRs) from the CT volumes, we developed three projection methods, i.e. Gaussian-weighted projection, threshold-based projection, and average-based projection. We tested normalized cross correlation (NCC) and normalized mutual information (NMI) as similarity measures. We used the Downhill Simplex method as the search strategy. Simulated projection images from CT were fused with the corresponding DR images to evaluate the localization of cardiac calcification. The registration method was evaluated by digital phantoms, physical phantoms, and clinical data sets. The results from the digital phantoms show that the success rate is 100% with mean errors of less than 0.8 mm and 0.2 degree for both NCC and NMI. The registration accuracy of the physical phantoms is 0.34 +/- 0.27 mm. Color overlay and 3D visualization of the clinical data show that the two images are registered well. This is consistent with the improvement of the NMI values from 0.20 +/- 0.03 to 0.25 +/- 0.03 after registration. The automatic 3D-to-2D registration method is accurate and robust and may provide a useful tool to evaluate the dual-energy DR images for the detection of coronary artery calcification.
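
    The three CT-to-projection operators named above can be sketched along a single axis with numpy, as below. The axis choice, threshold, and Gaussian parameters are assumptions; the exact definitions used in the paper may differ.

        import numpy as np

        def average_projection(vol, axis=1):
            """Mean attenuation along the ray direction."""
            return vol.mean(axis=axis)

        def threshold_projection(vol, thr, axis=1):
            """Project only voxels above an intensity threshold (e.g. calcium/bone)."""
            return np.where(vol > thr, vol, 0.0).sum(axis=axis)

        def gaussian_weighted_projection(vol, mu, sigma, axis=1):
            """Weight voxels by a Gaussian in intensity before summing along the ray."""
            w = np.exp(-((vol - mu) ** 2) / (2.0 * sigma ** 2))
            return (w * vol).sum(axis=axis)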

  6. Automatic Intensity-based 3D-to-2D Registration of CT Volume and Dual-energy Digital Radiography for the Detection of Cardiac Calcification

    PubMed Central

    Chen, Xiang; Gilkeson, Robert; Fei, Baowei

    2013-01-01

    We are investigating three-dimensional (3D) to two-dimensional (2D) registration methods for computed tomography (CT) and dual-energy digital radiography (DR) for the detection of coronary artery calcification. CT is an established tool for the diagnosis of coronary artery diseases (CADs). Dual-energy digital radiography could be a cost-effective alternative for screening coronary artery calcification. In order to utilize CT as the “gold standard” to evaluate the ability of DR images for the detection and localization of calcium, we developed an automatic intensity-based 3D-to-2D registration method for 3D CT volumes and 2D DR images. To generate digitally reconstructed radiographs (DRRs) from the CT volumes, we developed three projection methods, i.e. Gaussian-weighted projection, threshold-based projection, and average-based projection. We tested normalized cross correlation (NCC) and normalized mutual information (NMI) as similarity measures. We used the Downhill Simplex method as the search strategy. Simulated projection images from CT were fused with the corresponding DR images to evaluate the localization of cardiac calcification. The registration method was evaluated by digital phantoms, physical phantoms, and clinical data sets. The results from the digital phantoms show that the success rate is 100% with mean errors of less than 0.8 mm and 0.2 degree for both NCC and NMI. The registration accuracy of the physical phantoms is 0.34 ± 0.27 mm. Color overlay and 3D visualization of the clinical data show that the two images are registered well. This is consistent with the improvement of the NMI values from 0.20 ± 0.03 to 0.25 ± 0.03 after registration. The automatic 3D-to-2D registration method is accurate and robust and may provide a useful tool to evaluate the dual-energy DR images for the detection of coronary artery calcification. PMID:24386527

  7. Evaluating Similarity Measures for Brain Image Registration

    PubMed Central

    Razlighi, Q. R.; Kehtarnavaz, N.; Yousefi, S.

    2013-01-01

    Evaluation of similarity measures for image registration is a challenging problem due to its complex interaction with the underlying optimization, regularization, image type and modality. We propose a single performance metric, named robustness, as part of a new evaluation method which quantifies the effectiveness of similarity measures for brain image registration while eliminating the effects of the other parts of the registration process. We show empirically that similarity measures with higher robustness are more effective in registering degraded images and are also more successful in performing intermodal image registration. Further, we introduce a new similarity measure, called normalized spatial mutual information, for 3D brain image registration whose robustness is shown to be much higher than the existing ones. Consequently, it tolerates greater image degradation and provides more consistent outcomes for intermodal brain image registration. PMID:24039378

  8. Interactive initialization for 2D/3D intra-operative registration using the Microsoft Kinect

    NASA Astrophysics Data System (ADS)

    Gong, Ren Hui; Güler, Özgur; Yaniv, Ziv

    2013-03-01

    All 2D/3D anatomy based rigid registration algorithms are iterative, requiring an initial estimate of the 3D data pose. Current initialization methods have limited applicability in the operating room setting, due to the constraints imposed by this environment or due to insufficient accuracy. In this work we use the Microsoft Kinect device to allow the surgeon to interactively initialize the registration process. A Kinect sensor is used to simulate the mouse-based operations in a conventional manual initialization approach, obviating the need for physical contact with an input device. Different gestures from both arms are detected from the sensor in order to set or switch the required working contexts. 3D hand motion provides the six degree-of-freedom controls for manipulating the pre-operative data in the 3D space. We evaluated our method for both X-ray/CT and X-ray/MR initialization using three publicly available reference data sets. Results show that, with initial target registration errors of 117.7 +/- 28.9 mm, a user is able to achieve final errors of 5.9 +/- 2.6 mm within 158 +/- 65 sec using the Kinect-based approach, compared to 4.8 +/- 2.0 mm and 88 +/- 60 sec when using the mouse for interaction. Based on these results we conclude that this method is sufficiently accurate for initialization of X-ray/CT and X-ray/MR registration in the OR.

  9. 2D-3D Registration of CT Vertebra Volume to Fluoroscopy Projection: A Calibration Model Assessment

    NASA Astrophysics Data System (ADS)

    Bifulco, P.; Cesarelli, M.; Allen, R.; Romano, M.; Fratini, A.; Pasquariello, G.

    2009-12-01

    This study extends previous research concerning intervertebral motion registration by means of 2D dynamic fluoroscopy to obtain a more comprehensive 3D description of vertebral kinematics. The problem of estimating the 3D rigid pose of a CT volume of a vertebra from its 2D X-ray fluoroscopy projection is addressed. 2D-3D registration is obtained by maximising a measure of similarity between Digitally Reconstructed Radiographs (obtained from the CT volume) and the real fluoroscopic projection. X-ray energy correction was performed. To assess the method, a calibration model was realised: a dry sheep vertebra was rigidly fixed to a frame of reference including metallic markers. Accurate measurement of 3D orientation was obtained via single-camera calibration of the markers and held as the true 3D vertebra position; then, the vertebra 3D pose was estimated and the results compared. Error analysis revealed accuracy of the order of 0.1 degree for the rotation angles, of about 1 mm for displacements parallel to the fluoroscopic plane, and of the order of 10 mm for the orthogonal displacement.

  10. Interactive multigrid refinement for deformable image registration.

    PubMed

    Zhou, Wu; Xie, Yaoqin

    2013-01-01

    Deformable image registration is the spatial mapping of corresponding locations between images and can be used for important applications in radiotherapy. Although numerous methods have attempted to register deformable medical images automatically, such as salient-feature-based registration (SFBR), free-form deformation (FFD), and demons, no automatic method for registration is perfect, and no generic automatic algorithm has been shown to work properly for clinical applications due to the fact that the deformation field is often complex and cannot be estimated well by current automatic deformable registration methods. This paper focuses on how to revise registration results interactively for deformable image registration. We can manually revise the transformed image locally in a hierarchical multigrid manner to make the transformed image register well with the reference image. The proposed method is based on multilevel B-splines to interactively revise the deformable transformation in the overlapping region between the reference image and the transformed image. The resulting deformation controls the shape of the transformed image and produces a nice registration or improves the registration results of other registration methods. Experimental results in clinical medical images for adaptive radiotherapy demonstrated the effectiveness of the proposed method. PMID:24232828

  11. Practical pseudo-3D registration for large tomographic images

    NASA Astrophysics Data System (ADS)

    Liu, Xuan; Laperre, Kjell; Sasov, Alexander

    2014-09-01

    Image registration is a powerful tool in various tomographic applications. Our main focus is on microCT applications in which samples/animals can be scanned multiple times under different conditions or at different time points. For this purpose, a registration tool capable of handling fairly large volumes has been developed, using a novel pseudo-3D method to achieve fast and interactive registration with simultaneous 3D visualization. To reduce computation complexity in 3D registration, we decompose it into several 2D registrations, which are applied to the orthogonal views (transaxial, sagittal and coronal) sequentially and iteratively. After registration in each view, the next view is retrieved with the new transformation matrix for registration. This reduces the computation complexity significantly. For rigid transform, we only need to search for 3 parameters (2 shifts, 1 rotation) in each of the 3 orthogonal views instead of 6 (3 shifts, 3 rotations) for full 3D volume. In addition, the amount of voxels involved is also significantly reduced. For the proposed pseudo-3D method, image-based registration is employed, with Sum of Square Difference (SSD) as the similarity measure. The searching engine is Powell's conjugate direction method. In this paper, only rigid transform is used. However, it can be extended to affine transform by adding scaling and possibly shearing to the transform model. We have noticed that more information can be used in the 2D registration if Maximum Intensity Projections (MIP) or Parallel Projections (PP) is used instead of the orthogonal views. Also, other similarity measures, such as covariance or mutual information, can be easily incorporated. The initial evaluation on microCT data shows very promising results. Two application examples are shown: dental samples before and after treatment and structural changes in materials before and after compression. Evaluation on registration accuracy between pseudo-3D method and true 3D method has
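
    The pseudo-3D idea above, decomposing one 3D registration into repeated 2D registrations on the orthogonal views, can be sketched as follows. For brevity the sketch estimates only a 3D translation, uses the central slice of each view, and takes SSD with Powell's method as in the paper; everything else is an assumption made for illustration.

        import numpy as np
        from scipy.ndimage import shift
        from scipy.optimize import minimize

        def ssd(a, b):
            """Sum of squared differences between two arrays."""
            return float(np.sum((a - b) ** 2))

        def register_pseudo3d(fixed, moving, n_iter=3):
            """Estimate a 3D translation via sequential 2D registrations of the mid-planes."""
            t = np.zeros(3)
            for _ in range(n_iter):
                for view_axis in (0, 1, 2):                      # transaxial, coronal, sagittal
                    idx = fixed.shape[view_axis] // 2
                    f2d = np.take(fixed, idx, axis=view_axis)
                    m2d = np.take(shift(moving, t), idx, axis=view_axis)
                    in_plane = [a for a in (0, 1, 2) if a != view_axis]
                    cost = lambda p: ssd(f2d, shift(m2d, p))     # 2D in-plane search only
                    res = minimize(cost, np.zeros(2), method="Powell")
                    t[in_plane] += res.x                         # accumulate the update
            return t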

  12. 32 CFR 1630.26 - Class 2-D: Registrant deferred because of study preparing for the ministry.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 32 National Defense 6 2010-07-01 2010-07-01 false Class 2-D: Registrant deferred because of study... study preparing for the ministry. In accord with part 1639 of this chapter any registrant shall be... direction of a recognized church or religious organization; and (b) Who is satisfactorily pursuing a...

  13. 32 CFR 1630.26 - Class 2-D: Registrant deferred because of study preparing for the ministry.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 32 National Defense 6 2014-07-01 2014-07-01 false Class 2-D: Registrant deferred because of study... study preparing for the ministry. In accord with part 1639 of this chapter any registrant shall be... direction of a recognized church or religious organization; and (b) Who is satisfactorily pursuing a...

  14. 32 CFR 1630.26 - Class 2-D: Registrant deferred because of study preparing for the ministry.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 32 National Defense 6 2012-07-01 2012-07-01 false Class 2-D: Registrant deferred because of study... study preparing for the ministry. In accord with part 1639 of this chapter any registrant shall be... direction of a recognized church or religious organization; and (b) Who is satisfactorily pursuing a...

  15. 32 CFR 1630.26 - Class 2-D: Registrant deferred because of study preparing for the ministry.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 32 National Defense 6 2013-07-01 2013-07-01 false Class 2-D: Registrant deferred because of study... study preparing for the ministry. In accord with part 1639 of this chapter any registrant shall be... direction of a recognized church or religious organization; and (b) Who is satisfactorily pursuing a...

  16. 32 CFR 1630.26 - Class 2-D: Registrant deferred because of study preparing for the ministry.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 32 National Defense 6 2011-07-01 2011-07-01 false Class 2-D: Registrant deferred because of study... study preparing for the ministry. In accord with part 1639 of this chapter any registrant shall be... direction of a recognized church or religious organization; and (b) Who is satisfactorily pursuing a...

  17. Automated 2D-3D registration of a radiograph and a cone beam CT using line-segment enhancement

    SciTech Connect

    Munbodh, Reshma; Jaffray, David A.; Moseley, Douglas J.; Chen Zhe; Knisely, Jonathan P.S.; Cathier, Pascal; Duncan, James S.

    2006-05-15

    The objective of this study was to develop a fully automated two-dimensional (2D)-three-dimensional (3D) registration framework to quantify setup deviations in prostate radiation therapy from cone beam CT (CBCT) data and a single AP radiograph. A kilovoltage CBCT image and kilovoltage AP radiograph of an anthropomorphic phantom of the pelvis were acquired at 14 accurately known positions. The shifts in the phantom position were subsequently estimated by registering digitally reconstructed radiographs (DRRs) from the 3D CBCT scan to the AP radiographs through the correlation of enhanced linear image features mainly representing bony ridges. Linear features were enhanced by filtering the images with 'sticks', short line segments which are varied in orientation to achieve the maximum projection value at every pixel in the image. The mean (and standard deviations) of the absolute errors in estimating translations along the three orthogonal axes in millimeters were 0.134 (0.096) AP (out-of-plane), 0.021 (0.023) ML and 0.020 (0.020) SI. The corresponding errors for rotations in degrees were 0.011 (0.009) AP, 0.029 (0.016) ML (out-of-plane), and 0.030 (0.028) SI (out-of-plane). Preliminary results with megavoltage patient data have also been reported. The results suggest that it may be possible to enhance anatomic features that are common to DRRs from a CBCT image and a single AP radiograph of the pelvis for use in a completely automated and accurate 2D-3D registration framework for setup verification in prostate radiotherapy. This technique is theoretically applicable to other rigid bony structures such as the cranial vault or skull base and piecewise rigid structures such as the spine.
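
    The 'sticks' enhancement described above can be approximated with a small bank of oriented line kernels: each pixel keeps the maximum mean response over all orientations, which boosts thin bony ridges before the feature correlation. In the sketch below, the stick length, the number of orientations, and the uniform stick weights are assumptions.

        import numpy as np
        from scipy.ndimage import convolve

        def stick_kernels(length=11, n_angles=16):
            """Short line-segment kernels at n_angles orientations, rasterised onto a grid."""
            kernels, r = [], length // 2
            for ang in np.linspace(0.0, np.pi, n_angles, endpoint=False):
                k = np.zeros((length, length))
                for t in np.linspace(-r, r, 4 * length):
                    i = int(round(r + t * np.sin(ang)))
                    j = int(round(r + t * np.cos(ang)))
                    k[i, j] = 1.0
                kernels.append(k / k.sum())
            return kernels

        def sticks_filter(image):
            """Maximum mean-along-a-stick response over all orientations, per pixel."""
            responses = [convolve(image, k, mode="nearest") for k in stick_kernels()]
            return np.max(responses, axis=0)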

  18. Bi-planar 2D-to-3D registration in Fourier domain for stereoscopic x-ray motion tracking

    NASA Astrophysics Data System (ADS)

    Zosso, Dominique; Le Callennec, Benoît; Bach Cuadra, Meritxell; Aminian, Kamiar; Jolles, Brigitte M.; Thiran, Jean-Philippe

    2008-03-01

    In this paper we present a new method to track bone movements in stereoscopic X-ray image series of the knee joint. The method is based on two different X-ray image sets: a rotational series of acquisitions of the still subject knee that allows the tomographic reconstruction of the three-dimensional volume (model), and a stereoscopic image series of orthogonal projections as the subject performs movements. Tracking the movements of bones throughout the stereoscopic image series means to determine, for each frame, the best pose of every moving element (bone) previously identified in the 3D reconstructed model. The quality of a pose is reflected in the similarity between its theoretical projections and the actual radiographs. We use direct Fourier reconstruction to approximate the three-dimensional volume of the knee joint. Then, to avoid the expensive computation of digitally rendered radiographs (DRR) for pose recovery, we develop a corollary to the 3-dimensional central-slice theorem and reformulate the tracking problem in the Fourier domain. Under the hypothesis of parallel X-ray beams, the heavy 2D-to-3D registration of projections in the signal domain is replaced by efficient slice-to-volume registration in the Fourier domain. Focusing on rotational movements, the translation-relevant phase information can be discarded and we only consider scalar Fourier amplitudes. The core of our motion tracking algorithm can be implemented as a classical frame-wise slice-to-volume registration task. Results on both synthetic and real images confirm the validity of our approach.

  19. A method of 2D/3D registration of a statistical mouse atlas with a planar X-ray projection and an optical photo

    PubMed Central

    Wang, Hongkai; Stout, David B; Chatziioannou, Arion F

    2013-01-01

    The development of sophisticated and high throughput whole body small animal imaging technologies has created a need for improved image analysis and increased automation. The registration of a digital mouse atlas to individual images is a prerequisite for automated organ segmentation and uptake quantification. This paper presents a fully-automatic method for registering a statistical mouse atlas with individual subjects based on an anterior-posterior X-ray projection and a lateral optical photo of the mouse silhouette. The mouse atlas was trained as a statistical shape model based on 83 organ-segmented micro-CT images. For registration, a hierarchical approach is applied which first registers high contrast organs, and then estimates low contrast organs based on the registered high contrast organs. To register the high contrast organs, a 2D-registration-back-projection strategy is used that deforms the 3D atlas based on the 2D registrations of the atlas projections. For validation, this method was evaluated using 55 subjects of preclinical mouse studies. The results showed that this method can compensate for moderate variations of animal postures and organ anatomy. Two different metrics, the Dice coefficient and the average surface distance, were used to assess the registration accuracy of major organs. The Dice coefficients vary from 0.31±0.16 for the spleen to 0.88±0.03 for the whole body, and the average surface distance varies from 0.54±0.06 mm for the lungs to 0.85±0.10 mm for the skin. The method was compared with a direct 3D deformation optimization (without 2D-registration-back-projection) and a single-subject atlas registration (instead of using the statistical atlas). The comparison revealed that the 2D-registration-back-projection strategy significantly improved the registration accuracy, and the use of the statistical mouse atlas led to more plausible organ shapes than the single-subject atlas. This method was also tested with shoulder xenograft
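
    The Dice coefficient used above to score organ overlap between the registered atlas and the manual segmentation is simple to reproduce for binary masks; the average surface distance is omitted here.

        import numpy as np

        def dice(mask_a, mask_b):
            """2 * |A & B| / (|A| + |B|); 1.0 means perfect overlap of the binary masks."""
            a, b = mask_a.astype(bool), mask_b.astype(bool)
            denom = a.sum() + b.sum()
            return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0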

  20. Evaluation of similarity measures for use in the intensity-based rigid 2D-3D registration for patient positioning in radiotherapy

    SciTech Connect

    Wu Jian; Kim, Minho; Peters, Jorg; Chung, Heeteak; Samant, Sanjiv S.

    2009-12-15

    Purpose: Rigid 2D-3D registration is an alternative to 3D-3D registration for cases where largely bony anatomy can be used for patient positioning in external beam radiation therapy. In this article, the authors evaluated seven similarity measures for use in the intensity-based rigid 2D-3D registration using a variation of Skerl's similarity measure evaluation protocol. Methods: The seven similarity measures are partitioned intensity uniformity, normalized mutual information (NMI), normalized cross correlation (NCC), entropy of the difference image, pattern intensity (PI), gradient correlation (GC), and gradient difference (GD). In contrast to traditional evaluation methods that rely on visual inspection or registration outcomes, the similarity measure evaluation protocol probes the transform parameter space and computes a number of similarity measure properties, which is objective and optimization-method independent. The variation of the protocol offers improved quantification of the capture range. The authors used this protocol to investigate the effects of the downsampling ratio, the region of interest, and the method of the digitally reconstructed radiograph (DRR) calculation [i.e., the incremental ray-tracing method implemented on a central processing unit (CPU) or the 3D texture rendering method implemented on a graphics processing unit (GPU)] on the performance of the similarity measures. The studies were carried out using both the kilovoltage (kV) and the megavoltage (MV) images of an anthropomorphic cranial phantom and the MV images of a head-and-neck cancer patient. Results: Both the phantom and the patient studies showed that the 2D-3D registration using the GPU-based DRR calculation yielded better robustness, while providing similar accuracy compared to the CPU-based calculation. The phantom study using kV imaging suggested that NCC has the best accuracy and robustness, but its slow function value change near the global maximum requires a
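
    For orientation, the sketch below gives minimal forms of two of the listed measures, normalized cross correlation (NCC) and gradient correlation (GC), computed between a DRR and a projection image; the exact formulations and the region-of-interest handling used in the study may differ.

```python
import numpy as np
from scipy.ndimage import sobel

def ncc(drr, xray):
    """Normalized cross correlation between a DRR and an X-ray projection."""
    a = drr - drr.mean()
    b = xray - xray.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def gradient_correlation(drr, xray):
    """Gradient correlation: mean NCC of the horizontal and vertical Sobel gradients."""
    gc_h = ncc(sobel(drr, axis=1), sobel(xray, axis=1))
    gc_v = ncc(sobel(drr, axis=0), sobel(xray, axis=0))
    return 0.5 * (gc_h + gc_v)
```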

  1. Intensity-based femoral atlas 2D/3D registration using Levenberg-Marquardt optimisation

    NASA Astrophysics Data System (ADS)

    Klima, Ondrej; Kleparnik, Petr; Spanel, Michal; Zemcik, Pavel

    2016-03-01

    The reconstruction of a patient-specific 3D anatomy is the crucial step in computer-aided preoperative planning based on plain X-ray images. In this paper, we propose robust and fast reconstruction methods based on fitting the statistical shape and intensity model of a femoral bone onto a pair of calibrated X-ray images. We formulate the registration as a non-linear least squares problem, allowing for the involvement of Levenberg-Marquardt optimisation. The proposed methods have been tested on a set of 96 virtual X-ray images. The reconstruction accuracy was evaluated using the symmetric Hausdorff distance between reconstructed and ground-truth bones. The accuracy of the intensity-based method reached 1.18 +/- 1.57 mm on average, and the registration took 8.76 seconds on average.
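
    The non-linear least squares formulation maps directly onto standard Levenberg-Marquardt solvers. The sketch below uses scipy.optimize.least_squares with method='lm' on a toy residual; in the paper the residual would instead compare projections of the statistical shape and intensity model against the pair of calibrated X-ray images.

```python
import numpy as np
from scipy.optimize import least_squares

def residuals(params, observations):
    """Toy stand-in for the registration residual (model prediction minus observation)."""
    a, b = params
    model = a * np.exp(-b * np.arange(observations.size))
    return model - observations

observed = 2.0 * np.exp(-0.3 * np.arange(20)) + 0.01 * np.random.randn(20)
fit = least_squares(residuals, x0=[1.0, 0.1], method="lm", args=(observed,))
print(fit.x)  # recovered parameters, analogous to the model's shape/pose coefficients
```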

  2. Image Registration for Stability Testing of MEMS

    NASA Technical Reports Server (NTRS)

    Memarsadeghi, Nargess; LeMoigne, Jacqueline; Blake, Peter N.; Morey, Peter A.; Landsman, Wayne B.; Chambers, Victor J.; Moseley, Samuel H.

    2011-01-01

    Image registration, or alignment of two or more images covering the same scenes or objects, is of great interest in many disciplines such as remote sensing, medical imaging, astronomy, and computer vision. In this paper, we introduce a new application of image registration algorithms. We demonstrate how, through a wavelet-based image registration algorithm, engineers can evaluate the stability of Micro-Electro-Mechanical Systems (MEMS). In particular, we applied image registration algorithms to assess alignment stability of the MicroShutters Subsystem (MSS) of the Near Infrared Spectrograph (NIRSpec) instrument of the James Webb Space Telescope (JWST). This work introduces a new methodology for evaluating stability of MEMS devices to engineers as well as a new application of image registration algorithms to computer scientists.

  3. Local image registration: a comparison for bilateral registration mammography

    NASA Astrophysics Data System (ADS)

    Celaya-Padilaa, José M.; Rodriguez-Rojas, Juan; Trevino, Victor; Tamez-Pena, José G.

    2013-11-01

    Early tumor detection is key in reducing the number of breast cancer deaths, and screening mammography is one of the most widely available and reliable methods for early detection. However, it is difficult for the radiologist to process each case with the same attention, due to the large number of images to be read. Computer-aided detection (CADe) systems improve the tumor detection rate, but the current efficiency of these systems is not yet adequate and the correct interpretation of CADe outputs requires expert human intervention. Computer-aided diagnosis (CADx) systems are being designed to improve cancer diagnosis accuracy, but they have not been efficiently applied in breast cancer. CADx efficiency can be enhanced by considering the natural mirror symmetry between the right and left breast. The objective of this work is to evaluate co-registration algorithms for the accurate alignment of the left to the right breast for CADx enhancement. A set of mammograms was artificially altered to create a ground-truth set for evaluating the registration efficiency of the DEMONS and SPLINE deformable registration algorithms. The registration accuracy was evaluated using mean square errors, mutual information, and correlation. The results on the 132 images showed that the SPLINE deformable registration outperforms DEMONS on mammography images.

  4. Research relative to automated multisensor image registration

    NASA Technical Reports Server (NTRS)

    Kanal, L. N.

    1983-01-01

    The basic approaches to image registration are surveyed. Three image models are presented as models of the subpixel problem. A variety of approaches to subpixel analysis are presented using these models.

  5. Automatic registration and segmentation algorithm for multiple electrophoresis images

    NASA Astrophysics Data System (ADS)

    Baker, Matthew S.; Busse, Harald; Vogt, Martin

    2000-06-01

    We present an algorithm for registering, segmenting, and quantifying multiple scanned electrophoresis images. Two-dimensional (2D) gel electrophoresis is a technique for separating proteins or other macromolecules in organic material according to net charge and molecular mass; it results in scanned grayscale images with dark spots against a light background marking the presence of such macromolecules. The algorithm begins by registering each of the images using a non-rigid registration algorithm. The registered images are then jointly segmented using a Markov random field approach to obtain a single segmentation. By using multiple images, the effect of noise is greatly reduced. We demonstrate the algorithm on several sets of real data.

  6. Contrast-Based 3D/2D Registration of the Left Atrium: Fast versus Consistent

    PubMed Central

    Kowalewski, Christopher; Kurzidim, Klaus; Strobel, Norbert; Hornegger, Joachim

    2016-01-01

    For augmented fluoroscopy during cardiac ablation, a preoperatively acquired 3D model of a patient's left atrium (LA) can be registered to X-ray images recorded during a contrast agent (CA) injection. An automatic registration method that works also for small amounts of CA is desired. We propose two similarity measures: The first focuses on edges of the patient anatomy. The second computes a contrast agent distribution estimate (CADE) inside the 3D model and rates its consistency with the CA as seen in biplane fluoroscopic images. Moreover, temporal filtering on the obtained registration results of a sequence is applied using a Markov chain framework. Evaluation was performed on 11 well-contrasted clinical angiographic sequences and 10 additional sequences with less CA. For well-contrasted sequences, the error for all 73 frames was 7.9 ± 6.3 mm and it dropped to 4.6 ± 4.0 mm when registering to an automatically selected, well enhanced frame in each sequence. Temporal filtering reduced the error for all frames from 7.9 ± 6.3 mm to 5.7 ± 4.6 mm. The error was typically higher if less CA was used. A combination of both similarity measures outperforms a previously proposed similarity measure. The mean accuracy for well contrasted sequences is in the range of other proposed manual registration methods. PMID:27051412

  7. Contrast-Based 3D/2D Registration of the Left Atrium: Fast versus Consistent.

    PubMed

    Hoffmann, Matthias; Kowalewski, Christopher; Maier, Andreas; Kurzidim, Klaus; Strobel, Norbert; Hornegger, Joachim

    2016-01-01

    For augmented fluoroscopy during cardiac ablation, a preoperatively acquired 3D model of a patient's left atrium (LA) can be registered to X-ray images recorded during a contrast agent (CA) injection. An automatic registration method that works also for small amounts of CA is desired. We propose two similarity measures: The first focuses on edges of the patient anatomy. The second computes a contrast agent distribution estimate (CADE) inside the 3D model and rates its consistency with the CA as seen in biplane fluoroscopic images. Moreover, temporal filtering on the obtained registration results of a sequence is applied using a Markov chain framework. Evaluation was performed on 11 well-contrasted clinical angiographic sequences and 10 additional sequences with less CA. For well-contrasted sequences, the error for all 73 frames was 7.9 ± 6.3 mm and it dropped to 4.6 ± 4.0 mm when registering to an automatically selected, well enhanced frame in each sequence. Temporal filtering reduced the error for all frames from 7.9 ± 6.3 mm to 5.7 ± 4.6 mm. The error was typically higher if less CA was used. A combination of both similarity measures outperforms a previously proposed similarity measure. The mean accuracy for well contrasted sequences is in the range of other proposed manual registration methods. PMID:27051412

  8. Photorealistic image synthesis and camera validation from 2D images

    NASA Astrophysics Data System (ADS)

    Santos Ferrer, Juan C.; González Chévere, David; Manian, Vidya

    2014-06-01

    This paper presents a new 3D scene reconstruction technique using the Unity 3D game engine. The method presented here allows us to reconstruct the shape of simple and more complex objects from multiple 2D images, including infrared and digital images for indoor scenes and only digital images for outdoor scenes, and then add the reconstructed objects to the simulated scene created in Unity 3D; these scenes are then validated against real-world scenes. The method used different camera settings and explores different properties in the reconstructions of the scenes, including light, color, texture, shapes, and different views. To achieve the highest possible resolution, it was necessary to extract partial textures from visible surfaces. To recover the 3D shapes and the depth of simple objects that can be represented by geometric bodies, their geometric characteristics were used. To estimate the depth of more complex objects, the triangulation method was used; for this, the intrinsic and extrinsic parameters were calculated using geometric camera calibration. The methods mentioned above were implemented in Matlab. The technique presented here also lets us simulate short, simple videos by reconstructing a sequence of multiple scenes of the video separated by small margins of time. To measure the quality of the reconstructed images and video scenes, the Fast Low Band Model (FLBM) metric from the Video Quality Measurement (VQM) software was used. Low-bandwidth perception-based features include edges and motion.

  9. Edge-based correlation image registration for multispectral imaging

    DOEpatents

    Nandy, Prabal

    2009-11-17

    Registration information for images of a common target obtained from a plurality of different spectral bands can be obtained by combining edge detection and phase correlation. The images are edge-filtered, and pairs of the edge-filtered images are then phase correlated to produce phase correlation images. The registration information can be determined based on these phase correlation images.
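
    A minimal single-pair sketch of the idea, assuming a pure translation between the two bands, is shown below: both images are edge-filtered, the normalized cross-power spectrum is formed, and the shift is read from the correlation peak.

```python
import numpy as np
from scipy.ndimage import sobel

def phase_correlate_edges(img_a, img_b):
    """Return the (dy, dx) shift between two images via edge filtering + phase correlation."""
    edge_a = np.hypot(sobel(img_a, axis=0), sobel(img_a, axis=1))
    edge_b = np.hypot(sobel(img_b, axis=0), sobel(img_b, axis=1))
    cross_power = np.fft.fft2(edge_a) * np.conj(np.fft.fft2(edge_b))
    cross_power /= np.abs(cross_power) + 1e-12      # keep phase only
    corr = np.fft.ifft2(cross_power).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Peaks beyond the midpoint wrap around to negative shifts.
    return tuple(p - s if p > s // 2 else p for p, s in zip(peak, corr.shape))
```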

  10. A Multistage Approach for Image Registration.

    PubMed

    Bowen, Francis; Hu, Jianghai; Du, Eliza Yingzi

    2016-09-01

    Successful image registration is an important step for object recognition, target detection, remote sensing, multimodal content fusion, scene blending, and disaster assessment and management. The geometric and photometric variations between images adversely affect the ability of an algorithm to estimate the transformation parameters that relate the two images. Local deformations, lighting conditions, object obstructions, and perspective differences all contribute to the challenges faced by traditional registration techniques. In this paper, a novel multistage registration approach is proposed that is resilient to viewpoint differences, image content variations, and lighting conditions. Robust registration is realized through the utilization of a novel region descriptor which couples the spatial and texture characteristics of invariant feature points. The proposed region descriptor is exploited in a multistage approach. A multistage process allows the utilization of the graph-based descriptor in many scenarios, thus allowing the algorithm to be applied to a broader set of images. Each successive stage of the registration technique is evaluated through an effective similarity metric which determines subsequent action. The registration of aerial and street-view images from pre- and post-disaster scenes provides strong evidence that the proposed method estimates more accurate global transformation parameters than traditional feature-based methods. Experimental results show the robustness and accuracy of the proposed multistage image registration methodology. PMID:26292357

  11. Optical imaging systems analyzed with a 2D template.

    PubMed

    Haim, Harel; Konforti, Naim; Marom, Emanuel

    2012-05-10

    Present determination of optical imaging system specifications is based on performance values and modulation transfer function results obtained with a 1D resolution template (such as the USAF resolution target or spoke templates). Such a template allows determining image quality, resolution limit, and contrast. Nevertheless, the conventional 1D template does not provide satisfactory results, since most optical imaging systems handle 2D objects, for which the imaging system response may differ by virtue of some not readily observable spatial frequencies. In this paper we derive and analyze contrast transfer function results obtained with 1D as well as 2D templates. PMID:22614498

  12. Non-rigid registration of medical images based on estimation of deformation states.

    PubMed

    Marami, Bahram; Sirouspour, Shahin; Capson, David W

    2014-11-21

    A unified framework for automatic non-rigid 3D-3D and 3D-2D registration of medical images with static and dynamic deformations is proposed in this paper. The problem of non-rigid image registration is approached as a classical state estimation problem using a generic deformation model for the soft tissue. The registration technique employs a dynamic linear elastic continuum mechanics model of the tissue deformation, which is discretized using the finite element method. In the proposed method, the registration is achieved through a Kalman-like filtering process, which incorporates information from the deformation model and a vector of observation prediction errors computed from an intensity-based similarity/distance metric between images. With this formulation, single and multiple-modality, 3D-3D and 3D-2D image registration problems can all be treated within the same framework. The performance of the proposed registration technique was evaluated in a number of different registration scenarios. First, 3D magnetic resonance (MR) images of uncompressed and compressed breast tissue were co-registered. 3D MR images of the uncompressed breast tissue were also registered to a sequence of simulated 2D interventional MR images of the compressed breast. Finally, the registration algorithm was employed to dynamically track a target sub-volume inside the breast tissue during the process of the biopsy needle insertion based on registering pre-insertion 3D MR images to a sequence of real-time simulated 2D interventional MR images. Registration results indicate that the proposed method can be effectively employed for the registration of medical images in image-guided procedures, such as breast biopsy in which the tissue undergoes static and dynamic deformations. PMID:25350234

  13. Non-rigid registration of medical images based on estimation of deformation states

    NASA Astrophysics Data System (ADS)

    Marami, Bahram; Sirouspour, Shahin; Capson, David W.

    2014-11-01

    A unified framework for automatic non-rigid 3D-3D and 3D-2D registration of medical images with static and dynamic deformations is proposed in this paper. The problem of non-rigid image registration is approached as a classical state estimation problem using a generic deformation model for the soft tissue. The registration technique employs a dynamic linear elastic continuum mechanics model of the tissue deformation, which is discretized using the finite element method. In the proposed method, the registration is achieved through a Kalman-like filtering process, which incorporates information from the deformation model and a vector of observation prediction errors computed from an intensity-based similarity/distance metric between images. With this formulation, single and multiple-modality, 3D-3D and 3D-2D image registration problems can all be treated within the same framework. The performance of the proposed registration technique was evaluated in a number of different registration scenarios. First, 3D magnetic resonance (MR) images of uncompressed and compressed breast tissue were co-registered. 3D MR images of the uncompressed breast tissue were also registered to a sequence of simulated 2D interventional MR images of the compressed breast. Finally, the registration algorithm was employed to dynamically track a target sub-volume inside the breast tissue during the process of the biopsy needle insertion based on registering pre-insertion 3D MR images to a sequence of real-time simulated 2D interventional MR images. Registration results indicate that the proposed method can be effectively employed for the registration of medical images in image-guided procedures, such as breast biopsy in which the tissue undergoes static and dynamic deformations.

  14. Iterative closest curve: a framework for curvilinear structure registration application to 2D/3D coronary arteries registration.

    PubMed

    Benseghir, Thomas; Malandain, Grégoire; Vaillant, Régis

    2013-01-01

    Endovascular treatment of the coronary arteries involves catheter navigation through the patient's vasculature. Projective angiography guidance is limited in the case of chronic total occlusion, where the occluded vessel cannot be seen. Integrating standard preoperative CT angiography information with live fluoroscopic images addresses this limitation but requires alignment of both modalities. This article proposes a structure-based registration method that intrinsically preserves both the geometrical and topological coherence of the vascular centrelines to be registered, by means of a dedicated curve-to-curve distance: pairs of closest curves are identified while their points are paired. Preliminary experiments demonstrate that the proposed approach performs better than the standard Iterative Closest Point method, giving a wider attraction basin and improved accuracy. PMID:24505664

  15. Deformable Medical Image Registration: A Survey

    PubMed Central

    Sotiras, Aristeidis; Davatzikos, Christos; Paragios, Nikos

    2013-01-01

    Deformable image registration is a fundamental task in medical image processing. Among its most important applications, one may cite: i) multi-modality fusion, where information acquired by different imaging devices or protocols is fused to facilitate diagnosis and treatment planning; ii) longitudinal studies, where temporal structural or anatomical changes are investigated; and iii) population modeling and statistical atlases used to study normal anatomical variability. In this paper, we attempt to give an overview of deformable registration methods, putting emphasis on the most recent advances in the domain. Additional emphasis has been given to techniques applied to medical images. In order to study image registration methods in depth, their main components are identified and studied independently. The most recent techniques are presented in a systematic fashion. The contribution of this paper is to provide an extensive account of registration techniques in a systematic manner. PMID:23739795

  16. Bayesian technique for image classifying registration.

    PubMed

    Hachama, Mohamed; Desolneux, Agnès; Richard, Frédéric J P

    2012-09-01

    In this paper, we address a complex image registration issue arising when the dependencies between intensities of the images to be registered are not spatially homogeneous. Such a situation is frequently encountered in medical imaging when a pathology present in one of the images modifies locally intensity dependencies observed on normal tissues. Usual image registration models, which are based on a single global intensity similarity criterion, fail to register such images, as they are blind to local deviations of intensity dependencies. Such a limitation is also encountered in contrast-enhanced images where there exist multiple pixel classes having different properties of contrast agent absorption. In this paper, we propose a new model in which the similarity criterion is adapted locally to images by classification of image intensity dependencies. Defined in a Bayesian framework, the similarity criterion is a mixture of probability distributions describing dependencies on two classes. The model also includes a class map which locates pixels of the two classes and weighs the two mixture components. The registration problem is formulated both as an energy minimization problem and as a maximum a posteriori estimation problem. It is solved using a gradient descent algorithm. In the problem formulation and resolution, the image deformation and the class map are estimated simultaneously, leading to an original combination of registration and classification that we call image classifying registration. Whenever sufficient information about class location is available in applications, the registration can also be performed on its own by fixing a given class map. Finally, we illustrate the interest of our model on two real applications from medical imaging: template-based segmentation of contrast-enhanced images and lesion detection in mammograms. We also conduct an evaluation of our model on simulated medical data and show its ability to take into account spatial variations

  17. Automated Registration Of Images From Multiple Sensors

    NASA Technical Reports Server (NTRS)

    Rignot, Eric J. M.; Kwok, Ronald; Curlander, John C.; Pang, Shirley S. N.

    1994-01-01

    Images of terrain scanned in common by multiple Earth-orbiting remote sensors registered automatically with each other and, where possible, on geographic coordinate grid. Simulated image of terrain viewed by sensor computed from ancillary data, viewing geometry, and mathematical model of physics of imaging. In proposed registration algorithm, simulated and actual sensor images matched by area-correlation technique.

  18. 2-D Imaging of Electron Temperature in Tokamak Plasmas

    SciTech Connect

    T. Munsat; E. Mazzucato; H. Park; C.W. Domier; M. Johnson; N.C. Luhmann Jr.; J. Wang; Z. Xia; I.G.J. Classen; A.J.H. Donne; M.J. van de Pol

    2004-07-08

    By taking advantage of recent developments in millimeter wave imaging technology, an Electron Cyclotron Emission Imaging (ECEI) instrument, capable of simultaneously measuring 128 channels of localized electron temperature over a 2-D map in the poloidal plane, has been developed for the TEXTOR tokamak. Data from the new instrument, detailing the MHD activity associated with a sawtooth crash, is presented.

  19. Sparse radar imaging using 2D compressed sensing

    NASA Astrophysics Data System (ADS)

    Hou, Qingkai; Liu, Yang; Chen, Zengping; Su, Shaoying

    2014-10-01

    Radar imaging is an ill-posed linear inverse problem, and compressed sensing (CS) has been shown to have tremendous potential in this field. This paper surveys the theory of radar imaging and concludes that ISAR imaging can be expressed mathematically as a 2D sparse decomposition problem. Based on CS, we propose a novel measurement strategy for ISAR imaging radar that uses random sub-sampling in both the range and azimuth dimensions, which reduces the amount of sampled data tremendously. To handle the 2D reconstruction problem, the ordinary solution is to convert the 2D problem into a 1D one by a Kronecker product, which increases the size of the dictionary and the computational cost sharply. In this paper, we introduce the 2D-SL0 algorithm into the image reconstruction. It is shown that 2D-SL0 achieves results equivalent to other 1D reconstruction methods, while the computational complexity and memory usage are reduced significantly. Moreover, we report the results of simulation experiments demonstrating the effectiveness and feasibility of our method.
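
    The Kronecker-product conversion mentioned above can be stated compactly: a 2D measurement model Y = A X B^T flattens to vec(Y) = (B ⊗ A) vec(X), which is why the equivalent 1D dictionary grows sharply. The sketch below only verifies this identity numerically; the sensing matrices and scene are random placeholders, and sparsity and the 2D-SL0 solver are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 8))   # range-dimension sensing matrix (placeholder)
B = rng.standard_normal((5, 8))   # azimuth-dimension sensing matrix (placeholder)
X = rng.standard_normal((8, 8))   # 2D scene (sparsity omitted for brevity)

Y = A @ X @ B.T                                   # 2D measurement model
y = np.kron(B, A) @ X.reshape(-1, order="F")      # equivalent flattened 1D model
assert np.allclose(Y.reshape(-1, order="F"), y)   # vec(A X B^T) == (B kron A) vec(X)
```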

  20. Registration of In Vivo Prostate Magnetic Resonance Images to Digital Histopathology Images

    NASA Astrophysics Data System (ADS)

    Ward, A. D.; Crukley, C.; McKenzie, C.; Montreuil, J.; Gibson, E.; Gomez, J. A.; Moussa, M.; Bauman, G.; Fenster, A.

    Early and accurate diagnosis of prostate cancer enables minimally invasive therapies to cure the cancer with less morbidity. The purpose of this work is to non-rigidly register in vivo pre-prostatectomy prostate medical images to regionally-graded histopathology images from post-prostatectomy specimens, seeking a relationship between the multiparametric imaging and cancer distribution and aggressiveness. Our approach uses image-based registration in combination with a magnetically tracked probe to orient the physical slicing of the specimen to be parallel to the in vivo imaging planes, yielding a tractable 2D registration problem. We measured a target registration error of 0.85 mm, a mean slicing plane marking error of 0.7 mm, and a mean slicing error of 0.6 mm; these results compare favourably with our 2.2 mm diagnostic MR image thickness. Qualitative evaluation of in vivo imaging-histopathology fusion reveals excellent anatomic concordance between MR and digital histopathology.

  1. Register cardiac fiber orientations from 3D DTI volume to 2D ultrasound image of rat hearts

    NASA Astrophysics Data System (ADS)

    Qin, Xulei; Wang, Silun; Shen, Ming; Zhang, Xiaodong; Lerakis, Stamatios; Wagner, Mary B.; Fei, Baowei

    2015-03-01

    Two-dimensional (2D) ultrasound or echocardiography is one of the most widely used examinations for the diagnosis of cardiac diseases. However, it only supplies the geometric and structural information of the myocardium. In order to supply more detailed microstructure information of the myocardium, this paper proposes a registration method to map cardiac fiber orientations from a three-dimensional (3D) magnetic resonance diffusion tensor imaging (MR-DTI) volume to the 2D ultrasound image. It utilizes a 2D/3D intensity-based registration procedure including rigid, log-demons, and affine transformations to search for the most similar slice in the template volume. After registration, the cardiac fiber orientations are mapped to the 2D ultrasound image via fiber relocations and reorientations. This method was validated on six images of rat hearts ex vivo. The evaluation results indicated that the final Dice similarity coefficient (DSC) achieved more than 90% after geometric registrations, and the inclination angle errors (IAE) between the mapped fiber orientations and the gold standards were less than 15 degrees. This method may provide a practical tool for cardiologists to examine cardiac fiber orientations on ultrasound images and has the potential to supply additional information for diagnosis of cardiac diseases.
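
    The Dice similarity coefficient used in the geometric evaluation is simple to reproduce; a minimal version for binary masks is sketched below (the masks here are toy arrays, not the rat-heart data).

```python
import numpy as np

def dice_coefficient(mask_a, mask_b):
    """Dice similarity coefficient between two binary masks."""
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    total = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / total if total else 1.0

a = np.zeros((8, 8), bool); a[2:6, 2:6] = True
b = np.zeros((8, 8), bool); b[3:7, 3:7] = True
print(dice_coefficient(a, b))  # 0.5625 for these toy masks
```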

  2. Focusing surface wave imaging with flexible 2D array

    NASA Astrophysics Data System (ADS)

    Zhou, Shiyuan; Fu, Junqiang; Li, Zhe; Xu, Chunguang; Xiao, Dingguo; Wang, Shaohan

    2016-04-01

    Curved surfaces widely exist in key parts of energy and power equipment, such as turbine blades, cylinder blocks, and so on. Cyclic loading and harsh working conditions cause fatigue cracks to appear on the surface. The cracks should be found in time to avoid catastrophic damage to the equipment. A flexible 2D array transducer was developed. The 2D Phased Array focusing method (2DPA), the Mode-Spatial Double Phased focusing method (MSDPF), and the imaging method using the flexible 2D array probe are studied. Experiments using these focusing and imaging methods were carried out. Surface crack images were obtained with both the 2DPA and MSDPF focusing methods. It has been shown that MSDPF is more adaptable to curved surfaces and more computationally efficient than 2DPA.

  3. Image registration using redundant wavelet transforms

    NASA Astrophysics Data System (ADS)

    Brown, Richard K.; Claypoole, Roger L., Jr.

    2001-12-01

    Imagery is collected much faster and in significantly greater quantities today compared to a few years ago. Accurate registration of this imagery is vital for comparing the similarities and differences between multiple images. Image registration is a significant component in computer vision and other pattern recognition problems, medical applications such as Magnetic Resonance Imaging (MRI) and Positron Emission Tomography (PET), remotely sensed data for target location and identification, and super-resolution algorithms. Since human analysis is tedious and error prone for large data sets, we require an automatic, efficient, robust, and accurate method to register images. Wavelet transforms have proven useful for a variety of signal and image processing tasks. In our research, we present a fundamentally new wavelet-based registration algorithm utilizing redundant transforms and a masking process to suppress the adverse effects of noise and improve processing efficiency. The shift-invariant wavelet transform is applied in translation estimation and a new rotation-invariant polar wavelet transform is effectively utilized in rotation estimation. We demonstrate the robustness of these redundant wavelet transforms for the registration of two images (i.e., translating or rotating an input image to a reference image), but extensions to larger data sets are feasible. We compare the registration accuracy of our redundant wavelet transforms to the critically sampled discrete wavelet transform using the Daubechies wavelet to illustrate the power of our algorithm in the presence of significant additive white Gaussian noise and strongly translated or rotated images.

  4. Dual-projection 3D-2D registration for surgical guidance: preclinical evaluation of performance and minimum angular separation

    NASA Astrophysics Data System (ADS)

    Uneri, A.; Otake, Y.; Wang, A. S.; Kleinszig, G.; Vogt, S.; Gallia, G. L.; Rigamonti, D.; Wolinsky, J.-P.; Gokaslan, Ziya L.; Khanna, A. J.; Siewerdsen, J. H.

    2014-03-01

    An algorithm for 3D-2D registration of CT and x-ray projections has been developed using dual projection views to provide 3D localization with accuracy exceeding that of conventional tracking systems. The registration framework employs a normalized gradient information (NGI) similarity metric and covariance matrix adaptation evolution strategy (CMAES) to solve for the patient pose in 6 degrees of freedom. Registration performance was evaluated in anthropomorphic head and chest phantoms, as well as a human torso cadaver, using C-arm projection views acquired at angular separations (Δθ) ranging from 0° to 178°. Registration accuracy was assessed in terms of target registration error (TRE) and compared to that of an electromagnetic tracker. Studies evaluated the influence of C-arm magnification, x-ray dose, and preoperative CT slice thickness on registration accuracy and the minimum angular separation required to achieve TRE ~2 mm. The results indicate that Δθ as small as 10-20° is adequate to achieve TRE <2 mm with 95% confidence, comparable or superior to that of commercial trackers. The method allows direct registration of preoperative CT and planning data to intraoperative fluoroscopy, providing 3D localization free from conventional limitations associated with external fiducial markers, stereotactic frames, trackers, and manual registration. The studies support potential application to percutaneous spine procedures and intracranial neurosurgery.
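
    As an illustration of the optimization stage only, the sketch below runs CMA-ES over a 6-degree-of-freedom pose using the third-party `cma` package; the cost function here is a toy quadratic standing in for the negative NGI similarity between DRRs rendered at the candidate pose and the two projection views.

```python
import numpy as np
import cma  # third-party CMA-ES implementation (pip install cma)

TRUE_POSE = np.array([1.0, -2.0, 0.5, 0.0, 3.0, -1.0])  # hypothetical target pose

def cost(pose):
    """Stand-in for the negative NGI between rendered DRRs and the projection views."""
    return float(np.sum((np.asarray(pose) - TRUE_POSE) ** 2))

# Six degrees of freedom (three translations, three rotations), modest initial step size.
es = cma.CMAEvolutionStrategy(6 * [0.0], 0.5)
es.optimize(cost)
print(es.result.xbest)  # best pose found by the evolution strategy
```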

  5. Bidirectional Elastic Image Registration Using B-Spline Affine Transformation

    PubMed Central

    Gu, Suicheng; Meng, Xin; Sciurba, Frank C.; Wang, Chen; Kaminski, Naftali; Pu, Jiantao

    2014-01-01

    A registration scheme termed as B-spline affine transformation (BSAT) is presented in this study to elastically align two images. We define an affine transformation instead of the traditional translation at each control point. Mathematically, BSAT is a generalized form of the affine transformation and the traditional B-spline transformation (BST). In order to improve the performance of the iterative closest point (ICP) method in registering two homologous shapes but with large deformation, a bi-directional instead of the traditional unidirectional objective/cost function is proposed. In implementation, the objective function is formulated as a sparse linear equation problem, and a sub-division strategy is used to achieve a reasonable efficiency in registration. The performance of the developed scheme was assessed using both two-dimensional (2D) synthesized dataset and three-dimensional (3D) volumetric computed tomography (CT) data. Our experiments showed that the proposed B-spline affine model could obtain reasonable registration accuracy. PMID:24530210

  6. Bidirectional elastic image registration using B-spline affine transformation.

    PubMed

    Gu, Suicheng; Meng, Xin; Sciurba, Frank C; Ma, Hongxia; Leader, Joseph; Kaminski, Naftali; Gur, David; Pu, Jiantao

    2014-06-01

    A registration scheme termed as B-spline affine transformation (BSAT) is presented in this study to elastically align two images. We define an affine transformation instead of the traditional translation at each control point. Mathematically, BSAT is a generalized form of the affine transformation and the traditional B-spline transformation (BST). In order to improve the performance of the iterative closest point (ICP) method in registering two homologous shapes but with large deformation, a bidirectional instead of the traditional unidirectional objective/cost function is proposed. In implementation, the objective function is formulated as a sparse linear equation problem, and a sub-division strategy is used to achieve a reasonable efficiency in registration. The performance of the developed scheme was assessed using both two-dimensional (2D) synthesized dataset and three-dimensional (3D) volumetric computed tomography (CT) data. Our experiments showed that the proposed B-spline affine model could obtain reasonable registration accuracy. PMID:24530210

  7. Onboard Image Registration from Invariant Features

    NASA Technical Reports Server (NTRS)

    Wang, Yi; Ng, Justin; Garay, Michael J.; Burl, Michael C

    2008-01-01

    This paper describes a feature-based image registration technique that is potentially well-suited for onboard deployment. The overall goal is to provide a fast, robust method for dynamically combining observations from multiple platforms into sensor webs that respond quickly to short-lived events and provide rich observations of objects that evolve in space and time. The approach, which has enjoyed considerable success in mainstream computer vision applications, uses invariant SIFT descriptors extracted at image interest points together with the RANSAC algorithm to robustly estimate transformation parameters that relate one image to another. Experimental results for two satellite image registration tasks are presented: (1) automatic registration of images from the MODIS instrument on Terra to the MODIS instrument on Aqua and (2) automatic stabilization of a multi-day sequence of GOES-West images collected during the October 2007 Southern California wildfires.
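
    A minimal OpenCV sketch of the SIFT-plus-RANSAC step is shown below; it estimates a homography between two grayscale frames, whereas the transformation model appropriate for a particular satellite image pair may differ.

```python
import cv2          # OpenCV >= 4.4 ships SIFT in the main module
import numpy as np

def register_pair(img_ref, img_mov):
    """Estimate a homography mapping img_mov onto img_ref from SIFT matches + RANSAC."""
    sift = cv2.SIFT_create()
    kp_ref, des_ref = sift.detectAndCompute(img_ref, None)
    kp_mov, des_mov = sift.detectAndCompute(img_mov, None)
    matches = cv2.BFMatcher(cv2.NORM_L2).knnMatch(des_mov, des_ref, k=2)
    good = [m for m, n in matches if m.distance < 0.75 * n.distance]  # Lowe ratio test
    src = np.float32([kp_mov[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp_ref[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)  # robust to outlier matches
    return H
```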

  8. Real-time 2-D temperature imaging using ultrasound.

    PubMed

    Liu, Dalong; Ebbini, Emad S

    2010-01-01

    We have previously introduced methods for noninvasive estimation of temperature change using diagnostic ultrasound. The basic principle was validated both in vitro and in vivo by several groups worldwide. Some limitations remain, however, that have prevented these methods from being adopted in monitoring and guidance of minimally invasive thermal therapies, e.g., RF ablation and high-intensity-focused ultrasound (HIFU). In this letter, we present first results from a real-time system for 2-D imaging of temperature change using pulse-echo ultrasound. The front end of the system is a commercially available scanner equipped with a research interface, which allows the control of imaging sequence and access to the RF data in real time. A high-frame-rate 2-D RF acquisition mode, M2D, is used to capture the transients of tissue motion/deformations in response to pulsed HIFU. The M2D RF data is streamlined to the back end of the system, where a 2-D temperature imaging algorithm based on speckle tracking is implemented on a graphics processing unit. The real-time images of temperature change are computed on the same spatial and temporal grid of the M2D RF data, i.e., no decimation. Verification of the algorithm was performed by monitoring localized HIFU-induced heating of a tissue-mimicking elastography phantom. These results clearly demonstrate the repeatability and sensitivity of the algorithm. Furthermore, we present in vitro results demonstrating the possible use of this algorithm for imaging changes in tissue parameters due to HIFU-induced lesions. These results clearly demonstrate the value of the real-time data streaming and processing in monitoring, and guidance of minimally invasive thermotherapy. PMID:19884075

  9. TU-B-19A-01: Image Registration II: TG132-Quality Assurance for Image Registration

    SciTech Connect

    Brock, K; Mutic, S

    2014-06-15

    AAPM Task Group 132 was charged with reviewing the current approaches and solutions for image registration in radiotherapy and with providing recommendations for quality assurance and quality control of these clinical processes. As the results of image registration are always used as the input of another process for planning or delivery, it is important for the user to understand and document the uncertainty associated with the algorithm in general and the result of a specific registration. The recommendations of this task group, which at the time of abstract submission are currently being reviewed by the AAPM, include the following components. The user should understand the basic image registration techniques and methods of visualizing image fusion. The disclosure of basic components of the image registration by commercial vendors is critical in this respect. The physicists should perform end-to-end tests of imaging, registration, and planning/treatment systems if image registration is performed on a stand-alone system. A comprehensive commissioning process should be performed and documented by the physicist prior to clinical use of the system. As documentation is important to the safe implementation of this process, a request and report system should be integrated into the clinical workflow. Finally, a patient-specific QA practice should be established for efficient evaluation of image registration results. The implementation of these recommendations will be described and illustrated during this educational session. Learning Objectives: Highlight the importance of understanding the image registration techniques used in their clinic. Describe the end-to-end tests needed for stand-alone registration systems. Illustrate a comprehensive commissioning program using both phantom data and clinical images. Describe a request and report system to ensure communication and documentation. Demonstrate a clinically efficient patient QA practice for efficient evaluation of image

  10. Medical image registration using fuzzy theory.

    PubMed

    Pan, Meisen; Tang, Jingtian; Xiong, Qi

    2012-01-01

    Mutual information (MI)-based registration, which uses MI as the similarity measure, is a representative method in medical image registration. It has excellent robustness and accuracy, but with the disadvantages of a large amount of calculation and a long processing time. In this paper, by computing the medical image moments, the centroid is acquired. By applying fuzzy c-means clustering, the coordinates of the medical image are divided into two clusters to fit a straight line, and the rotation angles of the reference and floating images are computed, respectively. Thereby, the initial values for registering the images are determined. When searching for the optimal geometric transformation parameters, we put forward the two new concepts of fuzzy distance and fuzzy signal-to-noise ratio (FSNR), and we select FSNR as the similarity measure between the reference and floating images. In the experiments, the Simplex method is chosen for multi-parameter optimisation. The experimental results show that this proposed method has a simple implementation, a low computational cost, a fast registration speed, and good registration accuracy. Moreover, it can effectively avoid becoming trapped in local optima. It is adapted to both mono-modality and multi-modality image registration. PMID:21442490
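
    The paper derives the initial rotation from a fuzzy c-means line fit; as a simpler stand-in serving the same initialization purpose, the sketch below computes the centroid from raw image moments and an orientation angle from the second-order central moments.

```python
import numpy as np

def centroid_and_angle(img):
    """Centroid from raw image moments and orientation from central moments."""
    y, x = np.indices(img.shape)
    m00 = img.sum()
    cx, cy = (x * img).sum() / m00, (y * img).sum() / m00
    mu20 = ((x - cx) ** 2 * img).sum() / m00
    mu02 = ((y - cy) ** 2 * img).sum() / m00
    mu11 = ((x - cx) * (y - cy) * img).sum() / m00
    angle = 0.5 * np.arctan2(2.0 * mu11, mu20 - mu02)  # principal-axis orientation
    return (cx, cy), angle
```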

  11. Advances in image registration and fusion

    NASA Astrophysics Data System (ADS)

    Steer, Christopher; Rogers, Jeremy; Smith, Moira; Heather, Jamie; Bernhardt, Mark; Hickman, Duncan

    2008-03-01

    Many image fusion systems involving passive sensors require the accurate registration of the sensor data prior to performing fusion. Since depth information is not readily available in such systems, all registration algorithms are intrinsically approximations based upon various assumptions about the depth field. Although often overlooked, many registration algorithms can break down in certain situations and this may adversely affect the image fusion performance. In this paper, we discuss a framework for quantifying the accuracy and robustness of image registration algorithms which allows a more precise understanding of their shortcomings. In addition, some novel algorithms have been investigated that overcome some of these limitations. A second aspect of this work has considered the treatment of images from multiple sensors whose angular and spatial separation is large and where conventional registration algorithms break down (typically greater than a few degrees of separation). A range of novel approaches is reported which exploit the use of parallax to estimate depth information and reconstruct a geometrical model of the scene. The imagery can then be combined with this geometrical model to render a variety of useful representations of the data. These techniques (which we term Volume Registration) show great promise as a means of gathering and presenting 3D and 4D scene information for both military and civilian applications.

  12. Adaptive deformable image registration of inhomogeneous tissues

    NASA Astrophysics Data System (ADS)

    Ren, Jing

    2015-03-01

    Physics-based deformable registration can provide a physically consistent image match of deformable soft tissues. In order to help radiologists/surgeons determine the status of malignant tumors, we often need to accurately align the regions with embedded tumors. This is a very challenging task since the tumor and the surrounding tissues have very different tissue properties such as stiffness and elasticity. In order to address this problem, based on the minimum strain energy principle in elasticity theory, we propose to partition the whole region of interest into smaller sub-regions and dynamically adjust weights of vessel segments and bifurcation points in each sub-region in the registration objective function. Our previously proposed fast vessel registration is used as a component in the inner loop. We have validated the proposed method using liver MR images from human subjects. The results show that our method can detect large registration errors and improve the registration accuracy in the neighborhood of the tumors and guarantee the registration errors to be within acceptable accuracy. The proposed technique has the potential to significantly improve the registration capability and the quality of clinical diagnosis and treatment planning.

  13. Reflectance and fluorescence hyperspectral elastic image registration

    NASA Astrophysics Data System (ADS)

    Lange, Holger; Baker, Ross; Hakansson, Johan; Gustafsson, Ulf P.

    2004-05-01

    Science and Technology International (STI) presents a novel multi-modal elastic image registration approach for a new hyperspectral medical imaging modality. STI's HyperSpectral Diagnostic Imaging (HSDI) cervical instrument is used for the early detection of uterine cervical cancer. A Computer-Aided-Diagnostic (CAD) system is being developed to aid the physician with the diagnosis of pre-cancerous and cancerous tissue regions. The CAD system uses the fusion of multiple data sources to optimize its performance. The key enabling technology for the data fusion is image registration. The difficulty lies in the image registration of fluorescence and reflectance hyperspectral data due to the occurrence of soft tissue movement and the limited resemblance of these types of imagery. The presented approach is based on embedding a reflectance image in the fluorescence hyperspectral imagery. Having a reflectance image in both data sets resolves the resemblance problem and thereby enables the use of elastic image registration algorithms required to compensate for soft tissue movements. Several methods of embedding the reflectance image in the fluorescence hyperspectral imagery are described. Initial experiments with human subject data are presented where a reflectance image is embedded in the fluorescence hyperspectral imagery.

  14. Fully automatic initialization of two-dimensional–three-dimensional medical image registration using hybrid classifier

    PubMed Central

    Wu, Jing; Fatah, Emam E. Abdel; Mahfouz, Mohamed R.

    2015-01-01

    Abstract. X-ray video fluoroscopy along with two-dimensional–three-dimensional (2D-3D) registration techniques is widely used to study joints in vivo kinematic behaviors. These techniques, however, are generally very sensitive to the initial alignment of the 3-D model. We present an automatic initialization method for 2D-3D registration of medical images. The contour of the knee bone or implant was first automatically extracted from a 2-D x-ray image. Shape descriptors were calculated by normalized elliptical Fourier descriptors to represent the contour shape. The optimal pose was then determined by a hybrid classifier combining k-nearest neighbors and support vector machine. The feasibility of the method was first validated on computer synthesized images, with 100% successful estimation for the femur and tibia implants, 92% for the femur and 95% for the tibia. The method was further validated on fluoroscopic x-ray images with all the poses of the testing cases successfully estimated. Finally, the method was evaluated as an initialization of a feature-based 2D-3D registration. The initialized and uninitialized registrations had success rates of 100% and 50%, respectively. The proposed method can be easily utilized for 2D-3D image registration on various medical objects and imaging modalities. PMID:26158102
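
    The hybrid classifier can be approximated with off-the-shelf components; the scikit-learn sketch below combines k-nearest neighbors and an SVM by soft voting on synthetic stand-ins for the shape descriptors (the paper's exact hybridization scheme may differ).

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

# Synthetic stand-in for the normalized elliptical Fourier descriptors and pose labels.
X, y = make_classification(n_samples=300, n_features=20, n_informative=10,
                           n_classes=3, random_state=0)

hybrid = VotingClassifier(
    estimators=[("knn", KNeighborsClassifier(n_neighbors=5)),
                ("svm", SVC(probability=True, random_state=0))],
    voting="soft")  # soft voting averages the two classifiers' class probabilities
hybrid.fit(X, y)
print(hybrid.predict(X[:5]))  # predicted pose classes for the first few samples
```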

  15. A 2-D ECE Imaging Diagnostic for TEXTOR

    NASA Astrophysics Data System (ADS)

    Wang, J.; Deng, B. H.; Domier, C. W.; Luhmann, H. Lu, Jr.

    2002-11-01

    A true 2-D extension to the UC Davis ECE Imaging (ECEI) concept is under development for installation on the TEXTOR tokamak in 2003. This combines the use of linear arrays with multichannel conventional wideband heterodyne ECE radiometers to provide a true 2-D imaging system. This is in contrast to current 1-D ECEI systems in which 2-D images are obtained through the use of multiple plasma discharges (varying the scanned emission frequency each discharge). Here, each array element of the 20 channel mixer array measures plasma emission at 16 simultaneous frequencies to form a 16x20 image of the plasma electron temperature Te. Correlation techniques can then be applied to any pair of the 320 image elements to study both radial and poloidal characteristics of turbulent Te fluctuations. The system relies strongly on the development of low cost, wideband (2-18 GHz) IF detection electronics for use in both ECE Imaging as well as conventional heterodyne ECE radiometry. System details, with a strong focus on the wideband IF electronics development, will be presented. *Supported by U.S. DoE Contracts DE-FG03-95ER54295 and DE-FG03-99ER54531.

  16. Satellite image registration based on the geometrical arrangement of objects

    NASA Astrophysics Data System (ADS)

    Bartl, Renate; Schneider, Werner

    1995-11-01

    The knowledge of the geometrical relationship between images is a prerequisite for registration. Assuming a conformal affine transformation, 4 transformation parameters have to be determined. This is done on the basis of the geometrical arrangement of characteristic objects extracted from images in a preprocessing step, for example a land use classification yielding forest, pond, or urban regions. The algorithm introduced establishes correspondence between (centers of gravity of) objects by building and matching so-called ANGLE CHAINS, a linear structure for representing a geometric (2D) arrangement. An example with satellite imagery illustrates the usefulness of the algorithm.
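
    Once object correspondences have been established by the angle-chain matching, the 4 parameters of the conformal affine transformation follow from a linear least-squares solve; a minimal sketch over matched centroids is given below.

```python
import numpy as np

def fit_conformal(src, dst):
    """Fit x' = a*x - b*y + tx, y' = b*x + a*y + ty from matched object centroids."""
    rows, rhs = [], []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([x, -y, 1.0, 0.0]); rhs.append(u)
        rows.append([y,  x, 0.0, 1.0]); rhs.append(v)
    params, *_ = np.linalg.lstsq(np.asarray(rows), np.asarray(rhs), rcond=None)
    a, b, tx, ty = params
    return a, b, tx, ty  # scale = hypot(a, b), rotation = arctan2(b, a)
```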

  17. Image registration for DSA quality enhancement.

    PubMed

    Buzug, T M; Weese, J

    1998-01-01

    A generalized framework for histogram-based similarity measures is presented and applied to the image-enhancement task in digital subtraction angiography (DSA). The class of differentiable, strictly convex weighting functions is identified as suitable weightings of histograms for measuring the degree of clustering that goes along with registration. With respect to computation time, the energy similarity measure is the function of choice for the registration of mask and contrast image prior to subtraction. The robustness of the energy measure is studied for geometrical image distortions like rotation and scaling. Additionally, it is investigated how the histogram binning and inhomogeneous motion inside the templates influence the quality of the similarity measure. Finally, the registration success for the automated procedure is compared with the manually shift-corrected image pair of the head. PMID:9719851
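
    The energy measure amounts to the convex weighting w(p) = p^2 applied to the joint grey-value histogram of mask and contrast image, i.e. the sum of squared joint probabilities; a minimal sketch is shown below (binning and template handling from the paper are omitted).

```python
import numpy as np

def histogram_energy(mask_img, contrast_img, bins=64):
    """Energy of the joint grey-value histogram: sum of squared joint probabilities.
    A more tightly clustered histogram (better alignment) yields a higher energy."""
    hist, _, _ = np.histogram2d(mask_img.ravel(), contrast_img.ravel(), bins=bins)
    p = hist / hist.sum()
    return float((p ** 2).sum())
```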

  18. Nonrigid image registration using an entropic similarity.

    PubMed

    Khader, Mohammed; Ben Hamza, A

    2011-09-01

    In this paper, we propose a nonrigid image registration technique by optimizing a generalized information-theoretic similarity measure using the quasi-Newton method as an optimization scheme and cubic B-splines for modeling the nonrigid deformation field between the fixed and moving 3-D image pairs. To achieve a compromise between the nonrigid registration accuracy and the associated computational cost, we implement a three-level hierarchical multiresolution approach such that the image resolution is increased in a coarse to fine fashion. Experimental results are provided to demonstrate the registration accuracy of our approach. The feasibility of the proposed method is demonstrated on a 3-D magnetic resonance data volume and also on clinically acquired 4-D CT image datasets. PMID:21690017

  19. The image registration of multi-band images by geometrical optics

    NASA Astrophysics Data System (ADS)

    Yan, Yung-Jhe; Chiang, Hou-Chi; Tsai, Yu-Hsiang; Huang, Ting-Wei; Mang, Ou-Yang

    2015-09-01

    Image fusion is the combination of two or more images into one image. The fusion of multi-band spectral images has many applications, such as thermal systems, remote sensing, medical treatment, etc. Images are taken with different imaging sensors. If the sensors take images through different optical paths at the same time, they will be in different positions, and the task of image registration becomes more difficult because the images have different fields of view (F.O.V.), different resolutions, and different view angles. It is important to build the relationship between viewpoints in one image and the other image. In this paper, we focus on the problem of image registration for two non-pinhole sensors. The affine transformation between the 2-D image and the 3-D real world can be derived from the geometrical optics of the sensors. In other words, the geometrical affine transformation function between two images is derived from the intrinsic and extrinsic parameters of the two sensors. According to the affine transformation function, the overlap of the F.O.V. in the two images can be calculated and the two images resampled to the same resolution. Finally, we construct the image registration model from the mapping function. It merges images from different imaging sensors that absorb different wavebands of the electromagnetic spectrum at different positions at the same time.

  20. Targeted fluorescence imaging enhanced by 2D materials: a comparison between 2D MoS2 and graphene oxide.

    PubMed

    Xie, Donghao; Ji, Ding-Kun; Zhang, Yue; Cao, Jun; Zheng, Hu; Liu, Lin; Zang, Yi; Li, Jia; Chen, Guo-Rong; James, Tony D; He, Xiao-Peng

    2016-08-01

    Here we demonstrate that 2D MoS2 can enhance the receptor-targeting and imaging ability of a fluorophore-labelled ligand. The 2D MoS2 has an enhanced working concentration range when compared with graphene oxide, resulting in the improved imaging of both cell and tissue samples. PMID:27378648

  1. An accurate registration technique for distorted images

    NASA Technical Reports Server (NTRS)

    Delapena, Michele; Shaw, Richard A.; Linde, Peter; Dravins, Dainis

    1990-01-01

    Accurate registration of International Ultraviolet Explorer (IUE) images is crucial because the variability of the geometrical distortions that are introduced by the SEC-Vidicon cameras ensures that raw science images are never perfectly aligned with the Intensity Transfer Functions (ITFs) (i.e., graded floodlamp exposures that are used to linearize and normalize the camera response). A technique for precisely registering IUE images which uses a cross correlation of the fixed pattern that exists in all raw IUE images is described.

  2. Semiregular solid texturing from 2D image exemplars.

    PubMed

    Du, Song-Pei; Hu, Shi-Min; Martin, Ralph R

    2013-03-01

    Solid textures, comprising 3D particles embedded in a matrix in a regular or semiregular pattern, are common in natural and man-made materials, such as brickwork, stone walls, plant cells in a leaf, etc. We present a novel technique for synthesizing such textures, starting from 2D image exemplars which provide cross-sections of the desired volume texture. The shapes and colors of typical particles embedded in the structure are estimated from their 2D cross-sections. Particle positions in the texture images are also used to guide spatial placement of the 3D particles during synthesis of the 3D texture. Our experiments demonstrate that our algorithm can produce higher quality structures than previous approaches; they are both compatible with the input images, and have a plausible 3D nature. PMID:22614330

  3. Direct estimation of nonrigid registrations with image-based self-occlusion reasoning.

    PubMed

    Gay-Bellile, Vincent; Bartoli, Adrien; Sayd, Patrick

    2010-01-01

    The registration problem for images of a deforming surface has been well studied. External occlusions are usually well handled. In 2D image-based registration, self-occlusions are more challenging. Consequently, the surface is usually assumed to be only slightly self-occluding. This paper is about image-based nonrigid registration with self-occlusion reasoning. A specific framework explicitly modeling self-occlusions is proposed. It is combined with an intensity-based, "direct" data term for registration. Self-occlusions are detected as shrinkage areas in the 2D warp. Experimental results on several challenging data sets show that our approach successfully registers images with self-occlusions while effectively detecting the self-occluded regions. PMID:19926901
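
    The shrinkage cue can be illustrated with a small sketch (a simplification of the paper's framework, with hypothetical warp coordinate maps): where the Jacobian determinant of the 2D warp falls well below one, neighbouring pixels collapse together, which is flagged as a possible self-occlusion.

```python
import numpy as np

def shrinkage_mask(warp_x, warp_y, threshold=0.5):
    """Flag pixels where the 2D warp shrinks strongly.

    warp_x, warp_y : (H, W) arrays giving the warped x/y coordinate of each pixel.
    A Jacobian determinant well below 1 means neighbouring pixels collapse onto
    each other, which is the shrinkage signature used here as an occlusion cue.
    """
    dxdx = np.gradient(warp_x, axis=1)
    dxdy = np.gradient(warp_x, axis=0)
    dydx = np.gradient(warp_y, axis=1)
    dydy = np.gradient(warp_y, axis=0)
    jac_det = dxdx * dydy - dxdy * dydx
    return jac_det < threshold

# Toy warp: identity everywhere except a band that is squeezed horizontally
H, W = 64, 64
ys, xs = np.mgrid[0:H, 0:W].astype(float)
warp_x = xs.copy()
warp_x[:, 30:40] = 30 + 0.1 * (xs[:, 30:40] - 30)   # strong horizontal shrinkage
mask = shrinkage_mask(warp_x, ys)
print(mask.sum(), "pixels flagged as possibly self-occluded")
```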

  4. Quantifying Therapeutic and Diagnostic Efficacy in 2D Microvascular Images

    NASA Technical Reports Server (NTRS)

    Parsons-Wingerter, Patricia; Vickerman, Mary B.; Keith, Patricia A.

    2009-01-01

    VESGEN is a newly automated, user-interactive program that maps and quantifies the effects of vascular therapeutics and regulators on microvascular form and function. VESGEN analyzes two-dimensional, black and white vascular images by measuring important vessel morphology parameters. This software guides the user through each required step of the analysis process via a concise graphical user interface (GUI). Primary applications of the VESGEN code are 2D vascular images acquired as clinical diagnostic images of the human retina and as experimental studies of the effects of vascular regulators and therapeutics on vessel remodeling.

  5. Software for Automated Image-to-Image Co-registration

    NASA Technical Reports Server (NTRS)

    Benkelman, Cody A.; Hughes, Heidi

    2007-01-01

    The project objectives are: a) Develop software to fine-tune image-to-image co-registration, presuming images are orthorectified prior to input; b) Create a reusable software development kit (SDK) to enable incorporation of these tools into other software; d) Provide automated testing for quantitative analysis; and e) Develop software that applies multiple techniques to achieve subpixel precision in the co-registration of image pairs.

  6. Groupwise Image Registration Guided by a Dynamic Digraph of Images.

    PubMed

    Tang, Zhenyu; Fan, Yong

    2016-04-01

    For groupwise image registration, graph theoretic methods have been adopted for discovering the manifold of images to be registered so that accurate registration of images to a group center image can be achieved by aligning similar images that are linked by the shortest graph paths. However, the image similarity measures adopted to build a graph of images in the extant methods are essentially pairwise measures, not effective for capturing the groupwise similarity among multiple images. To overcome this problem, we present a groupwise image similarity measure that is built on sparse coding for characterizing image similarity among all input images and build a directed graph (digraph) of images so that similar images are connected by the shortest paths of the digraph. Following the shortest paths determined according to the digraph, images are registered to a group center image in an iterative manner by decomposing a large anatomical deformation field required to register an image to the group center image into a series of small ones between similar images. During the iterative image registration, the digraph of images evolves dynamically at each iteration step to pursue an accurate estimation of the image manifold. Moreover, an adaptive dictionary strategy is adopted in the groupwise image similarity measure to ensure fast convergence of the iterative registration procedure. The proposed method has been validated based on both simulated and real brain images, and experiment results have demonstrated that our method was more effective for learning the manifold of input images and achieved higher registration accuracy than state-of-the-art groupwise image registration methods. PMID:26585712
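
    A minimal sketch of the shortest-path idea is shown below (random placeholder dissimilarities stand in for the sparse-coding groupwise similarity): each image is connected to the group-center image by the shortest path of the digraph, along which a large deformation would be decomposed into a series of small ones.

```python
import numpy as np
from scipy.sparse.csgraph import dijkstra

# Pairwise (possibly asymmetric) dissimilarities between n images; in the paper
# these come from a sparse-coding groupwise similarity, here they are random
# placeholders just to show the graph machinery.
rng = np.random.default_rng(1)
n = 6
dissim = rng.random((n, n))
np.fill_diagonal(dissim, 0.0)

center = int(np.argmin(dissim.sum(axis=1)))   # image most similar to all others

# Shortest paths between all images along the digraph of dissimilarities.
dist, pred = dijkstra(dissim, directed=True, return_predecessors=True)

def path_to_center(i):
    """Shortest digraph path from image i to the group-center image."""
    p = [center]
    while p[-1] != i:
        p.append(int(pred[i, p[-1]]))
    return p[::-1]

for i in range(n):
    print(i, "->", path_to_center(i))
```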

  7. Region-based Statistical Analysis of 2D PAGE Images

    PubMed Central

    Li, Feng; Seillier-Moiseiwitsch, Françoise; Korostyshevskiy, Valeriy R.

    2011-01-01

    A new comprehensive procedure for statistical analysis of two-dimensional polyacrylamide gel electrophoresis (2D PAGE) images is proposed, including protein region quantification, normalization and statistical analysis. Protein regions are defined by the master watershed map that is obtained from the mean gel. By working with these protein regions, the approach bypasses the current bottleneck in the analysis of 2D PAGE images: it does not require spot matching. Background correction is implemented in each protein region by local segmentation. Two-dimensional locally weighted smoothing (LOESS) is proposed to remove any systematic bias after quantification of protein regions. Proteins are separated into mutually independent sets based on detected correlations, and a multivariate analysis is used on each set to detect the group effect. A strategy for multiple hypothesis testing based on this multivariate approach combined with the usual Benjamini-Hochberg FDR procedure is formulated and applied to the differential analysis of 2D PAGE images. Each step in the analytical protocol is shown by using an actual dataset. The effectiveness of the proposed methodology is shown using simulated gels in comparison with the commercial software packages PDQuest and Dymension. We also introduce a new procedure for simulating gel images. PMID:21850152

  8. 2D luminescence imaging of pH in vivo

    PubMed Central

    Schreml, Stephan; Meier, Robert J.; Wolfbeis, Otto S.; Landthaler, Michael; Szeimies, Rolf-Markus; Babilas, Philipp

    2011-01-01

    Luminescence imaging of biological parameters is an emerging field in biomedical sciences. Tools to study 2D pH distribution are needed to gain new insights into complex disease processes, such as wound healing and tumor metabolism. In recent years, luminescence-based methods for pH measurement have been developed. However, for in vivo applications, especially for studies on humans, biocompatibility and reliability under varying conditions have to be ensured. Here, we present a referenced luminescent sensor for 2D high-resolution imaging of pH in vivo. The ratiometric sensing scheme is based on time-domain luminescence imaging of FITC and ruthenium(II)tris-(4,7-diphenyl-1,10-phenanthroline). To create a biocompatible 2D sensor, these dyes were bound to or incorporated into microparticles (aminocellulose and polyacrylonitrile), and particles were immobilized in polyurethane hydrogel on transparent foils. We show sensor precision and validity by conducting in vitro and in vivo experiments, and we show the versatility in imaging pH during physiological and chronic cutaneous wound healing in humans. Implementation of this technique may open vistas in wound healing, tumor biology, and other biomedical fields. PMID:21262842

  9. An image registration based ultrasound probe calibration

    NASA Astrophysics Data System (ADS)

    Li, Xin; Kumar, Dinesh; Sarkar, Saradwata; Narayanan, Ram

    2012-02-01

    Reconstructed 3D ultrasound of prostate gland finds application in several medical areas such as image guided biopsy, therapy planning and dose delivery. In our application, we use an end-fire probe rotated about its axis to acquire a sequence of rotational slices to reconstruct 3D TRUS (Transrectal Ultrasound) image. The image acquisition system consists of an ultrasound transducer situated on a cradle directly attached to a rotational sensor. However, due to system tolerances, axis of probe does not align exactly with the designed axis of rotation resulting in artifacts in the 3D reconstructed ultrasound volume. We present a rigid registration based automatic probe calibration approach. The method uses a sequence of phantom images, each pair acquired at angular separation of 180 degrees and registers corresponding image pairs to compute the deviation from designed axis. A modified shadow removal algorithm is applied for preprocessing. An attribute vector is constructed from image intensity and a speckle-insensitive information-theoretic feature. We compare registration between the presented method and expert-corrected images in 16 prostate phantom scans. Images were acquired at multiple resolutions, and different misalignment settings from two ultrasound machines. Screenshots from 3D reconstruction are shown before and after misalignment correction. Registration parameters from automatic and manual correction were found to be in good agreement. Average absolute differences of translation and rotation between automatic and manual methods were 0.27 mm and 0.65 degree, respectively. The registration parameters also showed lower variability for automatic registration (pooled standard deviation σtranslation = 0.50 mm, σrotation = 0.52 degree) compared to the manual approach (pooled standard deviation σtranslation = 0.62 mm, σrotation = 0.78 degree).

  10. 2D imaging of functional structures in perfused pig heart

    NASA Astrophysics Data System (ADS)

    Kessler, Manfred D.; Cristea, Paul D.; Hiller, Michael; Trinks, Tobias

    2002-06-01

    In 2000 by 2D-imaging we were able for the first time to visualize in subcellular space functional structures of myocardium. For these experiments we used hemoglobin-free perfused pig hearts in our lab. Step by step we learned to understand the meaning of subcellular structures. Principally, the experiment revealed that in subcellular space very fast changes of light scattering can occur. Furthermore, coefficients of different parameters were determined on the basis of multicomponent system theory.

  11. A survey of medical image registration - under review.

    PubMed

    Viergever, Max A; Maintz, J B Antoine; Klein, Stefan; Murphy, Keelin; Staring, Marius; Pluim, Josien P W

    2016-10-01

    A retrospective view on the past two decades of the field of medical image registration is presented, guided by the article "A survey of medical image registration" (Maintz and Viergever, 1998). It shows that the classification of the field introduced in that article is still usable, although some modifications to do justice to advances in the field would be due. The main changes over the last twenty years are the shift from extrinsic to intrinsic registration, the primacy of intensity-based registration, the breakthrough of nonlinear registration, the progress of inter-subject registration, and the availability of generic image registration software packages. Two problems that were called urgent already 20 years ago, are even more urgent nowadays: Validation of registration methods, and translation of results of image registration research to clinical practice. It may be concluded that the field of medical image registration has evolved, but still is in need of further development in various aspects. PMID:27427472

  12. Bayesian 2D Current Reconstruction from Magnetic Images

    NASA Astrophysics Data System (ADS)

    Clement, Colin B.; Bierbaum, Matthew K.; Nowack, Katja; Sethna, James P.

    We employ a Bayesian image reconstruction scheme to recover 2D currents from magnetic flux imaged with scanning SQUIDs (Superconducting Quantum Interferometric Devices). Magnetic flux imaging is a versatile tool to locally probe currents and magnetic moments, however present reconstruction methods sacrifice resolution due to numerical instability. Using state-of-the-art blind deconvolution techniques we recover the currents, point-spread function and height of the SQUID loop by optimizing the probability of measuring an image. We obtain uncertainties on these quantities by sampling reconstructions. This generative modeling technique could be used to develop calibration protocols for scanning SQUIDs, to diagnose systematic noise in the imaging process, and can be applied to many tools beyond scanning SQUIDs.

  13. Image registration using a weighted region adjacency graph

    NASA Astrophysics Data System (ADS)

    Al-Hasan, Muhannad; Fisher, Mark

    2005-04-01

    Image registration is an important problem for image processing and computer vision with many proposed applications in medical image analysis.1, 2 Image registration techniques attempt to map corresponding features between two images. The problem is particularly difficult as anatomy is subject to elastic deformations. This paper considers this problem in the context of graph matching. Firstly, weighted Region Adjacency Graphs (RAGs) are constructed from each image using an approach based on watershed saliency. 3 The vertices of the RAG represent salient regions in the image and the (weighted) edges represent the relationship (bonding) between each region. Correspondences between images are then determined using a weighted graph matching method. Graph matching is considered to be one of the most complex problems in computer vision, due to its combinatorial nature. Our approach uses a multi-spectral technique to graph matching first proposed by Umeyama4 to find an approximate solution to the weighted graph matching problem (WGMP) based on the singular value decomposition of the adjacency matrix. Results show the technique is successful in co-registering 2-D MRI images and the method could be useful in co-registering 3-D volumetric data (e.g. CT, MRI, SPECT, PET etc.).
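
    The Umeyama step can be sketched in isolation (a toy instance, not the full RAG pipeline described above): for two weighted graphs of equal size with symmetric adjacency matrices, the eigenvector magnitudes give node-to-node scores and the Hungarian algorithm turns them into a correspondence.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def umeyama_match(A, B):
    """Approximate weighted-graph matching (Umeyama's eigendecomposition method)
    for two equal-size graphs with symmetric adjacency matrices.
    Returns perm such that node i of graph A corresponds to node perm[i] of B."""
    _, Ua = np.linalg.eigh(A)
    _, Ub = np.linalg.eigh(B)
    scores = np.abs(Ua) @ np.abs(Ub).T            # (A-node, B-node) similarity scores
    rows, cols = linear_sum_assignment(-scores)   # maximise the total score
    return cols

# Toy check: graph B is a node-permuted copy of graph A
rng = np.random.default_rng(2)
n = 8
A = rng.random((n, n)); A = (A + A.T) / 2; np.fill_diagonal(A, 0.0)
perm_true = rng.permutation(n)                    # node i of A <-> node perm_true[i] of B
B = np.empty_like(A)
B[np.ix_(perm_true, perm_true)] = A
print(umeyama_match(A, B))                        # recovers perm_true
print(perm_true)
```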

  14. Spatially weighted mutual information image registration for image guided radiation therapy

    SciTech Connect

    Park, Samuel B.; Rhee, Frank C.; Monroe, James I.; Sohn, Jason W.

    2010-09-15

    Purpose: To develop a new metric for image registration that incorporates the (sub)pixelwise differential importance along spatial location and to demonstrate its application for image guided radiation therapy (IGRT). Methods: It is well known that rigid-body image registration with mutual information is dependent on the size and location of the image subset on which the alignment analysis is based [the designated region of interest (ROI)]. Therefore, careful review and manual adjustments of the resulting registration are frequently necessary. Although there were some investigations of weighted mutual information (WMI), these efforts could not apply the differential importance to a particular spatial location since WMI only applies the weight to the joint histogram space. The authors developed the spatially weighted mutual information (SWMI) metric by incorporating an adaptable weight function with spatial localization into mutual information. SWMI enables the user to apply the selected transform to medically ''important'' areas such as tumors and critical structures, so SWMI is neither dominated by, nor neglects the neighboring structures. Since SWMI can be utilized with any weight function form, the authors presented two examples of weight functions for IGRT application: A Gaussian-shaped weight function (GW) applied to a user-defined location and a structures-of-interest (SOI) based weight function. An image registration example using a synthesized 2D image is presented to illustrate the efficacy of SWMI. The convergence and feasibility of the registration method as applied to clinical imaging is illustrated by fusing a prostate treatment planning CT with a clinical cone beam CT (CBCT) image set acquired for patient alignment. Forty-one trials are run to test the speed of convergence. The authors also applied SWMI registration using two types of weight functions to two head and neck cases and a prostate case with clinically acquired CBCT/MVCT image sets. The
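
    The core idea can be sketched as follows (a simplified stand-in, not the authors' implementation): each pixel's contribution to the joint intensity histogram is scaled by a spatial weight, here a Gaussian centred on a hypothetical region of interest, and mutual information is computed from the weighted histogram.

```python
import numpy as np

def spatially_weighted_mi(fixed, moving, weight, bins=32):
    """Mutual information in which each pixel's contribution to the joint
    histogram is scaled by a spatial weight (e.g. a Gaussian centred on a
    tumour). Plain MI is the special case weight == 1 everywhere."""
    f = fixed.ravel().astype(float)
    m = moving.ravel().astype(float)
    w = weight.ravel().astype(float)
    joint, _, _ = np.histogram2d(f, m, bins=bins, weights=w)
    joint /= joint.sum()
    pf = joint.sum(axis=1, keepdims=True)
    pm = joint.sum(axis=0, keepdims=True)
    nz = joint > 0
    return float(np.sum(joint[nz] * np.log(joint[nz] / (pf @ pm)[nz])))

# Gaussian weight centred on a hypothetical region of interest
H, W = 128, 128
ys, xs = np.mgrid[0:H, 0:W]
weight = np.exp(-((ys - 40) ** 2 + (xs - 70) ** 2) / (2 * 15.0 ** 2))

rng = np.random.default_rng(3)
fixed = rng.random((H, W))
moving = fixed + 0.05 * rng.standard_normal((H, W))
print(spatially_weighted_mi(fixed, moving, weight))
```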

  15. Geometrical Correlation and Matching of 2d Image Shapes

    NASA Astrophysics Data System (ADS)

    Vizilter, Y. V.; Zheltov, S. Y.

    2012-07-01

    The problem of selecting an image correspondence measure for image comparison and matching is addressed. Many practical applications require image matching "just by shape", with no dependence on the concrete intensity or color values. The most popular technique for image shape comparison uses the mutual information measure, based on probabilistic reasoning and an information-theory background. Another approach was proposed by Pytiev (so-called "Pytiev morphology"), based on geometrical and algebraic reasoning. In this framework, images are considered as piecewise-constant 2D functions: a tessellation of the image frame into a set of non-intersecting connected regions determines the "shape" of an image, and the projection of an image onto the shape of another image is defined. Morphological image comparison is performed using normalized morphological correlation coefficients, which estimate the closeness of one image to the shape of another image. Such an image analysis technique can be characterized as "intensity-to-geometry" matching. This paper generalizes the Pytiev morphological approach to obtain pure "geometry-to-geometry" matching techniques. A generalized intensity-geometrical correlation coefficient is proposed that includes the linear correlation coefficient and the square of the Pytiev correlation coefficient as special cases. A morphological shape correlation coefficient is proposed based on statistical averaging of images with the same shape. A centered morphological correlation coefficient is obtained under the condition of intensity centering of the averaged images. Two types of symmetric geometrical normalized correlation coefficients are proposed for the comparison of shape tessellations. A technique for correlation and matching of shapes with ordered intensities is proposed, with correlation measures invariant to monotonic intensity transformations. The quality of the proposed geometrical correlation measures is experimentally estimated in the task of
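
    A small sketch of the basic (un-centred) Pytiev projection and correlation coefficient may help (an illustration under simplifying assumptions, not the paper's generalized measures): the shape of a reference image is its tessellation into constant-intensity regions, another image is projected onto that shape by region-wise averaging, and the ratio of norms gives the correlation.

```python
import numpy as np

def project_onto_shape(g, labels):
    """Pytiev projection: replace g inside every region of the shape (a label
    map of the reference image's constant-intensity regions) by its mean."""
    proj = np.zeros_like(g, dtype=float)
    for lab in np.unique(labels):
        mask = labels == lab
        proj[mask] = g[mask].mean()
    return proj

def morphological_correlation(g, labels):
    """Normalised morphological correlation ||P_F g|| / ||g|| of image g with
    the shape F given by `labels` (equal to 1 when g is constant on F)."""
    proj = project_onto_shape(g, labels)
    return np.linalg.norm(proj) / np.linalg.norm(g)

# Reference "shape": two constant regions; test images share that tessellation
labels = np.zeros((32, 32), int)
labels[:, 16:] = 1
g_same_shape = np.where(labels == 1, 5.0, 2.0)
rng = np.random.default_rng(4)
g_noisy = g_same_shape + rng.standard_normal((32, 32))
print(morphological_correlation(g_same_shape, labels))  # -> 1.0
print(morphological_correlation(g_noisy, labels))       # < 1.0
```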

  16. Automatic ultrasound-MRI registration for neurosurgery using the 2D and 3D LC(2) Metric.

    PubMed

    Fuerst, Bernhard; Wein, Wolfgang; Müller, Markus; Navab, Nassir

    2014-12-01

    To enable image guided neurosurgery, the alignment of pre-interventional magnetic resonance imaging (MRI) and intra-operative ultrasound (US) is commonly required. We present two automatic image registration algorithms using the similarity measure Linear Correlation of Linear Combination (LC(2)) to align either freehand US slices or US volumes with MRI images. Both approaches allow an automatic and robust registration, while the three dimensional method yields a significantly improved percentage of optimally aligned registrations for randomly chosen clinically relevant initializations. This study presents a detailed description of the methodology and an extensive evaluation showing an accuracy of 2.51mm, precision of 0.85mm and capture range of 15mm (>95% convergence) using 14 clinical neurosurgical cases. PMID:24842859

  17. Multigrid optimal mass transport for image registration and morphing

    NASA Astrophysics Data System (ADS)

    Rehman, Tauseef ur; Tannenbaum, Allen

    2007-02-01

    In this paper we present a computationally efficient Optimal Mass Transport algorithm. This method is based on the Monge-Kantorovich theory and is used for computing elastic registration and warping maps in image registration and morphing applications. This is a parameter free method which utilizes all of the grayscale data in an image pair in a symmetric fashion. No landmarks need to be specified for correspondence. In our work, we demonstrate significant improvement in computation time when our algorithm is applied as compared to the originally proposed method by Haker et al [1]. The original algorithm was based on a gradient descent method for removing the curl from an initial mass preserving map regarded as 2D vector field. This involves inverting the Laplacian in each iteration which is now computed using full multigrid technique resulting in an improvement in computational time by a factor of two. Greater improvement is achieved by decimating the curl in a multi-resolutional framework. The algorithm was applied to 2D short axis cardiac MRI images and brain MRI images for testing and comparison.

  18. Automatic parameter selection for multimodal image registration.

    PubMed

    Hahn, Dieter A; Daum, Volker; Hornegger, Joachim

    2010-05-01

    Over the past ten years similarity measures based on intensity distributions have become state-of-the-art in automatic multimodal image registration. An implementation for clinical usage has to support a plurality of images. However, a generally applicable parameter configuration for the number and sizes of histogram bins, optimal Parzen-window kernel widths or background thresholds cannot be found. This explains why various research groups present partly contradictory empirical proposals for these parameters. This paper proposes a set of data-driven estimation schemes for a parameter-free implementation that eliminates major caveats of heuristic trial and error. We present the following novel approaches: a new coincidence weighting scheme to reduce the influence of background noise on the similarity measure in combination with Max-Lloyd requantization, and a tradeoff for the automatic estimation of the number of histogram bins. These methods have been integrated into a state-of-the-art rigid registration that is based on normalized mutual information and applied to CT-MR, PET-MR, and MR-MR image pairs of the RIRE 2.0 database. We compare combinations of the proposed techniques to a standard implementation using default parameters, which can be found in the literature, and to a manual registration by a medical expert. Additionally, we analyze the effects of various histogram sizes, sampling rates, and error thresholds for the number of histogram bins. The comparison of the parameter selection techniques yields 25 approaches in total, with 114 registrations each. The number of bins has no significant influence on the proposed implementation that performs better than both the manual and the standard method in terms of acceptance rates and target registration error (TRE). The overall mean TRE is 2.34 mm compared to 2.54 mm for the manual registration and 6.48 mm for a standard implementation. Our results show a significant TRE reduction for distortion

  19. A novel parametric method for non-rigid image registration.

    PubMed

    Cuzol, Anne; Hellier, Pierre; Mémin, Etienne

    2005-01-01

    This paper presents a novel non-rigid registration method. The main contribution of the method is the modeling of the vorticity (respectively divergence) of the deformation field using vortex (respectively sink and source) particles. Two parameters are associated with a particle: the vorticity (or divergence) strength and the influence domain. This leads to a very compact representation of vorticity and divergence fields. In addition, the optimal position of these particles is determined using a mean shift process. 2D experiments of this method are presented and demonstrate its ability to recover evolving phenomena (MS lesions) so as to register images from 20 patients. PMID:17354717

  20. Deep Adaptive Log-Demons: Diffeomorphic Image Registration with Very Large Deformations

    PubMed Central

    Zhao, Liya; Jia, Kebin

    2015-01-01

    This paper proposes a new framework for capturing large and complex deformations in image registration. Traditionally, this challenging problem is handled first by a preregistration, usually an affine matrix containing rotation, scale, and translation, and afterwards by a nonrigid transformation. In the preregistration step, an affine matrix computed directly from limited pixel information may misregister the images when large biases exist, misleading the subsequent registration. To address this problem, for two-dimensional (2D) images, the two-layer deep adaptive registration framework proposed in this paper first classifies the rotation parameter using multilayer convolutional neural networks (CNNs) and then identifies the scale and translation parameters separately. For three-dimensional (3D) images, the affine matrix is located through feature correspondences from triplanar 2D CNNs. The deformation is then removed iteratively through preregistration and Demons registration. Compared with state-of-the-art registration frameworks, our method gives more accurate registration results on both synthetic and real datasets. In addition, principal component analysis (PCA) is combined with correlation measures such as Pearson and Spearman to form new similarity criteria for 2D and 3D registration. Experimental results also show a faster convergence speed. PMID:26120356

  1. Deep Adaptive Log-Demons: Diffeomorphic Image Registration with Very Large Deformations.

    PubMed

    Zhao, Liya; Jia, Kebin

    2015-01-01

    This paper proposes a new framework for capturing large and complex deformations in image registration. Traditionally, this challenging problem is handled first by a preregistration, usually an affine matrix containing rotation, scale, and translation, and afterwards by a nonrigid transformation. In the preregistration step, an affine matrix computed directly from limited pixel information may misregister the images when large biases exist, misleading the subsequent registration. To address this problem, for two-dimensional (2D) images, the two-layer deep adaptive registration framework proposed in this paper first classifies the rotation parameter using multilayer convolutional neural networks (CNNs) and then identifies the scale and translation parameters separately. For three-dimensional (3D) images, the affine matrix is located through feature correspondences from triplanar 2D CNNs. The deformation is then removed iteratively through preregistration and Demons registration. Compared with state-of-the-art registration frameworks, our method gives more accurate registration results on both synthetic and real datasets. In addition, principal component analysis (PCA) is combined with correlation measures such as Pearson and Spearman to form new similarity criteria for 2D and 3D registration. Experimental results also show a faster convergence speed. PMID:26120356

  2. Microwave Imaging with Infrared 2-D Lock-in Amplifier

    NASA Astrophysics Data System (ADS)

    Chiyo, Noritaka; Arai, Mizuki; Tanaka, Yasuhiro; Nishikata, Atsuhiro; Maeno, Takashi

    We have developed a 3-D electromagnetic field measurement system using a 2-D lock-in amplifier. This system uses an amplitude-modulated electromagnetic wave source to heat a resistive screen. A very small change of temperature on a screen illuminated with the modulated electromagnetic wave is measured using an infrared thermograph camera. In this paper, we attempted to apply our system to microwave imaging. By placing conductor patches in front of the resistive screen and illuminating them with microwaves, the shape of each conductor was clearly observed as a temperature difference image of the screen. In this way, the conductor pattern inside a non-contact type IC card could be visualized. Moreover, we could observe the temperature difference image reflecting the shape of a Konnyaku (a gelatinous food made from devil's-tongue starch) or a dried fishbone, both non-conducting materials resembling the human body. These results show that our method is applicable to microwave see-through imaging.

  3. Image registration using binary boundary maps

    NASA Technical Reports Server (NTRS)

    Andrus, J. F.; Campbell, C. W.; Jayroe, R. R.

    1978-01-01

    Registration technique that matches binary boundary maps extracted from raw data, rather than matching actual data, is considerably faster than other techniques. Boundary maps, which are digital representations of regions where image amplitudes change significantly, typically represent data compression of 60 to 70 percent. Maps allow average products to be computed with addition rather than multiplication, further reducing computation time.

  4. Image registration for luminescent paint applications

    NASA Technical Reports Server (NTRS)

    Bell, James H.; Mclachlan, Blair G.

    1993-01-01

    The use of pressure sensitive luminescent paints is a viable technique for the measurement of surface pressure on wind tunnel models. This technique requires data reduction of images obtained under known as well as test conditions and spatial transformation of the images. A general transform which registers images to subpixel accuracy is presented and the general characteristics of transforms for image registration and their derivation are discussed. Image resection and its applications are described. The mapping of pressure data to the three dimensional model surface for small wind tunnel models to a spatial accuracy of 0.5 percent of the model length is demonstrated.

  5. Spot identification on 2D electrophoresis gel images

    NASA Astrophysics Data System (ADS)

    Wang, Weixing

    2006-09-01

    2-D electrophoresis gel images can be used for identifying and characterizing many forms of a particular protein encoded by a single gene. Conventional approaches to gel analysis require three steps: (1) spot detection on each gel; (2) spot matching between gels; and (3) spot quantification and comparison. Many researchers and developers attempt to automate all steps as much as possible, but errors in the detection and matching stages are common. In order to carry out gel image analysis, one first needs to accurately detect and measure the protein spots in a gel image. This paper presents algorithms for automatically delineating gel spots. A fusion of two types of segmentation algorithms was implemented: one edge (discontinuity) based and the other region based. A preliminary integration of the two types of segmentation algorithms was also tested, and the test results clearly show that the integrated algorithm can automatically delineate gel spots on both simple and complex images, and that it performs much better than either the edge-based or the region-based algorithm alone. Based on the testing and analysis results, the fusion of edge information and region information is well suited to segmenting this kind of gel image.

  6. Video Image Stabilization and Registration

    NASA Technical Reports Server (NTRS)

    Hathaway, David H. (Inventor); Meyer, Paul J. (Inventor)

    2003-01-01

    A method of stabilizing and registering a video image in multiple video fields of a video sequence provides accurate determination of the image change in magnification, rotation and translation between video fields, so that the video fields may be accurately corrected for these changes in the image in the video sequence. In a described embodiment, a key area of a key video field is selected which contains an image which it is desired to stabilize in a video sequence. The key area is subdivided into nested pixel blocks and the translation of each of the pixel blocks from the key video field to a new video field is determined as a precursor to determining change in magnification, rotation and translation of the image from the key video field to the new video field.
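
    The block-translation step can be sketched generically (a simplified stand-in for the patented method): each pixel block of the key area is located in the new field by an exhaustive sum-of-squared-differences search over a small window of candidate shifts.

```python
import numpy as np

def block_translation(key_block, new_field, top, left, search=8):
    """Find the integer translation of one key-area pixel block between a key
    video field and a new field by exhaustive SSD search in a small window."""
    h, w = key_block.shape
    best, best_dydx = np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + h > new_field.shape[0] or x + w > new_field.shape[1]:
                continue
            ssd = np.sum((new_field[y:y + h, x:x + w] - key_block) ** 2)
            if ssd < best:
                best, best_dydx = ssd, (dy, dx)
    return best_dydx

# Toy check: the new field is the key field shifted by (2, -3)
rng = np.random.default_rng(5)
key_field = rng.random((120, 160))
new_field = np.roll(key_field, (2, -3), axis=(0, 1))
top, left = 40, 60
print(block_translation(key_field[top:top + 16, left:left + 16], new_field, top, left))
```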

  7. Video Image Stabilization and Registration

    NASA Technical Reports Server (NTRS)

    Hathaway, David H. (Inventor); Meyer, Paul J. (Inventor)

    2002-01-01

    A method of stabilizing and registering a video image in multiple video fields of a video sequence provides accurate determination of the image change in magnification, rotation and translation between video fields, so that the video fields may be accurately corrected for these changes in the image in the video sequence. In a described embodiment, a key area of a key video field is selected which contains an image which it is desired to stabilize in a video sequence. The key area is subdivided into nested pixel blocks and the translation of each of the pixel blocks from the key video field to a new video field is determined as a precursor to determining change in magnification, rotation and translation of the image from the key video field to the new video field.

  8. Video Image Stabilization and Registration

    NASA Astrophysics Data System (ADS)

    Hathaway, David H.; Meyer, Paul J.

    2002-10-01

    A method of stabilizing and registering a video image in multiple video fields of a video sequence provides accurate determination of the image change in magnification, rotation and translation between video fields, so that the video fields may be accurately corrected for these changes in the image in the video sequence. In a described embodiment, a key area of a key video field is selected which contains an image which it is desired to stabilize in a video sequence. The key area is subdivided into nested pixel blocks and the translation of each of the pixel blocks from the key video field to a new video field is determined as a precursor to determining change in magnification, rotation and translation of the image from the key video field to the new video field.

  9. Stochastic inverse consistency in medical image registration.

    PubMed

    Yeung, Sai Kit; Shi, Pengcheng

    2005-01-01

    An essential goal in medical image registration is that the forward and reverse mapping matrices should be inverses of each other, i.e., inverse consistency. Conventional approaches enforce consistency in a deterministic fashion, by incorporating a sub-objective cost function that imposes source-destination symmetry during the registration process. Assuming that the initial forward and reverse matching matrices have been computed and are used as inputs to our system, this paper presents a stochastic framework that yields perfect inverse consistency while simultaneously accounting for the errors underlying the registration matrices and the imperfection of the consistency constraint. An iterative generalized total least squares (GTLS) strategy has been developed such that inverse consistency is optimally imposed. PMID:16685959
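
    The notion of inverse consistency can be illustrated with a small sketch (not the GTLS framework itself): composing the forward and reverse displacement fields should return every pixel to where it started, and the mean residual of that composition measures the inconsistency.

```python
import numpy as np

def inverse_consistency_error(fwd, rev):
    """Mean residual of composing a forward and a reverse 2D displacement field.
    fwd, rev : (H, W, 2) arrays of (dy, dx) displacements. For perfectly
    inverse-consistent transforms the composition is the identity map."""
    H, W, _ = fwd.shape
    ys, xs = np.mgrid[0:H, 0:W].astype(float)
    # forward-map every pixel, then look up the reverse displacement there
    y1 = np.clip(ys + fwd[..., 0], 0, H - 1)
    x1 = np.clip(xs + fwd[..., 1], 0, W - 1)
    yi, xi = np.rint(y1).astype(int), np.rint(x1).astype(int)   # nearest-neighbour lookup
    y2 = y1 + rev[yi, xi, 0]
    x2 = x1 + rev[yi, xi, 1]
    return float(np.mean(np.hypot(y2 - ys, x2 - xs)))

# A constant translation and its exact inverse give (near) zero error
H, W = 64, 64
fwd = np.zeros((H, W, 2)); fwd[..., 0], fwd[..., 1] = 1.5, -2.0
rev = -fwd
print(inverse_consistency_error(fwd, rev))
```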

  10. Symmetries of the 2D magnetic particle imaging system matrix.

    PubMed

    Weber, A; Knopp, T

    2015-05-21

    In magnetic particle imaging (MPI), the relation between the particle distribution and the measurement signal can be described by a linear system of equations. For 1D imaging, it can be shown that the system matrix can be expressed as a product of a convolution matrix and a Chebyshev transformation matrix. For multidimensional imaging, the structure of the MPI system matrix is not yet fully explored as the sampling trajectory complicates the physical model. It has been experimentally found that the MPI system matrix rows have symmetries and look similar to the tensor products of Chebyshev polynomials. In this work we will mathematically prove that the 2D MPI system matrix has symmetries that can be used for matrix compression. PMID:25919400

  11. Landsat image registration for agricultural applications

    NASA Technical Reports Server (NTRS)

    Wolfe, R. H., Jr.; Juday, R. D.; Wacker, A. G.; Kaneko, T.

    1982-01-01

    An image registration system has been developed at the NASA Johnson Space Center (JSC) to spatially align multi-temporal Landsat acquisitions for use in agriculture and forestry research. Working in conjunction with the Master Data Processor (MDP) at the Goddard Space Flight Center, it functionally replaces the long-standing LACIE Registration Processor as JSC's data supplier. The system represents an expansion of the techniques developed for the MDP and LACIE Registration Processor, and it utilizes the experience gained in an IBM/JSC effort evaluating the performance of the latter. These techniques are discussed in detail. Several tests were developed to evaluate the registration performance of the system. The results indicate that 1/15-pixel accuracy (about 4m for Landsat MSS) is achievable in ideal circumstances, sub-pixel accuracy (often to 0.2 pixel or better) was attained on a representative set of U.S. acquisitions, and a success rate commensurate with the LACIE Registration Processor was realized. The system has been employed in a production mode on U.S. and foreign data, and a performance similar to the earlier tests has been noted.

  12. Image Appraisal for 2D and 3D Electromagnetic Inversion

    SciTech Connect

    Alumbaugh, D.L.; Newman, G.A.

    1999-01-28

    Linearized methods are presented for appraising image resolution and parameter accuracy in images generated with two and three dimensional non-linear electromagnetic inversion schemes. When direct matrix inversion is employed, the model resolution and posterior model covariance matrices can be directly calculated. A method to examine how the horizontal and vertical resolution varies spatially within the electromagnetic property image is developed by examining the columns of the model resolution matrix. Plotting the square root of the diagonal of the model covariance matrix yields an estimate of how errors in the inversion process such as data noise and incorrect a priori assumptions about the imaged model map into parameter error. This type of image is shown to be useful in analyzing spatial variations in the image sensitivity to the data. A method is analyzed for statistically estimating the model covariance matrix when the conjugate gradient method is employed rather than a direct inversion technique (for example in 3D inversion). A method for calculating individual columns of the model resolution matrix using the conjugate gradient method is also developed. Examples of the image analysis techniques are provided on 2D and 3D synthetic cross well EM data sets, as well as a field data set collected at the Lost Hills Oil Field in Central California.

  13. Geometric assessment of image quality using digital image registration techniques

    NASA Technical Reports Server (NTRS)

    Tisdale, G. E.

    1976-01-01

    Image registration techniques were developed to perform a geometric quality assessment of multispectral and multitemporal image pairs. Based upon LANDSAT tapes, accuracies to a small fraction of a pixel were demonstrated. Because it is insensitive to the choice of registration areas, the technique is well suited to performance in an automatic system. It may be implemented at megapixel-per-second rates using a commercial minicomputer in combination with a special purpose digital preprocessor.

  14. Geometric direct search algorithms for image registration.

    PubMed

    Lee, Seok; Choi, Minseok; Kim, Hyungmin; Park, Frank Chongwoo

    2007-09-01

    A widely used approach to image registration involves finding the general linear transformation that maximizes the mutual information between two images, with the transformation being rigid-body [i.e., belonging to SE(3)] or volume-preserving [i.e., belonging to SL(3)]. In this paper, we present coordinate-invariant, geometric versions of the Nelder-Mead optimization algorithm on the groups SL(3), SE(3), and their various subgroups, that are applicable to a wide class of image registration problems. Because the algorithms respect the geometric structure of the underlying groups, they are numerically more stable, and exhibit better convergence properties than existing local coordinate-based algorithms. Experimental results demonstrate the improved convergence properties of our geometric algorithms. PMID:17784595

  15. 2-D Drift Velocities from the IMAGE EUV Plasmaspheric Imager

    NASA Technical Reports Server (NTRS)

    Gallagher, D. L.

    2006-01-01

    The IMAGE Mission extreme ultraviolet imager (EUV) observes He(+) plasmaspheric ions throughout the inner magnetosphere. Limited by ionizing radiation and viewing close to the Sun, images of the He(+) distribution are available every 10 minutes for many hours as the spacecraft passes through apogee in its highly elliptical orbit. As a consistent constituent at about 15%, He(+) is an excellent surrogate for monitoring all of the processes that control the dynamics of plasmaspheric plasma. In particular, the motion of He(+) transverse to the ambient magnetic field is a direct indication of convective electric fields. The analysis of boundary motions has already achieved new insights into the electrodynamic coupling processes taking place between energetic magnetospheric plasmas and the ionosphere. Yet to be fulfilled, however, is the original promise that global EUV images of the plasmasphere might yield two-dimensional pictures of mesoscale to macro-scale electric fields in the inner magnetosphere. This work details the technique and initial application of an IMAGE EUV analysis that appears capable of following thermal plasma motion on a global basis.

  16. 2-D Drift Velocities from the IMAGE EUV Plasmaspheric Imager

    NASA Technical Reports Server (NTRS)

    Gallagher, D.; Adrian, M.

    2007-01-01

    The IMAGE Mission extreme ultraviolet imager (EUV) observes He+ plasmaspheric ions throughout the inner magnetosphere. Limited by ionizing radiation and viewing close to the Sun, images of the He+ distribution are available every 10 minutes for many hours as the spacecraft passes through apogee in its highly elliptical orbit. As a consistent constituent at about 15%, He+ is an excellent surrogate for monitoring all of the processes that control the dynamics of plasmaspheric plasma. In particular, the motion of He+ transverse to the ambient magnetic field is a direct indication of convective electric fields. The analysis of boundary motions has already achieved new insights into the electrodynamic coupling processes taking place between energetic magnetospheric plasmas and the ionosphere. Yet to be fulfilled, however, is the original promise that global EUV images of the plasmasphere might yield two-dimensional pictures of meso-scale to macro-scale electric fields in the inner magnetosphere. This work details the technique and initial application of an IMAGE EUV analysis that appears capable of following thermal plasma motion on a global basis.

  17. Single- and multimodal subvoxel registration of dissimilar medical images using robust similarity measures

    NASA Astrophysics Data System (ADS)

    Nikou, Christophoros; Heitz, Fabrice; Armspach, Jean-Paul; Namer, Izzie-Jacques

    1998-06-01

    Although a large variety of image registration methods have been described in the literature, only a few approaches have attempted to address the rigid registration of medical images showing gross dissimilarities (due for instance to lesion evolution). In the present paper, we develop registration algorithms, relying on robust pixel similarity metrics, that enable an accurate (subvoxel) rigid registration of dissimilar single or multimodal 2D/3D images. In the proposed approach, gross dissimilarities are handled by considering similarity measures related to robust M-estimators. A "soft redescending" estimator (the Geman-McClure ρ-function) has been adopted to reject gross image dissimilarities during the registration. The registration parameters are estimated using a top-down stochastic multigrid relaxation algorithm. Thanks to the stochastic multigrid strategy, the registration is not affected by local minima in the objective function and a manual initialization near the optimal solution is not necessary. The proposed robust similarity metrics compare favorably to the most popular standard similarity metrics on patient image pairs showing gross dissimilarities. Two case studies are considered: the registration of MR/MR and MR/SPECT image volumes of patients suffering from multiple sclerosis and epilepsy.
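
    The robust weighting idea can be sketched as follows (a simplified illustration with a hypothetical scale parameter sigma): residuals are passed through the Geman-McClure ρ-function, so that grossly dissimilar regions saturate instead of dominating the similarity measure.

```python
import numpy as np

def geman_mcclure(r, sigma=1.0):
    """Geman-McClure rho-function: grows like r^2 for small residuals but
    saturates for large ones, so grossly dissimilar pixels (e.g. lesions)
    are effectively down-weighted instead of dominating the cost."""
    r2 = (r / sigma) ** 2
    return r2 / (1.0 + r2)

def robust_ssd(fixed, moving, sigma=0.2):
    """Robust intensity similarity: sum of rho(residuals); lower is better."""
    return float(np.sum(geman_mcclure(fixed - moving, sigma)))

rng = np.random.default_rng(6)
fixed = rng.random((64, 64))
aligned = fixed + 0.02 * rng.standard_normal((64, 64))
aligned[10:20, 10:20] += 0.8          # a gross local dissimilarity ("lesion")
misaligned = np.roll(fixed, 3, axis=1)
print(robust_ssd(fixed, aligned), robust_ssd(fixed, misaligned))
```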

  18. 2D magnetic nanoparticle imaging using magnetization response second harmonic

    NASA Astrophysics Data System (ADS)

    Tanaka, Saburo; Murata, Hayaki; Oishi, Tomoya; Suzuki, Toshifumi; Zhang, Yi

    2015-06-01

    A detection method and an imaging technique for magnetic nanoparticles (MNPs) have been investigated. In MNP detection and in magnetic particle imaging (MPI), the most commonly employed method is the detection of the odd harmonics of the magnetization response. We examined the advantage of using the second harmonic response when applying an AC magnetic modulation field and a DC bias field. If the magnetization response is detected by a Cu-wound-coil detection system, the output voltage from the coil is proportional to the change in the flux, dϕ/dt. Thus, the dependence of the derivative of the magnetization, M, on an AC magnetic modulation field and a DC bias field were calculated and investigated. The calculations were in good agreement with the experimental results. We demonstrated that the use of the second harmonic response for the detection of MNPs has an advantage compared with the usage of the third harmonic response, when the Cu-wound-coil detection system is employed and the amplitude of the ratio of the AC modulation field and a knee field Hac/Hk is less than 2. We also constructed a 2D MPI scanner using a pair of permanent ring magnets with a bore of ϕ80 mm separated by 90 mm. The magnets generated a gradient of Gz=3.17 T/m transverse to the imaging bore and Gx=1.33 T/m along the longitudinal axis. An original concentrated 10 μl Resovist solution in a ϕ2×3 mm2 vessel was used as a sample, and it was imaged by the scanner. As a result, a 2D contour map image could be successfully generated using the method with a lock-in amplifier.

  19. Fast Tensor Image Morphing for Elastic Registration

    PubMed Central

    Yap, Pew-Thian; Wu, Guorong; Zhu, Hongtu; Lin, Weili; Shen, Dinggang

    2009-01-01

    We propose a novel algorithm, called Fast Tensor Image Morphing for Elastic Registration or F-TIMER. F-TIMER leverages multiscale tensor regional distributions and local boundaries for hierarchically driving deformable matching of tensor image volumes. Registration is achieved by aligning a set of automatically determined structural landmarks, via solving a soft correspondence problem. Based on the estimated correspondences, thin-plate splines are employed to generate a smooth, topology preserving, and dense transformation, and to avoid arbitrary mapping of non-landmark voxels. To mitigate the problem of local minima, which is common in the estimation of high dimensional transformations, we employ a hierarchical strategy where a small subset of voxels with more distinctive attribute vectors are first deployed as landmarks to estimate a relatively robust low-degrees-of-freedom transformation. As the registration progresses, an increasing number of voxels are permitted to participate in refining the correspondence matching. A scheme as such allows less conservative progression of the correspondence matching towards the optimal solution, and hence results in a faster matching speed. Results indicate that better accuracy can be achieved by F-TIMER, compared with other deformable registration algorithms [1, 2], with significantly reduced computation time cost of 4–14 folds. PMID:20426052

  20. Video Image Stabilization and Registration (VISAR) Software

    NASA Technical Reports Server (NTRS)

    1999-01-01

    Two scientists at NASA Marshall Space Flight Center, atmospheric scientist Paul Meyer (left) and solar physicist Dr. David Hathaway, have developed promising new software, called Video Image Stabilization and Registration (VISAR), that may help law enforcement agencies to catch criminals by improving the quality of video recorded at crime scenes. VISAR stabilizes camera motion in the horizontal and vertical as well as rotation and zoom effects; produces clearer images of moving objects; smoothes jagged edges; enhances still images; and reduces video noise or snow. VISAR could also have applications in medical and meteorological imaging. It could steady images of ultrasounds, which are infamous for their grainy, blurred quality. It would be especially useful for tornadoes, tracking whirling objects and helping to determine the tornado's wind speed. This image shows two scientists reviewing an enhanced video image of a license plate taken from a moving automobile.

  1. Digital image registration method using boundary maps

    NASA Technical Reports Server (NTRS)

    Andrus, J. F.; Campbell, C. W.; Jayroe, R. R.

    1975-01-01

    A new method of automatic image registration (matching) is presented. It requires that the original single or multichannel images first be converted to binary boundary maps having elements equal to zero or unity. The method corrects for both translational and rotational errors. One feature of the technique is the rapid calculation of a pseudo correlation matrix NCOR using only integer additions. It is argued that the use of boundary maps is advisable when the data from the two images are acquired under different conditions; i.e., weather conditions, lighting conditions, etc.

  2. Fundus image registration for vestibularis research

    NASA Astrophysics Data System (ADS)

    Ithapu, Vamsi K.; Fritsche, Armin; Oppelt, Ariane; Westhofen, Martin; Deserno, Thomas M.

    2010-03-01

    In research on vestibular nerve disorders, fundus images of both left and right eyes are acquired systematically to precisely assess the rotation of the eyeball that is induced by rotation of the entire head. The measurement is still carried out manually. Although various methods have been proposed for medical image registration, robust detection of rotation, especially in images of varied quality in terms of illumination, aberrations, blur and noise, is still challenging. This paper evaluates registration algorithms operating on different levels of semantics: (i) data-based, using the Fourier transform and log-polar maps; (ii) point-based, using the scale-invariant feature transform (SIFT); (iii) edge-based, using Canny edge maps; (iv) object-based, using matched filters for vessel detection; (v) scene-based, detecting the papilla and macula automatically; and (vi) manually, by two independent medical experts. For evaluation, a database of 22 patients is used, where each of the left and right eye images is captured in an upright head position and in a lateral tilt of ±20°. For 66 pairs of images (132 in total), the results are compared with ground truth, and the performance measures are tabulated. A best correctness of 89.3% was obtained using the pixel-based method and allowing 2.5° deviation from the manual measures. However, the evaluation shows that for applications in computer-aided diagnosis involving a large set of images with varied quality, as in vestibularis research, registration methods based on a single level of semantics are not sufficiently robust. A multi-level semantics approach will improve the results, since failures occur on different images.
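
    The Fourier-based rotation estimate of the data-based method can be sketched roughly as follows (a generic illustration, not the evaluated implementation): because the magnitude of the Fourier spectrum is translation invariant and rotates with the image, circularly correlating the radially summed spectra of the two images over angle recovers the relative rotation.

```python
import numpy as np
from scipy.ndimage import map_coordinates, rotate

def angular_profile(img, n_angles=360, n_radii=64):
    """Radially summed Fourier magnitude spectrum as a function of angle."""
    spec = np.abs(np.fft.fftshift(np.fft.fft2(img)))
    cy, cx = (np.array(spec.shape) - 1) / 2.0
    angles = np.linspace(0, np.pi, n_angles, endpoint=False)   # spectrum is point symmetric
    radii = np.linspace(1, min(cy, cx) - 1, n_radii)
    ys = cy + radii[None, :] * np.sin(angles[:, None])
    xs = cx + radii[None, :] * np.cos(angles[:, None])
    return map_coordinates(spec, [ys, xs], order=1).sum(axis=1)

def rotation_estimate(img_a, img_b, n_angles=360):
    """Relative rotation in degrees (coarse, 180/n_angles degree steps;
    the sign depends on the angle convention of the imaging geometry)."""
    pa, pb = angular_profile(img_a, n_angles), angular_profile(img_b, n_angles)
    corr = np.fft.ifft(np.fft.fft(pa) * np.conj(np.fft.fft(pb))).real
    k = int(np.argmax(corr))
    if k > n_angles // 2:
        k -= n_angles
    return k * 180.0 / n_angles

# Toy check: a bright bar rotated by 17 degrees about the image centre
img = np.zeros((128, 128))
img[50:78, 20:108] = 1.0
img_rot = rotate(img, angle=17, reshape=False, order=1)
print(rotation_estimate(img, img_rot))    # roughly +/- 17
```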

  3. Automated landmark-guided deformable image registration.

    PubMed

    Kearney, Vasant; Chen, Susie; Gu, Xuejun; Chiu, Tsuicheng; Liu, Honghuan; Jiang, Lan; Wang, Jing; Yordy, John; Nedzi, Lucien; Mao, Weihua

    2015-01-01

    The purpose of this work is to develop an automated landmark-guided deformable image registration (LDIR) algorithm between the planning CT and daily cone-beam CT (CBCT) with low image quality. This method uses an automated landmark generation algorithm in conjunction with a local small volume gradient matching search engine to map corresponding landmarks between the CBCT and the planning CT. The landmarks act as stabilizing control points in the following Demons deformable image registration. LDIR is implemented on graphics processing units (GPUs) for parallel computation to achieve ultra fast calculation. The accuracy of the LDIR algorithm has been evaluated on a synthetic case in the presence of different noise levels and data of six head and neck cancer patients. The results indicate that LDIR performed better than rigid registration, Demons, and intensity corrected Demons for all similarity metrics used. In conclusion, LDIR achieves high accuracy in the presence of multimodality intensity mismatch and CBCT noise contamination, while simultaneously preserving high computational efficiency. PMID:25479095

  4. Automated landmark-guided deformable image registration

    NASA Astrophysics Data System (ADS)

    Kearney, Vasant; Chen, Susie; Gu, Xuejun; Chiu, Tsuicheng; Liu, Honghuan; Jiang, Lan; Wang, Jing; Yordy, John; Nedzi, Lucien; Mao, Weihua

    2015-01-01

    The purpose of this work is to develop an automated landmark-guided deformable image registration (LDIR) algorithm between the planning CT and daily cone-beam CT (CBCT) with low image quality. This method uses an automated landmark generation algorithm in conjunction with a local small volume gradient matching search engine to map corresponding landmarks between the CBCT and the planning CT. The landmarks act as stabilizing control points in the following Demons deformable image registration. LDIR is implemented on graphics processing units (GPUs) for parallel computation to achieve ultra fast calculation. The accuracy of the LDIR algorithm has been evaluated on a synthetic case in the presence of different noise levels and data of six head and neck cancer patients. The results indicate that LDIR performed better than rigid registration, Demons, and intensity corrected Demons for all similarity metrics used. In conclusion, LDIR achieves high accuracy in the presence of multimodality intensity mismatch and CBCT noise contamination, while simultaneously preserving high computational efficiency.

  5. Tracking of deformable target in 2D ultrasound images

    NASA Astrophysics Data System (ADS)

    Royer, Lucas; Marchal, Maud; Le Bras, Anthony; Dardenne, Guillaume; Krupa, Alexandre

    2015-03-01

    In this paper, we propose a novel approach for automatically tracking deformable target within 2D ultrasound images. Our approach uses only dense information combined with a physically-based model and has therefore the advantage of not using any fiducial marker nor a priori knowledge on the anatomical environment. The physical model is represented by a mass-spring damper system driven by different types of forces where the external forces are obtained by maximizing image similarity metric between a reference target and a deformed target across the time. This deformation is represented by a parametric warping model where the optimal parameters are estimated from the intensity variation. This warping function is well-suited to represent localized deformations in the ultrasound images because it directly links the forces applied on each mass with the motion of all the pixels in its vicinity. The internal forces constrain the deformation to physically plausible motions, and reduce the sensitivity to the speckle noise. The approach was validated on simulated and real data, both for rigid and free-form motions of soft tissues. The results are very promising since the deformable target could be tracked with a good accuracy for both types of motion. Our approach opens novel possibilities for computer-assisted interventions where deformable organs are involved and could be used as a new tool for interactive tracking of soft tissues in ultrasound images.

  6. Verifying radiotherapy treatment setup by interactive image registration.

    PubMed Central

    Boxwala, A. A.; Chaney, E. L.; Friedman, C. P.

    1996-01-01

    Digital image analysis techniques can be used to assist the physician in diagnostic or therapeutic decision making. In radiation oncology, portal image registration can improve the accuracy of detection of errors during radiation treatment. Following a discussion of the general paradigm of interactive image registration, we describe PortFolio, a workstation for portal image analysis. PMID:8947672

  7. A scanning-mode 2D shear wave imaging (s2D-SWI) system for ultrasound elastography.

    PubMed

    Qiu, Weibao; Wang, Congzhi; Li, Yongchuan; Zhou, Juan; Yang, Ge; Xiao, Yang; Feng, Ge; Jin, Qiaofeng; Mu, Peitian; Qian, Ming; Zheng, Hairong

    2015-09-01

    Ultrasound elastography is widely used for the non-invasive measurement of tissue elasticity properties. Shear wave imaging (SWI) is a quantitative method for assessing tissue stiffness. SWI has been demonstrated to be less operator dependent than quasi-static elastography, and has the ability to acquire quantitative elasticity information in contrast with acoustic radiation force impulse (ARFI) imaging. However, traditional SWI implementations cannot acquire two dimensional (2D) quantitative images of the tissue elasticity distribution. This study proposes and evaluates a scanning-mode 2D SWI (s2D-SWI) system. The hardware and image processing algorithms are presented in detail. Programmable devices are used to support flexible control of the system and the image processing algorithms. An analytic signal based cross-correlation method and a Radon transformation based shear wave speed determination method are proposed, which can be implemented using parallel computation. Imaging of tissue mimicking phantoms, and in vitro, and in vivo imaging test are conducted to demonstrate the performance of the proposed system. The s2D-SWI system represents a new choice for the quantitative mapping of tissue elasticity, and has great potential for implementation in commercial ultrasound scanners. PMID:26025508
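
    The speed-recovery step can be sketched with a slant-stack search, the discrete counterpart of the Radon-transform idea used here (a simplified stand-in for the system's parallel implementation): the candidate speed whose time alignment makes the space-time displacement map stack most coherently is the estimate.

```python
import numpy as np

def shear_wave_speed(spacetime, dt, dx, speeds):
    """Slant-stack estimate of shear wave speed from a space-time displacement
    map of shape (n_t, n_x): for each candidate speed c the columns are
    time-aligned along x = c*t and the speed with the most coherent stack wins."""
    n_t, n_x = spacetime.shape
    x = np.arange(n_x) * dx
    best_c, best_energy = None, -np.inf
    for c in speeds:
        delays = np.rint(x / c / dt).astype(int)      # predicted arrival delay per position
        stacked = np.zeros(n_t)
        for j in range(n_x):
            stacked += np.roll(spacetime[:, j], -delays[j])
        energy = np.max(stacked ** 2)
        if energy > best_energy:
            best_c, best_energy = c, energy
    return best_c

# Synthetic wavefront travelling at 2.5 m/s across a 20 mm lateral span
dt, dx = 0.1e-3, 0.5e-3
n_t, n_x = 400, 40
t = np.arange(n_t) * dt
spacetime = np.zeros((n_t, n_x))
for j in range(n_x):
    arrival = j * dx / 2.5
    spacetime[:, j] = np.exp(-((t - arrival - 2e-3) ** 2) / (2 * (0.3e-3) ** 2))
print(shear_wave_speed(spacetime, dt, dx, speeds=np.arange(0.5, 6.0, 0.05)))  # ~2.5
```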

  8. Statistically deformable 2D/3D registration for accurate determination of post-operative cup orientation from single standard X-ray radiograph.

    PubMed

    Zheng, Guoyan

    2009-01-01

    The widely used procedure of evaluating cup orientation following total hip arthroplasty using a single standard anteroposterior (AP) radiograph is known to be inaccurate, largely due to the wide variability in individual pelvic orientation relative to the X-ray plate. 2D/3D rigid image registration methods have been introduced for an accurate determination of the post-operative cup alignment with respect to an anatomical reference extracted from the CT data. Although encouraging results have been reported, their extensive usage in clinical routine is still limited. This may be explained by their requirement of a CAD model of the prosthesis, which is often difficult to obtain from the manufacturer due to proprietary issues, and by their requirement of a pre-operative CT scan, which is not available for most retrospective studies. To address these issues, we developed and validated a statistically deformable 2D/3D registration approach for accurate determination of post-operative cup orientation. No CAD model or pre-operative CT data is required. Quantitative and qualitative results evaluated on cadaveric and clinical datasets are given, which indicate the validity of the approach. PMID:20426064

  9. Automated 3D-2D registration of X-ray microcomputed tomography with histological sections for dental implants in bone using chamfer matching and simulated annealing.

    PubMed

    Becker, Kathrin; Stauber, Martin; Schwarz, Frank; Beißbarth, Tim

    2015-09-01

    We propose a novel 3D-2D registration approach for micro-computed tomography (μCT) and histology (HI), constructed for dental implant biopsies, that finds the position and normal vector of the oblique slice from μCT that corresponds to HI. During image pre-processing, the implants and the bone tissue are segmented using a combination of thresholding, morphological filters and component labeling. After this, chamfer matching is employed to register the implant edges, and fine registration of the bone tissues is achieved using simulated annealing. The method was tested on n=10 biopsies, obtained at 20 weeks after non-submerged healing in the canine mandible. The specimens were scanned with μCT 100 and processed for hard tissue sectioning. After registration, we assessed the agreement of bone-to-implant contact (BIC) using automated and manual measurements. Statistical analysis was conducted to test the agreement of the BIC measurements in the registered samples. Registration was successful for all specimens and agreement of the respective binary images was high (median: 0.90, 1.-3. Qu.: 0.89-0.91). Direct comparison of BIC showed that automated (median 0.82, 1.-3. Qu.: 0.75-0.85) and manual (median 0.61, 1.-3. Qu.: 0.52-0.67) measurements from μCT were significantly and positively correlated with the HI measurements (median 0.65, 1.-3. Qu.: 0.59-0.72) (manual: R² = 0.87, automated: R² = 0.75, p < 0.001). These promising results suggest that μCT may become a valid alternative for assessing osseointegration in three dimensions. PMID:26026659
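
    As an illustration of the chamfer-matching idea used above for the implant edges, a minimal sketch assuming NumPy/SciPy and boolean edge images as inputs (not the authors' code): the chamfer score is the mean distance-transform value of one edge map sampled at the edge locations of the other, and is minimized over candidate transforms.

      import numpy as np
      from scipy import ndimage

      def chamfer_score(edges_fixed, edges_moving):
          """Mean distance from moving edge pixels to the nearest fixed edge pixel."""
          # Distance transform of the complement: zero on fixed edges, growing away from them.
          dist = ndimage.distance_transform_edt(~edges_fixed)
          ys, xs = np.nonzero(edges_moving)
          return dist[ys, xs].mean()

      def best_translation(edges_fixed, edges_moving, search=10):
          """Brute-force search over integer translations (a stand-in for the
          simulated-annealing fine registration described in the abstract)."""
          best = (np.inf, (0, 0))
          for dy in range(-search, search + 1):
              for dx in range(-search, search + 1):
                  shifted = np.roll(np.roll(edges_moving, dy, axis=0), dx, axis=1)
                  best = min(best, (chamfer_score(edges_fixed, shifted), (dy, dx)))
          return best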

  10. Video Image Stabilization and Registration (VISAR) Software

    NASA Technical Reports Server (NTRS)

    1999-01-01

    Two scientists at NASA's Marshall Space Flight Center, atmospheric scientist Paul Meyer and solar physicist Dr. David Hathaway, developed promising new software, called Video Image Stabilization and Registration (VISAR), which is illustrated in this Quick Time movie. VISAR is a computer algorithm that stabilizes camera motion in the horizontal and vertical as well as rotation and zoom effects, producing clearer images of moving objects, smoothes jagged edges, enhances still images, and reduces video noise or snow. It could steady images of ultrasounds, which are infamous for their grainy, blurred quality. VISAR could also have applications in law enforcement, medical, and meteorological imaging. The software can be used for defense applications by improving reconnaissance video imagery made by military vehicles, aircraft, and ships traveling in harsh, rugged environments.

  12. High Speed 2D Hadamard Transform Spectral Imager

    SciTech Connect

    WEHLBURG, JOSEPH C.; WEHLBURG, CHRISTINE M.; SMITH, JODY L.; SPAHN, OLGA B.; SMITH, MARK W.; BONEY, CRAIG M.

    2003-02-01

    Hadamard Transform Spectrometer (HTS) approaches share the multiplexing advantages found in Fourier transform spectrometers. Interest in Hadamard systems has been limited due to data storage/computational limitations and the inability to perform accurate high order masking in a reasonable amount of time. Advances in digital micro-mirror array (DMA) technology have opened the door to implementing an HTS for a variety of applications including fluorescent microscope imaging and Raman imaging. A Hadamard transform spectral imager (HTSI) for remote sensing offers a variety of unique capabilities in one package such as variable spectral and temporal resolution, no moving parts (other than the micro-mirrors) and vibration tolerance. Two approaches for 2D HTS systems have been investigated in this LDRD. The first approach involves dispersing the incident light, encoding the dispersed light, then recombining the light. This method is referred to as spectral encoding. The other method encodes the incident light and then disperses the encoded light. The second technique is called spatial encoding. After creating optical designs for both methods, the spatial encoding method was selected as the method to be implemented because its optical design was less costly to implement.
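
    A toy numerical sketch of the Hadamard multiplexing idea described above (assuming NumPy/SciPy, not the LDRD implementation): each measurement sums the scene through one row of a Hadamard matrix, and the scene is recovered by applying the transpose, since H Hᵀ = n I. In practice DMA masks are 0/1 patterns rather than ±1, so this is only illustrative.

      import numpy as np
      from scipy.linalg import hadamard

      n = 64                              # number of mask patterns / spectral bins
      H = hadamard(n)                     # +1/-1 Hadamard matrix, H @ H.T == n * I
      x = np.random.rand(n)               # unknown spectrum (hypothetical scene)

      y = H @ x                           # encoded measurements, one per mask pattern
      x_rec = (H.T @ y) / n               # decoding: the inverse transform is H.T / n

      assert np.allclose(x_rec, x)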

  13. Direct Image-To Registration Using Mobile Sensor Data

    NASA Astrophysics Data System (ADS)

    Kehl, C.; Buckley, S. J.; Gawthorpe, R. L.; Viola, I.; Howell, J. A.

    2016-06-01

    Adding supplementary texture and 2D image-based annotations to 3D surface models is a useful next step for domain specialists to make use of photorealistic products of laser scanning and photogrammetry. This requires a registration between the new camera imagery and the model geometry to be solved, which can be a time-consuming task without appropriate automation. The increasing availability of photorealistic models, coupled with the proliferation of mobile devices, gives users the possibility to complement their models in real time. Modern mobile devices deliver digital photographs of increasing quality, as well as on-board sensor data, which can be used as input for practical and automatic camera registration procedures. Their familiar user interface also improves manual registration procedures. This paper introduces a fully automatic pose estimation method using the on-board sensor data for initial exterior orientation, and feature matching between an acquired photograph and a synthesised rendering of the orientated 3D scene as input for fine alignment. The paper also introduces a user-friendly manual camera registration and pose estimation interface for mobile devices, based on existing surface geometry and numerical optimisation methods. The article further assesses the automatic algorithm's accuracy compared to traditional methods, and the impact of computational and environmental parameters. Experiments using urban and geological case studies show a significant sensitivity of the automatic procedure to the quality of the initial mobile sensor values. Changing natural lighting conditions remain a challenge for automatic pose estimation techniques, although progress is presented here. Finally, the automatically-registered mobile images are used as the basis for adding user annotations to the input textured model.

  14. Respiratory motion compensation for simultaneous PET/MR based on a 3D-2D registration of strongly undersampled radial MR data: a simulation study

    NASA Astrophysics Data System (ADS)

    Rank, Christopher M.; Heußer, Thorsten; Flach, Barbara; Brehm, Marcus; Kachelrieß, Marc

    2015-03-01

    We propose a new method for PET/MR respiratory motion compensation, which is based on a 3D-2D registration of strongly undersampled MR data and a) runs in parallel with the PET acquisition, b) can be interlaced with clinical MR sequences, and c) requires less than one minute of the total MR acquisition time per bed position. In our simulation study, we applied a 3D encoded radial stack-of-stars sampling scheme with 160 radial spokes per slice and an acquisition time of 38 s. Gated 4D MR images were reconstructed using a 4D iterative reconstruction algorithm. Based on these images, motion vector fields were estimated using our newly-developed 3D-2D registration framework. A 4D PET volume of a patient with eight hot lesions in the lungs and upper abdomen was simulated and MoCo 4D PET images were reconstructed based on the motion vector fields derived from MR. For evaluation, average SUVmean values of the artificial lesions were determined for a 3D, a gated 4D, a MoCo 4D and a reference (with ten-fold measurement time) gated 4D reconstruction. Compared to the reference, 3D reconstructions yielded an underestimation of SUVmean values due to motion blurring. In contrast, gated 4D reconstructions showed the highest variation of SUVmean due to low statistics. MoCo 4D reconstructions were only slightly affected by these two sources of uncertainty resulting in a significant visual and quantitative improvement in terms of SUVmean values. Whereas temporal resolution was comparable to the gated 4D images, signal-to-noise ratio and contrast-to-noise ratio were close to the 3D reconstructions.

  15. Results of automatic image registration are dependent on initial manual registration.

    PubMed

    Johnson, Joshua E; Fischer, Kenneth J

    2015-01-01

    Measurement of static alignment of articulating joints is of clinical benefit and can be determined using image-based registration. We propose a method that could potentially improve the outcome of image-based registration by using initial manual registration. Magnetic resonance images of two wrist specimens were acquired in the relaxed position and during simulated grasp. Transformations were determined from voxel-based image registration between the two volumes. The volumes were manually aligned to match as closely as possible before auto-registration, from which standard transformations were obtained. Then, translation/rotation perturbations were applied to the manual registration to obtain altered initial positions, from which altered auto-registration transformations were obtained. Models of the radiolunate joint were also constructed from the images to simulate joint contact mechanics. We compared the sensitivity of transformations (translations and rotations) and contact mechanics to altering the initial registration condition from the defined standard. We observed that with increasing perturbation, transformation errors appeared to increase and values for contact force and contact area appeared to decrease. Based on these preliminary findings, it appears that the final registration outcome is sensitive to the initial registration. PMID:25408167

  16. Mono- and multimodal registration of optical breast images

    NASA Astrophysics Data System (ADS)

    Pearlman, Paul C.; Adams, Arthur; Elias, Sjoerd G.; Mali, Willem P. Th. M.; Viergever, Max A.; Pluim, Josien P. W.

    2012-08-01

    Optical breast imaging offers the possibility of noninvasive, low cost, and high sensitivity imaging of breast cancers. Poor spatial resolution and a lack of anatomical landmarks in optical images of the breast make interpretation difficult and motivate registration and fusion of these data with subsequent optical images and other breast imaging modalities. Methods used for registration and fusion of optical breast images are reviewed. Imaging concerns relevant to the registration problem are first highlighted, followed by a focus on both monomodal and multimodal registration of optical breast imaging. Where relevant, methods pertaining to other imaging modalities or imaged anatomies are presented. The multimodal registration discussion concerns digital x-ray mammography, ultrasound, magnetic resonance imaging, and positron emission tomography.

  17. Group-wise feature-based registration of CT and ultrasound images of spine

    NASA Astrophysics Data System (ADS)

    Rasoulian, Abtin; Mousavi, Parvin; Hedjazi Moghari, Mehdi; Foroughi, Pezhman; Abolmaesumi, Purang

    2010-02-01

    Registration of pre-operative CT and freehand intra-operative ultrasound of the lumbar spine could aid surgeons in spinal needle injection, which is a common procedure for pain management. Patients are always in a supine position during the CT scan, and in the prone or sitting position during the intervention. This leads to a difference in the spinal curvature between the two imaging modalities, which means a single rigid registration cannot be used for all of the lumbar vertebrae. In this work, a method for group-wise registration of pre-operative CT and intra-operative freehand 2-D ultrasound images of the lumbar spine is presented. The approach utilizes a point-based registration technique based on the unscented Kalman filter, taking as input segmented vertebrae surfaces in both CT and ultrasound data. Ultrasound images are automatically segmented using a dynamic programming approach, while the CT images are semi-automatically segmented using thresholding. Since the curvature of the spine is different between the pre-operative and the intra-operative data, the registration approach is designed to simultaneously align individual groups of points segmented from each vertebra in the two imaging modalities. A biomechanical model is used to constrain the vertebrae transformation parameters during the registration and to ensure convergence. The mean target registration error achieved for individual vertebrae on five spine phantoms generated from CT data of patients is 2.47 mm, with a standard deviation of 1.14 mm.

  18. Estimating mass of crushed limestone particles from 2D images

    NASA Astrophysics Data System (ADS)

    Banta, Larry E.; Cheng, Ken; Zaniewski, John P.

    2002-02-01

    In the construction of asphalt pavements, the stability of the asphalt is determined in large part by the gradation, or size distribution of the mineral aggregates that make up the matrix. Gradation is specified on the basis of sieve sizes and percent passing, where the latter is a cumulative measure of the mass of the aggregate passing the sieve as fraction of the total mass in the batch. In this paper, an approach for predicting particle mass based on 2D electronic images is explored. Images of crushed limestone aggregates were acquired using backlighting to create silhouettes. A morphological erosion process was used to separate touching and overlapping particles. Useful features of the particle silhouettes, such as area, centroid and shape descriptors were collected. Several dimensionless parameters were defined and were used as regressor variables in a multiple linear regression model to predict particle mass. Regressor coefficients were found by fitting to a sample of 501 particles ranging in size from 4.75 mm < particle sieve size < 25 mm. When tested against a different aggregate sample, the model predicted the mass of the batch to within +/- 2%.
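
    A minimal sketch of the regression step described above, assuming NumPy and that a table of per-particle descriptors has already been extracted from the silhouettes; the feature set and calibration data are hypothetical, not the paper's exact regressors:

      import numpy as np

      def fit_mass_model(features, mass):
          """Least-squares fit of a multiple linear regression: mass ~ X @ beta.

          features : (N, P) per-particle descriptors (e.g. silhouette area and
                     dimensionless shape ratios) -- hypothetical regressors.
          mass     : (N,) measured particle masses from a calibration sample.
          """
          X = np.column_stack([np.ones(len(mass)), features])   # add an intercept term
          beta, *_ = np.linalg.lstsq(X, mass, rcond=None)
          return beta

      def predict_batch_mass(features, beta):
          """Predicted total mass of a batch is the sum of per-particle predictions."""
          X = np.column_stack([np.ones(len(features)), features])
          return float(np.sum(X @ beta))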

  19. 2D and 3D visualization methods of endoscopic panoramic bladder images

    NASA Astrophysics Data System (ADS)

    Behrens, Alexander; Heisterklaus, Iris; Müller, Yannick; Stehle, Thomas; Gross, Sebastian; Aach, Til

    2011-03-01

    While several mosaicking algorithms have been developed to compose endoscopic images of the internal urinary bladder wall into panoramic images, the quantitative evaluation of these output images in terms of geometrical distortions has often not been discussed. However, the visualization of the distortion level is highly desired for an objective image-based medical diagnosis. Thus, we present in this paper a method to create quality maps from the characteristics of the transformation parameters that were applied to the endoscopic images during the registration process of the mosaicking algorithm. For a global first impression, the quality maps are laid over the panoramic image and highlight image regions in pseudo-colors according to their local distortions. This illustration then helps surgeons to easily identify geometrically distorted structures in the panoramic image, allowing more objective medical interpretation of tumor tissue shape and size. Aside from introducing quality maps in 2-D, we also discuss a visualization method to map panoramic images onto a 3-D spherical bladder model. Reference points are manually selected by the surgeon in the panoramic image and the 3-D model. Then the panoramic image is mapped by the Hammer-Aitoff equal-area projection onto the 3-D surface using texture mapping. Finally, the textured bladder model can be freely moved in a virtual environment for inspection. Using a two-hemisphere bladder representation, references between panoramic image regions and their corresponding space coordinates within the bladder model are reconstructed. This additional spatial 3-D information thus assists the surgeon in navigation and documentation, as well as in surgical planning.
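
    For reference, the Hammer-Aitoff equal-area projection mentioned above maps spherical coordinates to planar texture coordinates. A small sketch of the standard forward formula (assuming NumPy; not taken from the paper's code):

      import numpy as np

      def hammer_aitoff(lon, lat):
          """Hammer-Aitoff equal-area projection.

          lon, lat : longitude in [-pi, pi] and latitude in [-pi/2, pi/2], radians.
          Returns planar coordinates (x, y) suitable for texture lookup after
          rescaling to the panoramic image extent.
          """
          denom = np.sqrt(1.0 + np.cos(lat) * np.cos(lon / 2.0))
          x = 2.0 * np.sqrt(2.0) * np.cos(lat) * np.sin(lon / 2.0) / denom
          y = np.sqrt(2.0) * np.sin(lat) / denom
          return x, y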

  20. Registration of multi-view apical 3D echocardiography images

    NASA Astrophysics Data System (ADS)

    Mulder, H. W.; van Stralen, M.; van der Zwaan, H. B.; Leung, K. Y. E.; Bosch, J. G.; Pluim, J. P. W.

    2011-03-01

    Real-time three-dimensional echocardiography (RT3DE) is a non-invasive method to visualize the heart. However, it suffers from non-uniform image quality and a limited field of view. Image quality can be improved by fusion of multiple echocardiography images. Successful registration of the images is essential for successful fusion. Therefore, this study examines the performance of different methods for intrasubject registration of multi-view apical RT3DE images. A total of 14 data sets were annotated by two observers who indicated the position of the apex and four points on the mitral valve ring. These annotations were used to evaluate registration. Multi-view end-diastolic (ED) as well as end-systolic (ES) images were rigidly registered in a multi-resolution strategy. The performance of single-frame and multi-frame registration was examined. Multi-frame registration optimizes the metric for several time frames simultaneously. Furthermore, the suitability of mutual information (MI) as a similarity measure was compared to normalized cross-correlation (NCC). For initialization of the registration, a transformation that describes the probe movement was obtained by manually registering five representative data sets. It was found that multi-frame registration can improve registration results with respect to single-frame registration. Additionally, NCC outperformed MI as a similarity measure. If NCC was optimized in a multi-frame registration strategy including ED and ES time frames, the performance of the automatic method was comparable to that of manual registration. In conclusion, automatic registration of RT3DE images performs as well as manual registration. As registration precedes image fusion, this method can contribute to improved quality of echocardiography images.

  1. Image registration of naval IR images

    NASA Astrophysics Data System (ADS)

    Rodland, Arne J.

    1996-06-01

    In a real-world application, an image from a stabilized sensor on a moving platform will not be 100 percent stabilized. There will always be a small unknown error in the stabilization due to factors such as dynamic deformations in the structure between the sensor and the reference Inertial Navigation Unit, servo inaccuracies, etc. For a high resolution imaging sensor this stabilization error causes the image to move several pixels in an unknown direction between frames. To be able to detect and track small moving objects from such a sensor, this unknown movement of the sensor image must be estimated. An algorithm that searches for land contours in the image has been evaluated. The algorithm searches for high contrast points distributed over the whole image. As long as moving objects in the scene only cover a small area of the scene, most of the points are located on solid ground. By matching the list of points from frame to frame, the movement of the image due to stabilization errors can be estimated and compensated. The point list is searched for points with movement diverging from the estimated stabilization error. These points are then assumed to be located on moving objects. Points assumed to be located on moving objects are gradually exchanged with new points located in the same area. Most of the processing is performed on the list of points and not on the complete image. The algorithm is therefore very fast and well suited for real-time implementation. The algorithm has been tested on images from an experimental IR scanner. Stabilization errors were added artificially to the images such that the output from the algorithm could be compared with the artificially added stabilization errors.

  2. Registration of heat capacity mapping mission day and night images

    NASA Technical Reports Server (NTRS)

    Watson, K.; Hummer-Miller, S.; Sawatzky, D. L.

    1982-01-01

    Registration of thermal images is complicated by distinctive differences in the appearance of day and night features needed as control in the registration process. These changes are unlike those that occur between Landsat scenes and pose unique constraints. Experimentation with several potentially promising techniques has led to selection of a fairly simple scheme for registration of data from the experimental thermal satellite HCMM using an affine transformation. Two registration examples are provided.

  3. Medical image registration using machine learning-based interest point detector

    NASA Astrophysics Data System (ADS)

    Sergeev, Sergey; Zhao, Yang; Linguraru, Marius George; Okada, Kazunori

    2012-02-01

    This paper presents a feature-based image registration framework which exploits a novel machine learning (ML)-based interest point detection (IPD) algorithm for feature selection and correspondence detection. We use a feed-forward neural network (NN) with back-propagation as our base ML detector. Literature on ML-based IPD is scarce, and to the best of our knowledge no previous research has addressed a feature selection strategy for IPD using a cross-validation (CV) detectability measure. Our target application is the registration of clinical abdominal CT scans with abnormal anatomies. We evaluated the correspondence detection performance of the proposed ML-based detector against two well-known IPD algorithms: SIFT and SURF. The proposed method is capable of performing affine rigid registrations of 2D and 3D CT images, demonstrating more than two times better accuracy in correspondence detection than SIFT and SURF. The registration accuracy has been validated manually using identified landmark points. Our experimental results show an improvement in 3D image registration quality of 18.92% compared with the affine transformation image registration method from the standard ITK registration toolkit.

  4. Video Image Stabilization and Registration (VISAR) Software

    NASA Technical Reports Server (NTRS)

    1999-01-01

    Two scientists at NASA's Marshall Space Flight Center, atmospheric scientist Paul Meyer and solar physicist Dr. David Hathaway, developed promising new software, called Video Image Stabilization and Registration (VISAR). VISAR may help law enforcement agencies catch criminals by improving the quality of video recorded at crime scenes. In this photograph, the single frame at left, taken at night, was brightened in order to enhance details and reduce noise or snow. To further overcome the video defects in one frame, law enforcement officials can use VISAR software to add information from multiple frames to reveal a person. Images from less than a second of videotape were added together to create the clarified image at right. VISAR stabilizes camera motion in the horizontal and vertical as well as rotation and zoom effects, producing clearer images of moving objects, smoothes jagged edges, enhances still images, and reduces video noise or snow. VISAR could also have applications in medical and meteorological imaging. It could steady images of ultrasounds, which are infamous for their grainy, blurred quality. The software can be used for defense applications by improving reconnaissance video imagery made by military vehicles, aircraft, and ships traveling in harsh, rugged environments.

  5. [Human cerebral image registration using generalized mutual information].

    PubMed

    Zhang, Jingzhou; Li, Ting; Zhang, Jia

    2008-12-01

    Medical image registration is an active topic of research in medical image processing. Building on the Shannon-entropy-based similarity measure, a new generalized distance measure based on Rényi entropy, applied to rigid image registration, is introduced and is called here generalized mutual information (GMI). It is used in three-dimensional cerebral image registration experiments. The simulation results show that the generalized distance measure and the Shannon entropy measure apply to different areas, and that the registration measure based on generalized distance is a natural extension of Shannon-entropy mutual information. The results show that generalized mutual information uses less time than simple mutual information, and that the new similarity measure manifests a higher degree of consistency between the two registered cerebral images. The registration results also provide useful references for clinical diagnosis. In conclusion, generalized mutual information largely satisfies the demands of clinical application. PMID:19166197
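
    The abstract does not give the exact definition used, so as a hedged illustration only, one common way to build a Rényi-entropy-based ("generalized") mutual information from a joint intensity histogram is sketched below (assuming NumPy; α → 1 recovers the Shannon case):

      import numpy as np

      def renyi_entropy(p, alpha):
          """Rényi entropy H_alpha(p) = log(sum p**alpha) / (1 - alpha) for alpha != 1."""
          p = p[p > 0]
          if np.isclose(alpha, 1.0):
              return float(-np.sum(p * np.log(p)))      # Shannon limit
          return float(np.log(np.sum(p ** alpha)) / (1.0 - alpha))

      def generalized_mutual_information(img_a, img_b, alpha=2.0, bins=64):
          """One possible Rényi-based MI: H_a(A) + H_a(B) - H_a(A, B)."""
          joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
          joint /= joint.sum()
          pa, pb = joint.sum(axis=1), joint.sum(axis=0)
          return (renyi_entropy(pa, alpha) + renyi_entropy(pb, alpha)
                  - renyi_entropy(joint.ravel(), alpha))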

  6. 3D structural measurements of the proximal femur from 2D DXA images using a statistical atlas

    NASA Astrophysics Data System (ADS)

    Ahmad, Omar M.; Ramamurthi, Krishna; Wilson, Kevin E.; Engelke, Klaus; Bouxsein, Mary; Taylor, Russell H.

    2009-02-01

    A method to obtain 3D structural measurements of the proximal femur from 2D DXA images and a statistical atlas is presented. A statistical atlas of a proximal femur was created consisting of both 3D shape and volumetric density information and then deformably registered to 2D fan-beam DXA images. After the registration process, a series of 3D structural measurements were taken on QCT-estimates generated by transforming the registered statistical atlas into a voxel volume. These measurements were compared to the equivalent measurements taken on the actual QCT (ground truth) associated with the DXA images for each of 20 human cadaveric femora. The methodology and results are presented to address the potential clinical feasibility of obtaining 3D structural measurements from limited angle DXA scans and a statistical atlas of the proximal femur in-vivo.

  7. Image registration with auto-mapped control volumes

    SciTech Connect

    Schreibmann, Eduard; Xing Lei

    2006-04-15

    Many image registration algorithms rely on the use of homologous control points on the two input image sets to be registered. In reality, the interactive identification of the control points on both images is tedious, difficult, and often a source of error. We propose a two-step algorithm to automatically identify homologous regions that are used as a priori information during the image registration procedure. First, a number of small control volumes having distinct anatomical features are identified on the model image in a somewhat arbitrary fashion. Instead of attempting to find their correspondences in the reference image through user interaction, in the proposed method, each of the control regions is mapped to the corresponding part of the reference image by using an automated image registration algorithm. A normalized cross-correlation (NCC) function or mutual information was used as the auto-mapping metric and a limited memory Broyden-Fletcher-Goldfarb-Shanno algorithm (L-BFGS) was employed to optimize the function to find the optimal mapping. For rigid registration, the transformation parameters of the system are obtained by averaging those derived from the individual control volumes. In our deformable calculation, the mapped control volumes are treated as the nodes or control points with known positions on the two images. If the number of control volumes is not enough to cover the whole image to be registered, additional nodes are placed on the model image and then located on the reference image in a manner similar to the conventional BSpline deformable calculation. For deformable registration, the correspondence established by the auto-mapped control volumes provides valuable guidance for the registration calculation and greatly reduces the dimensionality of the problem. The two-step registration was applied to three rigid registration cases (two PET-CT registrations and a brain MRI-CT registration) and one deformable registration of

  8. Rethinking image registration on customizable hardware

    NASA Astrophysics Data System (ADS)

    Bowman, David; Tahtali, Murat; Lambert, Andrew

    2010-08-01

    Image registration is one of the most important tasks in image processing and is frequently one of the most computationally intensive. In cases where there is a high likelihood of finding the exact template in the search image, correlation-based methods predominate. Presumably this is because the computational complexity of a correlation operation can be reduced substantially by transforming the task into the frequency domain. Alternative methods such as minimum Sum of Squared Differences (minSSD) are not so tractable and are normally disfavored. This bias is justified when dealing with conventional computer processors, since the operations must be conducted in an essentially sequential manner; however, we demonstrate it is normally unjustified when the processing is undertaken on customizable hardware such as FPGAs, where tasks can be temporally and/or spatially parallelized. This is because the gate-based logic of an FPGA is better suited to the tasks of minSSD, i.e., signed-addition hardware can be very cheaply implemented in FPGA fabric, and square operations are easily implemented via a look-up table. In contrast, correlation-based methods require extensive use of multiplier hardware, which cannot be so cheaply implemented in the device. Even with modern DSP-oriented FPGAs, which contain many "hard" multipliers, we observe at least an order of magnitude increase in the number of minSSD hardware modules we can implement compared to cross-correlation modules. We demonstrate successful use and comparison of these techniques within an FPGA for registration and correction of turbulence-degraded images.

  9. Registration of Optical Data with High-Resolution SAR Data: a New Image Registration Solution

    NASA Astrophysics Data System (ADS)

    Bahr, T.; Jin, X.

    2013-04-01

    Accurate image-to-image registration is critical for many image processing workflows, including georeferencing, change detection, data fusion, image mosaicking, DEM extraction and 3D modeling. Users need a solution to generate tie points accurately and geometrically align the images automatically. To meet these requirements we developed the Hybrid Powered Auto-Registration Engine (HyPARE). HyPARE combines all available spatial reference information with a number of image registration approaches to improve the accuracy, performance, and automation of tie point generation and image registration. We demonstrate this approach by the registration of a Pléiades-1a image with a TerraSAR-X SpotLight image of Hannover, Germany. Registering images from different modalities is a known challenging problem; e.g., manual tie point collection is prone to error. The registration engine generates tie points automatically, using an optimized mutual information-based matching method. It produces more accurate results than traditional correlation-based measures. In this example the resulting tie points are well distributed across the overlapping areas, even though the images have significant local feature differences.

  10. High-performance computing in image registration

    NASA Astrophysics Data System (ADS)

    Zanin, Michele; Remondino, Fabio; Dalla Mura, Mauro

    2012-10-01

    Thanks to the recent technological advances, a large variety of image data is at our disposal with variable geometric, radiometric and temporal resolution. In many applications the processing of such images needs high performance computing techniques in order to deliver timely responses, e.g., for rapid decisions or real-time actions. Thus, parallel or distributed computing methods, Digital Signal Processor (DSP) architectures, Graphical Processing Unit (GPU) programming and Field-Programmable Gate Array (FPGA) devices have become essential tools for the challenging issue of processing large amounts of geo-data. The article focuses on the processing and registration of large datasets of terrestrial and aerial images for 3D reconstruction, diagnostic purposes and monitoring of the environment. For the image alignment procedure, sets of corresponding feature points need to be automatically extracted in order to successively compute the geometric transformation that aligns the data. Feature extraction and matching are among the most computationally demanding operations in the processing chain; thus, a great degree of automation and speed is mandatory. The details of the implemented operations (named LARES) exploiting parallel architectures and GPU are thus presented. The innovative aspects of the implementation are (i) the effectiveness on a large variety of unorganized and complex datasets, (ii) the capability to work with high-resolution images, and (iii) the speed of the computations. Examples and comparisons with standard CPU processing are also reported and discussed.

  11. Enhancing retinal images by nonlinear registration

    NASA Astrophysics Data System (ADS)

    Molodij, G.; Ribak, E. N.; Glanc, M.; Chenegros, G.

    2015-05-01

    Being able to image the human retina in high resolution opens a new era in many important fields, such as pharmacological research for retinal diseases and research on human cognition, the nervous system, metabolism, and blood flow, to name a few. In this paper, we propose to share the knowledge acquired in the fields of optics and imaging in solar astrophysics in order to improve retinal imaging with a view to medical diagnosis. The main purpose is to assist health care practitioners by enhancing the spatial resolution of retinal images and increasing the level of confidence of abnormal feature detection. We apply a nonlinear registration method using local correlation tracking to increase the field of view and follow structure evolution, using correlation techniques borrowed from solar astronomy expertise. Another purpose is to define tracers of movement from the analysis of local correlations, in order to follow the proper motions of the image from one moment to another, such as changes in optical flow, which would be of high interest for medical diagnosis.

  12. Color image registration based on quaternion Fourier transformation

    NASA Astrophysics Data System (ADS)

    Wang, Qiang; Wang, Zhengzhi

    2012-05-01

    The traditional Fourier-Mellin transform is applied to quaternion algebra in order to investigate quaternion Fourier transform properties useful for color image registration in the frequency domain. Combining this with quaternion phase correlation, we propose a method for color image registration based on the quaternion Fourier transform. The registration method, which processes the color image in a holistic manner, conveniently realigns color images differing in translation, rotation, and scaling. Experimental results on different types of color images indicate that the proposed method not only obtains high accuracy for similarity transforms in the image plane but is also computationally efficient.

  13. Biomechanical model as a registration tool for image-guided neurosurgery: evaluation against BSpline registration

    PubMed Central

    Mostayed, Ahmed; Garlapati, Revanth Reddy; Joldes, Grand Roman; Wittek, Adam; Roy, Aditi; Kikinis, Ron; Warfield, Simon K.; Miller, Karol

    2013-01-01

    In this paper we evaluate the accuracy of warping of neuro-images using brain deformation predicted by means of a patient-specific biomechanical model against registration using a BSpline-based free form deformation algorithm. Unlike the Bspline algorithm, biomechanics-based registration does not require an intra-operative MR image which is very expensive and cumbersome to acquire. Only sparse intra-operative data on the brain surface is sufficient to compute deformation for the whole brain. In this contribution the deformation fields obtained from both methods are qualitatively compared and overlaps of Canny edges extracted from the images are examined. We define an edge based Hausdorff distance metric to quantitatively evaluate the accuracy of registration for these two algorithms. The qualitative and quantitative evaluations indicate that our biomechanics-based registration algorithm, despite using much less input data, has at least as high registration accuracy as that of the BSpline algorithm. PMID:23771299

  14. Registration of clinical volumes to beams-eye-view images for real-time tracking

    SciTech Connect

    Bryant, Jonathan H.; Rottmann, Joerg; Lewis, John H.; Mishra, Pankaj; Berbeco, Ross I.; Keall, Paul J.

    2014-12-15

    Purpose: The authors combine the registration of 2D beam’s eye view (BEV) images and 3D planning computed tomography (CT) images, with relative, markerless tumor tracking to provide automatic absolute tracking of physician defined volumes such as the gross tumor volume (GTV). Methods: During treatment of lung SBRT cases, BEV images were continuously acquired with an electronic portal imaging device (EPID) operating in cine mode. For absolute registration of physician-defined volumes, an intensity based 2D/3D registration to the planning CT was performed using the end-of-exhale (EoE) phase of the four dimensional computed tomography (4DCT). The volume was converted from Hounsfield units into electron density by a calibration curve and digitally reconstructed radiographs (DRRs) were generated for each beam geometry. Using normalized cross correlation between the DRR and an EoE BEV image, the best in-plane rigid transformation was found. The transformation was applied to physician-defined contours in the planning CT, mapping them into the EPID image domain. A robust multiregion method of relative markerless lung tumor tracking quantified deviations from the EoE position. Results: The success of 2D/3D registration was demonstrated at the EoE breathing phase. By registering at this phase and then employing a separate technique for relative tracking, the authors are able to successfully track target volumes in the BEV images throughout the entire treatment delivery. Conclusions: Through the combination of EPID/4DCT registration and relative tracking, a necessary step toward the clinical implementation of BEV tracking has been completed. The knowledge of tumor volumes relative to the treatment field is important for future applications like real-time motion management, adaptive radiotherapy, and delivered dose calculations.
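
    A highly simplified sketch of the DRR-generation step mentioned above: Hounsfield units are mapped to relative electron density through a calibration curve and the volume is integrated along the beam direction. The calibration points are hypothetical, and real DRR generation uses the actual beam geometry with ray casting; this parallel-projection version (assuming NumPy) is only illustrative.

      import numpy as np

      # Hypothetical HU -> relative electron density calibration points.
      HU_POINTS = np.array([-1000.0, 0.0, 1000.0, 3000.0])
      ED_POINTS = np.array([0.0, 1.0, 1.5, 2.5])

      def hu_to_electron_density(ct_volume):
          """Piecewise-linear calibration curve applied voxel-wise."""
          return np.interp(ct_volume, HU_POINTS, ED_POINTS)

      def simple_drr(ct_volume, axis=0):
          """Parallel-projection DRR: integrate electron density along one axis."""
          return hu_to_electron_density(ct_volume).sum(axis=axis)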

  15. A volumetric model-based 2D to 3D registration method for measuring kinematics of natural knees with single-plane fluoroscopy

    SciTech Connect

    Tsai, Tsung-Yuan; Lu, Tung-Wu; Chen, Chung-Ming; Kuo, Mei-Ying; Hsu, Horng-Chaung

    2010-03-15

    Purpose: Accurate measurement of the three-dimensional (3D) rigid body and surface kinematics of the natural human knee is essential for many clinical applications. Existing techniques are limited either in their accuracy or in the lack of realistic experimental evaluation of the measurement errors. The purposes of the study were to develop a volumetric model-based 2D to 3D registration method, called the weighted edge-matching score (WEMS) method, for measuring natural knee kinematics with single-plane fluoroscopy, to determine the measurement errors experimentally, and to compare its performance with that of the pattern intensity (PI) and gradient difference (GD) methods. Methods: The WEMS method gives higher priority to matching of longer edges of the digitally reconstructed radiograph and fluoroscopic images. The measurement errors of the methods were evaluated based on a human cadaveric knee at 11 flexion positions. Results: The accuracy of the WEMS method was determined experimentally to be less than 0.77 mm for the in-plane translations, 3.06 mm for out-of-plane translation, and 1.13 deg. for all rotations, which is better than that of the PI and GD methods. Conclusions: A new volumetric model-based 2D to 3D registration method has been developed for measuring 3D in vivo kinematics of natural knee joints with single-plane fluoroscopy. With the equipment used in the current study, the accuracy of the WEMS method is considered acceptable for the measurement of the 3D kinematics of the natural knee in clinical applications.

  16. GPUs benchmarking in subpixel image registration algorithm

    NASA Astrophysics Data System (ADS)

    Sanz-Sabater, Martin; Picazo-Bueno, Jose Angel; Micó, Vicente; Ferrerira, Carlos; Granero, Luis; Garcia, Javier

    2015-05-01

    Image registration techniques are used across different scientific fields, such as medical imaging and optical metrology. The most straightforward way to calculate the shift between two images is cross-correlation, taking the location of the highest value in the correlation image. The shift resolution is then limited to whole pixels, which may not be sufficient for certain applications. Better results can be achieved by interpolating both images up to the desired resolution and applying the same technique, but the memory needed by the system is significantly higher. To avoid this memory consumption we implement a subpixel shifting method based on the FFT. Starting from the original images, subpixel shifts can be applied by multiplying their discrete Fourier transforms by linear phases with different slopes. This method is time consuming because each candidate shift requires new calculations. The algorithm, being highly parallelizable, is very suitable for high performance computing systems. GPU (Graphics Processing Unit) accelerated computing became very popular more than ten years ago because GPUs offer hundreds of computational cores on a reasonably cheap card. In our case, we register the shift between two images by first obtaining a pixel-resolution estimate from FFT-based correlation and then refining it with the subpixel approach described above; we consider this a `brute force' method. We present a benchmark of the algorithm consisting of the first pixel-resolution approach followed by subpixel refinement, decreasing the shift step in every loop to achieve high resolution in a few steps. This program is executed on three different computers. Finally, we present the results of the computation with different kinds of CPUs and GPUs, checking the accuracy of the method and the time consumed on each computer, and discussing the advantages and disadvantages of using GPUs.
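
    The core operation described above, shifting an image by a subpixel amount through a linear phase ramp applied to its Fourier transform, can be sketched as follows (assuming NumPy); the brute-force refinement then evaluates a similarity metric (SSD here, as a stand-in) for many candidate shifts around a coarse estimate:

      import numpy as np

      def fourier_shift(img, dy, dx):
          """Shift a 2D image by (dy, dx) pixels (subpixel allowed) via the DFT:
          multiplying the spectrum by exp(-2*pi*i*(ky*dy + kx*dx)) translates the image."""
          ky = np.fft.fftfreq(img.shape[0])[:, None]
          kx = np.fft.fftfreq(img.shape[1])[None, :]
          phase = np.exp(-2j * np.pi * (ky * dy + kx * dx))
          return np.real(np.fft.ifft2(np.fft.fft2(img) * phase))

      def refine_shift(ref, mov, coarse, step=0.1, half_range=1.0):
          """Brute-force subpixel refinement around a coarse (integer) shift estimate."""
          best = (np.inf, coarse)
          offsets = np.arange(-half_range, half_range + step / 2, step)
          for dy in offsets:
              for dx in offsets:
                  cand = fourier_shift(mov, coarse[0] + dy, coarse[1] + dx)
                  err = np.sum((ref - cand) ** 2)     # similarity metric (SSD here)
                  best = min(best, (err, (coarse[0] + dy, coarse[1] + dx)))
          return best[1]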

  17. SAR/LANDSAT image registration study

    NASA Technical Reports Server (NTRS)

    Murphrey, S. W. (Principal Investigator)

    1978-01-01

    The author has identified the following significant results. Temporal registration of synthetic aperture radar data with LANDSAT-MSS data is both feasible (from a technical standpoint) and useful (from an information-content viewpoint). The greatest difficulty in registering aircraft SAR data to corrected LANDSAT-MSS data is control-point location. The differences in SAR and MSS data impact the selection of features that will serve as good control points. The SAR and MSS data are unsuitable for automatic computer correlation of digital control-point data. The gray-level data cannot be compared by the computer because of the different response characteristics of the MSS and SAR images.

  18. Registration of structurally dissimilar images in MRI-based brachytherapy

    NASA Astrophysics Data System (ADS)

    Berendsen, F. F.; Kotte, A. N. T. J.; de Leeuw, A. A. C.; Jürgenliemk-Schulz, I. M.; Viergever, M. A.; Pluim, J. P. W.

    2014-08-01

    A serious challenge in image registration is the accurate alignment of two images in which a certain structure is present in only one of the two. Such topological changes are problematic for conventional non-rigid registration algorithms. We propose to incorporate in a conventional free-form registration framework a geometrical penalty term that minimizes the volume of the missing structure in one image. We demonstrate our method on cervical MR images for brachytherapy. The intrapatient registration problem involves one image in which a therapy applicator is present and one in which it is not. By including the penalty term, a substantial improvement in the surface distance to the gold standard anatomical position and the residual volume of the applicator void are obtained. Registration of neighboring structures, i.e. the rectum and the bladder is generally improved as well, albeit to a lesser degree.

  19. Shearlet Features for Registration of Remotely Sensed Multitemporal Images

    NASA Technical Reports Server (NTRS)

    Murphy, James M.; Le Moigne, Jacqueline

    2015-01-01

    We investigate the role of anisotropic feature extraction methods for automatic image registration of remotely sensed multitemporal images. Building on the classical use of wavelets in image registration, we develop an algorithm based on shearlets, a mathematical generalization of wavelets that offers increased directional sensitivity. Initial experimental results on LANDSAT images are presented, which indicate superior performance of the shearlet algorithm when compared to classical wavelet algorithms.

  20. Lucas-Kanade image registration using camera parameters

    NASA Astrophysics Data System (ADS)

    Cho, Sunghyun; Cho, Hojin; Tai, Yu-Wing; Moon, Young Su; Cho, Junguk; Lee, Shihwa; Lee, Seungyong

    2012-01-01

    The Lucas-Kanade algorithm and its variants have been successfully used for numerous works in computer vision, which include image registration as a component in the process. In this paper, we propose a Lucas-Kanade based image registration method using camera parameters. We decompose a homography into camera intrinsic and extrinsic parameters, and assume that the intrinsic parameters are given, e.g., from the EXIF information of a photograph. We then estimate only the extrinsic parameters for image registration, considering two types of camera motions, 3D rotations and full 3D motions with translations and rotations. As the known information about the camera is fully utilized, the proposed method can perform image registration more reliably. In addition, as the number of extrinsic parameters is smaller than the number of homography elements, our method runs faster than the Lucas-Kanade based registration method that estimates a homography itself.
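
    For the pure-rotation case mentioned above, two views taken from the same camera centre are related by the homography H = K R K⁻¹, so only the three rotation parameters need to be estimated once the intrinsics K are known (e.g. from EXIF). A small sketch of that decomposition, assuming NumPy and an axis-angle rotation parameterisation (the intrinsics below are hypothetical, and this is not the paper's code):

      import numpy as np

      def rotation_from_axis_angle(r):
          """Rodrigues formula: 3-vector r (axis * angle, radians) -> 3x3 rotation matrix."""
          theta = np.linalg.norm(r)
          if theta < 1e-12:
              return np.eye(3)
          k = r / theta
          K = np.array([[0.0, -k[2], k[1]],
                        [k[2], 0.0, -k[0]],
                        [-k[1], k[0], 0.0]])
          return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

      def homography_from_rotation(intrinsics, r):
          """Homography induced between two images by a pure camera rotation."""
          R = rotation_from_axis_angle(r)
          return intrinsics @ R @ np.linalg.inv(intrinsics)

      # Hypothetical intrinsics (focal length 800 px, principal point at (320, 240)).
      K_cam = np.array([[800.0, 0.0, 320.0],
                        [0.0, 800.0, 240.0],
                        [0.0, 0.0, 1.0]])
      H = homography_from_rotation(K_cam, np.array([0.0, 0.02, 0.0]))  # small pan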

  1. Analytic regularization for landmark-based image registration

    NASA Astrophysics Data System (ADS)

    Shusharina, Nadezhda; Sharp, Gregory

    2012-03-01

    Landmark-based registration using radial basis functions (RBF) is an efficient and mathematically transparent method for the registration of medical images. To ensure invertibility and diffeomorphism of the RBF-based vector field, various regularization schemes have been suggested. Here, we report a novel analytic method of RBF regularization and demonstrate its power for Gaussian RBF. Our analytic formula can be used to obtain a regularized vector field from the solution of a system of linear equations, exactly as in traditional RBF, and can be generalized to any RBF with infinite support. We statistically validate the method on global registration of synthetic and pulmonary images. Furthermore, we present several clinical examples of multistage intensity/landmark-based registrations, where regularized Gaussian RBF are successful in correcting locally misregistered areas resulting from automatic B-spline registration. The intended ultimate application of our method is rapid, interactive local correction of deformable registration with a small number of mouse clicks.
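
    To make the landmark-driven RBF interpolation concrete, here is a minimal Gaussian-RBF sketch without the analytic regularization that is the paper's contribution (assuming NumPy; the landmark pairs are hypothetical): the weights come from a linear system built on the source landmarks, and the resulting displacement field can be evaluated anywhere in the image.

      import numpy as np

      def gaussian_kernel(p, q, sigma):
          """Pairwise Gaussian RBF values between point sets p (N, 2) and q (M, 2)."""
          d2 = ((p[:, None, :] - q[None, :, :]) ** 2).sum(-1)
          return np.exp(-d2 / (2.0 * sigma ** 2))

      def fit_rbf_weights(src, dst, sigma):
          """Solve K w = (dst - src) for the RBF weights (one column per dimension)."""
          return np.linalg.solve(gaussian_kernel(src, src, sigma), dst - src)

      def warp_points(pts, src, weights, sigma):
          """Apply the displacement field u(x) = sum_i w_i * phi(||x - src_i||)."""
          return pts + gaussian_kernel(pts, src, sigma) @ weights

      # Hypothetical landmark pairs (source positions and their displaced targets).
      src = np.array([[10.0, 10.0], [40.0, 15.0], [25.0, 40.0]])
      dst = src + np.array([[1.0, 0.5], [-0.5, 1.0], [0.0, -1.0]])
      w = fit_rbf_weights(src, dst, sigma=15.0)
      assert np.allclose(warp_points(src, src, w, sigma=15.0), dst)  # exact at landmarks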

  2. Research Issues in Image Registration for Remote Sensing

    NASA Technical Reports Server (NTRS)

    Eastman, Roger D.; LeMoigne, Jacqueline; Netanyahu, Nathan S.

    2007-01-01

    Image registration is an important element in data processing for remote sensing with many applications and a wide range of solutions. Despite considerable investigation the field has not settled on a definitive solution for most applications and a number of questions remain open. This article looks at selected research issues by surveying the experience of operational satellite teams, application-specific requirements for Earth science, and our experiments in the evaluation of image registration algorithms with emphasis on the comparison of algorithms for subpixel accuracy. We conclude that remote sensing applications put particular demands on image registration algorithms to take into account domain-specific knowledge of geometric transformations and image content.

  3. Automatic 3D image registration using voxel similarity measurements based on a genetic algorithm

    NASA Astrophysics Data System (ADS)

    Huang, Wei; Sullivan, John M., Jr.; Kulkarni, Praveen; Murugavel, Murali

    2006-03-01

    An automatic 3D non-rigid body registration system based upon the genetic algorithm (GA) process is presented. The system has been successfully applied to 2D and 3D situations using both rigid-body and affine transformations. Conventional optimization techniques and gradient search strategies generally require a good initial start location. The GA approach avoids the local minima/maxima traps of conventional optimization techniques. Based on the principles of Darwinian natural selection (survival of the fittest), the genetic algorithm has two basic steps: 1. Randomly generate an initial population. 2. Repeatedly apply the natural selection operation until a termination criterion is satisfied. The natural selection process selects individuals based on their fitness to participate in the genetic operations, and it creates new individuals by inheritance from both parents, genetic recombination (crossover) and mutation. Once the termination criteria are satisfied, the optimum is selected from the population. The algorithm was applied to 2D and 3D magnetic resonance images (MRI). It does not require any preprocessing such as thresholding, smoothing, segmentation, or definition of base points or edges. To evaluate the performance of the GA registration, the results were compared with results of the Automatic Image Registration technique (AIR) and with manual registration, which was used as the gold standard. Results showed that our GA implementation was robust and gave results very close to the gold standard. A pre-cropping strategy was also discussed as an efficient preprocessing step to enhance the registration accuracy.
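
    A compact sketch of the GA loop described above, applied to a 2D rigid transform (tx, ty, angle) scored by sum of squared differences (assuming NumPy/SciPy); the population size, mutation scale, and fitness metric are hypothetical choices, and no claim is made that this matches the paper's implementation:

      import numpy as np
      from scipy import ndimage

      def fitness(params, fixed, moving):
          tx, ty, angle_deg = params
          warped = ndimage.rotate(moving, angle_deg, reshape=False, order=1)
          warped = ndimage.shift(warped, (ty, tx), order=1)
          return -np.sum((fixed - warped) ** 2)        # higher is better (negative SSD)

      def ga_register(fixed, moving, pop=40, gens=60, bounds=(-10.0, 10.0), seed=0):
          rng = np.random.default_rng(seed)
          population = rng.uniform(bounds[0], bounds[1], size=(pop, 3))
          for _ in range(gens):
              scores = np.array([fitness(p, fixed, moving) for p in population])
              parents = population[np.argsort(scores)[::-1][: pop // 2]]   # selection
              children = []
              while len(children) < pop - len(parents):
                  a, b = parents[rng.integers(len(parents), size=2)]
                  child = np.where(rng.random(3) < 0.5, a, b)              # crossover
                  children.append(child + rng.normal(scale=0.5, size=3))   # mutation
              population = np.vstack([parents, np.array(children)])
          scores = np.array([fitness(p, fixed, moving) for p in population])
          return population[np.argmax(scores)]          # best (tx, ty, angle) found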

  4. Registration of in vivo MR to histology of rodent brains using blockface imaging

    NASA Astrophysics Data System (ADS)

    Uberti, Mariano; Liu, Yutong; Dou, Huanyu; Mosley, R. Lee; Gendelman, Howard E.; Boska, Michael

    2009-02-01

    Registration of MRI to histopathological sections can enhance bioimaging validation for use in pathobiologic, diagnostic, and therapeutic evaluations. However, commonly used registration methods fall short of this goal due to tissue shrinkage and tearing after brain extraction and preparation. In an attempt to overcome these limitations, we developed a software toolbox using 3D blockface imaging as the common space of reference. This toolbox includes a semi-automatic brain extraction technique using constraint level sets (CLS), 3D reconstruction methods for the blockface and MR volume, and a 2D warping technique using thin-plate splines with landmark optimization. Using this toolbox, the rodent brain volume is first extracted from the whole-head MRI using CLS. The blockface volume is reconstructed, followed by registration of the 3D brain MRI to the blockface volume to correct the global deformations due to brain extraction and fixation. Finally, registered MRI and histological slices are warped to corresponding blockface images to correct slice-specific deformations. The CLS brain extraction technique was validated by comparison with manual results, showing 94% overlap. The image warping technique was validated by calculating the target registration error (TRE). Results showed a registration accuracy of TRE < 1 pixel. Lastly, the registration method and the software tools developed were used to validate cell migration in murine human immunodeficiency virus type one encephalitis.

  5. High-accuracy registration of intraoperative CT imaging

    NASA Astrophysics Data System (ADS)

    Oentoro, A.; Ellis, R. E.

    2010-02-01

    Image-guided interventions using intraoperative 3D imaging can be less cumbersome than systems dependent on preoperative images, especially by needing neither potentially invasive image-to-patient registration nor a lengthy process of segmenting and generating a 3D surface model. In this study, a method for computer-assisted surgery using direct navigation on intraoperative imaging is presented. In this system the registration step of a navigated procedure was divided into two stages: preoperative calibration of images to a ceiling-mounted optical tracking system, and intraoperative tracking during acquisition of the 3D medical image volume. The preoperative stage used a custom-made multi-modal calibrator that could be optically tracked and also contained fiducial spheres for radiological detection; a robust registration algorithm was used to compensate for the very high false-detection rate that was due to the high physical density of the optical light-emitting diodes. Intraoperatively, a tracking device was attached to plastic bone models that were also instrumented with radio-opaque spheres; a calibrated pointer was used to contact the latter spheres as a validation of the registration. Experiments showed that the fiducial registration error of the preoperative calibration stage was approximately 0.1 mm. The target registration error in the validation stage was approximately 1.2 mm. This study suggests that direct registration, coupled with procedure-specific graphical rendering, is potentially a highly accurate means of performing image-guided interventions in a fast, simple manner.

  6. Web-based interactive 2D/3D medical image processing and visualization software.

    PubMed

    Mahmoudi, Seyyed Ehsan; Akhondi-Asl, Alireza; Rahmani, Roohollah; Faghih-Roohi, Shahrooz; Taimouri, Vahid; Sabouri, Ahmad; Soltanian-Zadeh, Hamid

    2010-05-01

    There are many medical image processing software tools available for research and diagnosis purposes. However, most of these tools are available only as local applications. This limits the accessibility of the software to a specific machine, and thus the data and processing power of that application are not available to other workstations. Further, there are operating system and processing power limitations which prevent such applications from running on every type of workstation. By developing web-based tools, it is possible for users to access the medical image processing functionalities wherever the internet is available. In this paper, we introduce a pure web-based, interactive, extendable, 2D and 3D medical image processing and visualization application that requires no client installation. Our software uses a four-layered design consisting of an algorithm layer, web-user-interface layer, server communication layer, and wrapper layer. To compete with extendibility of the current local medical image processing software, each layer is highly independent of other layers. A wide range of medical image preprocessing, registration, and segmentation methods are implemented using open source libraries. Desktop-like user interaction is provided by using AJAX technology in the web-user-interface. For the visualization functionality of the software, the VRML standard is used to provide 3D features over the web. Integration of these technologies has allowed implementation of our purely web-based software with high functionality without requiring powerful computational resources in the client side. The user-interface is designed such that the users can select appropriate parameters for practical research and clinical studies. PMID:20022133

  7. Tendon strain imaging using non-rigid image registration: a validation study

    NASA Astrophysics Data System (ADS)

    Almeida, Nuno M.; Slagmolen, Pieter; Barbosa, Daniel; Scheys, Lennart; Geukens, Leonie; Fukagawa, Shingo; Peers, Koen; Bellemans, Johan; Suetens, Paul; D'Hooge, Jan

    2012-03-01

    Ultrasound imaging has already been proven to be a useful tool for non-invasive strain quantification in soft tissue. While clinical applications so far only include cardiac imaging, the development of techniques suitable for the musculoskeletal system is an active area of research. In this study, a technique for speckle tracking on ultrasound images using non-rigid image registration is presented. This approach is based on a single 2D+t registration procedure, in which the temporal changes in the B-mode speckle patterns are locally assessed. This allows estimating strain from ultrasound image sequences of tissues under deformation while imposing temporal smoothness on the deformation field, yielding smooth strain curves. METHODS: The tracking algorithm was systematically tested on synthetic images and gelatin phantoms, under sinusoidal deformations with amplitudes between 0.5% and 4.0%, at frequencies between 0.25 Hz and 2.0 Hz. Preliminary tests were also performed on Achilles tendons isolated from human cadavers. RESULTS: The strain was estimated with deviations of -0.011% +/- 0.053% on the synthetic images and agreement of +/- 0.28% on the phantoms. Some tests with real tendons show good tracking results; however, significant variability between trials still exists. CONCLUSIONS: The proposed image registration methodology constitutes a robust tool for motion and deformation tracking in both simulated and real phantom data. Strain estimation in both cases shows that the proposed method is accurate and provides good precision. Although the ex-vivo results are still preliminary, the potential of the proposed algorithm is promising. This suggests that further improvements, together with systematic testing, can lead to in-vivo and clinical applications.
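
    The abstract reports strain values but not the estimation formula. As a minimal sketch of the final step only, and assuming the non-rigid registration has already produced an axial displacement field for a frame pair, the engineering (normal) strain along the tendon axis is the spatial derivative of that displacement; the field and pixel spacing below are illustrative, not the authors' implementation.

```python
import numpy as np

def axial_strain(disp_x, dx=1.0):
    """Normal strain along the x (tendon) axis from an axial displacement field.

    disp_x : 2D array of x-displacements (same units as dx) at each pixel,
             as produced by non-rigid registration of consecutive frames.
    dx     : pixel spacing along x.
    Returns du/dx evaluated with central differences (engineering strain).
    """
    return np.gradient(disp_x, dx, axis=1)

# Illustrative example: a uniform 1% stretch gives a constant strain field.
x = np.arange(200) * 0.1                      # lateral position in mm
disp = np.tile(0.01 * x, (50, 1))             # u(x) = 0.01 * x  ->  du/dx = 0.01
strain = axial_strain(disp, dx=0.1)
print("mean strain: %.4f" % strain.mean())    # ~0.0100, i.e. 1%
```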

  8. Automatic localization of vertebral levels in x-ray fluoroscopy using 3D-2D registration: a tool to reduce wrong-site surgery

    NASA Astrophysics Data System (ADS)

    Otake, Y.; Schafer, S.; Stayman, J. W.; Zbijewski, W.; Kleinszig, G.; Graumann, R.; Khanna, A. J.; Siewerdsen, J. H.

    2012-09-01

    Surgical targeting of the incorrect vertebral level (wrong-level surgery) is among the more common wrong-site surgical errors, attributed primarily to the lack of uniquely identifiable radiographic landmarks in the mid-thoracic spine. The conventional localization method involves manual counting of vertebral bodies under fluoroscopy; it is prone to human error and carries additional time and dose. We propose an image registration and visualization system (referred to as LevelCheck) for decision support in spine surgery by automatically labeling vertebral levels in fluoroscopy using a GPU-accelerated, intensity-based 3D-2D (namely CT-to-fluoroscopy) registration. A gradient information (GI) similarity metric and a CMA-ES optimizer were chosen due to their robustness and inherent suitability for parallelization. Simulation studies involved ten patient CT datasets from which 50 000 simulated fluoroscopic images were generated from C-arm poses selected to approximate the C-arm operator and positioning variability. Physical experiments used an anthropomorphic chest phantom imaged under real fluoroscopy. The registration accuracy was evaluated as the mean projection distance (mPD) between the estimated and true center of vertebral levels. Trials were defined as successful if the estimated position was within the projection of the vertebral body (namely mPD <5 mm). Simulation studies showed a success rate of 99.998% (1 failure in 50 000 trials) and computation time of 4.7 s on a midrange GPU. Analysis of failure modes identified cases of false local optima in the search space arising from longitudinal periodicity in vertebral structures. Physical experiments demonstrated the robustness of the algorithm against quantum noise and x-ray scatter. The ability to automatically localize target anatomy in fluoroscopy in near-real-time could be valuable in reducing the occurrence of wrong-site surgery while helping to reduce radiation exposure. The method is applicable beyond
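
    The gradient information (GI) similarity metric is only named in the abstract. The sketch below implements one common formulation of GI (following Pluim et al.), in which gradient pairs pointing in the same or exactly opposite directions are rewarded, weighted by the smaller gradient magnitude; in a pipeline like LevelCheck such a score would be evaluated between the measured radiograph and a DRR rendered at each pose proposed by the optimizer (e.g., CMA-ES). This is an assumption-laden illustration, not the published implementation.

```python
import numpy as np

def gradient_information(fixed, moving, eps=1e-12):
    """Gradient information (GI) similarity between two 2D images of equal shape."""
    gy1, gx1 = np.gradient(fixed.astype(float))   # gradients along rows (y) and columns (x)
    gy2, gx2 = np.gradient(moving.astype(float))
    mag1 = np.hypot(gx1, gy1)
    mag2 = np.hypot(gx2, gy2)
    cos_a = (gx1 * gx2 + gy1 * gy2) / (mag1 * mag2 + eps)
    weight = cos_a ** 2    # equals (cos(2*alpha) + 1) / 2, alpha = angle between gradients
    return float(np.sum(weight * np.minimum(mag1, mag2)))
```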

  9. The role of image registration in brain mapping.

    PubMed

    Toga, A W; Thompson, P M

    2001-01-01

    Image registration is a key step in a great variety of biomedical imaging applications. It provides the ability to geometrically align one dataset with another, and is a prerequisite for all imaging applications that compare datasets across subjects, imaging modalities, or across time. Registration algorithms also enable the pooling and comparison of experimental findings across laboratories, the construction of population-based brain atlases, and the creation of systems to detect group patterns in structural and functional imaging data. We review the major types of registration approaches used in brain imaging today. We focus on their conceptual basis, the underlying mathematics, and their strengths and weaknesses in different contexts. We describe the major goals of registration, including data fusion, quantification of change, automated image segmentation and labeling, shape measurement, and pathology detection. We indicate that registration algorithms have great potential when used in conjunction with a digital brain atlas, which acts as a reference system in which brain images can be compared for statistical analysis. The resulting armory of registration approaches is fundamental to medical image analysis, and in a brain mapping context provides a means to elucidate clinical, demographic, or functional trends in the anatomy or physiology of the brain. PMID:19890483

  10. A Novel Technique for Prealignment in Multimodality Medical Image Registration

    PubMed Central

    Zhou, Wu; Zhang, Lijuan; Xie, Yaoqin; Liang, Changhong

    2014-01-01

    An image pair is often aligned initially based on a rigid or affine transformation before a deformable registration method is applied in medical image registration. Inappropriate initial registration may compromise the registration speed or impede the convergence of the optimization algorithm. In this work, a novel technique was proposed for prealignment in both monomodality and multimodality image registration based on statistical correlation of gradient information. A simple and robust algorithm was proposed to determine the rotational difference between two images based on orientation histogram matching, accumulated from the local orientation of each pixel without any feature extraction. Experimental results showed that it was effective in acquiring the orientation angle between two unregistered images, with advantages over the existing edge-map-based method in multimodal cases. Applying the orientation detection to the registration of CT/MR, T1/T2 MRI, and monomodality images with respect to rigid and nonrigid deformation improved the chances of finding the global optimum of the registration and reduced the search space of the optimization. PMID:25162024
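
    One simple way to realize this kind of orientation-histogram matching (a sketch under assumptions, not the published algorithm) is to accumulate a gradient-magnitude-weighted histogram of local gradient orientations for each image and take the circular shift that maximizes the correlation of the two histograms as the rotation estimate; the sign convention of the returned angle depends on the image coordinate conventions.

```python
import numpy as np

def orientation_histogram(img, bins=360):
    """Gradient-magnitude-weighted histogram of local gradient orientations (degrees)."""
    gy, gx = np.gradient(img.astype(float))
    angles = np.rad2deg(np.arctan2(gy, gx)) % 360.0
    weights = np.hypot(gx, gy)
    hist, _ = np.histogram(angles, bins=bins, range=(0.0, 360.0), weights=weights)
    return hist / (hist.sum() + 1e-12)

def estimate_rotation(fixed, moving, bins=360):
    """Rotation angle (degrees) that best aligns the two orientation histograms,
    found by circular cross-correlation of the histograms."""
    h_f = orientation_histogram(fixed, bins)
    h_m = orientation_histogram(moving, bins)
    scores = [np.dot(h_f, np.roll(h_m, s)) for s in range(bins)]
    return float(np.argmax(scores) * (360.0 / bins))
```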

  11. Antenna-coupled microbolometer based uncooled 2D array and camera for 2D real-time terahertz imaging

    NASA Astrophysics Data System (ADS)

    Simoens, F.; Meilhan, J.; Gidon, S.; Lasfargues, G.; Lalanne Dera, J.; Ouvrier-Buffet, J. L.; Pocas, S.; Rabaud, W.; Guellec, F.; Dupont, B.; Martin, S.; Simon, A. C.

    2013-09-01

    CEA-Leti has developed a monolithic large focal plane array bolometric technology optimized for 2D real-time imaging in the terahertz range. Each pixel consists of a silicon microbolometer coupled to specific antennas and a resonant quarter-wavelength cavity. First prototypes of imaging arrays have been designed and manufactured for optimized sensing in the 1-3.5 THz range, where THz quantum cascade lasers deliver high optical power. An NEP on the order of 1 pW/sqrt(Hz) has been assessed at 2.5 THz. This paper reports the steps of this development, starting from the pixel level, moving to an array monolithically integrated with its CMOS ROIC, and finally to a stand-alone camera. For each step, modeling, technological prototyping and experimental characterizations are presented.

  12. The appropriate parameter retrieval algorithm for feature-based SAR image registration

    NASA Astrophysics Data System (ADS)

    Li, Dong; Zhang, Yunhua

    2012-09-01

    This paper investigates the appropriate parameter retrieval algorithm for feature-based synthetic aperture radar (SAR) image registration. The widely used random sample consensus (RANSAC) is observed to be unstable for SAR images because of its estimation strategy and loss function. In order to enable stable and robust registration for SAR, an extended fast least trimmed squares (EF-LTS) is proposed, which conducts the registration by least-squares fitting of at least half of the correspondences to minimize the squared polynomial residuals, instead of fitting a minimal sampling set to maximize the cardinality of the consensus set as RANSAC does. Experiments on an interferometric SAR image pair demonstrate that the proposed algorithm behaves very stably and that the obtained registration is on average better than that of RANSAC in terms of cross-correlation and spectral SNR. With this algorithm, a stable estimation of any kind of 2D polynomial warp model can be achieved efficiently with high robustness and accuracy. Thus EF-LTS is more appropriate for SAR image registration.
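
    EF-LTS itself is only described at a high level here. The sketch below illustrates the underlying least-trimmed-squares idea with a plain 2D affine warp: repeatedly refit by least squares on the half of the correspondences with the smallest residuals, instead of maximizing a consensus set as RANSAC does. The concentration-step loop and the affine model are generic stand-ins, not the published EF-LTS.

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares 2D affine warp mapping src -> dst (both (N, 2) arrays)."""
    A = np.hstack([src, np.ones((len(src), 1))])       # rows [x, y, 1]
    params, *_ = np.linalg.lstsq(A, dst, rcond=None)   # (3, 2) affine coefficient matrix
    return params

def lts_affine(src, dst, n_iter=20):
    """Trimmed affine fit: repeatedly refit on the best half of the correspondences."""
    n_keep = len(src) // 2 + 1
    keep = np.arange(len(src))                         # start from all matches
    for _ in range(n_iter):
        params = fit_affine(src[keep], dst[keep])
        residuals = np.linalg.norm(
            np.hstack([src, np.ones((len(src), 1))]) @ params - dst, axis=1)
        new_keep = np.argsort(residuals)[:n_keep]      # concentration step
        if np.array_equal(np.sort(new_keep), np.sort(keep)):
            break
        keep = new_keep
    return params, keep
```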

  13. Piecewise nonlinear image registration using DCT basis functions

    NASA Astrophysics Data System (ADS)

    Gan, Lin; Agam, Gady

    2015-03-01

    The deformation field in nonlinear image registration is usually modeled by a global model. Such models are often faced with the problem that a locally complex deformation cannot be accurately modeled simply by increasing the degrees of freedom (DOF). In addition, highly complex models require additional regularization, which is usually ineffective when applied globally. Registering locally corresponding regions addresses this problem with a divide-and-conquer strategy. In this paper we propose a piecewise image registration approach using Discrete Cosine Transform (DCT) basis functions for a nonlinear model. The contributions of this paper are threefold. First, we develop a multi-level piecewise registration framework that extends the concept of piecewise linear registration and works with any nonlinear deformation model. This framework is then applied to nonlinear DCT registration. Second, we show how adaptive model complexity and regularization can be applied to local piece registration, thus accounting for higher variability. Third, we show how the proposed piecewise DCT can overcome the fundamental problem of inverting a large curvature matrix in global DCT when using high degrees of freedom. The proposed approach can be viewed as an extension of global DCT registration in which the overall model complexity is increased while achieving effective local regularization. Experimental evaluation compares the proposed approach to piecewise linear registration using an affine transformation model and to global nonlinear registration using a DCT model. Preliminary results show that the proposed approach achieves improved performance.

  14. Multimodal image fusion with SIMS: Preprocessing with image registration.

    PubMed

    Tarolli, Jay Gage; Bloom, Anna; Winograd, Nicholas

    2016-06-01

    In order to utilize complementary imaging techniques to supply higher resolution data for fusion with secondary ion mass spectrometry (SIMS) chemical images, there are a number of aspects that, if not given proper consideration, could produce results which are easy to misinterpret. One of the most critical aspects is that the two input images must be of the exact same analysis area. With the desire to explore new higher resolution data sources that exist outside of the mass spectrometer, this requirement becomes even more important. To ensure that two input images are of the same region, an implementation of the Insight Segmentation and Registration Toolkit (ITK) was developed to act as a preprocessing step before performing image fusion. This implementation of ITK allows several degrees of movement between two input images to be accounted for, including translation, rotation, and scale transforms. First, the implementation was confirmed to accurately register two multimodal images by supplying a known transform. Once validated, two model systems, a copper mesh grid and a group of RAW 264.7 cells, were used to demonstrate the use of the ITK implementation to register a SIMS image with a microscopy image for the purpose of performing image fusion. PMID:26772745
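
    The paper's own preprocessing is an ITK (C++) implementation; as a rough illustration of the same kind of step, the following SimpleITK (the Python wrapping of ITK) sketch registers a moving image to a fixed image with a 2D similarity transform, covering the translation, rotation, and scale degrees of freedom mentioned above. File paths, metric, optimizer, and parameter values are placeholders, not the authors' settings.

```python
import SimpleITK as sitk

def register_similarity(fixed_path, moving_path):
    """Align a higher-resolution microscopy image to a SIMS image with a 2D
    similarity transform (translation, rotation, isotropic scale) before fusion."""
    fixed = sitk.ReadImage(fixed_path, sitk.sitkFloat32)
    moving = sitk.ReadImage(moving_path, sitk.sitkFloat32)

    initial = sitk.CenteredTransformInitializer(
        fixed, moving, sitk.Similarity2DTransform(),
        sitk.CenteredTransformInitializerFilter.GEOMETRY)

    reg = sitk.ImageRegistrationMethod()
    reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
    reg.SetMetricSamplingStrategy(reg.RANDOM)
    reg.SetMetricSamplingPercentage(0.2)
    reg.SetInterpolator(sitk.sitkLinear)
    reg.SetOptimizerAsRegularStepGradientDescent(
        learningRate=1.0, minStep=1e-4, numberOfIterations=200)
    reg.SetOptimizerScalesFromPhysicalShift()
    reg.SetInitialTransform(initial, inPlace=False)

    transform = reg.Execute(fixed, moving)
    resampled = sitk.Resample(moving, fixed, transform, sitk.sitkLinear, 0.0)
    return transform, resampled
```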

  15. INTER-GROUP IMAGE REGISTRATION BY HIERARCHICAL GRAPH SHRINKAGE

    PubMed Central

    Ying, Shihui; Wu, Guorong; Liao, Shu; Shen, Dinggang

    2013-01-01

    In this paper, we propose a novel inter-group image registration method to register different groups of images (e.g., young and elderly brains) simultaneously. Specifically, we use a hierarchical two-level graph to model the distribution of all images on the manifold, with the intra-graph representing the image distribution within each group and the inter-graph describing the relationship between the two groups. The procedure of inter-group registration is then formulated as a dynamic evolution of graph shrinkage. The advantage of our method is that the topology of the entire image distribution is explored to guide the image registration. In this way, each image coordinates with its neighboring images on the manifold to deform towards the population center, by following the deformation pathway simultaneously optimized within the graph. Our proposed method has also been compared with other state-of-the-art inter-group registration methods, and it achieves better registration results in terms of registration accuracy and robustness. PMID:24443692

  16. Visible and infrared image registration based on visual salient features

    NASA Astrophysics Data System (ADS)

    Wu, Feihong; Wang, Bingjian; Yi, Xiang; Li, Min; Hao, Jingya; Qin, Hanlin; Zhou, Huixin

    2015-09-01

    In order to improve the precision of visible and infrared (VIS/IR) image registration, an image registration method based on visual salient (VS) features is presented. First, a VS feature detector based on a modified visual attention model is presented to extract VS points. Because the iterative, within-feature competition method used in visual attention models is time consuming, an alternative fast visual salient (FVS) feature detector is proposed to make VS feature extraction more efficient. Then, a descriptor-rearranging (DR) strategy is adopted to describe the feature points. This strategy combines information from both the IR image and its negative image to overcome the contrast-reversal problem between VIS and IR images, making it easier to find corresponding points in VIS/IR image pairs. Experiments show that both the VS and FVS detectors have higher repeatability scores than the scale invariant feature transform under blurring, brightness change, JPEG compression, noise, and viewpoint change, except for large scale changes. The combination of the VS detector and the DR registration strategy achieves precise image registration but is time-consuming. The combination of the FVS detector and the DR registration strategy also reaches a good registration of VIS/IR images, in a shorter time.

  17. Towards real time 2D to 3D registration for ultrasound-guided endoscopic and laparoscopic procedures

    PubMed Central

    Westin, Carl-Fredrik; Vosburgh, Kirby G.

    2010-01-01

    Purpose A method to register endoscopic and laparoscopic ultrasound (US) images in real time with pre-operative computed tomography (CT) data sets has been developed with the goal of improving diagnosis, biopsy guidance, and surgical interventions in the abdomen. Methods The technique, which has the potential to operate in real time, is based on a new phase correlation technique: LEPART, which specifies the location of a plane in the CT data which best corresponds to the US image. Validation of the method was carried out using an US phantom with cyst regions and with retrospective analysis of data sets from animal model experiments. Results The phantom validation study shows that local translation displacements can be recovered for each US frame with a root mean squared error of 1.56 ± 0.78 mm in less than 5 sec, using non-optimized algorithm implementations. Conclusion A new method for multimodality (preoperative CT and intraoperative US endoscopic images) registration to guide endoscopic interventions was developed and found to be efficient using clinically realistic datasets. The algorithm is inherently capable of being implemented in a parallel computing system so that full real time operation appears likely. PMID:20033331
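
    The LEPART technique itself is not detailed in the abstract; the classical phase-correlation building block it is described as extending can be sketched as follows: the inverse FFT of the normalized cross-power spectrum of two images peaks at their relative translation. This is a generic illustration, not the published method.

```python
import numpy as np

def phase_correlation(a, b, eps=1e-12):
    """Integer-pixel translation (dy, dx) by which image b is displaced relative to
    image a, estimated from the peak of the inverse FFT of the normalized
    cross-power spectrum."""
    Fa = np.fft.fft2(a)
    Fb = np.fft.fft2(b)
    cross_power = np.conj(Fa) * Fb
    cross_power /= (np.abs(cross_power) + eps)
    corr = np.fft.ifft2(cross_power).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap shifts larger than half the image size to negative displacements.
    shifts = [p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape)]
    return tuple(shifts)

# Quick self-check: shift an array and recover the shift.
rng = np.random.default_rng(1)
img = rng.random((64, 64))
shifted = np.roll(img, shift=(5, -3), axis=(0, 1))
print(phase_correlation(img, shifted))   # expected (5, -3)
```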

  18. Parallel image registration with a thin client interface

    NASA Astrophysics Data System (ADS)

    Saiprasad, Ganesh; Lo, Yi-Jung; Plishker, William; Lei, Peng; Ahmad, Tabassum; Shekhar, Raj

    2010-03-01

    Despite its high significance, the clinical utilization of image registration remains limited because of its lengthy execution time and a lack of easy access. The focus of this work was twofold. First, we accelerated our coarse-to-fine, volume subdivision-based image registration algorithm by a novel parallel implementation that maintains the accuracy of our uniprocessor implementation. Second, we developed a thin-client computing model with a user-friendly interface to perform rigid and nonrigid image registration. Our novel parallel computing model uses the message passing interface model on a 32-core cluster. The results show that, compared with the uniprocessor implementation, the parallel implementation of our image registration algorithm is approximately 5 times faster for rigid image registration and approximately 9 times faster for nonrigid registration for the images used. To test the viability of such systems for clinical use, we developed a thin client in the form of a plug-in in OsiriX, a well-known open source PACS workstation and DICOM viewer, and used it for two applications. The first application registered the baseline and follow-up MR brain images, whose subtraction was used to track progression of multiple sclerosis. The second application registered pretreatment PET and intratreatment CT of radiofrequency ablation patients to demonstrate a new capability of multimodality imaging guidance. The registration acceleration coupled with the remote implementation using a thin client should ultimately increase accuracy, speed, and access of image registration-based interpretations in a number of diagnostic and interventional applications.

  19. SU-E-J-137: Image Registration Tool for Patient Setup in Korea Heavy Ion Medical Accelerator Center

    SciTech Connect

    Kim, M; Suh, T; Cho, W; Jung, W

    2015-06-15

    Purpose: A potential validation tool for compensating patient positioning error was developed using 2D/3D and 3D/3D image registration. Methods: For 2D/3D registration, digitally reconstructed radiography (DRR) and three-dimensional computed tomography (3D-CT) images were used. The ray-casting algorithm is the most straightforward method for generating a DRR. We adopted the traditional ray-casting method, which finds the intersections of a ray with all objects, i.e., the voxels of the 3D-CT volume in the scene. The similarity between the extracted DRR and the orthogonal image was measured by using a normalized mutual information method. Two orthogonal images were acquired from a CyberKnife system from the anterior-posterior (AP) and right lateral (RL) views. The 3D-CT and two orthogonal images of an anthropomorphic phantom and a head and neck cancer patient were used in this study. For 3D/3D registration, planning CT and in-room CT images were used. After registration, the translation and rotation factors were calculated to position a couch movable in six dimensions. Results: Registration accuracies with average errors of 2.12 mm ± 0.50 mm for translations and 1.23° ± 0.40° for rotations were acquired by 2D/3D registration using an anthropomorphic Alderson-Rando phantom. In addition, registration accuracies with average errors of 0.90 mm ± 0.30 mm for translations and 1.00° ± 0.2° for rotations were acquired using the CT image sets. Conclusion: We demonstrated that this validation tool could compensate for patient positioning error. In addition, this research could be the fundamental step for compensating patient positioning error at the first Korea heavy-ion medical accelerator treatment center.
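
    Normalized mutual information is named as the DRR-to-radiograph similarity measure but not defined in the abstract. A minimal histogram-based sketch of one common definition, NMI = (H(A) + H(B)) / H(A, B) (Studholme et al.), is shown below; the bin count and the exact variant used by the authors may differ.

```python
import numpy as np

def normalized_mutual_information(a, b, bins=64):
    """NMI = (H(A) + H(B)) / H(A, B), computed from a joint intensity histogram.
    a, b: images of identical shape (e.g., a DRR and an orthogonal radiograph)."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1)
    py = pxy.sum(axis=0)

    def entropy(p):
        p = p[p > 0]
        return -np.sum(p * np.log(p))

    return (entropy(px) + entropy(py)) / entropy(pxy.ravel())
```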

  20. Registration of Heat Capacity Mapping Mission day and night images

    NASA Technical Reports Server (NTRS)

    Watson, K.; Hummer-Miller, S.; Sawatzky, D. L. (Principal Investigator)

    1982-01-01

    Neither iterative registration, using drainage intersection maps for control, nor cross correlation techniques were satisfactory in registering day and night HCMM imagery. A procedure was developed which registers the image pairs by selecting control points and mapping the night thermal image to the daytime thermal and reflectance images using an affine transformation on a 1300 by 1100 pixel image. The resulting image registration is accurate to better than two pixels (RMS) and does not exhibit the significant misregistration that was noted in the temperature-difference and thermal-inertia products supplied by NASA. The affine transformation was determined using simple matrix arithmetic, a step that can be performed rapidly on a minicomputer.
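
    The affine transformation "determined using simple matrix arithmetic" from control points can be reproduced as an ordinary least-squares solve for the six affine coefficients, followed by the RMS control-point misfit quoted above; the sketch below is a generic reconstruction, not the original HCMM processing code.

```python
import numpy as np

def affine_from_control_points(night_pts, day_pts):
    """Solve the 6-parameter affine transform mapping night-image control points (x, y)
    onto their day-image counterparts by ordinary least squares:
        x' = a0*x + a1*y + a2,   y' = b0*x + b1*y + b2
    Returns the 3x2 coefficient matrix and the RMS control-point misfit in pixels."""
    A = np.hstack([night_pts, np.ones((len(night_pts), 1))])
    coeffs, *_ = np.linalg.lstsq(A, day_pts, rcond=None)
    rms = np.sqrt(np.mean(np.sum((A @ coeffs - day_pts) ** 2, axis=1)))
    return coeffs, rms
```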

  1. Nonrigid Medical Image Registration Based on Mesh Deformation Constraints

    PubMed Central

    Qiu, TianShuang; Guo, DongMei

    2013-01-01

    Regularizing the deformation field is an important aspect of nonrigid medical image registration. By covering the template image with a triangular mesh, this paper proposes a new regularization constraint in terms of the connections between mesh vertices. The connection relationship is preserved by the spring analogy method. The method is evaluated by registering cerebral magnetic resonance imaging (MRI) data obtained from different individuals. Experimental results show that the proposed method has good deformation ability and topology-preserving ability, providing a new approach to nonrigid medical image registration. PMID:23424604

  2. MR to CT registration of brains using image synthesis

    NASA Astrophysics Data System (ADS)

    Roy, Snehashis; Carass, Aaron; Jog, Amod; Prince, Jerry L.; Lee, Junghoon

    2014-03-01

    Computed tomography (CT) is the preferred imaging modality for patient dose calculation for radiation therapy. Magnetic resonance (MR) imaging (MRI) is used along with CT to identify brain structures due to its superior soft tissue contrast. Registration of MR and CT is necessary for accurate delineation of the tumor and other structures, and is critical in radiotherapy planning. Mutual information (MI) or its variants are typically used as a similarity metric to register MRI to CT. However, unlike CT, MRI intensity does not have an accepted calibrated intensity scale. Therefore, MI-based MR-CT registration may vary from scan to scan as MI depends on the joint histogram of the images. In this paper, we propose a fully automatic framework for MR-CT registration by synthesizing a synthetic CT image from MRI using a co-registered pair of MR and CT images as an atlas. Patches of the subject MRI are matched to the atlas and the synthetic CT patches are estimated in a probabilistic framework. The synthetic CT is registered to the original CT using a deformable registration and the computed deformation is applied to the MRI. In contrast to most existing methods, we do not need any manual intervention such as picking landmarks or regions of interests. The proposed method was validated on ten brain cancer patient cases, showing 25% improvement in MI and correlation between MR and CT images after registration compared to state-of-the-art registration methods.

  3. Infrared thermal facial image sequence registration analysis and verification

    NASA Astrophysics Data System (ADS)

    Chen, Chieh-Li; Jian, Bo-Lin

    2015-03-01

    To study the emotional responses of subjects to the International Affective Picture System (IAPS), infrared thermal facial image sequences are preprocessed for registration before further analysis, such that the variance caused by minor and irregular subject movements is reduced. Without affecting the comfort level and inducing minimal harm, this study proposes an infrared thermal facial image sequence registration process that reduces the deviations caused by unconscious head shaking of the subjects. A fixed image for registration is produced through localization of the centroid of the eye region as well as image translation and rotation processes. The thermal image sequence is then automatically registered using the proposed two-stage genetic algorithm. The deviation before and after image registration is quantified by image quality indices. The results show that the infrared thermal image sequence registration process proposed in this study is effective in localizing facial images accurately, which will be beneficial to the correlation analysis of psychological information related to the facial area.

  4. Rapid pedobarographic image registration based on contour curvature and optimization.

    PubMed

    Oliveira, Francisco P M; Tavares, João Manuel R S; Pataky, Todd C

    2009-11-13

    Image registration, the process of optimally aligning homologous structures in multiple images, has recently been demonstrated to support automated pixel-level analysis of pedobarographic images and, subsequently, to extract unique and biomechanically relevant information from plantar pressure data. Recent registration methods have focused on robustness, with slow but globally powerful algorithms. In this paper, we present an alternative registration approach that affords both speed and accuracy, with the goal of making pedobarographic image registration more practical for near-real-time laboratory and clinical applications. The current algorithm first extracts centroid-based curvature trajectories from pressure image contours, and then optimally matches these curvature profiles using optimization based on dynamic programming. Special cases of disconnected images (which occur in high-arched subjects, for example) are dealt with by introducing an artificial, spatially linear bridge between adjacent image clusters. Two registration algorithms were developed: a 'geometric' algorithm, which exclusively matched geometry, and a 'hybrid' algorithm, which performed subsequent pseudo-optimization. After testing the two algorithms on 30 control image pairs considered in a previous study, we found that, when compared with previously published results, the hybrid algorithm improved the overlap ratio (p=0.010), but both current algorithms had slightly higher mean-squared error, presumably because they did not consider pixel intensity. Nonetheless, both algorithms greatly improved computational efficiency (25+/-8 and 53+/-9 ms per image pair for geometric and hybrid registrations, respectively). These results imply that registration-based pixel-level pressure image analyses can, eventually, be implemented for practical clinical purposes. PMID:19647829

  5. Avoiding Stair-Step Artifacts in Image Registration for GOES-R Navigation and Registration Assessment

    NASA Technical Reports Server (NTRS)

    Grycewicz, Thomas J.; Tan, Bin; Isaacson, Peter J.; De Luccia, Frank J.; Dellomo, John

    2016-01-01

    In developing software for independent verification and validation (IVV) of the Image Navigation and Registration (INR) capability of the Geostationary Operational Environmental Satellite R Series (GOES-R) Advanced Baseline Imager (ABI), we have encountered an image registration artifact which limits the accuracy of image offset estimation at the subpixel scale using image correlation. Where the two images to be registered have the same pixel size, subpixel image registration preferentially selects registration values where the image pixel boundaries are close to lined up. Because of the shape of the curve relating input displacement to estimated offset, we call this a stair-step artifact. When one image is at a higher resolution than the other, the stair-step artifact is minimized by correlating at the higher resolution. For validating ABI image navigation, GOES-R images are correlated with Landsat-based ground truth maps. To create the ground truth map, the Landsat image is first transformed to the perspective seen from the GOES-R satellite, and then scaled to an appropriate pixel size. Minimizing processing time motivates choosing the map pixels to be the same size as the GOES-R pixels. At this pixel size, image processing of the shift estimate is efficient, but the stair-step artifact is present. If the map pixel is very small, stair-step is not a problem, but image correlation is computation-intensive. This paper describes simulation-based selection of the scale for truth maps for registering GOES-R ABI images.
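
    The stair-step behaviour arises in the usual subpixel refinement of a correlation peak. As an illustrative sketch (not the GOES-R IVV code), the 1D version below locates the integer argmax of a correlation curve and refines it with a three-point parabolic fit; when both images share the same pixel grid, these refined estimates tend to cluster near integer offsets, producing the stair-step curve described above.

```python
import numpy as np

def subpixel_offset_1d(corr):
    """Refine the integer argmax of a 1D correlation curve with a 3-point parabolic fit."""
    i = int(np.argmax(corr))
    if i == 0 or i == len(corr) - 1:
        return float(i)                       # peak at the border: no refinement possible
    y0, y1, y2 = corr[i - 1], corr[i], corr[i + 1]
    denom = y0 - 2.0 * y1 + y2
    delta = 0.5 * (y0 - y2) / denom if denom != 0 else 0.0
    return i + delta
```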

  6. Image appraisal for 2D and 3D electromagnetic inversion

    SciTech Connect

    Alumbaugh, D.L.; Newman, G.A.

    1998-04-01

    Linearized methods are presented for appraising image resolution and parameter accuracy in images generated with two and three dimensional non-linear electromagnetic inversion schemes. When direct matrix inversion is employed, the model resolution and model covariance matrices can be directly calculated. The columns of the model resolution matrix are shown to yield empirical estimates of the horizontal and vertical resolution throughout the imaging region. Plotting the square root of the diagonal of the model covariance matrix yields an estimate of how the estimated data noise maps into parameter error. When the conjugate gradient method is employed rather than a direct inversion technique (for example in 3D inversion), an iterative method can be applied to statistically estimate the model covariance matrix, as well as a regularization covariance matrix. The latter estimates the error in the inverted results caused by small variations in the regularization parameter. A method for calculating individual columns of the model resolution matrix using the conjugate gradient method is also developed. Examples of the image analysis techniques are provided on a synthetic cross well EM data set.

  7. Multifractal analysis of 2D gray soil images

    NASA Astrophysics Data System (ADS)

    González-Torres, Ivan; Losada, Juan Carlos; Heck, Richard; Tarquis, Ana M.

    2015-04-01

    Soil structure, understood as the spatial arrangement of soil pores, is one of the key factors in soil modelling processes. Geometric properties of individual pores and interpretation of their morphological parameters can be estimated from thin sections or 3D Computed Tomography images (Tarquis et al., 2003), but there is no satisfactory method to binarize these images and quantify the complexity of their spatial arrangement (Tarquis et al., 2008; Tarquis et al., 2009; Baveye et al., 2010). The objective of this work was to apply a multifractal technique, through the singularity (α) and f(α) spectra, to quantify it without applying any threshold (González-Torres, 2014). Intact soil samples were collected from four horizons of an Argisol, formed on the Tertiary Barreiras group of formations in Pernambuco state, Brazil (Itapirema Experimental Station). The natural vegetation of the region is tropical, coastal rainforest. From each horizon, showing different porosities and spatial arrangements, three adjacent samples were taken, giving a set of twelve samples. The intact soil samples were imaged using an EVS (now GE Medical, London, Canada) MS-8 MicroCT scanner with 45 μm pixel-1 resolution (256x256 pixels). Though some samples required paring to fit the 64 mm diameter imaging tubes, field orientation was maintained. References Baveye, P.C., M. Laba, W. Otten, L. Bouckaert, P. Dello, R.R. Goswami, D. Grinev, A. Houston, Yaoping Hu, Jianli Liu, S. Mooney, R. Pajor, S. Sleutel, A. Tarquis, Wei Wang, Qiao Wei, Mehmet Sezgin. Observer-dependent variability of the thresholding step in the quantitative analysis of soil images and X-ray microtomography data. Geoderma, 157, 51-63, 2010. González-Torres, Iván. Theory and application of multifractal analysis methods in images for the study of soil structure. Master thesis, UPM, 2014. Tarquis, A.M., R.J. Heck, J.B. Grau; J. Fabregat, M.E. Sanchez and J.M. Antón. Influence of Thresholding in Mass and Entropy Dimension of 3-D

  8. A 2-D imaging heat-flux gauge

    SciTech Connect

    Noel, B.W.; Borella, H.M. ); Beshears, D.L.; Sartory, W.K.; Tobin, K.W.; Williams, R.K. ); Turley, W.D. . Santa Barbara Operations)

    1991-07-01

    This report describes a new leadless two-dimensional imaging optical heat-flux gauge. The gauge is made by depositing arrays of thermographic-phosphor (TP) spots onto the faces of a polymethylpentene insulator. In the first section of the report, we describe several gauge configurations and their prototype realizations. A satisfactory configuration is an array of right triangles on each face that overlie to form squares when the gauge is viewed normal to the surface. The next section of the report treats the thermal conductivity of TPs. We set up an experiment using a comparative longitudinal heat-flow apparatus to measure the previously unknown thermal conductivity of these materials. The thermal conductivity of one TP, Y2O3:Eu, is 0.0137 W/cm·K over the temperature range from about 300 to 360 K. The theories underlying the time response of TP gauges and their imaging characteristics are discussed in the next section. Then we discuss several laboratory experiments to (1) demonstrate that the TP heat-flux gauge can be used in imaging applications; (2) obtain a quantum yield that enumerates what typical optical output signal amplitudes can be obtained from TP heat-flux gauges; and (3) determine whether LANL-designed intensified video cameras have sufficient sensitivity to acquire images from the heat-flux gauges. We obtained positive results from all the measurements. Throughout the text, we note limitations, areas where improvements are needed, and where further research is necessary. 12 refs., 25 figs., 4 tabs.

  9. Temporal mammogram image registration using optimized curvilinear coordinates.

    PubMed

    Abdel-Nasser, Mohamed; Moreno, Antonio; Puig, Domenec

    2016-04-01

    Registration of mammograms plays an important role in breast cancer computer-aided diagnosis systems. Radiologists usually compare mammogram images in order to detect abnormalities, and this comparison requires a registration between them. A temporal mammogram registration method is proposed in this paper. It is based on curvilinear coordinates, which are utilized to cope with both global and local deformations in the breast area. Temporal mammogram pairs are used to validate the proposed method. After registration, the similarity between the mammograms is maximized, and the distance between manually defined landmarks is decreased. In addition, a thorough comparison with state-of-the-art mammogram registration methods is performed to show its effectiveness. PMID:27000285

  10. High-performance automatic image registration for remote sensing

    NASA Astrophysics Data System (ADS)

    Chalermwat, Prachya

    Image registration is one of the crucial steps in the analysis of remotely sensed data. A new acquired image must be transformed, using image registration techniques, to match the orientation and scale of previous related images. Image registration requires intensive computational effort not only because of its computational complexity, but also due to the continuous increase in image resolution and spectral bands. Thus, high-performance computing techniques for image registration are critically needed. Very few works have addressed image registration on contemporary high-performance computing systems. Furthermore, issues of load balancing, scalability, and formal analysis of algorithmic efficiency were seldom considered. This dissertation introduces high-performance automatic image registration (HAIR) algorithms. High performance is achieved by: (1) reduction in search data, (2) reduction in search space, and (3) parallel processing. Reduction in search data is achieved by performing registration using only subimages. A new metric called registrability is used to select those subimages such that accuracy is maintained. In addition, a histogram comparison is used to discard anomalous subimages, such as those with clouds. Further data reduction is obtained using an iterative refinement search (IRA), which exploits the wavelet multi-resolution representation. This technique starts searching images with lower resolution first, then refining the results using higher resolution images to use the least possible data points in the overall registration task. Reduction of search space is achieved through two methods. First, iterative refinement reduces dramatically the number of solutions examined. In addition, genetic algorithms were also used to further expedite the search. Parallel processing techniques have been utilized to provide coarse-grain load-balanced parallel algorithms based on iterative refinement as well as genetic algorithms. Two hybrid algorithms have been

  11. A contour-based approach to multisensor image registration.

    PubMed

    Li, H; Manjunath, B S; Mitra, S K

    1995-01-01

    Image registration is concerned with the establishment of correspondence between images of the same scene. One challenging problem in this area is the registration of multispectral/multisensor images. In general, such images have different gray level characteristics, and simple techniques such as those based on area correlations cannot be applied directly. On the other hand, contours representing region boundaries are preserved in most cases. The authors present two contour-based methods which use region boundaries and other strong edges as matching primitives. The first contour matching algorithm is based on the chain-code correlation and other shape similarity criteria such as invariant moments. Closed contours and the salient segments along the open contours are matched separately. This method works well for image pairs in which the contour information is well preserved, such as the optical images from Landsat and Spot satellites. For the registration of the optical images with synthetic aperture radar (SAR) images, the authors propose an elastic contour matching scheme based on the active contour model. Using the contours from the optical image as the initial condition, accurate contour locations in the SAR image are obtained by applying the active contour model. Both contour matching methods are automatic and computationally quite efficient. Experimental results with various kinds of image data have verified the robustness of the algorithms, which have outperformed manual registration in terms of root mean square error at the control points. PMID:18289982

  12. Automated Image Registration Using Geometrically Invariant Parameter Space Clustering (GIPSC)

    SciTech Connect

    Seedahmed, Gamal H.; Martucci, Louis M.

    2002-09-01

    Accurate, robust, and automatic image registration is a critical task in many typical applications which employ multi-sensor and/or multi-date imagery information. In this paper we present a new approach to automatic image registration, which obviates the need for feature matching and solves for the registration parameters in a Hough-like approach. The basic idea underpinning the GIPSC methodology is to pair each data element belonging to two overlapping images with all other data in each image, through a mathematical transformation. The results of pairing are encoded and exploited in histogram-like arrays as clusters of votes. Geometrically invariant features are adopted in this approach to reduce the computational complexity generated by the high dimensionality of the mathematical transformation. In this way, the problem of image registration is characterized, not by spatial or radiometric properties, but by the mathematical transformation that describes the geometrical relationship between two or more images. While this approach does not require feature matching, it does permit recovery of matched features (e.g., points) as a useful by-product. The developed methodology incorporates uncertainty modeling using a least squares solution. Successful and promising experimental results of multi-date automatic image registration are reported in this paper.

  13. Registration of challenging pre-clinical brain images

    PubMed Central

    Crum, William R.; Modo, Michel; Vernon, Anthony C.; Barker, Gareth J.; Williams, Steven C.R.

    2013-01-01

    The size and complexity of brain imaging studies in pre-clinical populations are increasing, and automated image analysis pipelines are urgently required. Pre-clinical populations can be subjected to controlled interventions (e.g., targeted lesions), which significantly change the appearance of the brain obtained by imaging. Existing systems for registration (the systematic alignment of scans into a consistent anatomical coordinate system), which assume image similarity to a reference scan, may fail when applied to these images. However, affine registration is a particularly vital pre-processing step for subsequent image analysis which is assumed to be an effective procedure in recent literature describing sophisticated techniques such as manifold learning. Therefore, in this paper, we present an affine registration solution that uses a graphical model of a population to decompose difficult pairwise registrations into a composition of steps using other members of the population. We developed this methodology in the context of a pre-clinical model of stroke in which large, variable hyper-intense lesions significantly impact registration performance. We tested this technique systematically in a simulated human population of brain tumour images before applying it to pre-clinical models of Parkinson's disease and stroke. PMID:23558335

  14. Evaluating the utility of 3D TRUS image information in guiding intra-procedure registration for motion compensation

    NASA Astrophysics Data System (ADS)

    De Silva, Tharindu; Cool, Derek W.; Romagnoli, Cesare; Fenster, Aaron; Ward, Aaron D.

    2014-03-01

    In targeted 3D transrectal ultrasound (TRUS)-guided biopsy, patient and prostate movement during the procedure can cause target misalignments that hinder accurate sampling of pre-planned suspicious tissue locations. Multiple solutions have been proposed for motion compensation via registration of intra-procedural TRUS images to a baseline 3D TRUS image acquired at the beginning of the biopsy procedure. While 2D TRUS images are widely used for intra-procedural guidance, some solutions utilize richer intra-procedural images such as bi- or multi-planar TRUS or 3D TRUS, acquired by specialized probes. In this work, we measured the impact of such richer intra-procedural imaging on motion compensation accuracy, to evaluate the tradeoff between cost and complexity of intra-procedural imaging versus improved motion compensation. We acquired baseline and intra-procedural 3D TRUS images from 29 patients at standard sextant-template biopsy locations. We used the planes extracted from the 3D intra-procedural scans to simulate 2D and 3D information available in different clinically relevant scenarios for registration. The registration accuracy was evaluated by calculating the target registration error (TRE) using manually identified homologous fiducial markers (micro-calcifications). Our results indicate that TRE improves gradually when the number of intra-procedural imaging planes used in registration is increased. Full 3D TRUS information helps the registration algorithm to robustly converge to more accurate solutions. These results can also inform the design of a fail-safe workflow during motion compensation in a system using a tracked 2D TRUS probe, by prescribing rotational acquisitions that can be performed quickly and easily by the physician immediately prior to needle targeting.
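
    Target registration error (TRE) as used above is simply the post-registration distance between homologous fiducials that were not used to drive the registration. A minimal sketch, assuming the motion-compensation result is available as a rigid transform (R, t); the argument names are hypothetical.

```python
import numpy as np

def target_registration_error(fixed_fiducials, moving_fiducials, R, t):
    """Per-fiducial TRE: distances between fixed-image fiducials and the registered
    positions of the corresponding moving-image fiducials (both (N, 3) arrays),
    for a rigid motion-compensation transform (R, t)."""
    mapped = moving_fiducials @ R.T + t
    return np.linalg.norm(mapped - fixed_fiducials, axis=1)
```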

  15. Semi-automatic elastic registration on thyroid gland ultrasonic image

    NASA Astrophysics Data System (ADS)

    Xu, Xia; Zhong, Yue; Luo, Yan; Li, Deyu; Lin, Jiangli; Wang, Tianfu

    2007-12-01

    Knowledge of in vivo thyroid volume has both diagnostic and therapeutic importance and could lead to a more precise quantification of the absolute activity contained in the thyroid gland. However, the shape of the thyroid gland is irregular, and its volume is difficult to calculate. For precise estimation of thyroid volume by ultrasound imaging, this paper presents a novel semiautomatic minutiae-matching method for thyroid gland ultrasound images by means of a thin-plate spline model. Registration consists of four basic steps: feature detection, feature matching, mapping function design, and image transformation and resampling. Due to the connectivity of the thyroid gland boundary, we choose an active contour model as the feature detector and radial lines from centroid points for feature matching. The proposed approach has been used to register thyroid gland ultrasound images. Registration results for the thyroid gland ultrasound images of 18 healthy adults show that this method requires less time and effort and is more objective than algorithms in which landmarks are selected manually.

  16. HipMatch: an object-oriented cross-platform program for accurate determination of cup orientation using 2D-3D registration of single standard X-ray radiograph and a CT volume.

    PubMed

    Zheng, Guoyan; Zhang, Xuan; Steppacher, Simon D; Murphy, Stephen B; Siebenrock, Klaus A; Tannast, Moritz

    2009-09-01

    The widely used procedure of evaluating cup orientation following total hip arthroplasty using a single standard anteroposterior (AP) radiograph is known to be inaccurate, largely due to the wide variability in individual pelvic orientation relative to the X-ray plate. 2D-3D image registration methods have been introduced for an accurate determination of the post-operative cup alignment with respect to an anatomical reference extracted from the CT data. Although encouraging results have been reported, their extensive usage in clinical routine is still limited. This may be explained by their requirement of a CAD model of the prosthesis, which is often difficult to obtain from the manufacturer due to proprietary issues, and by their requirement of either multiple radiographs or a radiograph-specific calibration, neither of which is available for most retrospective studies. To address these issues, we developed and validated an object-oriented cross-platform program called "HipMatch" in which a hybrid 2D-3D registration scheme, combining an iterative landmark-to-ray registration with a 2D-3D intensity-based registration, was implemented to estimate a rigid transformation between a pre-operative CT volume and the post-operative X-ray radiograph for a precise estimation of cup alignment. No CAD model of the prosthesis is required. Quantitative and qualitative results evaluated on cadaveric and clinical datasets are given, which indicate the robustness and the accuracy of the program. HipMatch is written in the object-oriented programming language C++ using the cross-platform software Qt (TrollTech, Oslo, Norway), VTK, and Coin3D and is portable to any platform. PMID:19328585

  17. 3D non-rigid surface-based MR-TRUS registration for image-guided prostate biopsy

    NASA Astrophysics Data System (ADS)

    Sun, Yue; Qiu, Wu; Romagnoli, Cesare; Fenster, Aaron

    2014-03-01

    Two dimensional (2D) transrectal ultrasound (TRUS) guided prostate biopsy is the standard approach for definitive diagnosis of prostate cancer (PCa). However, due to the lack of image contrast of prostate tumors needed to clearly visualize early-stage PCa, prostate biopsy often results in false negatives, requiring repeat biopsies. Magnetic Resonance Imaging (MRI) has been considered to be a promising imaging modality for noninvasive identification of PCa, since it can provide a high sensitivity and specificity for the detection of early stage PCa. Our main objective is to develop and validate a registration method of 3D MR-TRUS images, allowing generation of volumetric 3D maps of targets identified in 3D MR images to be biopsied using 3D TRUS images. Our registration method first makes use of an initial rigid registration of 3D MR images to 3D TRUS images using 6 manually placed approximately corresponding landmarks in each image. Following the manual initialization, two prostate surfaces are segmented from 3D MR and TRUS images and then non-rigidly registered using a thin-plate spline (TPS) algorithm. The registration accuracy was evaluated using 4 patient images by measuring target registration error (TRE) of manually identified corresponding intrinsic fiducials (calcifications and/or cysts) in the prostates. Experimental results show that the proposed method yielded an overall mean TRE of 2.05 mm, which is favorably comparable to a clinical requirement for an error of less than 2.5 mm.
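
    The abstract names a thin-plate spline (TPS) for the surface-based non-rigid step without giving the formulation. The sketch below shows a standard 2D TPS fit and warp (kernel U(r) = r^2 log r); the 3D case used for the prostate surfaces differs mainly in the radial kernel and is omitted here for brevity. This is a generic illustration, not the authors' implementation.

```python
import numpy as np

def tps_kernel(r):
    """2D thin-plate spline radial basis U(r) = r^2 * log(r), with U(0) = 0."""
    out = np.zeros_like(r)
    mask = r > 0
    out[mask] = r[mask] ** 2 * np.log(r[mask])
    return out

def tps_fit(src, dst):
    """Fit a 2D TPS mapping src control points (N, 2) exactly onto dst (N, 2)."""
    n = len(src)
    K = tps_kernel(np.linalg.norm(src[:, None, :] - src[None, :, :], axis=-1))
    P = np.hstack([np.ones((n, 1)), src])
    L = np.zeros((n + 3, n + 3))
    L[:n, :n] = K
    L[:n, n:] = P
    L[n:, :n] = P.T
    rhs = np.zeros((n + 3, 2))
    rhs[:n] = dst
    return np.linalg.solve(L, rhs)          # nonlinear weights plus affine part

def tps_apply(params, src, pts):
    """Warp arbitrary points (M, 2) with a TPS fitted on src control points."""
    n = len(src)
    U = tps_kernel(np.linalg.norm(pts[:, None, :] - src[None, :, :], axis=-1))
    P = np.hstack([np.ones((len(pts), 1)), pts])
    return U @ params[:n] + P @ params[n:]
```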

  18. Registration and 3D visualization of large microscopy images

    NASA Astrophysics Data System (ADS)

    Mosaliganti, Kishore; Pan, Tony; Sharp, Richard; Ridgway, Randall; Iyengar, Srivathsan; Gulacy, Alexandra; Wenzel, Pamela; de Bruin, Alain; Machiraju, Raghu; Huang, Kun; Leone, Gustavo; Saltz, Joel

    2006-03-01

    Inactivation of the retinoblastoma gene in mouse embryos causes tissue infiltrations into critical sections of the placenta, which has been shown to affect fetal survivability. Our collaborators in cancer genetics are extremely interested in examining the three-dimensional nature of these infiltrations given a stack of two-dimensional light microscopy images. Three sets of wildtype and mutant placentas were sectioned serially and digitized using a commercial light microscopy scanner. Each individual placenta dataset consisted of approximately 1000 images totaling 700 GB in size, which were registered into a volumetric dataset using the National Library of Medicine's (NIH/NLM) Insight Segmentation and Registration Toolkit (ITK). This paper describes our method for image registration to aid in volume visualization of tissue-level intermixing for both wildtype and Rb- specimens. The registration process faces many challenges arising from the large image sizes, damage during sectioning, staining gradients both within and across sections, and background noise. These issues limit the direct application of standard registration techniques due to frequent convergence to local solutions. In this work, we develop a mixture of automated and semi-automated enhancements with ground-truth validation for the mutual information-based registration algorithm. Our final volume renderings clearly show tissue intermixing differences between wildtype and Rb- specimens which are not obvious prior to registration.

  19. Robust methods for automatic image-to-world registration in cone-beam CT interventional guidance

    PubMed Central

    Dang, H.; Otake, Y.; Schafer, S.; Stayman, J. W.; Kleinszig, G.; Siewerdsen, J. H.

    2012-01-01

    Purpose: Real-time surgical navigation relies on accurate image-to-world registration to align the coordinate systems of the image and patient. Conventional manual registration can present a workflow bottleneck and is prone to manual error and intraoperator variability. This work reports alternative means of automatic image-to-world registration, each method involving an automatic registration marker (ARM) used in conjunction with C-arm cone-beam CT (CBCT). The first involves a Known-Model registration method in which the ARM is a predefined tool, and the second is a Free-Form method in which the ARM is freely configurable. Methods: Studies were performed using a prototype C-arm for CBCT and a surgical tracking system. A simple ARM was designed with markers comprising a tungsten sphere within infrared reflectors to permit detection of markers in both x-ray projections and by an infrared tracker. The Known-Model method exercised a predefined specification of the ARM in combination with 3D-2D registration to estimate the transformation that yields the optimal match between forward projection of the ARM and the measured projection images. The Free-Form method localizes markers individually in projection data by a robust Hough transform approach extended from previous work, backprojected to 3D image coordinates based on C-arm geometric calibration. Image-domain point sets were transformed to world coordinates by rigid-body point-based registration. The robustness and registration accuracy of each method was tested in comparison to manual registration across a range of body sites (head, thorax, and abdomen) of interest in CBCT-guided surgery, including cases with interventional tools in the radiographic scene. Results: The automatic methods exhibited similar target registration error (TRE) and were comparable or superior to manual registration for placement of the ARM within ∼200 mm of C-arm isocenter. Marker localization in projection data was robust across all

  20. Robust methods for automatic image-to-world registration in cone-beam CT interventional guidance

    SciTech Connect

    Dang, H.; Otake, Y.; Schafer, S.; Stayman, J. W.; Kleinszig, G.; Siewerdsen, J. H.

    2012-10-15

    Purpose: Real-time surgical navigation relies on accurate image-to-world registration to align the coordinate systems of the image and patient. Conventional manual registration can present a workflow bottleneck and is prone to manual error and intraoperator variability. This work reports alternative means of automatic image-to-world registration, each method involving an automatic registration marker (ARM) used in conjunction with C-arm cone-beam CT (CBCT). The first involves a Known-Model registration method in which the ARM is a predefined tool, and the second is a Free-Form method in which the ARM is freely configurable. Methods: Studies were performed using a prototype C-arm for CBCT and a surgical tracking system. A simple ARM was designed with markers comprising a tungsten sphere within infrared reflectors to permit detection of markers in both x-ray projections and by an infrared tracker. The Known-Model method exercised a predefined specification of the ARM in combination with 3D-2D registration to estimate the transformation that yields the optimal match between forward projection of the ARM and the measured projection images. The Free-Form method localizes markers individually in projection data by a robust Hough transform approach extended from previous work, backprojected to 3D image coordinates based on C-arm geometric calibration. Image-domain point sets were transformed to world coordinates by rigid-body point-based registration. The robustness and registration accuracy of each method was tested in comparison to manual registration across a range of body sites (head, thorax, and abdomen) of interest in CBCT-guided surgery, including cases with interventional tools in the radiographic scene. Results: The automatic methods exhibited similar target registration error (TRE) and were comparable or superior to manual registration for placement of the ARM within ∼200 mm of C-arm isocenter. Marker localization in projection data was robust across all

  1. Diffeomorphic Registration of Images with Variable Contrast Enhancement

    PubMed Central

    Janssens, Guillaume; Jacques, Laurent; Orban de Xivry, Jonathan; Geets, Xavier; Macq, Benoit

    2011-01-01

    Nonrigid image registration is widely used to estimate tissue deformations in highly deformable anatomies. Among the existing methods, nonparametric registration algorithms such as optical flow, or Demons, usually have the advantage of being fast and easy to use. Recently, a diffeomorphic version of the Demons algorithm was proposed. This provides the advantage of producing invertible displacement fields, which is a necessary condition for these to be physical. However, such methods are based on the matching of intensities and are not suitable for registering images with different contrast enhancement. In such cases, a registration method based on the local phase like the Morphons has to be used. In this paper, a diffeomorphic version of the Morphons registration method is proposed and compared to conventional Morphons, Demons, and diffeomorphic Demons. The method is validated in the context of radiotherapy for lung cancer patients on several 4D respiratory-correlated CT scans of the thorax with and without variable contrast enhancement. PMID:21197460
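
    The invertibility property mentioned above is usually obtained, as in diffeomorphic Demons, by exponentiating a (small) velocity or update field with scaling and squaring. A simplified 2D sketch of that ingredient, using linear interpolation for the field composition, is given below; it is not the Morphons code itself.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def compose(disp_a, disp_b):
    """Compose two 2D displacement fields: result(x) = b(x) + a(x + b(x)).
    disp_*: arrays of shape (2, H, W) holding the (dy, dx) components."""
    H, W = disp_a.shape[1:]
    grid = np.mgrid[0:H, 0:W].astype(float)
    coords = grid + disp_b
    warped_a = np.stack([
        map_coordinates(disp_a[c], coords, order=1, mode='nearest') for c in (0, 1)])
    return disp_b + warped_a

def exp_field(velocity, n_steps=6):
    """Scaling-and-squaring exponential of a stationary velocity field.
    Returns a displacement field that is (approximately) invertible."""
    disp = velocity / (2 ** n_steps)          # scale down so each step is small
    for _ in range(n_steps):
        disp = compose(disp, disp)            # square: exp(v) = exp(v/2) o exp(v/2)
    return disp
```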

  2. Imaging 2-D Structures With Receiver Functions Using Harmonic Stripping

    NASA Astrophysics Data System (ADS)

    Schulte-Pelkum, V.

    2010-12-01

    I present a novel technique to image dipping and anisotropic structures using receiver functions. Receiver functions isolate phase conversions from interfaces close to the seismic station. Standard analysis assumes a quasi-flat layered structure and dampens arrivals from dipping interfaces and anisotropic layers, with attempts to extract information on such structures relying on cumbersome and nonunique forward modeling. I use a simple relationship between the radial and transverse component receiver function to detect dipping and anisotropic layers and map their depth and orientation. For dipping interfaces, layers with horizontal or plunging axis anisotropy, and point scatterers, the following relationships hold: After subtracting the azimuthally invariant portion of the radial receiver functions, the remaining signal is an azimuthally shifted version of the transverse receiver functions. The strike of the dipping interface or anisotropy is given by the azimuth of polarity reversals, and the type of structure can be inferred from the amount of phase shift between the components. For a known structure type, the phase shift between the two components provides pseudoevents from back-azimuths with little seismicity. The technique allows structural mapping at depth akin to geological mapping of rock fabric and dipping layers at the surface. It reduces complex wavefield effects to two simple and geologically meaningful parameters, similar to shear wave splitting. I demonstrate the method on the Wind River Thrust as well as other structures within the Transportable Array footprint.

  3. Imaging Excited State Dynamics with 2d Electronic Spectroscopy

    NASA Astrophysics Data System (ADS)

    Engel, Gregory S.

    2012-06-01

    Excited states in the condensed phase have extremely high chemical potentials making them highly reactive and difficult to control. Yet in biology, excited state dynamics operate with exquisite precision driving solar light harvesting in photosynthetic complexes through excitonic transport and photochemistry through non-radiative relaxation to photochemical products. Optimized by evolution, these biological systems display manifestly quantum mechanical behaviors including coherent energy transfer, steering wavepacket trajectories through conical intersections and protection of long-lived quantum coherence. To image the underlying excited state dynamics, we have developed a new spectroscopic method allowing us to capture excitonic structure in real time. Through this method and other ultrafast multidimensional spectroscopies, we have captured coherent dynamics within photosynthetic antenna complexes. The data not only reveal how biological systems operate, but these same spectral signatures can be exploited to create new spectroscopic tools to elucidate the underlying Hamiltonian. New data on the role of the protein in photosynthetic systems indicate that the chromophores mix strongly with some bath modes within the system. The implications of this mixing for excitonic transport will be discussed along with prospects for transferring underlying design principles to synthetic systems.

  4. Geodesic active fields--a geometric framework for image registration.

    PubMed

    Zosso, Dominique; Bresson, Xavier; Thiran, Jean-Philippe

    2011-05-01

    In this paper we present a novel geometric framework called geodesic active fields for general image registration. In image registration, one looks for the underlying deformation field that best maps one image onto another. This is a classic ill-posed inverse problem, which is usually solved by adding a regularization term. Here, we propose a multiplicative coupling between the registration term and the regularization term, which turns out to be equivalent to embed the deformation field in a weighted minimal surface problem. Then, the deformation field is driven by a minimization flow toward a harmonic map corresponding to the solution of the registration problem. This proposed approach for registration shares close similarities with the well-known geodesic active contours model in image segmentation, where the segmentation term (the edge detector function) is coupled with the regularization term (the length functional) via multiplication as well. As a matter of fact, our proposed geometric model is actually the exact mathematical generalization to vector fields of the weighted length problem for curves and surfaces introduced by Caselles-Kimmel-Sapiro. The energy of the deformation field is measured with the Polyakov energy weighted by a suitable image distance, borrowed from standard registration models. We investigate three different weighting functions, the squared error and the approximated absolute error for monomodal images, and the local joint entropy for multimodal images. As compared to specialized state-of-the-art methods tailored for specific applications, our geometric framework involves important contributions. Firstly, our general formulation for registration works on any parametrizable, smooth and differentiable surface, including nonflat and multiscale images. In the latter case, multiscale images are registered at all scales simultaneously, and the relations between space and scale are intrinsically being accounted for. Second, this method is, to

  5. Biomechanical based image registration for head and neck radiation treatment

    NASA Astrophysics Data System (ADS)

    Al-Mayah, Adil; Moseley, Joanne; Hunter, Shannon; Velec, Mike; Chau, Lily; Breen, Stephen; Brock, Kristy

    2010-02-01

    Deformable image registration of four head and neck cancer patients was conducted using biomechanical based model. Patient specific 3D finite element models have been developed using CT and cone beam CT image data of the planning and a radiation treatment session. The model consists of seven vertebrae (C1 to C7), mandible, larynx, left and right parotid glands, tumor and body. Different combinations of boundary conditions are applied in the model in order to find the configuration with a minimum registration error. Each vertebra in the planning session is individually aligned with its correspondence in the treatment session. Rigid alignment is used for each individual vertebra and to the mandible since deformation is not expected in the bones. In addition, the effect of morphological differences in external body between the two image sessions is investigated. The accuracy of the registration is evaluated using the tumor, and left and right parotid glands by comparing the calculated Dice similarity index of these structures following deformation in relation to their true surface defined in the image of the second session. The registration improves when the vertebrae and mandible are aligned in the two sessions with the highest Dice index of 0.86+/-0.08, 0.84+/-0.11, and 0.89+/-0.04 for the tumor, left and right parotid glands, respectively. The accuracy of the center of mass location of tumor and parotid glands is also improved by deformable image registration where the error in the tumor and parotid glands decreases from 4.0+/-1.1, 3.4+/-1.5, and 3.8+/-0.9 mm using rigid registration to 2.3+/-1.0, 2.5+/-0.8 and 2.0+/-0.9 mm in the deformable image registration when alignment of vertebrae and mandible is conducted in addition to the surface projection of the body.
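
    As a reminder of the overlap metric used above, the Dice similarity index of two binary structures is twice their intersection divided by the sum of their sizes. A generic sketch on boolean volumes (not the authors' implementation):

      import numpy as np

      def dice_index(mask_a, mask_b):
          """Dice similarity coefficient between two boolean volumes of equal shape."""
          a = np.asarray(mask_a, dtype=bool)
          b = np.asarray(mask_b, dtype=bool)
          intersection = np.logical_and(a, b).sum()
          size = a.sum() + b.sum()
          return 2.0 * intersection / size if size > 0 else 1.0

      # Toy example: two overlapping spheres on a 3D grid
      z, y, x = np.mgrid[0:40, 0:40, 0:40]
      sphere1 = (x - 20) ** 2 + (y - 20) ** 2 + (z - 20) ** 2 < 10 ** 2
      sphere2 = (x - 23) ** 2 + (y - 20) ** 2 + (z - 20) ** 2 < 10 ** 2
      print("Dice:", dice_index(sphere1, sphere2))   # 1.0 would mean perfect overlap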

  6. Improved Image Registration by Sparse Patch-Based Deformation Estimation

    PubMed Central

    Kim, Minjeong; Wu, Guorong; Wang, Qian; Shen, Dinggang

    2014-01-01

    Despite intensive efforts for decades, deformable image registration is still a challenging problem due to the potentially large anatomical differences across individual images, which limits the registration performance. Fortunately, this issue could be alleviated if a good initial deformation can be provided for the two images under registration, which are often termed the moving subject and the fixed template, respectively. In this work, we present a novel patch-based initial deformation prediction framework for improving the performance of existing registration algorithms. Our main idea is to estimate the initial deformation between subject and template in a patch-wise fashion by using the sparse representation technique. We argue that two image patches should follow the same deformation towards the template image if their patch-wise appearance patterns are similar. To this end, our framework consists of two stages, i.e., the training stage and the application stage. In the training stage, we register all training images to the pre-selected template, such that the deformation of each training image with respect to the template is known. In the application stage, we apply the following four steps to efficiently calculate the initial deformation field for the new test subject: (1) We pick a small number of key points in the distinctive regions of the test subject; (2) For each key point, we extract a local patch and form a coupled appearance-deformation dictionary from training images where each dictionary atom consists of the image intensity patch as well as their respective local deformations; (3) A small set of training image patches in the coupled dictionary are selected to represent the image patch of each subject key point by sparse representation. Then, we can predict the initial deformation for each subject key point by propagating the pre-estimated deformations on the selected training patches with the same sparse representation coefficients. (4) We

  7. Multimodal registration of retinal images using self organizing maps.

    PubMed

    Matsopoulos, George K; Asvestas, Pantelis A; Mouravliansky, Nikolaos A; Delibasis, Konstantinos K

    2004-12-01

    In this paper, an automatic method for registering multimodal retinal images is presented. The method consists of three steps: the vessel centerline detection and extraction of bifurcation points only in the reference image, the automatic correspondence of bifurcation points in the two images using a novel implementation of the self organizing maps and the extraction of the parameters of the affine transform using the previously obtained correspondences. The proposed registration algorithm was tested on 24 multimodal retinal pairs and the obtained results show an advantageous performance in terms of accuracy with respect to the manual registration. PMID:15575412
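
    Once the self-organizing map has produced bifurcation-point correspondences, the affine transform in the final step can be recovered by ordinary least squares. A minimal sketch of that final step only (the SOM matching itself is not shown; names are illustrative):

      import numpy as np

      def fit_affine_2d(src, dst):
          """Least-squares 2D affine transform mapping src points onto dst points.

          src, dst: (N, 2) arrays of corresponding points, N >= 3.
          Returns a 2x3 matrix A such that dst ~ A @ [x, y, 1]^T.
          """
          n = src.shape[0]
          X = np.hstack([src, np.ones((n, 1))])          # homogeneous source coordinates
          A, *_ = np.linalg.lstsq(X, dst, rcond=None)    # solves X @ A = dst in the least-squares sense
          return A.T                                     # 2x3 affine matrix

      # Example: recover a known rotation + scale + shift from four correspondences
      src = np.array([[10.0, 10.0], [100.0, 12.0], [60.0, 80.0], [20.0, 70.0]])
      true_A = np.array([[0.95, -0.20,  5.0],
                         [0.20,  0.95, -3.0]])
      dst = np.hstack([src, np.ones((4, 1))]) @ true_A.T
      print(np.round(fit_affine_2d(src, dst), 3))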

  8. DIRBoost-an algorithm for boosting deformable image registration: application to lung CT intra-subject registration.

    PubMed

    Muenzing, Sascha E A; van Ginneken, Bram; Viergever, Max A; Pluim, Josien P W

    2014-04-01

    We introduce a boosting algorithm to improve on existing methods for deformable image registration (DIR). The proposed DIRBoost algorithm is inspired by the theory on hypothesis boosting, well known in the field of machine learning. DIRBoost utilizes a method for automatic registration error detection to obtain estimates of local registration quality. All areas detected as erroneously registered are subjected to boosting, i.e. undergo iterative registrations by employing boosting masks on both the fixed and moving image. We validated the DIRBoost algorithm on three different DIR methods (ANTS gSyn, NiftyReg, and DROP) on three independent reference datasets of pulmonary image scan pairs. DIRBoost reduced registration errors significantly and consistently on all reference datasets for each DIR algorithm, yielding an improvement of the registration accuracy by 5-34% depending on the dataset and the registration algorithm employed. PMID:24556079

  9. Registration of multitemporal aerial optical images using line features

    NASA Astrophysics Data System (ADS)

    Zhao, Chenyang; Goshtasby, A. Ardeshir

    2016-07-01

    Registration of multitemporal images is generally considered difficult because scene changes can occur between the times the images are obtained. Since the changes are mostly radiometric in nature, features are needed that are insensitive to radiometric differences between the images. Lines are geometric features that represent straight edges of rigid man-made structures. Because such structures rarely change over time, lines represent stable geometric features that can be used to register multitemporal remote sensing images. An algorithm to establish correspondence between lines in two images of a planar scene is introduced and formulas to relate the parameters of a homography transformation to the parameters of corresponding lines in images are derived. Results of the proposed image registration on various multitemporal images are presented and discussed.

  10. PCA-based groupwise image registration for quantitative MRI.

    PubMed

    Huizinga, W; Poot, D H J; Guyader, J-M; Klaassen, R; Coolen, B F; van Kranenburg, M; van Geuns, R J M; Uitterdijk, A; Polfliet, M; Vandemeulebroucke, J; Leemans, A; Niessen, W J; Klein, S

    2016-04-01

    Quantitative magnetic resonance imaging (qMRI) is a technique for estimating quantitative tissue properties, such as the T1 and T2 relaxation times, apparent diffusion coefficient (ADC), and various perfusion measures. This estimation is achieved by acquiring multiple images with different acquisition parameters (or at multiple time points after injection of a contrast agent) and by fitting a qMRI signal model to the image intensities. Image registration is often necessary to compensate for misalignments due to subject motion and/or geometric distortions caused by the acquisition. However, large differences in image appearance make accurate image registration challenging. In this work, we propose a groupwise image registration method for compensating misalignment in qMRI. The groupwise formulation of the method eliminates the requirement of choosing a reference image, thus avoiding a registration bias. The method minimizes a cost function that is based on principal component analysis (PCA), exploiting the fact that intensity changes in qMRI can be described by a low-dimensional signal model, but not requiring knowledge on the specific acquisition model. The method was evaluated on 4D CT data of the lungs, and both real and synthetic images of five different qMRI applications: T1 mapping in a porcine heart, combined T1 and T2 mapping in carotid arteries, ADC mapping in the abdomen, diffusion tensor mapping in the brain, and dynamic contrast-enhanced mapping in the abdomen. Each application is based on a different acquisition model. The method is compared to a mutual information-based pairwise registration method and four other state-of-the-art groupwise registration methods. Registration accuracy is evaluated in terms of the precision of the estimated qMRI parameters, overlap of segmented structures, distance between corresponding landmarks, and smoothness of the deformation. In all qMRI applications the proposed method performed better than or equally well as

  11. Automatic image registration performance for two different CBCT systems; variation with imaging dose

    NASA Astrophysics Data System (ADS)

    Barber, J.; Sykes, J. R.; Holloway, L.; Thwaites, D. I.

    2014-03-01

    The performance of an automatic image registration algorithm was compared on image sets collected with two commercial CBCT systems, and the relationship with imaging dose was explored. CBCT images of a CIRS Virtually Human Male Pelvis phantom (VHMP) were collected on Varian TrueBeam/OBI and Elekta Synergy/XVI linear accelerators, across a range of mAs settings. Each CBCT image was registered 100 times, with random initial offsets introduced. Image registration was performed using the grey value correlation ratio algorithm in the Elekta XVI software, to a mask of the prostate volume with 5 mm expansion. Residual registration errors were calculated after correcting for the initial introduced phantom set-up error. Registration performance with the OBI images was similar to that of XVI. There was a clear dependence on imaging dose for the XVI images with residual errors increasing below 4 mGy. It was not possible to acquire images with doses lower than ~5 mGy with the OBI system and no evidence of reduced performance was observed at this dose. Registration failures (maximum target registration error > 3.6 mm on the surface of a 30 mm sphere) occurred in 5% to 9% of registrations except for the lowest dose XVI scan (31%). The uncertainty in automatic image registration with both OBI and XVI images was found to be adequate for clinical use within a normal range of acquisition settings.

  12. Co-registration of multispectral images for enhanced target recognition

    NASA Astrophysics Data System (ADS)

    Khaghani, Farbod; Nelson, Richard J.

    2007-04-01

    Unlike straightforward registration problems encountered in broadband imaging, spectral imaging in fielded instruments often suffers from a combination of imaging aberrations that make spatial co-registration of the images a challenging problem. Depending on the sensor architecture, typical problems to be mitigated include differing focus, magnification, and warping between the images in the various spectral bands due to optics differences; scene shift between spectral images due to parallax; and scene shift due to temporal misregistration between the spectral images. However, typical spectral images sometimes contain scene commonalities that can be exploited in traditional ways. As a first step toward automatic spatial co-registration for spectral images, we exploit manually-selected scene commonalities to produce transformation parameters in a four-channel spectral imager. The four bands consist of two mid-wave infrared channels and two short-wave infrared channels. Each of the four bands is blurred differently due to differing focal lengths of the imaging optics, magnified differently, warped differently, and translated differently. Centroid location techniques are used on the scene commonalities in order to generate sub-pixel values for the fiducial markers used in the transformation polygons, and conclusions are drawn about the effectiveness of such techniques in spectral imaging applications.

  13. Nonlinear spatial warping for between-subjects pedobarographic image registration.

    PubMed

    Pataky, T C; Keijsers, N L W; Goulermas, J Y; Crompton, R H

    2009-04-01

    Foot size and shape vary between individuals and the foot adopts arbitrary stance phase postures, so traditional pedobarographic analyses regionalize foot pressure images to afford homologous data comparison. An alternative approach that does not require explicit anatomical labelling and that is used widely in other functional imaging domains is to register images such that homologous structures optimally overlap and then to compare images directly at the pixel level. Image registration represents the preprocessing cornerstone of such pixel-level techniques, so its performance warrants independent attention. The purpose of this study was to evaluate the performance of four between-subjects warping registration algorithms including: Principal Axes (PA), four-parameter Optimal Scaling (OS4), eight-parameter Optimal Projective (OP8), and locally affine Nonlinear (NL). Fifteen subjects performed 10 trials of self-paced walking, and their peak pressure images were registered within-subjects using an optimal rigid body transformation. The resulting mean images were then registered between-subjects using all four methods in all 210 (15x14) subject combinations. All registration methods improved alignment, and each method performed qualitatively well for certain image pairs. However, only the NL consistently performed satisfactorily because of disproportionate anatomical variation in toe lengths and rearfoot/forefoot width, for example. Using three independent image (dis)similarity metrics, MANOVA confirmed that the NL method yielded superior registration performance (p<0.001). These data demonstrate that nonlinear spatial warping is necessary for robust between-subject pedobarographic image registration and, by extension, robust homologous data comparison at the pixel level. PMID:19112023
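
    Of the four methods compared, Principal Axes (PA) is the simplest: each pressure image is shifted to its intensity-weighted centroid and rotated so that its principal axes coincide. A rough sketch of how the PA parameters of a single 2D pressure image can be computed (illustrative only; the sign of the recovered axis is inherently arbitrary):

      import numpy as np

      def principal_axes(image):
          """Centroid and orientation (radians) of a 2D intensity image.

          The orientation is the angle of the dominant eigenvector of the
          intensity-weighted covariance of the pixel coordinates.
          """
          img = np.asarray(image, dtype=float)
          ys, xs = np.indices(img.shape)
          w = img / img.sum()
          cx, cy = (xs * w).sum(), (ys * w).sum()
          cov = np.array([
              [((xs - cx) ** 2 * w).sum(),            ((xs - cx) * (ys - cy) * w).sum()],
              [((xs - cx) * (ys - cy) * w).sum(),     ((ys - cy) ** 2 * w).sum()],
          ])
          evals, evecs = np.linalg.eigh(cov)
          major = evecs[:, np.argmax(evals)]           # dominant axis
          return (cx, cy), np.arctan2(major[1], major[0])

      # Example: an elongated blob; registering image B to image A then amounts to
      # translating by the centroid difference and rotating by the angle difference.
      img = np.zeros((50, 50)); img[20:30, 10:40] = 1.0
      centroid, angle = principal_axes(img)
      print("centroid:", centroid, "orientation (deg):", np.degrees(angle))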

  14. Intraoperative ultrasound to stereocamera registration using interventional photoacoustic imaging

    NASA Astrophysics Data System (ADS)

    Vyas, Saurabh; Su, Steven; Kim, Robert; Kuo, Nathanael; Taylor, Russell H.; Kang, Jin U.; Boctor, Emad M.

    2012-02-01

    There are approximately 6000 hospitals in the United States, of which approximately 5400 employ minimally invasive surgical robots for a variety of procedures. Furthermore, 95% of these robots require extensive registration before they can be fitted into the operating room. These "registrations" are performed by surgical navigation systems, which allow the surgical tools, the robot and the surgeon to be synchronized together-hence operating in concert. The most common surgical navigation modalities include: electromagnetic (EM) tracking and optical tracking. Currently, these navigation systems are large, intrusive, come with a steep learning curve, require sacrifices on the part of the attending medical staff, and are quite expensive (since they require several components). Recently, photoacoustic (PA) imaging has become a practical and promising new medical imaging technology. PA imaging only requires the minimal equipment standard with most modern ultrasound (US) imaging systems as well as a common laser source. In this paper, we demonstrate that given a PA imaging system, as well as a stereocamera (SC), the registration between the US image of a particular anatomy and the SC image of the same anatomy can be obtained with reliable accuracy. In our experiments, we collected data for N = 80 trials of sample 3D US and SC coordinates. We then computed the registration between the SC and the US coordinates. Upon validation, the mean error and standard deviation between the predicted sample coordinates and the corresponding ground truth coordinates were found to be 3.33 mm and 2.20 mm respectively.

  15. Multi-Image Registration for an Enhanced Vision System

    NASA Technical Reports Server (NTRS)

    Hines, Glenn; Rahman, Zia-Ur; Jobson, Daniel; Woodell, Glenn

    2002-01-01

    An Enhanced Vision System (EVS) utilizing multi-sensor image fusion is currently under development at the NASA Langley Research Center. The EVS will provide enhanced images of the flight environment to assist pilots in poor visibility conditions. Multi-spectral images obtained from a short wave infrared (SWIR), a long wave infrared (LWIR), and a color visible band CCD camera, are enhanced and fused using the Retinex algorithm. The images from the different sensors do not have a uniform data structure: the three sensors not only operate at different wavelengths, but they also have different spatial resolutions, optical fields of view (FOV), and bore-sighting inaccuracies. Thus, in order to perform image fusion, the images must first be co-registered. Image registration is the task of aligning images taken at different times, from different sensors, or from different viewpoints, so that all corresponding points in the images match. In this paper, we present two methods for registering multiple multi-spectral images. The first method performs registration using sensor specifications to match the FOVs and resolutions directly through image resampling. In the second method, registration is obtained through geometric correction based on a spatial transformation defined by user selected control points and regression analysis.

  16. 3-D Reconstruction From 2-D Radiographic Images and Its Application to Clinical Veterinary Medicine

    NASA Astrophysics Data System (ADS)

    Hamamoto, Kazuhiko; Sato, Motoyoshi

    3D imaging techniques are important and indispensable in diagnosis. The main stream of such techniques is one in which a 3D image is reconstructed from a set of slice images, as in X-ray CT and MRI. However, these systems require large space and high costs. On the other hand, a low-cost and small-size 3D imaging system is needed in clinical veterinary medicine, for example for diagnosis in an X-ray car or in a pasture area. We propose a novel 3D imaging technique using 2-D X-ray radiographic images. This system can be realized at lower cost than X-ray CT and enables 3D images to be obtained in an X-ray car or with portable X-ray equipment. In this paper, a 3D visualization technique based on 2-D radiographic images is proposed and several reconstructions are shown. These reconstructions are evaluated by veterinarians.

  17. Weighted medical image registration with automatic mask generation

    NASA Astrophysics Data System (ADS)

    Schumacher, Hanno; Franz, Astrid; Fischer, Bernd

    2006-03-01

    Registration of images is a crucial part of many medical imaging tasks. The problem is to find a transformation which aligns two given images. The resulting displacement fields may be for example described as a linear combination of pre-selected basis functions (parametric approach), or, as in our case, they may be computed as the solution of an associated partial differential equation (non-parametric approach). Here, the underlying functional consists of a smoothness term ensuring that the transformation is anatomically meaningful and a distance term describing the similarity between the two images. To be successful, the registration scheme has to be tuned for the problem under consideration. One way of incorporating user knowledge is the employment of weighting masks into the distance measure, and thereby enhancing or hiding dedicated image parts. In general, these masks are based on a given segmentation of both images. We present a method which generates a weighting mask for the second image, given the mask for the first image. The scheme is based on active contours and makes use of a gradient vector flow method. As an example application, we consider the registration of abdominal computer tomography (CT) images used for radiation therapy. The reference image is acquired well ahead of time and is used for setting up the radiation plan. The second image is taken just before the treatment and its processing is time-critical. We show that the proposed automatic mask generation scheme yields similar results as compared to the approach based on a pre-segmentation of both images. Hence for time-critical applications, as intra-surgery registration, we are able to significantly speed up the computation by avoiding a pre-segmentation of the second image.
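
    One way the weighting mask can enter the registration is by multiplying the pixelwise distance term, so that masked-out regions do not drive the transformation. A hedged sketch of such a masked sum-of-squared-differences (the paper's full functional also contains a smoothness term and is solved via a partial differential equation, which is not shown here):

      import numpy as np

      def masked_ssd(reference, template, mask):
          """Sum of squared differences weighted by a mask with values in [0, 1].

          reference, template: 2D arrays of equal shape (template already warped).
          mask: weighting image; 1 emphasizes a region, 0 hides it.
          """
          diff = np.asarray(reference, float) - np.asarray(template, float)
          w = np.asarray(mask, float)
          return 0.5 * np.sum(w * diff ** 2)

      # Example: hide the right half of the image from the distance measure
      ref = np.random.default_rng(1).normal(size=(64, 64))
      tmp = ref + 0.5                                  # globally shifted intensities
      mask = np.ones_like(ref); mask[:, 32:] = 0.0
      print(masked_ssd(ref, tmp, np.ones_like(ref)), masked_ssd(ref, tmp, mask))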

  18. Scope and applications of translation invariant wavelets to image registration

    NASA Technical Reports Server (NTRS)

    Chettri, Samir; LeMoigne, Jacqueline; Campbell, William

    1997-01-01

    The first part of this article introduces the notion of translation invariance in wavelets and discusses several wavelets that have this property. The second part discusses the possible applications of such wavelets to image registration. In the case of registration of affinely transformed images, we would conclude that the notion of translation invariance is not really necessary. What is needed is affine invariance and one way to do this is via the method of moment invariants. Wavelets or, in general, pyramid processing can then be combined with the method of moment invariants to reduce the computational load.

  19. Towards local estimation of emphysema progression using image registration

    NASA Astrophysics Data System (ADS)

    Staring, M.; Bakker, M. E.; Shamonin, D. P.; Stolk, J.; Reiber, J. H. C.; Stoel, B. C.

    2009-02-01

    Progression measurement of emphysema is required to evaluate the health condition of a patient and the effect of drugs. To locally estimate progression we use image registration, which allows for volume correction using the determinant of the Jacobian of the transformation. We introduce an adaptation of the so-called sponge model that circumvents its constant-mass assumption. Preliminary results from CT scans of a lung phantom and from CT data sets of three patients suggest that image registration may be a suitable method to locally estimate emphysema progression.
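
    The volume correction mentioned above reads local volume change from the determinant of the Jacobian of the transformation T(x) = x + u(x): values below 1 indicate local contraction and values above 1 local expansion. A generic finite-difference sketch for a displacement field on a regular voxel grid (unit isotropic spacing assumed; not the authors' implementation):

      import numpy as np

      def jacobian_determinant(disp):
          """Determinant of the Jacobian of T(x) = x + u(x) on a regular grid.

          disp: displacement field of shape (3, Z, Y, X) in voxel units.
          Returns a (Z, Y, X) array; 1.0 means no local volume change.
          """
          grads = np.array([np.gradient(disp[i]) for i in range(3)])   # du_i/dx_j
          J = grads + np.eye(3)[:, :, None, None, None]                # dT_i/dx_j
          # move the two Jacobian axes to the end so np.linalg.det vectorizes per voxel
          return np.linalg.det(np.moveaxis(J, (0, 1), (-2, -1)))

      # Example: a uniform 10% stretch along the first axis gives det ~ 1.1 everywhere
      z = np.arange(16, dtype=float)
      u = np.zeros((3, 16, 16, 16))
      u[0] = 0.1 * z[:, None, None]
      print(jacobian_determinant(u).mean())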

  20. Analysis of deformable image registration accuracy using computational modeling.

    PubMed

    Zhong, Hualiang; Kim, Jinkoo; Chetty, Indrin J

    2010-03-01

    Computer aided modeling of anatomic deformation, allowing various techniques and protocols in radiation therapy to be systematically verified and studied, has become increasingly attractive. In this study the potential issues in deformable image registration (DIR) were analyzed based on two numerical phantoms: One, a synthesized, low intensity gradient prostate image, and the other a lung patient's CT image data set. Each phantom was modeled with region-specific material parameters with its deformation solved using a finite element method. The resultant displacements were used to construct a benchmark to quantify the displacement errors of the Demons and B-Spline-based registrations. The results show that the accuracy of these registration algorithms depends on the chosen parameters, the selection of which is closely associated with the intensity gradients of the underlying images. For the Demons algorithm, both single resolution (SR) and multiresolution (MR) registrations required approximately 300 iterations to reach an accuracy of 1.4 mm mean error in the lung patient's CT image (and 0.7 mm mean error averaged in the lung only). For the low gradient prostate phantom, these algorithms (both SR and MR) required at least 1600 iterations to reduce their mean errors to 2 mm. For the B-Spline algorithms, best performance (mean errors of 1.9 mm for SR and 1.6 mm for MR, respectively) on the low gradient prostate was achieved using five grid nodes in each direction. Adding more grid nodes resulted in larger errors. For the lung patient's CT data set, the B-Spline registrations required ten grid nodes in each direction for highest accuracy (1.4 mm for SR and 1.5 mm for MR). The numbers of iterations or grid nodes required for optimal registrations depended on the intensity gradients of the underlying images. In summary, the performance of the Demons and B-Spline registrations have been quantitatively evaluated using numerical phantoms. The results show that parameter

  1. Elastic registration for auto-fluorescence image averaging.

    PubMed

    Kubecka, Libor; Jan, Jiri; Kolar, Radim; Jirik, Radovan

    2006-01-01

    The paper describes restitution of geometrical distortions and improvement of signal-to-noise ratio of auto-fluorescence retinal images, finally aimed at segmentation and area estimation of the lipofuscin spots as one of the features to be included in glaucoma diagnosis. The main problems - geometrical and illumination incompatibility of frames in the image sequence and a non-negligible "shear" distortion in the individual frames - have been solved by the presented registration procedure. The concept and some details of the MI-based regularized registration, together with evaluation of test results form the core of the contribution. PMID:17945684

  2. Analysis of deformable image registration accuracy using computational modeling

    SciTech Connect

    Zhong Hualiang; Kim, Jinkoo; Chetty, Indrin J.

    2010-03-15

    Computer aided modeling of anatomic deformation, allowing various techniques and protocols in radiation therapy to be systematically verified and studied, has become increasingly attractive. In this study the potential issues in deformable image registration (DIR) were analyzed based on two numerical phantoms: One, a synthesized, low intensity gradient prostate image, and the other a lung patient's CT image data set. Each phantom was modeled with region-specific material parameters with its deformation solved using a finite element method. The resultant displacements were used to construct a benchmark to quantify the displacement errors of the Demons and B-Spline-based registrations. The results show that the accuracy of these registration algorithms depends on the chosen parameters, the selection of which is closely associated with the intensity gradients of the underlying images. For the Demons algorithm, both single resolution (SR) and multiresolution (MR) registrations required approximately 300 iterations to reach an accuracy of 1.4 mm mean error in the lung patient's CT image (and 0.7 mm mean error averaged in the lung only). For the low gradient prostate phantom, these algorithms (both SR and MR) required at least 1600 iterations to reduce their mean errors to 2 mm. For the B-Spline algorithms, best performance (mean errors of 1.9 mm for SR and 1.6 mm for MR, respectively) on the low gradient prostate was achieved using five grid nodes in each direction. Adding more grid nodes resulted in larger errors. For the lung patient's CT data set, the B-Spline registrations required ten grid nodes in each direction for highest accuracy (1.4 mm for SR and 1.5 mm for MR). The numbers of iterations or grid nodes required for optimal registrations depended on the intensity gradients of the underlying images. In summary, the performance of the Demons and B-Spline registrations have been quantitatively evaluated using numerical phantoms. The results show that parameter

  3. Registration of multimodal volume head images via attached markers

    NASA Astrophysics Data System (ADS)

    Mandava, Venkateswara R.; Fitzpatrick, J. Michael; Maurer, Calvin R., Jr.; Maciunas, Robert J.; Allen, George S.

    1992-06-01

    We investigate the accuracy of registering arbitrarily oriented, multimodal, volume images of the human head, both to other images and to physical space, by aligning a configuration of three or more fiducial points that are the centers of attached markers. To compute the centers we use an extension of an adaptive thresholding algorithm due to Kittler. Because the markers are indistinguishable it is necessary to establish their correspondence between images. We have evaluated geometric matching algorithms for this purpose. The inherent errors in fiducial localization arising with digital images limits the accuracy with which anatomical targets can be registered. To accommodate this error we apply a least-squares registration algorithm to the fiducials. To evaluate the resulting target registration accuracy we have conducted experiments on images of internally implanted markers in a cadaver and images of externally attached markers in volunteers. We have also produced computer simulations of volume images of a hemispherical model of the head, randomly picking corresponding fiducial points and targets in the images, introducing uniformly distributed error into the fiducial locations, registering the images, and measuring target registration accuracy at the 95% confidence level. Our results indicate that submillimetric accuracy is feasible for high resolution images with four markers.

  4. Multimodality medical image fusion: probabilistic quantification, segmentation, and registration

    NASA Astrophysics Data System (ADS)

    Wang, Yue J.; Freedman, Matthew T.; Xuan, Jian Hua; Zheng, Qinfen; Mun, Seong K.

    1998-06-01

    Multimodality medical image fusion is becoming increasingly important in clinical applications, which involves information processing, registration and visualization of interventional and/or diagnostic images obtained from different modalities. This work is to develop a multimodality medical image fusion technique through probabilistic quantification, segmentation, and registration, based on statistical data mapping, multiple feature correlation, and probabilistic mean ergodic theorems. The goal of image fusion is to geometrically align two or more image areas/volumes so that pixels/voxels representing the same underlying anatomical structure can be superimposed meaningfully. Three steps are involved. To accurately extract the regions of interest, we developed the model supported Bayesian relaxation labeling, and edge detection and region growing integrated algorithms to segment the images into objects. After identifying the shift-invariant features (i.e., edge and region information), we provided an accurate and robust registration technique which is based on matching multiple binary feature images through a site model based image re-projection. The image was initially segmented into specified number of regions. A rough contour can be obtained by delineating and merging some of the segmented regions. We applied region growing and morphological filtering to extract the contour and get rid of some disconnected residual pixels after segmentation. The matching algorithm is implemented as follows: (1) the centroids of PET/CT and MR images are computed and then translated to the center of both images. (2) preliminary registration is performed first to determine an initial range of scaling factors and rotations, and the MR image is then resampled according to the specified parameters. (3) the total binary difference of the corresponding binary maps in both images is calculated for the selected registration parameters, and the final registration is achieved when the

  5. Quantitative evaluation of image registration techniques in the case of retinal images

    NASA Astrophysics Data System (ADS)

    Gavet, Yann; Fernandes, Mathieu; Pinoli, Jean-Charles

    2012-04-01

    In human retina observation (with non-mydriatic optical microscopes), an image registration process is often employed to enlarge the field of view. Analyzing all the images takes a lot of time. Numerous techniques have been proposed to perform the registration process, and properly evaluating them is a difficult question. This article presents the use of two quantitative criteria to evaluate and compare some classical feature-based image registration techniques. The images are first segmented and the resulting binary images are then registered. The quality of the registration process is evaluated with a normalized criterion based on the ɛ dissimilarity criterion, and with the figure of merit criterion (fom), for 25 pairs of images with a manual selection of control points. These criteria are normalized by the results of the affine method (considered the simplest method). Then, for each pair, the influence of the number of points used to perform the registration is evaluated.
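
    The abstract does not spell out the figure of merit; one common choice for comparing binary (e.g. segmented vessel) images is Pratt's figure of merit, which scores each pixel of the registered image by its distance to the nearest reference pixel. The sketch below implements that common definition, which is not necessarily the exact criterion used by the authors:

      import numpy as np
      from scipy.ndimage import distance_transform_edt

      def pratt_fom(registered, reference, alpha=1.0 / 9.0):
          """Pratt-style figure of merit between two binary images (1 = perfect)."""
          reg = np.asarray(registered, bool)
          ref = np.asarray(reference, bool)
          d = distance_transform_edt(~ref)             # distance to nearest reference pixel
          score = np.sum(1.0 / (1.0 + alpha * d[reg] ** 2))
          return score / max(reg.sum(), ref.sum())

      # Example: a vessel-like line shifted by one pixel still scores close to 1
      a = np.zeros((32, 32), bool); a[16, 4:28] = True
      b = np.zeros((32, 32), bool); b[17, 4:28] = True
      print(pratt_fom(b, a))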

  6. Mouse Atlas Registration with Non-tomographic Imaging Modalities—a Pilot Study Based on Simulation

    PubMed Central

    Wang, Hongkai; Stout, David B.; Chatziioannou, Arion F.

    2012-01-01

    Purpose This study investigates methodologies for the estimation of small animal anatomy from non-tomographic modalities, such as planar X-ray projections, optical cameras, and surface scanners. The key goal is to register a digital mouse atlas to a combination of non-tomographic modalities, in order to provide organ-level anatomical references of small animals in 3D. Procedures A 2D/3D registration method was developed to register the 3D atlas to the combination of non-tomographic imaging modalities. Eleven combinations of three non-tomographic imaging modalities were simulated, and the registration accuracy of each combination was evaluated. Results Comparing the 11 combinations, the top-view X-ray projection combined with the side-view optical camera yielded the best overall registration accuracy of all organs. The use of a surface scanner improved the registration accuracy of skin, spleen, and kidneys. Conclusions The methodologies and evaluation presented in this study should provide helpful information for designing preclinical atlas-based anatomical data acquisition systems. PMID:21983855

  7. Automatic image-to-world registration based on x-ray projections in cone-beam CT-guided interventions

    PubMed Central

    Hamming, N. M.; Daly, M. J.; Irish, J. C.; Siewerdsen, J. H.

    2009-01-01

    Intraoperative imaging offers a means to account for morphological changes occurring during the procedure and resolve geometric uncertainties via integration with a surgical navigation system. Such integration requires registration of the image and world reference frames, conventionally a time-consuming, error-prone manual process. This work presents a method of automatic image-to-world registration of intraoperative cone-beam computed tomography (CBCT) and an optical tracking system. Multimodality (MM) markers consisting of an infrared (IR) reflective sphere with a 2 mm tungsten sphere (BB) placed precisely at the center were designed to permit automatic detection in both the image and tracking (world) reference frames. Image localization is performed by intensity thresholding and pattern matching directly in 2D projections acquired in each CBCT scan, with 3D image coordinates computed using backprojection and accounting for C-arm geometric calibration. The IR tracking system localized MM markers in the world reference frame, and the image-to-world registration was computed by rigid point matching of image and tracker point sets. The accuracy and reproducibility of the automatic registration technique were compared to conventional (manual) registration using a variety of marker configurations suitable to neurosurgery (markers fixed to cranium) and head and neck surgery (markers suspended on a subcranial frame). The automatic technique exhibited subvoxel marker localization accuracy (<0.8 mm) for all marker configurations. The fiducial registration error of the automatic technique was (0.35±0.01) mm, compared to (0.64±0.07) mm for the manual technique, indicating improved accuracy and reproducibility. The target registration error (TRE) averaged over all configurations was 1.14 mm for the automatic technique, compared to 1.29 mm for the manual technique, although the difference was not statistically significant (p=0.3). A statistically significant improvement
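
    The accuracy figures above distinguish fiducial registration error (FRE), measured at the markers that drive the registration, from target registration error (TRE), measured at independent targets. Given a rigid transform (R, t) estimated from the fiducials, both are simple root-mean-square distances (a generic sketch with illustrative names, not the authors' code):

      import numpy as np

      def registration_errors(R, t, fiducials_img, fiducials_world,
                              targets_img, targets_world):
          """Root-mean-square FRE at the fiducials and TRE at independent targets.

          R, t: rigid transform mapping image coordinates to world coordinates.
          *_img / *_world: (N, 3) arrays of corresponding 3D points.
          """
          def rms(a, b):
              return np.sqrt(np.mean(np.sum((a - b) ** 2, axis=1)))
          fre = rms(fiducials_img @ R.T + t, fiducials_world)
          tre = rms(targets_img @ R.T + t, targets_world)
          return fre, tre

      # With a perfect transform both errors vanish; measurement noise in the world
      # coordinates shows up directly in the reported errors.
      R, t = np.eye(3), np.zeros(3)
      fid = np.array([[0.0, 0, 0], [50, 0, 0], [0, 50, 0], [0, 0, 50]])
      tgt = np.array([[25.0, 25, 25]])
      print(registration_errors(R, t, fid, fid + 0.3, tgt, tgt + 0.5))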

  8. Morphological Feature Extraction for Automatic Registration of Multispectral Images

    NASA Technical Reports Server (NTRS)

    Plaza, Antonio; LeMoigne, Jacqueline; Netanyahu, Nathan S.

    2007-01-01

    The task of image registration can be divided into two major components, i.e., the extraction of control points or features from images, and the search among the extracted features for the matching pairs that represent the same feature in the images to be matched. Manual extraction of control features can be subjective and extremely time consuming, and often results in few usable points. On the other hand, automated feature extraction allows using invariant target features such as edges, corners, and line intersections as relevant landmarks for registration purposes. In this paper, we present an extension of a recently developed morphological approach for automatic extraction of landmark chips and corresponding windows in a fully unsupervised manner for the registration of multispectral images. Once a set of chip-window pairs is obtained, a (hierarchical) robust feature matching procedure, based on a multiresolution overcomplete wavelet decomposition scheme, is used for registration purposes. The proposed method is validated on a pair of remotely sensed scenes acquired by the Advanced Land Imager (ALI) multispectral instrument and the Hyperion hyperspectral instrument aboard NASA's Earth Observing-1 satellite.

  9. Landsat image registration - A study of system parameters

    NASA Technical Reports Server (NTRS)

    Wacker, A. G.; Juday, R. D.; Wolfe, R. H., Jr.

    1984-01-01

    Some applications of Landsat data, particularly agricultural and forestry applications, require the ability to geometrically superimpose or register data acquired at different times and possibly by different satellites. An experimental investigation relating to a registration processor used by the Johnson Space Center for this purpose is the subject of this paper. Correlation of small subareas of images is at the heart of this registration processor and the manner in which various system parameters affect the correlation process is the prime area of investigation. Parameters investigated include preprocessing methods, methods for detecting successful correlations, fitting a surface to the correlation patch, fraction of pixels designated as edge pixels in edge detection, and local versus global generation of edge images. A suboptimum search procedure is used to find a good parameter set for this registration processor.

  10. 3D registration through pseudo x-ray image generation.

    PubMed

    Viant, W J; Barnel, F

    2001-01-01

    Registration of a pre operative plan with the intra operative position of the patient is still a largely unsolved problem. Current techniques generally require fiducials, either artificial or anatomic, to achieve the registration solution. Invariably these fiducials require implantation and/or direct digitisation. The technique described in this paper requires no digitisation or implantation of fiducials, but instead relies on the shape and form of the anatomy through a fully automated image comparison process. A pseudo image, generated from a virtual image intensifier's view of a CT dataset, is intra operatively compared with a real x-ray image. The principle is to align the virtual with the real image intensifier. The technique is an extension to the work undertaken by Domergue [1] and based on original ideas by Weese [4]. PMID:11317805

  11. Inter-subject MR-PET image registration and integration

    SciTech Connect

    Lin, K.P.; Chen, T.S.; Yao, W.F.

    1996-12-31

    A MR-PET inter-subject image integration technique is developed to provide more precise anatomical location based on a template MR image, and to examine the anatomical variation in sensory-motor stimulation or to obtain cross-subject signal averaging to enhance the detectability of focal brain activity detected by different subject PET images. In this study, a multimodality intrasubject image registration procedure is first applied to align MR and PET images of the same subject. The second procedure is to estimate an elastic image transformation that can nonlinearly deform each 3D brain MR image and map them to the template MR image. The estimation procedure of the elastic image transformation is based on a strategy that searches the best local image match to achieve an optimal global image match, iteratively. The final elastic image transformation estimated for each subject will then be used to deform the MR-PET registered PET image. After the nonlinear PET image deformation, MR-PET intersubject mapping, averaging, and fusing are simultaneously accomplished. The developed technique has been implemented on a UNIX-based workstation with the Motif window system. The software, named Elastic-IRIS, requires little user interaction. The registered anatomical location of 10 different subjects has a standard deviation of approximately 2 mm in the x, y, and z directions. The processing time for one MR-PET inter-subject registration ranged from 20 to 30 minutes on a SUN SPARC-20.

  12. Multimodal image registration for preoperative planning and image-guided neurosurgical procedures.

    PubMed

    Risholm, Petter; Golby, Alexandra J; Wells, William

    2011-04-01

    Image registration is the process of transforming images acquired at different time points, or with different imaging modalities, into the same coordinate system. It is an essential part of any neurosurgical planning and navigation system because it facilitates combining images with important complementary, structural, and functional information to improve the information based on which a surgeon makes critical decisions. Brigham and Women's Hospital (BWH) has been one of the pioneers in developing intraoperative registration methods for aligning preoperative and intraoperative images of the brain. This article presents an overview of intraoperative registration and highlights some recent developments at BWH. PMID:21435571

  13. Registration of 3-D images using weighted geometrical features

    SciTech Connect

    Maurer, C.R. Jr.; Aboutanos, G.B.; Dawant, B.M.; Maciunas, R.J.; Fitzpatrick, J.M.

    1996-12-01

    In this paper, the authors present a weighted geometrical features (WGF) registration algorithm. Its efficacy is demonstrated by combining points and a surface. The technique is an extension of Besl and McKay's iterative closest point (ICP) algorithm. The authors use the WGF algorithm to register X-ray computed tomography (CT) and T2-weighted magnetic resonance (MR) volume head images acquired from eleven patients that underwent craniotomies in a neurosurgical clinical trial. Each patient had five external markers attached to transcutaneous posts screwed into the outer table of the skull. The authors define registration error as the distance between positions of corresponding markers that are not used for registration. The CT and MR images are registered using fiducial points (marker positions) only, a surface only, and various weighted combinations of points and a surface. The CT surface is derived from contours corresponding to the inner surface of the skull. The MR surface is derived from contours corresponding to the cerebrospinal fluid (CSF)-dura interface. Registration using points and a surface is found to be significantly more accurate than registration using only points or a surface.
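
    The WGF algorithm extends ICP, whose core step is: find the closest model point for every data point, then update the rigid transform in closed form. A plain, unweighted sketch of one such iteration follows (the weighting between fiducial points and surface features that defines WGF is omitted; scipy's KD-tree is used for the nearest-neighbour search, and all names are illustrative):

      import numpy as np
      from scipy.spatial import cKDTree

      def icp_iteration(moving_pts, surface_pts, R, t):
          """One ICP step: find closest surface points, then refine the rigid transform.

          moving_pts: (N, 3) points to align; surface_pts: (M, 3) samples of the target
          surface. Returns the updated (R, t).
          """
          tree = cKDTree(surface_pts)
          current = moving_pts @ R.T + t
          _, idx = tree.query(current)                  # nearest-neighbour correspondences
          matched = surface_pts[idx]
          # closed-form rigid update (SVD / Procrustes) for the matched pairs
          cm, cs = current.mean(axis=0), matched.mean(axis=0)
          H = (current - cm).T @ (matched - cs)
          U, _, Vt = np.linalg.svd(H)
          D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
          dR = Vt.T @ D @ U.T
          dt = cs - dR @ cm
          return dR @ R, dR @ t + dt

      # Example: points offset from a sampled plane, one refinement step from identity
      surf = np.c_[np.random.default_rng(3).uniform(0, 10, (200, 2)), np.zeros(200)]
      pts = surf[:50] + np.array([0.5, -0.3, 2.0])
      R, t = icp_iteration(pts, surf, np.eye(3), np.zeros(3))
      print(t)

    Iterating this step until the correspondences stop changing gives standard ICP; the weighted combination of point and surface features described in the abstract sits inside this loop but is not shown here.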

  14. 2dx--user-friendly image processing for 2D crystals.

    PubMed

    Gipson, Bryant; Zeng, Xiangyan; Zhang, Zi Yan; Stahlberg, Henning

    2007-01-01

    Electron crystallography determines the structure of two-dimensional (2D) membrane protein crystals and other 2D crystal systems. Cryo-transmission electron microscopy records high-resolution electron micrographs, which require computer processing for three-dimensional structure reconstruction. We present a new software system 2dx, which is designed as a user-friendly, platform-independent software package for electron crystallography. 2dx assists in the management of an image-processing project, guides the user through the processing of 2D crystal images, and provides transparence for processing tasks and results. Algorithms are implemented in the form of script templates reminiscent of c-shell scripts. These templates can be easily modified or replaced by the user and can also execute modular stand-alone programs from the MRC software or from other image processing software packages. 2dx is available under the GNU General Public License at 2dx.org. PMID:17055742

  15. Automated Image Registration Using Morphological Region of Interest Feature Extraction

    NASA Technical Reports Server (NTRS)

    Plaza, Antonio; LeMoigne, Jacqueline; Netanyahu, Nathan S.

    2005-01-01

    With the recent explosion in the amount of remotely sensed imagery and the corresponding interest in temporal change detection and modeling, image registration has become increasingly important as a necessary first step in the integration of multi-temporal and multi-sensor data for applications such as the analysis of seasonal and annual global climate changes, as well as land use/cover changes. The task of image registration can be divided into two major components: (1) the extraction of control points or features from images; and (2) the search among the extracted features for the matching pairs that represent the same feature in the images to be matched. Manual control feature extraction can be subjective and extremely time consuming, and often results in few usable points. Automated feature extraction is a solution to this problem, where desired target features are invariant, and represent evenly distributed landmarks such as edges, corners and line intersections. In this paper, we develop a novel automated registration approach based on the following steps. First, a mathematical morphology (MM)-based method is used to obtain a scale-orientation morphological profile at each image pixel. Next, a spectral dissimilarity metric such as the spectral information divergence is applied for automated extraction of landmark chips, followed by an initial approximate matching. This initial condition is then refined using a hierarchical robust feature matching (RFM) procedure. Experimental results reveal that the proposed registration technique offers a robust solution in the presence of seasonal changes and other interfering factors. Keywords-Automated image registration, multi-temporal imagery, mathematical morphology, robust feature matching.

  16. Registration of multimodal brain images: some experimental results

    NASA Astrophysics Data System (ADS)

    Chen, Hua-mei; Varshney, Pramod K.

    2002-03-01

    Joint histogram of two images is required to uniquely determine the mutual information between the two images. It has been pointed out that, under certain conditions, existing joint histogram estimation algorithms like partial volume interpolation (PVI) and linear interpolation may result in different types of artifact patterns in the MI based registration function by introducing spurious maxima. As a result, the artifacts may hamper the global optimization process and limit registration accuracy. In this paper we present an extensive study of interpolation-induced artifacts using simulated brain images and show that similar artifact patterns also exist when other intensity interpolation algorithms like cubic convolution interpolation and cubic B-spline interpolation are used. A new joint histogram estimation scheme named generalized partial volume estimation (GPVE) is proposed to eliminate the artifacts. A kernel function is involved in the proposed scheme and when the 1st order B-spline is chosen as the kernel function, it is equivalent to the PVI. A clinical brain image database furnished by Vanderbilt University is used to compare the accuracy of our algorithm with that of PVI. Our experimental results show that the use of higher order kernels can effectively remove the artifacts and, in cases when MI based registration result suffers from the artifacts, registration accuracy can be improved significantly.
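
    Because mutual information is computed from the joint histogram, the choice of histogram estimator (PVI, GPVE, or an intensity interpolator) directly shapes the registration function and its artifacts. For reference, a minimal sketch of MI computed from a simple binned joint histogram (plain nearest-neighbour binning, i.e. none of the estimators discussed above):

      import numpy as np

      def mutual_information(img_a, img_b, bins=64):
          """Mutual information of two equally shaped images from a joint histogram."""
          hist, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
          p_ab = hist / hist.sum()                      # joint probability
          p_a = p_ab.sum(axis=1, keepdims=True)         # marginals
          p_b = p_ab.sum(axis=0, keepdims=True)
          nz = p_ab > 0
          return np.sum(p_ab[nz] * np.log(p_ab[nz] / (p_a @ p_b)[nz]))

      # MI of an image with itself exceeds MI with an unrelated image
      rng = np.random.default_rng(0)
      img = rng.normal(size=(128, 128))
      print(mutual_information(img, img), mutual_information(img, rng.normal(size=(128, 128))))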

  17. The Insight ToolKit image registration framework

    PubMed Central

    Avants, Brian B.; Tustison, Nicholas J.; Stauffer, Michael; Song, Gang; Wu, Baohua; Gee, James C.

    2014-01-01

    Publicly available scientific resources help establish evaluation standards, provide a platform for teaching and improve reproducibility. Version 4 of the Insight ToolKit (ITK4) seeks to establish new standards in publicly available image registration methodology. ITK4 makes several advances in comparison to previous versions of ITK. ITK4 supports both multivariate images and objective functions; it also unifies high-dimensional (deformation field) and low-dimensional (affine) transformations with metrics that are reusable across transform types and with composite transforms that allow arbitrary series of geometric mappings to be chained together seamlessly. Metrics and optimizers take advantage of multi-core resources, when available. Furthermore, ITK4 reduces the parameter optimization burden via principled heuristics that automatically set scaling across disparate parameter types (rotations vs. translations). A related approach also constrains step sizes for gradient-based optimizers. The result is that tuning for different metrics and/or image pairs is rarely necessary allowing the researcher to more easily focus on design/comparison of registration strategies. In total, the ITK4 contribution is intended as a structure to support reproducible research practices, will provide a more extensive foundation against which to evaluate new work in image registration and also enable application level programmers a broad suite of tools on which to build. Finally, we contextualize this work with a reference registration evaluation study with application to pediatric brain labeling. PMID:24817849
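
    ITK's registration framework is driven from code by composing a metric, an optimizer, a transform, and an interpolator. The sketch below uses the SimpleITK wrapper (not part of the cited paper) to illustrate that composition, including the automatic parameter scaling mentioned above; the parameter values are arbitrary assumptions, not ITK4 defaults:

      import numpy as np
      import SimpleITK as sitk

      # Synthetic 2D example: a Gaussian blob and a translated copy of it
      y, x = np.mgrid[0:128, 0:128]
      blob = np.exp(-((x - 64) ** 2 + (y - 64) ** 2) / (2 * 15.0 ** 2)).astype(np.float32)
      fixed = sitk.GetImageFromArray(blob)
      moving = sitk.GetImageFromArray(np.roll(blob, shift=(5, -8), axis=(0, 1)))

      reg = sitk.ImageRegistrationMethod()
      reg.SetMetricAsMeanSquares()
      reg.SetOptimizerAsRegularStepGradientDescent(learningRate=2.0, minStep=1e-4,
                                                   numberOfIterations=200)
      reg.SetOptimizerScalesFromPhysicalShift()        # automatic parameter scaling
      reg.SetInterpolator(sitk.sitkLinear)
      reg.SetInitialTransform(sitk.TranslationTransform(fixed.GetDimension()))
      transform = reg.Execute(fixed, moving)
      print(transform.GetParameters())                 # estimated translation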

  18. A Local IDW Transformation Algorithm for Medical Image Registration

    NASA Astrophysics Data System (ADS)

    Cavoretto, Roberto; De Rossi, Alessandra

    2008-09-01

    In this paper we propose the use of a modified version of the Inverse Distance Weighted (IDW) method for landmark-based registration of medical images. More precisely, we consider radial basis functions (RBFs) as nodal functions in the modified IDW method, circumventing the drawback due to RBF global support.
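
    In landmark-based registration of this kind, displacements known at the landmarks are interpolated over the whole image. A sketch of plain (global) Shepard/IDW interpolation, i.e. the unmodified scheme whose globally supported weights the paper's local, RBF-based variant is designed to improve on (names are illustrative):

      import numpy as np

      def idw_displacement(points, landmarks, displacements, power=2.0):
          """Shepard inverse-distance-weighted interpolation of landmark displacements.

          points: (M, 2) locations to deform; landmarks: (N, 2); displacements: (N, 2).
          Returns the interpolated (M, 2) displacement at each point.
          """
          d = np.linalg.norm(points[:, None, :] - landmarks[None, :, :], axis=2)
          w = 1.0 / np.maximum(d, 1e-12) ** power       # clamp to avoid division by zero
          w /= w.sum(axis=1, keepdims=True)
          return w @ displacements

      # Two landmarks pushed right/up; a point halfway between them gets the average
      lm = np.array([[0.0, 0.0], [10.0, 0.0]])
      disp = np.array([[1.0, 0.0], [0.0, 1.0]])
      print(idw_displacement(np.array([[5.0, 0.0]]), lm, disp))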

  19. Temporal registration of multispectral digital satellite images using their edge images

    NASA Technical Reports Server (NTRS)

    Nack, M. L.

    1975-01-01

    An algorithm is described which forms an edge image by detecting the edges of features in a particular spectral band of a digital satellite image. It is also capable of forming composite multispectral edge images. In addition, an edge image correlation algorithm is presented which performs rapid automatic registration of the edge images and, consequently, of the grey level images.
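
    A compressed, hypothetical sketch of this idea (not the original 1975 implementation): form gradient-magnitude edge images and recover the translational offset from the peak of their cross-correlation.

    ```python
    import numpy as np
    from scipy import ndimage

    def edge_image(band):
        """Simple gradient-magnitude edge image of one spectral band."""
        gx = ndimage.sobel(band.astype(float), axis=1)
        gy = ndimage.sobel(band.astype(float), axis=0)
        return np.hypot(gx, gy)

    def translation_by_edge_correlation(ref_band, new_band):
        """Estimate the integer (row, col) shift registering `new_band` to `ref_band`."""
        e_ref, e_new = edge_image(ref_band), edge_image(new_band)
        # circular cross-correlation via FFT; the peak location gives the shift
        corr = np.fft.ifft2(np.fft.fft2(e_ref) * np.conj(np.fft.fft2(e_new))).real
        peak = np.unravel_index(np.argmax(corr), corr.shape)
        # map wrapped peak coordinates to signed shifts
        shift = [p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape)]
        return tuple(shift)
    ```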

  20. A translational registration system for LANDSAT image segments

    NASA Technical Reports Server (NTRS)

    Parada, N. D. J. (Principal Investigator); Erthal, G. J.; Velasco, F. R. D.; Mascarenhas, N. D. D.

    1983-01-01

    The use of satellite images obtained from various dates is essential for crop forecast systems. In order to make possible a multitemporal analysis, it is necessary that images belonging to each acquisition have pixel-wise correspondence. A system developed to obtain, register and record image segments from LANDSAT images in computer compatible tapes is described. The translational registration of the segments is performed by correlating image edges in different acquisitions. The system was constructed for the Burroughs B6800 computer in ALGOL language.

  1. A 2-D orientation-adaptive prediction filter in lifting structures for image coding.

    PubMed

    Gerek, Omer N; Cetin, A Enis

    2006-01-01

    Lifting-style implementations of wavelets are widely used in image coders. A two-dimensional (2-D) edge adaptive lifting structure, which is similar to Daubechies 5/3 wavelet, is presented. The 2-D prediction filter predicts the value of the next polyphase component according to an edge orientation estimator of the image. Consequently, the prediction domain is allowed to rotate +/-45 degrees in regions with diagonal gradient. The gradient estimator is computationally inexpensive with additional costs of only six subtractions per lifting instruction, and no multiplications are required. PMID:16435541

  2. 2-D nonlinear IIR-filters for image processing - An exploratory analysis

    NASA Technical Reports Server (NTRS)

    Bauer, P. H.; Sartori, M.

    1991-01-01

    A new nonlinear IIR filter structure is introduced and its deterministic properties are analyzed. It is shown to be better suited for image processing applications than its linear shift-invariant counterpart. The new structure is obtained from causality inversion of a 2D quarter-plane causal linear filter with respect to the two directions of propagation. It is demonstrated that, by using this design, a nonlinear 2D lowpass filter can be constructed which is capable of effectively suppressing Gaussian or impulse noise without destroying important image information.

  3. Warped document image correction method based on heterogeneous registration strategies

    NASA Astrophysics Data System (ADS)

    Tong, Lijing; Zhan, Guoliang; Peng, Quanyao; Li, Yang; Li, Yifan

    2013-03-01

    With the popularity of digital cameras and the growing need for digitized document images, using digital cameras to digitize documents has become an irresistible trend. However, warping of the document surface seriously degrades the performance of Optical Character Recognition (OCR) systems. To improve the visual quality and the OCR rate of warped document images, this paper proposes a correction method based on heterogeneous registration strategies. The method mosaics two warped images of the same document taken from different viewpoints. First, two feature points are selected from one image. These two feature points are then registered in the other image based on heterogeneous registration strategies. Finally, the two images are mosaicked, and the best mosaicked image is selected according to the OCR recognition results. For the best mosaicked image, the distortions are mostly removed and the OCR results are improved markedly. Experimental results show that the proposed method resolves the problem of warped document image correction effectively.

  4. An Iterative Image Registration Algorithm by Optimizing Similarity Measurement

    PubMed Central

    Chu, Wei; Ma, Li; Song, John; Vorburger, Theodore

    2010-01-01

    A new registration algorithm based on Newton-Raphson iteration is proposed to align images with rigid body transformation. A set of transformation parameters consisting of translation in x and y and rotation angle around z is calculated by optimizing a specified similarity metric using the Newton-Raphson method. This algorithm has been tested by registering and correlating pairs of topography measurements of nominally identical NIST Standard Reference Material (SRM 2461) standard cartridge cases, and very good registration accuracy has been obtained. PMID:27134776

  5. Separation of image parts using 2-D parallel form recursive filters.

    PubMed

    Sivaramakrishna, R

    1996-01-01

    This correspondence deals with a new technique to separate objects or image parts in a composite image. A parallel form extension of a 2-D Steiglitz-McBride method is applied to the discrete cosine transform (DCT) of the image containing the objects that are to be separated. The obtained parallel form is the sum of several filters or systems, where the impulse response of each filter corresponds to the DCT of one object in the original image. Preliminary results on an image with two objects show that the algorithm works well, even in the case where one object occludes another as well as in the case of moderate noise. PMID:18285105

  6. Gated cardiac NMR imaging and 2D echocardiography in the detection of intracardial neoplasm

    SciTech Connect

    Go, R.T.; O'Donnell, J.K.; Salcedo, E.E.; Feiglin, D.H.; Underwood, D.A.; MacIntyre, W.J.; Meaney, T.F.

    1985-05-01

    Noninvasive 2D echocardiography has replaced contrast angiography as the procedure of choice in the diagnosis of intracardiac neoplasm. The purpose of this study was to determine whether intracardiac neoplasm can be detected as well by gated cardiac NMR. Four patients with known intracardiac neoplasm previously diagnosed by 2D echocardiography had gated cardiac NMR imaging using a superconductive 0.6 Tesla magnet. All patients were imaged using a T1-weighted spin echo pulse sequence with a TE of 30 msec and a TR of one R-R interval. Two-dimensional planar single- or multiple-slice techniques were used. In one patient, imaging at different times along the R-R interval was performed for cine display. The results of the present study show detection of the intracardiac neoplasm in all four cases by gated cardiac NMR imaging, and the results were comparable to 2D echocardiography. The former imaging technique showed superior spatial resolution. Despite its early stage of development, gated cardiac NMR imaging appears at least equal to 2D echocardiography in the detection of intracardiac neoplasm. The availability of multislice coupled with multiframe acquisition techniques now being developed will provide a cinematic display that will be more effective in depicting the tumor in motion within the involved cardiac chamber and will facilitate visualization of the relationship of the tumor to adjacent cardiac structures.

  7. 3D reconstruction of a carotid bifurcation from 2D transversal ultrasound images.

    PubMed

    Yeom, Eunseop; Nam, Kweon-Ho; Jin, Changzhu; Paeng, Dong-Guk; Lee, Sang-Joon

    2014-12-01

    Visualizing and analyzing the morphological structure of carotid bifurcations are important for understanding the etiology of carotid atherosclerosis, which is a major cause of stroke and transient ischemic attack. For delineation of vasculatures in the carotid artery, ultrasound examinations have been widely employed because they are noninvasive and involve no ionizing radiation. However, conventional 2D ultrasound imaging has technical limitations in observing the complicated 3D shapes and asymmetric vasodilation of bifurcations. This study aims to propose image-processing techniques for better 3D reconstruction of a carotid bifurcation in a rat by using 2D cross-sectional ultrasound images. A high-resolution ultrasound imaging system with a probe centered at 40 MHz was employed to obtain 2D transversal images. The lumen boundaries in each transverse ultrasound image were detected by using three different techniques: ellipse fitting, correlation mapping to visualize the decorrelation of blood flow, and ellipse fitting on the correlation map. When the results are compared, the third technique provides relatively good boundary extraction. The incomplete boundaries of the arterial lumen caused by acoustic artifacts are somewhat resolved by adopting the correlation mapping, and the distortion in boundary detection near the bifurcation apex was largely reduced by using the ellipse-fitting technique. The 3D lumen geometry of a carotid artery was obtained by volumetric rendering of several 2D slices. For the 3D vasodilatation of the carotid bifurcation, lumen geometries at the contraction and expansion states were simultaneously depicted at various view angles. The present 3D reconstruction methods would be useful for efficient extraction and construction of the 3D lumen geometries of carotid bifurcations from 2D ultrasound images. PMID:24965564

  8. Improving JWST Coronagraphic Performance with Accurate Image Registration

    NASA Astrophysics Data System (ADS)

    Van Gorkom, Kyle; Pueyo, Laurent; Lajoie, Charles-Philippe; JWST Coronagraphs Working Group

    2016-06-01

    The coronagraphs on the James Webb Space Telescope (JWST) will enable high-contrast observations of faint objects at small separations from bright hosts, such as circumstellar disks, exoplanets, and quasar disks. Despite attenuation by the coronagraphic mask, bright speckles in the host’s point spread function (PSF) remain, effectively washing out the signal from the faint companion. Suppression of these bright speckles is typically accomplished by repeating the observation with a star that lacks a faint companion, creating a reference PSF that can be subtracted from the science image to reveal any faint objects. Before this reference PSF can be subtracted, however, the science and reference images must be aligned precisely, typically to 1/20 of a pixel. Here, we present several such algorithms for performing image registration on JWST coronagraphic images. Using both simulated and pre-flight test data (taken in cryovacuum), we assess (1) the accuracy of each algorithm at recovering misaligned scenes and (2) the impact of image registration on achievable contrast. Proper image registration, combined with post-processing techniques such as KLIP or LOCI, will greatly improve the performance of the JWST coronagraphs.
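
    A generic illustration of sub-pixel registration of this kind (not necessarily one of the algorithms assessed by the authors) is Fourier-upsampled phase cross-correlation, which can recover shifts to a small fraction of a pixel, e.g. 1/20 of a pixel with an upsampling factor of 20; the toy images below are assumptions.

    ```python
    import numpy as np
    from skimage.registration import phase_cross_correlation
    from scipy.ndimage import shift as nd_shift

    # `reference` and `science` stand in for coronagraphic frames of the same shape
    rng = np.random.default_rng(0)
    reference = rng.normal(size=(128, 128))
    science = nd_shift(reference, (0.37, -1.22))      # known sub-pixel misalignment

    # upsample_factor=20 resolves shifts to roughly 1/20 of a pixel
    shift, error, diffphase = phase_cross_correlation(reference, science,
                                                      upsample_factor=20)
    aligned = nd_shift(science, shift)                # undo the measured shift
    print("estimated shift:", shift)
    ```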

  9. Retinal image registration via feature-guided Gaussian mixture model.

    PubMed

    Liu, Chengyin; Ma, Jiayi; Ma, Yong; Huang, Jun

    2016-07-01

    Registration of retinal images taken at different times, from different perspectives, or with different modalities is a critical prerequisite for the diagnosis and treatment of various eye diseases. This problem can be formulated as registration of two sets of sparse feature points extracted from the given images, and it is typically solved either by first creating a set of putative correspondences and then removing the false matches while estimating the spatial transformation between the image pair, or by estimating the correspondences and the transformation jointly in an iterative process. However, the former strategy suffers from missing true correspondences, and the latter strategy does not make full use of local appearance information, which may be problematic for low-quality retinal images due to a lack of reliable features. In this paper, we propose a feature-guided Gaussian mixture model (GMM) to address these issues. We formulate point registration as the estimation of a feature-guided mixture of densities: a GMM is fitted to one point set, such that both the centers and local features of the Gaussian densities are constrained to coincide with the other point set. The problem is solved under a unified maximum-likelihood framework together with an iterative expectation-maximization algorithm initialized by the confident feature correspondences, where the image transformation is modeled by an affine function. Extensive experiments on various retinal images show the robustness of our approach, which consistently outperforms other state-of-the-art methods, especially when the data are badly degraded. PMID:27409682

  10. Vectorial total variation-based regularization for variational image registration.

    PubMed

    Chumchob, Noppadol

    2013-11-01

    To use interdependence between the primary components of the deformation field for smooth and non-smooth registration problems, the channel-by-channel total variation- or standard vectorial total variation (SVTV)-based regularization has been extended to a more flexible and efficient technique, allowing high quality regularization procedures. Based on this method, this paper proposes a fast nonlinear multigrid (NMG) method for solving the underlying Euler-Lagrange system of two coupled second-order nonlinear partial differential equations. Numerical experiments using both synthetic and realistic images not only confirm that the recommended VTV-based regularization yields better registration qualities for a wide range of applications than those of the SVTV-based regularization, but also that the proposed NMG method is fast, accurate, and reliable in delivering visually-pleasing registration results. PMID:23893729

  11. Hierarchical model-based interferometric synthetic aperture radar image registration

    NASA Astrophysics Data System (ADS)

    Wang, Yang; Huang, Haifeng; Dong, Zhen; Wu, Manqing

    2014-01-01

    With the rapid development of spaceborne interferometric synthetic aperture radar technology, classical image registration methods can no longer deliver the efficiency and accuracy required for processing large volumes of real data. Based on this fact, we propose a new method. This method consists of two steps: coarse registration, realized by a cross-correlation algorithm, and fine registration, realized by a hierarchical model-based algorithm. The hierarchical model-based algorithm is a highly efficient optimization algorithm. Its key features are a global model that constrains the overall structure of the estimated motion, a local model that is used in the estimation process, and a coarse-to-fine refinement strategy. Experimental results from different kinds of simulated and real data have confirmed that the proposed method is very fast and has high accuracy. Compared with a conventional cross-correlation method, the proposed method provides markedly improved performance.

  12. Comparative study on 3D-2D convertible integral imaging systems

    NASA Astrophysics Data System (ADS)

    Choi, Heejin; Kim, Joohwan; Kim, Yunhee; Lee, Byoungho

    2006-02-01

    In spite of significant improvements in three-dimensional (3D) display fields, the commercialization of a 3D-only display system has not yet been achieved. The mainstream of the display market is the high-performance two-dimensional (2D) flat panel display (FPD), and the beginning of high-definition (HD) broadcasting is accelerating the opening of the golden age of HD FPDs. Therefore, a 3D display system needs to be able to display a 2D image with high quality. In this paper, two different 3D-2D convertible methods based on integral imaging are compared and categorized for their applications. One method uses a point light source array, a polymer-dispersed liquid crystal, and one display panel. The other system adopts two display panels and a lens array. The former system is suitable for mobile applications while the latter is for home applications such as monitors and TVs.

  13. Tensor representation of color images and fast 2D quaternion discrete Fourier transform

    NASA Astrophysics Data System (ADS)

    Grigoryan, Artyom M.; Agaian, Sos S.

    2015-03-01

    In this paper, a general, efficient, split algorithm to compute the two-dimensional quaternion discrete Fourier transform (2-D QDFT), by using a special partitioning in the frequency domain, is introduced. The partition determines an effective transformation, or color image representation, in the form of 1-D quaternion signals which allow the N × M-point 2-D QDFT to be split into a set of 1-D QDFTs. Comparative estimates revealing the efficiency of the proposed algorithms with respect to the known ones are given. In particular, the proposed method of calculating the 2^r × 2^r-point 2-D QDFT uses 18N^2 fewer multiplications than the well-known column-row method and the method of calculation based on the symplectic decomposition. The proposed algorithm is simple to apply and design, which makes it very practical in color image processing in the frequency domain.

  14. Automatic 2D-to-3D image conversion using 3D examples from the internet

    NASA Astrophysics Data System (ADS)

    Konrad, J.; Brown, G.; Wang, M.; Ishwar, P.; Wu, C.; Mukherjee, D.

    2012-03-01

    The availability of 3D hardware has so far outpaced the production of 3D content. Although to date many methods have been proposed to convert 2D images to 3D stereopairs, the most successful ones involve human operators and, therefore, are time-consuming and costly, while the fully-automatic ones have not yet achieved the same level of quality. This subpar performance is due to the fact that automatic methods usually rely on assumptions about the captured 3D scene that are often violated in practice. In this paper, we explore a radically different approach inspired by our work on saliency detection in images. Instead of relying on a deterministic scene model for the input 2D image, we propose to "learn" the model from a large dictionary of stereopairs, such as YouTube 3D. Our new approach is built upon a key observation and an assumption. The key observation is that among millions of stereopairs available on-line, there likely exist many stereopairs whose 3D content matches that of the 2D input (query). We assume that two stereopairs whose left images are photometrically similar are likely to have similar disparity fields. Our approach first finds a number of on-line stereopairs whose left image is a close photometric match to the 2D query and then extracts depth information from these stereopairs. Since disparities for the selected stereopairs differ due to differences in underlying image content, level of noise, distortions, etc., we combine them by using the median. We apply the resulting median disparity field to the 2D query to obtain the corresponding right image, while handling occlusions and newly-exposed areas in the usual way. We have applied our method in two scenarios. First, we used YouTube 3D videos in search of the most similar frames. Then, we repeated the experiments on a small, but carefully-selected, dictionary of stereopairs closely matching the query. This, to a degree, emulates the results one would expect from the use of an extremely large 3D

  15. Atherosclerosis imaging using 3D black blood TSE SPACE vs 2D TSE

    PubMed Central

    Wong, Stephanie K; Mobolaji-Iawal, Motunrayo; Arama, Leron; Cambe, Joy; Biso, Sylvia; Alie, Nadia; Fayad, Zahi A; Mani, Venkatesh

    2014-01-01

    AIM: To compare 3D Black Blood turbo spin echo (TSE) sampling perfection with application-optimized contrast using different flip angle evolution (SPACE) vs 2D TSE in evaluating atherosclerotic plaques in multiple vascular territories. METHODS: The carotid, aortic, and femoral arterial walls of 16 patients at risk for cardiovascular or atherosclerotic disease were studied using both 3D black blood magnetic resonance imaging SPACE and conventional 2D multi-contrast TSE sequences, with a consolidated imaging approach in the same imaging session. Qualitative and quantitative analyses were performed on the images. Agreement of morphometric measurements between the two imaging sequences was assessed using a two-sample t-test, calculation of the intra-class correlation coefficient, and linear regression with Bland-Altman analyses. RESULTS: No statistically significant qualitative differences were found between the 3D SPACE and 2D TSE techniques for images of the carotids and aorta. For images of the femoral arteries, however, there were statistically significant differences in all four qualitative scores between the two techniques. Using the current approach, 3D SPACE is suboptimal for femoral imaging; however, this may be due to coils not being optimized for femoral imaging. Quantitatively, in our study, higher mean total vessel area measurements were observed for the 3D SPACE technique across all three vascular beds. No significant differences in lumen area for either the right or left carotids were observed between the two techniques. Overall, a significant correlation existed between the measures obtained with the two approaches. CONCLUSION: Qualitative and quantitative measurements between 3D SPACE and 2D TSE techniques are comparable. 3D SPACE may be a feasible approach in the evaluation of cardiovascular patients. PMID:24876923

  16. Parameterising root system growth models using 2D neutron radiography images

    NASA Astrophysics Data System (ADS)

    Schnepf, Andrea; Felderer, Bernd; Vontobel, Peter; Leitner, Daniel

    2013-04-01

    Root architecture is a key factor in plant acquisition of water and nutrients from soil. In particular, in view of a second green revolution in which the below-ground parts of agricultural crops are important, it is essential to characterise and quantify root architecture and its effect on plant resource acquisition. Mathematical models can help to understand the processes occurring in the soil-plant system; they can be used to quantify the effect of root and rhizosphere traits on resource acquisition and the response to environmental conditions. In order to do so, root architectural models are coupled with a model of water and solute transport in soil. However, dynamic root architectural models are difficult to parameterise. Novel imaging techniques such as x-ray computed tomography, neutron radiography and magnetic resonance imaging enable the in situ visualisation of plant root systems, and these images facilitate the parameterisation of dynamic root architecture models. These imaging techniques are capable of producing 3D or 2D images. Moreover, 2D images are also available in the form of hand drawings or from images of standard cameras. While full 3D imaging tools are still limited in resolution, 2D techniques are a more accurate and less expensive option for observing roots in their environment. However, analysis of 2D images has additional difficulties compared to the 3D case because of overlapping roots. We present a novel algorithm for the parameterisation of root system growth models based on 2D images of the root system. The algorithm analyses dynamic image data, i.e. a series of 2D images of the root system at different points in time. The image data have already been adjusted for missing links and artefacts, and segmentation was performed by applying a matched filter response. From this time series of binary 2D images, we parameterise the dynamic root architecture model in the following way: First, a morphological skeleton is derived from the binary
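
    As a small illustration of the skeletonization step mentioned at the end of the abstract (a generic sketch, not the authors' algorithm), scikit-image can derive a morphological skeleton from a segmented binary root image; the toy image below is an assumption.

    ```python
    import numpy as np
    from skimage.morphology import skeletonize
    from skimage.measure import label

    # `binary_root` stands in for a segmented 2D boolean image of the root system
    binary_root = np.zeros((200, 200), dtype=bool)
    binary_root[20:180, 98:103] = True        # a toy vertical "root"
    binary_root[98:103, 100:160] = True       # one lateral branch

    skeleton = skeletonize(binary_root)       # 1-pixel-wide morphological skeleton
    n_parts = label(skeleton).max()           # crude count of connected skeleton parts
    print(skeleton.sum(), "skeleton pixels in", n_parts, "connected component(s)")
    ```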

  17. Video image stabilization and registration--plus

    NASA Technical Reports Server (NTRS)

    Hathaway, David H. (Inventor)

    2009-01-01

    A method of stabilizing a video image displayed in multiple video fields of a video sequence includes the steps of: subdividing a selected area of a first video field into nested pixel blocks; determining horizontal and vertical translation of each of the pixel blocks in each of the pixel block subdivision levels from the first video field to a second video field; and determining translation of the image from the first video field to the second video field by determining a change in magnification of the image from the first video field to the second video field in each of horizontal and vertical directions, and determining shear of the image from the first video field to the second video field in each of the horizontal and vertical directions.

  18. Simultaneous reconstruction of the activity image and registration of the CT image in TOF-PET

    NASA Astrophysics Data System (ADS)

    Rezaei, Ahmadreza; Michel, Christian; Casey, Michael E.; Nuyts, Johan

    2016-02-01

    Previously, maximum-likelihood methods have been proposed to jointly estimate the activity image and the attenuation image or the attenuation sinogram from time-of-flight (TOF) positron emission tomography (PET) data. In this contribution, we propose a method that addresses the possible alignment problem of the TOF-PET emission data and the computed tomography (CT) attenuation data, by combining reconstruction and registration. The method, called MLRR, iteratively reconstructs the activity image while registering the available CT-based attenuation image, so that the pair of activity and attenuation images maximise the likelihood of the TOF emission sinogram. The algorithm is slow to converge, but some acceleration could be achieved by using Nesterov’s momentum method and by applying a multi-resolution scheme for the non-rigid displacement estimation. The latter also helps to avoid local optima, although convergence to the global optimum cannot be guaranteed. The results are evaluated on 2D and 3D simulations as well as a respiratory gated clinical scan. Our experiments indicate that the proposed method is able to correct for possible misalignment of the CT-based attenuation image, and is therefore a very promising approach to suppressing attenuation artefacts in clinical PET/CT. When applied to respiratory gated data of a patient scan, it produced deformations that are compatible with breathing motion and which reduced the well known attenuation artefact near the dome of the liver. Since the method makes use of the energy-converted CT attenuation image, the scale problem of joint reconstruction is automatically solved.

  19. Robust optical and SAR multi-sensor image registration

    NASA Astrophysics Data System (ADS)

    Wu, Yingdan; Ming, Yang

    2015-10-01

    This paper proposes a robust matching method for multi-sensor imagery. First, SIFT feature matching and a relaxation matching method are integrated at the highest pyramid level to derive the approximate relationship between the reference and slave images. Then, normalized mutual information and a multi-grid, multi-level RANSAC algorithm are adopted to find the correct conjugate points. These steps are performed iteratively down to the original image level, and a facet-based transformation model is used to carry out the image registration. Experiments have been made, and the results show that the method in this paper can deliver a large number of evenly distributed conjugate points and realize accurate registration of optical and SAR multi-sensor imagery.

  20. Elastic image registration via rigid object motion induced deformation

    NASA Astrophysics Data System (ADS)

    Zheng, Xiaofen; Udupa, Jayaram K.; Hirsch, Bruce E.

    2011-03-01

    In this paper, we estimate the deformations induced on soft tissues by the rigid independent movements of hard objects and create an admixture of rigid and elastic adaptive image registration transformations. By automatically segmenting and independently estimating the movement of rigid objects in 3D images, we can maintain rigidity in bones and hard tissues while appropriately deforming soft tissues. We tested our algorithms on 20 pairs of 3D MRI datasets pertaining to a kinematic study of the flexibility of the ankle complex of normal feet as well as ankles affected by abnormalities in foot architecture and ligament injuries. The results show that elastic image registration via rigid object-induced deformation outperforms purely rigid and purely nonrigid approaches.

  1. 2D electron temperature diagnostic using soft x-ray imaging technique

    SciTech Connect

    Nishimura, K. Sanpei, A. Tanaka, H.; Ishii, G.; Kodera, R.; Ueba, R.; Himura, H.; Masamune, S.; Ohdachi, S.; Mizuguchi, N.

    2014-03-15

    We have developed a two-dimensional (2D) electron temperature (T_e) diagnostic system for thermal structure studies in a low-aspect-ratio reversed field pinch (RFP). The system consists of a soft x-ray (SXR) camera with two pinholes for two kinds of absorber foils, combined with a high-speed camera. Two SXR images with almost the same viewing area are formed through different absorber foils on a single micro-channel plate (MCP). A 2D T_e image can then be obtained by calculating the intensity ratio for each element of the images. We have succeeded in distinguishing the T_e image in quasi-single helicity (QSH) RFP states from that in multi-helicity (MH) states, where the former is characterized by a concentrated magnetic fluctuation spectrum and the latter by a broad spectrum of edge magnetic fluctuations.

  2. SU-E-J-237: Image Feature Based DRR and Portal Image Registration

    SciTech Connect

    Wang, X; Chang, J

    2014-06-01

    Purpose: Two-dimensional (2D) matching of the kV X-ray and digitally reconstructed radiography (DRR) images is an important setup technique for image-guided radiotherapy (IGRT). In our clinics, mutual information based methods are used for this purpose on commercial linear accelerators, but manual corrections are often needed. This work demonstrated the feasibility of using feature-based image transforms to register kV and DRR images. Methods: The scale invariant feature transform (SIFT) method was implemented to detect matching image details (or key points) between the kV and DRR images. These key points represent high image intensity gradients and thus scale invariant features. Due to the poor contrast of our kV images, direct application of the SIFT method yielded many detection errors. To assist the finding of key points, the center coordinates of the kV and DRR images were read from the DICOM header, and the two groups of key points with similar relative positions to their corresponding centers were paired up. Using these points, a rigid transform (with scaling, horizontal and vertical shifts) was estimated. We also artificially introduced vertical and horizontal shifts to test the accuracy of our registration method on anterior-posterior (AP) and lateral pelvic images. Results: The results provided a satisfactory overlay of the transformed kV onto the DRR image. The introduced vs. detected shifts were fit with a linear regression. In the AP image experiments, linear regression analysis showed a slope of 1.15 and 0.98 with an R² of 0.89 and 0.99 for the horizontal and vertical shifts, respectively. The corresponding values are 1.2 and 1.3 with R² values of 0.72 and 0.82 for the lateral image shifts. Conclusion: This work provides an alternative technique for kV to DRR alignment. Further improvements in estimation accuracy and image contrast tolerance are underway.
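
    A generic sketch of the SIFT-plus-rigid-fit idea using OpenCV is shown below; this is not the conference implementation, and the file names, the ratio-test threshold and the use of estimateAffinePartial2D are assumptions.

    ```python
    import cv2
    import numpy as np

    kv = cv2.imread("kv_image.png", cv2.IMREAD_GRAYSCALE)    # placeholder file names
    drr = cv2.imread("drr_image.png", cv2.IMREAD_GRAYSCALE)

    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(kv, None)
    kp2, des2 = sift.detectAndCompute(drr, None)

    # Lowe ratio test on brute-force matches to keep only distinctive key points
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    good = [m for m, n in matcher.knnMatch(des1, des2, k=2)
            if m.distance < 0.75 * n.distance]

    src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)

    # similarity transform (scaling + rotation + shifts), robust to remaining outliers
    M, inliers = cv2.estimateAffinePartial2D(src, dst, method=cv2.RANSAC)
    registered = cv2.warpAffine(kv, M, (drr.shape[1], drr.shape[0]))
    ```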

  3. The ANACONDA algorithm for deformable image registration in radiotherapy

    SciTech Connect

    Weistrand, Ola; Svensson, Stina

    2015-01-15

    Purpose: The purpose of this work was to describe a versatile algorithm for deformable image registration with applications in radiotherapy and to validate it on thoracic 4DCT data as well as CT/cone beam CT (CBCT) data. Methods: The ANAtomically CONstrained Deformation Algorithm (ANACONDA) combines image information (i.e., intensities) with anatomical information as provided by contoured image sets. The registration problem is formulated as a nonlinear optimization problem and solved with an in-house developed solver tailored to this problem. The objective function, which is minimized during optimization, is a linear combination of four nonlinear terms: (1) an image similarity term; (2) a grid regularization term, which aims at keeping the deformed image grid smooth and invertible; (3) a shape-based regularization term, which works to keep the deformation anatomically reasonable when regions of interest are present in the reference image; and (4) a penalty term, which is added to the optimization problem when controlling structures are used and is aimed at deforming the selected structure in the reference image to the corresponding structure in the target image. Results: To validate ANACONDA, the authors have used 16 publicly available thoracic 4DCT data sets for which target registration errors from several algorithms have been reported in the literature. On average over the 16 data sets, the target registration error is 1.17 ± 0.87 mm, the Dice similarity coefficient is 0.98 for the two lungs, and image similarity, measured by the correlation coefficient, is 0.95. The authors have also validated ANACONDA using two pelvic cases and one head and neck case with planning CT and daily acquired CBCT. Each image has been contoured by a physician (radiation oncologist) or experienced radiation therapist. The results are an improvement with respect to rigid registration. However, for the head and neck case, the sample set is too small to show statistical significance. Conclusions: ANACONDA

  4. A method of image registration for small animal, multi-modality imaging.

    PubMed

    Chow, Patrick L; Stout, David B; Komisopoulou, Evangelia; Chatziioannou, Arion F

    2006-01-21

    Many research institutions have a full suite of preclinical tomographic scanners to answer biomedical questions in vivo. Routine multi-modality imaging requires robust registration of images generated by various tomographs. We have implemented a hardware registration method for preclinical imaging that is similar to that used in the combined positron emission tomography (PET)/computed tomography (CT) scanners in the clinic. We designed an imaging chamber which can be rigidly and reproducibly mounted on separate microPET and microCT scanners. We have also designed a three-dimensional grid phantom with 1288 lines that is used to generate the spatial transformation matrix from software registration using a 15-parameter perspective model. The imaging chamber works in combination with the registration phantom synergistically to achieve the image registration goal. We verified that the average registration error between two imaging modalities is 0.335 mm using an in vivo mouse bone scan. This paper also estimates the impact of image misalignment on PET quantitation using attenuation corrections generated from misregistered images. Our technique is expected to produce PET quantitation errors of less than 5%. The methods presented are robust and appropriate for routine use in high throughput animal imaging facilities. PMID:16394345

  5. 3D multiple-point statistics simulation using 2D training images

    NASA Astrophysics Data System (ADS)

    Comunian, A.; Renard, P.; Straubhaar, J.

    2012-03-01

    One of the main issues in the application of multiple-point statistics (MPS) to the simulation of three-dimensional (3D) blocks is the lack of a suitable 3D training image. In this work, we compare three methods of overcoming this issue using information coming from bidimensional (2D) training images. One approach is based on the aggregation of probabilities. The other approaches are novel. One relies on merging the lists obtained using the impala algorithm from diverse 2D training images, creating a list of compatible data events that is then used for the MPS simulation. The other (s2Dcd) is based on sequential simulations of 2D slices constrained by the conditioning data computed at the previous simulation steps. These three methods are tested on the reproduction of two 3D images that are used as references, and on a real case study where two training images of sedimentary structures are considered. The tests show that it is possible to obtain 3D MPS simulations with at least two 2D training images. The simulations obtained, in particular those obtained with the s2Dcd method, are close to the references, according to a number of comparison criteria. The CPU time required to simulate with the method s2Dcd is from two to four orders of magnitude smaller than the one required by a MPS simulation performed using a 3D training image, while the results obtained are comparable. This computational efficiency and the possibility of using MPS for 3D simulation without the need for a 3D training image facilitates the inclusion of MPS in Monte Carlo, uncertainty evaluation, and stochastic inverse problems frameworks.

  6. Snapshot 2D tomography via coded aperture x-ray scatter imaging

    PubMed Central

    MacCabe, Kenneth P.; Holmgren, Andrew D.; Tornai, Martin P.; Brady, David J.

    2015-01-01

    This paper describes a fan beam coded aperture x-ray scatter imaging system which acquires a tomographic image from each snapshot. This technique exploits cylindrical symmetry of the scattering cross section to avoid the scanning motion typically required by projection tomography. We use a coded aperture with a harmonic dependence to determine range, and a shift code to determine cross-range. Here we use a forward-scatter configuration to image 2D objects and use serial exposures to acquire tomographic video of motion within a plane. Our reconstruction algorithm also estimates the angular dependence of the scattered radiance, a step toward materials imaging and identification. PMID:23842254

  7. Combining 2D synchrosqueezed wave packet transform with optimization for crystal image analysis

    NASA Astrophysics Data System (ADS)

    Lu, Jianfeng; Wirth, Benedikt; Yang, Haizhao

    2016-04-01

    We develop a variational optimization method for crystal analysis in atomic resolution images, which uses information from a 2D synchrosqueezed transform (SST) as input. The synchrosqueezed transform is applied to extract initial information from atomic crystal images: crystal defects, rotations and the gradient of elastic deformation. The deformation gradient estimate is then improved outside the identified defect region via a variational approach, to obtain more robust results agreeing better with the physical constraints. The variational model is optimized by a nonlinear projected conjugate gradient method. Both examples of images from computer simulations and imaging experiments are analyzed, with results demonstrating the effectiveness of the proposed method.

  8. Avoiding symmetry-breaking spatial non-uniformity in deformable image registration via a quasi-volume-preserving constraint.

    PubMed

    Aganj, Iman; Reuter, Martin; Sabuncu, Mert R; Fischl, Bruce

    2015-02-01

    The choice of a reference image typically influences the results of deformable image registration, thereby making it asymmetric. This is a consequence of a spatially non-uniform weighting in the cost function integral that leads to general registration inaccuracy. The inhomogeneous integral measure--which is the local volume change in the transformation, thus varying through the course of the registration--causes image regions to contribute differently to the objective function. More importantly, the optimization algorithm is allowed to minimize the cost function by manipulating the volume change, instead of aligning the images. The approaches that restore symmetry to deformable registration successfully achieve inverse-consistency, but do not eliminate the regional bias that is the source of the error. In this work, we address the root of the problem: the non-uniformity of the cost function integral. We introduce a new quasi-volume-preserving constraint that allows for volume change only in areas with well-matching image intensities, and show that such a constraint puts a bound on the error arising from spatial non-uniformity. We demonstrate the advantages of adding the proposed constraint to standard (asymmetric and symmetrized) demons and diffeomorphic demons algorithms through experiments on synthetic images, and real X-ray and 2D/3D brain MRI data. Specifically, the results show that our approach leads to image alignment with more accurate matching of manually defined neuroanatomical structures, better tradeoff between image intensity matching and registration-induced distortion, improved native symmetry, and lower susceptibility to local optima. In summary, the inclusion of this space- and time-varying constraint leads to better image registration along every dimension that we have measured it. PMID:25449738

  9. A comparison of 2D and 3D digital image correlation for a membrane under inflation

    NASA Astrophysics Data System (ADS)

    Murienne, Barbara J.; Nguyen, Thao D.

    2016-02-01

    Three-dimensional (3D) digital image correlation (DIC) is becoming widely used to characterize the behavior of structures undergoing 3D deformations. However, the use of 3D-DIC can be challenging under certain conditions, such as high magnification, and therefore small depth of field, or a highly controlled environment with limited access for two angled cameras. The purpose of this study is to compare 2D-DIC and 3D-DIC for the same inflation experiment and evaluate whether 2D-DIC can be used when conditions discourage the use of a stereo-vision system. A latex membrane was inflated vertically to 5.41 kPa (reference pressure), then to 7.87 kPa (deformed pressure). A two-camera stereo-vision system acquired top-down images of the membrane, while a single camera system simultaneously recorded images of the membrane in profile. 2D-DIC and 3D-DIC were used to calculate horizontal (in the membrane plane) and vertical (out of the membrane plane) displacements, and meridional strain. Under static conditions, the baseline uncertainties in horizontal displacement and strain were smaller for 3D-DIC than for 2D-DIC. However, the opposite was observed for the vertical displacement, for which 2D-DIC had a smaller baseline uncertainty. The baseline absolute errors in vertical displacement and strain were similar for both DIC methods, but the error was larger for 2D-DIC than for 3D-DIC in the horizontal displacement. Under inflation, the variability in the measurements was larger than under static conditions for both DIC methods. 2D-DIC showed a smaller variability in displacements than 3D-DIC, especially for the vertical displacement, but a similar strain uncertainty. The absolute difference in the average displacements and strain between 3D-DIC and 2D-DIC was in the range of the 3D-DIC variability. These findings suggest that 2D-DIC might be used as an alternative to 3D-DIC to study the inflation response of materials under certain conditions.

  10. A new usage of ASIFT for the range image registration

    NASA Astrophysics Data System (ADS)

    Liu, Chun-Yang; Li, Dong; Tian, Jin-Dong

    2014-11-01

    This paper addresses the registration problem for range images of views that have overlapping area and may include substantial noise. The current state of the art in range image registration is best represented by the well-known iterative closest point (ICP) algorithm and numerous variations on it. Although this method is effective in many domains, it nevertheless suffers from two key limitations: it requires prealignment of the range surfaces to a reasonable starting point, and it is not robust to outliers arising either from noise or from low surface overlap. This paper proposes a new approach that avoids these problems for precision range image registration, using a new, robust method based on ASIFT followed by ICP. To date, this approach has been evaluated experimentally. We define a fitness function based on the time required for the convergence stage of ICP, because this time is very important. ASIFT is capable of image matching even under full affine variation. The novel ICP search algorithm we present following ASIFT offers much faster convergence than prior ICP methods and ensures more precise alignments, even in the presence of significant noise, than mean squared error or other well-known robust cost functions.
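
    For readers unfamiliar with the refinement stage, a bare-bones point-to-point ICP iteration (a generic sketch, not the authors' variant) is given below; the convergence thresholds and the use of a KD-tree are illustrative choices.

    ```python
    import numpy as np
    from scipy.spatial import cKDTree

    def icp(source, target, iterations=50, tol=1e-6):
        """Rigidly align `source` (N, 3) to `target` (M, 3) by point-to-point ICP."""
        src = source.copy()
        R_total, t_total = np.eye(3), np.zeros(3)
        tree = cKDTree(target)
        prev_err = np.inf
        for _ in range(iterations):
            dist, idx = tree.query(src)            # closest-point correspondences
            matched = target[idx]
            # best-fit rigid transform via SVD (Kabsch algorithm)
            mu_s, mu_t = src.mean(0), matched.mean(0)
            H = (src - mu_s).T @ (matched - mu_t)
            U, _, Vt = np.linalg.svd(H)
            R = Vt.T @ U.T
            if np.linalg.det(R) < 0:               # avoid reflections
                Vt[-1] *= -1
                R = Vt.T @ U.T
            t = mu_t - R @ mu_s
            src = src @ R.T + t                    # apply the incremental transform
            R_total, t_total = R @ R_total, R @ t_total + t
            err = dist.mean()
            if abs(prev_err - err) < tol:          # stop when the error stabilises
                break
            prev_err = err
        return R_total, t_total, src
    ```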

  11. Estimation of lung lobar sliding using image registration

    NASA Astrophysics Data System (ADS)

    Amelon, Ryan; Cao, Kunlin; Reinhardt, Joseph M.; Christensen, Gary E.; Raghavan, Madhavan

    2012-03-01

    MOTIVATION: The lobes of the lungs slide relative to each other during breathing. Quantifying lobar sliding can aid in better understanding lung function, better modeling of lung dynamics, and a better understanding of the limits of image registration performance near fissures. We have developed a method to estimate lobar sliding in the lung from image registration of CT scans. METHODS: Six human lungs were analyzed using CT scans spanning functional residual capacity (FRC) to total lung capacity (TLC). The lung lobes were segmented and registered on a lobe-by-lobe basis. The displacement fields from the independent lobe registrations were then combined into a single image. This technique allows for displacement discontinuity at lobar boundaries. The displacement field was then analyzed as a continuum by forming finite elements from the voxel grid of the FRC image. Elements at a discontinuity will appear to have undergone significantly elevated 'shear stretch' compared to those within the parenchyma. Shear stretch is shown to be a good measure of sliding magnitude in this context. RESULTS: The sliding map clearly delineated the fissures of the lung. The fissure between the right upper and right lower lobes showed the greatest sliding in all subjects while the fissure between the right upper and right middle lobe showed the least sliding.
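
    One way to compute such a shear measure from a voxel-wise displacement field is sketched below; this is a generic continuum-mechanics formulation, and the exact "shear stretch" definition used by the authors may differ.

    ```python
    import numpy as np

    def max_shear_from_displacement(u, spacing=(1.0, 1.0, 1.0)):
        """Voxel-wise maximum shear from a displacement field u of shape (3, Z, Y, X).

        F = I + grad(u); the maximum shear is taken here as half the difference
        between the largest and smallest principal stretches of C = F^T F
        (one common continuum-mechanics convention, assumed for illustration).
        """
        grads = [np.gradient(u[i], *spacing) for i in range(3)]   # du_i/dx_j
        shape = u.shape[1:]
        F = np.zeros(shape + (3, 3))
        for i in range(3):
            for j in range(3):
                F[..., i, j] = grads[i][j]
            F[..., i, i] += 1.0
        C = np.einsum('...ki,...kj->...ij', F, F)                 # right Cauchy-Green tensor
        eigvals = np.linalg.eigvalsh(C)                           # ascending order
        stretches = np.sqrt(np.clip(eigvals, 0.0, None))
        return 0.5 * (stretches[..., -1] - stretches[..., 0])
    ```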

  12. 2D Doppler backscattering using synthetic aperture microwave imaging of MAST edge plasmas

    NASA Astrophysics Data System (ADS)

    Thomas, D. A.; Brunner, K. J.; Freethy, S. J.; Huang, B. K.; Shevchenko, V. F.; Vann, R. G. L.

    2016-02-01

    Doppler backscattering (DBS) is already established as a powerful diagnostic; its extension to 2D enables imaging of turbulence characteristics from an extended region of the cut-off surface. The Synthetic Aperture Microwave Imaging (SAMI) diagnostic has conducted proof-of-principle 2D DBS experiments of MAST edge plasma. SAMI actively probes the plasma edge using a wide (±40° vertical and horizontal) and tuneable (10-34.5 GHz) beam. The Doppler backscattered signal is digitised in vector form using an array of eight Vivaldi PCB antennas. This allows the receiving array to be focused in any direction within the field of view simultaneously to an angular range of 6-24° FWHM at 10-34.5 GHz. This capability is unique to SAMI and is a novel way of conducting DBS experiments. In this paper the feasibility of conducting 2D DBS experiments is explored. Initial observations of phenomena previously measured by conventional DBS experiments are presented; such as momentum injection from neutral beams and an abrupt change in power and turbulence velocity coinciding with the onset of H-mode. In addition, being able to carry out 2D DBS imaging allows a measurement of magnetic pitch angle to be made; preliminary results are presented. Capabilities gained through steering a beam using a phased array and the limitations of this technique are discussed.

  13. 3D registration through pseudo x-ray image generation.

    PubMed

    Domergue, G; Viant, W J

    2000-01-01

    One of the less effective processes within current Computer Assisted Surgery systems, utilizing pre-operative planning, is the registration of the plan with the intra-operative position of the patient. The technique described in this paper requires no digitisation of anatomical features or fiducial markers but instead relies on image matching between pseudo and real x-ray images generated by a virtual and a real image intensifier respectively. The technique is an extension to the work undertaken by Weese [1]. PMID:10977585

  14. Registration of DRRs and portal images for verification of stereotactic body radiotherapy: a feasibility study in lung cancer treatment

    NASA Astrophysics Data System (ADS)

    Künzler, Thomas; Grezdo, Jozef; Bogner, Joachim; Birkfellner, Wolfgang; Georg, Dietmar

    2007-04-01

    at the periphery of the lung, close to backbone or diaphragm. Moreover, tumour movement during shallow breathing strongly influences image acquisition for patient positioning. Recapitulating, 2D/3D image registration for lung tumours is an attractive alternative compared to conventional CT verification of the tumour position. Nevertheless, size and location of the tumour are limiting parameters for an accurate registration process.

  15. Automatic registration of terrestrial point clouds based on panoramic reflectance images and efficient BaySAC

    NASA Astrophysics Data System (ADS)

    Kang, Zhizhong

    2013-10-01

    This paper presents a new approach to automatic registration of terrestrial laser scanning (TLS) point clouds utilizing a novel robust estimation method, an efficient BaySAC (BAYes SAmpling Consensus). The proposed method directly generates reflectance images from the 3D point clouds and then extracts keypoints with the SIFT algorithm to identify corresponding image points. The 3D corresponding points, from which transformation parameters between point clouds are computed, are acquired by mapping the 2D ones onto the point cloud. To remove falsely accepted correspondences, we implement a conditional sampling method that selects the n data points with the highest inlier probabilities as a hypothesis set and updates the inlier probabilities of each data point using a simplified Bayes' rule, in order to improve computational efficiency. The prior probability is estimated by verifying the distance invariance between correspondences. The proposed approach is tested on four data sets acquired by three different scanners. The results show that, compared with RANSAC, BaySAC requires fewer iterations and lower computation cost when the hypothesis set is contaminated with more outliers. The registration results also indicate that the proposed algorithm achieves high registration accuracy on all experimental datasets.
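
    The hypothesis-selection step can be sketched generically as follows (a heavily simplified illustration of the BaySAC idea, not the authors' implementation; `fit_model`, `residuals` and the probability update are assumed placeholders): instead of random sampling, the n correspondences with the highest current inlier probabilities are chosen, and the probabilities are updated after each test.

    ```python
    import numpy as np

    def baysac(data, fit_model, residuals, n_sample, threshold, iterations=100):
        """Simplified BaySAC-style loop.

        `fit_model(subset)` and `residuals(model, data)` are user-supplied,
        assumed functions; `data` is an (N, ...) array of putative correspondences.
        """
        p = np.full(len(data), 0.5)                 # prior inlier probabilities
        best_model, best_inliers = None, 0
        for _ in range(iterations):
            hyp = np.argsort(p)[-n_sample:]         # deterministic: most probable points
            model = fit_model(data[hyp])
            inlier_mask = residuals(model, data) < threshold
            if inlier_mask.sum() > best_inliers:
                best_model, best_inliers = model, inlier_mask.sum()
            # crude stand-in for the Bayes update: points used in a contaminated
            # hypothesis set become less credible on the next round
            if inlier_mask[hyp].sum() < n_sample:
                p[hyp] *= 0.9
        return best_model
    ```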

  16. Using Membrane Computing for Obtaining Homology Groups of Binary 2D Digital Images

    NASA Astrophysics Data System (ADS)

    Christinal, Hepzibah A.; Díaz-Pernil, Daniel; Jurado, Pedro Real

    Membrane Computing is a new paradigm inspired by cellular communication. Until now, P systems have been used in research areas such as modeling chemical processes, ecosystems, etc. In this paper, we apply P systems to Computational Topology within the context of digital images. We work with a variant of P systems called tissue-like P systems to calculate, in a general maximally parallel manner, the homology groups of 2D images. In fact, homology computation for binary pixel-based 2D digital images can be reduced to connected component labeling of white and black regions. Finally, we use a software tool called Tissue Simulator to show with some examples how these systems work.
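
    As the abstract notes, for binary 2D images the homology computation reduces to connected-component labelling of black and white regions; a conventional (non-membrane-computing) sketch of the corresponding Betti numbers, with the connectivity conventions chosen here as assumptions, is:

    ```python
    import numpy as np
    from scipy import ndimage

    def betti_numbers_2d(binary):
        """b0 and b1 of a binary 2D image (foreground = True).

        b0: number of connected foreground components (8-connectivity).
        b1: number of holes, i.e. background components (4-connectivity)
            that do not touch the image border.
        """
        fg_structure = np.ones((3, 3), dtype=int)          # 8-connectivity
        _, b0 = ndimage.label(binary, structure=fg_structure)

        bg_labels, n_bg = ndimage.label(~binary)            # default 4-connectivity
        border = np.unique(np.concatenate([bg_labels[0, :], bg_labels[-1, :],
                                           bg_labels[:, 0], bg_labels[:, -1]]))
        border = border[border != 0]                        # ignore foreground pixels
        b1 = n_bg - len(border)                              # holes = enclosed bg components
        return b0, b1
    ```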

  17. A fast rigid-registration method of inferior limb X-ray image and 3D CT images for TKA surgery

    NASA Astrophysics Data System (ADS)

    Ito, Fumihito; O. D. A, Prima; Uwano, Ikuko; Ito, Kenzo

    2010-03-01

    In this paper, we propose a fast rigid-registration method for inferior limb X-ray films (two-dimensional Computed Radiography (CR) images) and three-dimensional Computed Tomography (CT) images for Total Knee Arthroplasty (TKA) surgery planning. The position of each bone, such as the femur and tibia (shin bone), differs slightly between the X-ray film and the 3D CT images, and care must be taken in how the two images are used, since the X-ray film is captured in the standing position and the 3D CT is captured in the decubitus (face-up) position. Although conventional registration mainly uses a cross-correlation function between the two images together with optimization techniques, it requires enormous calculation time and is difficult to use in interactive operation. In order to solve these problems, we calculate the center lines (bone axes) of the femur and tibia automatically and use them as initial positions for the registration. We evaluate our registration method using three patients' image data, and we compare our proposed method with a conventional registration that uses the downhill simplex algorithm. The downhill simplex method is an optimization algorithm that requires only function evaluations and does not need the calculation of derivatives. Our registration method is more effective than the downhill simplex method in terms of computation time and convergence stability. We have developed an implant simulation system on a personal computer in order to support the surgeon in preoperative planning of TKA. Our registration method is implemented in the simulation system, and the user can manipulate 2D/3D translucent templates of implant components on the X-ray film and 3D CT images.
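
    For readers unfamiliar with the baseline, downhill-simplex optimization of a rigid 2D similarity measure can be sketched as follows; this is a generic illustration in which the SSD metric, the parameterization and the SciPy usage are assumptions, not the paper's code.

    ```python
    import numpy as np
    from scipy.optimize import minimize
    from scipy.ndimage import affine_transform

    def rigid_ssd(params, fixed, moving):
        """Sum of squared differences after a rigid (rotation + shift) resampling of `moving`."""
        theta, ty, tx = params
        c, s = np.cos(theta), np.sin(theta)
        R = np.array([[c, -s], [s, c]])
        center = (np.array(moving.shape) - 1) / 2.0
        # affine_transform maps output coords o to input coords R @ o + offset
        offset = center - R @ center + np.array([ty, tx])
        warped = affine_transform(moving, R, offset=offset, order=1)
        return float(np.sum((fixed - warped) ** 2))

    def register_rigid(fixed, moving, x0=(0.0, 0.0, 0.0)):
        res = minimize(rigid_ssd, x0, args=(fixed, moving),
                       method='Nelder-Mead',           # downhill simplex
                       options={'xatol': 1e-4, 'fatol': 1e-2, 'maxiter': 500})
        return res.x                                    # (theta, ty, tx)
    ```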

  18. Concepts for on-board satellite image registration, volume 1

    NASA Technical Reports Server (NTRS)

    Ruedger, W. H.; Daluge, D. R.; Aanstoos, J. V.

    1980-01-01

    The NASA-NEEDS program goals present a requirement for on-board signal processing to achieve user-compatible, information-adaptive data acquisition. One very specific area of interest is the preprocessing required to register imaging sensor data which have been distorted by anomalies in subsatellite-point position and/or attitude control. The concepts and considerations involved in using state-of-the-art positioning systems such as the Global Positioning System (GPS) in concert with state-of-the-art attitude stabilization and/or determination systems to provide the required registration accuracy are discussed with emphasis on assessing the accuracy to which a given image picture element can be located and identified, determining those algorithms required to augment the registration procedure and evaluating the technology impact on performing these procedures on-board the satellite.

  19. Adaptive registration of diffusion tensor images on Lie groups

    NASA Astrophysics Data System (ADS)

    Liu, Wei; Chen, LeiTing; Cai, HongBin; Qiu, Hang; Fei, Nanxi

    2016-08-01

    With diffusion tensor imaging (DTI), more exquisite information on tissue microstructure is provided for medical image processing. In this paper, we present a locally adaptive topology preserving method for DTI registration on Lie groups. The method aims to obtain more plausible diffeomorphisms for spatial transformations via accurate approximation for the local tangent space on the Lie group manifold. In order to capture an exact geometric structure of the Lie group, the local linear approximation is efficiently optimized by using the adaptive selection of the local neighborhood sizes on the given set of data points. Furthermore, numerical comparative experiments are conducted on both synthetic data and real DTI data to demonstrate that the proposed method yields a higher degree of topology preservation on a dense deformation tensor field while improving the registration accuracy.

  1. 2D and 3D MALDI-imaging: conceptual strategies for visualization and data mining.

    PubMed

    Thiele, Herbert; Heldmann, Stefan; Trede, Dennis; Strehlow, Jan; Wirtz, Stefan; Dreher, Wolfgang; Berger, Judith; Oetjen, Janina; Kobarg, Jan Hendrik; Fischer, Bernd; Maass, Peter

    2014-01-01

    registration techniques. Different strategies for automatic serial image registration applied to MS datasets are outlined in detail. The third image modality is histology driven, i.e. a digital scan of the histological stained slices in high-resolution. After fusion of reconstructed scan images and MRI the slice-related coordinates of the mass spectra can be propagated into 3D-space. After image registration of scan images and histological stained images, the anatomical information from histology is fused with the mass spectra from MALDI-MSI. As a result of the described pipeline we have a set of 3 dimensional images representing the same anatomies, i.e. the reconstructed slice scans, the spectral images as well as corresponding clustering results, and the acquired MRI. Great emphasis is put on the fact that the co-registered MRI providing anatomical details improves the interpretation of 3D MALDI images. The ability to relate mass spectrometry derived molecular information with in vivo and in vitro imaging has potentially important implications. This article is part of a Special Issue entitled: Computational Proteomics in the Post-Identification Era. Guest Editors: Martin Eisenacher and Christian Stephan. PMID:23467008

  2. Diagnostic possibilities with multidimensional images in head and neck area using efficient registration and visualization methods

    NASA Astrophysics Data System (ADS)

    Zeilhofer, Hans-Florian U.; Krol, Zdzislaw; Sader, Robert; Hoffmann, Karl-Heinz; Gerhardt, Paul; Schweiger, Markus; Horch, Hans-Henning

    1997-05-01

    For several diseases in the head and neck area, different imaging modalities are applied to the same patient. Each of these image data sets has its specific advantages and disadvantages. The combination of different methods allows the best use to be made of the advantageous properties of each method while minimizing the impact of its negative aspects. Soft tissue alterations can be judged better in an MRI image, while they may be unrecognizable in the corresponding CT. Bone tissue, on the other hand, is optimally imaged in CT. Inflammatory nuclei of the bone can be detected best by their increased signal in SPECT. Only the combination of all modalities lets the physician come to an exact statement on pathological processes that involve multiple tissue structures. Several surface- and voxel-based matching functions that we have tested allowed a precise merging by means of numerical optimization methods such as simulated annealing, without the complicated application of fiducial markers or the localization of landmarks in 2D cross-sectional slice images. The quality of the registration depends on the choice of the optimization procedure according to the complexity of the matching-function landscape. Precise correlation of the multimodal head and neck area images, together with 2D and 3D presentation techniques, provides a valuable tool for physicians.

  3. Improving multispectral satellite image compression using onboard subpixel registration

    NASA Astrophysics Data System (ADS)

    Albinet, Mathieu; Camarero, Roberto; Isnard, Maxime; Poulet, Christophe; Perret, Jokin

    2013-09-01

    Future CNES earth observation missions will have to deal with an ever-increasing telemetry data rate due to improvements in resolution and the addition of spectral bands. Current CNES image compressors implement a discrete wavelet transform (DWT) followed by a bit-plane encoding (BPE), but only on a mono-spectral basis, and do not profit from the multispectral redundancy of the observed scenes. Recent CNES studies have proven a substantial gain in the achievable compression ratio, +20% to +40% on selected scenarios, by implementing a multispectral compression scheme based on a Karhunen-Loeve transform (KLT) followed by the classical DWT+BPE. But such results can be achieved only on perfectly registered bands; a registration error as low as 0.5 pixel ruins all the benefits of multispectral compression. In this work, we first study the possibility of implementing multi-band subpixel onboard registration based on registration grids generated on-the-fly by the satellite attitude control system and simplified resampling and interpolation techniques. Indeed, band registration is usually performed on the ground using sophisticated techniques that are too computationally intensive for onboard use. This fully quantized algorithm is tuned to meet acceptable registration performance within stringent image quality criteria, with the objective of onboard real-time processing. In a second part, we describe an FPGA implementation developed to evaluate the design complexity and, by extrapolation, the data rate achievable on a space-qualified ASIC. Finally, we present the impact of this approach on the processing chain, not only onboard but also on the ground, and the impact on the design of the instrument.
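
    For orientation only, the sketch below shows the spectral decorrelation that a Karhunen-Loeve transform performs across registered bands (essentially a PCA over the band-to-band covariance). The registration, DWT and BPE stages of the CNES chain are not reproduced, and the data cube is synthetic.

```python
# Sketch only: Karhunen-Loeve transform across the spectral bands of a registered
# multispectral cube, i.e. decorrelation of bands before a DWT+BPE stage (not shown).
import numpy as np

def klt_forward(cube):
    """cube: (bands, rows, cols). Returns decorrelated components, eigenvectors, mean."""
    bands, rows, cols = cube.shape
    X = cube.reshape(bands, -1).astype(np.float64)
    mean = X.mean(axis=1, keepdims=True)
    Xc = X - mean
    cov = Xc @ Xc.T / Xc.shape[1]            # band-to-band covariance
    _, V = np.linalg.eigh(cov)               # eigenvectors, ascending eigenvalues
    V = V[:, ::-1]                           # strongest component first
    comps = V.T @ Xc
    return comps.reshape(bands, rows, cols), V, mean

def klt_inverse(comps, V, mean):
    bands, rows, cols = comps.shape
    X = V @ comps.reshape(bands, -1) + mean
    return X.reshape(bands, rows, cols)

rng = np.random.default_rng(1)
base = rng.random((64, 64))
cube = np.stack([base + 0.05 * rng.random((64, 64)) for _ in range(4)])  # correlated bands
comps, V, mean = klt_forward(cube)
print("reconstruction error:", np.abs(klt_inverse(comps, V, mean) - cube).max())
```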

  4. Synthetic aperture radar/LANDSAT MSS image registration

    NASA Technical Reports Server (NTRS)

    Maurer, H. E. (Editor); Oberholtzer, J. D. (Editor); Anuta, P. E. (Editor)

    1979-01-01

    Algorithms and procedures necessary to merge aircraft synthetic aperture radar (SAR) and LANDSAT multispectral scanner (MSS) imagery were determined. The design of a SAR/LANDSAT data merging system was developed. Aircraft SAR images were registered to the corresponding LANDSAT MSS scenes and were the subject of experimental investigations. Results indicate that the registration of SAR imagery with LANDSAT MSS imagery is feasible from a technical viewpoint, and useful from an information-content viewpoint.

  5. Registration scheme suitable to Mueller matrix imaging for biomedical applications

    NASA Astrophysics Data System (ADS)

    Guyot, Steve; Anastasiadou, Makrina; Deléchelle, Eric; de Martino, Antonello

    2007-06-01

    Most Mueller matrix imaging polarimeters implement sequential acquisition of at least 16 raw images of the same object with different incident and detected light polarizations. When this technique is implemented in vivo, the unavoidable motions of the subject may shift and distort the raw images to an extent such that the final Mueller images cannot be extracted. We describe a registration algorithm which solves this problem for the typical conditions of in vivo imaging, e.g. with spatially inhomogeneous medium to strong depolarization. The algorithm, based on the so called “optical flow,” is validated experimentally by comparing the Mueller images of a pig skin sample taken in static and in dynamic conditions.

  6. Explicit B-spline regularization in diffeomorphic image registration

    PubMed Central

    Tustison, Nicholas J.; Avants, Brian B.

    2013-01-01

    Diffeomorphic mappings are central to image registration due largely to their topological properties and success in providing biologically plausible solutions to deformation and morphological estimation problems. Popular diffeomorphic image registration algorithms include those characterized by time-varying and constant velocity fields, and symmetrical considerations. Prior information in the form of regularization is used to enforce transform plausibility taking the form of physics-based constraints or through some approximation thereof, e.g., Gaussian smoothing of the vector fields [a la Thirion's Demons (Thirion, 1998)]. In the context of the original Demons' framework, the so-called directly manipulated free-form deformation (DMFFD) (Tustison et al., 2009) can be viewed as a smoothing alternative in which explicit regularization is achieved through fast B-spline approximation. This characterization can be used to provide B-spline “flavored” diffeomorphic image registration solutions with several advantages. Implementation is open source and available through the Insight Toolkit and our Advanced Normalization Tools (ANTs) repository. A thorough comparative evaluation with the well-known SyN algorithm (Avants et al., 2008), implemented within the same framework, and its B-spline analog is performed using open labeled brain data and open source evaluation tools. PMID:24409140

  7. Diffusion tensor image registration using tensor geometry and orientation features.

    PubMed

    Yang, Jinzhong; Shen, Dinggang; Davatzikos, Christos; Verma, Ragini

    2008-01-01

    This paper presents a method for deformable registration of diffusion tensor (DT) images that integrates geometry and orientation features into a hierarchical matching framework. The geometric feature is derived from the structural geometry of diffusion and characterizes the shape of the tensor in terms of prolateness, oblateness, and sphericity of the tensor. Local spatial distributions of the prolate, oblate, and spherical geometry are used to create an attribute vector of geometric feature for matching. The orientation feature improves the matching of the WM fiber tracts by taking into account the statistical information of underlying fiber orientations. These features are incorporated into a hierarchical deformable registration framework to develop a diffusion tensor image registration algorithm. Extensive experiments on simulated and real brain DT data establish the superiority of this algorithm for deformable matching of diffusion tensors, thereby aiding in atlas creation. The robustness of the method makes it potentially useful for group-based analysis of DT images acquired in large studies to identify disease-induced and developmental changes. PMID:18982691
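
    A standard way to express the prolate/oblate/spherical shape of a diffusion tensor is through Westin's linear, planar and spherical measures computed from the sorted eigenvalues. The sketch below shows only these scalar measures, not the paper's full attribute vector or hierarchical matching; the tensors are synthetic.

```python
# Sketch only: Westin's linear/planar/spherical measures as prolate/oblate/spherical
# shape descriptors of an SPD diffusion tensor.
import numpy as np

def westin_measures(D):
    """Return (c_linear, c_planar, c_spherical) for an SPD diffusion tensor D."""
    lam = np.sort(np.linalg.eigvalsh(D))[::-1]    # lambda1 >= lambda2 >= lambda3
    total = lam.sum() + 1e-12
    cl = (lam[0] - lam[1]) / total                # prolate (cigar-shaped)
    cp = 2.0 * (lam[1] - lam[2]) / total          # oblate (disc-shaped)
    cs = 3.0 * lam[2] / total                     # spherical (isotropic)
    return cl, cp, cs                             # cl + cp + cs == 1

prolate = np.diag([1.7e-3, 0.3e-3, 0.3e-3])
oblate = np.diag([1.2e-3, 1.2e-3, 0.2e-3])
isotropic = np.diag([0.8e-3, 0.8e-3, 0.8e-3])
for name, D in [("prolate", prolate), ("oblate", oblate), ("isotropic", isotropic)]:
    print(name, np.round(westin_measures(D), 3))
```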

  8. Regional lung function and mechanics using image registration

    NASA Astrophysics Data System (ADS)

    Ding, Kai

    The main function of the respiratory system is gas exchange. Since many disease or injury conditions can cause biomechanical or material property changes that can alter lung function, there is a great interest in measuring regional lung function and mechanics. In this thesis, we present a technique that uses multiple respiratory-gated CT images of the lung acquired at different levels of inflation with both breath-hold static scans and retrospectively reconstructed 4D dynamic scans, along with non-rigid 3D image registration, to make local estimates of lung tissue function and mechanics. We validate our technique using anatomical landmarks and functional Xe-CT estimated specific ventilation. The major contributions of this thesis include: (1) developing the registration derived regional expansion estimation approach in breath-hold static scans and dynamic 4DCT scans, (2) developing a method to quantify lobar sliding from image registration derived displacement field, (3) developing a method for measurement of radiation-induced pulmonary function change following a course of radiation therapy, (4) developing and validating different ventilation measures in 4DCT. The ability of our technique to estimate regional lung mechanics and function as a surrogate of the Xe-CT ventilation imaging for the entire lung from quickly and easily obtained respiratory-gated images, is a significant contribution to functional lung imaging because of the potential increase in resolution, and large reductions in imaging time, radiation, and contrast agent exposure. Our technique may be useful to detect and follow the progression of lung disease such as COPD, may be useful as a planning tool during RT planning, may be useful for tracking the progression of toxicity to nearby normal tissue during RT, and can be used to evaluate the effectiveness of a treatment post-therapy.
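
    Registration-derived regional expansion is typically read off the Jacobian determinant of the displacement field. The sketch below computes it with finite differences on a synthetic field (isotropic unit voxel spacing assumed); it illustrates only this one ingredient of the pipeline described above.

```python
# Sketch only: regional volume change from the Jacobian determinant of a displacement
# field u (in voxels, isotropic spacing). J > 1 means local expansion, J < 1 contraction.
import numpy as np

def jacobian_determinant(u):
    """u: displacement field of shape (3, Z, Y, X). Returns det(I + grad(u)) per voxel."""
    grads = np.stack([np.stack(np.gradient(u[i]), axis=0) for i in range(3)], axis=0)
    # grads[i, j] = d u_i / d x_j ; build F = I + grad(u) at every voxel
    F = grads.transpose(2, 3, 4, 0, 1) + np.eye(3)
    return np.linalg.det(F)

# Synthetic uniform 10% expansion along z: u_z = 0.1 * z, u_y = u_x = 0.
Z, Y, X = 16, 16, 16
zz = np.arange(Z, dtype=float).reshape(Z, 1, 1) * np.ones((Z, Y, X))
u = np.stack([0.1 * zz, np.zeros((Z, Y, X)), np.zeros((Z, Y, X))])
J = jacobian_determinant(u)
print("mean Jacobian determinant (expect ~1.1):", round(float(J.mean()), 3))
```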

  9. Physical Constraint Finite Element Model for Medical Image Registration

    PubMed Central

    Zhang, Jingya; Wang, Jiajun; Wang, Xiuying; Gao, Xin; Feng, Dagan

    2015-01-01

    Because they are derived from a linear assumption, most elastic-body-based non-rigid image registration algorithms face challenges for soft tissues with complex nonlinear behavior and large deformations. To take into account the geometric nonlinearity of soft tissues, we propose a registration algorithm on the basis of the Newtonian differential equation. The material behavior of soft tissues is modeled as St. Venant-Kirchhoff elasticity, and the nonlinearity of the continuum is represented by the quadratic term of the deformation gradient under the Green-St. Venant strain. In our algorithm, the elastic force is formulated as the derivative of the deformation energy with respect to the nodal displacement vectors of the finite element; the external force is determined by the registration similarity gradient flow, which drives the floating image to deform toward the equilibrium condition. We compared our approach to three other models: 1) the conventional linear elastic finite element model (FEM); 2) the dynamic elastic FEM; 3) the robust block matching (RBM) method. The registration accuracy was measured using three similarities: MSD (Mean Square Difference), NC (Normalized Correlation) and NMI (Normalized Mutual Information), and was also measured using the mean and maximum distance between the ground-truth seeds and the corresponding ones after registration. We validated our method on 60 image pairs, including 30 medical image pairs with artificial deformation and 30 clinical image pairs covering both chest chemotherapy treatment in different periods and brain MRI normalization. Our method achieved a distance error of 0.320±0.138 mm in the x direction and 0.326±0.111 mm in the y direction, MSD of 41.96±13.74, NC of 0.9958±0.0019, and NMI of 1.2962±0.0114 for images with large artificial deformations; and average NC of 0.9622±0.008 and NMI of 1.2764±0.0089 for the real clinical cases. Student’s t-test demonstrated that our model statistically outperformed the other methods in comparison (p
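
    The strain-energy density of a St. Venant-Kirchhoff material, whose derivative with respect to nodal displacements yields the elastic force, has a simple closed form. The sketch below evaluates it for a given deformation gradient with made-up Lame parameters; it does not reproduce the paper's FEM assembly or registration force.

```python
# Sketch only: St. Venant-Kirchhoff strain-energy density for a deformation gradient F,
# using the Green-St. Venant strain E = 0.5 * (F^T F - I) and
# W = 0.5 * lam * tr(E)^2 + mu * tr(E @ E). Material parameters are hypothetical.
import numpy as np

def svk_energy_density(F, young=3.0e3, poisson=0.45):
    """Strain-energy density (Pa) for soft-tissue-like Lame parameters (illustrative values)."""
    lam = young * poisson / ((1 + poisson) * (1 - 2 * poisson))
    mu = young / (2 * (1 + poisson))
    E = 0.5 * (F.T @ F - np.eye(3))    # Green-St. Venant strain (quadratic in F)
    return 0.5 * lam * np.trace(E) ** 2 + mu * np.trace(E @ E)

# 5% uniaxial stretch along x.
F = np.diag([1.05, 1.0, 1.0])
print("energy density:", round(svk_energy_density(F), 3), "Pa")
```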

  10. Localization and tracking of aortic valve prosthesis in 2D fluoroscopic image sequences

    NASA Astrophysics Data System (ADS)

    Karar, M.; Chalopin, C.; Merk, D. R.; Jacobs, S.; Walther, T.; Burgert, O.; Falk, V.

    2009-02-01

    This paper presents a new method for localization and tracking of the aortic valve prosthesis (AVP) in 2D fluoroscopic image sequences to assist the surgeon to reach the safe zone of implantation during transapical aortic valve implantation. The proposed method includes four main steps: First, the fluoroscopic images are preprocessed using a morphological reconstruction and an adaptive Wiener filter to enhance the AVP edges. Second, a target window, defined by a user on the first image of the sequences which includes the AVP, is tracked in all images using a template matching algorithm. In a third step the corners of the AVP are extracted based on the AVP dimensions and orientation in the target window. Finally, the AVP model is generated in the fluoroscopic image sequences. Although the proposed method is not yet validated intraoperatively, it has been applied to different fluoroscopic image sequences with promising results.
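
    The window-tracking step can be illustrated with plain normalized-correlation template matching. The sketch below uses OpenCV on a synthetic blob sequence and omits the paper's morphological/Wiener preprocessing and AVP corner extraction; all image content is invented.

```python
# Sketch only: template-matching tracking of a user-defined target window across frames.
import cv2
import numpy as np

def track_window(frames, template):
    """Return the top-left corner of the best template match in each frame."""
    positions = []
    for frame in frames:
        score = cv2.matchTemplate(frame, template, cv2.TM_CCOEFF_NORMED)
        _, _, _, max_loc = cv2.minMaxLoc(score)
        positions.append(max_loc)              # (x, y) of the best match
    return positions

# Synthetic fluoroscopy-like sequence: a bright blob drifting a few pixels per frame.
rng = np.random.default_rng(3)
frames = []
for t in range(5):
    img = (40 * rng.random((200, 200))).astype(np.uint8)
    cv2.circle(img, (80 + 2 * t, 100 + t), 10, 220, -1)
    frames.append(img)
template = frames[0][85:115, 65:95]            # user-defined target window on the first image
print(track_window(frames, template))
```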

  11. Advances and challenges in deformable image registration: From image fusion to complex motion modelling.

    PubMed

    Schnabel, Julia A; Heinrich, Mattias P; Papież, Bartłomiej W; Brady, Sir J Michael

    2016-10-01

    Over the past 20 years, the field of medical image registration has significantly advanced from multi-modal image fusion to highly non-linear, deformable image registration for a wide range of medical applications and imaging modalities, involving the compensation and analysis of physiological organ motion or of tissue changes due to growth or disease patterns. While the original focus of image registration has predominantly been on correcting for rigid-body motion of brain image volumes acquired at different scanning sessions, often with different modalities, the advent of dedicated longitudinal and cross-sectional brain studies soon necessitated the development of more sophisticated methods that are able to detect and measure local structural or functional changes, or group differences. Moving outside of the brain, cine imaging and dynamic imaging required the development of deformable image registration to directly measure or compensate for local tissue motion. Since then, deformable image registration has become a general enabling technology. In this work we will present our own contributions to the state-of-the-art in deformable multi-modal fusion and complex motion modelling, and then discuss remaining challenges and provide future perspectives to the field. PMID:27364430

  12. Investigation of the effect of subcutaneous fat on image quality performance of 2D conventional imaging and tissue harmonic imaging.

    PubMed

    Browne, Jacinta E; Watson, Amanda J; Hoskins, Peter R; Elliott, Alex T

    2005-07-01

    Tissue harmonic imaging (THI) has been reported to improve contrast resolution, tissue differentiation and overall image quality in clinical examinations. However, a study carried out previously by the authors (Brown et al. 2004) found improvements only in spatial resolution and not in contrast resolution or anechoic target detection. This result may have been due to the homogeneity of the phantom. Biologic tissues are generally inhomogeneous, and THI has been reported to improve image quality in the presence of large amounts of subcutaneous fat. The aims of the study were to simulate the distortion that subcutaneous fat causes in image quality and thus to investigate further the improvements reported in anechoic target detection and contrast resolution performance with THI compared with 2D conventional imaging. In addition, the effect of three different types of fat-mimicking layer on image quality was examined. The abdominal transducers of two ultrasound scanners with 2D conventional imaging and THI were tested, the 4C1 (Aspen-Acuson, Siemens Co., CA, USA) and the C5-2 (ATL HDI 5000, ATL/Philips, Amsterdam, The Netherlands). An ex vivo subcutaneous pig fat layer was used to replicate the beam distortion and phase aberration seen clinically in the presence of subcutaneous fat. Three different types of fat-mimicking layers (olive oil, lard and lard with fish oil capsules) were evaluated. The subcutaneous pig fat layer demonstrated an improvement in anechoic target detection with THI compared with 2D conventional imaging, but no improvement was demonstrated in contrast resolution performance; a similar result was found in a previous study conducted by this research group (Brown et al. 2004) while using this tissue-mimicking phantom without a fat layer. Similarly, while using the layers of olive oil, lard and lard with fish oil capsules, improvements due to THI were found in anechoic target detection but, again, no improvements were found for contrast resolution for any of the

  13. Multimodality imaging combination in small animal via point-based registration

    NASA Astrophysics Data System (ADS)

    Yang, C. C.; Wu, T. H.; Lin, M. H.; Huang, Y. H.; Guo, W. Y.; Chen, C. L.; Wang, T. C.; Yin, W. H.; Lee, J. S.

    2006-12-01

    We present a system of image co-registration for small animal studies. Marker-based registration is chosen because of its considerable advantage that the fiducial feature is independent of the imaging modality. We also experimented with different scanning protocols and different fiducial marker sizes to improve registration accuracy. Co-registration was conducted using a rat phantom fixed by a stereotactic frame. Overall, the co-registration accuracy was at the sub-millimeter level and close to the intrinsic system error. Therefore, we conclude that the system is an accurate co-registration method for use in small animal studies.
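
    Marker-based (point-based) rigid registration is commonly solved in closed form with the Kabsch/Horn SVD solution, and accuracy is then reported as a fiducial registration error. The sketch below shows that generic machinery on synthetic marker coordinates; it is not the authors' specific setup, and all names and values are illustrative.

```python
# Sketch only: closed-form least-squares rigid alignment (Kabsch/Horn via SVD) of
# corresponding fiducial-marker centroids, plus the fiducial registration error (FRE).
import numpy as np

def rigid_fit(src, dst):
    """Find R, t minimizing ||R @ src_i + t - dst_i||; src, dst are (N, 3) arrays."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst_c - R @ src_c
    return R, t

def fre(src, dst, R, t):
    """Root-mean-square fiducial registration error after alignment."""
    res = (R @ src.T).T + t - dst
    return float(np.sqrt((res ** 2).sum(axis=1).mean()))

rng = np.random.default_rng(4)
markers_a = rng.random((6, 3)) * 50.0                        # marker centroids, modality A (mm)
R_true = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
markers_b = (R_true @ markers_a.T).T + np.array([5.0, -3.0, 2.0])
markers_b += 0.2 * rng.standard_normal(markers_b.shape)      # localization noise (mm)
R, t = rigid_fit(markers_a, markers_b)
print("FRE (mm):", round(fre(markers_a, markers_b, R, t), 3))
```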

  14. Image registration of MR and CT images using a frameless fiducial marker system.

    PubMed

    Kremser, C; Plangger, C; Bösecke, R; Pallua, A; Aichner, F; Felber, S R

    1997-01-01

    A new system of fiducial stereotactic markers that can easily be adapted to various imaging modalities without losing image registration was developed and tested. Utilizing MR and CT imaging the accuracy of the new system was evaluated with phantom studies and preliminary patient studies. The markers are clearly visible without artifacts on both imaging modalities. The clear delineation of the marker dots on the images enables an accurate automated marker detection. Using the marker system, image registration was found to yield an accuracy of up to 1 mm, depending on the imaging modality and the employed marker arrangement. The presented marker system shall improve patient comfort in comparison to conventional fixed stereotactic frames if repeated, highly accurate registrations are necessary over longer periods. PMID:9254002

  15. A computationally efficient method for automatic registration of orthogonal x-ray images with volumetric CT data

    NASA Astrophysics Data System (ADS)

    Chen, Xin; Varley, Martin R.; Shark, Lik-Kwan; Shentall, Glyn S.; Kirby, Mike C.

    2008-02-01

    The paper presents a computationally efficient 3D-2D image registration algorithm for automatic pre-treatment validation in radiotherapy. The novel aspects of the algorithm include (a) a hybrid cost function based on partial digitally reconstructed radiographs (DRRs) generated along projected anatomical contours and a level set term for similarity measurement; and (b) a fast search method based on parabola fitting and a sensitivity-based search order. Using CT and orthogonal x-ray images from a skull and a pelvis phantom, the proposed algorithm is compared with the conventional ray-casting full-DRR-based registration method. Not only is the algorithm shown to be computationally more efficient, with registration time reduced by a factor of 8, but it is also shown to offer a 50% larger capture range, allowing an initial patient displacement of up to 15 mm (measured by mean target registration error). For the simulated data, high registration accuracy with average errors of 0.53 mm ± 0.12 mm for translation and 0.61° ± 0.29° for rotation within the capture range has been achieved. For the tested phantom data, the algorithm has also been shown to be robust, unaffected by artificial markers in the image.
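
    The parabola-fitting component of the search can be written as a one-dimensional step: sample the cost at three parameter values, fit a quadratic, and jump to its vertex. The sketch below shows only that step on a toy cost function; the hybrid partial-DRR cost and the sensitivity-based parameter ordering are not reproduced.

```python
# Sketch only: one parabola-fitting update along a single registration parameter.
import numpy as np

def parabola_step(cost, x0, step):
    """Fit a quadratic to the cost at (x0 - step, x0, x0 + step) and move to its vertex."""
    xs = np.array([x0 - step, x0, x0 + step])
    ys = np.array([cost(x) for x in xs])
    a, b, _ = np.polyfit(xs, ys, 2)
    if a <= 0:                        # no convex fit: fall back to the best sampled point
        return float(xs[np.argmin(ys)])
    return float(-b / (2.0 * a))      # vertex of the fitted parabola

# Toy cost: squared registration error along one translation axis (hypothetical).
cost = lambda tx: (tx - 7.3) ** 2 + 2.0
x = 0.0
for _ in range(3):
    x = parabola_step(cost, x, step=2.0)
print("estimated translation:", round(x, 3))   # converges to 7.3 for a quadratic cost
```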

  16. Four dimensional deformable image registration using trajectory modeling

    PubMed Central

    Castillo, Edward; Castillo, Richard; Martinez, Josue; Shenoy, Maithili; Guerrero, Thomas

    2013-01-01

    A four-dimensional deformable image registration (4D DIR) algorithm, referred to as 4D local trajectory modeling (4DLTM), is presented and applied to thoracic 4D computed tomography (4DCT) image sets. The theoretical framework on which this algorithm is built exploits the incremental continuity present in 4DCT component images to calculate a dense set of parameterized voxel trajectories through space as functions of time. The spatial accuracy of the 4DLTM algorithm is compared with an alternative registration approach in which component phase to phase (CPP) DIR is utilized to determine the full displacement between maximum inhale and exhale images. A publicly available DIR reference database (http://www.dir-lab.com) is utilized for the spatial accuracy assessment. The database consists of ten 4DCT image sets and corresponding manually identified landmark points between the maximum phases. A subset of the points is propagated through the expiratory 4DCT component images. Cubic polynomials were found to provide sufficient flexibility and spatial accuracy for describing the point trajectories through the expiratory phases. The resulting average spatial error between the maximum phases was 1.25 mm for the 4DLTM and 1.44 mm for the CPP. The 4DLTM method captures the long-range motion between 4DCT extremes with high spatial accuracy. PMID:20009196
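
    The trajectory-modeling idea can be illustrated by fitting one cubic polynomial per coordinate to a landmark's positions over the expiratory phases and evaluating the fit at arbitrary phase times. Positions and times below are invented, and the dense 4DLTM optimization itself is not shown.

```python
# Sketch only: cubic polynomial trajectory fitted to a landmark's positions across phases.
import numpy as np

def fit_trajectory(times, positions, degree=3):
    """positions: (num_phases, 3). Returns a function mapping phase time -> (x, y, z)."""
    coeffs = [np.polyfit(times, positions[:, k], degree) for k in range(3)]
    return lambda t: np.array([np.polyval(c, t) for c in coeffs])

# Landmark positions (mm) over the expiratory phases 0%, 25%, 50%, 75%, 100% (hypothetical).
times = np.array([0.0, 0.25, 0.5, 0.75, 1.0])
positions = np.array([[12.0, 40.0, 88.0],
                      [12.5, 39.2, 84.1],
                      [13.1, 38.8, 79.9],
                      [13.4, 38.5, 76.8],
                      [13.6, 38.4, 75.2]])
traj = fit_trajectory(times, positions)
print("interpolated position at phase 0.6:", np.round(traj(0.6), 2))
```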

  17. Preliminary work of real-time ultrasound imaging system for 2-D array transducer.

    PubMed

    Li, Xu; Yang, Jiali; Ding, Mingyue; Yuchi, Ming

    2015-01-01

    Ultrasound (US) has emerged as a non-invasive imaging modality that can provide anatomical structure information in real time. To enable the experimental analysis of new 2-D array ultrasound beamforming methods, a pre-beamformed parallel raw data acquisition system was developed for 3-D data capture with a 2-D array transducer. The transducer interconnection adopted the row-column addressing (RCA) scheme, where the columns and rows were activated sequentially for transmit and receive events, respectively. The DAQ system captured the raw data in parallel, and the digitized data were fed through a field programmable gate array (FPGA) to implement the pre-beamforming. Finally, 3-D images were reconstructed through the devised platform in real time. PMID:26405923

  18. Interpretation of Line-Integrated Signals from 2-D Phase Contrast Imaging on LHD

    NASA Astrophysics Data System (ADS)

    Michael, Clive; Tanaka, Kenji; Vyacheslavov, Leonid; Sanin, Andrei; Kawahata, Kazuo; Okajima, S.

    Two dimensional (2D) phase contrast imaging (PCI) is an excellent method to measure core and edge turbulence with good spatial resolution (Δρ ˜ 0.1). General analytical consideration is given to the signal interpretation of the line-integrated signals, with specific application to images from 2D PCI. It is shown that the Fourier components of fluctuations having any non-zero component propagating along the line of sight are not detected. The ramifications of this constraint are discussed, including consideration of the angle between the sight line and flux surface normal. In the experimental geometry, at the point where the flux surfaces are tangent to the sight line, it is shown that it may be possible to detect large poloidally extended (though with small radial wavelength) structures, such as GAMS. The spatial localization technique of this diagnostic is illustrated with experimental data.

  19. Radiometer uncertainty equation research of 2D planar scanning PMMW imaging system

    NASA Astrophysics Data System (ADS)

    Hu, Taiyang; Xu, Jianzhong; Xiao, Zelong

    2009-07-01

    With advances in millimeter-wave technology, passive millimeter-wave (PMMW) imaging has received considerable attention and has established itself in a wide range of military and civil applications, such as remote sensing, blind landing, precision guidance and security inspection. The high transparency of clothing at millimeter wavelengths and the spatial resolution required to generate adequate images combine to make imaging at millimeter wavelengths a natural approach for screening people for concealed contraband. At the same time, the passive operation mode does not present a safety hazard to the person under inspection. Based on a description of the design and engineering implementation of a W-band two-dimensional (2D) planar scanning imaging system, a series of scanning methods used in PMMW imaging are compared and analyzed, followed by a discussion of the operational principle of the 2D planar scanning mode in particular. Furthermore, it is found that the traditional radiometer uncertainty equation, which is derived for a moving platform, does not hold under this 2D planar scanning mode, because there is no fixed relationship between the scanning rates in the horizontal and vertical directions. Consequently, an improved radiometer uncertainty equation is derived in this paper by taking the total time spent on scanning and imaging into consideration, in order to solve the problem mentioned above. In addition, the factors that affect the quality of the radiometric images are further investigated under the improved radiometer uncertainty equation, and some original results are presented and analyzed to demonstrate the significance and validity of this new methodology.

  20. TU-A-19A-01: Image Registration I: Deformable Image Registration, Contour Propagation and Dose Mapping: 101 and 201

    SciTech Connect

    Kessler, M

    2014-06-15

    Deformable image registration, contour propagation and dose mapping have become common, possibly essential tools for modern image-guided radiation therapy. Historically, these tools have been largely developed at academic medical centers and used in a rather limited and well controlled fashion. Today these tools are available to the radiotherapy community at large, both as stand-alone applications and as integrated components of both treatment planning and treatment delivery systems. Unfortunately, the details of how these tools work and their limitations are not generally documented or described by the vendors that provide them. Even when a result “looks right”, determining whether unphysical deformations have occurred is crucial. Because of this, understanding how and when to use, and not use, these tools to support everyday clinical decisions is far from straightforward. The goal of this session is to present both the theory (basic and advanced) and the practical clinical use of deformable image registration, contour propagation and dose mapping. To the extent possible, the “secret sauce” that different vendors use to produce reasonable/acceptable results will be described. A detailed explanation of the possible sources of error, along with actual examples, will be presented. Knowing the underlying principles of the process and understanding the confounding factors will help the practicing medical physicist make better-informed decisions when using the tools available. Learning Objectives: Understand the basic (101) and advanced (201) principles of deformable image registration, contour propagation and dose mapping. Understand the sources and impact of errors in registration and data mapping and the methods for evaluating the performance of these tools. Understand the clinical use and value of these tools, especially when used as a “black box”.

  1. An improved SIFT algorithm based on KFDA in image registration

    NASA Astrophysics Data System (ADS)

    Chen, Peng; Yang, Lijuan; Huo, Jinfeng

    2016-03-01

    As a stable feature-matching algorithm, SIFT has been widely used in many fields. In order to further improve the robustness of the SIFT algorithm, an improved SIFT algorithm based on Kernel Fisher Discriminant Analysis (KFDA-SIFT) is presented for image registration. The algorithm applies KFDA to the SIFT descriptors to obtain a feature-extraction matrix, uses the new descriptors to conduct the feature matching, and finally uses RANSAC to purify the matches. The experiments show that the presented algorithm is robust to image changes in scale, illumination, perspective, expression and small pose variations, with higher matching accuracy.
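
    As a point of reference, the sketch below runs plain SIFT matching with a ratio test followed by RANSAC purification using OpenCV. The KFDA projection of the descriptors, which is the paper's contribution, is not included, and the file names in the usage comment are placeholders.

```python
# Sketch only: baseline SIFT matching + ratio test + RANSAC purification (no KFDA step).
import cv2
import numpy as np

def match_sift_ransac(img1, img2, ratio=0.75):
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img1, None)
    kp2, des2 = sift.detectAndCompute(img2, None)
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    good = [m for m, n in matcher.knnMatch(des1, des2, k=2) if m.distance < ratio * n.distance]
    if len(good) < 4:
        return None, good
    src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)   # purify matches
    inliers = [g for g, keep in zip(good, mask.ravel()) if keep]
    return H, inliers

# Usage (hypothetical file names):
# img1 = cv2.imread("reference.png", cv2.IMREAD_GRAYSCALE)
# img2 = cv2.imread("sensed.png", cv2.IMREAD_GRAYSCALE)
# H, inliers = match_sift_ransac(img1, img2)
# print("inlier matches:", len(inliers))
```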

  2. MIND Demons for MR-to-CT Deformable Image Registration In Image-Guided Spine Surgery

    PubMed Central

    Reaungamornrat, S.; De Silva, T.; Uneri, A.; Wolinsky, J.-P.; Khanna, A. J.; Kleinszig, G.; Vogt, S.; Prince, J. L.; Siewerdsen, J. H.

    2016-01-01

    Purpose Localization of target anatomy and critical structures defined in preoperative MR images can be achieved by means of multi-modality deformable registration to intraoperative CT. We propose a symmetric diffeomorphic deformable registration algorithm incorporating a modality independent neighborhood descriptor (MIND) and a robust Huber metric for MR-to-CT registration. Method The method, called MIND Demons, solves for the deformation field between two images by optimizing an energy functional that incorporates both the forward and inverse deformations, smoothness on the velocity fields and the diffeomorphisms, a modality-insensitive similarity function suitable to multi-modality images, and constraints on geodesics in Lagrangian coordinates. Direct optimization (without relying on an exponential map of stationary velocity fields used in conventional diffeomorphic Demons) is carried out using a Gauss-Newton method for fast convergence. Registration performance and sensitivity to registration parameters were analyzed in simulation, in phantom experiments, and clinical studies emulating application in image-guided spine surgery, and results were compared to conventional mutual information (MI) free-form deformation (FFD), local MI (LMI) FFD, and normalized MI (NMI) Demons. Result The method yielded sub-voxel invertibility (0.006 mm) and nonsingular spatial Jacobians with capability to preserve local orientation and topology. It demonstrated improved registration accuracy in comparison to the reference methods, with mean target registration error (TRE) of 1.5 mm compared to 10.9, 2.3, and 4.6 mm for MI FFD, LMI FFD, and NMI Demons methods, respectively. Validation in clinical studies demonstrated realistic deformation with sub-voxel TRE in cases of cervical, thoracic, and lumbar spine. Conclusions A modality-independent deformable registration method has been developed to estimate a viscoelastic diffeomorphic map between preoperative MR and intraoperative CT. The

  3. MIND Demons for MR-to-CT deformable image registration in image-guided spine surgery

    NASA Astrophysics Data System (ADS)

    Reaungamornrat, S.; De Silva, T.; Uneri, A.; Wolinsky, J.-P.; Khanna, A. J.; Kleinszig, G.; Vogt, S.; Prince, J. L.; Siewerdsen, J. H.

    2016-03-01

    Purpose: Localization of target anatomy and critical structures defined in preoperative MR images can be achieved by means of multi-modality deformable registration to intraoperative CT. We propose a symmetric diffeomorphic deformable registration algorithm incorporating a modality independent neighborhood descriptor (MIND) and a robust Huber metric for MR-to-CT registration. Method: The method, called MIND Demons, solves for the deformation field between two images by optimizing an energy functional that incorporates both the forward and inverse deformations, smoothness on the velocity fields and the diffeomorphisms, a modality-insensitive similarity function suitable to multi-modality images, and constraints on geodesics in Lagrangian coordinates. Direct optimization (without relying on an exponential map of stationary velocity fields used in conventional diffeomorphic Demons) is carried out using a Gauss-Newton method for fast convergence. Registration performance and sensitivity to registration parameters were analyzed in simulation, in phantom experiments, and clinical studies emulating application in image-guided spine surgery, and results were compared to conventional mutual information (MI) free-form deformation (FFD), local MI (LMI) FFD, and normalized MI (NMI) Demons. Result: The method yielded sub-voxel invertibility (0.006 mm) and nonsingular spatial Jacobians with capability to preserve local orientation and topology. It demonstrated improved registration accuracy in comparison to the reference methods, with mean target registration error (TRE) of 1.5 mm compared to 10.9, 2.3, and 4.6 mm for MI FFD, LMI FFD, and NMI Demons methods, respectively. Validation in clinical studies demonstrated realistic deformation with sub-voxel TRE in cases of cervical, thoracic, and lumbar spine. Conclusions: A modality-independent deformable registration method has been developed to estimate a

  4. A one-bit approach for image registration

    NASA Astrophysics Data System (ADS)

    Nguyen, An Hung; Pickering, Mark; Lambert, Andrew

    2015-02-01

    Motion estimation or optic flow computation for automatic navigation and obstacle avoidance programs running on Unmanned Aerial Vehicles (UAVs) is a challenging task. These challenges come from the requirements of real-time processing speed and small light-weight image processing hardware with very limited resources (especially memory space) embedded on the UAVs. Solutions towards both simplifying computation and saving hardware resources have recently received much interest. This paper presents an approach for image registration using binary images which addresses these two requirements. This approach uses translational information between two corresponding patches of binary images to estimate global motion. These low bit-resolution images require a very small amount of memory space to store them and allow simple logic operations such as XOR and AND to be used instead of more complex computations such as subtractions and multiplications.
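
    A minimal version of the one-bit idea: binarize both frames and exhaustively search a small displacement window for the shift that minimizes the XOR (Hamming) mismatch. The thresholding choice, window size and test data below are arbitrary, and the on-board hardware aspects are not modeled.

```python
# Sketch only: one-bit translational registration using XOR mismatch on binary images.
import numpy as np

def binarize(img):
    return img > np.median(img)              # 1-bit representation

def one_bit_shift(ref, cur, max_shift=8):
    """Return (dy, dx) minimizing XOR mismatch between binary patches of ref and cur."""
    b_ref, b_cur = binarize(ref), binarize(cur)
    h, w = b_ref.shape
    m = max_shift
    best, best_shift = None, (0, 0)
    for dy in range(-m, m + 1):
        for dx in range(-m, m + 1):
            a = b_ref[m:h - m, m:w - m]
            b = b_cur[m + dy:h - m + dy, m + dx:w - m + dx]
            mismatch = np.count_nonzero(a ^ b)   # XOR instead of subtraction
            if best is None or mismatch < best:
                best, best_shift = mismatch, (dy, dx)
    return best_shift

rng = np.random.default_rng(5)
ref = rng.random((96, 96))
cur = np.roll(ref, shift=(3, -5), axis=(0, 1))
print("estimated (dy, dx):", one_bit_shift(ref, cur))   # expect (3, -5)
```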

  5. Imaging collective magnonic modes in 2D arrays of magnetic nanoelements.

    PubMed

    Kruglyak, V V; Keatley, P S; Neudert, A; Hicken, R J; Childress, J R; Katine, J A

    2010-01-15

    We have used time resolved scanning Kerr microscopy to image collective spin wave modes within a 2D array of magnetic nanoelements. Long wavelength spin waves are confined within the array as if it was a continuous element of the same size but with effective material properties determined by the structure of the array and its constituent nanoelements. The array is an example of a magnonic metamaterial, the demonstration of which provides new opportunities within the emerging field of magnonics. PMID:20366622

  6. Imaging Collective Magnonic Modes in 2D Arrays of Magnetic Nanoelements

    NASA Astrophysics Data System (ADS)

    Kruglyak, V. V.; Keatley, P. S.; Neudert, A.; Hicken, R. J.; Childress, J. R.; Katine, J. A.

    2010-01-01

    We have used time resolved scanning Kerr microscopy to image collective spin wave modes within a 2D array of magnetic nanoelements. Long wavelength spin waves are confined within the array as if it was a continuous element of the same size but with effective material properties determined by the structure of the array and its constituent nanoelements. The array is an example of a magnonic metamaterial, the demonstration of which provides new opportunities within the emerging field of magnonics.

  7. Gender and ethnicity specific generic elastic models from a single 2D image for novel 2D pose face synthesis and recognition.

    PubMed

    Heo, Jingu; Savvides, Marios

    2012-12-01

    In this paper, we propose a novel method for generating a realistic 3D human face from a single 2D face image for the purpose of synthesizing new 2D face images at arbitrary poses using gender and ethnicity specific models. We employ the Generic Elastic Model (GEM) approach, which elastically deforms a generic 3D depth-map based on the sparse observations of an input face image in order to estimate the depth of the face image. Particularly, we show that Gender and Ethnicity specific GEMs (GE-GEMs) can approximate the 3D shape of the input face image more accurately, achieving a better generalization of 3D face modeling and reconstruction compared to the original GEM approach. We qualitatively validate our method using publicly available databases by showing each reconstructed 3D shape generated from a single image and new synthesized poses of the same person at arbitrary angles. For quantitative comparisons, we compare our synthesized results against 3D scanned data and also perform face recognition using synthesized images generated from a single enrollment frontal image. We obtain promising results for handling pose and expression changes based on the proposed method. PMID:22201062

  8. Fully automatic detection of the vertebrae in 2D CT images

    NASA Astrophysics Data System (ADS)

    Graf, Franz; Kriegel, Hans-Peter; Schubert, Matthias; Strukelj, Michael; Cavallaro, Alexander

    2011-03-01

    Knowledge about the vertebrae is a valuable source of information for several annotation tasks. In recent years, the research community spent a considerable effort on detecting, segmenting and analyzing the vertebrae and the spine in various image modalities like CT or MR. Most of these methods rely on prior knowledge like the location of the vertebrae or other initial information like the manual detection of the spine. Furthermore, the majority of these methods require a complete volume scan. With the existence of use cases where only a single slice is available, there arises a demand for methods allowing the detection of the vertebrae in 2D images. In this paper, we propose a fully automatic and parameterless algorithm for detecting the vertebrae in 2D CT images. Our algorithm starts with detecting candidate locations by taking the density of bone-like structures into account. Afterwards, the candidate locations are extended into candidate regions for which certain image features are extracted. The resulting feature vectors are compared to a sample set of previously annotated and processed images in order to determine the best candidate region. In a final step, the result region is readjusted until convergence to a locally optimal position. Our new method is validated on a real-world data set of more than 9,329 images of 34 patients, annotated by a clinician in order to provide a realistic ground truth.

  9. Image restoration using 2D autoregressive texture model and structure curve construction

    NASA Astrophysics Data System (ADS)

    Voronin, V. V.; Marchuk, V. I.; Petrosov, S. P.; Svirin, I.; Agaian, S.; Egiazarian, K.

    2015-05-01

    In this paper an image inpainting approach based on the construction of a composite curve for the restoration of the edges of objects in an image, using the concepts of parametric and geometric continuity, is presented. It is shown that this approach allows curved edges to be restored and provides more flexibility for curve design in the damaged image by interpolating the boundaries of objects with cubic splines. After the edge restoration stage, a texture restoration using a 2D autoregressive texture model is carried out. The image intensity is locally modeled by a first-order spatial autoregressive model with support in a strongly causal prediction region on the plane. Model parameters are estimated by the Yule-Walker method. Several examples considered in this paper show the effectiveness of the proposed approach for the removal of large objects as well as the recovery of small regions in several test images.

  10. Image compression and encryption scheme based on 2D compressive sensing and fractional Mellin transform

    NASA Astrophysics Data System (ADS)

    Zhou, Nanrun; Li, Haolin; Wang, Di; Pan, Shumin; Zhou, Zhihong

    2015-05-01

    Most existing image encryption techniques either bear security risks because they use linear transforms, or suffer data expansion when nonlinear transformations are adopted directly. To overcome these difficulties, a novel image compression-encryption scheme is proposed by combining 2D compressive sensing with a nonlinear fractional Mellin transform. In this scheme, the original image is measured by measurement matrices in two directions to achieve compression and encryption simultaneously, and the resulting image is then re-encrypted by the nonlinear fractional Mellin transform. The measurement matrices are controlled by a chaotic map. The Newton Smoothed l0 Norm (NSL0) algorithm is adopted to obtain the decrypted image. Simulation results verify the validity and reliability of this scheme.

  11. A review of biomechanically informed breast image registration.

    PubMed

    Hipwell, John H; Vavourakis, Vasileios; Han, Lianghao; Mertzanidou, Thomy; Eiben, Björn; Hawkes, David J

    2016-01-21

    Breast radiology encompasses the full range of imaging modalities from routine imaging via x-ray mammography, magnetic resonance imaging and ultrasound (both two- and three-dimensional), to more recent technologies such as digital breast tomosynthesis, and dedicated breast imaging systems for positron emission mammography and ultrasound tomography. In addition new and experimental modalities, such as Photoacoustics, Near Infrared Spectroscopy and Electrical Impedance Tomography etc, are emerging. The breast is a highly deformable structure however, and this greatly complicates visual comparison of imaging modalities for the purposes of breast screening, cancer diagnosis (including image guided biopsy), tumour staging, treatment monitoring, surgical planning and simulation of the effects of surgery and wound healing etc. Due primarily to the challenges posed by these gross, non-rigid deformations, development of automated methods which enable registration, and hence fusion, of information within and across breast imaging modalities, and between the images and the physical space of the breast during interventions, remains an active research field which has yet to translate suitable methods into clinical practice. This review describes current research in the field of breast biomechanical modelling and identifies relevant publications where the resulting models have been incorporated into breast image registration and simulation algorithms. Despite these developments there remain a number of issues that limit clinical application of biomechanical modelling. These include the accuracy of constitutive modelling, implementation of representative boundary conditions, failure to meet clinically acceptable levels of computational cost, challenges associated with automating patient-specific model generation (i.e. robust image segmentation and mesh generation) and the complexity of applying biomechanical modelling methods in routine clinical practice. PMID:26733349

  12. A review of biomechanically informed breast image registration

    NASA Astrophysics Data System (ADS)

    Hipwell, John H.; Vavourakis, Vasileios; Han, Lianghao; Mertzanidou, Thomy; Eiben, Björn; Hawkes, David J.

    2016-01-01

    Breast radiology encompasses the full range of imaging modalities from routine imaging via x-ray mammography, magnetic resonance imaging and ultrasound (both two- and three-dimensional), to more recent technologies such as digital breast tomosynthesis, and dedicated breast imaging systems for positron emission mammography and ultrasound tomography. In addition new and experimental modalities, such as Photoacoustics, Near Infrared Spectroscopy and Electrical Impedance Tomography etc, are emerging. The breast is a highly deformable structure however, and this greatly complicates visual comparison of imaging modalities for the purposes of breast screening, cancer diagnosis (including image guided biopsy), tumour staging, treatment monitoring, surgical planning and simulation of the effects of surgery and wound healing etc. Due primarily to the challenges posed by these gross, non-rigid deformations, development of automated methods which enable registration, and hence fusion, of information within and across breast imaging modalities, and between the images and the physical space of the breast during interventions, remains an active research field which has yet to translate suitable methods into clinical practice. This review describes current research in the field of breast biomechanical modelling and identifies relevant publications where the resulting models have been incorporated into breast image registration and simulation algorithms. Despite these developments there remain a number of issues that limit clinical application of biomechanical modelling. These include the accuracy of constitutive modelling, implementation of representative boundary conditions, failure to meet clinically acceptable levels of computational cost, challenges associated with automating patient-specific model generation (i.e. robust image segmentation and mesh generation) and the complexity of applying biomechanical modelling methods in routine clinical practice.

  13. On-line range images registration with GPGPU

    NASA Astrophysics Data System (ADS)

    Będkowski, J.; Naruniec, J.

    2013-03-01

    This paper concerns the implementation of algorithms for two important aspects of modern 3D data processing: data registration and segmentation. The solution proposed for the first topic is based on 3D space decomposition, while the latter is based on image processing and local neighbourhood search. Data processing is implemented using NVIDIA compute unified device architecture (NVIDIA CUDA) parallel computation. The result of the segmentation is a coloured map where different colours correspond to different objects, such as walls, floor and stairs. The research is related to the problem of collecting 3D data with an RGB-D camera mounted on a rotated head, to be used in mobile robot applications. The performance of the data registration algorithm is aimed at on-line processing. The iterative closest point (ICP) approach is chosen as the registration method. Computations are based on a parallel fast nearest neighbour search. This procedure decomposes 3D space into cubic buckets and, therefore, the time of the matching is deterministic. The first data segmentation technique uses accelerometers integrated with the RGB-D sensor to obtain rotation compensation and an image processing method for defining prerequisites of the known categories. The second technique uses the adapted nearest neighbour search procedure for obtaining normal vectors for each range point.

  14. Multiple-perturbation two-dimensional (2D) correlation analysis for spectroscopic imaging data

    NASA Astrophysics Data System (ADS)

    Shinzawa, Hideyuki; Hashimoto, Kosuke; Sato, Hidetoshi; Kanematsu, Wataru; Noda, Isao

    2014-07-01

    A series of data analysis techniques, including multiple-perturbation two-dimensional (2D) correlation spectroscopy and kernel analysis, were used to demonstrate how these techniques can sort out the convoluted information content underlying spectroscopic imaging data. A set of Raman spectra of polymer blends consisting of poly(methyl methacrylate) (PMMA) and polyethylene glycol (PEG) were collected under varying spatial coordinates and subjected to multiple-perturbation 2D correlation analysis and kernel analysis by using the coordinates as perturbation variables. Cross-peaks appearing in asynchronous correlation spectra indicated that the change in the spectral intensity of the free C=O band of PMMA occurs before that of the C=O⋯H-O band arising from the molecular interaction between PMMA and PEG. Kernel matrices, generated by carrying out 2D correlation analysis on principal component analysis (PCA) score images, revealed subtle but important discrepancies between the patterns of the images, providing additional interpretation to the PCA in an intuitively understandable manner. Consequently, the results provided apparent spectroscopic evidence that PMMA and PEG in the blends are partially miscible at the molecular level, allowing the PMMAs to respond to the perturbations in a different manner.

  15. Filters in 2D and 3D Cardiac SPECT Image Processing

    PubMed Central

    Ploussi, Agapi; Synefia, Stella

    2014-01-01

    Nuclear cardiac imaging is a noninvasive, sensitive method providing information on cardiac structure and physiology. Single photon emission tomography (SPECT) evaluates myocardial perfusion, viability, and function and is widely used in clinical routine. The quality of the tomographic image is key for accurate diagnosis. Image filtering, a mathematical processing step, compensates for loss of detail in an image while reducing image noise, and it can improve the image resolution and limit the degradation of the image. SPECT images are then reconstructed, either by the filtered back projection (FBP) analytical technique or iteratively, by algebraic methods. The aim of this study is to review filters in cardiac 2D, 3D, and 4D SPECT applications and how these affect the image quality, which mirrors the diagnostic accuracy of SPECT images. Several filters, including the Hanning, Butterworth, and Parzen filters, were evaluated in combination with the two reconstruction methods as well as with a specified MatLab program. Results showed that for both 3D and 4D cardiac SPECT the Butterworth filter, for different critical frequencies and orders, produced the best results. Between the two reconstruction methods, the iterative one might be more appropriate for cardiac SPECT, since it improves lesion detectability due to the significant improvement of image contrast. PMID:24804144

  16. Robust image registration using adaptive coherent point drift method

    NASA Astrophysics Data System (ADS)

    Yang, Lijuan; Tian, Zheng; Zhao, Wei; Wen, Jinhuan; Yan, Weidong

    2016-04-01

    The coherent point drift (CPD) method is a powerful registration tool under the framework of the Gaussian mixture model (GMM). However, only the global spatial structure of the point sets is considered, without other forms of additional attribute information. The equivalent simplification of the mixing parameters and the manual setting of the weight parameter in the GMM make the CPD method less robust to outliers and less flexible. An adaptive CPD method is proposed to automatically determine the mixing parameters by embedding the local attribute information of features into the construction of the GMM. In addition, the weight parameter is treated as an unknown parameter and automatically determined in the expectation-maximization algorithm. In image registration applications, the block-divided salient image disk extraction method is designed to detect sparse salient image features, and local self-similarity is used as attribute information to describe the local neighborhood structure of each feature. The experimental results on optical images and remote sensing images show that the proposed method can significantly improve the matching performance.

  17. Fluid Registration of Diffusion Tensor Images Using Information Theory

    PubMed Central

    Chiang, Ming-Chang; Leow, Alex D.; Klunder, Andrea D.; Dutton, Rebecca A.; Barysheva, Marina; Rose, Stephen E.; McMahon, Katie L.; de Zubicaray, Greig I.; Toga, Arthur W.; Thompson, Paul M.

    2008-01-01

    We apply an information-theoretic cost metric, the symmetrized Kullback-Leibler (sKL) divergence, or J-divergence, to fluid registration of diffusion tensor images. The difference between diffusion tensors is quantified based on the sKL-divergence of their associated probability density functions (PDFs). Three-dimensional DTI data from 34 subjects were fluidly registered to an optimized target image. To allow large image deformations but preserve image topology, we regularized the flow with a large-deformation diffeomorphic mapping based on the kinematics of a Navier-Stokes fluid. A driving force was developed to minimize the J-divergence between the deforming source and target diffusion functions, while reorienting the flowing tensors to preserve fiber topography. In initial experiments, we showed that the sKL-divergence based on full diffusion PDFs is adaptable to higher-order diffusion models, such as high angular resolution diffusion imaging (HARDI). The sKL-divergence was sensitive to subtle differences between two diffusivity profiles, showing promise for nonlinear registration applications and multisubject statistical analysis of HARDI data. PMID:18390342
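
    For zero-mean Gaussian diffusion PDFs, the symmetrized KL divergence between two tensors has a simple closed form. The sketch below evaluates it directly as the sum of the two directed KL terms on synthetic tensors; it is not the paper's fluid-registration driving force or its HARDI extension.

```python
# Sketch only: symmetrized KL (J-) divergence between zero-mean Gaussian diffusion PDFs
# with covariances D1, D2. Tensors are synthetic.
import numpy as np

def kl_zero_mean_gaussian(D1, D2):
    """KL( N(0, D1) || N(0, D2) ) for n x n SPD matrices."""
    n = D1.shape[0]
    return 0.5 * (np.trace(np.linalg.solve(D2, D1)) - n
                  + np.log(np.linalg.det(D2) / np.linalg.det(D1)))

def j_divergence(D1, D2):
    """Symmetrized KL: sum of the two directed KL divergences."""
    return kl_zero_mean_gaussian(D1, D2) + kl_zero_mean_gaussian(D2, D1)

iso = np.diag([0.8e-3, 0.8e-3, 0.8e-3])
aniso = np.diag([1.7e-3, 0.3e-3, 0.3e-3])
print("J(iso, iso)   =", round(j_divergence(iso, iso), 6))     # zero for identical tensors
print("J(iso, aniso) =", round(j_divergence(iso, aniso), 6))
```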

  18. Maximum-likelihood registration of range images with missing data.

    PubMed

    Sharp, Gregory C; Lee, Sang W; Wehe, David K

    2008-01-01

    Missing data are common in range images, due to geometric occlusions, limitations in the sensor field of view, poor reflectivity, depth discontinuities, and cast shadows. Using registration to align these data often fails, because points without valid correspondences can be incorrectly matched. This paper presents a maximum likelihood method for registration of scenes with unmatched or missing data. Using ray casting, correspondences are formed between valid and missing points in each view. These correspondences are used to classify points by their visibility properties, including occlusions, field of view, and shadow regions. The likelihood of each point match is then determined using statistical properties of the sensor, such as noise and outlier distributions. Experiments demonstrate high rates of convergence on complex scenes with varying degrees of overlap. PMID:18000329

  19. Image denoising with 2D scale-mixing complex wavelet transforms.

    PubMed

    Remenyi, Norbert; Nicolis, Orietta; Nason, Guy; Vidakovic, Brani

    2014-12-01

    This paper introduces an image denoising procedure based on a 2D scale-mixing complex-valued wavelet transform. Both the minimal (unitary) and redundant (maximum overlap) versions of the transform are used. The covariance structure of white noise in wavelet domain is established. Estimation is performed via empirical Bayesian techniques, including versions that preserve the phase of the complex-valued wavelet coefficients and those that do not. The new procedure exhibits excellent quantitative and visual performance, which is demonstrated by simulation on standard test images. PMID:25312931

  20. 2D-CELL: image processing software for extraction and analysis of 2-dimensional cellular structures

    NASA Astrophysics Data System (ADS)

    Righetti, F.; Telley, H.; Leibling, Th. M.; Mocellin, A.

    1992-01-01

    2D-CELL is a software package for processing and analyzing photographic images of cellular structures in a largely interactive way. Starting from a binary digitized image, the programs extract the line network (skeleton) of the structure and determine the graph representation that best models it. Provision is made for manually correcting defects such as incorrect node positions or dangling bonds. A suitable algorithm then retrieves the polygonal contours that define individual cells (local boundary curvatures are neglected for simplicity). Using elementary analytical geometry relations, a range of metric and topological parameters describing the population is then computed, organized into statistical distributions and graphically displayed.
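
    A minimal sketch of the first extraction step on a binary digitized image, using scikit-image's skeletonization as a stand-in for the package's own routine; graph building and manual defect correction are not shown.

    ```python
    from skimage.morphology import skeletonize

    def extract_line_network(binary_image):
        """Reduce the binary cellular structure to its one-pixel-wide line
        network (skeleton), the starting point for the graph representation
        described above."""
        return skeletonize(binary_image.astype(bool))
    ```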

  1. 2D image classification for 3D anatomy localization: employing deep convolutional neural networks

    NASA Astrophysics Data System (ADS)

    de Vos, Bob D.; Wolterink, Jelmer M.; de Jong, Pim A.; Viergever, Max A.; Išgum, Ivana

    2016-03-01

    Localization of anatomical regions of interest (ROIs) is a preprocessing step in many medical image analysis tasks. While trivial for humans, it is complex for automatic methods. Classic machine learning approaches face the challenge of hand-crafting features that describe the differences between ROIs and background. Deep convolutional neural networks (CNNs) alleviate this by automatically finding hierarchical feature representations from raw images. We employ this trait to detect anatomical ROIs in 2D image slices in order to localize them in 3D. In 100 low-dose non-contrast-enhanced non-ECG-synchronized screening chest CT scans, a reference standard was defined by manually delineating rectangular bounding boxes around three anatomical ROIs: the heart, aortic arch, and descending aorta. Every anatomical ROI was automatically identified using a combination of three CNNs, each analyzing one orthogonal image plane. While single CNNs predicted the presence or absence of a specific ROI in the given plane, the combination of their results provided a 3D bounding box around it. Classification performance of each CNN, expressed as area under the receiver operating characteristic curve, was >=0.988. Additionally, the performance of ROI localization was evaluated. Median Dice scores for automatically determined bounding boxes around the heart, aortic arch, and descending aorta were 0.89, 0.70, and 0.85, respectively. The results demonstrate that accurate automatic 3D localization of anatomical structures by CNN-based 2D image classification is feasible.
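
    A sketch of the combination step only: given per-slice presence predictions from the three plane-specific CNNs, the 3D bounding box is simply the extent of positive slices along each axis. Thresholding of the CNN outputs and any post-processing are assumptions here.

    ```python
    import numpy as np

    def box_from_slice_predictions(axial, coronal, sagittal):
        """Each argument is a boolean array with one entry per slice of that
        orthogonal plane (True = ROI predicted present).  The 3D bounding box
        is the range of positive slices along the corresponding axis."""
        def extent(pred):
            idx = np.flatnonzero(pred)
            return (int(idx[0]), int(idx[-1])) if idx.size else None
        return {"z": extent(axial), "y": extent(coronal), "x": extent(sagittal)}
    ```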

  2. Non-rigid target tracking in 2D ultrasound images using hierarchical grid interpolation

    NASA Astrophysics Data System (ADS)

    Royer, Lucas; Babel, Marie; Krupa, Alexandre

    2014-03-01

    In this paper, we present a new non-rigid target tracking method for 2D ultrasound (US) image sequences. Due to the poor quality of US images, motion tracking of a tumor or cyst during needle insertion remains an open research issue. Our approach is based on a well-known compression algorithm in order to make the method run in real time, which is a necessary condition for many clinical applications. To that end, we employed a dedicated hierarchical grid interpolation (HGI) algorithm, which can represent a larger variety of deformations than other motion estimation algorithms such as Overlapped Block Motion Compensation (OBMC) or the Block Motion Algorithm (BMA). The sum of squared differences of image intensity is selected as the similarity criterion because it provides a good trade-off between computation time and motion estimation quality. Contrary to other methods proposed in the literature, our approach has the ability to distinguish both the rigid and non-rigid motions observed in ultrasound imaging. Furthermore, the technique does not rely on any prior knowledge about the target and limits user interaction, which usually complicates the medical validation process. Finally, a technique for identifying the main phases of a periodic motion (e.g. breathing motion) is introduced. The new approach has been validated on 2D ultrasound images of real human tissues undergoing rigid and non-rigid deformations.
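
    A generic sketch of the similarity criterion and the exhaustive block search it typically drives; the hierarchical grid interpolation itself (deforming a coarse-to-fine grid rather than shifting independent blocks) is not reproduced here.

    ```python
    import numpy as np

    def ssd(a, b):
        """Sum of squared intensity differences: the similarity criterion."""
        d = a.astype(np.float64) - b.astype(np.float64)
        return float((d * d).sum())

    def best_block_shift(ref, cur, top, left, size=16, search=8):
        """Integer displacement of one block minimising the SSD between the
        reference frame and the current ultrasound frame (BMA-style search)."""
        block = ref[top:top + size, left:left + size]
        best, best_cost = (0, 0), np.inf
        for dy in range(-search, search + 1):
            for dx in range(-search, search + 1):
                y, x = top + dy, left + dx
                if y < 0 or x < 0 or y + size > cur.shape[0] or x + size > cur.shape[1]:
                    continue
                cost = ssd(block, cur[y:y + size, x:x + size])
                if cost < best_cost:
                    best_cost, best = cost, (dy, dx)
        return best, best_cost
    ```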

  3. Breast density measurement: 3D cone beam computed tomography (CBCT) images versus 2D digital mammograms

    NASA Astrophysics Data System (ADS)

    Han, Tao; Lai, Chao-Jen; Chen, Lingyun; Liu, Xinming; Shen, Youtao; Zhong, Yuncheng; Ge, Shuaiping; Yi, Ying; Wang, Tianpeng; Yang, Wei T.; Shaw, Chris C.

    2009-02-01

    Breast density has been recognized as one of the major risk factors for breast cancer. However, breast density is currently estimated using mammograms which are intrinsically 2D in nature and cannot accurately represent the real breast anatomy. In this study, a novel technique for measuring breast density based on the segmentation of 3D cone beam CT (CBCT) images was developed and the results were compared to those obtained from 2D digital mammograms. 16 mastectomy breast specimens were imaged with a bench top flat-panel based CBCT system. The reconstructed 3D CT images were corrected for the cupping artifacts and then filtered to reduce the noise level, followed by using threshold-based segmentation to separate the dense tissue from the adipose tissue. For each breast specimen, volumes of the dense tissue structures and the entire breast were computed and used to calculate the volumetric breast density. BI-RADS categories were derived from the measured breast densities and compared with those estimated from conventional digital mammograms. The results show that in 10 of 16 cases the BI-RADS categories derived from the CBCT images were lower than those derived from the mammograms by one category. Thus, breasts considered as dense in mammographic examinations may not be considered as dense with the CBCT images. This result indicates that the relation between breast cancer risk and true (volumetric) breast density needs to be further investigated.
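
    Once the dense tissue has been segmented, the density computation reduces to a voxel count; a minimal sketch, assuming the cupping correction and noise filtering have already been applied and that a breast mask and an intensity threshold are available.

    ```python
    import numpy as np

    def volumetric_breast_density(cbct_volume, breast_mask, dense_threshold):
        """Threshold-based segmentation of dense tissue inside the breast mask,
        then volumetric density = dense-tissue volume / whole-breast volume."""
        dense = (cbct_volume >= dense_threshold) & breast_mask
        return dense.sum() / float(breast_mask.sum())
    ```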

  4. State estimation and absolute image registration for geosynchronous satellites

    NASA Technical Reports Server (NTRS)

    Nankervis, R.; Koch, D. W.; Sielski, H.

    1980-01-01

    Spacecraft state estimation and the absolute registration of Earth images acquired by cameras onboard geosynchronous satellites are described. The basic data type of the procedure consists of the line and element numbers of image points, called landmarks, whose geodetic coordinates, relative to United States Geodetic Survey topographic maps, are known. A conventional least squares process is used to estimate navigational parameters and camera pointing biases from observed-minus-computed landmark line and element numbers. These estimated parameters, along with orbit and attitude dynamic models, are used to register images inside the span represented by the landmark data, using an automated grey level correlation technique. In addition, the dynamic models can be employed to register images outside of the data span in a near-real-time mode. An important application of this mode is in support of meteorological studies, where rapid data reduction is required for tracking and predicting dynamic phenomena.
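
    A heavily simplified stand-in for the landmark-based estimation: a linear least-squares fit from known landmark ground coordinates to observed line/element numbers. The actual procedure estimates orbit, attitude and pointing-bias parameters through dynamic models rather than a direct affine fit, so everything below is illustrative.

    ```python
    import numpy as np

    def fit_landmark_mapping(ground_xy, observed_line_elem):
        """Least-squares affine fit from landmark ground coordinates (N x 2)
        to observed image coordinates (N x 2: line and element numbers)."""
        A = np.hstack([ground_xy, np.ones((len(ground_xy), 1))])         # (N, 3)
        coeffs, *_ = np.linalg.lstsq(A, observed_line_elem, rcond=None)  # (3, 2)
        return coeffs

    def predict_line_elem(coeffs, ground_xy):
        """Predicted (line, element) numbers; observed-minus-computed residuals
        of this kind drive the full estimator described above."""
        A = np.hstack([ground_xy, np.ones((len(ground_xy), 1))])
        return A @ coeffs
    ```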

  5. Ridge-based retinal image registration algorithm involving OCT fundus images

    NASA Astrophysics Data System (ADS)

    Li, Ying; Gregori, Giovanni; Knighton, Robert W.; Lujan, Brandon J.; Rosenfeld, Philip J.; Lam, Byron L.

    2011-03-01

    This paper proposes an algorithm for retinal image registration involving OCT fundus images (OFIs). The first application of the algorithm is to register OFIs with color fundus photographs; such registration between multimodal retinal images can help correlate features across imaging modalities, which is important for both clinical and research purposes. The second application is to build a montage of several OFIs, which allows 3D OCT images to be constructed over a large field of view out of separate OCT datasets. We use blood vessel ridges as registration features. A brute-force search and an Iterative Closest Point (ICP) algorithm are employed for image pair registration. Global alignment minimizing the distance between matching pixel pairs is used to obtain the montage of OFIs. The quality of the OFIs is the main limiting factor of the registration algorithm. In the first experiment, the effect of manual OFI enhancement on registration was evaluated for the affine model on 11 image pairs from diseased eyes. The average root mean square error (RMSE) decreased from 58 μm to 40 μm. This indicates that the registration algorithm is robust to manual enhancement. In the second experiment, on the montage of OFIs, the algorithm was tested on 6 sets from healthy eyes and 6 sets from diseased eyes, each set having 8 partially overlapping SD-OCT images. Visual evaluation showed that the montage performance was acceptable for normal cases but not for abnormal cases, due to low visibility of blood vessels. The average RMSE for a typical montage case from a healthy eye is 2.3 pixels (69 μm).
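
    A minimal 2D rigid ICP between two sets of vessel-ridge points, sketching only the refinement stage; the paper additionally uses a brute-force initial search, an affine model and a global montage alignment, none of which are shown, and all names here are illustrative.

    ```python
    import numpy as np
    from scipy.spatial import cKDTree

    def rigid_icp_2d(src, dst, iters=50):
        """Iteratively match each source ridge point (N x 2) to its nearest
        destination point (M x 2) and solve the best rigid fit in closed form."""
        src = np.asarray(src, dtype=float).copy()
        dst = np.asarray(dst, dtype=float)
        tree = cKDTree(dst)
        R_total, t_total = np.eye(2), np.zeros(2)
        for _ in range(iters):
            _, idx = tree.query(src)                   # closest-point matches
            matched = dst[idx]
            mu_s, mu_d = src.mean(axis=0), matched.mean(axis=0)
            H = (src - mu_s).T @ (matched - mu_d)
            U, _, Vt = np.linalg.svd(H)
            R = Vt.T @ U.T
            if np.linalg.det(R) < 0:                   # guard against reflections
                Vt[-1] *= -1
                R = Vt.T @ U.T
            t = mu_d - R @ mu_s
            src = src @ R.T + t
            R_total, t_total = R @ R_total, R @ t_total + t
        return R_total, t_total                        # maps original src onto dst
    ```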

  6. 3D prostate segmentation of ultrasound images combining longitudinal image registration and machine learning

    NASA Astrophysics Data System (ADS)

    Yang, Xiaofeng; Fei, Baowei

    2012-02-01

    We developed a three-dimensional (3D) segmentation method for transrectal ultrasound (TRUS) images based on longitudinal image registration and machine learning. Using longitudinal images of each individual patient, we register previously acquired images to the new images of the same subject. Three orthogonal Gabor filter banks are used to extract texture features from each registered image. Patient-specific Gabor features from the registered images are used to train kernel support vector machines (KSVMs), which then segment the newly acquired prostate image. The segmentation method was tested on TRUS data from five patients. The average surface distance between our segmentation and manual segmentation is 1.18 +/- 0.31 mm, indicating that our automatic segmentation method based on longitudinal image registration is feasible for segmenting the prostate in TRUS images.
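
    A simplified 2D stand-in for the feature-extraction and classification steps, using a small scikit-image Gabor bank and a scikit-learn kernel SVM; the actual method uses three orthogonal 3D Gabor banks on registered longitudinal images, and the arrays in the usage comment are hypothetical.

    ```python
    import numpy as np
    from skimage.filters import gabor
    from sklearn.svm import SVC

    def gabor_feature_stack(image, frequencies=(0.1, 0.2, 0.3),
                            thetas=(0.0, np.pi / 4, np.pi / 2)):
        """Per-pixel Gabor magnitude responses for a small 2D filter bank."""
        feats = [np.hypot(*gabor(image, frequency=f, theta=t))
                 for f in frequencies for t in thetas]
        return np.stack(feats, axis=-1)                  # (H, W, n_features)

    # Hypothetical usage: X_train / y_train hold labelled pixels from the
    # registered prior images of the same patient; the trained kernel SVM
    # then labels every pixel of the newly acquired image.
    #   clf = SVC(kernel="rbf").fit(X_train, y_train)
    #   feats = gabor_feature_stack(new_image)
    #   labels = clf.predict(feats.reshape(-1, feats.shape[-1]))
    ```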

  7. Evaluation of a robotic arm for echocardiography to X-ray image registration during cardiac catheterization procedures.

    PubMed

    Ma, Yingliang; Penney, Graeme P; Bos, Dennis; Frissen, Peter; de Fockert, George; King, Andy; Gao, Gang; Yao, Cheng; Totman, John; Ginks, Matthew; Rinaldi, C; Razavi, Reza; Rhode, Kawal S

    2009-01-01

    We present an initial evaluation of a robotic arm for positioning a 3D echo probe during cardiac catheterization procedures. By tracking the robotic arm, X-ray table and X-ray C-arm, we are able to register the 3D echo images with live 2D X-ray images. In addition, we can also use tracking data from the robotic arm combined with system calibrations to create extended field of view 3D echo images. Both these features can be used for roadmapping to guide cardiac catheterization procedures. We have carried out a validation experiment of our registration method using a cross-wire phantom. Results show our method to be accurate to 3.5 mm. We have successfully demonstrated the creation of the extended field of view data on 2 healthy volunteers and the registration of echo and X-ray data on 1 patient undergoing a pacing study. PMID:19964867

  8. Deep Tissue Photoacoustic Imaging Using a Miniaturized 2-D Capacitive Micromachined Ultrasonic Transducer Array

    PubMed Central

    Kothapalli, Sri-Rajasekhar; Ma, Te-Jen; Vaithilingam, Srikant; Oralkan, Ömer

    2014-01-01

    In this paper, we demonstrate 3-D photoacoustic imaging (PAI) of light absorbing objects embedded as deep as 5 cm inside strong optically scattering phantoms using a miniaturized (4 mm × 4 mm × 500 µm), 2-D capacitive micromachined ultrasonic transducer (CMUT) array of 16 × 16 elements with a center frequency of 5.5 MHz. Two-dimensional tomographic images and 3-D volumetric images of the objects placed at different depths are presented. In addition, we studied the sensitivity of CMUT-based PAI to the concentration of indocyanine green dye at 5 cm depth inside the phantom. Under optimized experimental conditions, the objects at 5 cm depth can be imaged with SNR of about 35 dB and a spatial resolution of approximately 500 µm. Results demonstrate that CMUTs with integrated front-end amplifier circuits are an attractive choice for achieving relatively high depth sensitivity for PAI. PMID:22249594

  9. A software tool for automatic classification and segmentation of 2D/3D medical images

    NASA Astrophysics Data System (ADS)

    Strzelecki, Michal; Szczypinski, Piotr; Materka, Andrzej; Klepaczko, Artur

    2013-02-01

    Modern medical diagnosis utilizes techniques for visualizing human internal organs (CT, MRI) or their metabolism (PET). However, the evaluation of acquired images by human experts is usually subjective and qualitative only. Quantitative analysis of MR data, including tissue classification and segmentation, is necessary to perform e.g. attenuation compensation, motion detection, and correction of the partial volume effect in PET images acquired with PET/MR scanners. This article briefly presents the MaZda software package, which supports 2D and 3D medical image analysis aimed at quantifying image texture. MaZda implements procedures for evaluation, selection and extraction of highly discriminative texture attributes, combined with various classification, visualization and segmentation tools. Examples of MaZda applications in medical studies are also provided.

  10. A two-step Hilbert transform method for 2D image reconstruction.

    PubMed

    Noo, Frédéric; Clackdoyle, Rolf; Pack, Jed D

    2004-09-01

    The paper describes a new accurate two-dimensional (2D) image reconstruction method consisting of two steps. In the first step, the backprojected image is formed after taking the derivative of the parallel projection data. In the second step, a Hilbert filtering is applied along certain lines in the differentiated backprojection (DBP) image. Formulae for performing the DBP step in fanbeam geometry are also presented. The advantage of this two-step Hilbert transform approach is that in certain situations, regions of interest (ROIs) can be reconstructed from truncated projection data. Simulation results are presented that illustrate very similar reconstructed image quality using the new method compared to standard filtered backprojection, and that show the capability to correctly handle truncated projections. In particular, a simulation is presented of a wide patient whose projections are truncated laterally yet for which highly accurate ROI reconstruction is obtained. PMID:15470913

  11. 2D dose distribution images of a hybrid low field MRI-γ detector

    NASA Astrophysics Data System (ADS)

    Abril, A.; Agulles-Pedrós, L.

    2016-07-01

    The proposed hybrid system is a combination of a low-field MRI and a dosimetric gel serving as a γ detector. The readout is based on the polymerization process induced in the gel by radiation. A gel dose map is obtained, which represents the functional part of the hybrid image alongside the anatomical MRI image. Both images should be taken while the patient with a radiopharmaceutical is located inside the MRI system with a gel detector matrix. A relevant aspect of this proposal is that dosimetric gel has never been used to acquire medical images. The results presented show the interaction of a 99mTc source with the dosimetric gel simulated in Geant4. The purpose was to obtain the planar γ 2D image. Different source configurations are studied to explore the ability of the gel as a radiation detector through the following parameters: resolution, shape definition and radiopharmaceutical concentration.

  12. Electron Microscopy: From 2D to 3D Images with Special Reference to Muscle

    PubMed Central

    2015-01-01

    This is a brief and necessarily very sketchy presentation of the evolution in electron microscopy (EM) imaging that was driven by the necessity of extracting 3-D views from the essentially 2-D images produced by the electron beam. The lens design of the standard transmission electron microscope has not been greatly altered since its inception. However, technical advances in specimen preparation, image collection and analysis have gradually brought about an astounding progression over a period of about 50 years. From the early images that redefined tissues, cells and cell organelles at the sub-micron level to the current nano-resolution reconstructions of organelles and proteins, the step is very large. The review is written by an investigator who has followed the field for many years, but often from the sidelines, and with great wonder. Her interest in muscle ultrastructure colors the writing. More specific detailed reviews are presented in this issue. PMID:26913146

  13. Image compression-encryption scheme based on hyper-chaotic system and 2D compressive sensing

    NASA Astrophysics Data System (ADS)

    Zhou, Nanrun; Pan, Shumin; Cheng, Shan; Zhou, Zhihong

    2016-08-01

    Most image encryption algorithms based on low-dimensional chaotic systems bear security risks and suffer data expansion when nonlinear transformations are adopted directly. To overcome these weaknesses and reduce the transmission burden, an efficient image compression-encryption scheme based on a hyper-chaotic system and 2D compressive sensing is proposed. The original image is measured by measurement matrices in two directions to achieve compression and encryption simultaneously, and the resulting image is then re-encrypted by a cycle shift operation controlled by a hyper-chaotic system. The cycle shift operation changes the pixel values efficiently. As a nonlinear encryption system, the proposed cryptosystem decreases the volume of data to be transmitted and simplifies key distribution. Simulation results verify the validity and reliability of the proposed algorithm, with acceptable compression and security performance.
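
    A sketch of the two stages under stated assumptions: the measurement matrices here are Gaussian rather than chaos-derived, and the shift amounts stand in for a hyper-chaotic key sequence.

    ```python
    import numpy as np

    def measure_2d(image, phi_rows, phi_cols):
        """2D compressive-sensing measurement in both directions,
        Y = Phi_rows @ X @ Phi_cols.T, compressing and scrambling in one step."""
        return phi_rows @ image @ phi_cols.T

    def cycle_shift(Y, row_shifts, col_shifts):
        """Re-encrypt by cyclically shifting every row, then every column, by
        amounts taken from the (hyper-)chaotic key sequence."""
        Z = np.vstack([np.roll(Y[i], int(s)) for i, s in enumerate(row_shifts)])
        return np.column_stack([np.roll(Z[:, j], int(s))
                                for j, s in enumerate(col_shifts)])

    # Illustrative use with random stand-in matrices:
    #   rng = np.random.default_rng(0)
    #   phi_r = rng.standard_normal((64, 256)); phi_c = rng.standard_normal((64, 256))
    #   Y = measure_2d(image_256x256, phi_r, phi_c)     # 256x256 -> 64x64
    ```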

  14. A 3D Feature Descriptor Recovered from a Single 2D Palmprint Image.

    PubMed

    Zheng, Qian; Kumar, Ajay; Pan, Gang

    2016-06-01

    The design and development of efficient and accurate feature descriptors is critical for the success of many computer vision applications. This paper proposes a new feature descriptor, referred to as DoN, for 2D palmprint matching. The descriptor is extracted for each point on the palmprint. It is based on an ordinal measure which partially describes the difference between the normal vectors of neighboring points. DoN has at least two advantages: 1) it describes 3D information, which is expected to be highly stable under the illumination variations that commonly occur during contactless imaging; 2) the size of DoN for each point is only one bit, which makes it computationally simple to extract, easy to match, and efficient to store. We show that such 3D information can be extracted from a single 2D palmprint image. An analysis of the effectiveness of the ordinal measure for palmprint matching is also provided. Four publicly available 2D palmprint databases are used to evaluate the effectiveness of DoN, for both identification and verification. Our method achieves state-of-the-art performance on all these databases. PMID:27164564

  15. Scalable High Performance Image Registration Framework by Unsupervised Deep Feature Representations Learning

    PubMed Central

    Wu, Guorong; Kim, Minjeong; Wang, Qian; Munsell, Brent C.

    2015-01-01

    Feature selection is a critical step in deformable image registration. In particular, selecting the most discriminative features that accurately and concisely describe complex morphological patterns in image patches improves correspondence detection, which in turn improves image registration accuracy. Furthermore, since more and more imaging modalities are being invented to better identify morphological changes in medical imaging data, the development of a deformable image registration method that scales well to new image modalities or new image applications with little to no human intervention would have a significant impact on the medical image analysis community. To address these concerns, a learning-based image registration framework is proposed that uses deep learning to discover compact and highly discriminative features from observed imaging data. Specifically, the proposed feature selection method uses a convolutional stacked auto-encoder to identify intrinsic deep feature representations in image patches. Since deep learning is an unsupervised learning method, no ground truth label knowledge is required. This makes the proposed feature selection method more flexible to new imaging modalities, since feature representations can be learned directly from the observed imaging data in a very short amount of time. Using the LONI and ADNI imaging datasets, image registration performance was compared to two existing state-of-the-art deformable image registration methods that use handcrafted features. To demonstrate the scalability of the proposed framework, image registration experiments were conducted on 7.0-tesla brain MR images. In all experiments, the new framework consistently produced more accurate registration results than the state of the art. PMID:26552069

  16. Scalable High-Performance Image Registration Framework by Unsupervised Deep Feature Representations Learning.

    PubMed

    Wu, Guorong; Kim, Minjeong; Wang, Qian; Munsell, Brent C; Shen, Dinggang

    2016-07-01

    Feature selection is a critical step in deformable image registration. In particular, selecting the most discriminative features that accurately and concisely describe complex morphological patterns in image patches improves correspondence detection, which in turn improves image registration accuracy. Furthermore, since more and more imaging modalities are being invented to better identify morphological changes in medical imaging data, the development of a deformable image registration method that scales well to new image modalities or new image applications with little to no human intervention would have a significant impact on the medical image analysis community. To address these concerns, a learning-based image registration framework is proposed that uses deep learning to discover compact and highly discriminative features upon observed imaging data. Specifically, the proposed feature selection method uses a convolutional stacked autoencoder to identify intrinsic deep feature representations in image patches. Since deep learning is an unsupervised learning method, no ground truth label knowledge is required. This makes the proposed feature selection method more flexible to new imaging modalities since feature representations can be directly learned from the observed imaging data in a very short amount of time. Using the LONI and ADNI imaging datasets, image registration performance was compared to two existing state-of-the-art deformable image registration methods that use handcrafted features. To demonstrate the scalability of the proposed image registration framework, image registration experiments were conducted on 7.0-T brain MR images. In all experiments, the results showed that the new image registration framework consistently demonstrated more accurate registration results when compared to state of the art. PMID:26552069

  17. Three modality image registration of brain SPECT/CT and MR images for quantitative analysis of dopamine transporter imaging

    NASA Astrophysics Data System (ADS)

    Yamaguchi, Yuzuho; Takeda, Yuta; Hara, Takeshi; Zhou, Xiangrong; Matsusako, Masaki; Tanaka, Yuki; Hosoya, Kazuhiko; Nihei, Tsutomu; Katafuchi, Tetsuro; Fujita, Hiroshi

    2016-03-01

    Important features of Parkinson's disease (PD) are the degeneration and loss of dopamine neurons in the corpus striatum. 123I-FP-CIT can visualize the activity of these dopamine neurons. The activity ratio of background to corpus striatum is used for diagnosis of PD and Dementia with Lewy Bodies (DLB). The specific activity can be observed in the corpus striatum on SPECT images, but the location and shape of the corpus striatum are often lost on SPECT images alone because of the low uptake. In contrast, MR images can visualize the location of the corpus striatum. The purpose of this study was to realize a quantitative image analysis of the SPECT images by using an image registration technique with brain MR images that can determine the region of the corpus striatum. In this study, an image fusion technique was used to fuse SPECT and MR images via the intervening CT image acquired by SPECT/CT. Mutual information (MI) was used for the registration between the CT and MR images. Six SPECT/CT and four MR scans of phantom materials were taken with varying orientations. As a result, 16 of the 24 combinations were registered to within 1.3 mm. By applying the approach to 32 clinical SPECT/CT and MR cases, all of the cases were registered to within 0.86 mm. In conclusion, our registration method has potential for superimposing MR images on SPECT images.
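
    A generic histogram-based mutual information sketch, i.e. the similarity measure named above for the CT-to-MR step; the bin count and sampling are illustrative choices, not the authors' settings.

    ```python
    import numpy as np

    def mutual_information(img_a, img_b, bins=64):
        """Mutual information I(A;B) estimated from the joint intensity
        histogram of two (already resampled, same-grid) images."""
        hist, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
        pxy = hist / hist.sum()
        px = pxy.sum(axis=1, keepdims=True)              # marginal of A
        py = pxy.sum(axis=0, keepdims=True)              # marginal of B
        nz = pxy > 0
        return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())
    ```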

  18. 2D Imaging in a Lightweight Portable MRI Scanner without Gradient Coils

    PubMed Central

    Cooley, Clarissa Zimmerman; Stockmann, Jason P.; Armstrong, Brandon D.; Sarracanie, Mathieu; Lev, Michael H.; Rosen, Matthew S.; Wald, Lawrence L.

    2014-01-01

    Purpose As the premiere modality for brain imaging, MRI could find wider applicability if lightweight, portable systems were available for siting in unconventional locations such as Intensive Care Units, physician offices, surgical suites, ambulances, emergency rooms, sports facilities, or rural healthcare sites. Methods We construct and validate a truly portable (<100kg) and silent proof-of-concept MRI scanner which replaces conventional gradient encoding with a rotating lightweight cryogen-free, low-field magnet. When rotated about the object, the inhomogeneous field pattern is used as a rotating Spatial Encoding Magnetic field (rSEM) to create generalized projections which encode the iteratively reconstructed 2D image. Multiple receive channels are used to disambiguate the non-bijective encoding field. Results The system is validated with experimental images of 2D test phantoms. Similar to other non-linear field encoding schemes, the spatial resolution is position dependent with blurring in the center, but is shown to be likely sufficient for many medical applications. Conclusion The presented MRI scanner demonstrates the potential for portability by simultaneously relaxing the magnet homogeneity criteria and eliminating the gradient coil. This new architecture and encoding scheme shows convincing proof of concept images that are expected to be further improved with refinement of the calibration and methodology. PMID:24668520

  19. Volumetric synthetic aperture imaging with a piezoelectric 2D row-column probe

    NASA Astrophysics Data System (ADS)

    Bouzari, Hamed; Engholm, Mathias; Christiansen, Thomas Lehrmann; Beers, Christopher; Lei, Anders; Stuart, Matthias Bo; Nikolov, Svetoslav Ivanov; Thomsen, Erik Vilain; Jensen, Jørgen Arendt

    2016-04-01

    The synthetic aperture (SA) technique can be used for achieving real-time volumetric ultrasound imaging using 2-D row-column addressed transducers. This paper investigates SA volumetric imaging performance of an in-house prototyped 3 MHz λ/2-pitch 62+62 element piezoelectric 2-D row-column addressed transducer array. Utilizing single element transmit events, a volume rate of 90 Hz down to 14 cm deep is achieved. Data are obtained using the experimental ultrasound scanner SARUS with a 70 MHz sampling frequency and beamformed using a delay-and-sum (DAS) approach. A signal-to-noise ratio of up to 32 dB is measured on the beamformed images of a tissue mimicking phantom with attenuation of 0.5 dB cm-1 MHz-1, from the surface of the probe to the penetration depth of 300λ. Measured lateral resolution as Full-Width-at-Half-Maximum (FWHM) is between 4λ and 10λ for 18% to 65% of the penetration depth from the surface of the probe. The averaged contrast is 13 dB for the same range. The imaging performance assessment results may represent a reference guide for possible applications of such an array in different medical fields.

  20. Designing of sparse 2D arrays for Lamb wave imaging using coarray concept

    NASA Astrophysics Data System (ADS)

    Ambroziński, Łukasz; Stepinski, Tadeusz; Uhl, Tadeusz

    2015-03-01

    2D ultrasonic arrays have considerable application potential in Lamb wave based SHM systems, since they enable unequivocal damage imaging and, in some cases, even wave-mode selection. Recently, it has been shown that 2D arrays can be used in SHM applications in a synthetic focusing (SF) mode, which is much more effective than the classical phased array mode commonly used in NDT. The SF mode assumes single-element excitation of subsequent transmitters and off-line processing of the acquired data. In the simplest implementation of the technique, only single multiplexed input and output channels are required, which results in significant hardware simplification. Application of the SF mode to 2D arrays creates additional degrees of freedom during the design of the array topology; this complicates the array design process, but it enables sparse array designs with performance similar to that of fully populated dense arrays. In this paper we present the coarray concept to facilitate the synthesis of an array's aperture used in the multistatic synthetic focusing approach in Lamb-wave-based imaging systems. In coherent imaging, performed in the transmit/receive mode, the sum coarray is a morphological convolution of the transmit and receive sub-arrays. It can be calculated as the set of sums of the individual sub-arrays' element locations. The coarray framework will be presented here using the example of a star-shaped array. The approach will be discussed in terms of the beampatterns of the resulting imaging systems. Both simulated and experimental results will be included.

  1. Designing of sparse 2D arrays for Lamb wave imaging using coarray concept

    SciTech Connect

    Ambroziński, Łukasz Stepinski, Tadeusz Uhl, Tadeusz

    2015-03-31

    2D ultrasonic arrays have considerable application potential in Lamb wave based SHM systems, since they enable unequivocal damage imaging and, in some cases, even wave-mode selection. Recently, it has been shown that 2D arrays can be used in SHM applications in a synthetic focusing (SF) mode, which is much more effective than the classical phased array mode commonly used in NDT. The SF mode assumes single-element excitation of subsequent transmitters and off-line processing of the acquired data. In the simplest implementation of the technique, only single multiplexed input and output channels are required, which results in significant hardware simplification. Application of the SF mode to 2D arrays creates additional degrees of freedom during the design of the array topology; this complicates the array design process, but it enables sparse array designs with performance similar to that of fully populated dense arrays. In this paper we present the coarray concept to facilitate the synthesis of an array's aperture used in the multistatic synthetic focusing approach in Lamb-wave-based imaging systems. In coherent imaging, performed in the transmit/receive mode, the sum coarray is a morphological convolution of the transmit and receive sub-arrays. It can be calculated as the set of sums of the individual sub-arrays' element locations. The coarray framework will be presented here using the example of a star-shaped array. The approach will be discussed in terms of the beampatterns of the resulting imaging systems. Both simulated and experimental results will be included.
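
    The sum coarray itself is just the set of element-wise sums of transmitter and receiver positions; a minimal sketch for 2D element coordinates (the morphological-convolution view of the same operation is left implicit).

    ```python
    from itertools import product

    def sum_coarray(tx_positions, rx_positions):
        """Sum coarray of a transmit/receive aperture: the set of sums of the
        individual sub-arrays' element locations (2D tuples).  In the
        multistatic synthetic-focusing mode this virtual aperture governs the
        two-way beampattern of the sparse design."""
        return {(tx[0] + rx[0], tx[1] + rx[1])
                for tx, rx in product(tx_positions, rx_positions)}
    ```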

  2. Using three-dimensional multigrid-based snake and multiresolution image registration for reconstruction of cranial defect.

    PubMed

    Liao, Yuan-Lin; Lu, Chia-Feng; Wu, Chieh-Tsai; Lee, Jiann-Der; Lee, Shih-Tseng; Sun, Yung-Nien; Wu, Yu-Te

    2013-02-01

    In cranioplasty, neurosurgeons use bone grafts to repair skull defects. To ensure the protection of intracranial tissues and recover the original head shape for aesthetic purposes, a custom-made pre-fabricated prosthesis must match the cranial incision as closely as possible. In our previous study (Liao et al. in Med Biol Eng Comput 49:203-211, 2011), we proposed an algorithm consisting of the 2D snake and image registration using the patient's own diagnostic low-resolution and defective high-resolution computed tomography (CT) images to repair the impaired skull. In this study, we developed a 3D multigrid snake and employed multiresolution image registration to improve the computational efficiency. After extracting the defect portion images, we designed an image-trimming process to remove the bumped inner margin that can facilitate the placement of skull implants without manual trimming during surgery. To evaluate the performance of the proposed algorithm, a set of skull phantoms were manufactured to simulate six different conditions of cranial defects, namely, unilateral, bilateral, and cross-midline defects with 20 or 40% skull defects. The overall image processing time in reconstructing the defect portion images can be reduced from 3 h to 20 min, as compared with our previous method. Furthermore, the reconstruction accuracies using the 3D multigrid snake were superior to those using the 2D snake. PMID:23076880

  3. Adaptive optofluidic lens(es) for switchable 2D and 3D imaging

    NASA Astrophysics Data System (ADS)

    Huang, Hanyang; Wei, Kang; Zhao, Yi

    2016-03-01

    The stereoscopic image is often captured using dual cameras arranged side-by-side together with optical path switching systems such as two separate solid lenses or biprisms/mirrors. Miniaturizing current stereoscopic devices down to several millimeters comes at a cost: the limited light entry worsens the final image resolution and brightness. It is known that optofluidics offer good re-configurability for imaging systems. Leveraging this technique, we report a reconfigurable optofluidic system whose optical layout can be swapped between a singlet lens, 10 mm in diameter, and a pair of binocular lenses, each 3 mm in diameter, for switchable two-dimensional (2D) and three-dimensional (3D) imaging. The singlet and the binoculars share the same optical path and the same imaging sensor. The singlet acquires a 2D image with better resolution and brightness, while the binoculars capture stereoscopic image pairs for 3D vision and depth perception. The focusing power tuning capability of the singlet and the binoculars enables image acquisition at varied object planes by adjusting the hydrostatic pressure across the lens membrane. The vari-focal singlet and binoculars thus work interchangeably and complementarily. The device is expected to have applications in robotic vision, stereoscopy, laparoendoscopy and miniaturized zoom lens systems.

  4. Registration and Fusion of Multiple Source Remotely Sensed Image Data

    NASA Technical Reports Server (NTRS)

    LeMoigne, Jacqueline

    2004-01-01

    Earth and Space Science often involve the comparison, fusion, and integration of multiple types of remotely sensed data at various temporal, radiometric, and spatial resolutions. Results of this integration may be utilized for global change analysis, global coverage of an area at multiple resolutions, map updating or validation of new instruments, as well as integration of data provided by multiple instruments carried on multiple platforms, e.g. in spacecraft constellations or fleets of planetary rovers. Our focus is on developing methods to perform fast, accurate and automatic image registration and fusion. General methods for automatic image registration are being reviewed and evaluated. Various choices for feature extraction, feature matching and similarity measurements are being compared, including wavelet-based algorithms, mutual information and statistically robust techniques. Our work also involves studies related to image fusion and investigates dimension reduction and co-kriging for application-dependent fusion. All methods are being tested using several multi-sensor datasets, acquired at EOS Core Sites, and including multiple sensors such as IKONOS, Landsat-7/ETM+, EO1/ALI and Hyperion, MODIS, and SeaWIFS instruments. Issues related to the coregistration of data from the same platform (i.e., AIRS and MODIS from Aqua) or from several platforms of the A-train (i.e., MLS, HIRDLS, OMI from Aura with AIRS and MODIS from Terra and Aqua) will also be considered.

  5. Distance-Dependent Multimodal Image Registration for Agriculture Tasks

    PubMed Central

    Berenstein, Ron; Hočevar, Marko; Godeša, Tone; Edan, Yael; Ben-Shahar, Ohad

    2015-01-01

    Image registration is the process of aligning two or more images of the same scene taken at different times, from different viewpoints, and/or by different sensors. This research focuses on developing a practical method for automatic image registration for agricultural systems that use multimodal sensory systems and operate in natural environments. While not limited to any particular modalities, here we focus on systems with visual and thermal sensory inputs. Our approach is based on pre-calibrating a distance-dependent transformation matrix (DDTM) between the sensors, and representing it in a compact way by regressing the distance-dependent coefficients as distance-dependent functions. The DDTM is measured by calculating a projective transformation matrix for varying distances between the sensors and possible targets. To do so we designed a unique experimental setup including unique Artificial Control Points (ACPs) and their detection algorithms for the two sensors. We demonstrate the utility of our approach using different experiments and evaluation criteria. PMID:26308000
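
    One compact way to realize the DDTM idea, sketched by regressing each coefficient of a per-distance projective (homography) matrix as a polynomial in target distance; the degree and the plain polynomial basis are assumptions, not the authors' regression model.

    ```python
    import numpy as np

    def fit_ddtm(distances, homographies, degree=2):
        """`homographies` is an (N, 3, 3) stack of projective transforms
        calibrated at the N distances; each of the 9 coefficients is regressed
        as a polynomial function of distance."""
        H = np.asarray(homographies).reshape(len(distances), 9)
        return [np.polyfit(distances, H[:, k], degree) for k in range(9)]

    def ddtm_at(coeff_polys, distance):
        """Evaluate the regressed distance-dependent transformation matrix."""
        h = np.array([np.polyval(p, distance) for p in coeff_polys])
        return h.reshape(3, 3)
    ```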

  6. 3D PET image reconstruction including both motion correction and registration directly into an MR or stereotaxic spatial atlas

    NASA Astrophysics Data System (ADS)

    Gravel, Paul; Verhaeghe, Jeroen; Reader, Andrew J.

    2013-01-01

    This work explores the feasibility and impact of including both the motion correction and the image registration transformation parameters from positron emission tomography (PET) image space to magnetic resonance (MR), or stereotaxic, image space within the system matrix of PET image reconstruction. This approach is motivated by the fields of neuroscience and psychiatry, where PET is used to investigate differences in activation patterns between different groups of participants, requiring all images to be registered to a common spatial atlas. Currently, image registration is performed after image reconstruction which introduces interpolation effects into the final image. Furthermore, motion correction (also requiring registration) introduces a further level of interpolation, and the overall result of these operations can lead to resolution degradation and possibly artifacts. It is important to note that performing such operations on a post-reconstruction basis means, strictly speaking, that the final images are not ones which maximize the desired objective function (e.g. maximum likelihood (ML), or maximum a posteriori reconstruction (MAP)). To correctly seek parameter estimates in the desired spatial atlas which are in accordance with the chosen reconstruction objective function, it is necessary to include the transformation parameters for both motion correction and registration within the system modeling stage of image reconstruction. Such an approach not only respects the statistically chosen objective function (e.g. ML or MAP), but furthermore should serve to reduce the interpolation effects. To evaluate the proposed method, this work investigates registration (including motion correction) using 2D and 3D simulations based on the high resolution research tomograph (HRRT) PET scanner geometry, with and without resolution modeling, using the ML expectation maximization (MLEM) reconstruction algorithm. The quality of reconstruction was assessed using bias

  7. Image registration and averaging of low laser power two-photon fluorescence images of mouse retina.

    PubMed

    Alexander, Nathan S; Palczewska, Grazyna; Stremplewski, Patrycjusz; Wojtkowski, Maciej; Kern, Timothy S; Palczewski, Krzysztof

    2016-07-01

    Two-photon fluorescence microscopy (TPM) is now being used routinely to image live cells for extended periods deep within tissues, including the retina and other structures within the eye. However, very low laser power is a requirement to obtain TPM images of the retina safely. Unfortunately, a reduction in laser power also reduces the signal-to-noise ratio of collected images, making it difficult to visualize structural details. Here, image registration and averaging methods applied to TPM images of the eye in living animals (without the need for auxiliary hardware) demonstrate the structural information obtained with laser power down to 1 mW. Image registration provided between 1.4% and 13.0% improvement in image quality compared to averaging images without registration when using a high-fluorescence template, and between 0.2% and 12.0% when employing the average of collected images as the template. Also, a diminishing return on image quality when more images were used to obtain the averaged image is shown. This work provides a foundation for obtaining informative TPM images with laser powers of 1 mW, compared to previous levels for imaging mice ranging between 6.3 mW [Palczewska G., Nat. Med. 20, 785 (2014); Sharma R., Biomed. Opt. Express 4, 1285 (2013)]. PMID:27446697

  8. Image registration and averaging of low laser power two-photon fluorescence images of mouse retina

    PubMed Central

    Alexander, Nathan S.; Palczewska, Grazyna; Stremplewski, Patrycjusz; Wojtkowski, Maciej; Kern, Timothy S.; Palczewski, Krzysztof

    2016-01-01

    Two-photon fluorescence microscopy (TPM) is now being used routinely to image live cells for extended periods deep within tissues, including the retina and other structures within the eye. However, very low laser power is a requirement to obtain TPM images of the retina safely. Unfortunately, a reduction in laser power also reduces the signal-to-noise ratio of collected images, making it difficult to visualize structural details. Here, image registration and averaging methods applied to TPM images of the eye in living animals (without the need for auxiliary hardware) demonstrate the structural information obtained with laser power down to 1 mW. Image registration provided between 1.4% and 13.0% improvement in image quality compared to averaging images without registration when using a high-fluorescence template, and between 0.2% and 12.0% when employing the average of collected images as the template. Also, a diminishing return on image quality when more images were used to obtain the averaged image is shown. This work provides a foundation for obtaining informative TPM images with laser powers of 1 mW, compared to previous levels for imaging mice ranging between 6.3 mW [Palczewska G., Nat. Med. 20, 785 (2014), PMID:24952647; Sharma R., Biomed. Opt. Express 4, 1285 (2013), PMID:24009992]. PMID:27446697
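
    A minimal translation-only sketch of the register-then-average idea using phase correlation; the choice of template (a high-fluorescence frame or the plain average of the frames) follows the abstract, while the rigid-translation model and the parameters are simplifying assumptions.

    ```python
    import numpy as np
    from scipy.ndimage import shift
    from skimage.registration import phase_cross_correlation

    def register_and_average(frames, template):
        """Align each low-SNR TPM frame to the template by the translation
        found with phase correlation, then average the aligned frames."""
        aligned = []
        for frame in frames:
            offset, _, _ = phase_cross_correlation(template, frame,
                                                   upsample_factor=10)
            aligned.append(shift(frame, offset))
        return np.mean(aligned, axis=0)

    # The two template choices compared above:
    #   avg_hi   = register_and_average(frames, high_fluorescence_frame)
    #   avg_self = register_and_average(frames, np.mean(frames, axis=0))
    ```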

  9. Calibration of an Ultrasound Tomography System for Medical Imaging with 2D Contrast-Source Inversion

    NASA Astrophysics Data System (ADS)

    Faucher, Gabriel Paul

    This dissertation describes two possible methods for the calibration of an ultrasound tomography system developed at University of Manitoba's Electromagnetic Imaging Laboratory for imaging with the contrast-source inversion algorithm. The calibration techniques are adapted from existing procedures employed for microwave tomography. A theoretical model of these calibration principles is developed in order to provide a rationale for the effectiveness of the proposed procedures. The applicability of such an imagi