Science.gov

Sample records for 2d image registration

  1. Multiple 2D video/3D medical image registration algorithm

    NASA Astrophysics Data System (ADS)

    Clarkson, Matthew J.; Rueckert, Daniel; Hill, Derek L.; Hawkes, David J.

    2000-06-01

    In this paper we propose a novel method to register two or more video images to a 3D surface model. The potential applications of such a registration method include image-guided surgery, high-precision radiotherapy, robotics and computer vision. Registration is performed by optimizing a similarity measure with respect to the pose parameters. The similarity measure is based on 'photo-consistency' and computes, for each surface point, how consistent the corresponding video image information in each view is with a lighting model. We took four video views of a volunteer's face, and used an independent method to reconstruct a surface that was intrinsically registered to the four views. In addition, we extracted a skin surface from the volunteer's MR scan. The surfaces were misregistered from a gold standard pose and our algorithm was used to register both types of surfaces to the video images. For the reconstructed surface, the mean 3D error was 1.53 mm. For the MR surface, the standard deviation of the pose parameters after registration ranged from 0.12 to 0.70 mm and degrees. The algorithm is accurate, precise and robust.
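
    The photo-consistency idea above can be sketched as follows: project each surface point into every video view, sample the image there, and score how well the samples agree. This is a minimal illustration under a simple Lambertian assumption, not the authors' implementation; the `project` callback and all names are hypothetical.

```python
import numpy as np

def photo_consistency(surface_pts, views, project):
    """Sum, over surface points, of the intensity variance across views.

    Under a Lambertian assumption a correctly registered pose makes the
    projected intensities agree, so lower variance means higher consistency.
    `project(point, pose) -> (u, v)` is an assumed camera-projection callback.
    """
    cost = 0.0
    for p in surface_pts:
        samples = []
        for img, pose in views:
            u, v = project(p, pose)
            # only score points that actually project inside the image
            if 0 <= v < img.shape[0] and 0 <= u < img.shape[1]:
                samples.append(float(img[int(v), int(u)]))
        if len(samples) >= 2:  # need at least two views to compare
            cost += np.var(samples)
    return cost
```

    Registration would then minimize this cost over the six pose parameters with any standard optimizer.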

  2. Advanced 2D-3D registration for endovascular aortic interventions: addressing dissimilarity in images

    NASA Astrophysics Data System (ADS)

    Demirci, Stefanie; Kutter, Oliver; Manstad-Hulaas, Frode; Bauernschmitt, Robert; Navab, Nassir

    2008-03-01

    In the current clinical workflow of minimally invasive aortic procedures, navigation tasks are performed under 2D or 3D angiographic imaging. Many solutions for navigation enhancement suggest an integration of the preoperatively acquired computed tomography angiography (CTA) in order to provide the physician with more image information and reduce contrast injection and radiation exposure. This requires exact registration algorithms that align the CTA volume to the intraoperative 2D or 3D images. In addition to the real-time constraint, the registration accuracy should be independent of image dissimilarities due to the varying presence of medical instruments and contrast agent. In this paper, we propose efficient solutions for image-based 2D-3D and 3D-3D registration that reduce the dissimilarities by image preprocessing, e.g. implicit detection and segmentation, and adaptive weights introduced into the registration procedure. Experiments and evaluations are conducted on real patient data.

  3. Automatic Masking for Robust 3D-2D Image Registration in Image-Guided Spine Surgery

    PubMed Central

    Ketcha, M. D.; De Silva, T.; Uneri, A.; Kleinszig, G.; Vogt, S.; Wolinsky, J.-P.; Siewerdsen, J. H.

    2016-01-01

    During spinal neurosurgery, patient-specific information, planning, and annotation such as vertebral labels can be mapped from preoperative 3D CT to intraoperative 2D radiographs via image-based 3D-2D registration. Such registration has been shown to provide a potentially valuable means of decision support in target localization as well as quality assurance of the surgical product. However, robust registration can be challenged by mismatch in image content between the preoperative CT and intraoperative radiographs, arising, for example, from anatomical deformation or the presence of surgical tools within the radiograph. In this work, we develop and evaluate methods for automatically mitigating the effect of content mismatch by leveraging the surgical planning data to assign greater weight to anatomical regions known to be reliable for registration and vital to the surgical task while removing problematic regions that are highly deformable or often occluded by surgical tools. We investigated two approaches to assigning variable weight (i.e., "masking") to image content and/or the similarity metric: (1) masking the preoperative 3D CT ("volumetric masking"); and (2) masking within the 2D similarity metric calculation ("projection masking"). The accuracy of registration was evaluated in terms of projection distance error (PDE) in 61 cases selected from an IRB-approved clinical study. The best performing of the masking techniques was found to reduce the rate of gross failure (PDE > 20 mm) from 11.48% to 5.57% in this challenging retrospective data set. These approaches provided robustness to content mismatch and eliminated distinct failure modes of registration. Such improvement was gained without additional workflow and has motivated incorporation of the masking methods within a system under development for prospective clinical studies. PMID:27335531

  4. Automatic masking for robust 3D-2D image registration in image-guided spine surgery

    NASA Astrophysics Data System (ADS)

    Ketcha, M. D.; De Silva, T.; Uneri, A.; Kleinszig, G.; Vogt, S.; Wolinsky, J.-P.; Siewerdsen, J. H.

    2016-03-01

    During spinal neurosurgery, patient-specific information, planning, and annotation such as vertebral labels can be mapped from preoperative 3D CT to intraoperative 2D radiographs via image-based 3D-2D registration. Such registration has been shown to provide a potentially valuable means of decision support in target localization as well as quality assurance of the surgical product. However, robust registration can be challenged by mismatch in image content between the preoperative CT and intraoperative radiographs, arising, for example, from anatomical deformation or the presence of surgical tools within the radiograph. In this work, we develop and evaluate methods for automatically mitigating the effect of content mismatch by leveraging the surgical planning data to assign greater weight to anatomical regions known to be reliable for registration and vital to the surgical task while removing problematic regions that are highly deformable or often occluded by surgical tools. We investigated two approaches to assigning variable weight (i.e., "masking") to image content and/or the similarity metric: (1) masking the preoperative 3D CT ("volumetric masking"); and (2) masking within the 2D similarity metric calculation ("projection masking"). The accuracy of registration was evaluated in terms of projection distance error (PDE) in 61 cases selected from an IRB-approved clinical study. The best performing of the masking techniques was found to reduce the rate of gross failure (PDE > 20 mm) from 11.48% to 5.57% in this challenging retrospective data set. These approaches provided robustness to content mismatch and eliminated distinct failure modes of registration. Such improvement was gained without additional workflow and has motivated incorporation of the masking methods within a system under development for prospective clinical studies.
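
    The "projection masking" variant above can be illustrated as a weighted similarity metric evaluated between a DRR and the intraoperative radiograph, where the weight image down-weights regions known to be unreliable. This sketch uses weighted normalized cross-correlation; the metric choice and all names are assumptions, not the paper's exact formulation.

```python
import numpy as np

def masked_ncc(drr, radiograph, weight):
    """Weighted normalized cross-correlation between a DRR and a radiograph.

    `weight` is a per-pixel mask (e.g. low over tool-occluded or highly
    deformable regions); with a uniform mask this reduces to plain NCC.
    """
    w = weight / weight.sum()                 # normalize to a probability map
    a = drr - (w * drr).sum()                 # weighted mean removal
    b = radiograph - (w * radiograph).sum()
    num = (w * a * b).sum()
    den = np.sqrt((w * a**2).sum() * (w * b**2).sum())
    return float(num / den) if den > 0 else 0.0
```

    A registration loop would maximize this score over the 3D pose used to render the DRR.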

  5. Image-based RSA: Roentgen stereophotogrammetric analysis based on 2D-3D image registration.

    PubMed

    de Bruin, P W; Kaptein, B L; Stoel, B C; Reiber, J H C; Rozing, P M; Valstar, E R

    2008-01-01

    Image-based Roentgen stereophotogrammetric analysis (IBRSA) integrates 2D-3D image registration and conventional RSA. Instead of radiopaque RSA bone markers, IBRSA uses 3D CT data, from which digitally reconstructed radiographs (DRRs) are generated. Using 2D-3D image registration, the 3D pose of the CT is iteratively adjusted such that the generated DRRs resemble the 2D RSA images as closely as possible, according to an image matching metric. Effectively, by registering all 2D follow-up moments to the same 3D CT, the CT volume functions as common ground. In two experiments, using RSA and using a micromanipulator as gold standard, IBRSA has been validated on cadaveric and sawbone scapula radiographs, and good matching results have been achieved. The accuracy was |μ| < 0.083 mm for translations and |μ| < 0.023° for rotations. The precision σ in the x-, y-, and z-directions was 0.090, 0.077, and 0.220 mm for translations and 0.155°, 0.243°, and 0.074° for rotations. Our results show that the accuracy and precision of in vitro IBRSA, performed under ideal laboratory conditions, are lower than in vitro standard RSA but higher than in vivo standard RSA. Because IBRSA does not require radiopaque markers, it adds functionality to the RSA method by opening new directions and possibilities for research, such as dynamic analyses using fluoroscopy on subjects without markers and computer navigation applications.

  6. Nonrigid 2D registration of fluoroscopic coronary artery image sequence with layered motion

    NASA Astrophysics Data System (ADS)

    Park, Taewoo; Jung, Hoyup; Yun, Il Dong

    2016-03-01

    We present a new method for nonrigid registration of coronary artery models with layered motion information. A 2D nonrigid registration method is proposed that brings the layered motion information into correspondence with fluoroscopic angiograms. The registered model is overlaid on top of interventional angiograms to provide surgical assistance during image-guided chronic total occlusion procedures. The proposed methodology is divided into two parts: layered structure alignment and local nonrigid registration. In the first part, an inpainting method is used to estimate a layered rigid transformation that aligns the layered motion information. In the second part, a nonrigid registration method is implemented and used to compensate for any local shape discrepancy. Experimental evaluation conducted on a set of 7 fluoroscopic angiograms showed a reduced target registration error, demonstrating the effectiveness of the proposed method over a single-layered approach.

  7. Dynamic 2D ultrasound and 3D CT image registration of the beating heart.

    PubMed

    Huang, Xishi; Moore, John; Guiraudon, Gerard; Jones, Douglas L; Bainbridge, Daniel; Ren, Jing; Peters, Terry M

    2009-08-01

    Two-dimensional ultrasound (US) is widely used in minimally invasive cardiac procedures due to its convenience of use and noninvasive nature. However, the low quality of US images often limits their utility as a means for guiding procedures, since it is often difficult to relate the images to their anatomical context. To improve the interpretability of the US images while maintaining US as a flexible anatomical and functional real-time imaging modality, we describe a multimodality image navigation system that integrates 2D US images with their 3D context by registering them to high quality preoperative models based on magnetic resonance imaging (MRI) or computed tomography (CT) images. The mapping from such a model to the patient is completed using spatial and temporal registrations. Spatial registration is performed by a two-step rapid registration method that first approximately aligns the two images as a starting point for an automatic registration procedure. Temporal alignment is performed with the aid of electrocardiograph (ECG) signals and a latency compensation method. Registration accuracy is measured by calculating the target registration error (TRE). Results show that the error between the US and preoperative images of a beating heart phantom is 1.7 ± 0.4 mm, with similar performance observed in in vivo animal experiments.
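
    The ECG-gated temporal alignment with latency compensation can be sketched as mapping a US frame timestamp onto one of the preoperative model's cardiac phases, using the enclosing R-R interval. All names, the phase parameterization, and the fixed-latency model are hypothetical simplifications of the paper's method.

```python
import bisect

def match_cardiac_phase(us_time, r_peaks, n_phases, latency=0.0):
    """Map a US frame timestamp to one of n_phases preoperative cardiac phases.

    `r_peaks` is a sorted list of ECG R-peak times; `latency` compensates the
    assumed fixed delay between acquisition and timestamping.
    """
    t = us_time - latency
    i = bisect.bisect_right(r_peaks, t) - 1   # enclosing R-R interval
    if i < 0 or i + 1 >= len(r_peaks):
        raise ValueError("timestamp outside the recorded ECG")
    # normalized phase in [0, 1) within this heartbeat
    phase = (t - r_peaks[i]) / (r_peaks[i + 1] - r_peaks[i])
    return int(phase * n_phases) % n_phases
```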

  8. Simultaneous 3D–2D image registration and C-arm calibration: Application to endovascular image-guided interventions

    SciTech Connect

    Mitrović, Uroš; Pernuš, Franjo; Likar, Boštjan; Špiclin, Žiga

    2015-11-15

    Purpose: Three-dimensional to two-dimensional (3D–2D) image registration is a key to fusion and simultaneous visualization of valuable information contained in 3D pre-interventional and 2D intra-interventional images with the final goal of image guidance of a procedure. In this paper, the authors focus on 3D–2D image registration within the context of intracranial endovascular image-guided interventions (EIGIs), where the 3D and 2D images are generally acquired with the same C-arm system. The accuracy and robustness of any 3D–2D registration method, to be used in a clinical setting, is influenced by (1) the method itself, (2) uncertainty of initial pose of the 3D image from which registration starts, (3) uncertainty of C-arm’s geometry and pose, and (4) the number of 2D intra-interventional images used for registration, which is generally one and at most two. The study of these influences requires rigorous and objective validation of any 3D–2D registration method against a highly accurate reference or “gold standard” registration, performed on clinical image datasets acquired in the context of the intervention. Methods: The registration process is split into two sequential, i.e., initial and final, registration stages. The initial stage is either machine-based or template matching. The latter aims to reduce possibly large in-plane translation errors by matching a projection of the 3D vessel model and 2D image. In the final registration stage, four state-of-the-art intrinsic image-based 3D–2D registration methods, which involve simultaneous refinement of rigid-body and C-arm parameters, are evaluated. For objective validation, the authors acquired an image database of 15 patients undergoing cerebral EIGI, for which accurate gold standard registrations were established by fiducial marker coregistration. Results: Based on target registration error, the obtained success rates of 3D to a single 2D image registration after initial machine-based and

  9. 3D/2D image registration using weighted histogram of gradient directions

    NASA Astrophysics Data System (ADS)

    Ghafurian, Soheil; Hacihaliloglu, Ilker; Metaxas, Dimitris N.; Tan, Virak; Li, Kang

    2015-03-01

    Three dimensional (3D) to two dimensional (2D) image registration is crucial in many medical applications such as image-guided evaluation of musculoskeletal disorders. One of the key problems is to estimate the 3D CT-reconstructed bone model positions (translation and rotation) which maximize the similarity between the digitally reconstructed radiographs (DRRs) and the 2D fluoroscopic images using a registration method. This problem is computationally intensive due to a large search space and the complicated DRR generation process. Also, finding a similarity measure which converges to the global optimum instead of local optima adds to the challenge. To circumvent these issues, most existing registration methods need a manual initialization, which requires user interaction and is prone to human error. In this paper, we introduce a novel feature-based registration method using the weighted histogram of gradient directions of images. This method simplifies the computation by searching the parameter space (rotation and translation) sequentially rather than simultaneously. In our numeric simulation experiments, the proposed registration algorithm was able to achieve sub-millimeter and sub-degree accuracies. Moreover, our method is robust to the initial guess. It can tolerate up to ±90° rotation offset from the global optimal solution, which minimizes the need for human interaction to initialize the algorithm.
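
    The core feature of the method above, a histogram of gradient directions weighted by gradient magnitude, can be sketched in a few lines. This is a generic sketch of that descriptor, not the paper's exact binning or weighting scheme; names are hypothetical.

```python
import numpy as np

def gradient_direction_histogram(img, bins=36):
    """Magnitude-weighted histogram of gradient directions of a 2D image.

    Strong edges contribute more than flat regions, so the histogram
    summarizes the dominant edge orientations of the image.
    """
    gy, gx = np.gradient(img.astype(float))   # np.gradient returns axis-0 (y) first
    mag = np.hypot(gx, gy)
    ang = np.arctan2(gy, gx)                  # directions in [-pi, pi)
    hist, _ = np.histogram(ang, bins=bins, range=(-np.pi, np.pi), weights=mag)
    s = hist.sum()
    return hist / s if s > 0 else hist        # normalize so comparisons are scale-free
```

    Comparing such histograms between a DRR and a fluoroscopic image gives an inexpensive, rotation-sensitive similarity cue that can be searched one parameter at a time.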

  10. 2D imaging and 3D sensing data acquisition and mutual registration for painting conservation

    NASA Astrophysics Data System (ADS)

    Fontana, Raffaella; Gambino, Maria Chiara; Greco, Marinella; Marras, Luciano; Pampaloni, Enrico M.; Pelagotti, Anna; Pezzati, Luca; Poggi, Pasquale

    2005-01-01

    We describe the application of 2D and 3D data acquisition and mutual registration to the conservation of paintings. RGB color image acquisition, IR and UV fluorescence imaging, together with the more recent hyperspectral imaging (32 bands), are among the most useful techniques in this field. They are generally meant to provide information on the painting materials, the techniques employed, and the object's state of conservation. However, only when the various images are accurately registered to each other and to the 3D model can unambiguous, reliable conclusions be drawn. We present the integration of 2D and 3D measurements carried out on two different paintings: "Madonna of the Yarnwinder" by Leonardo da Vinci, and "Portrait of Lionello d'Este" by Pisanello, both painted in the 15th century.

  11. 2D imaging and 3D sensing data acquisition and mutual registration for painting conservation

    NASA Astrophysics Data System (ADS)

    Fontana, Raffaella; Gambino, Maria Chiara; Greco, Marinella; Marras, Luciano; Pampaloni, Enrico M.; Pelagotti, Anna; Pezzati, Luca; Poggi, Pasquale

    2004-12-01

    We describe the application of 2D and 3D data acquisition and mutual registration to the conservation of paintings. RGB color image acquisition, IR and UV fluorescence imaging, together with the more recent hyperspectral imaging (32 bands), are among the most useful techniques in this field. They are generally meant to provide information on the painting materials, the techniques employed, and the object's state of conservation. However, only when the various images are accurately registered to each other and to the 3D model can unambiguous, reliable conclusions be drawn. We present the integration of 2D and 3D measurements carried out on two different paintings: "Madonna of the Yarnwinder" by Leonardo da Vinci, and "Portrait of Lionello d'Este" by Pisanello, both painted in the 15th century.

  12. GPU accelerated generation of digitally reconstructed radiographs for 2-D/3-D image registration.

    PubMed

    Dorgham, Osama M; Laycock, Stephen D; Fisher, Mark H

    2012-09-01

    Recent advances in programming languages for graphics processing units (GPUs) provide developers with a convenient way of implementing applications which can be executed on the CPU and GPU interchangeably. GPUs are becoming relatively cheap, powerful, and widely available hardware components, which can be used to perform intensive calculations. The last decade of hardware performance developments shows that GPU-based computation is progressing significantly faster than CPU-based computation, particularly if one considers the execution of highly parallelisable algorithms, and this trend is predicted to continue. In this paper, we introduce a way of accelerating 2-D/3-D image registration by developing a hybrid system which executes on the CPU and utilizes the GPU for parallelizing the generation of digitally reconstructed radiographs (DRRs). Based on the advancements of the GPU over the CPU, it is timely to exploit the benefits of many-core GPU technology by developing algorithms for DRR generation. Although some previous work has investigated the rendering of DRRs using the GPU, this paper investigates approximations which reduce the computational overhead while still maintaining a quality consistent with that needed for 2-D/3-D registration with sufficient accuracy to be clinically acceptable in certain applications of radiation oncology. Furthermore, by comparing implementations of 2-D/3-D registration on the CPU and GPU, we investigate current performance and propose an optimal framework for PC implementations addressing the rigid registration problem. Using this framework, we are able to render DRR images from a 256×256×133 CT volume in ~24 ms using an NVIDIA GeForce 8800 GTX and in ~2 ms using an NVIDIA GeForce GTX 580. In addition to applications requiring fast automatic patient setup, these levels of performance suggest image-guided radiation therapy at video frame rates is technically feasible using relatively low cost PC
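
    A DRR is, at its core, a set of X-ray line integrals through the CT volume. The following deliberately simplified parallel-beam sketch only illustrates the Beer-Lambert integral that the paper's GPU cone-beam renderer evaluates per ray; it is not the paper's algorithm, and all names are hypothetical.

```python
import numpy as np

def drr_parallel(volume, axis=0, dx=1.0):
    """Parallel-beam DRR from a 3D array of attenuation coefficients.

    Beer-Lambert: I = I0 * exp(-integral of mu ds). Summing voxels along one
    axis approximates the line integral for rays parallel to that axis; a real
    cone-beam DRR instead casts diverging rays and interpolates along each.
    """
    line_integrals = volume.sum(axis=axis) * dx   # discretized integral of mu ds
    return np.exp(-line_integrals)                # transmitted fraction per detector pixel
```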

  13. 3D-2D Deformable Image Registration Using Feature-Based Nonuniform Meshes.

    PubMed

    Zhong, Zichun; Guo, Xiaohu; Cai, Yiqi; Yang, Yin; Wang, Jing; Jia, Xun; Mao, Weihua

    2016-01-01

    By using prior information of planning CT images and feature-based nonuniform meshes, this paper demonstrates that volumetric images can be efficiently registered with a very small portion of 2D projection images of a Cone-Beam Computed Tomography (CBCT) scan. After a density field is computed based on the extracted feature edges from planning CT images, nonuniform tetrahedral meshes will be automatically generated to better characterize the image features according to the density field; that is, finer meshes are generated around features. The displacement vector fields (DVFs) are specified at the mesh vertices to drive the deformation of original CT images. Digitally reconstructed radiographs (DRRs) of the deformed anatomy are generated and compared with corresponding 2D projections. DVFs are optimized to minimize an objective function comprising the differences between DRRs and projections and a regularization term. To further accelerate the above 3D-2D registration, a procedure to obtain good initial deformations by deforming the volume surface to match the 2D body boundary on projections has been developed. This complete method is evaluated quantitatively by using several digital phantoms and data from head and neck cancer patients. The feature-based nonuniform meshing method leads to better results than either uniform orthogonal grids or uniform tetrahedral meshes. PMID:27019849

  14. 3D-2D Deformable Image Registration Using Feature-Based Nonuniform Meshes

    PubMed Central

    Guo, Xiaohu; Cai, Yiqi; Yang, Yin; Wang, Jing; Jia, Xun

    2016-01-01

    By using prior information of planning CT images and feature-based nonuniform meshes, this paper demonstrates that volumetric images can be efficiently registered with a very small portion of 2D projection images of a Cone-Beam Computed Tomography (CBCT) scan. After a density field is computed based on the extracted feature edges from planning CT images, nonuniform tetrahedral meshes will be automatically generated to better characterize the image features according to the density field; that is, finer meshes are generated around features. The displacement vector fields (DVFs) are specified at the mesh vertices to drive the deformation of original CT images. Digitally reconstructed radiographs (DRRs) of the deformed anatomy are generated and compared with corresponding 2D projections. DVFs are optimized to minimize an objective function comprising the differences between DRRs and projections and a regularization term. To further accelerate the above 3D-2D registration, a procedure to obtain good initial deformations by deforming the volume surface to match the 2D body boundary on projections has been developed. This complete method is evaluated quantitatively by using several digital phantoms and data from head and neck cancer patients. The feature-based nonuniform meshing method leads to better results than either uniform orthogonal grids or uniform tetrahedral meshes. PMID:27019849

  15. Registration of 2D x-ray images to 3D MRI by generating pseudo-CT data

    NASA Astrophysics Data System (ADS)

    van der Bom, M. J.; Pluim, J. P. W.; Gounis, M. J.; van de Kraats, E. B.; Sprinkhuizen, S. M.; Timmer, J.; Homan, R.; Bartels, L. W.

    2011-02-01

    Spatial and soft tissue information provided by magnetic resonance imaging can be very valuable during image-guided procedures, where usually only real-time two-dimensional (2D) x-ray images are available. Registration of 2D x-ray images to three-dimensional (3D) magnetic resonance imaging (MRI) data, acquired prior to the procedure, can provide optimal information to guide the procedure. However, registering x-ray images to MRI data is not a trivial task because of their fundamental difference in tissue contrast. This paper presents a technique that generates pseudo-computed tomography (CT) data from multi-spectral MRI acquisitions which is sufficiently similar to real CT data to enable registration of x-ray to MRI with comparable accuracy as registration of x-ray to CT. The method is based on a k-nearest-neighbors (kNN)-regression strategy which labels voxels of MRI data with CT Hounsfield Units. The regression method uses multi-spectral MRI intensities and intensity gradients as features to discriminate between various tissue types. The efficacy of using pseudo-CT data for registration of x-ray to MRI was tested on ex vivo animal data. 2D-3D registration experiments using CT and pseudo-CT data of multiple subjects were performed with a commonly used 2D-3D registration algorithm. On average, the median target registration error for registration of two x-ray images to MRI data was approximately 1 mm larger than for x-ray to CT registration. The authors have shown that pseudo-CT data generated from multi-spectral MRI facilitate registration of MRI to x-ray images. From the experiments it could be concluded that the accuracy achieved was comparable to that of registering x-ray images to CT data.
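
    The kNN-regression step described above, labelling MRI-derived feature vectors with CT Hounsfield units, can be sketched directly. This is a brute-force illustration of the strategy, not the paper's implementation; feature construction, distance metric, and names are assumptions.

```python
import numpy as np

def pseudo_ct(query_feats, train_feats, train_hu, k=1):
    """Assign Hounsfield units to MRI feature vectors by kNN regression.

    Each query feature vector (e.g. multi-spectral intensities plus gradients)
    receives the mean HU of its k nearest training samples.
    """
    query_feats = np.atleast_2d(query_feats)
    out = np.empty(len(query_feats))
    for i, f in enumerate(query_feats):
        d = np.linalg.norm(train_feats - f, axis=1)   # Euclidean feature distance
        nn = np.argsort(d)[:k]                        # indices of k nearest neighbours
        out[i] = train_hu[nn].mean()
    return out
```

    In practice a k-d tree or approximate nearest-neighbour index would replace the linear scan, since every voxel of the MRI volume must be labelled.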

  16. Clinical Assessment of 2D/3D Registration Accuracy in 4 Major Anatomic Sites Using On-Board 2D Kilovoltage Images for 6D Patient Setup

    PubMed Central

    Li, Guang; Yang, T. Jonathan; Furtado, Hugo; Birkfellner, Wolfgang; Ballangrud, Åse; Powell, Simon N.; Mechalakos, James

    2015-01-01

    To provide a comprehensive assessment of patient setup accuracy in 6 degrees of freedom (DOFs) using 2-dimensional/3-dimensional (2D/3D) image registration with on-board 2-dimensional kilovoltage (OB-2DkV) radiographic images, we evaluated cranial, head and neck (HN), and thoracic and abdominal sites under clinical conditions. A fast 2D/3D image registration method using a graphics processing unit (GPU) was modified for registration between OB-2DkV and 3D simulation computed tomography (simCT) images, with 3D/3D registration as the gold standard for 6DOF alignment. In 2D/3D registration, body roll rotation was obtained solely by matching orthogonal OB-2DkV images with a series of digitally reconstructed radiographs (DRRs) from simCT with a small rotational increment along the gantry rotation axis. The window/level adjustments for optimal visualization of the bone in OB-2DkV and DRRs were performed prior to registration. Ideal patient alignment at the isocenter was calculated and used as an initial registration position. In 3D/3D registration, cone-beam CT (CBCT) was aligned to simCT on bony structures using a bone density filter in 6DOF. Included in this retrospective study were 37 patients treated in 55 fractions with frameless stereotactic radiosurgery or stereotactic body radiotherapy for cranial and paraspinal cancer. A cranial phantom was used to serve as a control. In all cases, CBCT images were acquired for patient setup with subsequent OB-2DkV verification. It was found that the accuracy of the 2D/3D registration was 0.0 ± 0.5 mm and 0.1° ± 0.4° in the phantom. In patients, accuracy is site dependent due to deformation of the anatomy: 0.2 ± 1.6 mm and −0.4° ± 1.2° on average for each dimension for the cranial site, 0.7 ± 1.6 mm and 0.3° ± 1.3° for HN, 0.7 ± 2.0 mm and −0.7° ± 1.1° for the thorax, and 1.1 ± 2.6 mm and −0.5° ± 1.9° for the abdomen. Anatomical deformation and presence of soft tissue in 2D/3D registration affect the consistency with

  17. Stochastic rank correlation: A robust merit function for 2D/3D registration of image data obtained at different energies

    PubMed Central

    Birkfellner, Wolfgang; Stock, Markus; Figl, Michael; Gendrin, Christelle; Hummel, Johann; Dong, Shuo; Kettenbach, Joachim; Georg, Dietmar; Bergmann, Helmar

    2010-01-01

    In this article, the authors evaluate a merit function for 2D/3D registration called stochastic rank correlation (SRC). SRC is characterized by the fact that differences in image intensity do not influence the registration result; it therefore combines the numerical advantages of cross correlation (CC)-type merit functions with the flexibility of mutual-information-type merit functions. The basic idea is that registration is achieved on a random subset of the image, which allows for an efficient computation of Spearman’s rank correlation coefficient. This measure is, by nature, invariant to monotonic intensity transforms in the images under comparison, which renders it an ideal solution for intramodal images acquired at different energy levels as encountered in intrafractional kV imaging in image-guided radiotherapy. Initial evaluation was undertaken using a 2D/3D registration reference image dataset of a cadaver spine. Even with no radiometric calibration, SRC shows a significant improvement in robustness and stability compared to CC. Pattern intensity, another merit function that was evaluated for comparison, gave rather poor results due to its limited convergence range. The time required for SRC with 5% image content compares well to the other merit functions; increasing the image content does not significantly influence the algorithm accuracy. The authors conclude that SRC is a promising measure for 2D/3D registration in IGRT and image-guided therapy in general. PMID:19746775
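
    The SRC idea, Spearman rank correlation computed on a random subset of pixels, can be sketched as below. The rank computation here ignores ties (exact Spearman uses average ranks for tied values), so this is a simplified illustration of the measure, with hypothetical names, rather than the authors' implementation.

```python
import numpy as np

def stochastic_rank_correlation(a, b, frac=0.05, rng=None):
    """Spearman rank correlation between two images on a random pixel subset.

    Ranks are invariant to monotonic intensity transforms, which is what makes
    the measure robust to images acquired at different energies.
    """
    rng = rng or np.random.default_rng(0)
    n = max(2, int(frac * a.size))
    idx = rng.choice(a.size, size=n, replace=False)   # the "stochastic" subset
    x, y = a.ravel()[idx], b.ravel()[idx]
    # rank via double argsort (tie handling omitted for brevity)
    rx = np.argsort(np.argsort(x)).astype(float)
    ry = np.argsort(np.argsort(y)).astype(float)
    rx -= rx.mean()
    ry -= ry.mean()
    return float((rx * ry).sum() / np.sqrt((rx**2).sum() * (ry**2).sum()))
```

    Because only ~5% of pixels are ranked, the measure stays cheap enough for iterative 2D/3D registration.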

  18. Dynamic tracking of a deformable tissue based on 3D-2D MR-US image registration

    NASA Astrophysics Data System (ADS)

    Marami, Bahram; Sirouspour, Shahin; Fenster, Aaron; Capson, David W.

    2014-03-01

    Real-time registration of pre-operative magnetic resonance (MR) or computed tomography (CT) images with intra-operative ultrasound (US) images can be a valuable tool in image-guided therapies and interventions. This paper presents an automatic method for dynamically tracking the deformation of a soft tissue based on registering pre-operative three-dimensional (3D) MR images to intra-operative two-dimensional (2D) US images. The registration algorithm is based on concepts in state estimation where a dynamic finite element (FE)-based linear elastic deformation model correlates the imaging data in the spatial and temporal domains. A Kalman-like filtering process estimates the unknown deformation states of the soft tissue using the deformation model and a measure of error between the predicted and the observed intra-operative imaging data. The error is computed based on an intensity-based distance metric, namely, the modality independent neighborhood descriptor (MIND), and no segmentation or feature extraction from images is required. The performance of the proposed method is evaluated by dynamically deforming 3D pre-operative MR images of a breast phantom tissue based on real-time 2D images obtained from a US probe. Experimental results on different registration scenarios showed that deformation tracking converges in a few iterations. The average target registration error on the plane of 2D US images for manually selected fiducial points was between 0.3 and 1.5 mm depending on the size of deformation.
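
    The Kalman-like estimation step can be illustrated with a single linear-Gaussian update: the predicted deformation state is corrected in proportion to the measured image residual. The paper's filter operates on FE deformation states with a MIND-based residual; this generic linear update, with hypothetical names, only shows the state-estimation skeleton.

```python
import numpy as np

def kalman_like_update(x_pred, P_pred, residual, H, R):
    """One linear-Gaussian measurement update.

    x_pred: predicted state (e.g. FE node displacements), P_pred: its covariance,
    residual: image-based innovation, H: observation matrix, R: measurement noise.
    """
    S = H @ P_pred @ H.T + R                      # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)           # Kalman gain
    x = x_pred + K @ residual                     # corrected state
    P = (np.eye(len(x_pred)) - K @ H) @ P_pred    # corrected covariance
    return x, P
```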

  19. 3D/2D model-to-image registration applied to TIPS surgery.

    PubMed

    Jomier, Julien; Bullitt, Elizabeth; Van Horn, Mark; Pathak, Chetna; Aylward, Stephen R

    2006-01-01

    We have developed a novel model-to-image registration technique which aligns a 3-dimensional model of vasculature with two semiorthogonal fluoroscopic projections. Our vascular registration method is used to intra-operatively initialize the alignment of a catheter and a preoperative vascular model in the context of image-guided TIPS (Transjugular Intrahepatic Portosystemic Shunt formation) surgery. Registration optimization is driven by the intensity information from the projection pairs at sample points along the centerlines of the model. Our algorithm demonstrates speed, accuracy, and consistency on clinical data.

  20. Recovering 3D tumor locations from 2D bioluminescence images and registration with CT images

    NASA Astrophysics Data System (ADS)

    Huang, Xiaolei; Metaxas, Dimitris N.; Menon, Lata G.; Mayer-Kuckuk, Philipp; Bertino, Joseph R.; Banerjee, Debabrata

    2006-02-01

    In this paper, we introduce a novel and efficient algorithm for reconstructing the 3D locations of tumor sites from a set of 2D bioluminescence images taken by the same camera, with the object rotated incrementally by a small angle between acquisitions. Our approach requires a much simpler setup than those using multiple cameras, and the algorithmic steps in our framework are efficient and robust enough to facilitate its use in analyzing repeated imaging of the same animal transplanted with gene-marked cells. In order to visualize the 3D structure of the tumor, we also co-register the BLI-reconstructed coarse structure with detailed anatomical structure extracted from high-resolution microCT on a single platform. We present our method using both phantom studies and real studies on small animals.

  1. Registration of 2D to 3D joint images using phase-based mutual information

    NASA Astrophysics Data System (ADS)

    Dalvi, Rupin; Abugharbieh, Rafeef; Pickering, Mark; Scarvell, Jennie; Smith, Paul

    2007-03-01

    Registration of two-dimensional to three-dimensional orthopaedic medical image data has important applications, particularly in image-guided surgery and sports medicine. Fluoroscopy to computed tomography (CT) registration is an important case, wherein digitally reconstructed radiographs derived from the CT data are registered to the fluoroscopy data. Traditional registration metrics such as intensity-based mutual information (MI) typically work well but often suffer from gross misregistration errors when the image to be registered contains only a partial view of the anatomy visible in the target image. Phase-based MI provides a robust alternative similarity measure which, in addition to possessing the general robustness and noise immunity of MI, also employs local phase information in the registration process, making it less susceptible to the aforementioned errors. In this paper, we propose using the complex wavelet transform to compute image phase information and incorporating it into a phase-based MI measure for image registration. Tests on a CT volume and six fluoroscopy images of the knee are presented. The femur and the tibia in the CT volume were individually registered to the fluoroscopy images using intensity-based MI, gradient-based MI, and phase-based MI. Errors in the coordinates of fiducials present in the bone structures were used to assess the accuracy of the different registration schemes. Quantitative results demonstrate that intensity-based MI performed worst, gradient-based MI performed slightly better, and phase-based MI performed best, consistently producing the lowest errors.
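The intensity-based MI baseline discussed above can be sketched with a joint-histogram estimate of mutual information (a simplified illustration assuming NumPy; a phase-based variant would feed phase maps rather than raw intensities into the same measure):

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Histogram-based mutual information between two equally
    shaped images (intensity or phase maps)."""
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = hist / hist.sum()                    # joint distribution
    px = pxy.sum(axis=1, keepdims=True)        # marginal of a
    py = pxy.sum(axis=0, keepdims=True)        # marginal of b
    nz = pxy > 0                               # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

rng = np.random.default_rng(0)
img = rng.random((64, 64))
mi_self = mutual_information(img, img)                    # high: identical images
mi_rand = mutual_information(img, rng.random((64, 64)))   # near zero: unrelated
```

During registration, a measure like this is evaluated between the DRR and the fluoroscopy image at each candidate pose and maximized by the optimizer.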

  2. Self-calibration of cone-beam CT geometry using 3D–2D image registration

    PubMed Central

    Ouadah, S; Stayman, J W; Gang, G J; Ehtiati, T; Siewerdsen, J H

    2016-01-01

    Robotic C-arms are capable of complex orbits that can increase field of view, reduce artifacts, improve image quality, and/or reduce dose; however, it can be challenging to obtain accurate, reproducible geometric calibration required for image reconstruction for such complex orbits. This work presents a method for geometric calibration for an arbitrary source-detector orbit by registering 2D projection data to a previously acquired 3D image. It also yields a method by which calibration of simple circular orbits can be improved. The registration uses a normalized gradient information similarity metric and the covariance matrix adaptation-evolution strategy optimizer for robustness against local minima and changes in image content. The resulting transformation provides a ‘self-calibration’ of system geometry. The algorithm was tested in phantom studies using both a cone-beam CT (CBCT) test-bench and a robotic C-arm (Artis Zeego, Siemens Healthcare) for circular and non-circular orbits. Self-calibration performance was evaluated in terms of the full-width at half-maximum (FWHM) of the point spread function in CBCT reconstructions, the reprojection error (RPE) of steel ball bearings placed on each phantom, and the overall quality and presence of artifacts in CBCT images. In all cases, self-calibration improved the FWHM—e.g. on the CBCT bench, FWHM = 0.86 mm for conventional calibration compared to 0.65 mm for self-calibration (p < 0.001). Similar improvements were measured in RPE—e.g. on the robotic C-arm, RPE = 0.73 mm for conventional calibration compared to 0.55 mm for self-calibration (p < 0.001). Visible improvement was evident in CBCT reconstructions using self-calibration, particularly about high-contrast, high-frequency objects (e.g. temporal bone air cells and a surgical needle). The results indicate that self-calibration can improve even upon systems with presumably accurate geometric calibration and is applicable to situations where conventional
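The gradient-based similarity underlying metrics like normalized gradient information can be sketched as a correlation of image gradients (a simplified stand-in for the paper's exact metric, not a reimplementation of it; assumes NumPy):

```python
import numpy as np

def gradient_correlation(a, b):
    """Mean of the Pearson correlations of the row- and column-wise
    image gradients: high when edges align, low otherwise."""
    def corr(x, y):
        x = x - x.mean()
        y = y - y.mean()
        return float((x * y).sum() /
                     (np.linalg.norm(x) * np.linalg.norm(y) + 1e-12))
    gax, gay = np.gradient(a.astype(float))
    gbx, gby = np.gradient(b.astype(float))
    return 0.5 * (corr(gax, gbx) + corr(gay, gby))

rng = np.random.default_rng(1)
proj = rng.random((48, 48))
same = gradient_correlation(proj, proj)                      # 1.0 for identical images
diff = gradient_correlation(proj, rng.random((48, 48)))      # near zero for unrelated
```

Because gradients discard slowly varying intensity offsets, such metrics tolerate the content changes between the prior 3D image and the projection data that a plain intensity difference would not.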

  6. Assessing 3D tunnel position in ACL reconstruction using a novel single image 3D-2D registration

    NASA Astrophysics Data System (ADS)

    Kang, X.; Yau, W. P.; Otake, Y.; Cheung, P. Y. S.; Hu, Y.; Taylor, R. H.

    2012-02-01

    The routinely used procedure for evaluating tunnel positions following anterior cruciate ligament (ACL) reconstructions based on standard X-ray images is known to pose difficulties in terms of obtaining accurate measures, especially in providing three-dimensional tunnel positions. This is largely due to the variability in individual knee joint pose relative to X-ray plates. Accurate results have been reported using postoperative CT; however, its routine clinical use is hampered by the requirement for a CT scan of each individual patient, which is not available for most ACL reconstructions. The proposed method addresses these difficulties by aligning a knee model to X-ray images using a novel single-image, contour-based 3D-2D registration and then estimating the 3D tunnel position. Image contours are treated as a set of oriented points; however, instead of multiplying some form of orientation weighting function with a distance function, we formulate the 3D-2D registration as a probability density estimation using a mixture of von Mises-Fisher-Gaussian (vMFG) distributions and solve it through an expectation maximization (EM) algorithm. Compared with the ground truth established from postoperative CT, our registration method in an experiment using a plastic phantom showed accurate results, with errors of (-0.43° ± 1.19°, 0.45° ± 2.17°, 0.23° ± 1.05°) and (0.03 ± 0.55, -0.03 ± 0.54, -2.73 ± 1.64) mm. The entry point of the ACL tunnel, one of the key measurements, was localized with a distance error of 0.53 ± 0.30 mm.

  7. Known-component 3D-2D registration for image guidance and quality assurance in spine surgery pedicle screw placement

    NASA Astrophysics Data System (ADS)

    Uneri, A.; Stayman, J. W.; De Silva, T.; Wang, A. S.; Kleinszig, G.; Vogt, S.; Khanna, A. J.; Wolinsky, J.-P.; Gokaslan, Z. L.; Siewerdsen, J. H.

    2015-03-01

    Purpose. To extend the functionality of radiographic/fluoroscopic imaging systems already within the standard spine surgery workflow to: 1) provide guidance of surgical devices analogous to an external tracking system; and 2) provide intraoperative quality assurance (QA) of the surgical product. Methods. Using fast, robust 3D-2D registration in combination with 3D models of known components (surgical devices), the 3D pose of each component was determined relative to 2D projection images and the 3D preoperative CT in near-real-time. Exact and parametric models of the components were used as input to the algorithm to evaluate the effect of model fidelity. The proposed algorithm employs the covariance matrix adaptation evolution strategy (CMA-ES) to maximize gradient correlation (GC) between measured projections and simulated forward projections of the components. Geometric accuracy was evaluated in a spine phantom in terms of target registration error at the tool tip (TREx) and angular deviation (TREΦ) from the planned trajectory. Results. Transpedicle surgical devices (probe tool and spine screws) were successfully guided with TREx < 2 mm and TREΦ < 0.5° given projection views separated by at least 30° (easily accommodated on a mobile C-arm). QA of the surgical product based on 3D-2D registration demonstrated detection of pedicle screw breach with TREx < 1 mm, with a trend of improved accuracy correlated with the fidelity of the component model employed. Conclusions. 3D-2D registration combined with 3D models of known surgical components provides a novel method for near-real-time guidance and quality assurance using a mobile C-arm without external trackers or fiducial markers. Ongoing work includes determination of optimal views based on component shape and trajectory, improved robustness to anatomical deformation, and expanded preclinical testing in spine and intracranial surgeries.

  8. Development of fast patient position verification software using 2D-3D image registration and its clinical experience.

    PubMed

    Mori, Shinichiro; Kumagai, Motoki; Miki, Kentaro; Fukuhara, Riki; Haneishi, Hideaki

    2015-09-01

    To improve treatment workflow, we developed a graphics processing unit (GPU)-based patient position verification software application and integrated it into carbon-ion scanning beam treatment. Here, we evaluated the basic performance of the software. The algorithm performs 2D/3D registration matching using CT and orthogonal X-ray flat panel detector (FPD) images. The participants were 53 patients with tumors of the head and neck, prostate, or lung receiving carbon-ion beam treatment. 2D/3D-ITchi-Gime (ITG) calculation accuracy was evaluated in terms of computation time and registration accuracy. Registration was computed using the similarity measurement metrics gradient difference (GD), normalized mutual information (NMI), zero-mean normalized cross-correlation (ZNCC), and their combinations. Registration accuracy was dependent on the particular metric used. Representative examples were determined to have target registration error (TRE) = 0.45 ± 0.23 mm and angular error (AE) = 0.35 ± 0.18° with ZNCC + GD for a head and neck tumor; TRE = 0.12 ± 0.07 mm and AE = 0.16 ± 0.07° with ZNCC for a pelvic tumor; and TRE = 1.19 ± 0.78 mm and AE = 0.83 ± 0.61° with ZNCC for a lung tumor. Calculation time was less than 7.26 s. The new registration software has been successfully installed and implemented in our treatment process. We expect that it will improve both treatment workflow and treatment accuracy. PMID:26081313
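Of the similarity metrics listed, zero-mean normalized cross-correlation (ZNCC) is the simplest to state; a minimal NumPy sketch (illustrative only, with hypothetical image names):

```python
import numpy as np

def zncc(a, b):
    """Zero-mean normalized cross-correlation between two images:
    1.0 for images identical up to an affine intensity change,
    near 0 for unrelated images."""
    a = a.astype(float) - a.mean()
    b = b.astype(float) - b.mean()
    return float((a * b).sum() /
                 (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

rng = np.random.default_rng(2)
drr = rng.random((32, 32))
s_same = zncc(drr, 2 * drr + 5)            # invariant to gain/offset
s_rand = zncc(drr, rng.random((32, 32)))   # unrelated content
```

The invariance to intensity gain and offset is what makes ZNCC attractive for comparing DRRs against FPD images whose exposure settings differ.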

  10. Registration of dynamic multiview 2D ultrasound and late gadolinium enhanced images of the heart: Application to hypertrophic cardiomyopathy characterization.

    PubMed

    Betancur, Julián; Simon, Antoine; Halbert, Edgar; Tavard, François; Carré, François; Hernández, Alfredo; Donal, Erwan; Schnell, Frédéric; Garreau, Mireille

    2016-02-01

    Describing and analyzing heart multiphysics requires the acquisition and fusion of multisensor cardiac images. Multisensor image fusion enables a combined analysis of these heterogeneous modalities. We propose to register intra-patient multiview 2D+t ultrasound (US) images with multiview late gadolinium-enhanced (LGE) images acquired during cardiac magnetic resonance imaging (MRI), in order to fuse mechanical and tissue state information. The proposed procedure registers both US and LGE to cine MRI. The correction of slice misalignment and the rigid registration of multiview LGE and cine MRI are studied in order to select the most appropriate similarity measure; mutual information performed best both for LGE slice misalignment correction and for LGE-to-cine registration. Concerning US registration, dynamic endocardial contours resulting from speckle tracking echocardiography were exploited in a geometry-based dynamic registration. We propose the use of an adapted dynamic time warping procedure to synchronize cardiac dynamics in multiview US and cine MRI. The registration of US and LGE MRI was evaluated on a dataset of patients with hypertrophic cardiomyopathy. A visual assessment of 330 left ventricular regions from US images of 28 patients resulted in 92.7% of regions successfully aligned with cardiac structures in LGE. Successfully aligned regions were then used to evaluate the ability of strain indicators to predict the presence of fibrosis. Longitudinal peak-strain and peak-delay of aligned left ventricular regions were computed from the corresponding regional strain curves from US. The Mann-Whitney test showed that the expected values of these indicators differ between regions with and without fibrosis (p < 0.01). ROC curves further showed that the presence of fibrosis is one factor among others that modifies longitudinal peak-strain and peak-delay. PMID:26619189
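The dynamic time warping step used to synchronize cardiac dynamics can be illustrated with the classic DTW recurrence (a minimal sketch on 1D sequences, not the paper's adapted procedure; assumes NumPy):

```python
import numpy as np

def dtw_distance(s, t):
    """Classic dynamic time warping distance between two 1D
    sequences: the minimum cumulative cost over all monotone
    alignments, allowing one sequence to stretch against the other."""
    n, m = len(s), len(t)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(s[i - 1] - t[j - 1])
            # Extend the cheapest of the three admissible predecessors
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return float(D[n, m])

# Same cardiac-like cycle sampled at two different temporal rates,
# mimicking US vs. cine MRI frame rates.
fast = np.sin(np.linspace(0, 2 * np.pi, 20))
slow = np.sin(np.linspace(0, 2 * np.pi, 40))
d_warp = dtw_distance(fast, slow)   # small: DTW absorbs the rate mismatch
d_self = dtw_distance(fast, fast)   # zero: identical sequences
```

Backtracking through `D` (not shown) yields the frame-to-frame correspondence used to temporally align the two acquisitions.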

  11. Auto-masked 2D/3D image registration and its validation with clinical cone-beam computed tomography

    NASA Astrophysics Data System (ADS)

    Steininger, P.; Neuner, M.; Weichenberger, H.; Sharp, G. C.; Winey, B.; Kametriser, G.; Sedlmayer, F.; Deutschmann, H.

    2012-07-01

    Image-guided alignment procedures in radiotherapy aim at minimizing discrepancies between the planned and the real patient setup. For that purpose, we developed a 2D/3D approach which rigidly registers a computed tomography (CT) with two x-rays by maximizing the agreement in pixel intensity between the x-rays and the corresponding reconstructed radiographs from the CT. Moreover, the algorithm selects regions of interest (masks) in the x-rays based on 3D segmentations from the pre-planning stage. For validation, orthogonal x-ray pairs from different viewing directions of 80 pelvic cone-beam CT (CBCT) raw data sets were used. The 2D/3D results were compared to corresponding standard 3D/3D CBCT-to-CT alignments. Outcome over 8400 2D/3D experiments showed that parametric errors in root mean square were <0.18° (rotations) and <0.73 mm (translations), respectively, using rank correlation as intensity metric. This corresponds to a mean target registration error, related to the voxels of the lesser pelvis, of <2 mm in 94.1% of the cases. From the results we conclude that 2D/3D registration based on sequentially acquired orthogonal x-rays of the pelvis is a viable alternative to CBCT-based approaches if rigid alignment on bony anatomy is sufficient, no volumetric intra-interventional data set is required and the expected error range fits the individual treatment prescription.

  12. Registration of 2D C-Arm and 3D CT Images for a C-Arm Image-Assisted Navigation System for Spinal Surgery

    PubMed Central

    Chang, Chih-Ju; Lin, Geng-Li; Tse, Alex; Chu, Hong-Yu; Tseng, Ching-Shiow

    2015-01-01

    The C-Arm image-assisted surgical navigation system has been broadly applied in spinal surgery. However, accurate path planning on the C-Arm AP-view image is difficult. This research studies 2D-3D image registration methods to obtain the optimum transformation matrix between the C-Arm and CT image frames. Through the transformation matrix, the surgical path planned on preoperative CT images can be transformed and displayed on the C-Arm images for surgical guidance. The positions of surgical instruments are also displayed on both CT and C-Arm images in real time. Five similarity measures for 2D-3D image registration (Normalized Cross-Correlation, Gradient Correlation, Pattern Intensity, Gradient Difference Correlation, and Mutual Information), combined with three optimization methods (Powell's method, the Downhill simplex algorithm, and a genetic algorithm), are evaluated for their performance in convergence range, efficiency, and accuracy. Experimental results show that the combination of the Normalized Cross-Correlation measure with the Downhill simplex algorithm obtains maximum correlation and similarity between C-Arm and Digitally Reconstructed Radiograph (DRR) images. Spine sawbones are used in the experiment to evaluate 2D-3D image registration accuracy. The average error in displacement is 0.22 mm, the success rate is approximately 90%, and the average registration time is 16 seconds. PMID:27018859
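The metric-plus-optimizer pairing evaluated above can be illustrated in miniature: here a brute-force search over integer translations stands in for the Downhill simplex optimizer, and Normalized Cross-Correlation is the similarity measure (all names illustrative; assumes NumPy):

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation between two zero-meaned images."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() /
                 (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def register_shift(fixed, moving, search=5):
    """Exhaustively search integer (dy, dx) translations of `moving`
    and return the shift that maximizes NCC against `fixed`."""
    best, best_shift = -np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            shifted = np.roll(np.roll(moving, dy, axis=0), dx, axis=1)
            s = ncc(fixed, shifted)
            if s > best:
                best, best_shift = s, (dy, dx)
    return best_shift, best

rng = np.random.default_rng(3)
fixed = rng.random((40, 40))
# Simulate a known misalignment of dy = -3, dx = +2
moving = np.roll(np.roll(fixed, -3, axis=0), 2, axis=1)
shift, score = register_shift(fixed, moving)  # recovers (3, -2)
```

A real 2D-3D registration searches a six-parameter rigid pose with a continuous optimizer and regenerates the DRR at each iteration, but the objective structure, similarity as a function of transformation parameters, is the same.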

  15. Robust 3D-2D image registration: application to spine interventions and vertebral labeling in the presence of anatomical deformation

    NASA Astrophysics Data System (ADS)

    Otake, Yoshito; Wang, Adam S.; Webster Stayman, J.; Uneri, Ali; Kleinszig, Gerhard; Vogt, Sebastian; Khanna, A. Jay; Gokaslan, Ziya L.; Siewerdsen, Jeffrey H.

    2013-12-01

    We present a framework for robustly estimating registration between a 3D volume image and a 2D projection image and evaluate its precision and robustness in spine interventions for vertebral localization in the presence of anatomical deformation. The framework employs a normalized gradient information similarity metric and multi-start covariance matrix adaptation evolution strategy optimization with local-restarts, which provided improved robustness against deformation and content mismatch. The parallelized implementation allowed orders-of-magnitude acceleration in computation time and improved the robustness of registration via multi-start global optimization. Experiments involved a cadaver specimen and two CT datasets (supine and prone) and 36 C-arm fluoroscopy images acquired with the specimen in four positions (supine, prone, supine with lordosis, prone with kyphosis), three regions (thoracic, abdominal, and lumbar), and three levels of geometric magnification (1.7, 2.0, 2.4). Registration accuracy was evaluated in terms of projection distance error (PDE) between the estimated and true target points in the projection image, including 14,400 random trials (200 trials on the 72 registration scenarios) with initialization error up to ±200 mm and ±10°. The resulting median PDE was better than 0.1 mm in all cases, depending somewhat on the resolution of input CT and fluoroscopy images. The cadaver experiments illustrated the tradeoff between robustness and computation time, yielding a success rate of 99.993% in vertebral labeling (with ‘success’ defined as PDE <5 mm) using 1,718,664 ± 96,582 function evaluations computed in 54.0 ± 3.5 s on a mid-range GPU (nVidia, GeForce GTX690). Parameters yielding a faster search (e.g., fewer multi-starts) reduced robustness under conditions of large deformation and poor initialization (99.535% success for the same data registered in 13.1 s), but given good initialization (e.g., ±5 mm, assuming a robust initial

  17. Automatic localization of target vertebrae in spine surgery using fast CT-to-fluoroscopy (3D-2D) image registration

    NASA Astrophysics Data System (ADS)

    Otake, Y.; Schafer, S.; Stayman, J. W.; Zbijewski, W.; Kleinszig, G.; Graumann, R.; Khanna, A. J.; Siewerdsen, J. H.

    2012-02-01

    Localization of target vertebrae is an essential step in minimally invasive spine surgery, with conventional methods relying on "level counting" - i.e., manual counting of vertebrae under fluoroscopy starting from readily identifiable anatomy (e.g., the sacrum). The approach entails undesirable radiation exposure and procedure time, and is prone to counting errors due to the similar appearance of vertebrae in projection images; wrong-level surgery occurs in 1 of every ~3000 cases. This paper proposes a method to automatically localize target vertebrae in x-ray projections using 3D-2D registration between preoperative CT (in which vertebrae are preoperatively labeled) and intraoperative fluoroscopy. The registration uses an intensity-based approach with a gradient-based similarity metric and the CMA-ES algorithm for optimization. Digitally reconstructed radiographs (DRRs) and a robust similarity metric are computed on GPU to accelerate the process. Evaluation in clinical CT data included 5,000 PA and LAT projections randomly perturbed to simulate human variability in setup of the mobile intraoperative C-arm. The method demonstrated 100% success for the PA view (projection error: 0.42 mm) and 99.8% success for the LAT view (projection error: 0.37 mm). Initial implementation on GPU provided automatic target localization within about 3 s, with further improvement underway via multi-GPU computation. The ability to automatically label vertebrae in fluoroscopy promises to streamline surgical workflow, improve patient safety, and reduce wrong-site surgeries, especially in large patients for whom manual methods are time consuming and error prone.

  18. Position tracking of moving liver lesion based on real-time registration between 2D ultrasound and 3D preoperative images

    SciTech Connect

    Weon, Chijun; Hyun Nam, Woo; Lee, Duhgoon; Ra, Jong Beom; Lee, Jae Young

    2015-01-15

    Purpose: Registration between 2D ultrasound (US) and 3D preoperative magnetic resonance (MR) (or computed tomography, CT) images has been studied recently for US-guided intervention. However, the existing techniques have limitations in either registration speed or performance. The purpose of this work is to develop a real-time and fully automatic registration system between two intermodal images of the liver, and subsequently an indirect lesion positioning/tracking algorithm based on the registration result, for image-guided interventions. Methods: The proposed position tracking system consists of three stages. In the preoperative stage, the authors acquire several 3D preoperative MR (or CT) images at different respiratory phases. Based on the transformations obtained from nonrigid registration of the acquired 3D images, they then generate a 4D preoperative image along the respiratory phase. In the intraoperative preparatory stage, they properly attach a 3D US transducer to the patient’s body and fix its pose using a holding mechanism. They then acquire a couple of respiratory-controlled 3D US images. Via the rigid registration of these US images to the 3D preoperative images in the 4D image, the pose information of the fixed-pose 3D US transducer is determined with respect to the preoperative image coordinates. As feature(s) to use for the rigid registration, they may choose either internal liver vessels or the inferior vena cava. Since the latter is especially useful in patients with diffuse liver disease, the authors newly propose using it. In the intraoperative real-time stage, they acquire 2D US images in real time from the fixed-pose transducer. For each US image, they select candidates for its corresponding 2D preoperative slice from the 4D preoperative MR (or CT) image, based on the predetermined pose information of the transducer. The correct corresponding image is then found among those candidates via real-time 2D registration based on a

  19. A comparison of the 3D kinematic measurements obtained by single-plane 2D-3D image registration and RSA.

    PubMed

    Muhit, Abdullah A; Pickering, Mark R; Ward, Tom; Scarvell, Jennie M; Smith, Paul N

    2010-01-01

    3D computed tomography (CT) to single-plane 2D fluoroscopy registration is an emerging technology for many clinical applications such as kinematic analysis of human joints and image-guided surgery. However, previous registration approaches have suffered from the inaccuracy of determining precise motion parameters for out-of-plane movements. In this paper we compare kinematic measurements obtained by a new 2D-3D registration algorithm with measurements provided by the gold standard Roentgen Stereo Analysis (RSA). In particular, we are interested in the out-of-plane translation and rotations which are difficult to measure precisely using a single plane approach. Our experimental results show that the standard deviation of the error for out-of-plane translation is 0.42 mm which compares favourably to RSA. It is also evident that our approach produces very similar flexion/extension, abduction/adduction and external knee rotation angles when compared to RSA.

  20. 3D-2D image registration for target localization in spine surgery: investigation of similarity metrics providing robustness to content mismatch

    NASA Astrophysics Data System (ADS)

    De Silva, T.; Uneri, A.; Ketcha, M. D.; Reaungamornrat, S.; Kleinszig, G.; Vogt, S.; Aygun, N.; Lo, S.-F.; Wolinsky, J.-P.; Siewerdsen, J. H.

    2016-04-01

    In image-guided spine surgery, robust three-dimensional to two-dimensional (3D-2D) registration of preoperative computed tomography (CT) and intraoperative radiographs can be challenged by the image content mismatch associated with the presence of surgical instrumentation and implants as well as soft-tissue resection or deformation. This work investigates image similarity metrics in 3D-2D registration offering improved robustness against mismatch, thereby improving performance and reducing or eliminating the need for manual masking. The performance of four gradient-based image similarity metrics (gradient information (GI), gradient correlation (GC), gradient information with linear scaling (GS), and gradient orientation (GO)) with a multi-start optimization strategy was evaluated in an institutional review board-approved retrospective clinical study using 51 preoperative CT images and 115 intraoperative mobile radiographs. Registrations were tested with and without polygonal masks as a function of the number of multistarts employed during optimization. Registration accuracy was evaluated in terms of the projection distance error (PDE) and assessment of failure modes (PDE > 30 mm) that could impede reliable vertebral level localization. With manual polygonal masking and 200 multistarts, the GC and GO metrics exhibited robust performance with 0% gross failures and median PDE < 6.4 mm (±4.4 mm interquartile range (IQR)) and a median runtime of 84 s (plus upwards of 1-2 min for manual masking). Excluding manual polygonal masks and decreasing the number of multistarts to 50 caused the GC-based registration to fail at a rate of >14%; however, GO maintained robustness with a 0% gross failure rate. Overall, the GI, GC, and GS metrics were susceptible to registration errors associated with content mismatch, but GO provided robust registration (median PDE = 5.5 mm, 2.6 mm IQR) without manual masking and with an improved runtime (29.3 s).
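    Of the metrics compared above, gradient correlation (GC) has the simplest common formulation: the normalized cross-correlation of the x- and y-gradient images of the two inputs, averaged. A minimal NumPy sketch under that assumption (a generic formulation, not the authors' exact implementation):

    ```python
    import numpy as np

    def ncc(a, b):
        """Normalized cross-correlation of two arrays (zero-mean)."""
        a = a - a.mean()
        b = b - b.mean()
        denom = np.sqrt((a * a).sum() * (b * b).sum())
        return (a * b).sum() / denom if denom > 0 else 0.0

    def gradient_correlation(fixed, moving):
        """GC: average NCC of the row- and column-gradient images,
        e.g. of a DRR and an intraoperative radiograph."""
        gx_f, gy_f = np.gradient(fixed)
        gx_m, gy_m = np.gradient(moving)
        return 0.5 * (ncc(gx_f, gx_m) + ncc(gy_f, gy_m))
    ```

    Because only gradients enter the score, low-frequency intensity mismatch (e.g., scatter or exposure differences) is suppressed, which is part of why gradient-based metrics tolerate content mismatch better than raw-intensity measures.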

  3. Intraoperative Image-based Multiview 2D/3D Registration for Image-Guided Orthopaedic Surgery: Incorporation of Fiducial-Based C-Arm Tracking and GPU-Acceleration

    PubMed Central

    Armand, Mehran; Armiger, Robert S.; Kutzer, Michael D.; Basafa, Ehsan; Kazanzides, Peter; Taylor, Russell H.

    2012-01-01

    Intraoperative patient registration may significantly affect the outcome of image-guided surgery (IGS). Image-based registration approaches have several advantages over the currently dominant point-based direct contact methods and are used in some industry solutions in image-guided radiation therapy with fixed X-ray gantries. However, technical challenges including geometric calibration and computational cost have precluded their use with mobile C-arms for IGS. We propose a 2D/3D registration framework for intraoperative patient registration using a conventional mobile X-ray imager combining fiducial-based C-arm tracking and graphics processing unit (GPU)-acceleration. The two-stage framework 1) acquires X-ray images and estimates relative pose between the images using a custom-made in-image fiducial, and 2) estimates the patient pose using intensity-based 2D/3D registration. Experimental validations using a publicly available gold standard dataset, a plastic bone phantom and cadaveric specimens have been conducted. The mean target registration error (mTRE) was 0.34 ± 0.04 mm (success rate: 100%, registration time: 14.2 s) for the phantom with two images 90° apart, and 0.99 ± 0.41 mm (81%, 16.3 s) for the cadaveric specimen with images 58.5° apart. The experimental results showed the feasibility of the proposed registration framework as a practical alternative for IGS routines. PMID:22113773

  4. Image fusion of Ultrasound Computer Tomography volumes with X-ray mammograms using a biomechanical model based 2D/3D registration.

    PubMed

    Hopp, T; Duric, N; Ruiter, N V

    2015-03-01

    Ultrasound Computer Tomography (USCT) is a promising breast imaging modality under development. Comparison to a standard method like mammography is essential for further development. Due to significant differences in image dimensionality and compression state of the breast, correlating USCT images and X-ray mammograms is challenging. In this paper we present a 2D/3D registration method to improve the spatial correspondence and allow direct comparison of the images. It is based on biomechanical modeling of the breast and simulation of the mammographic compression. We investigate the effect of including patient-specific material parameters estimated automatically from USCT images. The method was systematically evaluated using numerical phantoms and in vivo data. The average registration accuracy using the automated registration was 11.9 mm. Based on the registered images, a method for analysis of the diagnostic value of the USCT images was developed and initially applied to analyze sound speed and attenuation images using X-ray mammograms as ground truth. Combining sound speed and attenuation allows differentiating lesions from surrounding tissue. Overlaying this information on mammograms combines quantitative and morphological information for multimodal diagnosis. PMID:25456144

  5. Interactive initialization of 2D/3D rigid registration

    SciTech Connect

    Gong, Ren Hui; Güler, Özgür; Kürklüoglu, Mustafa; Lovejoy, John; Yaniv, Ziv

    2013-12-15

    Purpose: Registration is one of the key technical components in an image-guided navigation system. A large number of 2D/3D registration algorithms have been previously proposed, but have not been able to transition into clinical practice. The authors identify the primary reason for the lack of adoption as the prerequisite for a sufficiently accurate initial transformation, a mean target registration error of about 10 mm or less. In this paper, the authors present two interactive initialization approaches that provide the desired accuracy for x-ray/MR and x-ray/CT registration in the operating room setting. Methods: The authors have developed two interactive registration methods based on visual alignment of a preoperative image, MR or CT, to intraoperative x-rays. In the first approach, the operator uses a gesture-based interface to align a volume rendering of the preoperative image to multiple x-rays. The second approach uses a tracked tool available as part of a navigation system. Preoperatively, a virtual replica of the tool is positioned next to the anatomical structures visible in the volumetric data. Intraoperatively, the physical tool is positioned in a similar manner and subsequently used to align a volume rendering to the x-ray images using an augmented reality (AR) approach. Both methods were assessed using three publicly available reference data sets for 2D/3D registration evaluation. Results: In the authors' experiments, the authors show that for x-ray/MR registration, the gesture-based method resulted in a mean target registration error (mTRE) of 9.3 ± 5.0 mm with an average interaction time of 146.3 ± 73.0 s, and the AR-based method had mTREs of 7.2 ± 3.2 mm with interaction times of 44 ± 32 s. For x-ray/CT registration, the gesture-based method resulted in a mTRE of 7.4 ± 5.0 mm with an average interaction time of 132.1 ± 66.4 s, and the AR-based method had mTREs of 8.3 ± 5.0 mm with interaction times of 58 ± 52 s. Conclusions: Based on the

  6. SU-E-J-13: Six Degree of Freedom Image Fusion Accuracy for Cranial Target Localization On the Varian Edge Stereotactic Radiosurgery System: Comparison Between 2D/3D and KV CBCT Image Registration

    SciTech Connect

    Xu, H; Song, K; Chetty, I; Kim, J; Wen, N

    2015-06-15

    Purpose: To determine the 6-degree-of-freedom systematic deviations between 2D/3D and CBCT image registration with various imaging setups and fusion algorithms on the Varian Edge Linac. Methods: An anthropomorphic head phantom with embedded radio-opaque targets was scanned with CT slice thicknesses of 0.8, 1, 2, and 3 mm. The 6 DOF systematic errors were assessed by comparing 2D/3D (kV/MV with CT) with 3D/3D (CBCT with CT) image registrations with different offset positions, similarity measures, image filters, and CBCT slice thicknesses (1 and 2 mm). The 2D/3D registration accuracy of 51 fractions for 26 cranial SRS patients was also evaluated by analyzing 2D/3D pre-treatment verification taken after 3D/3D image registrations. Results: The systematic deviations of 2D/3D image registration using kV-kV, MV-kV and MV-MV image pairs were within ±0.3 mm and ±0.3° for translations and rotations with 95% confidence interval (CI) for a reference CT with 0.8 mm slice thickness. No significant difference (P>0.05) in target localization was observed between 0.8 mm, 1 mm, and 2 mm CT slice thicknesses with CBCT slice thicknesses of 1 mm and 2 mm. With 3 mm CT slice thickness, both 2D/3D and 3D/3D registrations performed less accurately in the longitudinal direction than with thinner CT slices (offsets of 0.60 ± 0.12 mm and 0.63 ± 0.07 mm, respectively). Using a content filter, and using the pattern-intensity similarity measure instead of mutual information, improved the 2D/3D registration accuracy significantly (P=0.02 and P=0.01, respectively). For the patient study, means and standard deviations of residual errors were 0.09 ± 0.32 mm, −0.22 ± 0.51 mm and −0.07 ± 0.32 mm in the VRT, LNG and LAT directions, respectively, and 0.12° ± 0.46°, −0.12° ± 0.39° and 0.06° ± 0.28° in the RTN, PITCH, and ROLL directions, respectively. The 95% CI of translational and rotational deviations were comparable to those in the phantom study. Conclusion: 2D/3D image registration provided on the Varian Edge radiosurgery, 6 DOF

  7. FIRE: an open-software suite for real-time 2D/3D image registration for image guided radiotherapy research

    NASA Astrophysics Data System (ADS)

    Furtado, H.; Gendrin, C.; Spoerk, J.; Steiner, E.; Underwood, T.; Kuenzler, T.; Georg, D.; Birkfellner, W.

    2016-03-01

    Radiotherapy treatments have changed at a tremendously rapid pace. Dose delivered to the tumor has escalated while organs at risk (OARs) are better spared. The impact of moving tumors during dose delivery has become higher due to very steep dose gradients. Intra-fractional tumor motion has to be managed adequately to reduce errors in dose delivery. For tumors with large motion, such as tumors in the lung, tracking is an approach that can reduce position uncertainty. Tumor tracking approaches range from purely image intensity based techniques to motion estimation based on surrogate tracking. Research efforts are often based on custom-designed software platforms which take too much time and effort to develop. To address this challenge we have developed an open software platform especially focusing on tumor motion management. FLIRT is a freely available open-source software platform. The core method for tumor tracking is purely intensity-based 2D/3D registration. The platform is written in C++ using the Qt framework for the user interface. The performance-critical methods are implemented on the graphics processor using the CUDA extension. One registration can be as fast as 90 ms (11 Hz). This is suitable to track tumors moving due to respiration (~0.3 Hz) or heartbeat (~1 Hz). Apart from focusing on high performance, the platform is designed to be flexible and easy to use. Current use cases include tracking feasibility studies, patient positioning, and method validation. Such a framework has the potential of enabling the research community to rapidly perform patient studies or try new methods.

  8. 2D/3D registration algorithm for lung brachytherapy

    SciTech Connect

    Zvonarev, P. S.; Farrell, T. J.; Hunter, R.; Wierzbicki, M.; Hayward, J. E.; Sur, R. K.

    2013-02-15

    Purpose: A 2D/3D registration algorithm is proposed for registering orthogonal x-ray images with a diagnostic CT volume for high dose rate (HDR) lung brachytherapy. Methods: The algorithm utilizes a rigid registration model based on a pixel/voxel intensity matching approach. To achieve accurate registration, a robust similarity measure combining normalized mutual information, image gradient, and intensity difference was developed. The algorithm was validated using a simple body and anthropomorphic phantoms. Transfer catheters were placed inside the phantoms to simulate the unique image features observed during treatment. The algorithm sensitivity to various degrees of initial misregistration and to the presence of foreign objects, such as ECG leads, was evaluated. Results: The mean registration error was 2.2 and 1.9 mm for the simple body and anthropomorphic phantoms, respectively. The error was comparable to the interoperator catheter digitization error of 1.6 mm. Preliminary analysis of data acquired from four patients indicated a mean registration error of 4.2 mm. Conclusions: Results obtained using the proposed algorithm are clinically acceptable especially considering the complications normally encountered when imaging during lung HDR brachytherapy.
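    The normalized mutual information component of the combined similarity measure above can be computed from a joint intensity histogram. A minimal sketch using the common definition NMI = (H(A)+H(B))/H(A,B); this is a generic textbook formulation, not the authors' exact implementation:

    ```python
    import numpy as np

    def normalized_mutual_information(a, b, bins=32):
        """NMI = (H(A) + H(B)) / H(A, B), estimated from a joint histogram
        of the two images' intensities (higher = better alignment)."""
        hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
        p = hist / hist.sum()                  # joint probability
        px, py = p.sum(axis=1), p.sum(axis=0)  # marginals
        nz = p > 0
        h_ab = -(p[nz] * np.log(p[nz])).sum()
        h_a = -(px[px > 0] * np.log(px[px > 0])).sum()
        h_b = -(py[py > 0] * np.log(py[py > 0])).sum()
        return (h_a + h_b) / h_ab
    ```

    NMI ranges from 1 (independent intensities) up to 2 (a one-to-one intensity correspondence), which is why it pairs naturally with gradient and intensity-difference terms in a weighted composite measure.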

  9. Kinematic Analysis of Healthy Hips during Weight-Bearing Activities by 3D-to-2D Model-to-Image Registration Technique

    PubMed Central

    Hara, Daisuke; Nakashima, Yasuharu; Hamai, Satoshi; Higaki, Hidehiko; Ikebe, Satoru; Shimoto, Takeshi; Hirata, Masanobu; Kanazawa, Masayuki; Kohno, Yusuke; Iwamoto, Yukihide

    2014-01-01

    Dynamic hip kinematics during weight-bearing activities were analyzed for six healthy subjects. Continuous X-ray images of gait, chair-rising, squatting, and twisting were taken using a flat-panel X-ray detector. Digitally reconstructed radiographic images were used for the 3D-to-2D model-to-image registration technique. The root-mean-square errors associated with tracking the pelvis and femur were less than 0.3 mm and 0.3° for translations and rotations. For gait, chair-rising, and squatting, the maximum hip flexion angles averaged 29.6°, 81.3°, and 102.4°, respectively. The pelvis was tilted anteriorly around 4.4° on average during the full gait cycle. For chair-rising and squatting, the maximum absolute values of anterior/posterior pelvic tilt averaged 12.4°/11.7° and 10.7°/10.8°, respectively. Hip flexion peaked partway through the movement due to further anterior pelvic tilt during both chair-rising and squatting. For twisting, the maximum absolute value of hip internal/external rotation averaged 29.2°/30.7°. This study revealed activity-dependent kinematics of healthy hip joints with coordinated pelvic and femoral dynamic movements. Kinematic data during activities of daily living may provide important insight for evaluating the kinematics of pathological and reconstructed hips. PMID:25506056

  10. Model-based measurement of food portion size for image-based dietary assessment using 3D/2D registration

    NASA Astrophysics Data System (ADS)

    Chen, Hsin-Chen; Jia, Wenyan; Yue, Yaofeng; Li, Zhaoxin; Sun, Yung-Nien; Fernstrom, John D.; Sun, Mingui

    2013-10-01

    Dietary assessment is important in health maintenance and intervention in many chronic conditions, such as obesity, diabetes, and cardiovascular disease. However, there is currently a lack of convenient methods for measuring the volume of food (portion size) in real-life settings. We present a computational method to estimate food volume from a single photographic image of food contained on a typical dining plate. First, we calculate the food location with respect to a 3D camera coordinate system using the plate as a scale reference. Then, the food is segmented automatically from the background in the image. Adaptive thresholding and snake modeling are implemented based on several image features, such as color contrast, regional color homogeneity and curve bending degree. Next, a 3D model representing the general shape of the food (e.g., a cylinder, a sphere, etc.) is selected from a pre-constructed shape model library. The position, orientation and scale of the selected shape model are determined by registering the projected 3D model and the food contour in the image, where the properties of the reference are used as constraints. Experimental results using various realistically shaped foods with known volumes demonstrated satisfactory performance of our image-based food volume measurement method even if the 3D geometric surface of the food is not completely represented in the input image.
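    The scale-reference idea above can be illustrated with a toy calculation: a plate of known physical diameter converts pixel measurements to millimetres, and a cylinder shape model then yields a volume. All names and numbers here are hypothetical, and this ignores perspective effects that the full 3D/2D registration handles:

    ```python
    import numpy as np

    def cylinder_volume_from_image(plate_diameter_mm, plate_diameter_px,
                                   food_diameter_px, food_height_px):
        """Estimate food volume with a cylinder shape model, using a plate
        of known diameter as the scale reference (assumes food and plate
        lie at roughly the same depth from the camera)."""
        mm_per_px = plate_diameter_mm / plate_diameter_px
        r = 0.5 * food_diameter_px * mm_per_px   # cylinder radius (mm)
        h = food_height_px * mm_per_px           # cylinder height (mm)
        return np.pi * r**2 * h                  # volume in mm^3
    ```

    The full method generalizes this by registering the projected 3D shape model to the segmented food contour, so the scale, position, and orientation are all recovered rather than assumed.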

  11. Device and methods for "gold standard" registration of clinical 3D and 2D cerebral angiograms

    NASA Astrophysics Data System (ADS)

    Madan, Hennadii; Likar, Boštjan; Pernuš, Franjo; Špiclin, Žiga

    2015-03-01

    Translation of novel and existing 3D-2D image registration methods into clinical image-guidance systems is limited by the lack of objective validation on clinical image datasets. The main reason is that, besides the calibration of the 2D imaging system, a reference or "gold standard" registration is very difficult to obtain on clinical image datasets. In the context of cerebral endovascular image-guided interventions (EIGIs), we present a calibration device in the form of a headband with integrated fiducial markers and, second, propose an automated pipeline comprising 3D and 2D image processing, analysis and annotation steps, the result of which is a retrospective calibration of the 2D imaging system and an optimal, i.e., "gold standard" registration of 3D and 2D images. The device and methods were used to create the "gold standard" on 15 datasets of 3D and 2D cerebral angiograms, each acquired from a patient undergoing EIGI for either aneurysm coiling or embolization of an arteriovenous malformation. The device integrated seamlessly into the clinical workflow of EIGI, while the automated pipeline eliminated all manual input and interactive image processing, analysis, and annotation. In this way, the time to obtain the "gold standard" was reduced from 30 minutes to less than one minute, and the "gold standard" 3D-2D registration on all 15 datasets of cerebral angiograms was obtained with sub-0.1 mm accuracy.

  12. Reconstruction of 3D lung models from 2D planning data sets for Hodgkin's lymphoma patients using combined deformable image registration and navigator channels

    SciTech Connect

    Ng, Angela; Nguyen, Thao-Nguyen; Moseley, Joanne L.; Hodgson, David C.; Sharpe, Michael B.; Brock, Kristy K.

    2010-03-15

    Purpose: Late complications (cardiac toxicities, secondary lung, and breast cancer) remain a significant concern in the radiation treatment of Hodgkin's lymphoma (HL). To address this issue, predictive dose-risk models could potentially be used to estimate radiotherapy-related late toxicities. This study investigates the use of deformable image registration (DIR) and navigator channels (NCs) to reconstruct 3D lung models from 2D radiographic planning images, in order to retrospectively calculate the treatment dose exposure of HL patients who were treated with 2D planning and are now experiencing late effects. Methods: Three-dimensional planning CT images of 52 current HL patients were acquired. Twelve image sets were used to construct a male and a female population lung model. 23 "Reference" images were used to generate lung deformation adaptation templates, constructed by deforming the population model into each patient-specific lung geometry using a biomechanical-based DIR algorithm, MORFEUS. 17 "Test" patients were used to test the accuracy of the reconstruction technique by adapting existing templates using 2D digitally reconstructed radiographs. The adaptation process included three steps. First, a Reference patient was matched to a Test patient by thorax measurements. Second, four NCs (small regions of interest) were placed on the lung boundary to calculate 1D differences in lung edges. Third, the Reference lung model was adapted to the Test patient's lung using the 1D edge differences. The Reference-adapted Test model was then compared to the 3D lung contours of the actual Test patient by computing their percentage volume overlap (POL) and Dice coefficient. Results: The average percentage overlapping volume and Dice coefficient, expressed as a percentage, between the adapted and actual Test models were found to be 89.2 ± 3.9% (Right lung = 88.8%; Left lung = 89.6%) and 89.3 ± 2.7% (Right = 88.5%; Left = 90.2%), respectively. Paired T-tests demonstrated that the

  13. Radar image registration and rectification

    NASA Technical Reports Server (NTRS)

    Naraghi, M.; Stromberg, W. D.

    1983-01-01

    Two techniques for radar image registration and rectification are presented. In the registration method, a general 2-D polynomial transform is defined to accomplish the geometric mapping from one image into the other. The degree and coefficients of the polynomial are obtained using a previously identified tiepoint data set. In the second part of the paper, a rectification procedure is developed that models the distortion present in the radar image in terms of the radar sensor's platform parameters and the topographic variations of the imaged scene. This model, the ephemeris data, and the digital topographic data are then used in rectifying the radar image. The two techniques are then applied to registering and rectifying two examples of radar imagery. Each method is discussed in terms of its benefits, shortcomings, and registration accuracy.
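    The tiepoint-driven polynomial transform described above amounts to a linear least-squares fit: each output coordinate is modeled as a polynomial in the input coordinates, with the coefficients solved from the tiepoint pairs. A minimal sketch under that assumption (the degree and function names are illustrative):

    ```python
    import numpy as np

    def _poly_terms(pts, degree):
        """Design matrix of monomials x^i * y^j with i + j <= degree."""
        x, y = pts[:, 0], pts[:, 1]
        return np.column_stack([x**i * y**j
                                for i in range(degree + 1)
                                for j in range(degree + 1 - i)])

    def fit_poly_transform(src, dst, degree=2):
        """Fit the 2-D polynomial mapping src -> dst from tiepoints
        by linear least squares (one coefficient vector per axis)."""
        A = _poly_terms(src, degree)
        cx, *_ = np.linalg.lstsq(A, dst[:, 0], rcond=None)
        cy, *_ = np.linalg.lstsq(A, dst[:, 1], rcond=None)
        return cx, cy

    def apply_poly_transform(cx, cy, pts, degree=2):
        """Map points through the fitted polynomial transform."""
        A = _poly_terms(pts, degree)
        return np.column_stack([A @ cx, A @ cy])
    ```

    A degree-2 fit needs at least six well-spread tiepoints per axis; in practice more tiepoints are used and the least-squares solution averages out their localization error.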

  14. Image Registration Workshop Proceedings

    NASA Technical Reports Server (NTRS)

    LeMoigne, Jacqueline (Editor)

    1997-01-01

    Automatic image registration has often been considered a preliminary step for higher-level processing, such as object recognition or data fusion. But with the unprecedented amounts of data which are being, and will continue to be, generated by newly developed sensors, automatic image registration has become an important research topic in its own right. This workshop presents a collection of very high quality work, grouped into four main areas: (1) theoretical aspects of image registration; (2) applications to satellite imagery; (3) applications to medical imagery; and (4) image registration for computer vision research.

  15. Self-calibration of cone-beam CT geometry using 3D-2D image registration: development and application to task-based imaging with a robotic C-arm

    NASA Astrophysics Data System (ADS)

    Ouadah, S.; Stayman, J. W.; Gang, G.; Uneri, A.; Ehtiati, T.; Siewerdsen, J. H.

    2015-03-01

    Purpose: Robotic C-arm systems are capable of general noncircular orbits whose trajectories can be driven by the particular imaging task. However, obtaining accurate calibrations for reconstruction in such geometries can be a challenging problem. This work proposes a method to perform a unique geometric calibration of an arbitrary C-arm orbit by registering 2D projections to a previously acquired 3D image to determine the transformation parameters representing the system geometry. Methods: Experiments involved a cone-beam CT (CBCT) bench system, a robotic C-arm, and three phantoms. A robust 3D-2D registration process was used to compute the nine-degree-of-freedom (9-DOF) transformation between each projection and an existing 3D image by maximizing normalized gradient information with a digitally reconstructed radiograph (DRR) of the 3D volume. The quality of the resulting "self-calibration" was evaluated in terms of the agreement with an established calibration method using a BB phantom as well as image quality in the resulting CBCT reconstruction. Results: The self-calibration yielded CBCT images without significant difference in spatial resolution from the standard ("true") calibration methods (p-value >0.05 for all three phantoms), and the differences between CBCT images reconstructed using the "self" and "true" calibration methods were on the order of 10^-3 mm^-1. Maximum error in magnification was 3.2%, and back-projection ray placement was within 0.5 mm. Conclusion: The proposed geometric "self" calibration provides a means for 3D imaging on general noncircular orbits in CBCT systems for which a geometric calibration is either not available or not reproducible. The method forms the basis of advanced "task-based" 3D imaging methods now in development for robotic C-arms.

  16. 3D-2D registration of cerebral angiograms based on vessel directions and intensity gradients

    NASA Astrophysics Data System (ADS)

    Mitrovic, Uroš; Špiclin, Žiga; Štern, Darko; Markelj, Primož; Likar, Boštjan; Miloševic, Zoran; Pernuš, Franjo

    2012-02-01

    Endovascular treatment of cerebral aneurysms and arteriovenous malformations (AVM) involves navigation of a catheter through the femoral artery and vascular system to the site of pathology. Intra-interventional navigation is done under the guidance of one or at most two two-dimensional (2D) X-ray fluoroscopic images or 2D digital subtraction angiograms (DSA). Due to the projective nature of 2D images, the interventionist needs to mentally reconstruct the position of the catheter with respect to the three-dimensional (3D) patient vasculature, which is not a trivial task. By 3D-2D registration of pre-interventional 3D images like CTA, MRA or 3D-DSA and intra-interventional 2D images, intra-interventional tools such as catheters can be visualized on the 3D model of patient vasculature, allowing easier and faster navigation. Such navigation may consequently reduce the total ionizing dose and the amount of contrast medium delivered. In the past, the development and evaluation of 3D-2D registration methods for endovascular treatments received considerable attention. The main drawback of these methods is that they have to be initialized rather close to the correct position, as they mostly have a rather small capture range. In this paper, a novel registration method with a higher capture range and success rate is proposed. The proposed method and a state-of-the-art method were tested and evaluated on synthetic and clinical 3D-2D image pairs. The results on both databases indicate that although the proposed method was slightly less accurate, it significantly outperformed the state-of-the-art 3D-2D registration method in terms of robustness, as measured by capture range and success rate.

  17. Validation for 2D/3D registration I: A new gold standard data set

    SciTech Connect

    Pawiro, S. A.; Markelj, P.; Pernus, F.; Gendrin, C.; Figl, M.; Weber, C.; Kainberger, F.; Noebauer-Huhmann, I.; Bergmeister, H.; Stock, M.; Georg, D.; Bergmann, H.; Birkfellner, W.

    2011-03-15

    Purpose: In this article, the authors propose a new gold standard data set for the validation of two-dimensional/three-dimensional (2D/3D) and 3D/3D image registration algorithms. Methods: A gold standard data set was produced using a fresh cadaver pig head with attached fiducial markers. The authors used several imaging modalities common in diagnostic imaging or radiotherapy, which include 64-slice computed tomography (CT), magnetic resonance imaging using T1, T2, and proton density sequences, and cone beam CT imaging data. Radiographic data were acquired using kilovoltage and megavoltage imaging techniques. The image information reflects both anatomy and reliable fiducial marker information and improves over existing data sets by the level of anatomical detail, image data quality, and soft-tissue content. The markers on the 3D and 2D image data were segmented using ANALYZE 10.0 (AnalyzeDirect, Inc., Kansas City, KS) and in-house software. Results: The projection distance errors and the expected target registration errors over all the image data sets were found to be less than 2.71 and 1.88 mm, respectively. Conclusions: The gold standard data set, obtained with state-of-the-art imaging technology, has the potential to improve the validation of 2D/3D and 3D/3D registration algorithms for image guided therapy.
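The target registration error used to characterize such data sets is the displacement of target points under the difference between an evaluated transform and the gold standard. A hedged sketch for rigid 4x4 homogeneous transforms (the function name is illustrative):

```python
import numpy as np

def target_registration_error(T_est, T_gold, targets):
    """RMS target registration error (TRE) over a set of 3D target points.

    T_est, T_gold: 4x4 homogeneous transforms (estimated vs. gold standard).
    targets: (N, 3) target positions in the moving image.
    """
    pts = np.c_[targets, np.ones(len(targets))]      # homogeneous coordinates
    d = (pts @ T_est.T - pts @ T_gold.T)[:, :3]      # per-target displacement
    return np.sqrt((d**2).sum(axis=1).mean())        # RMS distance
```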

  18. 3D-2D registration for surgical guidance: effect of projection view angles on registration accuracy

    NASA Astrophysics Data System (ADS)

    Uneri, A.; Otake, Y.; Wang, A. S.; Kleinszig, G.; Vogt, S.; Khanna, A. J.; Siewerdsen, J. H.

    2014-01-01

    An algorithm for intensity-based 3D-2D registration of CT and x-ray projections is evaluated, specifically using single- or dual-projection views to provide 3D localization. The registration framework employs the gradient information similarity metric and covariance matrix adaptation evolution strategy to solve for the patient pose in six degrees of freedom. Registration performance was evaluated in an anthropomorphic phantom and cadaver, using C-arm projection views acquired at angular separation, Δθ, ranging from ~0°-180° at variable C-arm magnification. Registration accuracy was assessed in terms of 2D projection distance error and 3D target registration error (TRE) and compared to that of an electromagnetic (EM) tracker. The results indicate that angular separation as small as Δθ ~10°-20° achieved TRE <2 mm with 95% confidence, comparable or superior to that of the EM tracker. The method allows direct registration of preoperative CT and planning data to intraoperative fluoroscopy, providing 3D localization free from conventional limitations associated with external fiducial markers, stereotactic frames, trackers and manual registration.
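The gradient information similarity metric is not spelled out in the abstract; one published formulation (Pluim et al.) weights the smaller of the two gradient magnitudes at each pixel by an angle term that favours parallel or antiparallel gradients. A sketch of that formulation, offered as an assumption about the metric intended here:

```python
import numpy as np

def gradient_information(a, b, eps=1e-12):
    """Gradient information similarity: angle-weighted minimum of the
    two gradient magnitudes, summed over the image."""
    gay, gax = np.gradient(a.astype(float))      # gradients along rows, cols
    gby, gbx = np.gradient(b.astype(float))
    ma, mb = np.hypot(gax, gay), np.hypot(gbx, gby)
    cos_t = (gax * gbx + gay * gby) / (ma * mb + eps)
    alpha = np.arccos(np.clip(cos_t, -1.0, 1.0))
    w = (np.cos(2.0 * alpha) + 1.0) / 2.0        # 1 for aligned, 0 for orthogonal
    return (w * np.minimum(ma, mb)).sum()
```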

  19. Locally adaptive 2D-3D registration using vascular structure model for liver catheterization.

    PubMed

    Kim, Jihye; Lee, Jeongjin; Chung, Jin Wook; Shin, Yeong-Gil

    2016-03-01

    Two-dimensional-three-dimensional (2D-3D) registration between intra-operative 2D digital subtraction angiography (DSA) and pre-operative 3D computed tomography angiography (CTA) can be used for roadmapping purposes. However, when the 3D vessels are projected, the complex vascular structure produces incorrect intersections and overlaps between vessels, which makes it difficult to obtain the correct 2D-3D registration solution. To overcome these problems, we propose a registration method that selects a suitable part of a 3D vascular structure for a given DSA image and finds the optimized solution for that partial 3D structure. The proposed algorithm can reduce registration errors because it restricts the range of the 3D vascular structure used for registration to only the 3D vessels relevant to the given DSA. To search for the appropriate 3D partial structure, we first construct a tree model of the 3D vascular structure and divide it into several subtrees in accordance with the connectivity. Then, the subtree best matching the given DSA image is selected using the results of a coarse registration between each subtree and the vessels in the DSA image. Finally, a fine registration is conducted to minimize the difference between the selected subtree and the vessels of the DSA image. In experiments on 10 clinical datasets, the average distance error of the proposed method was 2.34±1.94 mm. The proposed algorithm converges faster and produces more correct results than the conventional method in evaluations on patient datasets.

  1. Evaluation of low-dose limits in 3D-2D rigid registration for surgical guidance

    NASA Astrophysics Data System (ADS)

    Uneri, A.; Wang, A. S.; Otake, Y.; Kleinszig, G.; Vogt, S.; Khanna, A. J.; Gallia, G. L.; Gokaslan, Z. L.; Siewerdsen, J. H.

    2014-09-01

    An algorithm for intensity-based 3D-2D registration of CT and C-arm fluoroscopy is evaluated for use in surgical guidance, specifically considering the low-dose limits of the fluoroscopic x-ray projections. The registration method is based on a framework using the covariance matrix adaptation evolution strategy (CMA-ES) to identify the 3D patient pose that maximizes the gradient information similarity metric. Registration performance was evaluated in an anthropomorphic head phantom emulating intracranial neurosurgery, using target registration error (TRE) to characterize accuracy and robustness in terms of 95% confidence upper bound in comparison to that of an infrared surgical tracking system. Three clinical scenarios were considered: (1) single-view image + guidance, wherein a single x-ray projection is used for visualization and 3D-2D guidance; (2) dual-view image + guidance, wherein one projection is acquired for visualization, combined with a second (lower-dose) projection acquired at a different C-arm angle for 3D-2D guidance; and (3) dual-view guidance, wherein both projections are acquired at low dose for the purpose of 3D-2D guidance alone (not visualization). In each case, registration accuracy was evaluated as a function of the entrance surface dose associated with the projection view(s). Results indicate that images acquired at a dose as low as 4 μGy (approximately one-tenth the dose of a typical fluoroscopic frame) were sufficient to provide TRE comparable or superior to that of conventional surgical tracking, allowing 3D-2D guidance at a level of dose that is at most 10% greater than conventional fluoroscopy (scenario #2) and potentially reducing the dose to approximately 20% of the level in a conventional fluoroscopically guided procedure (scenario #3).

  2. Staring 2-D Hadamard transform spectral imager

    DOEpatents

    Gentry, Stephen M.; Wehlburg, Christine M.; Wehlburg, Joseph C.; Smith, Mark W.; Smith, Jody L.

    2006-02-07

    A staring imaging system inputs a 2D spatial image containing multi-frequency spectral information. This image is encoded in one dimension with a cyclic Hadamard S-matrix. The resulting image is detected with a 2D spatial detector, and a computer applies a Hadamard transform to recover the encoded image.
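The encode/decode cycle can be illustrated with an S-matrix derived from a Sylvester Hadamard matrix (the patent uses a cyclic S-matrix mask, but any S-matrix of order n has the closed-form inverse S^-1 = (2/(n+1))(2S^T - J), so the decoding step is the same). Function names are illustrative:

```python
import numpy as np

def s_matrix(k):
    """S-matrix of order 2**k - 1, derived from a Sylvester Hadamard matrix."""
    H = np.array([[1]])
    for _ in range(k):
        H = np.block([[H, H], [H, -H]])
    return ((1 - H[1:, 1:]) // 2).astype(int)    # map +1 -> 0, -1 -> 1

def encode(S, spectrum):
    """Hadamard-encoded measurements: each row of S opens a subset of slits."""
    return S @ spectrum

def decode(S, measurements):
    """Recover the spectrum via S^-1 = (2/(n+1)) * (2*S.T - J)."""
    n = S.shape[0]
    return (2.0 / (n + 1)) * (2 * S.T - 1) @ measurements
```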

  3. Multi-modal 2D-3D non-rigid registration

    NASA Astrophysics Data System (ADS)

    Prümmer, M.; Hornegger, J.; Pfister, M.; Dörfler, A.

    2006-03-01

    In this paper, we propose a multi-modal non-rigid 2D-3D registration technique. This method allows a non-rigid alignment of a patient's pre-operative computed tomography (CT) volume to a few intra-operatively acquired fluoroscopic X-ray images obtained with a C-arm system. This multi-modal approach is especially focused on the 3D alignment of high contrast reconstructed volumes with intra-interventional low contrast X-ray images in order to make use of up-to-date information for surgical guidance and other interventions. The key issue of non-rigid 2D-3D registration is how to define the distance measure between high contrast 3D data and low contrast 2D projections. In this work, we use algebraic reconstruction theory to handle this problem. We modify the Euler-Lagrange equation by introducing a new 3D force. This external force term is computed from the residual of the algebraic reconstruction procedures. In the multi-modal case we replace the residual between the digitally reconstructed radiographs (DRR) and observed X-ray images with a statistical distance measure. We integrate the algebraic reconstruction technique into a variational registration framework, so that the 3D displacement field is driven to minimize the reconstruction distance between the volumetric data and its 2D projections using mutual information (MI). The benefits of this 2D-3D registration approach are its scalability in the number of X-ray reference images used and a distance measure that can handle low contrast fluoroscopies as well. Experimental results are presented on both artificial phantom and 3D C-arm CT images.

  4. Automatic pose initialization for accurate 2D/3D registration applied to abdominal aortic aneurysm endovascular repair

    NASA Astrophysics Data System (ADS)

    Miao, Shun; Lucas, Joseph; Liao, Rui

    2012-02-01

    Minimally invasive abdominal aortic aneurysm (AAA) stenting can be greatly facilitated by overlaying the preoperative 3-D model of the abdominal aorta onto the intra-operative 2-D X-ray images. Accurate 2-D/3-D registration in 3-D space makes the 2-D/3-D overlay robust to changes in C-arm angulation. So far, 2-D/3-D registration methods based on simulated X-ray projection images using multiple image planes have been shown to provide satisfactory 3-D registration accuracy. However, one drawback of intensity-based 2-D/3-D registration methods is that the similarity measure is usually highly non-convex, and hence the optimizer can easily be trapped in local minima. User interaction is therefore often needed to initialize the position of the 3-D model in order to obtain a successful 2-D/3-D registration. In this paper, a novel 3-D pose initialization technique is proposed as an extension of our previously proposed bi-plane 2-D/3-D registration method for AAA intervention [4]. The proposed method detects vessel bifurcation points and the spine centerline in both 2-D and 3-D images, and utilizes this landmark information to bring the 3-D volume into a 15 mm capture range. The proposed landmark detection method was validated on a real dataset, and is shown to provide a good initialization for the 2-D/3-D registration in [4], thus making the workflow fully automatic.

  5. Deformable 3D-2D registration for CT and its application to low dose tomographic fluoroscopy

    NASA Astrophysics Data System (ADS)

    Flach, Barbara; Brehm, Marcus; Sawall, Stefan; Kachelrieß, Marc

    2014-12-01

    Many applications in medical imaging include image registration for matching of images from the same or different modalities. In the case of full data sampling, the respective reconstructed images are usually of such a good image quality that standard deformable volume-to-volume (3D-3D) registration approaches can be applied. But research in temporal-correlated image reconstruction and dose reduction increases the number of cases where rawdata are available from only a few projection angles. Here, deteriorated image quality leads to unacceptable deformable volume-to-volume registration results. Therefore, a registration approach is required that is robust against a decreasing number of projections defining the target position. We propose a deformable volume-to-rawdata (3D-2D) registration method that aims at finding a displacement vector field maximizing the alignment of a CT volume and the acquired rawdata based on the sum of squared differences in the rawdata domain. The registration is constrained by a regularization term in accordance with a fluid-based diffusion. Both cost function components, the rawdata fidelity and the regularization term, are optimized in an alternating manner. The matching criterion is optimized by a conjugate gradient descent for nonlinear functions, while the regularization is realized by convolution of the vector fields with Gaussian kernels. We validate the proposed method and compare it to the demons algorithm, a well-known 3D-3D registration method. The comparison is done for a range of 4-60 target projections using datasets from low dose tomographic fluoroscopy as an application example. The results show a high correlation to the ground truth target position without introducing artifacts even in the case of very few projections. In particular the matching in the rawdata domain is improved compared to the 3D-3D registration for the investigated range.
The proposed volume-to-rawdata registration increases the robustness regarding sparse

  7. A comment on the rank correlation merit function for 2D/3D registration

    NASA Astrophysics Data System (ADS)

    Figl, Michael; Bloch, Christoph; Birkfellner, Wolfgang

    2010-02-01

    Many procedures in computer-assisted interventions register pre-interventionally generated 3D data sets to the intraoperative situation using fast and simply generated 2D images, e.g. from a C-arm, a B-mode ultrasound, etc. Registration is typically done by generating a 2D image out of the 3D data set, comparing it to the original 2D image using a planar similarity measure, and subsequently optimising. As these two images can be very different, a lot of different comparison functions are in use. In a recent article, Stochastic Rank Correlation, a merit function based on Spearman's rank correlation coefficient, was presented. By comparing randomly chosen subsets of the images, the authors sought to avoid the computational expense of sorting all the points in the image. In the current paper we show that, because of the limited grey-level range in medical images, full-image rank correlation can be computed almost as fast as Pearson's correlation coefficient. A run-time estimation is illustrated with numerical results using a 2D Shepp-Logan phantom at different sizes, and a sample data set of a pig.
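The key observation, that a limited grey-level range makes full-image ranking cheap, can be made concrete: with k possible intensities, mid-ranks follow from a single histogram pass instead of an O(N log N) sort. A sketch assuming non-negative integer images (function names are illustrative):

```python
import numpy as np

def fast_ranks(img, levels=256):
    """Mid-ranks of integer-valued pixels in O(N) via a counting pass."""
    flat = img.ravel()
    hist = np.bincount(flat, minlength=levels)
    below = np.concatenate(([0], np.cumsum(hist)[:-1]))  # pixels strictly darker
    midrank = below + (hist + 1) / 2.0                   # average rank over ties
    return midrank[flat]

def rank_correlation(img_a, img_b):
    """Spearman's rank correlation = Pearson correlation of the ranks."""
    ra, rb = fast_ranks(img_a), fast_ranks(img_b)
    ra -= ra.mean()
    rb -= rb.mean()
    return (ra * rb).sum() / np.sqrt((ra**2).sum() * (rb**2).sum())
```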

  8. Semiautomated Multimodal Breast Image Registration

    PubMed Central

    Curtis, Charlotte; Frayne, Richard; Fear, Elise

    2012-01-01

    Consideration of information from multiple modalities has been shown to increase diagnostic power in breast imaging. As a result, new techniques such as microwave imaging continue to be developed. Interpreting these novel image modalities is challenging and requires comparison to established techniques such as gold-standard X-ray mammography. However, due to the highly deformable nature of breast tissues, comparing 3D and 2D modalities is difficult. To enable this comparison, a registration technique was developed to map features from 2D mammograms to locations in the 3D image space. This technique was developed and tested using magnetic resonance (MR) images as a reference 3D modality, as MR breast imaging is an established technique in clinical practice. The algorithm was validated using a numerical phantom, then successfully tested on twenty-four image pairs. Dice's coefficient was used to measure the external goodness of fit, resulting in an excellent overall average of 0.94. Internal agreement was evaluated by examining internal features in consultation with a radiologist, and subjective assessment concludes that reasonable alignment was achieved. PMID:22481910

  9. Image registration with uncertainty analysis

    DOEpatents

    Simonson, Katherine M.

    2011-03-22

    In an image registration method, edges are detected in a first image and a second image. A percentage of edge pixels in a subset of the second image that are also edges in the first image shifted by a translation is calculated. A best registration point is calculated based on a maximum percentage of edges matched. In a predefined search region, all registration points other than the best registration point are identified that are not significantly worse than the best registration point according to a predetermined statistical criterion.
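The best-registration-point search described in this patent can be sketched as an exhaustive scan over translations, scoring each by the fraction of matched edge pixels (np.roll stands in for proper boundary handling, and the statistical criterion for identifying near-optimal points is omitted):

```python
import numpy as np

def best_edge_match(edges_ref, edges_test, max_shift=2):
    """Find the (dy, dx) translation maximizing edge agreement.

    edges_ref, edges_test: boolean edge maps of equal shape. Returns the
    shift with the highest fraction of test edge pixels landing on
    reference edges, plus that fraction.
    """
    best, best_shift = -1.0, (0, 0)
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(edges_ref, (dy, dx), axis=(0, 1))
            match = np.logical_and(shifted, edges_test).sum() / edges_test.sum()
            if match > best:
                best, best_shift = match, (dy, dx)
    return best_shift, best
```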

  10. Topology-Preserving Rigid Transformation of 2D Digital Images.

    PubMed

    Ngo, Phuc; Passat, Nicolas; Kenmochi, Yukiko; Talbot, Hugues

    2014-02-01

    We provide conditions under which 2D digital images preserve their topological properties under rigid transformations. We consider the two most common digital topology models, namely dual adjacency and well-composedness. This paper leads to the proposal of optimal preprocessing strategies that ensure the topological invariance of images under arbitrary rigid transformations. These results and methods are proved to be valid for various kinds of images (binary, gray-level, label), thus providing generic and efficient tools, which can be used in particular in the context of image registration and warping.

  12. A multicore based parallel image registration method.

    PubMed

    Yang, Lin; Gong, Leiguang; Zhang, Hong; Nosher, John L; Foran, David J

    2009-01-01

    Image registration is a crucial step for many image-assisted clinical applications such as surgery planning and treatment evaluation. In this paper we propose a landmark-based nonlinear image registration algorithm for matching 2D image pairs. The algorithm was shown to be effective and robust under conditions of large deformations. In landmark-based registration, the most important step is establishing the correspondence among the selected landmark points. This usually requires an extensive search which is often computationally expensive. We introduce a nonregular data partition algorithm using K-means clustering to group the landmarks based on the number of available processing cores. This step optimizes memory usage and data transfer. We have tested our method on the IBM Cell Broadband Engine (Cell/B.E.) platform. PMID:19964921
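The landmark partitioning step can be sketched with plain Lloyd's k-means, grouping points into one spatially coherent chunk per core; this is an illustrative reconstruction under stated assumptions, not the paper's implementation:

```python
import numpy as np

def kmeans_partition(landmarks, n_cores, iters=20):
    """Group (N, 2) landmark points into n_cores spatial clusters so each
    processing core searches correspondences within its own neighbourhood."""
    idx = np.linspace(0, len(landmarks) - 1, n_cores).astype(int)
    centers = landmarks[idx].astype(float)       # deterministic initialization
    for _ in range(iters):
        d2 = ((landmarks[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d2.argmin(axis=1)               # assign to nearest center
        for k in range(n_cores):
            if np.any(labels == k):
                centers[k] = landmarks[labels == k].mean(axis=0)
    return labels
```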

  13. Validation of histology image registration

    NASA Astrophysics Data System (ADS)

    Shojaii, Rushin; Karavardanyan, Tigran; Yaffe, Martin; Martel, Anne L.

    2011-03-01

    The aim of this paper is to validate an image registration pipeline used for histology image alignment. In this work a set of histology images are registered to their corresponding optical blockface images to form a histology volume. Then multi-modality fiducial markers are used to validate the alignment of the histology images. The fiducial markers are catheters perfused with a mixture of cuttlefish ink and flour. Based on our previous investigations, this fiducial marker is visible in medical images and optical blockface images, and it can also be localized in histology images. The properties of this fiducial marker make it suitable for validation of the registration techniques used for histology image alignment. This paper reports on the accuracy of a histology image registration approach by calculating the target registration error using these fiducial markers.

  14. 2D microwave imaging reflectometer electronics

    SciTech Connect

    Spear, A. G.; Domier, C. W. Hu, X.; Muscatello, C. M.; Ren, X.; Luhmann, N. C.; Tobias, B. J.

    2014-11-15

    A 2D microwave imaging reflectometer system has been developed to visualize electron density fluctuations on the DIII-D tokamak. With the plasma simultaneously illuminated at four probe frequencies, large-aperture optics image reflections from four density-dependent cutoff surfaces over an extended region of the DIII-D plasma. Localized density fluctuations in the vicinity of the plasma cutoff surfaces modulate the plasma reflections, yielding a 2D image of electron density fluctuations. Details are presented of the receiver down-conversion electronics that generate the in-phase (I) and quadrature (Q) reflectometer signals from which 2D density fluctuation data are obtained. Also presented are details on the control system and backplane used to manage the electronics, as well as an introduction to the computer-based control program.

  15. Fast DRR generation for 2D to 3D registration on GPUs

    SciTech Connect

    Tornai, Gabor Janos; Cserey, Gyoergy

    2012-08-15

    Purpose: The generation of digitally reconstructed radiographs (DRRs) is the most time-consuming step on the CPU in intensity-based two-dimensional x-ray to three-dimensional (CT or 3D rotational x-ray) medical image registration, which has application in several image guided interventions. This work presents optimized DRR rendering on graphical processor units (GPUs) and compares the performance achievable on four commercially available devices. Methods: A ray-cast based DRR rendering was implemented for a 512 × 512 × 72 CT volume. The block size parameter was optimized for four different GPUs for a region of interest (ROI) of 400 × 225 pixels with different sampling ratios (1.1%-9.1% and 100%). Performance was statistically evaluated and compared for the four GPUs. The method and the block size dependence were validated on the latest GPU for several parameter settings with a public gold standard dataset (512 × 512 × 825 CT) for registration purposes. Results: Depending on the GPU, the full ROI is rendered in 2.7-5.2 ms. If a sampling ratio of 1.1%-9.1% is applied, execution time is in the range of 0.3-7.3 ms. On all GPUs, the mean execution time increased linearly with the number of pixels if sampling was used. Conclusions: The presented results outperform other results from the literature. This indicates that automatic 2D to 3D registration, which typically requires a couple of hundred DRR renderings to converge, can be performed quasi on-line, in less than a second or, depending on the application and hardware, in a few seconds. Accordingly, a whole new field of applications is opened for image guided interventions, where the registration is continuously performed to match the real-time x-ray.
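A DRR is a set of line integrals through the CT volume; the sketch below uses a simplified parallel-beam geometry with nearest-neighbour sampling for brevity (the GPU renderer described above casts rays through the actual cone-beam projection geometry with interpolation; the function name is illustrative):

```python
import numpy as np

def drr_parallel(volume, angle_deg, spacing=1.0):
    """Toy DRR: integrate attenuation along rays for one gantry angle.

    Rotates the in-plane sampling grid, resamples each axial slice with a
    nearest-neighbour lookup, then sums along x to get the line integrals.
    """
    nz, ny, nx = volume.shape
    c = (np.array([ny, nx]) - 1) / 2.0                   # rotation center
    t = np.deg2rad(angle_deg)
    rot = np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])
    yy, xx = np.meshgrid(np.arange(ny), np.arange(nx), indexing="ij")
    pts = rot @ (np.stack([yy.ravel(), xx.ravel()]) - c[:, None]) + c[:, None]
    iy = np.clip(np.rint(pts[0]).astype(int), 0, ny - 1)
    ix = np.clip(np.rint(pts[1]).astype(int), 0, nx - 1)
    return np.array([sl[iy, ix].reshape(ny, nx).sum(axis=1) * spacing
                     for sl in volume])
```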

  16. Personalized x-ray reconstruction of the proximal femur via a non-rigid 2D-3D registration

    NASA Astrophysics Data System (ADS)

    Yu, Weimin; Zysset, Philippe; Zheng, Guoyan

    2015-03-01

    In this paper we present a new approach for a personalized X-ray reconstruction of the proximal femur via a non-rigid registration of a 3D volumetric template to 2D calibrated C-arm images. The 2D-3D registration is done with a hierarchical two-stage strategy: a global scaled rigid registration stage followed by a regularized deformable b-spline registration stage. In both stages, a set of control points with uniform spacing are placed over the domain of the 3D volumetric template and the registrations are driven by computing updated positions of these control points, which then allows accurate registration of the 3D volumetric template to the reference space of the C-arm images. Comprehensive experiments on simulated images, on images of cadaveric femurs and on clinical datasets are designed and conducted to evaluate the performance of the proposed approach. Quantitative and qualitative evaluation results are given, which demonstrate the efficacy of the present approach.

  17. 2D-3D rigid registration to compensate for prostate motion during 3D TRUS-guided biopsy

    NASA Astrophysics Data System (ADS)

    De Silva, Tharindu; Fenster, Aaron; Bax, Jeffrey; Gardi, Lori; Romagnoli, Cesare; Samarabandu, Jagath; Ward, Aaron D.

    2012-02-01

    Prostate biopsy is the clinical standard for prostate cancer diagnosis. To improve the accuracy of targeting suspicious locations, systems have been developed that can plan and record biopsy locations in a 3D TRUS image acquired at the beginning of the procedure. Some systems are designed for maximum compatibility with existing ultrasound equipment and are thus designed around the use of a conventional 2D TRUS probe, using controlled axial rotation of this probe to acquire a 3D TRUS reference image at the start of the biopsy procedure. Prostate motion during the biopsy procedure causes misalignments between the prostate in the live 2D TRUS images and the pre-acquired 3D TRUS image. We present an image-based rigid registration technique that aligns live 2D TRUS images, acquired immediately prior to biopsy needle insertion, with the pre-acquired 3D TRUS image to compensate for this motion. Our method was validated using 33 manually identified intrinsic fiducials in eight subjects and the target registration error was found to be 1.89 mm. We analysed the suitability of two image similarity metrics (normalized cross correlation and mutual information) for this task by plotting these metrics as a function of varying parameters in the six degree-of-freedom transformation space, with the ground truth plane obtained from registration as the starting point for the parameter exploration. We observed a generally convex behaviour of the similarity metrics. This encourages their use for this registration problem, and could assist in the design of a tool for the detection of misalignment, which could trigger the execution of a non-real-time registration, when needed during the procedure.
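
    Of the two similarity metrics analysed above, normalized cross-correlation is the simpler; a minimal numpy sketch (hypothetical names, not the authors' code) that also sweeps a translation parameter to show the kind of metric profile the authors examined:

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation of two same-size images;
    1.0 means a perfect (linear) intensity match."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

rng = np.random.default_rng(0)
img = rng.random((64, 64))
# metric profile along one translation axis; peak expected at zero shift
profile = [ncc(img[:, 8:-8], np.roll(img, s, axis=1)[:, 8:-8])
           for s in range(-4, 5)]
```

    A convex, single-peaked profile like this is what makes the metric usable inside a local optimizer, as the paper observes.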

  18. Efficient feature-based 2D/3D registration of transesophageal echocardiography to x-ray fluoroscopy for cardiac interventions

    NASA Astrophysics Data System (ADS)

    Hatt, Charles R.; Speidel, Michael A.; Raval, Amish N.

    2014-03-01

    We present a novel 2D/3D registration algorithm for fusion between transesophageal echocardiography (TEE) and X-ray fluoroscopy (XRF). The TEE probe is modeled as a subset of 3D gradient and intensity point features, which facilitates efficient 3D-to-2D perspective projection. A novel cost function, based on a combination of intensity and edge features, evaluates the registration cost value without the need for time-consuming generation of digitally reconstructed radiographs (DRRs). Validation experiments were performed with simulations and phantom data. For simulations, in silico XRF images of a TEE probe were generated in a number of different pose configurations using a previously acquired CT image. Random misregistrations were applied and our method was used to recover the TEE probe pose and compare the result to the ground truth. Phantom experiments were performed by attaching fiducial markers externally to a TEE probe, imaging the probe with an interventional cardiac angiographic x-ray system, and comparing the pose estimated from the external markers to that estimated from the TEE probe using our algorithm. Simulations found a 3D target registration error of 1.08 (1.92) mm for biplane (monoplane) geometries, while the phantom experiment found a 2D target registration error of 0.69 mm. For phantom experiments, we demonstrated a monoplane tracking frame rate of 1.38 fps. The proposed feature-based registration method is computationally efficient, resulting in near real-time, accurate image-based registration between TEE and XRF.
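
    The "3D-to-2D perspective projection" of probe point features can be sketched with a standard pinhole camera model (an assumption; the paper's exact parameterization is not given here). K, R and t below are hypothetical intrinsics and pose:

```python
import numpy as np

def project_points(pts3d, K, R, t):
    """Pinhole projection of 3D point features into the 2D image plane:
    X_cam = R @ X + t, then perspective divide through intrinsics K."""
    cam = pts3d @ R.T + t           # world -> camera coordinates
    uv = cam @ K.T                  # apply intrinsics
    return uv[:, :2] / uv[:, 2:3]   # perspective divide

K = np.array([[800.0, 0.0, 320.0],   # focal length and principal point
              [0.0, 800.0, 240.0],
              [0.0,   0.0,   1.0]])
pts = np.array([[0.0, 0.0, 2.0],     # on the optical axis
                [0.1, 0.0, 2.0]])    # 0.1 units off-axis
uv = project_points(pts, K, np.eye(3), np.zeros(3))
```

    Registration then searches over (R, t) so the projected features agree with the intensity and edge content of the XRF image.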

  19. Known-component 3D-2D registration for quality assurance of spine surgery pedicle screw placement

    NASA Astrophysics Data System (ADS)

    Uneri, A.; De Silva, T.; Stayman, J. W.; Kleinszig, G.; Vogt, S.; Khanna, A. J.; Gokaslan, Z. L.; Wolinsky, J.-P.; Siewerdsen, J. H.

    2015-10-01

    A 3D-2D image registration method is presented that exploits knowledge of interventional devices (e.g. K-wires or spine screws—referred to as ‘known components’) to extend the functionality of intraoperative radiography/fluoroscopy by providing quantitative measurement and quality assurance (QA) of the surgical product. The known-component registration (KC-Reg) algorithm uses robust 3D-2D registration combined with 3D component models of surgical devices known to be present in intraoperative 2D radiographs. Component models were investigated that vary in fidelity from simple parametric models (e.g. approximation of a screw as a simple cylinder, referred to as ‘parametrically-known’ component [pKC] registration) to precise models based on device-specific CAD drawings (referred to as ‘exactly-known’ component [eKC] registration). 3D-2D registration from three intraoperative radiographs was solved using the covariance matrix adaptation evolution strategy (CMA-ES) to maximize image-gradient similarity, relating device placement relative to 3D preoperative CT of the patient. Spine phantom and cadaver studies were conducted to evaluate registration accuracy and demonstrate QA of the surgical product by verification of the type of devices delivered and conformance within the ‘acceptance window’ of the spinal pedicle. Pedicle screws were successfully registered to radiographs acquired from a mobile C-arm, providing TRE of 1-4 mm and <5° using simple parametric (pKC) models, further improved to <1 mm and <1° using eKC registration. Using advanced pKC models, screws that did not match the device models specified in the surgical plan were detected with an accuracy of >99%. Visualization of registered devices relative to surgical planning and the pedicle acceptance window provided potentially valuable QA of the surgical product and reliable detection of pedicle screw breach. 3D-2D registration combined with 3D models of known surgical
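
    The image-gradient similarity maximized by CMA-ES is not spelled out above; one common choice in 3D-2D registration is gradient correlation, sketched below as an assumption (the paper's exact measure may differ):

```python
import numpy as np

def gradient_correlation(a, b):
    """Gradient correlation: mean of the normalized cross-correlations of
    the x- and y-gradient images. Edge-driven measures of this kind are
    popular in 3D-2D registration because they are less sensitive to
    low-frequency content mismatch than raw intensities."""
    def _ncc(u, v):
        u = u - u.mean()
        v = v - v.mean()
        d = np.sqrt((u * u).sum() * (v * v).sum())
        return (u * v).sum() / d if d > 0 else 0.0
    gya, gxa = np.gradient(a)
    gyb, gxb = np.gradient(b)
    return 0.5 * (_ncc(gxa, gxb) + _ncc(gya, gyb))

rng = np.random.default_rng(0)
img = rng.random((32, 32))
gc_same = gradient_correlation(img, img)                     # aligned
gc_shift = gradient_correlation(img, np.roll(img, 7, axis=1))  # misaligned
```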

  20. Image Segmentation, Registration, Compression, and Matching

    NASA Technical Reports Server (NTRS)

    Yadegar, Jacob; Wei, Hai; Yadegar, Joseph; Ray, Nilanjan; Zabuawala, Sakina

    2011-01-01

    A novel computational framework was developed for 2D affine-invariant matching exploiting a parameter space. Named the affine invariant parameter space (AIPS), the technique can be applied to many image-processing and computer-vision problems, including image registration, template matching, and object tracking from image sequences. The AIPS is formed by the parameters in an affine combination of a set of feature points in the image plane. In cases where the entire image can be assumed to have undergone a single affine transformation, the new AIPS match metric and matching framework become very effective (compared with the state-of-the-art methods at the time of this reporting). No knowledge about scaling or any other transformation parameters is needed a priori to apply the AIPS framework. An automated suite of software tools has been created to provide accurate image segmentation (for data cleaning) and high-quality 2D image and 3D surface registration (for fusing multi-resolution terrain, image, and map data). These tools are capable of supporting existing GIS toolkits already in the marketplace, and are also usable in a stand-alone fashion. The toolkit applies novel algorithmic approaches for image segmentation, feature extraction, and registration of 2D imagery and 3D surface data, supporting first-pass, batched, fully automatic feature extraction (for segmentation) and registration. A hierarchical and adaptive approach is taken to achieve automatic feature extraction, segmentation, and registration. Surface registration is the process of aligning two (or more) data sets to a common coordinate system, during which the transformation between their different coordinate systems is determined. Also developed here is a novel volumetric surface modeling and compression technique that provides both quality-guaranteed mesh surface approximations and compaction of the model sizes by efficiently coding the geometry and connectivity

  1. 2D and 3D registration methods for dual-energy contrast-enhanced digital breast tomosynthesis

    NASA Astrophysics Data System (ADS)

    Lau, Kristen C.; Roth, Susan; Maidment, Andrew D. A.

    2014-03-01

    Contrast-enhanced digital breast tomosynthesis (CE-DBT) uses an iodinated contrast agent to image the three-dimensional breast vasculature. The University of Pennsylvania is conducting a CE-DBT clinical study in patients with known breast cancers. The breast is compressed continuously and imaged at four time points (1 pre-contrast; 3 post-contrast). A hybrid subtraction scheme is proposed. First, dual-energy (DE) images are obtained by a weighted logarithmic subtraction of the high-energy and low-energy image pairs. Then, post-contrast DE images are subtracted from the pre-contrast DE image. This hybrid temporal subtraction of DE images is performed to analyze iodine uptake, but suffers from motion artifacts. Image registration further helps to correct for motion, enhancing the evaluation of vascular kinetics. Registration using ANTS (Advanced Normalization Tools) is performed in an iterative manner. Mutual information optimization first corrects large-scale motions; normalized cross-correlation optimization then iteratively corrects fine-scale misalignment. Two methods have been evaluated: a 2D method using a slice-by-slice approach, and a 3D method using a volumetric approach to account for out-of-plane breast motion. Our results demonstrate that the iterative registration improves qualitatively with each iteration (five iterations total). Motion artifacts near the edge of the breast are corrected effectively, and structures within the breast (e.g. blood vessels, a surgical clip) are better visualized. Statistical and clinical evaluations of registration accuracy in the CE-DBT images are ongoing.
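
    The weighted logarithmic DE subtraction followed by temporal subtraction can be sketched as follows. All array names and the weight w are hypothetical; in practice w is calibrated so soft-tissue contrast cancels and only iodine survives:

```python
import numpy as np

def dual_energy(high, low, w):
    """Weighted logarithmic subtraction of a high/low-energy image pair;
    w is chosen to cancel soft-tissue contrast so iodine dominates."""
    return np.log(high) - w * np.log(low)

def temporal_subtract(a, b):
    """Temporal subtraction of two DE images isolates iodine uptake
    between acquisitions (sign convention is a matter of choice)."""
    return a - b

tissue = np.full((4, 4), 2.0)                 # uniform soft tissue
de_pre = dual_energy(tissue, tissue, 1.0)     # tissue cancels -> 0
he_post = tissue.copy()
he_post[1:3, 1:3] *= 1.3                      # iodine raises the HE signal
de_post = dual_energy(he_post, tissue, 1.0)
uptake = temporal_subtract(de_post, de_pre)   # nonzero only where iodine is
```

    Motion between the pre- and post-contrast frames corrupts exactly this difference image, which is why the registration step above matters.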

  2. Remapping of digital subtraction angiography on a standard fluoroscopy system using 2D-3D registration

    NASA Astrophysics Data System (ADS)

    Alhrishy, Mazen G.; Varnavas, Andreas; Guyot, Alexis; Carrell, Tom; King, Andrew; Penney, Graeme

    2015-03-01

    Fluoroscopy-guided endovascular interventions are being performed for more and more complex cases with longer screening times. However, X-ray is much better at visualizing interventional devices and dense structures than vasculature. To visualize vasculature, angiography screening is essential but requires the use of iodinated contrast medium (ICM), which is nephrotoxic; acute kidney injury is the main life-threatening complication of ICM. Digital subtraction angiography (DSA) is also often a major contributor to the overall patient radiation dose (81% has been reported). Furthermore, a DSA image is only valid for the current interventional view, and not for the new view once the C-arm is moved. In this paper, we propose the use of 2D-3D image registration between intraoperative images and the preoperative CT volume to facilitate DSA remapping using a standard fluoroscopy system. This allows repeated ICM-free DSA and has the potential to enable a reduction in ICM usage and radiation dose. Experiments were carried out using 9 clinical datasets; in total, 41 DSA images were remapped. For each dataset, the maximum and averaged remapping accuracy errors were calculated and presented. Numerical results showed an overall averaged error of 2.50 mm, with 7 patients showing averaged errors < 3 mm and 2 patients < 6 mm.

  3. Registration of interferometric SAR images

    NASA Technical Reports Server (NTRS)

    Lin, Qian; Vesecky, John F.; Zebker, Howard A.

    1992-01-01

    Interferometric synthetic aperture radar (INSAR) is a new way of performing topography mapping. Among the factors critical to mapping accuracy is the registration of the complex SAR images from repeated orbits. A new algorithm for registering interferometric SAR images is presented. A new figure of merit, the average fluctuation function of the phase difference image, is proposed to evaluate the fringe pattern quality. The process of adjusting the registration parameters according to the fringe pattern quality is optimized through a downhill simplex minimization algorithm. The results of applying the proposed algorithm to register two pairs of Seasat SAR images with a short baseline (75 m) and a long baseline (500 m) are shown. It is found that the average fluctuation function is a very stable measure of fringe pattern quality allowing very accurate registration.
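
    The paper's "average fluctuation function" of the phase-difference image is not defined above; a plausible sketch is the mean absolute wrapped phase change between neighbouring pixels, which is low when the fringes are smooth, i.e. when the complex SAR images are well registered. This is an assumed form for illustration, not the authors' exact definition:

```python
import numpy as np

def avg_fluctuation(s1, s2):
    """Fringe-quality sketch: form the interferogram phase of two complex
    SAR images, then average the absolute wrapped phase differences
    between neighbouring pixels (lower = smoother fringes)."""
    phase = np.angle(s1 * np.conj(s2))
    dr = np.angle(np.exp(1j * np.diff(phase, axis=0)))  # wrapped row diffs
    dc = np.angle(np.exp(1j * np.diff(phase, axis=1)))  # wrapped col diffs
    return float(np.mean(np.abs(dr)) + np.mean(np.abs(dc)))

rng = np.random.default_rng(0)
s1 = rng.standard_normal((64, 64)) + 1j * rng.standard_normal((64, 64))
registered = avg_fluctuation(s1, s1 * np.exp(1j * 0.7))   # constant offset
misregistered = avg_fluctuation(s1, np.roll(s1, 1, axis=0))
```

    A downhill simplex search over the registration parameters, as in the paper, would then minimize a measure of this kind.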

  4. 2D-3D registration for brain radiation therapy using a 3D CBCT and a single limited field-of-view 2D kV radiograph

    NASA Astrophysics Data System (ADS)

    Munbodh, R.; Moseley, D. J.

    2014-03-01

    We report results of an intensity-based 2D-3D rigid registration framework for patient positioning and monitoring during brain radiotherapy. We evaluated two intensity-based similarity measures, the Pearson Correlation Coefficient (PCC) and Maximum Likelihood with Gaussian noise (MLG) derived from the statistics of transmission images. A useful image frequency band was identified from the bone-to-no-bone ratio. Validation was performed on gold-standard data consisting of 3D kV CBCT scans and 2D kV radiographs of an anthropomorphic head phantom acquired at 23 different poses with parameter variations along six degrees of freedom. At each pose, a single limited field-of-view kV radiograph was registered to the reference CBCT. The ground truth was determined from markers affixed to the phantom and visible in the CBCT images. The mean (and standard deviation) of the absolute errors in recovering each of the six transformation parameters along the x, y and z axes for PCC were φx: 0.08(0.04)°, φy: 0.10(0.09)°, φz: 0.03(0.03)°, tx: 0.13(0.11) mm, ty: 0.08(0.06) mm and tz: 0.44(0.23) mm. For MLG, the corresponding results were φx: 0.10(0.04)°, φy: 0.10(0.09)°, φz: 0.05(0.07)°, tx: 0.11(0.13) mm, ty: 0.05(0.05) mm and tz: 0.44(0.31) mm. It is feasible to accurately estimate all six transformation parameters from a 3D CBCT of the head and a single 2D kV radiograph within an intensity-based registration framework that incorporates the physics of transmission images.

  5. 3D-2D registration in mobile radiographs: algorithm development and preliminary clinical evaluation

    NASA Astrophysics Data System (ADS)

    Otake, Yoshito; Wang, Adam S.; Uneri, Ali; Kleinszig, Gerhard; Vogt, Sebastian; Aygun, Nafi; Lo, Sheng-fu L.; Wolinsky, Jean-Paul; Gokaslan, Ziya L.; Siewerdsen, Jeffrey H.

    2015-03-01

    An image-based 3D-2D registration method is presented using radiographs acquired in the uncalibrated, unconstrained geometry of mobile radiography. The approach extends a previous method for six degree-of-freedom (DOF) registration in C-arm fluoroscopy (namely ‘LevelCheck’) to solve the 9-DOF estimate of geometry in which the position of the source and detector are unconstrained. The method was implemented using a gradient correlation similarity metric and stochastic derivative-free optimization on a GPU. Development and evaluation were conducted in three steps. First, simulation studies were performed that involved a CT scan of an anthropomorphic body phantom and 1000 randomly generated digitally reconstructed radiographs in posterior-anterior and lateral views. A median projection distance error (PDE) of 0.007 mm was achieved with 9-DOF registration compared to 0.767 mm for 6-DOF. Second, cadaver studies were conducted using mobile radiographs acquired in three anatomical regions (thorax, abdomen and pelvis) and three levels of source-detector distance (~800, ~1000 and ~1200 mm). The 9-DOF method achieved a median PDE of 0.49 mm (compared to 2.53 mm for the 6-DOF method) and demonstrated robustness in the unconstrained imaging geometry. Finally, a retrospective clinical study was conducted with intraoperative radiographs of the spine exhibiting real anatomical deformation and image content mismatch (e.g. interventional devices in the radiograph that were not in the CT), demonstrating a PDE = 1.1 mm for the 9-DOF approach. Average computation time was 48.5 s, involving 687 701 function evaluations on average, compared to 18.2 s for the 6-DOF method. Despite the greater computational load, the 9-DOF method may offer a valuable tool for target localization (e.g. decision support in level counting) as well as safety and quality assurance checks at the conclusion of a procedure (e.g. overlay of planning data on the radiograph for verification of

  6. Automatic C-arm pose estimation via 2D/3D hybrid registration of a radiographic fiducial

    NASA Astrophysics Data System (ADS)

    Moult, E.; Burdette, E. C.; Song, D. Y.; Abolmaesumi, P.; Fichtinger, G.; Fallavollita, P.

    2011-03-01

    Motivation: In prostate brachytherapy, real-time dosimetry would be ideal to allow for rapid evaluation of the implant quality intra-operatively. However, such a mechanism requires an imaging system that is both real-time and which provides, via multiple C-arm fluoroscopy images, clear information describing the three-dimensional position of the seeds deposited within the prostate. Thus, accurate tracking of the C-arm poses proves to be of critical importance to the process. Methodology: We compute the pose of the C-arm relative to a stationary radiographic fiducial of known geometry by employing a hybrid registration framework. Firstly, by means of an ellipse segmentation algorithm and a 2D/3D feature based registration, we exploit known FTRAC geometry to recover an initial estimate of the C-arm pose. Using this estimate, we then initialize the intensity-based registration which serves to recover a refined and accurate estimation of the C-arm pose. Results: Ground-truth pose was established for each C-arm image through a published and clinically tested segmentation-based method. Using 169 clinical C-arm images and a +/-10° and +/-10 mm random perturbation of the ground-truth pose, the average rotation and translation errors were 0.68° (std = 0.06°) and 0.64 mm (std = 0.24 mm). Conclusion: Fully automated C-arm pose estimation using a 2D/3D hybrid registration scheme was found to be clinically robust based on human patient data.

  7. Robust 2D/3D registration for fast-flexion motion of the knee joint using hybrid optimization.

    PubMed

    Ohnishi, Takashi; Suzuki, Masahiko; Kobayashi, Tatsuya; Naomoto, Shinji; Sukegawa, Tomoyuki; Nawata, Atsushi; Haneishi, Hideaki

    2013-01-01

    Previously, we proposed a 2D/3D registration method that uses Powell's algorithm to obtain 3D motion of a knee joint by 3D computed-tomography and bi-plane fluoroscopic images. The 2D/3D registration is performed consecutively and automatically for each frame of the fluoroscopic images. This method starts from the optimum parameters of the previous frame for each frame except for the first one, and it searches for the next set of optimum parameters using Powell's algorithm. However, if the flexion motion of the knee joint is fast, it is likely that Powell's algorithm will provide a mismatch because the initial parameters are far from the correct ones. In this study, we applied a hybrid optimization algorithm (HPS) combining Powell's algorithm with the Nelder-Mead simplex (NM-simplex) algorithm to overcome this problem. The performance of the HPS was compared with the separate performances of Powell's algorithm and the NM-simplex algorithm, the Quasi-Newton algorithm and hybrid optimization algorithm with the Quasi-Newton and NM-simplex algorithms with five patient data sets in terms of the root-mean-square error (RMSE), target registration error (TRE), success rate, and processing time. The RMSE, TRE, and the success rate of the HPS were better than those of the other optimization algorithms, and the processing time was similar to that of Powell's algorithm alone.
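
    The HPS idea of chaining Powell's algorithm with a Nelder-Mead simplex restart can be sketched with SciPy's optimizers standing in for the authors' implementations. The cost function below is a toy stand-in for the 2D/3D image dissimilarity over pose parameters:

```python
import numpy as np
from scipy.optimize import minimize

def hybrid_optimize(cost, x0):
    """Hybrid scheme in the spirit of HPS: run Powell's direction-set
    search first, then restart a Nelder-Mead simplex from its answer to
    polish the solution (and escape a poor Powell result)."""
    stage1 = minimize(cost, x0, method="Powell")
    stage2 = minimize(cost, stage1.x, method="Nelder-Mead")
    return stage2.x

# hypothetical "true pose" and a quadratic stand-in for the image cost
target = np.array([1.0, -2.0, 0.5])
cost = lambda p: float(np.sum((np.asarray(p) - target) ** 2))
pose = hybrid_optimize(cost, np.zeros(3))
```

    In the frame-to-frame setting described above, x0 would be the optimum of the previous fluoroscopy frame rather than zeros.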

  8. Medical image registration using sparse coding of image patches.

    PubMed

    Afzali, Maryam; Ghaffari, Aboozar; Fatemizadeh, Emad; Soltanian-Zadeh, Hamid

    2016-06-01

    Image registration is a basic task in medical image processing applications like group analysis and atlas construction. Similarity measure is a critical ingredient of image registration. Intensity distortion of medical images is not considered in most previous similarity measures. Therefore, in the presence of bias field distortions, they do not generate an acceptable registration. In this paper, we propose a sparse based similarity measure for mono-modal images that considers non-stationary intensity and spatially-varying distortions. The main idea behind this measure is that the aligned image is constructed by an analysis dictionary trained using the image patches. For this purpose, we use "Analysis K-SVD" to train the dictionary and find the sparse coefficients. We utilize image patches to construct the analysis dictionary and then we employ the proposed sparse similarity measure to find a non-rigid transformation using free form deformation (FFD). Experimental results show that the proposed approach is able to robustly register 2D and 3D images in both simulated and real cases. The proposed method outperforms other state-of-the-art similarity measures and decreases the transformation error compared to the previous methods. Even in the presence of bias field distortion, the proposed method aligns images without any preprocessing. PMID:27085311

  10. Fast 3D fluid registration of brain magnetic resonance images

    NASA Astrophysics Data System (ADS)

    Leporé, Natasha; Chou, Yi-Yu; Lopez, Oscar L.; Aizenstein, Howard J.; Becker, James T.; Toga, Arthur W.; Thompson, Paul M.

    2008-03-01

    Fluid registration is widely used in medical imaging to track anatomical changes, to correct image distortions, and to integrate multi-modality data. Fluid mappings guarantee that the template image deforms smoothly into the target, without tearing or folding, even when large deformations are required for accurate matching. Here we implemented an intensity-based fluid registration algorithm, accelerated by using a filter designed by Bro-Nielsen and Gramkow. We validated the algorithm on 2D and 3D geometric phantoms using the mean square difference between the final registered image and target as a measure of the accuracy of the registration. In tests on phantom images with different levels of overlap, varying amounts of Gaussian noise, and different intensity gradients, the fluid method outperformed a more commonly used elastic registration method, both in terms of accuracy and in avoiding topological errors during deformation. We also studied the effect of varying the viscosity coefficients in the viscous fluid equation, to optimize registration accuracy. Finally, we applied the fluid registration algorithm to a dataset of 2D binary corpus callosum images and 3D volumetric brain MRIs from 14 healthy individuals to assess its accuracy and robustness.

  11. Image registration method for medical image sequences

    DOEpatents

    Gee, Timothy F.; Goddard, James S.

    2013-03-26

    Image registration of low contrast image sequences is provided. In one aspect, a desired region of an image is automatically segmented and only the desired region is registered. Active contours and adaptive thresholding of intensity or edge information may be used to segment the desired regions. A transform function is defined to register the segmented region, and sub-pixel information may be determined using one or more interpolation methods.

  12. Evaluating Similarity Measures for Brain Image Registration

    PubMed Central

    Razlighi, Q. R.; Kehtarnavaz, N.; Yousefi, S.

    2013-01-01

    Evaluation of similarity measures for image registration is a challenging problem due to its complex interaction with the underlying optimization, regularization, image type and modality. We propose a single performance metric, named robustness, as part of a new evaluation method which quantifies the effectiveness of similarity measures for brain image registration while eliminating the effects of the other parts of the registration process. We show empirically that similarity measures with higher robustness are more effective in registering degraded images and are also more successful in performing intermodal image registration. Further, we introduce a new similarity measure, called normalized spatial mutual information, for 3D brain image registration whose robustness is shown to be much higher than the existing ones. Consequently, it tolerates greater image degradation and provides more consistent outcomes for intermodal brain image registration. PMID:24039378
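
    Among the similarity measures being evaluated is mutual information; a minimal joint-histogram sketch of standard MI is given below (the paper's normalized spatial mutual information is an extension of this and is not reproduced here):

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Mutual information from the joint intensity histogram; high when
    one image's intensities predict the other's, even across modalities."""
    h, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p = h / h.sum()                       # joint probability
    px = p.sum(axis=1, keepdims=True)     # marginals
    py = p.sum(axis=0, keepdims=True)
    nz = p > 0                            # avoid log(0)
    return float(np.sum(p[nz] * np.log(p[nz] / (px * py)[nz])))

rng = np.random.default_rng(0)
a = rng.random(20000)
mi_aligned = mutual_information(a, a)                  # identical images
mi_random = mutual_information(a, rng.permutation(a))  # unrelated pairing
```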

  13. [Progress of research in retinal image registration].

    PubMed

    Yu, Lun; Wei, Lifang; Pan, Lin

    2011-10-01

    Retinal image registration has important applications in the auxiliary diagnosis and treatment of a variety of diseases, and can be used to measure disease progression and therapeutic effect. A variety of retinal image registration techniques have been studied extensively in recent years; however, many problems remain open and numerous research directions are possible. Based on an extensive investigation of the existing literature, the present paper analyzes the features of retinal images and the current challenges of retinal image registration, reviews the transformation models of retinal image registration technology and the main algorithms in current retinal image registration, and analyzes the advantages and disadvantages of the various types of algorithms. Some research challenges and future development trends are also discussed.

  14. [A coarse-to-fine registration method for satellite infrared image and visual image].

    PubMed

    Hu, Yong-Li; Wang, Liang; Liu, Rong; Zhang, Li; Duan, Fu-Qing

    2013-11-01

    To resolve the registration of multi-mode satellite images with different signal properties and features, a two-phase coarse-to-fine registration method is presented and applied to the registration of satellite infrared and visual images. In the coarse registration phase, edges are first detected in the infrared and visual images. The Fourier-Mellin transform is then applied to the edge images, and the affine transformation parameters of the registration are computed rapidly from the relation between the registering images in the frequency domain. In the fine registration phase, feature points of the infrared and visual images are first detected with the Harris operator. Matched feature points are then determined by the cross-correlation similarity of their local neighborhoods, and the fine registration is finally realized from the spatial correspondence of the matched feature points. The proposed coarse-to-fine method combines the advantages of both approaches: the high efficiency of Fourier-Mellin-based registration and the accuracy of Harris-operator-based registration. To evaluate its performance, the method was applied to infrared and visual images captured by the FY-2D meteorological satellite. The experimental results show that the presented registration method is robust and has acceptable registration accuracy.
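
    The translation-only core of the Fourier-Mellin coarse phase is phase correlation: the peak of the inverse FFT of the normalized cross-power spectrum gives the shift. Rotation and scale are handled by an additional log-polar resampling not shown in this minimal sketch:

```python
import numpy as np

def phase_correlation(img1, img2):
    """Recover the integer translation between two images from the
    normalized cross-power spectrum (translation-only core of the
    Fourier-Mellin approach)."""
    f1, f2 = np.fft.fft2(img1), np.fft.fft2(img2)
    cps = np.conj(f1) * f2
    cps /= np.abs(cps) + 1e-12            # keep phase only
    corr = np.real(np.fft.ifft2(cps))
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = corr.shape                     # map wrap-around to signed shifts
    return (dy - h if dy > h // 2 else dy,
            dx - w if dx > w // 2 else dx)

rng = np.random.default_rng(1)
scene = rng.random((64, 64))
shifted = np.roll(scene, (5, -3), axis=(0, 1))
shift = phase_correlation(scene, shifted)
```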

  15. 2D-3D Registration of CT Vertebra Volume to Fluoroscopy Projection: A Calibration Model Assessment

    NASA Astrophysics Data System (ADS)

    Bifulco, P.; Cesarelli, M.; Allen, R.; Romano, M.; Fratini, A.; Pasquariello, G.

    2009-12-01

    This study extends previous research concerning intervertebral motion registration by means of 2D dynamic fluoroscopy to obtain a more comprehensive 3D description of vertebral kinematics. The problem of estimating the 3D rigid pose of a CT volume of a vertebra from its 2D X-ray fluoroscopy projection is addressed. 2D-3D registration is obtained by maximising a measure of similarity between Digitally Reconstructed Radiographs (obtained from the CT volume) and the real fluoroscopic projection. X-ray energy correction was performed. To assess the method, a calibration model was realised: a dry sheep vertebra was rigidly fixed to a frame of reference including metallic markers. Accurate measurement of 3D orientation was obtained via single-camera calibration of the markers and held as the true 3D vertebra position; the vertebra's 3D pose was then estimated and the results compared. Error analysis revealed accuracy of the order of 0.1° for the rotation angles, of about 1 mm for displacements parallel to the fluoroscopic plane, and of the order of 10 mm for the orthogonal displacement.

  16. Error correction in image registration using POCS

    NASA Astrophysics Data System (ADS)

    Duraisamy, Prakash; Alam, Mohammad S.; Jackson, Stephen C.

    2011-04-01

    Image registration plays a vital role in many real-time imaging applications, and registering images precisely is a challenging problem. In this paper, we focus on improving image registration error computation using the projection onto convex sets (POCS) technique, which improves sub-pixel accuracy in the images, leading to better estimates of the registration error. This can in turn be used to improve the registration itself. The results obtained with the proposed technique match the ground truth well, which validates its accuracy. Furthermore, the proposed technique shows better performance than existing methods.

  17. Measurement of complex joint trajectories using slice-to-volume 2D/3D registration and cine MR

    NASA Astrophysics Data System (ADS)

    Bloch, C.; Figl, M.; Gendrin, C.; Weber, C.; Unger, E.; Aldrian, S.; Birkfellner, W.

    2010-02-01

    A method for studying the in vivo kinematics of complex joints is presented. It is based on automatic fusion of single-slice cine MR images capturing the dynamics and a static MR volume. With the joint at rest, the 3D scan is taken. In these data the anatomical compartments are identified and segmented, resulting in a 3D volume of each individual part. In each of the cine MR images the joint parts are segmented and their pose and position are derived using a 2D/3D slice-to-volume registration to the volumes. The method is tested on the carpal joint because of its complexity and the small but complex motion of its compartments. For a first study, a human cadaver hand was scanned and the method was evaluated with artificially generated slice images. Starting from random initial positions of about 5 mm translational and 12° rotational deviation, 70 to 90% of the registrations converged successfully to a deviation better than 0.5 mm and 5°. First evaluations using real cine MR data were promising, and the feasibility of the method was demonstrated. However, we experienced difficulties with the segmentation of the cine MR images and therefore plan to examine different image acquisition parameters in future studies.

  18. Practical pseudo-3D registration for large tomographic images

    NASA Astrophysics Data System (ADS)

    Liu, Xuan; Laperre, Kjell; Sasov, Alexander

    2014-09-01

    Image registration is a powerful tool in various tomographic applications. Our main focus is on microCT applications in which samples/animals can be scanned multiple times under different conditions or at different time points. For this purpose, a registration tool capable of handling fairly large volumes has been developed, using a novel pseudo-3D method to achieve fast and interactive registration with simultaneous 3D visualization. To reduce computation complexity in 3D registration, we decompose it into several 2D registrations, which are applied to the orthogonal views (transaxial, sagittal and coronal) sequentially and iteratively. After registration in each view, the next view is retrieved with the new transformation matrix for registration. This reduces the computation complexity significantly. For a rigid transform, we only need to search for 3 parameters (2 shifts, 1 rotation) in each of the 3 orthogonal views instead of 6 (3 shifts, 3 rotations) for the full 3D volume. In addition, the number of voxels involved is also significantly reduced. For the proposed pseudo-3D method, image-based registration is employed, with Sum of Square Difference (SSD) as the similarity measure. The searching engine is Powell's conjugate direction method. In this paper, only rigid transforms are used. However, the approach can be extended to affine transforms by adding scaling and possibly shearing to the transform model. We have noticed that more information can be used in the 2D registration if Maximum Intensity Projections (MIP) or Parallel Projections (PP) are used instead of the orthogonal views. Also, other similarity measures, such as covariance or mutual information, can be easily incorporated. The initial evaluation on microCT data shows very promising results. Two application examples are shown: dental samples before and after treatment and structural changes in materials before and after compression.
Evaluation on registration accuracy between pseudo-3D method and true 3D method has
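The per-view 2D step described above can be sketched as follows. This is a minimal illustration (not the authors' implementation, and the image and parameter values are assumptions): a rigid transform with 3 parameters (2 shifts, 1 rotation), SSD as the similarity measure, and Powell's conjugate direction method via SciPy.

```python
import numpy as np
from scipy import ndimage, optimize

def rigid_transform(img, params):
    """Rotate (degrees, about the centre) then shift a 2D image."""
    ty, tx, angle = params
    out = ndimage.rotate(img, angle, reshape=False, order=1)
    return ndimage.shift(out, (ty, tx), order=1)

def ssd(params, fixed, moving):
    """Sum of Square Difference (SSD) similarity measure."""
    return np.sum((fixed - rigid_transform(moving, params)) ** 2)

rng = np.random.default_rng(0)
fixed = ndimage.gaussian_filter(rng.random((64, 64)), 3)   # smooth test view
moving = rigid_transform(fixed, (2.0, -3.0, 4.0))          # known misalignment

# Powell's method needs no gradients, matching the search engine named above.
res = optimize.minimize(ssd, x0=np.zeros(3), args=(fixed, moving),
                        method="Powell")
```

In the full pseudo-3D loop this 2D solve would be repeated over the transaxial, sagittal and coronal views, updating the 3D transformation matrix after each view.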

  19. Automated 2D-3D registration of a radiograph and a cone beam CT using line-segment enhancement

    SciTech Connect

    Munbodh, Reshma; Jaffray, David A.; Moseley, Douglas J.; Chen Zhe; Knisely, Jonathan P.S.; Cathier, Pascal; Duncan, James S.

    2006-05-15

    The objective of this study was to develop a fully automated two-dimensional (2D)-three-dimensional (3D) registration framework to quantify setup deviations in prostate radiation therapy from cone beam CT (CBCT) data and a single AP radiograph. A kilovoltage CBCT image and kilovoltage AP radiograph of an anthropomorphic phantom of the pelvis were acquired at 14 accurately known positions. The shifts in the phantom position were subsequently estimated by registering digitally reconstructed radiographs (DRRs) from the 3D CBCT scan to the AP radiographs through the correlation of enhanced linear image features mainly representing bony ridges. Linear features were enhanced by filtering the images with "sticks," short line segments that are varied in orientation to achieve the maximum projection value at every pixel in the image. The means (and standard deviations) of the absolute errors in estimating translations along the three orthogonal axes in millimeters were 0.134 (0.096) AP (out-of-plane), 0.021 (0.023) ML and 0.020 (0.020) SI. The corresponding errors for rotations in degrees were 0.011 (0.009) AP, 0.029 (0.016) ML (out-of-plane), and 0.030 (0.028) SI (out-of-plane). Preliminary results with megavoltage patient data have also been reported. The results suggest that it may be possible to enhance anatomic features that are common to DRRs from a CBCT image and a single AP radiograph of the pelvis for use in a completely automated and accurate 2D-3D registration framework for setup verification in prostate radiotherapy. This technique is theoretically applicable to other rigid bony structures such as the cranial vault or skull base and piecewise rigid structures such as the spine.
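The "sticks" filtering idea described above can be sketched as follows; this is an illustration only (the stick length and number of orientations are assumptions, not values from the paper): at every pixel, keep the maximum mean intensity over short line segments of varying orientation, which enhances ridge-like linear features.

```python
import numpy as np
from scipy import ndimage

def stick_kernels(length=5, n_angles=8):
    """Build normalised short line-segment kernels at several orientations."""
    c = length // 2
    kernels = []
    for a in np.linspace(0.0, np.pi, n_angles, endpoint=False):
        k = np.zeros((length, length))
        for t in range(-c, c + 1):
            row = int(round(c + t * np.sin(a)))
            col = int(round(c + t * np.cos(a)))
            k[row, col] = 1.0
        kernels.append(k / k.sum())
    return kernels

def sticks_filter(img, length=5, n_angles=8):
    """Maximum stick response at every pixel, enhancing linear features."""
    responses = [ndimage.convolve(img, k, mode="nearest")
                 for k in stick_kernels(length, n_angles)]
    return np.max(responses, axis=0)

# A synthetic "bony ridge": a one-pixel-wide horizontal line.
ridge = np.zeros((17, 17))
ridge[8, :] = 1.0
enhanced = sticks_filter(ridge)
```

On the ridge the best-aligned stick averages only line pixels, so the response stays at full strength, while an isotropic mean filter of the same size would dilute it.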

  20. Intensity-based femoral atlas 2D/3D registration using Levenberg-Marquardt optimisation

    NASA Astrophysics Data System (ADS)

    Klima, Ondrej; Kleparnik, Petr; Spanel, Michal; Zemcik, Pavel

    2016-03-01

    The reconstruction of a patient-specific 3D anatomy is the crucial step in computer-aided preoperative planning based on plain X-ray images. In this paper, we propose a robust and fast reconstruction method based on fitting the statistical shape and intensity model of a femoral bone onto a pair of calibrated X-ray images. We formulate the registration as a non-linear least squares problem, allowing for the involvement of Levenberg-Marquardt optimisation. The proposed method has been tested on a set of 96 virtual X-ray images. The reconstruction accuracy was evaluated using the symmetric Hausdorff distance between reconstructed and ground-truth bones. The accuracy of the intensity-based method reached 1.18 +/- 1.57 mm on average, and the registration took 8.76 seconds on average.
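The non-linear least squares formulation with Levenberg-Marquardt can be illustrated with a toy problem. The paper fits a statistical shape and intensity model to calibrated X-ray pairs; the sketch below is only an analogy under much simpler assumptions, fitting a 2D similarity transform (scale, rotation, translation) to point correspondences with SciPy's LM solver.

```python
import numpy as np
from scipy.optimize import least_squares

def model(p, X):
    """Apply scale s, rotation theta and translation (tx, ty) to points X."""
    s, theta, tx, ty = p
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    return s * X @ R.T + np.array([tx, ty])

def residuals(p, X, Y):
    """Stacked per-point residuals, as required by least_squares."""
    return (model(p, X) - Y).ravel()

rng = np.random.default_rng(1)
X = rng.random((20, 2))
p_true = np.array([1.2, 0.3, 0.5, -0.2])
Y = model(p_true, X)  # synthetic, noise-free observations

# method="lm" selects the Levenberg-Marquardt algorithm.
fit = least_squares(residuals, x0=np.array([1.0, 0.0, 0.0, 0.0]),
                    args=(X, Y), method="lm")
```

LM needs residuals rather than a scalar cost, which is why the registration is posed as a non-linear least squares problem in the first place.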

  1. Image Registration for Stability Testing of MEMS

    NASA Technical Reports Server (NTRS)

    Memarsadeghi, Nargess; LeMoigne, Jacqueline; Blake, Peter N.; Morey, Peter A.; Landsman, Wayne B.; Chambers, Victor J.; Moseley, Samuel H.

    2011-01-01

    Image registration, or alignment of two or more images covering the same scenes or objects, is of great interest in many disciplines such as remote sensing, medical imaging, astronomy, and computer vision. In this paper, we introduce a new application of image registration algorithms. We demonstrate how, through a wavelet-based image registration algorithm, engineers can evaluate the stability of Micro-Electro-Mechanical Systems (MEMS). In particular, we applied image registration algorithms to assess alignment stability of the MicroShutters Subsystem (MSS) of the Near Infrared Spectrograph (NIRSpec) instrument of the James Webb Space Telescope (JWST). This work introduces a new methodology for evaluating stability of MEMS devices to engineers as well as a new application of image registration algorithms to computer scientists.

  2. Research on 2D representation method of wireless Micro-Ball endoscopic images.

    PubMed

    Wang, Dan; Xie, Xiang; Li, Guolin; Gu, Yingke; Yin, Zheng; Wang, Zhihua

    2012-01-01

    Nowadays the interpretation of images acquired by a wireless endoscopy system is a tedious job for doctors. A viable solution is to construct a map, a 2D representation of the gastrointestinal (GI) tract, to reduce the redundancy of the images and improve their understandability. The work reported in this paper addresses the problem of the 2D representation of the GI tract based on a new wireless Micro-Ball endoscopy system with multiple image sensors. This paper first models the problem of constructing the map, and then discusses the issues of perspective distortion correction, image preprocessing and image registration that arise within it. The perspective distortion correction algorithm is based on attitude angles, while the image registration is based on the phase correlation method (PCM) and the scale invariant feature transform (SIFT) combined with particular image preprocessing methods. Based on the R channels of the images, the algorithm achieves registration success rates of 26.3% to 100% as the overlap ratio varies from 25% to 80%. The performance and effectiveness of the algorithms are verified by experiments.

  3. Diffusion tensor image registration using polynomial expansion

    NASA Astrophysics Data System (ADS)

    Wang, Yuanjun; Chen, Zengai; Nie, Shengdong; Westin, Carl-Fredrik

    2013-09-01

    In this paper, we present a deformable registration framework for diffusion tensor images (DTI) using polynomial expansion. The use of polynomial expansion in image registration has previously been shown to be beneficial due to fast convergence and high accuracy. However, earlier work was developed only for 3D scalar medical image registration. In this work, it is shown how polynomial expansion can be applied to DTI registration. A new measurement is proposed for DTI registration evaluation, which proves robust and sensitive in evaluating DTI registration results. We present algorithms for DTI registration using polynomial expansion on the fractional anisotropy image, with an explicit tensor reorientation strategy inherent to the registration process. Analytic transforms with high accuracy are derived from the polynomial expansion and used for transforming the tensors' orientations. Three measurements for DTI registration evaluation are presented and compared in the experimental results. The experiments for algorithm validation range from simple affine deformations to nonlinear deformation cases, and the algorithms using polynomial expansion perform well in both. Inter-subject DTI registration results are presented, showing the utility of the proposed method.

  4. Local image registration a comparison for bilateral registration mammography

    NASA Astrophysics Data System (ADS)

    Celaya-Padilaa, José M.; Rodriguez-Rojas, Juan; Trevino, Victor; Tamez-Pena, José G.

    2013-11-01

    Early tumor detection is key in reducing the number of breast cancer deaths, and screening mammography is one of the most widely available and reliable methods for early detection. However, it is difficult for the radiologist to process each case with the same attention, due to the large number of images to be read. Computer-aided detection (CADe) systems improve the tumor detection rate, but the current efficiency of these systems is not yet adequate and the correct interpretation of CADe outputs requires expert human intervention. Computer-aided diagnosis (CADx) systems are being designed to improve cancer diagnosis accuracy, but they have not been efficiently applied in breast cancer. CADx efficiency can be enhanced by considering the natural mirror symmetry between the right and left breast. The objective of this work is to evaluate co-registration algorithms for the accurate alignment of the left to the right breast for CADx enhancement. A set of mammograms was artificially altered to create a ground-truth set to evaluate the registration efficiency of the DEMONS and SPLINE deformable registration algorithms. The registration accuracy was evaluated using mean square error, mutual information and correlation. The results on the 132 images showed that the SPLINE deformable registration outperforms DEMONS on mammography images.
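Minimal versions of the three evaluation measures named above (mean square error, mutual information and correlation) can be written in a few lines; this is an illustration, not the paper's code, and the bin count for the mutual information histogram is an assumption.

```python
import numpy as np

def mse(a, b):
    """Mean square error between two images."""
    return np.mean((a - b) ** 2)

def correlation(a, b):
    """Pearson correlation coefficient of the flattened intensities."""
    return np.corrcoef(a.ravel(), b.ravel())[0, 1]

def mutual_information(a, b, bins=32):
    """Histogram-based mutual information (in nats) of two images."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p = joint / joint.sum()
    px, py = p.sum(axis=1), p.sum(axis=0)
    nz = p > 0  # avoid log(0) on empty histogram cells
    return np.sum(p[nz] * np.log(p[nz] / (px[:, None] * py[None, :])[nz]))

rng = np.random.default_rng(0)
a = rng.random((64, 64))
b = rng.random((64, 64))  # an unrelated image, for comparison
```

A perfectly registered pair maximises correlation and mutual information and drives the MSE towards zero, which is why all three serve as registration-accuracy surrogates.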

  5. Research relative to automated multisensor image registration

    NASA Technical Reports Server (NTRS)

    Kanal, L. N.

    1983-01-01

    The basic approaches to image registration are surveyed. Three image models are presented as models of the subpixel problem. A variety of approaches to subpixel analysis are presented using these models.

  6. Tensor scale-based image registration

    NASA Astrophysics Data System (ADS)

    Saha, Punam K.; Zhang, Hui; Udupa, Jayaram K.; Gee, James C.

    2003-05-01

    Tangible solutions to image registration are paramount in longitudinal as well as multi-modal medical imaging studies. In this paper, we introduce tensor scale - a recently developed local morphometric parameter - in rigid image registration. A tensor scale-based registration method incorporates local structure size, orientation and anisotropy into the matching criterion, and therefore allows efficient multi-modal image registration and holds potential to overcome the effects of intensity inhomogeneity in MRI. Two classes of two-dimensional image registration methods are proposed: (1) one that computes the angular shift between two images by correlating their tensor scale orientation histograms, and (2) one that registers two images by maximizing the similarity of tensor scale features. Results of applications of the proposed methods on proton density and T2-weighted MR brain images of (1) the same slice of the same subject, and (2) different slices of the same subject are presented. The basic superiority of tensor scale-based registration over intensity-based registration is that it may allow the use of local Gestalts formed by the intensity patterns over the image instead of simply considering intensities as isolated events at the pixel level. This would be helpful in dealing with the effects of intensity inhomogeneity and noise in MRI.
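Method class (1) above can be sketched with synthetic histograms: the angular shift between two images falls out of a circular cross-correlation of their orientation histograms. This is only an illustration of the correlation step (the bin count and bin width are assumptions, and real tensor scale histograms are not random).

```python
import numpy as np

def angular_shift_bins(h_ref, h_mov):
    """Circular cross-correlation via FFT; the argmax is the rotation of
    h_mov relative to h_ref, expressed in histogram bins."""
    c = np.fft.ifft(np.conj(np.fft.fft(h_ref)) * np.fft.fft(h_mov)).real
    return int(np.argmax(c))

rng = np.random.default_rng(2)
h = rng.random(36)  # e.g. 36 orientation bins of 5 degrees each
shift = angular_shift_bins(h, np.roll(h, 7))
```

Because the histogram is periodic in orientation, the FFT-based circular correlation finds the rotational offset in O(n log n) rather than testing every shift explicitly.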

  7. Contrast-Based 3D/2D Registration of the Left Atrium: Fast versus Consistent

    PubMed Central

    Kowalewski, Christopher; Kurzidim, Klaus; Strobel, Norbert; Hornegger, Joachim

    2016-01-01

    For augmented fluoroscopy during cardiac ablation, a preoperatively acquired 3D model of a patient's left atrium (LA) can be registered to X-ray images recorded during a contrast agent (CA) injection. An automatic registration method that works also for small amounts of CA is desired. We propose two similarity measures: The first focuses on edges of the patient anatomy. The second computes a contrast agent distribution estimate (CADE) inside the 3D model and rates its consistency with the CA as seen in biplane fluoroscopic images. Moreover, temporal filtering on the obtained registration results of a sequence is applied using a Markov chain framework. Evaluation was performed on 11 well-contrasted clinical angiographic sequences and 10 additional sequences with less CA. For well-contrasted sequences, the error for all 73 frames was 7.9 ± 6.3 mm and it dropped to 4.6 ± 4.0 mm when registering to an automatically selected, well enhanced frame in each sequence. Temporal filtering reduced the error for all frames from 7.9 ± 6.3 mm to 5.7 ± 4.6 mm. The error was typically higher if less CA was used. A combination of both similarity measures outperforms a previously proposed similarity measure. The mean accuracy for well-contrasted sequences is in the range of other proposed manual registration methods. PMID:27051412

  8. Edge-based correlation image registration for multispectral imaging

    DOEpatents

    Nandy, Prabal

    2009-11-17

    Registration information for images of a common target obtained from a plurality of different spectral bands can be obtained by combining edge detection and phase correlation. The images are edge-filtered, and pairs of the edge-filtered images are then phase correlated to produce phase correlation images. The registration information can be determined based on these phase correlation images.
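The two stages combined in the patented approach can be sketched as follows. This is an illustration of the general idea, not the patented implementation: edge-filter both spectral bands (here a Sobel gradient magnitude, an assumed choice), then phase-correlate the edge maps to recover the integer shift of one band relative to the other.

```python
import numpy as np
from scipy import ndimage

def edge_phase_correlate(a, b):
    """Edge-filter two images, then phase-correlate the edge maps to
    estimate the integer (dy, dx) shift of b relative to a."""
    # 'wrap' keeps this toy example periodic so circular shifts are exact.
    ea = np.hypot(ndimage.sobel(a, 0, mode="wrap"),
                  ndimage.sobel(a, 1, mode="wrap"))
    eb = np.hypot(ndimage.sobel(b, 0, mode="wrap"),
                  ndimage.sobel(b, 1, mode="wrap"))
    cross = np.conj(np.fft.fft2(ea)) * np.fft.fft2(eb)
    cross /= np.abs(cross) + 1e-12            # keep phase information only
    corr = np.fft.ifft2(cross).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    if dy > a.shape[0] // 2:                  # wrap large shifts to negative
        dy -= a.shape[0]
    if dx > a.shape[1] // 2:
        dx -= a.shape[1]
    return int(dy), int(dx)

rng = np.random.default_rng(3)
band_a = ndimage.gaussian_filter(rng.random((64, 64)), 2)
band_b = np.roll(band_a, (5, -7), axis=(0, 1))  # second band, shifted
```

Edge-filtering first is what makes the correlation robust across spectral bands: edges tend to coincide between bands even when absolute intensities do not.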

  9. Automated Image Registration for the Future (Invited)

    NASA Astrophysics Data System (ADS)

    Hack, W. J.

    2008-08-01

    A primary problem facing all surveys or archives of astronomical images remains the automated registration of the images. Images often have pointing errors not accounted for in their headers or metadata, errors which need to be removed in order to successfully combine them into a single, deeper, more scientifically valuable product. The primary techniques rely on either matching catalogs of positions or performing cross-correlation on images. Each of these techniques can only be applied successfully to limited sets of astronomical data. This talk will review the primary techniques currently used for image registration and identify their most obvious limitations for use in automated registration. A new algorithm merging the best of these techniques with a proven technique developed outside of astronomy will be explored as an example of a new paradigm for solving the problem of automated and robust image registration.

  10. Photorealistic image synthesis and camera validation from 2D images

    NASA Astrophysics Data System (ADS)

    Santos Ferrer, Juan C.; González Chévere, David; Manian, Vidya

    2014-06-01

    This paper presents a new 3D scene reconstruction technique using the Unity 3D game engine. The method presented here allows us to reconstruct the shapes of simple and more complex objects from multiple 2D images, including infrared and digital images for indoor scenes and only digital images for outdoor scenes, and then add the reconstructed object to the simulated scene created in Unity 3D; these scenes are then validated against real-world scenes. The method used different camera settings and explores different properties in the reconstructions of the scenes, including light, color, texture, shapes and different views. To achieve the highest possible resolution, it was necessary to extract partial textures from visible surfaces. To recover the 3D shapes and depth of simple objects that can be represented by geometric bodies, their geometric characteristics were used. To estimate the depth of more complex objects the triangulation method was used; for this, the intrinsic and extrinsic parameters were calculated using geometric camera calibration. The methods mentioned above were implemented with the Matlab tool. The technique presented here also lets us simulate short videos, by reconstructing a sequence of multiple scenes of the video separated by small margins of time. To measure the quality of the reconstructed images and video scenes, the Fast Low Band Model (FLBM) metric from the Video Quality Measurement (VQM) software was used. Low-bandwidth perception-based features include edges and motion.

  11. 2D/3D registration with the CMA-ES method

    NASA Astrophysics Data System (ADS)

    Gong, Ren Hui; Abolmaesumi, Purang

    2008-03-01

    In this paper, we propose a new method for 2D/3D registration and report its experimental results. The method employs the Covariance Matrix Adaptation Evolution Strategy (CMA-ES) algorithm to search for an optimal transformation that aligns the 2D and 3D data. The similarity calculation is based on Digitally Reconstructed Radiographs (DRRs), which are dynamically generated from the 3D data using a hardware-accelerated technique - Adaptive Slice Geometry Texture Mapping (ASGTM). Three bone phantoms of different sizes and shapes were used to test our method: a long femur, a large pelvis, and a small scaphoid. A collection of experiments was performed to register CT to fluoroscopy images and DRRs of these phantoms using the proposed method and two prior methods: our previously proposed Unscented Kalman Filter (UKF) based method and a commonly used simplex-based method. The experimental results showed that: 1) with slightly more computational overhead, the proposed method was significantly more robust to local minima than the simplex-based method; 2) while as robust as the UKF-based method in terms of capture range, the new method was not sensitive to the initial values of its exposed control parameters and places no special requirements on the cost function; 3) the proposed method was fast and consistently achieved the best accuracy among all compared methods.
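The evolution-strategy search at the core of this approach can be illustrated with a deliberately simplified sketch. Note this is a plain (mu, lambda) evolution strategy without covariance matrix adaptation, so it is only a stand-in for CMA-ES; the population sizes, step-size decay and the toy 6-DOF quadratic cost (in place of a DRR-based similarity) are all assumptions.

```python
import numpy as np

def es_minimize(f, x0, sigma=1.0, pop=32, mu=8, iters=200, seed=0):
    """Simplified (mu, lambda) evolution strategy: sample a population
    around the mean, recombine the best mu candidates, shrink the step."""
    rng = np.random.default_rng(seed)
    mean = np.asarray(x0, dtype=float)
    for _ in range(iters):
        cand = mean + sigma * rng.standard_normal((pop, mean.size))
        scores = np.array([f(c) for c in cand])
        mean = cand[np.argsort(scores)[:mu]].mean(axis=0)  # elite recombination
        sigma *= 0.97                                      # geometric step decay
    return mean

p_true = np.array([2.0, -1.0, 0.5, 0.1, -0.2, 0.3])   # "ground-truth pose"
cost = lambda p: float(np.sum((p - p_true) ** 2))      # stand-in similarity cost
p_est = es_minimize(cost, np.zeros(6), sigma=2.0)
```

Like CMA-ES proper, this search uses only cost evaluations (no gradients), which is what makes the family attractive for DRR-based similarity measures and gives the robustness to local minima reported above.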

  12. Deformable Medical Image Registration: A Survey

    PubMed Central

    Sotiras, Aristeidis; Davatzikos, Christos; Paragios, Nikos

    2013-01-01

    Deformable image registration is a fundamental task in medical image processing. Among its most important applications, one may cite: i) multi-modality fusion, where information acquired by different imaging devices or protocols is fused to facilitate diagnosis and treatment planning; ii) longitudinal studies, where temporal structural or anatomical changes are investigated; and iii) population modeling and statistical atlases used to study normal anatomical variability. In this paper, we attempt to give an overview of deformable registration methods, putting emphasis on the most recent advances in the domain. Additional emphasis has been given to techniques applied to medical images. In order to study image registration methods in depth, their main components are identified and studied independently. The most recent techniques are presented in a systematic fashion. The contribution of this paper is to provide an extensive account of registration techniques in a systematic manner. PMID:23739795

  13. Effects of spatial resolution on image registration

    NASA Astrophysics Data System (ADS)

    Zhao, Can; Carass, Aaron; Jog, Amod; Prince, Jerry L.

    2016-03-01

    This paper presents a theoretical analysis of the effect of spatial resolution on image registration. Based on the assumption of additive Gaussian noise on the images, the mean and variance of the distribution of the sum of squared differences (SSD) were estimated. Using these estimates, we evaluate a distance between the SSD distributions of aligned images and non-aligned images. The experimental results show that by matching the resolutions of the moving and fixed images one can get a better image registration result. The results agree with our theoretical analysis of SSD, but also suggest that it may be valid for mutual information as well.
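The additive-Gaussian-noise assumption above fixes the expected SSD of two aligned images: if each image is the same scene plus i.i.d. N(0, sigma^2) noise, the pixel differences are N(0, 2*sigma^2), so E[SSD] = 2*sigma^2*N over N pixels. A quick numerical check of that expectation (with illustrative values of N and sigma):

```python
import numpy as np

rng = np.random.default_rng(0)
N, sigma = 100_000, 0.1
scene = rng.random(N)                          # shared underlying image
img1 = scene + sigma * rng.standard_normal(N)  # two independent noisy copies
img2 = scene + sigma * rng.standard_normal(N)

ssd = np.sum((img1 - img2) ** 2)
expected = 2 * sigma**2 * N  # theoretical mean SSD for aligned images
```

For misaligned images the scene terms no longer cancel, inflating the SSD beyond this noise floor; the gap between the two distributions is what the paper's analysis quantifies.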

  14. Automated Registration Of Images From Multiple Sensors

    NASA Technical Reports Server (NTRS)

    Rignot, Eric J. M.; Kwok, Ronald; Curlander, John C.; Pang, Shirley S. N.

    1994-01-01

    Images of terrain scanned in common by multiple Earth-orbiting remote sensors registered automatically with each other and, where possible, on geographic coordinate grid. Simulated image of terrain viewed by sensor computed from ancillary data, viewing geometry, and mathematical model of physics of imaging. In proposed registration algorithm, simulated and actual sensor images matched by area-correlation technique.

  15. Cellular recurrent deep network for image registration

    NASA Astrophysics Data System (ADS)

    Alam, M.; Vidyaratne, L.; Iftekharuddin, Khan M.

    2015-09-01

    Image registration using Artificial Neural Networks (ANN) remains a challenging learning task. Registration can be posed as a two-step problem: parameter estimation and actual alignment/transformation using the estimated parameters. To date, ANN-based image registration techniques only perform the parameter estimation, while affine equations are used to perform the actual transformation. In this paper, we propose a novel deep ANN-based rigid image registration that combines parameter estimation and transformation as a simultaneous learning task. Our previous work shows that a complex universal approximator known as the Cellular Simultaneous Recurrent Network (CSRN) can successfully approximate affine transformations with known transformation parameters. This study introduces a deep ANN that combines a feed-forward network with a CSRN to perform full rigid registration. Layer-wise training is used to pre-train the feed-forward network for parameter estimation, followed by a CSRN for image transformation. The deep network is then fine-tuned to perform the final registration task. Our results show that the proposed deep ANN architecture achieves registration accuracy comparable to that of image affine transformation using a CSRN with known parameters. We also demonstrate the efficacy of our novel deep architecture by a performance comparison with a deep clustered MLP.

  16. 2D hexagonal quaternion Fourier transform in color image processing

    NASA Astrophysics Data System (ADS)

    Grigoryan, Artyom M.; Agaian, Sos S.

    2016-05-01

    In this paper, we present a novel concept of the quaternion discrete Fourier transform on the two-dimensional hexagonal lattice, which we call the two-dimensional hexagonal quaternion discrete Fourier transform (2-D HQDFT). The concept of the right-side 2D HQDFT is described and the left-side 2-D HQDFT is similarly considered. To calculate the transform, the image on the hexagonal lattice is described in the tensor representation when the image is presented by a set of 1-D signals, or splitting-signals which can be separately processed in the frequency domain. The 2-D HQDFT can be calculated by a set of 1-D quaternion discrete Fourier transforms (QDFT) of the splitting-signals.

  17. 2-D Imaging of Electron Temperature in Tokamak Plasmas

    SciTech Connect

    T. Munsat; E. Mazzucato; H. Park; C.W. Domier; M. Johnson; N.C. Luhmann Jr.; J. Wang; Z. Xia; I.G.J. Classen; A.J.H. Donne; M.J. van de Pol

    2004-07-08

    By taking advantage of recent developments in millimeter wave imaging technology, an Electron Cyclotron Emission Imaging (ECEI) instrument, capable of simultaneously measuring 128 channels of localized electron temperature over a 2-D map in the poloidal plane, has been developed for the TEXTOR tokamak. Data from the new instrument, detailing the MHD activity associated with a sawtooth crash, is presented.

  18. 2D electron cyclotron emission imaging at ASDEX Upgrade (invited)

    SciTech Connect

    Classen, I. G. J.; Boom, J. E.; Vries, P. C. de; Suttrop, W.; Schmid, E.; Garcia-Munoz, M.; Schneider, P. A.; Tobias, B.; Domier, C. W.; Luhmann, N. C. Jr.; Donne, A. J. H.; Jaspers, R. J. E.; Park, H. K.; Munsat, T.

    2010-10-15

    The newly installed electron cyclotron emission imaging diagnostic on ASDEX Upgrade provides measurements of the 2D electron temperature dynamics with high spatial and temporal resolution. An overview of the technical and experimental properties of the system is presented. These properties are illustrated by the measurements of the edge localized mode and the reversed shear Alfven eigenmode, showing both the advantage of having a two-dimensional (2D) measurement, as well as some of the limitations of electron cyclotron emission measurements. Furthermore, the application of singular value decomposition as a powerful tool for analyzing and filtering 2D data is presented.
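The SVD-based filtering mentioned above can be sketched in a few lines: keep only the dominant singular modes of a 2D data matrix and treat the remainder as noise. The synthetic two-mode "measurement", the mode count and the noise level below are illustrative assumptions, not diagnostic values.

```python
import numpy as np

def svd_filter(A, rank):
    """Truncated-SVD (low-rank) filter: reconstruct A from its leading
    'rank' singular modes, discarding the rest as noise."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return (U[:, :rank] * s[:rank]) @ Vt[:rank]

rng = np.random.default_rng(4)
t = np.linspace(0.0, 2.0 * np.pi, 100)
# Two coherent spatiotemporal modes (a rank-2 signal) plus small noise.
clean = np.outer(np.sin(t), np.cos(t)) + 0.5 * np.outer(np.cos(2 * t), np.sin(3 * t))
noisy = clean + 0.01 * rng.standard_normal(clean.shape)
filtered = svd_filter(noisy, rank=2)
```

Because coherent mode activity concentrates in a few singular vectors while broadband noise spreads over all of them, the truncation suppresses noise with little distortion of the modes of interest.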

  19. Sparse radar imaging using 2D compressed sensing

    NASA Astrophysics Data System (ADS)

    Hou, Qingkai; Liu, Yang; Chen, Zengping; Su, Shaoying

    2014-10-01

    Radar imaging is an ill-posed linear inverse problem, and compressed sensing (CS) has been proved to have tremendous potential in this field. This paper surveys the theory of radar imaging and concludes that ISAR imaging can be formulated mathematically as a problem of 2D sparse decomposition. Based on CS, we propose a novel measuring strategy for ISAR imaging radar and utilize random sub-sampling in both the range and azimuth dimensions, which reduces the amount of sampled data tremendously. To handle the 2D reconstruction problem, the ordinary solution is to convert the 2D problem into 1D by a Kronecker product, which sharply increases the size of the dictionary and the computational cost. In this paper, we introduce the 2D-SL0 algorithm into the reconstruction of the image. It is proved that 2D-SL0 achieves results equivalent to other 1D reconstruction methods, while the computational complexity and memory usage are reduced significantly. Moreover, we present simulation results that demonstrate the effectiveness and feasibility of our method.
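The cost of the Kronecker conversion mentioned above is easy to demonstrate. For a 2D model Y = D1 @ X @ D2.T, row-major flattening gives vec(Y) = kron(D1, D2) @ vec(X), so two small dictionaries merge into one whose size is the product of theirs; the dictionary sizes below are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(5)
D1 = rng.random((8, 16))    # range-dimension dictionary (illustrative sizes)
D2 = rng.random((10, 20))   # azimuth-dimension dictionary
X = rng.random((16, 20))    # 2D coefficient matrix (sparse in practice)

# 2D form: two small matrix products.
lhs = (D1 @ X @ D2.T).ravel()

# Equivalent 1D form: one huge Kronecker dictionary, (8*10) x (16*20).
big = np.kron(D1, D2)
rhs = big @ X.ravel()
```

Algorithms such as 2D-SL0 work directly with D1 and D2, so the (8x16)- and (10x20)-sized operators never have to be expanded into the 80x320 Kronecker matrix, which is where the memory and complexity savings come from.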

  20. Registration of In Vivo Prostate Magnetic Resonance Images to Digital Histopathology Images

    NASA Astrophysics Data System (ADS)

    Ward, A. D.; Crukley, C.; McKenzie, C.; Montreuil, J.; Gibson, E.; Gomez, J. A.; Moussa, M.; Bauman, G.; Fenster, A.

    Early and accurate diagnosis of prostate cancer enables minimally invasive therapies to cure the cancer with less morbidity. The purpose of this work is to non-rigidly register in vivo pre-prostatectomy prostate medical images to regionally-graded histopathology images from post-prostatectomy specimens, seeking a relationship between the multiparametric imaging and cancer distribution and aggressiveness. Our approach uses image-based registration in combination with a magnetically tracked probe to orient the physical slicing of the specimen to be parallel to the in vivo imaging planes, yielding a tractable 2D registration problem. We measured a target registration error of 0.85 mm, a mean slicing plane marking error of 0.7 mm, and a mean slicing error of 0.6 mm; these results compare favourably with our 2.2 mm diagnostic MR image thickness. Qualitative evaluation of in vivo imaging-histopathology fusion reveals excellent anatomic concordance between MR and digital histopathology.

  1. Register cardiac fiber orientations from 3D DTI volume to 2D ultrasound image of rat hearts

    PubMed Central

    Qin, Xulei; Wang, Silun; Shen, Ming; Zhang, Xiaodong; Lerakis, Stamatios; Wagner, Mary B.; Fei, Baowei

    2015-01-01

    Two-dimensional (2D) ultrasound or echocardiography is one of the most widely used examinations for the diagnosis of cardiac diseases. However, it only supplies the geometric and structural information of the myocardium. In order to supply more detailed microstructure information of the myocardium, this paper proposes a registration method to map cardiac fiber orientations from a three-dimensional (3D) magnetic resonance diffusion tensor imaging (MR-DTI) volume to the 2D ultrasound image. It utilizes a 2D/3D intensity-based registration procedure including rigid, log-demons, and affine transformations to search for the most similar slice from the template volume. After registration, the cardiac fiber orientations are mapped to the 2D ultrasound image via fiber relocations and reorientations. This method was validated on six images of rat hearts ex vivo. The evaluation results indicated that the final Dice similarity coefficient (DSC) achieved more than 90% after geometric registrations, and the inclination angle errors (IAE) between the mapped fiber orientations and the gold standards were less than 15 degrees. This method may provide a practical tool for cardiologists to examine cardiac fiber orientations on ultrasound images and has the potential to supply additional information for diagnosis of cardiac diseases. PMID:26855466

  2. Register cardiac fiber orientations from 3D DTI volume to 2D ultrasound image of rat hearts

    NASA Astrophysics Data System (ADS)

    Qin, Xulei; Wang, Silun; Shen, Ming; Zhang, Xiaodong; Lerakis, Stamatios; Wagner, Mary B.; Fei, Baowei

    2015-03-01

    Two-dimensional (2D) ultrasound or echocardiography is one of the most widely used examinations for the diagnosis of cardiac diseases. However, it only supplies the geometric and structural information of the myocardium. In order to supply more detailed microstructure information of the myocardium, this paper proposes a registration method to map cardiac fiber orientations from a three-dimensional (3D) magnetic resonance diffusion tensor imaging (MR-DTI) volume to the 2D ultrasound image. It utilizes a 2D/3D intensity-based registration procedure including rigid, log-demons, and affine transformations to search for the most similar slice from the template volume. After registration, the cardiac fiber orientations are mapped to the 2D ultrasound image via fiber relocations and reorientations. This method was validated on six images of rat hearts ex vivo. The evaluation results indicated that the final Dice similarity coefficient (DSC) achieved more than 90% after geometric registrations, and the inclination angle errors (IAE) between the mapped fiber orientations and the gold standards were less than 15 degrees. This method may provide a practical tool for cardiologists to examine cardiac fiber orientations on ultrasound images and has the potential to supply additional information for diagnosis of cardiac diseases.

  3. Onboard Image Registration from Invariant Features

    NASA Technical Reports Server (NTRS)

    Wang, Yi; Ng, Justin; Garay, Michael J.; Burl, Michael C.

    2008-01-01

    This paper describes a feature-based image registration technique that is potentially well-suited for onboard deployment. The overall goal is to provide a fast, robust method for dynamically combining observations from multiple platforms into sensor webs that respond quickly to short-lived events and provide rich observations of objects that evolve in space and time. The approach, which has enjoyed considerable success in mainstream computer vision applications, uses invariant SIFT descriptors extracted at image interest points together with the RANSAC algorithm to robustly estimate transformation parameters that relate one image to another. Experimental results for two satellite image registration tasks are presented: (1) automatic registration of images from the MODIS instrument on Terra to the MODIS instrument on Aqua and (2) automatic stabilization of a multi-day sequence of GOES-West images collected during the October 2007 Southern California wildfires.
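The RANSAC estimation step can be sketched in miniature. This is an illustration only, not the paper's pipeline: the paper estimates full transformation parameters from SIFT matches, whereas the sketch below reduces the model to a 2D translation (a 1-point hypothesis) with synthetic point matches and gross outliers standing in for bad descriptor matches.

```python
import numpy as np

def ransac_translation(src, dst, iters=200, tol=1.0, seed=0):
    """Robustly estimate a 2D translation from putative point matches:
    hypothesise from one random match, score by inlier count, then
    refit on the consensus set of the best hypothesis."""
    rng = np.random.default_rng(seed)
    best_t, best_count = None, -1
    for _ in range(iters):
        i = rng.integers(len(src))
        t = dst[i] - src[i]                       # 1-point hypothesis
        count = np.sum(np.linalg.norm(src + t - dst, axis=1) < tol)
        if count > best_count:
            best_t, best_count = t, count
    inliers = np.linalg.norm(src + best_t - dst, axis=1) < tol
    return (dst[inliers] - src[inliers]).mean(axis=0)  # refit on inliers

rng = np.random.default_rng(6)
src = rng.random((50, 2)) * 100.0
t_true = np.array([5.0, -3.0])
dst = src + t_true + 0.1 * rng.standard_normal((50, 2))
dst[:15] = rng.random((15, 2)) * 100.0            # 30% gross outliers
t_est = ransac_translation(src, dst)
```

The same hypothesise-score-refit loop scales to richer models (affine, homography); only the minimal sample size and the model fit change.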

  4. Dual-projection 3D-2D registration for surgical guidance: preclinical evaluation of performance and minimum angular separation

    NASA Astrophysics Data System (ADS)

    Uneri, A.; Otake, Y.; Wang, A. S.; Kleinszig, G.; Vogt, S.; Gallia, G. L.; Rigamonti, D.; Wolinsky, J.-P.; Gokaslan, Ziya L.; Khanna, A. J.; Siewerdsen, J. H.

    2014-03-01

    An algorithm for 3D-2D registration of CT and x-ray projections has been developed using dual projection views to provide 3D localization with accuracy exceeding that of conventional tracking systems. The registration framework employs a normalized gradient information (NGI) similarity metric and covariance matrix adaptation evolution strategy (CMAES) to solve for the patient pose in 6 degrees of freedom. Registration performance was evaluated in anthropomorphic head and chest phantoms, as well as a human torso cadaver, using C-arm projection views acquired at angular separations (Δ𝜃) ranging from 0° to 178°. Registration accuracy was assessed in terms of target registration error (TRE) and compared to that of an electromagnetic tracker. Studies evaluated the influence of C-arm magnification, x-ray dose, and preoperative CT slice thickness on registration accuracy and the minimum angular separation required to achieve TRE ~2 mm. The results indicate that Δ𝜃 as small as 10-20° is adequate to achieve TRE <2 mm with 95% confidence, comparable or superior to that of commercial trackers. The method allows direct registration of preoperative CT and planning data to intraoperative fluoroscopy, providing 3D localization free from conventional limitations associated with external fiducial markers, stereotactic frames, trackers, and manual registration. The studies support potential application to percutaneous spine procedures and intracranial neurosurgery.
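    The published NGI formula is not reproduced here, but a simplified gradient-alignment similarity of the same family can be sketched; CMA-ES (e.g., via an off-the-shelf optimizer) would then search the 6-DOF pose for the value maximizing this score between the simulated and measured projections. The weighting and normalisation below are illustrative choices, not the paper's definition.

    ```python
    import numpy as np

    def gradient_similarity(a, b, eps=1e-8):
        """Simplified gradient-alignment similarity between two 2D images:
        squared cosine of the angle between local gradient vectors, weighted
        by the smaller gradient magnitude; 1.0 means perfectly aligned
        gradients. (Illustrative stand-in for normalized gradient
        information, not the published NGI formula.)"""
        gay, gax = np.gradient(a.astype(float))
        gby, gbx = np.gradient(b.astype(float))
        dot = gax * gbx + gay * gby
        na, nb = np.hypot(gax, gay), np.hypot(gbx, gby)
        cos2 = (dot / (na * nb + eps)) ** 2
        w = np.minimum(na, nb)
        return float((w * cos2).sum() / (w.sum() + eps))
    ```

    Gradient-based metrics of this kind emphasize edges (bone contours, instrument outlines) and are comparatively insensitive to the smooth intensity offsets that differ between a simulated projection and a real fluoroscopic image.
    
    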

  5. Bidirectional Elastic Image Registration Using B-Spline Affine Transformation

    PubMed Central

    Gu, Suicheng; Meng, Xin; Sciurba, Frank C.; Wang, Chen; Kaminski, Naftali; Pu, Jiantao

    2014-01-01

    A registration scheme termed B-spline affine transformation (BSAT) is presented in this study to elastically align two images. We define an affine transformation, instead of the traditional translation, at each control point. Mathematically, BSAT is a generalized form of both the affine transformation and the traditional B-spline transformation (BST). In order to improve the performance of the iterative closest point (ICP) method in registering two homologous shapes with large deformations, a bidirectional rather than the traditional unidirectional objective/cost function is proposed. In implementation, the objective function is formulated as a sparse linear equation problem, and a subdivision strategy is used to achieve reasonable efficiency in registration. The performance of the developed scheme was assessed using both a two-dimensional (2D) synthesized dataset and three-dimensional (3D) volumetric computed tomography (CT) data. Our experiments showed that the proposed B-spline affine model obtains reasonable registration accuracy. PMID:24530210
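    The rationale for a bidirectional objective can be seen with a symmetric nearest-neighbour cost on point sets. This is a generic sketch of the idea, not the paper's BSAT formulation: a one-way cost is blind to structure present only in the target shape, while the symmetric cost penalises it.

    ```python
    import numpy as np
    from scipy.spatial import cKDTree

    def one_way_cost(P, Q):
        """Mean distance from each point of P to its nearest neighbour in Q."""
        return cKDTree(Q).query(P)[0].mean()

    def bidirectional_cost(P, Q):
        """Symmetric ICP-style cost evaluated in both directions, so extra
        structure in either shape is penalised rather than silently ignored."""
        return 0.5 * (one_way_cost(P, Q) + one_way_cost(Q, P))
    ```

    For example, if Q contains everything in P plus a far-away extra region, the one-way cost from P to Q is zero even though the shapes clearly differ, whereas the bidirectional cost is not.
    
    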

  6. Focusing surface wave imaging with flexible 2D array

    NASA Astrophysics Data System (ADS)

    Zhou, Shiyuan; Fu, Junqiang; Li, Zhe; Xu, Chunguang; Xiao, Dingguo; Wang, Shaohan

    2016-04-01

    Curved surface is widely exist in key parts of energy and power equipment, such as, turbine blade cylinder block and so on. Cycling loading and harsh working condition of enable fatigue cracks appear on the surface. The crack should be found in time to avoid catastrophic damage to the equipment. A flexible 2D array transducer was developed. 2D Phased Array focusing method (2DPA), Mode-Spatial Double Phased focusing method (MSDPF) and the imaging method using the flexible 2D array probe are studied. Experiments using these focusing and imaging method are carried out. Surface crack image is obtained with both 2DPA and MSDPF focusing method. It have been proved that MSDPF can be more adaptable for curved surface and more calculate efficient than 2DPA.

  7. TU-B-19A-01: Image Registration II: TG132-Quality Assurance for Image Registration

    SciTech Connect

    Brock, K; Mutic, S

    2014-06-15

    AAPM Task Group 132 was charged with a review of the current approaches and solutions for image registration in radiotherapy and with providing recommendations for quality assurance and quality control of these clinical processes. As the results of image registration are always used as the input of another process for planning or delivery, it is important for the user to understand and document the uncertainty associated with the algorithm in general and the result of a specific registration. The recommendations of this task group, which at the time of abstract submission are under review by the AAPM, include the following components. The user should understand the basic image registration techniques and methods of visualizing image fusion. The disclosure of the basic components of the image registration by commercial vendors is critical in this respect. Physicists should perform end-to-end tests of imaging, registration, and planning/treatment systems if image registration is performed on a stand-alone system. A comprehensive commissioning process should be performed and documented by the physicist prior to clinical use of the system. As documentation is important to the safe implementation of this process, a request and report system should be integrated into the clinical workflow. Finally, a patient-specific QA practice should be established for efficient evaluation of image registration results. The implementation of these recommendations will be described and illustrated during this educational session. Learning Objectives: Highlight the importance of understanding the image registration techniques used in the clinic. Describe the end-to-end tests needed for stand-alone registration systems. Illustrate a comprehensive commissioning program using both phantom data and clinical images. Describe a request and report system to ensure communication and documentation. Demonstrate a clinically efficient patient QA practice for evaluation of image

  8. Image-based registration of ultrasound and magnetic resonance images: a preliminary study

    NASA Astrophysics Data System (ADS)

    Pagoulatos, Niko; Haynor, David R.; Kim, Yongmin

    2000-04-01

    A number of surgical procedures are planned and executed based on medical images. Typically, x-ray computed tomography (CT) and magnetic resonance (MR) images are acquired preoperatively for diagnosis and surgical planning. In the operating room, execution of the surgical plan becomes feasible due to registration between preoperative images and surgical space where patient anatomy lies. In this paper, we present a new automatic algorithm where we use ultrasound (US) 2D B-mode images to register the preoperative MR image coordinate system with the surgical space which in our experiments is represented by the reference coordinate system of a DC magnetic position sensor. The position sensor is also used for tracking the position and orientation of the US images. Furthermore, we simulated patient anatomy by using custom-built phantoms. Our registration algorithm is a hybrid between fiducial-based and image-based registration algorithms. Initially, we perform a fiducial-based rigid-body registration between MR and position sensor space. Then, by changing various parameters of the rigid-body fiducial-based transformation, we produce an MR-sensor misregistration in order to simulate potential movements of the skin fiducials and/or the organs. The perturbed transformation serves as the initial estimate for the image-based registration algorithm, which uses normalized mutual information as a similarity measure, where one or more US images of the phantom are automatically matched with the MR image data set. By using the fiducial-based registration as the gold standard, we could compute the accuracy of the image-based registration algorithm in registering MR and sensor spaces. The registration error varied depending on the number of 2D US images used for registration. A good compromise between accuracy and computation time was the use of 3 US slices. In this case, the registration error had a mean value of 1.88 mm and standard deviation of 0.42 mm, whereas the required
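    A minimal version of the normalized mutual information similarity used above can be sketched from a joint intensity histogram (Studholme's overlap-invariant form; the bin count here is an arbitrary choice):

    ```python
    import numpy as np

    def normalized_mutual_information(a, b, bins=32):
        """NMI(a, b) = (H(a) + H(b)) / H(a, b): 2.0 for identical images,
        ~1.0 for statistically independent ones."""
        joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
        pxy = joint / joint.sum()
        px, py = pxy.sum(axis=1), pxy.sum(axis=0)   # marginal distributions
        h = lambda p: -np.sum(p[p > 0] * np.log(p[p > 0]))  # Shannon entropy
        return (h(px) + h(py)) / h(pxy)
    ```

    Because NMI only asks that intensities in one image *predict* intensities in the other, it tolerates the very different contrast mechanisms of US and MR, which is why it is a standard choice for multimodal registration.
    
    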

  9. Reflectance and fluorescence hyperspectral elastic image registration

    NASA Astrophysics Data System (ADS)

    Lange, Holger; Baker, Ross; Hakansson, Johan; Gustafsson, Ulf P.

    2004-05-01

    Science and Technology International (STI) presents a novel multi-modal elastic image registration approach for a new hyperspectral medical imaging modality. STI's HyperSpectral Diagnostic Imaging (HSDI) cervical instrument is used for the early detection of uterine cervical cancer. A Computer-Aided-Diagnostic (CAD) system is being developed to aid the physician with the diagnosis of pre-cancerous and cancerous tissue regions. The CAD system uses the fusion of multiple data sources to optimize its performance. The key enabling technology for the data fusion is image registration. The difficulty lies in the image registration of fluorescence and reflectance hyperspectral data due to the occurrence of soft tissue movement and the limited resemblance of these types of imagery. The presented approach is based on embedding a reflectance image in the fluorescence hyperspectral imagery. Having a reflectance image in both data sets resolves the resemblance problem and thereby enables the use of elastic image registration algorithms required to compensate for soft tissue movements. Several methods of embedding the reflectance image in the fluorescence hyperspectral imagery are described. Initial experiments with human subject data are presented where a reflectance image is embedded in the fluorescence hyperspectral imagery.

  10. Improving VERITAS sensitivity by fitting 2D Gaussian image parameters

    NASA Astrophysics Data System (ADS)

    Christiansen, Jodi; VERITAS Collaboration

    2012-12-01

    Our goal is to improve the acceptance and angular resolution of VERITAS by implementing a camera image-fitting algorithm. Elliptical image parameters are extracted from 2D Gaussian distribution fits using a χ2 minimization, instead of the standard technique based on the principal moments of an island of pixels above threshold. We optimize the analysis cuts and then characterize the improvements using simulations. We find that the observing time required to reach 5-sigma significance for weak point sources is reduced by 20%.
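    The χ² fit of elliptical 2D Gaussian image parameters can be sketched with a least-squares minimiser on a synthetic image. This is a generic illustration, not the VERITAS analysis chain; the parameter values and noise level are invented.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def gauss2d(coords, A, x0, y0, sx, sy, theta):
        """Elliptical 2D Gaussian evaluated on flattened (x, y) coordinates."""
        x, y = coords
        ct, st = np.cos(theta), np.sin(theta)
        xr = ct * (x - x0) + st * (y - y0)   # rotate into the ellipse frame
        yr = -st * (x - x0) + ct * (y - y0)
        return A * np.exp(-0.5 * ((xr / sx) ** 2 + (yr / sy) ** 2))

    # Synthetic elliptical image plus noise; curve_fit does the chi^2 minimisation
    y, x = np.mgrid[0:64, 0:64]
    truth = (10.0, 30.0, 34.0, 6.0, 3.0, 0.4)  # A, x0, y0, sx, sy, theta
    img = gauss2d((x.ravel(), y.ravel()), *truth)
    img = img + np.random.default_rng(0).normal(0.0, 0.2, img.shape)
    popt, pcov = curve_fit(gauss2d, (x.ravel(), y.ravel()), img,
                           p0=(8.0, 32.0, 32.0, 5.0, 5.0, 0.0))
    ```

    Unlike principal-moment estimates, the fit uses every pixel's amplitude (not just an above-threshold island), which is where the gain for faint images comes from.
    
    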

  11. The image registration of multi-band images by geometrical optics

    NASA Astrophysics Data System (ADS)

    Yan, Yung-Jhe; Chiang, Hou-Chi; Tsai, Yu-Hsiang; Huang, Ting-Wei; Mang, Ou-Yang

    2015-09-01

    Image fusion is the combination of two or more images into one. The fusion of multi-band spectral images has many applications, such as thermal systems, remote sensing, and medical treatment. The images are taken with different imaging sensors; if the sensors image through different optical paths at the same time, they occupy different positions, which makes image registration more difficult because the images have different fields of view (FOV), resolutions, and view angles. It is therefore important to establish the relationship between viewpoints in one image and the other. In this paper, we focus on the problem of image registration for two non-pinhole sensors. The affine transformation between the 2D image and the 3D real world can be derived from the geometrical optics of the sensors; in other words, the geometric affine transformation relating the two images is derived from the intrinsic and extrinsic parameters of the two sensors. From this affine transformation, the overlap of the FOVs in the two images can be calculated and the two images resampled to the same resolution. Finally, we construct the image registration model from the mapping function. It merges images from different imaging sensors that absorb different wavebands of the electromagnetic spectrum at different positions at the same time.
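    For the special case of two boresighted pinhole-like sensors viewing a distant (or planar) scene, the pixel-to-pixel mapping derived from intrinsic and extrinsic parameters reduces to a homography H = K₂ R K₁⁻¹. The sketch below uses that simplification with made-up parameters; the paper's non-pinhole case is more general.

    ```python
    import numpy as np

    def intrinsics(fx, fy, cx, cy):
        """Pinhole intrinsic matrix: focal lengths and principal point in pixels."""
        return np.array([[fx, 0.0, cx], [0.0, fy, cy], [0.0, 0.0, 1.0]])

    # Hypothetical sensor pair: visible camera and a half-resolution thermal one
    K_vis = intrinsics(800.0, 800.0, 320.0, 240.0)
    K_ir = intrinsics(400.0, 400.0, 160.0, 120.0)
    R = np.eye(3)                       # no relative rotation (boresighted)
    H = K_ir @ R @ np.linalg.inv(K_vis)  # maps visible pixels to thermal pixels

    def map_pixel(H, u, v):
        """Apply the homography to one pixel coordinate."""
        p = H @ np.array([u, v, 1.0])
        return p[:2] / p[2]
    ```

    With these numbers the visible principal point (320, 240) lands on the thermal principal point (160, 120), and offsets from it are halved, matching the 2:1 resolution ratio; the FOV overlap is then just the region whose mapped coordinates fall inside the second image.
    
    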

  12. Image-based registration for two-dimensional and three-dimensional ultrasound imaging

    NASA Astrophysics Data System (ADS)

    Krucker, Jochen

    Image-based registration techniques were developed, evaluated, and applied to 2D and 3D ultrasound (US) imaging in the context of deformation and aberration detection and correction. The specific applications demonstrated here include 3D compounding, generation of extended fields of view, and sound speed estimation. Despite the enormous clinical importance that diagnostic US has gained over more than four decades, and despite the fact that advances in software development and computer technology have made image registration a widely studied and moderately applied technique in other medical imaging modalities, US and image registration have rarely been combined in research or clinical application. We will show that not only can some image registration methods be transferred from other imaging modalities and adjusted to operate on US images, but also that registration can overcome or greatly ameliorate some of the existing limitations of US imaging. A nonlinear registration algorithm developed specifically for ultrasound showed registration accuracy of 0.2 mm in volumes with synthetic deformations, 0.3 mm in phantom experiments, and 0.6 mm in vivo. Extended high-resolution ultrasound volumes with lateral extents of over 10 cm were created by fusing together 3 or 4 individual volumes, using image registration in the areas of overlap. 3D compounding in the out-of-plane direction was achieved by registration of US volumes obtained from different look directions. Examples of compounding in phantoms and in vivo show an increased contrast-to-noise ratio and better visualization of specular reflectors. Image-based estimates of the average sound speed in the field of view were obtained using registration of steered 2D US images. The accuracy of the estimates was improved by including simulations of the sound field generated by the array. Evaluated over a range of sound speeds from 1490 to 1560 m/s in a custom-made phantom, the simulation results reduced the RMS deviation between the

  13. A 2-D ECE Imaging Diagnostic for TEXTOR

    NASA Astrophysics Data System (ADS)

    Wang, J.; Deng, B. H.; Domier, C. W.; Luhmann, H. Lu, Jr.

    2002-11-01

    A true 2-D extension to the UC Davis ECE Imaging (ECEI) concept is under development for installation on the TEXTOR tokamak in 2003. This combines the use of linear arrays with multichannel conventional wideband heterodyne ECE radiometers to provide a true 2-D imaging system. This is in contrast to current 1-D ECEI systems in which 2-D images are obtained through the use of multiple plasma discharges (varying the scanned emission frequency each discharge). Here, each array element of the 20 channel mixer array measures plasma emission at 16 simultaneous frequencies to form a 16x20 image of the plasma electron temperature Te. Correlation techniques can then be applied to any pair of the 320 image elements to study both radial and poloidal characteristics of turbulent Te fluctuations. The system relies strongly on the development of low cost, wideband (2-18 GHz) IF detection electronics for use in both ECE Imaging as well as conventional heterodyne ECE radiometry. System details, with a strong focus on the wideband IF electronics development, will be presented. *Supported by U.S. DoE Contracts DE-FG03-95ER54295 and DE-FG03-99ER54531.

  14. Targeted fluorescence imaging enhanced by 2D materials: a comparison between 2D MoS2 and graphene oxide.

    PubMed

    Xie, Donghao; Ji, Ding-Kun; Zhang, Yue; Cao, Jun; Zheng, Hu; Liu, Lin; Zang, Yi; Li, Jia; Chen, Guo-Rong; James, Tony D; He, Xiao-Peng

    2016-08-01

    Here we demonstrate that 2D MoS2 can enhance the receptor-targeting and imaging ability of a fluorophore-labelled ligand. The 2D MoS2 has an enhanced working concentration range when compared with graphene oxide, resulting in the improved imaging of both cell and tissue samples.

  15. Software for Automated Image-to-Image Co-registration

    NASA Technical Reports Server (NTRS)

    Benkelman, Cody A.; Hughes, Heidi

    2007-01-01

    The project objectives are: a) Develop software to fine-tune image-to-image co-registration, presuming images are orthorectified prior to input; b) Create a reusable software development kit (SDK) to enable incorporation of these tools into other software; c) Provide automated testing for quantitative analysis; and d) Develop software that applies multiple techniques to achieve subpixel precision in the co-registration of image pairs.

  16. Groupwise Image Registration Guided by a Dynamic Digraph of Images.

    PubMed

    Tang, Zhenyu; Fan, Yong

    2016-04-01

    For groupwise image registration, graph theoretic methods have been adopted for discovering the manifold of images to be registered so that accurate registration of images to a group center image can be achieved by aligning similar images that are linked by the shortest graph paths. However, the image similarity measures adopted to build a graph of images in the extant methods are essentially pairwise measures, not effective for capturing the groupwise similarity among multiple images. To overcome this problem, we present a groupwise image similarity measure that is built on sparse coding for characterizing image similarity among all input images and build a directed graph (digraph) of images so that similar images are connected by the shortest paths of the digraph. Following the shortest paths determined according to the digraph, images are registered to a group center image in an iterative manner by decomposing a large anatomical deformation field required to register an image to the group center image into a series of small ones between similar images. During the iterative image registration, the digraph of images evolves dynamically at each iteration step to pursue an accurate estimation of the image manifold. Moreover, an adaptive dictionary strategy is adopted in the groupwise image similarity measure to ensure fast convergence of the iterative registration procedure. The proposed method has been validated based on both simulated and real brain images, and experiment results have demonstrated that our method was more effective for learning the manifold of input images and achieved higher registration accuracy than state-of-the-art groupwise image registration methods.
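    The shortest-path decomposition can be sketched on a toy weighted digraph. The weights below are arbitrary stand-ins for the sparse-coding dissimilarities; following the path from an image to the group centre yields the chain of small registrations that compose into the large one.

    ```python
    import numpy as np
    from scipy.sparse.csgraph import dijkstra

    # Toy digraph of 5 images: w[i, j] is a dissimilarity-derived edge weight
    # (np.inf = no edge). Shortest paths to the group-centre image decompose
    # one large deformation into a chain of small ones between similar images.
    w = np.array([
        [0, 1, np.inf, np.inf, np.inf],
        [1, 0, 1, np.inf, 5],
        [np.inf, 1, 0, 1, np.inf],
        [np.inf, np.inf, 1, 0, 1],
        [np.inf, 5, np.inf, 1, 0],
    ])
    center = 2
    dist, pred = dijkstra(w, indices=center, return_predecessors=True)

    def path_to_center(i):
        """Chain of images through which image i is registered to the centre."""
        path = [i]
        while path[-1] != center:
            path.append(int(pred[path[-1]]))
        return path
    ```

    Here image 4 reaches the centre via image 3 (total weight 2) rather than via its direct but dissimilar neighbour 1 (weight 6), which is exactly the behaviour the abstract describes: prefer several small deformations over one large one.
    
    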

  17. Interactive 2D to 3D stereoscopic image synthesis

    NASA Astrophysics Data System (ADS)

    Feldman, Mark H.; Lipton, Lenny

    2005-03-01

    Advances in stereoscopic display technologies, graphic card devices, and digital imaging algorithms have opened up new possibilities in synthesizing stereoscopic images. The power of today"s DirectX/OpenGL optimized graphics cards together with adapting new and creative imaging tools found in software products such as Adobe Photoshop, provide a powerful environment for converting planar drawings and photographs into stereoscopic images. The basis for such a creative process is the focus of this paper. This article presents a novel technique, which uses advanced imaging features and custom Windows-based software that utilizes the Direct X 9 API to provide the user with an interactive stereo image synthesizer. By creating an accurate and interactive world scene with moveable and flexible depth map altered textured surfaces, perspective stereoscopic cameras with both visible frustums and zero parallax planes, a user can precisely model a virtual three-dimensional representation of a real-world scene. Current versions of Adobe Photoshop provide a creative user with a rich assortment of tools needed to highlight elements of a 2D image, simulate hidden areas, and creatively shape them for a 3D scene representation. The technique described has been implemented as a Photoshop plug-in and thus allows for a seamless transition of these 2D image elements into 3D surfaces, which are subsequently rendered to create stereoscopic views.

  18. Automated image registration for FDOPA PET studies

    NASA Astrophysics Data System (ADS)

    Lin, Kang-Ping; Huang, Sung-Cheng; Yu, Dan-Chu; Melega, William; Barrio, Jorge R.; Phelps, Michael E.

    1996-12-01

    In this study, various image registration methods are investigated for their suitability for registration of L-6-[18F]-fluoro-DOPA (FDOPA) PET images. Five different optimization criteria including sum of absolute difference (SAD), mean square difference (MSD), cross-correlation coefficient (CC), standard deviation of pixel ratio (SDPR), and stochastic sign change (SSC) were implemented and Powell's algorithm was used to optimize the criteria. The optimization criteria were calculated either unidirectionally (i.e. only evaluating the criteria for comparing the resliced image 1 with the original image 2) or bidirectionally (i.e. averaging the criteria for comparing the resliced image 1 with the original image 2 and those for the sliced image 2 with the original image 1). Monkey FDOPA images taken at various known orientations were used to evaluate the accuracy of different methods. A set of human FDOPA dynamic images was used to investigate the ability of the methods for correcting subject movement. It was found that a large improvement in performance resulted when bidirectional rather than unidirectional criteria were used. Overall, the SAD, MSD and SDPR methods were found to be comparable in performance and were suitable for registering FDOPA images. The MSD method gave more adequate results for frame-to-frame image registration for correcting subject movement during a dynamic FDOPA study. The utility of the registration method is further demonstrated by registering FDOPA images in monkeys before and after amphetamine injection to reveal more clearly the changes in spatial distribution of FDOPA due to the drug intervention.
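    A bidirectional criterion optimized with Powell's method can be sketched for a pure-translation 2D registration. This is a toy illustration, not the FDOPA pipeline: the image, shift values, and the central-crop margin (used to keep reslicing padding out of the criterion) are all invented.

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter, shift as nd_shift
    from scipy.optimize import minimize

    def sad(a, b, margin=8):
        """Sum-of-absolute-differences criterion over the central region
        (the margin keeps zero padding introduced by reslicing out of play)."""
        return np.abs(a[margin:-margin, margin:-margin] -
                      b[margin:-margin, margin:-margin]).mean()

    def bidirectional_sad(fixed, moving, t):
        """Average the criterion over both reslicing directions, mirroring
        the bidirectional criteria described above."""
        t = np.asarray(t, float)
        return 0.5 * (sad(nd_shift(moving, t), fixed) +
                      sad(nd_shift(fixed, -t), moving))

    # Smooth test image, a misaligned copy, and a Powell search for the correction
    fixed = gaussian_filter(np.random.default_rng(0).random((64, 64)), 3.0)
    moving = nd_shift(fixed, (2.0, -1.5))
    res = minimize(lambda t: bidirectional_sad(fixed, moving, t),
                   x0=np.zeros(2), method="Powell")
    ```

    Powell's method needs no gradient of the criterion, which is why it pairs naturally with non-smooth measures such as SAD or the stochastic sign change.
    
    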

  19. Validation for 2D/3D registration II: The comparison of intensity- and gradient-based merit functions using a new gold standard data set

    SciTech Connect

    Gendrin, Christelle; Markelj, Primoz; Pawiro, Supriyanto Ardjo; Spoerk, Jakob; Bloch, Christoph; Weber, Christoph; Figl, Michael; Bergmann, Helmar; Birkfellner, Wolfgang; Likar, Bostjan; Pernus, Franjo

    2011-03-15

    Purpose: A new gold standard data set for validation of 2D/3D registration based on a porcine cadaver head with attached fiducial markers was presented in the first part of this article. The advantage of this new phantom is the large amount of soft tissue, which simulates realistic conditions for registration. This article tests the performance of intensity- and gradient-based algorithms for 2D/3D registration using the new phantom data set. Methods: Intensity-based methods with four merit functions, namely, cross correlation, rank correlation, correlation ratio, and mutual information (MI), and two gradient-based algorithms, the backprojection gradient-based (BGB) registration method and the reconstruction gradient-based (RGB) registration method, were compared. Four volumes consisting of CBCT with two fields of view, 64 slice multidetector CT, and magnetic resonance-T1 weighted images were registered to a pair of kV x-ray images and a pair of MV images. A standardized evaluation methodology was employed. Targets were evenly spread over the volumes and 250 starting positions of the 3D volumes with initial displacements of up to 25 mm from the gold standard position were calculated. After the registration, the displacement from the gold standard was retrieved and the root mean square (RMS), mean, and standard deviation of the mean target registration error (mTRE) over the 250 registrations were derived. Additionally, the following merit properties were computed: accuracy, capture range, number of minima, risk of nonconvergence, and distinctiveness of optimum for better comparison of the robustness of each merit. Results: Among the merit functions used for the intensity-based method, MI reached the best accuracy with an RMS mTRE down to 1.30 mm. Furthermore, it was the only merit function that could accurately register the CT to the kV x rays with the presence of tissue deformation. As for the gradient-based methods, BGB and RGB methods achieved subvoxel accuracy (RMS m

  20. Quantifying Therapeutic and Diagnostic Efficacy in 2D Microvascular Images

    NASA Technical Reports Server (NTRS)

    Parsons-Wingerter, Patricia; Vickerman, Mary B.; Keith, Patricia A.

    2009-01-01

    VESGEN is a newly automated, user-interactive program that maps and quantifies the effects of vascular therapeutics and regulators on microvascular form and function. VESGEN analyzes two-dimensional, black and white vascular images by measuring important vessel morphology parameters. This software guides the user through each required step of the analysis process via a concise graphical user interface (GUI). Primary applications of the VESGEN code are 2D vascular images acquired as clinical diagnostic images of the human retina and as experimental studies of the effects of vascular regulators and therapeutics on vessel remodeling.

  1. An image registration based ultrasound probe calibration

    NASA Astrophysics Data System (ADS)

    Li, Xin; Kumar, Dinesh; Sarkar, Saradwata; Narayanan, Ram

    2012-02-01

    Reconstructed 3D ultrasound of prostate gland finds application in several medical areas such as image guided biopsy, therapy planning and dose delivery. In our application, we use an end-fire probe rotated about its axis to acquire a sequence of rotational slices to reconstruct 3D TRUS (Transrectal Ultrasound) image. The image acquisition system consists of an ultrasound transducer situated on a cradle directly attached to a rotational sensor. However, due to system tolerances, axis of probe does not align exactly with the designed axis of rotation resulting in artifacts in the 3D reconstructed ultrasound volume. We present a rigid registration based automatic probe calibration approach. The method uses a sequence of phantom images, each pair acquired at angular separation of 180 degrees and registers corresponding image pairs to compute the deviation from designed axis. A modified shadow removal algorithm is applied for preprocessing. An attribute vector is constructed from image intensity and a speckle-insensitive information-theoretic feature. We compare registration between the presented method and expert-corrected images in 16 prostate phantom scans. Images were acquired at multiple resolutions, and different misalignment settings from two ultrasound machines. Screenshots from 3D reconstruction are shown before and after misalignment correction. Registration parameters from automatic and manual correction were found to be in good agreement. Average absolute differences of translation and rotation between automatic and manual methods were 0.27 mm and 0.65 degree, respectively. The registration parameters also showed lower variability for automatic registration (pooled standard deviation σtranslation = 0.50 mm, σrotation = 0.52 degree) compared to the manual approach (pooled standard deviation σtranslation = 0.62 mm, σrotation = 0.78 degree).

  2. A survey of medical image registration - under review.

    PubMed

    Viergever, Max A; Maintz, J B Antoine; Klein, Stefan; Murphy, Keelin; Staring, Marius; Pluim, Josien P W

    2016-10-01

    A retrospective view on the past two decades of the field of medical image registration is presented, guided by the article "A survey of medical image registration" (Maintz and Viergever, 1998). It shows that the classification of the field introduced in that article is still usable, although some modifications to do justice to advances in the field would be due. The main changes over the last twenty years are the shift from extrinsic to intrinsic registration, the primacy of intensity-based registration, the breakthrough of nonlinear registration, the progress of inter-subject registration, and the availability of generic image registration software packages. Two problems that were called urgent already 20 years ago, are even more urgent nowadays: Validation of registration methods, and translation of results of image registration research to clinical practice. It may be concluded that the field of medical image registration has evolved, but still is in need of further development in various aspects.

  3. Automatic parameter selection for multimodal image registration.

    PubMed

    Hahn, Dieter A; Daum, Volker; Hornegger, Joachim

    2010-05-01

    Over the past ten years similarity measures based on intensity distributions have become state-of-the-art in automatic multimodal image registration. An implementation for clinical usage has to support a plurality of images. However, a generally applicable parameter configuration for the number and sizes of histogram bins, optimal Parzen-window kernel widths or background thresholds cannot be found. This explains why various research groups present partly contradictory empirical proposals for these parameters. This paper proposes a set of data-driven estimation schemes for a parameter-free implementation that eliminates major caveats of heuristic trial and error. We present the following novel approaches: a new coincidence weighting scheme to reduce the influence of background noise on the similarity measure in combination with Max-Lloyd requantization, and a tradeoff for the automatic estimation of the number of histogram bins. These methods have been integrated into a state-of-the-art rigid registration that is based on normalized mutual information and applied to CT-MR, PET-MR, and MR-MR image pairs of the RIRE 2.0 database. We compare combinations of the proposed techniques to a standard implementation using default parameters, which can be found in the literature, and to a manual registration by a medical expert. Additionally, we analyze the effects of various histogram sizes, sampling rates, and error thresholds for the number of histogram bins. The comparison of the parameter selection techniques yields 25 approaches in total, with 114 registrations each. The number of bins has no significant influence on the proposed implementation that performs better than both the manual and the standard method in terms of acceptance rates and target registration error (TRE). The overall mean TRE is 2.34 mm compared to 2.54 mm for the manual registration and 6.48 mm for a standard implementation. Our results show a significant TRE reduction for distortion
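    The paper's own estimation schemes are not reproduced here; as a flavour of replacing a fixed histogram size with a data-driven one, a generic bin-count rule (Freedman–Diaconis, which is not the authors' scheme) can be sketched:

    ```python
    import numpy as np

    def fd_bins(x):
        """Freedman-Diaconis rule: bin width 2*IQR*n^(-1/3). A generic
        data-driven bin-count estimator, shown only to illustrate deriving
        histogram parameters from the data instead of fixing them a priori."""
        x = np.asarray(x, float).ravel()
        iqr = np.subtract(*np.percentile(x, [75, 25]))
        width = 2.0 * iqr * len(x) ** (-1.0 / 3.0)
        return max(1, int(np.ceil((x.max() - x.min()) / width)))
    ```

    The practical point matches the abstract's: a bin count tied to the sample size and spread removes one of the heuristic knobs that otherwise has to be tuned per modality.
    
    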

  4. Spatially weighted mutual information image registration for image guided radiation therapy

    SciTech Connect

    Park, Samuel B.; Rhee, Frank C.; Monroe, James I.; Sohn, Jason W.

    2010-09-15

    Purpose: To develop a new metric for image registration that incorporates the (sub)pixelwise differential importance along spatial location and to demonstrate its application for image guided radiation therapy (IGRT). Methods: It is well known that rigid-body image registration with mutual information is dependent on the size and location of the image subset on which the alignment analysis is based [the designated region of interest (ROI)]. Therefore, careful review and manual adjustments of the resulting registration are frequently necessary. Although there have been some investigations of weighted mutual information (WMI), these efforts could not apply the differential importance to a particular spatial location since WMI only applies the weight to the joint histogram space. The authors developed the spatially weighted mutual information (SWMI) metric by incorporating an adaptable weight function with spatial localization into mutual information. SWMI enables the user to apply the selected transform to medically 'important' areas such as tumors and critical structures, so SWMI is neither dominated by nor neglectful of the neighboring structures. Since SWMI can be utilized with any weight function form, the authors presented two examples of weight functions for IGRT application: a Gaussian-shaped weight function (GW) applied to a user-defined location and a structures-of-interest (SOI) based weight function. An image registration example using a synthesized 2D image is presented to illustrate the efficacy of SWMI. The convergence and feasibility of the registration method as applied to clinical imaging are illustrated by fusing a prostate treatment planning CT with a clinical cone beam CT (CBCT) image set acquired for patient alignment. Forty-one trials are run to test the speed of convergence. The authors also applied SWMI registration using two types of weight functions to two head and neck cases and a prostate case with clinically acquired CBCT/MVCT image sets. The
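    The core idea, scaling each pixel's contribution to the joint histogram by its spatial importance, can be sketched as follows. The Gaussian weight placement and width are made up, and this is a simplified reading of the metric, not the authors' implementation.

    ```python
    import numpy as np

    def spatially_weighted_mi(a, b, w, bins=32):
        """Mutual information in which each pixel's joint-histogram
        contribution is scaled by a spatial weight map w, so alignment of
        the heavily weighted region dominates the metric."""
        joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins,
                                     weights=w.ravel())
        pxy = joint / joint.sum()
        px, py = pxy.sum(axis=1), pxy.sum(axis=0)
        nz = pxy > 0
        return float(np.sum(pxy[nz] * np.log(pxy[nz] / np.outer(px, py)[nz])))

    # Gaussian-shaped weight (GW) centred on a hypothetical target at (40, 40)
    yy, xx = np.mgrid[0:64, 0:64]
    w = np.exp(-((xx - 40.0) ** 2 + (yy - 40.0) ** 2) / (2.0 * 8.0 ** 2))
    ```

    Because the weight enters per pixel rather than per histogram bin, the metric can emphasize a tumor or critical structure at a specific location, which plain WMI (weights in joint-histogram space only) cannot.
    
    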

  5. SAR imaging via modern 2-D spectral estimation methods.

    PubMed

    DeGraaf, S R

    1998-01-01

    This paper discusses the use of modern 2D spectral estimation algorithms for synthetic aperture radar (SAR) imaging. The motivation for applying power spectrum estimation methods to SAR imaging is to improve resolution, remove sidelobe artifacts, and reduce speckle compared to what is possible with conventional Fourier transform SAR imaging techniques. This paper makes two principal contributions to the field of adaptive SAR imaging. First, it provides a comprehensive comparison of 2D spectral estimation methods for SAR imaging: a synopsis of the algorithms available, a discussion of their relative merits for SAR imaging, and an illustration of their performance on simulated and collected SAR imagery. Some of the algorithms presented, or their derivations, are new, as are some of the insights into or analyses of the algorithms. Second, this work develops multichannel variants of four related algorithms, the minimum variance method (MVM), reduced-rank MVM (RRMVM), adaptive sidelobe reduction (ASR) and space variant apodization (SVA), to estimate both reflectivity intensity and interferometric height from polarimetric displaced-aperture interferometric data. All of these interferometric variants are new. In the interferometric context, adaptive spectral estimation can improve the height estimates through a combination of adaptive nulling and averaging. Examples illustrate that MVM, ASR, and SVA offer significant advantages over Fourier methods for estimating both scattering intensity and interferometric height, and allow empirical comparison of the accuracies of Fourier, MVM, ASR, and SVA interferometric height estimates.
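
    The minimum variance (Capon/MVM) principle can be illustrated in 1D; the paper applies its 2D analogue to SAR phase history data. In this sketch the snapshot length, diagonal loading factor, and signal parameters are all made up, and the forward-smoothed covariance is the simplest possible choice:

    ```python
    import numpy as np

    def capon_spectrum(x, m, omegas):
        """1D minimum-variance (Capon/MVM) spectrum: P(w) = 1 / (a^H R^-1 a)."""
        n = len(x)
        # Forward smoothing: overlapping length-m snapshots as covariance samples.
        snaps = np.stack([x[i:i + m] for i in range(n - m + 1)], axis=1)
        R = snaps @ snaps.conj().T / snaps.shape[1]
        R += 1e-3 * np.trace(R).real / m * np.eye(m)   # diagonal loading
        Rinv = np.linalg.inv(R)
        a = np.exp(1j * np.outer(np.arange(m), omegas))  # steering vectors (m x W)
        denom = np.einsum('ij,ik,kj->j', a.conj(), Rinv, a).real
        return 1.0 / denom

    rng = np.random.default_rng(5)
    n, omega0 = 64, 0.9                                  # one complex sinusoid
    x = (np.exp(1j * omega0 * np.arange(n))
         + 0.1 * (rng.standard_normal(n) + 1j * rng.standard_normal(n)))
    omegas = np.linspace(0, np.pi, 256)
    P = capon_spectrum(x, 16, omegas)
    omega_hat = omegas[np.argmax(P)]
    ```

    Compared with a windowed periodogram, the adaptive weights sharpen the peak and suppress sidelobes, which is the resolution/sidelobe benefit the abstract describes.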

  6. 2D/3D image (facial) comparison using camera matching.

    PubMed

    Goos, Mirelle I M; Alberink, Ivo B; Ruifrok, Arnout C C

    2006-11-10

    A problem in forensic facial comparison of images of perpetrators and suspects is that distances between fixed anatomical points in the face, which form a good starting point for objective, anthropometric comparison, vary strongly with the position and orientation of the camera. In the case of a cooperating suspect, a 3D image may be taken using, e.g., a laser scanning device. By projecting the 3D image onto a 2D image with the suspect's head in the same pose as that of the perpetrator, using the same focal length and pixel aspect ratio, numerical comparison of (ratios of) distances between fixed points becomes feasible. An experiment was performed in which, starting from two 3D scans and one 2D image of two colleagues, male and female, and using seven fixed anatomical locations in the face, comparisons were made for the matching and non-matching case. Using this method, the non-matching pair could not be distinguished from the matching pair of faces. Facial expression and image resolution were more or less optimal, and the results of the study are not encouraging for the use of anthropometric arguments in the identification process. More research needs to be done, though, on larger sets of facial comparisons. PMID:16337353
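
    The projection step, rendering 3D landmarks to 2D with a given focal length and pixel aspect ratio and then comparing ratios of inter-landmark distances, can be sketched as below. The seven landmark coordinates are invented for illustration; the sketch also shows that such ratios drift with camera distance under perspective, which is part of the problem the paper describes:

    ```python
    import numpy as np

    def project_points(points_3d, focal, aspect=1.0):
        """Pinhole projection of 3D landmarks onto the image plane."""
        X, Y, Z = points_3d.T
        return np.column_stack([focal * X / Z, focal * aspect * Y / Z])

    # Seven hypothetical facial landmarks (mm), camera looking down +Z.
    landmarks = np.array([
        [0.0, 40.0, 600.0],    # nasion
        [0.0, 0.0, 590.0],     # nose tip
        [-30.0, 25.0, 610.0],  # left exocanthion
        [30.0, 25.0, 610.0],   # right exocanthion
        [0.0, -30.0, 605.0],   # stomion
        [-25.0, -60.0, 615.0], # left gonion
        [25.0, -60.0, 615.0],  # right gonion
    ])
    near = project_points(landmarks, focal=800.0)
    far = project_points(landmarks + [0.0, 0.0, 400.0], focal=800.0)

    def dist(pts, i, j):
        return np.linalg.norm(pts[i] - pts[j])

    d_near = dist(near, 2, 3)                       # eye-corner distance, close camera
    d_far = dist(far, 2, 3)                         # same distance, camera moved back
    r_near = d_near / dist(near, 5, 6)              # ratio eye width / jaw width
    r_far = d_far / dist(far, 5, 6)
    ```

    Absolute image distances shrink with camera distance, and even the ratio changes slightly because the landmarks sit at different depths, so ratios are only approximately pose-invariant.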

  7. Multigrid optimal mass transport for image registration and morphing

    NASA Astrophysics Data System (ADS)

    Rehman, Tauseef ur; Tannenbaum, Allen

    2007-02-01

    In this paper we present a computationally efficient optimal mass transport algorithm. This method is based on the Monge-Kantorovich theory and is used for computing elastic registration and warping maps in image registration and morphing applications. It is a parameter-free method which utilizes all of the grayscale data in an image pair in a symmetric fashion, and no landmarks need to be specified for correspondence. In our work, we demonstrate a significant improvement in computation time compared to the method originally proposed by Haker et al. [1]. The original algorithm was based on a gradient descent method for removing the curl from an initial mass-preserving map regarded as a 2D vector field. This involves inverting the Laplacian in each iteration, which is now computed using a full multigrid technique, resulting in an improvement in computational time by a factor of two. Greater improvement is achieved by decimating the curl in a multi-resolution framework. The algorithm was applied to 2D short-axis cardiac MRI images and brain MRI images for testing and comparison.
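
    The per-iteration bottleneck mentioned above is inverting the Laplacian. A full multigrid solver is too long to sketch here, but on a periodic grid the same inversion can be done spectrally, which makes the operation concrete. This is a stand-in for the paper's multigrid step, not its implementation:

    ```python
    import numpy as np

    def solve_poisson_periodic(f):
        """Solve lap(u) = f on a periodic grid by diagonalizing the
        5-point Laplacian with the 2D FFT (stand-in for multigrid)."""
        n, m = f.shape
        kx = 2 * np.pi * np.fft.fftfreq(n)
        ky = 2 * np.pi * np.fft.fftfreq(m)
        # Eigenvalues of the 5-point discrete Laplacian.
        lam = (2 * np.cos(kx)[:, None] - 2) + (2 * np.cos(ky)[None, :] - 2)
        lam[0, 0] = 1.0            # avoid 0/0; the mean mode is fixed below
        u_hat = np.fft.fft2(f) / lam
        u_hat[0, 0] = 0.0          # pick the zero-mean solution
        return np.real(np.fft.ifft2(u_hat))

    def laplacian(u):
        return (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
                np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4 * u)

    rng = np.random.default_rng(1)
    f = rng.standard_normal((32, 32))
    f -= f.mean()                  # compatibility condition for periodic Poisson
    u = solve_poisson_periodic(f)
    residual = np.abs(laplacian(u) - f).max()
    ```

    Both the FFT solve and full multigrid cost roughly O(N log N) or O(N) per inversion, versus far slower naive iteration, which is where the reported factor-of-two speed-up comes from.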

  8. Deep Adaptive Log-Demons: Diffeomorphic Image Registration with Very Large Deformations.

    PubMed

    Zhao, Liya; Jia, Kebin

    2015-01-01

    This paper proposes a new framework for capturing large and complex deformations in image registration. Traditionally, this challenging problem relies first on a preregistration, usually an affine matrix containing rotation, scale, and translation, and afterwards on a nonrigid transformation. In the preregistration step, the directly calculated affine matrix, obtained from limited pixel information, may misregister when large biases exist, misleading the subsequent registration. To address this problem, for two-dimensional (2D) images, the two-layer deep adaptive registration framework proposed in this paper first accurately classifies the rotation parameter through multilayer convolutional neural networks (CNNs) and then identifies the scale and translation parameters separately. For three-dimensional (3D) images, the affine matrix is located through feature correspondences by triplanar 2D CNNs. Deformation removal is then done iteratively through preregistration and demons registration. Compared with a state-of-the-art registration framework, our method gains more accurate registration results on both synthetic and real datasets. In addition, principal component analysis (PCA) is combined with correlation measures such as Pearson and Spearman to form new similarity standards in 2D and 3D registration. Experimental results also show faster convergence speed. PMID:26120356

  9. Deep Adaptive Log-Demons: Diffeomorphic Image Registration with Very Large Deformations

    PubMed Central

    Zhao, Liya; Jia, Kebin

    2015-01-01

    This paper proposes a new framework for capturing large and complex deformations in image registration. Traditionally, this challenging problem relies first on a preregistration, usually an affine matrix containing rotation, scale, and translation, and afterwards on a nonrigid transformation. In the preregistration step, the directly calculated affine matrix, obtained from limited pixel information, may misregister when large biases exist, misleading the subsequent registration. To address this problem, for two-dimensional (2D) images, the two-layer deep adaptive registration framework proposed in this paper first accurately classifies the rotation parameter through multilayer convolutional neural networks (CNNs) and then identifies the scale and translation parameters separately. For three-dimensional (3D) images, the affine matrix is located through feature correspondences by triplanar 2D CNNs. Deformation removal is then done iteratively through preregistration and demons registration. Compared with a state-of-the-art registration framework, our method gains more accurate registration results on both synthetic and real datasets. In addition, principal component analysis (PCA) is combined with correlation measures such as Pearson and Spearman to form new similarity standards in 2D and 3D registration. Experimental results also show faster convergence speed. PMID:26120356

  10. Video Image Stabilization and Registration

    NASA Technical Reports Server (NTRS)

    Hathaway, David H. (Inventor); Meyer, Paul J. (Inventor)

    2002-01-01

    A method of stabilizing and registering a video image in multiple video fields of a video sequence provides accurate determination of the image change in magnification, rotation and translation between video fields, so that the video fields may be accurately corrected for these changes in the image in the video sequence. In a described embodiment, a key area of a key video field is selected which contains an image which it is desired to stabilize in a video sequence. The key area is subdivided into nested pixel blocks and the translation of each of the pixel blocks from the key video field to a new video field is determined as a precursor to determining change in magnification, rotation and translation of the image from the key video field to the new video field.

  11. Video Image Stabilization and Registration

    NASA Technical Reports Server (NTRS)

    Hathaway, David H. (Inventor); Meyer, Paul J. (Inventor)

    2003-01-01

    A method of stabilizing and registering a video image in multiple video fields of a video sequence provides accurate determination of the image change in magnification, rotation and translation between video fields, so that the video fields may be accurately corrected for these changes in the image in the video sequence. In a described embodiment, a key area of a key video field is selected which contains an image which it is desired to stabilize in a video sequence. The key area is subdivided into nested pixel blocks and the translation of each of the pixel blocks from the key video field to a new video field is determined as a precursor to determining change in magnification, rotation and translation of the image from the key video field to the new video field.
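
    The per-block translation estimation in the two patent abstracts above can be sketched as an exhaustive normalized cross-correlation search. This is illustrative only; the patented method works on nested pixel blocks and also recovers magnification and rotation from the per-block translations:

    ```python
    import numpy as np

    def block_translation(key_block, new_field, top, left, radius=4):
        """Integer translation of a key-frame block within a new video field,
        found by exhaustive normalized-cross-correlation search."""
        bh, bw = key_block.shape
        kb = key_block - key_block.mean()
        best, best_score = (0, 0), -np.inf
        for dy in range(-radius, radius + 1):
            for dx in range(-radius, radius + 1):
                cand = new_field[top + dy:top + dy + bh, left + dx:left + dx + bw]
                c = cand - cand.mean()
                score = (kb * c).sum() / (np.linalg.norm(kb) * np.linalg.norm(c) + 1e-12)
                if score > best_score:
                    best_score, best = score, (dy, dx)
        return best

    rng = np.random.default_rng(2)
    key = rng.random((64, 64))
    shifted = np.roll(key, shift=(3, -2), axis=(0, 1))  # simulated camera translation
    dy, dx = block_translation(key[20:36, 20:36], shifted, 20, 20)
    ```

    Repeating this for several nested blocks gives a field of translations from which a global magnification, rotation and translation can be fitted.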

  12. Iterative 2D deconvolution of portal imaging radiographs.

    PubMed

    Looe, Hui Khee; Harder, Dietrich; Willborn, Kay C; Poppe, Björn

    2011-01-01

    Portal imaging has become an integral part of modern radiotherapy techniques such as IMRT and IGRT. It serves to verify the accuracy of day-to-day patient positioning, a prerequisite for treatment success. However, image blurring attributable to different physical and geometrical effects, analysed in this work, impairs the quality of the portal images, and anatomical structures cannot always be clearly outlined. A 2D iterative deconvolution method was developed to reduce this image blurring. The associated data basis was generated by separate measurement of the components contributing to image blurring. Secondary electron transport and pixel size within the EPID, as well as geometrical penumbra due to the finite photon source size, were found to be the major contributors, whereas photon scattering in the patient is less important. The underlying line-spread kernels of these components were shown to be Lorentz functions. This implies that each of these convolution kernels, and also their combination, can be characterized by a single quantity, the width parameter λ of the Lorentz function. The overall resulting λ values were 0.5 mm for 6 MV and 0.65 mm for 15 MV. Portal images were deconvolved using the point-spread function derived from the Lorentz function together with the experimentally determined λ values. The improvement of the portal images was quantified in terms of the modulation transfer function of a bar pattern. The resulting clinical images show a clear enhancement of sharpness and contrast.
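
    The blur model can be sketched directly: build a Lorentzian line-spread function with the reported λ, blur a bar pattern, and deconvolve. For brevity this sketch uses a one-step Wiener filter instead of the paper's 2D iterative scheme, and assumes a separable 2D kernel and a 0.25 mm pixel grid, both of which are simplifications:

    ```python
    import numpy as np

    def lorentz_lsf(x, lam):
        """Lorentzian line-spread function with width parameter lam (mm)."""
        return (lam / np.pi) / (x ** 2 + lam ** 2)

    ax = np.arange(-32, 32) * 0.25                 # 0.25 mm pixels (assumed)
    psf = np.outer(lorentz_lsf(ax, 0.5), lorentz_lsf(ax, 0.5))  # lam = 0.5 mm (6 MV)
    psf /= psf.sum()

    # Blur a bar pattern, then deconvolve with a regularized Wiener filter.
    bars = np.zeros((64, 64))
    bars[:, ::8] = 1.0
    otf = np.fft.fft2(np.fft.ifftshift(psf))
    blurred = np.real(np.fft.ifft2(np.fft.fft2(bars) * otf))
    wiener = np.conj(otf) / (np.abs(otf) ** 2 + 1e-3)
    restored = np.real(np.fft.ifft2(np.fft.fft2(blurred) * wiener))

    err_blur = np.abs(blurred - bars).mean()       # modulation lost to blurring
    err_rest = np.abs(restored - bars).mean()      # modulation after restoration
    ```

    The restored bar pattern recovers more of the original modulation than the blurred one retains, which is exactly the MTF improvement the authors quantify.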

  13. 2D optoacoustic array for high resolution imaging

    NASA Astrophysics Data System (ADS)

    Ashkenazi, S.; Witte, R. S.; Kim, K.; Huang, S.-W.; Hou, Y.; O'Donnell, M.

    2006-02-01

    Optoacoustic detection denotes the detection of acoustic signals by optical devices. Recent advances in fabrication techniques and the availability of high-power tunable laser sources have greatly accelerated the development of efficient optoacoustic detectors. The unique advantages of optoacoustic technology are of special interest in applications that require high resolution imaging. For these applications optoacoustic technology enables high frequency transducer arrays with element sizes on the order of 10 μm. Laser-generated ultrasound (the photoacoustic effect) has been studied since the early observations by A. G. Bell (1880) of audible sound generated by light absorption. Modern studies have demonstrated the use of the photoacoustic effect to form a versatile imaging modality for medical and biological applications. A short laser pulse illuminates a tissue, creating rapid thermal expansion and acoustic emission. Detection of the resulting acoustic field by an array enables imaging of the tissue's optical absorption using ultrasonic imaging methods. We present an integrated imaging system that employs photoacoustic sound generation and 2D optoacoustic reception. The optoacoustic receiver consists of a thin polymer Fabry-Perot etalon. The etalon is an optical resonator of high quality factor (Q = 750). The relatively low elasticity modulus of the polymer and the high Q-factor of the resonator combine to yield high ultrasound sensitivity. The etalon thickness (10 μm) was optimized for wide bandwidth (typically above 50 MHz). An optical scanning and focusing system is used to create a large-aperture, high-density 2D ultrasonic receiver array. High resolution 3D images of phantom targets and biological tissue samples were obtained.

  14. Robust patella motion tracking using intensity-based 2D-3D registration on dynamic bi-plane fluoroscopy: towards quantitative assessment in MPFL reconstruction surgery

    NASA Astrophysics Data System (ADS)

    Otake, Yoshito; Esnault, Matthieu; Grupp, Robert; Kosugi, Shinichi; Sato, Yoshinobu

    2016-03-01

    The determination of in vivo motion of multiple bones using dynamic fluoroscopic images and computed tomography (CT) is useful for post-operative assessment of orthopaedic surgeries such as medial patellofemoral ligament reconstruction. We propose a robust method to measure the 3D motion of multiple rigid objects with high accuracy using a series of bi-plane fluoroscopic images and a multi-resolution, intensity-based, 2D-3D registration. A Covariance Matrix Adaptation Evolution Strategy (CMA-ES) optimizer was used with a gradient correlation similarity metric. Four approaches to registering three rigid objects (femur, tibia-fibula and patella) were implemented: 1) an individual bone approach registering one bone at a time, each with optimization of a six degrees of freedom (6DOF) parameter; 2) a sequential approach registering one bone at a time but using the previous bone results as the background in DRR generation; 3) a simultaneous approach registering all the bones together (18DOF); and 4) a combination of the sequential and simultaneous approaches. These approaches were compared in experiments using simulated images generated from the CT of a healthy volunteer and measured fluoroscopic images. Over the 120 simulated frames of motion, the simultaneous approach showed improved registration accuracy compared to the individual approach, with less than 0.68 mm root-mean-square error (RMSE) for translation and less than 1.12° RMSE for rotation. A robustness evaluation conducted with 45 trials of randomly perturbed initializations showed that the sequential approach improved robustness significantly for patella registration (74% success rate versus 34% for the individual bone approach); femur and tibia-fibula registration had a 100% success rate with each approach.
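
    The gradient correlation metric used above is commonly defined as the mean of the normalized cross-correlations of the horizontal and vertical image gradients. A minimal sketch (not the authors' implementation; the images here are random stand-ins for a DRR/fluoroscopy pair):

    ```python
    import numpy as np

    def gradient_correlation(a, b):
        """Mean Pearson correlation of the axis-0 and axis-1 gradients of two
        images; 1.0 for identical images, lower under misalignment."""
        def ncc(x, y):
            x = x - x.mean()
            y = y - y.mean()
            return (x * y).sum() / (np.linalg.norm(x) * np.linalg.norm(y) + 1e-12)
        gay, gax = np.gradient(a)
        gby, gbx = np.gradient(b)
        return 0.5 * (ncc(gax, gbx) + ncc(gay, gby))

    rng = np.random.default_rng(3)
    drr = rng.random((48, 48))                        # stand-in for a rendered DRR
    same = gradient_correlation(drr, drr)             # perfectly aligned
    off = gradient_correlation(drr, np.roll(drr, 5, axis=0))  # misaligned copy
    ```

    Because it compares gradients rather than raw intensities, the metric tolerates global intensity offsets between the DRR and the fluoroscopic image; CMA-ES then maximizes it over the 6DOF (or 18DOF) pose parameters.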

  15. 2-D Drift Velocities from the IMAGE EUV Plasmaspheric Imager

    NASA Technical Reports Server (NTRS)

    Gallagher, D.; Adrian, M.

    2007-01-01

    The IMAGE Mission extreme ultraviolet imager (EUV) observes He+ plasmaspheric ions throughout the inner magnetosphere. Limited by ionizing radiation and viewing close to the Sun, images of the He+ distribution are available every 10 minutes for many hours as the spacecraft passes through apogee in its highly elliptical orbit. As a consistent constituent at about 15%, He+ is an excellent surrogate for monitoring all of the processes that control the dynamics of plasmaspheric plasma. In particular, the motion of He+ transverse to the ambient magnetic field is a direct indication of convective electric fields. The analysis of boundary motions has already achieved new insights into the electrodynamic coupling processes taking place between energetic magnetospheric plasmas and the ionosphere. Yet to be fulfilled, however, is the original promise that global EUV images of the plasmasphere might yield two-dimensional pictures of meso-scale to macro-scale electric fields in the inner magnetosphere. This work details the technique and initial application of an IMAGE EUV analysis that appears capable of following thermal plasma motion on a global basis.

  16. 2-D Drift Velocities from the IMAGE EUV Plasmaspheric Imager

    NASA Technical Reports Server (NTRS)

    Gallagher, D. L.

    2006-01-01

    The IMAGE Mission extreme ultraviolet imager (EUV) observes He(+) plasmaspheric ions throughout the inner magnetosphere. Limited by ionizing radiation and viewing close to the Sun, images of the He(+) distribution are available every 10 minutes for many hours as the spacecraft passes through apogee in its highly elliptical orbit. As a consistent constituent at about 15%, He(+) is an excellent surrogate for monitoring all of the processes that control the dynamics of plasmaspheric plasma. In particular, the motion of He(+) transverse to the ambient magnetic field is a direct indication of convective electric fields. The analysis of boundary motions has already achieved new insights into the electrodynamic coupling processes taking place between energetic magnetospheric plasmas and the ionosphere. Yet to be fulfilled, however, is the original promise that global EUV images of the plasmasphere might yield two-dimensional pictures of meso-scale to macro-scale electric fields in the inner magnetosphere. This work details the technique and initial application of an IMAGE EUV analysis that appears capable of following thermal plasma motion on a global basis.

  17. Video Image Stabilization and Registration (VISAR) Software

    NASA Technical Reports Server (NTRS)

    1999-01-01

    Two scientists at NASA Marshall Space Flight Center, atmospheric scientist Paul Meyer (left) and solar physicist Dr. David Hathaway, have developed promising new software, called Video Image Stabilization and Registration (VISAR), that may help law enforcement agencies to catch criminals by improving the quality of video recorded at crime scenes. VISAR stabilizes camera motion in the horizontal and vertical as well as rotation and zoom effects; produces clearer images of moving objects; smoothes jagged edges; enhances still images; and reduces video noise or snow. VISAR could also have applications in medical and meteorological imaging. It could steady images of ultrasounds, which are infamous for their grainy, blurred quality. It would be especially useful in tornado studies, tracking whirling objects and helping to determine wind speed. This image shows two scientists reviewing an enhanced video image of a license plate taken from a moving automobile.

  18. Fundus image registration for vestibularis research

    NASA Astrophysics Data System (ADS)

    Ithapu, Vamsi K.; Fritsche, Armin; Oppelt, Ariane; Westhofen, Martin; Deserno, Thomas M.

    2010-03-01

    In research on vestibular nerve disorders, fundus images of both left and right eyes are acquired systematically to precisely assess the rotation of the eye ball that is induced by the rotation of the entire head. The measurement is still carried out manually. Although various methods have been proposed for medical image registration, robust detection of rotation, especially in images of varied quality in terms of illumination, aberrations, blur and noise, is still challenging. This paper evaluates registration algorithms operating on different levels of semantics: (i) data-based, using the Fourier transform and log-polar maps; (ii) point-based, using the scale-invariant feature transform (SIFT); (iii) edge-based, using Canny edge maps; (iv) object-based, using matched filters for vessel detection; (v) scene-based, detecting papilla and macula automatically; and (vi) manually, by two independent medical experts. For evaluation, a database of 22 patients is used, where each of the left and right eye images is captured in upright head position and in lateral tilt of ±20°. For 66 pairs of images (132 in total), the results are compared with ground truth, and the performance measures are tabulated. A best correctness of 89.3% was obtained using the pixel-based method and allowing 2.5° deviation from the manual measures. However, the evaluation shows that for applications in computer-aided diagnosis involving a large set of images of varied quality, as in vestibularis research, registration methods based on a single level of semantics are not sufficiently robust. A multi-level semantics approach will improve the results, since failures occur on different images.
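
    The data-based method above builds on phase correlation; a log-polar resampling of the spectrum then turns rotation into a translation that the same machinery can recover. The translation part is compact enough to sketch (illustrative only, with a noiseless circular shift as the test image):

    ```python
    import numpy as np

    def phase_correlation(a, b):
        """Estimate the integer translation taking b to a via phase correlation."""
        fa, fb = np.fft.fft2(a), np.fft.fft2(b)
        cross = fa * np.conj(fb)
        cross /= np.abs(cross) + 1e-12            # keep phase only
        corr = np.real(np.fft.ifft2(cross))
        dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
        # Map peaks in the upper half of each axis to negative shifts.
        if dy > a.shape[0] // 2:
            dy -= a.shape[0]
        if dx > a.shape[1] // 2:
            dx -= a.shape[1]
        return dy, dx

    rng = np.random.default_rng(4)
    fundus = rng.random((128, 128))               # stand-in for a fundus image
    moved = np.roll(fundus, shift=(7, -11), axis=(0, 1))
    dy, dx = phase_correlation(moved, fundus)
    ```

    On real fundus images the correlation peak broadens with blur and illumination change, which is why the paper argues a single level of semantics is not robust on its own.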

  19. Automated landmark-guided deformable image registration

    NASA Astrophysics Data System (ADS)

    Kearney, Vasant; Chen, Susie; Gu, Xuejun; Chiu, Tsuicheng; Liu, Honghuan; Jiang, Lan; Wang, Jing; Yordy, John; Nedzi, Lucien; Mao, Weihua

    2015-01-01

    The purpose of this work is to develop an automated landmark-guided deformable image registration (LDIR) algorithm between the planning CT and daily cone-beam CT (CBCT) with low image quality. This method uses an automated landmark generation algorithm in conjunction with a local small-volume gradient matching search engine to map corresponding landmarks between the CBCT and the planning CT. The landmarks act as stabilizing control points in the subsequent Demons deformable image registration. LDIR is implemented on graphics processing units (GPUs) for parallel computation to achieve ultra-fast calculation. The accuracy of the LDIR algorithm has been evaluated on a synthetic case in the presence of different noise levels and on data of six head and neck cancer patients. The results indicate that LDIR performed better than rigid registration, Demons, and intensity-corrected Demons for all similarity metrics used. In conclusion, LDIR achieves high accuracy in the presence of multimodality intensity mismatch and CBCT noise contamination, while simultaneously preserving high computational efficiency.
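
    The Demons step that the landmarks stabilize uses, in its classic (Thirion) form, the force field u = (m − f)·∇f / (|∇f|² + (m − f)²). A 2D sketch of one such update on synthetic Gaussian blobs (this is the textbook force, not the authors' GPU code):

    ```python
    import numpy as np

    def demons_update(fixed, moving):
        """One classic Demons force field: per-pixel displacement pulling
        the moving image toward the fixed image."""
        diff = moving - fixed
        gy, gx = np.gradient(fixed)               # axis-0 then axis-1 gradient
        denom = gx ** 2 + gy ** 2 + diff ** 2
        # Zero the force where the denominator is numerically negligible.
        uy = np.where(denom > 1e-9, diff * gy / denom, 0.0)
        ux = np.where(denom > 1e-9, diff * gx / denom, 0.0)
        return uy, ux

    y, x = np.mgrid[0:64, 0:64]
    fixed = np.exp(-((x - 32.0) ** 2 + (y - 32.0) ** 2) / 50.0)
    moving = np.exp(-((x - 35.0) ** 2 + (y - 32.0) ** 2) / 50.0)  # blob shifted in x
    uy, ux = demons_update(fixed, moving)
    ```

    Each per-component force is bounded by 1/2 in magnitude by the AM-GM inequality; in full Demons the field is smoothed (e.g. Gaussian-filtered) and composed over many iterations, with LDIR's landmark control points constraining the result.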

  20. Automated landmark-guided deformable image registration.

    PubMed

    Kearney, Vasant; Chen, Susie; Gu, Xuejun; Chiu, Tsuicheng; Liu, Honghuan; Jiang, Lan; Wang, Jing; Yordy, John; Nedzi, Lucien; Mao, Weihua

    2015-01-01

    The purpose of this work is to develop an automated landmark-guided deformable image registration (LDIR) algorithm between the planning CT and daily cone-beam CT (CBCT) with low image quality. This method uses an automated landmark generation algorithm in conjunction with a local small volume gradient matching search engine to map corresponding landmarks between the CBCT and the planning CT. The landmarks act as stabilizing control points in the following Demons deformable image registration. LDIR is implemented on graphics processing units (GPUs) for parallel computation to achieve ultra fast calculation. The accuracy of the LDIR algorithm has been evaluated on a synthetic case in the presence of different noise levels and data of six head and neck cancer patients. The results indicate that LDIR performed better than rigid registration, Demons, and intensity corrected Demons for all similarity metrics used. In conclusion, LDIR achieves high accuracy in the presence of multimodality intensity mismatch and CBCT noise contamination, while simultaneously preserving high computational efficiency. PMID:25479095

  1. Surface driven biomechanical breast image registration

    NASA Astrophysics Data System (ADS)

    Eiben, Björn; Vavourakis, Vasileios; Hipwell, John H.; Kabus, Sven; Lorenz, Cristian; Buelow, Thomas; Williams, Norman R.; Keshtgar, M.; Hawkes, David J.

    2016-03-01

    Biomechanical modelling enables large deformation simulations of breast tissues under different loading conditions to be performed. Such simulations can be utilised to transform prone Magnetic Resonance (MR) images into a different patient position, such as upright or supine. We present a novel integration of biomechanical modelling with a surface registration algorithm which optimises the unknown material parameters of a biomechanical model and performs a subsequent regularised surface alignment. This allows deformations induced by effects other than gravity, such as those due to contact of the breast and MR coil, to be reversed. Correction displacements are applied to the biomechanical model enabling transformation of the original pre-surgical images to the corresponding target position. The algorithm is evaluated for the prone-to-supine case using prone MR images and the skin outline of supine Computed Tomography (CT) scans for three patients. A mean target registration error (TRE) of 10.9 mm for internal structures is achieved. For the prone-to-upright scenario, an optical 3D surface scan of one patient is used as a registration target; the nipple distances after alignment between the transformed MRI and the surface are 10.1 mm and 6.3 mm respectively.

  2. Video Image Stabilization and Registration (VISAR) Software

    NASA Technical Reports Server (NTRS)

    1999-01-01

    Two scientists at NASA's Marshall Space Flight Center, atmospheric scientist Paul Meyer and solar physicist Dr. David Hathaway, developed promising new software, called Video Image Stabilization and Registration (VISAR), which is illustrated in this Quick Time movie. VISAR is a computer algorithm that stabilizes camera motion in the horizontal and vertical as well as rotation and zoom effects producing clearer images of moving objects, smoothes jagged edges, enhances still images, and reduces video noise or snow. It could steady images of ultrasounds, which are infamous for their grainy, blurred quality. VISAR could also have applications in law enforcement, medical, and meteorological imaging. The software can be used for defense applications by improving reconnaissance video imagery made by military vehicles, aircraft, and ships traveling in harsh, rugged environments.

  3. Video Image Stabilization and Registration (VISAR) Software

    NASA Technical Reports Server (NTRS)

    1999-01-01

    Two scientists at NASA's Marshall Space Flight Center, atmospheric scientist Paul Meyer and solar physicist Dr. David Hathaway, developed promising new software, called Video Image Stabilization and Registration (VISAR), which is illustrated in this Quick Time movie. VISAR is a computer algorithm that stabilizes camera motion in the horizontal and vertical as well as rotation and zoom effects producing clearer images of moving objects, smoothes jagged edges, enhances still images, and reduces video noise or snow. It could steady images of ultrasounds, which are infamous for their grainy, blurred quality. VISAR could also have applications in law enforcement, medical, and meteorological imaging. The software can be used for defense applications by improving reconnaissance video imagery made by military vehicles, aircraft, and ships traveling in harsh, rugged environments.

  4. Statistically deformable 2D/3D registration for accurate determination of post-operative cup orientation from single standard X-ray radiograph.

    PubMed

    Zheng, Guoyan

    2009-01-01

    The widely used procedure of evaluating cup orientation following total hip arthroplasty using a single standard anteroposterior (AP) radiograph is known to be inaccurate, largely due to the wide variability in individual pelvic orientation relative to the X-ray plate. 2D/3D rigid image registration methods have been introduced for an accurate determination of the post-operative cup alignment with respect to an anatomical reference extracted from the CT data. Although encouraging results have been reported, their extensive usage in clinical routine is still limited. This may be explained by their requirement of a CAD model of the prosthesis, which is often difficult to obtain from the manufacturer for proprietary reasons, and by their requirement of a pre-operative CT scan, which is not available for most retrospective studies. To address these issues, we developed and validated a statistically deformable 2D/3D registration approach for accurate determination of post-operative cup orientation. No CAD model or pre-operative CT data is required. Quantitative and qualitative results evaluated on cadaveric and clinical datasets are given, which indicate the validity of the approach. PMID:20426064

  5. Automated 3D-2D registration of X-ray microcomputed tomography with histological sections for dental implants in bone using chamfer matching and simulated annealing.

    PubMed

    Becker, Kathrin; Stauber, Martin; Schwarz, Frank; Beißbarth, Tim

    2015-09-01

    We propose a novel 3D-2D registration approach for micro-computed tomography (μCT) and histology (HI), constructed for dental implant biopsies, that finds the position and normal vector of the oblique slice from μCT that corresponds to HI. During image pre-processing, the implants and the bone tissue are segmented using a combination of thresholding, morphological filters and component labeling. After this, chamfer matching is employed to register the implant edges, and fine registration of the bone tissues is achieved using simulated annealing. The method was tested on n=10 biopsies, obtained at 20 weeks after non-submerged healing in the canine mandible. The specimens were scanned with μCT 100 and processed for hard tissue sectioning. After registration, we assessed the agreement of bone-to-implant contact (BIC) using automated and manual measurements. Statistical analysis was conducted to test the agreement of the BIC measurements in the registered samples. Registration was successful for all specimens and agreement of the respective binary images was high (median: 0.90, 1st-3rd quartile: 0.89-0.91). Direct comparison showed that automated (median: 0.82, 1st-3rd quartile: 0.75-0.85) and manual (median: 0.61, 1st-3rd quartile: 0.52-0.67) BIC measures from μCT were significantly positively correlated with HI (median: 0.65, 1st-3rd quartile: 0.59-0.72; manual: R²=0.87, automated: R²=0.75, p<0.001). These results show that the method yields promising results and that μCT may become a valid alternative for assessing osseointegration in three dimensions.
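
    Chamfer matching scores a candidate pose by summing, over the transformed edge points, a precomputed distance transform of the fixed edge image. A minimal sketch of that scoring step (requires SciPy; the square "implant outline" and offsets are invented; the paper wraps such a score in a simulated-annealing search):

    ```python
    import numpy as np
    from scipy import ndimage

    def chamfer_score(fixed_edges, moving_points):
        """Mean distance from candidate edge points to the nearest fixed edge,
        read off a Euclidean distance transform; lower is better."""
        # Distance from every pixel to the nearest edge pixel of the fixed image.
        dist = ndimage.distance_transform_edt(~fixed_edges)
        rows, cols = moving_points.T
        return dist[rows, cols].mean()

    # Fixed image: edges of a square implant outline.
    fixed = np.zeros((64, 64), dtype=bool)
    fixed[20, 20:40] = fixed[40, 20:40] = True
    fixed[20:41, 20] = fixed[20:41, 40] = True

    edge_pts = np.argwhere(fixed)
    aligned = chamfer_score(fixed, edge_pts)            # perfect overlap
    shifted = chamfer_score(fixed, edge_pts + [3, 0])   # 3-pixel vertical offset
    ```

    Because the distance transform is computed once, each candidate pose costs only a lookup per edge point, which is what makes an annealing search over pose parameters affordable.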

  6. Direct Image-To Registration Using Mobile Sensor Data

    NASA Astrophysics Data System (ADS)

    Kehl, C.; Buckley, S. J.; Gawthorpe, R. L.; Viola, I.; Howell, J. A.

    2016-06-01

    Adding supplementary texture and 2D image-based annotations to 3D surface models is a useful next step for domain specialists making use of photorealistic products of laser scanning and photogrammetry. This requires a registration between the new camera imagery and the model geometry to be solved, which can be a time-consuming task without appropriate automation. The increasing availability of photorealistic models, coupled with the proliferation of mobile devices, gives users the possibility to complement their models in real time. Modern mobile devices deliver digital photographs of increasing quality, as well as on-board sensor data, which can be used as input for practical and automatic camera registration procedures. Their familiar user interface also improves manual registration procedures. This paper introduces a fully automatic pose estimation method using the on-board sensor data for initial exterior orientation, and feature matching between an acquired photograph and a synthesised rendering of the orientated 3D scene as input for fine alignment. The paper also introduces a user-friendly manual camera registration and pose estimation interface for mobile devices, based on existing surface geometry and numerical optimisation methods. The article further assesses the automatic algorithm's accuracy compared to traditional methods, and the impact of computational and environmental parameters. Experiments using urban and geological case studies show a significant sensitivity of the automatic procedure to the quality of the initial mobile sensor values. Changing natural lighting conditions remain a challenge for automatic pose estimation techniques, although progress is presented here. Finally, the automatically registered mobile images are used as the basis for adding user annotations to the input textured model.

  7. Respiratory motion compensation for simultaneous PET/MR based on a 3D-2D registration of strongly undersampled radial MR data: a simulation study

    NASA Astrophysics Data System (ADS)

    Rank, Christopher M.; Heußer, Thorsten; Flach, Barbara; Brehm, Marcus; Kachelrieß, Marc

    2015-03-01

    We propose a new method for PET/MR respiratory motion compensation, which is based on a 3D-2D registration of strongly undersampled MR data and a) runs in parallel with the PET acquisition, b) can be interlaced with clinical MR sequences, and c) requires less than one minute of the total MR acquisition time per bed position. In our simulation study, we applied a 3D encoded radial stack-of-stars sampling scheme with 160 radial spokes per slice and an acquisition time of 38 s. Gated 4D MR images were reconstructed using a 4D iterative reconstruction algorithm. Based on these images, motion vector fields were estimated using our newly-developed 3D-2D registration framework. A 4D PET volume of a patient with eight hot lesions in the lungs and upper abdomen was simulated and MoCo 4D PET images were reconstructed based on the motion vector fields derived from MR. For evaluation, average SUVmean values of the artificial lesions were determined for a 3D, a gated 4D, a MoCo 4D and a reference (with ten-fold measurement time) gated 4D reconstruction. Compared to the reference, 3D reconstructions yielded an underestimation of SUVmean values due to motion blurring. In contrast, gated 4D reconstructions showed the highest variation of SUVmean due to low statistics. MoCo 4D reconstructions were only slightly affected by these two sources of uncertainty resulting in a significant visual and quantitative improvement in terms of SUVmean values. Whereas temporal resolution was comparable to the gated 4D images, signal-to-noise ratio and contrast-to-noise ratio were close to the 3D reconstructions.

  8. Mono- and multimodal registration of optical breast images

    NASA Astrophysics Data System (ADS)

    Pearlman, Paul C.; Adams, Arthur; Elias, Sjoerd G.; Mali, Willem P. Th. M.; Viergever, Max A.; Pluim, Josien P. W.

    2012-08-01

    Optical breast imaging offers the possibility of noninvasive, low cost, and high sensitivity imaging of breast cancers. Poor spatial resolution and a lack of anatomical landmarks in optical images of the breast make interpretation difficult and motivate registration and fusion of these data with subsequent optical images and other breast imaging modalities. Methods used for registration and fusion of optical breast images are reviewed. Imaging concerns relevant to the registration problem are first highlighted, followed by a focus on both monomodal and multimodal registration of optical breast imaging. Where relevant, methods pertaining to other imaging modalities or imaged anatomies are presented. The multimodal registration discussion concerns digital x-ray mammography, ultrasound, magnetic resonance imaging, and positron emission tomography.

  9. Intensity-based 3D/2D registration for percutaneous intervention of major aorto-pulmonary collateral arteries

    NASA Astrophysics Data System (ADS)

    Couet, Julien; Rivest-Henault, David; Miro, Joaquim; Lapierre, Chantal; Duong, Luc; Cheriet, Mohamed

    2012-02-01

    Percutaneous cardiac interventions rely mainly on the experience of the cardiologist to safely navigate inside soft-tissue vessels under X-ray angiography guidance. An additional navigation guidance tool might contribute to improving the reliability and safety of percutaneous procedures. This study focuses on major aorto-pulmonary collateral arteries (MAPCAs), which are pediatric structures. We present a fully automatic intensity-based 3D/2D registration method that accurately maps pre-operatively acquired 3D tomographic vascular data of a newborn patient onto intra-operatively acquired angiograms. The 3D pose of the tomographic dataset is evaluated by comparing the angiograms with simulated X-ray projections, computed from the pre-operative dataset with a proposed splatting-based projection technique. The rigid 3D pose is updated via a transformation matrix usually defined with respect to the C-arm acquisition system reference frame, but it can also be defined with respect to the local reference frame of the projection plane. The optimization of the transformation is driven by two algorithms: first, hill climbing local search, and second, a proposed variant, dense hill climbing. The latter makes the search space denser by considering combinations of the registration parameters instead of neighboring solutions only. Although this study focused on the registration of pediatric structures, the same procedure could be applied to any cardiovascular structures involving CT scans and X-ray angiography. Our preliminary results are promising: an accurate (3D TRE 0.265 +/- 0.647 mm) and robust (99% success rate) bi-plane registration of the aorta and MAPCAs from an initial displacement of up to 20 mm and 20° can be obtained within a reasonable amount of time (13.7 seconds).
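    The hill-climbing local search mentioned above can be sketched generically: perturb one pose parameter at a time, keep any improvement, and shrink the step once no neighbour improves. This is a minimal illustration, not the authors' implementation; the 6-DOF pose cost below is a hypothetical stand-in for an image similarity measure.

```python
import numpy as np

def hill_climb(cost, x0, step=1.0, min_step=0.05, shrink=0.5):
    """Hill-climbing local search: try +/- step on each parameter, keep any
    improvement, and shrink the step once no neighbour improves."""
    x = np.asarray(x0, dtype=float)
    best = cost(x)
    while step > min_step:
        improved = False
        for i in range(x.size):
            for delta in (step, -step):
                cand = x.copy()
                cand[i] += delta
                c = cost(cand)
                if c < best:
                    x, best, improved = cand, c, True
        if not improved:
            step *= shrink
    return x, best

# Toy cost: squared distance of a 6-DOF pose (tx, ty, tz, rx, ry, rz)
# from a known target pose (stand-in for an image similarity measure).
target = np.array([3.0, -2.0, 1.0, 0.5, 0.0, -0.25])
pose, err = hill_climb(lambda p: float(np.sum((p - target) ** 2)), np.zeros(6))
```

    The dense variant described in the abstract would additionally evaluate combinations of parameter perturbations rather than one axis at a time.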

  10. Group-wise feature-based registration of CT and ultrasound images of spine

    NASA Astrophysics Data System (ADS)

    Rasoulian, Abtin; Mousavi, Parvin; Hedjazi Moghari, Mehdi; Foroughi, Pezhman; Abolmaesumi, Purang

    2010-02-01

    Registration of pre-operative CT and freehand intra-operative ultrasound of the lumbar spine could aid surgeons in spinal needle injection, a common procedure for pain management. Patients are always in a supine position during the CT scan, and in a prone or sitting position during the intervention. This leads to a difference in the spinal curvature between the two imaging modalities, which means a single rigid registration cannot be used for all of the lumbar vertebrae. In this work, a method for group-wise registration of pre-operative CT and intra-operative freehand 2-D ultrasound images of the lumbar spine is presented. The approach utilizes a point-based registration technique based on the unscented Kalman filter, taking as input segmented vertebrae surfaces in both CT and ultrasound data. Ultrasound images are automatically segmented using a dynamic programming approach, while the CT images are semi-automatically segmented using thresholding. Since the curvature of the spine is different between the pre-operative and the intra-operative data, the registration approach is designed to simultaneously align individual groups of points segmented from each vertebra in the two imaging modalities. A biomechanical model is used to constrain the vertebrae transformation parameters during the registration and to ensure convergence. The mean target registration error achieved for individual vertebrae on five spine phantoms generated from CT data of patients is 2.47 mm, with a standard deviation of 1.14 mm.

  11. Automatic geometric rectification for patient registration in image-guided spinal surgery

    NASA Astrophysics Data System (ADS)

    Cai, Yunliang; Olson, Jonathan D.; Fan, Xiaoyao; Evans, Linton T.; Paulsen, Keith D.; Roberts, David W.; Mirza, Sohail K.; Lollis, S. Scott; Ji, Songbai

    2016-03-01

    Accurate and efficient patient registration is crucial for the success of image guidance in open spinal surgery. Recently, we have established the feasibility of using intraoperative stereovision (iSV) to perform patient registration with respect to preoperative CT (pCT) in human subjects undergoing spinal surgery. Although the desired accuracy was achieved, the method required manual segmentation and placement of feature points on reconstructed iSV and pCT surfaces. In this study, we present an improved registration pipeline that eliminates these manual operations. Specifically, automatic geometric rectification was performed to transform spines extracted from pCT and iSV into pose-invariant shapes using nonlinear principal component analysis (NLPCA). Rectified spines were obtained by projecting the reconstructed 3D surfaces into an anatomically determined orientation. Two-dimensional projection images were then created with image intensity values encoding feature "height" in the dorsal-ventral direction. Registration between the 2D depth maps yielded an initial point-wise correspondence between the 3D surfaces. A refined registration was achieved using an iterative closest point (ICP) algorithm. The technique was successfully applied to two explanted and one live porcine spines. The computational cost of the registration pipeline was less than 1 min, with an average target registration error (TRE) of less than 2.2 mm in the laminae area. These results suggest the potential of the pose-invariant, rectification-based registration technique for clinical application in human subjects in the future.
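    The refinement step above uses the standard iterative closest point (ICP) algorithm. A minimal sketch of point-to-point ICP with the SVD-based (Kabsch) rigid solve is shown below; it is a generic illustration with invented toy data, not the authors' surface-registration pipeline.

```python
import numpy as np

def best_rigid(P, Q):
    """Least-squares rigid transform (R, t) mapping points P onto Q,
    both (N, 3), via the SVD-based (Kabsch) solution."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])  # guard against reflection
    R = Vt.T @ D @ U.T
    return R, cq - R @ cp

def icp(P, Q, iters=10):
    """Basic point-to-point ICP: match each point of P to its nearest
    neighbour in Q, solve for the rigid transform, apply, repeat."""
    for _ in range(iters):
        nn = np.linalg.norm(P[:, None, :] - Q[None, :, :], axis=2).argmin(axis=1)
        R, t = best_rigid(P, Q[nn])
        P = P @ R.T + t
    return P

# Toy data: a 3x3x3 grid of points, misaligned by a small rotation and shift.
grid = np.array([[x, y, z] for x in (0.0, 10.0, 20.0)
                 for y in (0.0, 10.0, 20.0)
                 for z in (0.0, 10.0, 20.0)])
th = np.deg2rad(2.0)
Rz = np.array([[np.cos(th), -np.sin(th), 0.0],
               [np.sin(th),  np.cos(th), 0.0],
               [0.0, 0.0, 1.0]])
moved = grid @ Rz.T + np.array([1.0, -0.5, 0.5])
aligned = icp(moved, grid)
```

    As in the paper, ICP only converges from a good initialization, which is why the 2D depth-map registration supplies the initial correspondence first.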

  12. Registration of heat capacity mapping mission day and night images

    NASA Technical Reports Server (NTRS)

    Watson, K.; Hummer-Miller, S.; Sawatzky, D. L.

    1982-01-01

    Registration of thermal images is complicated by distinctive differences in the appearance of day and night features needed as control in the registration process. These changes are unlike those that occur between Landsat scenes and pose unique constraints. Experimentation with several potentially promising techniques has led to selection of a fairly simple scheme for registration of data from the experimental thermal satellite HCMM using an affine transformation. Two registration examples are provided.
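    An affine transformation such as the one used for the HCMM registration can be estimated from control-point pairs by linear least squares. A minimal sketch; the control points and the underlying rotation/shift below are hypothetical, not values from the HCMM study.

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares 2D affine transform mapping src -> dst.
    src, dst: (N, 2) arrays of control-point coordinates, N >= 3."""
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)
    A = np.hstack([src, np.ones((len(src), 1))])   # (N, 3) homogeneous coords
    M, *_ = np.linalg.lstsq(A, dst, rcond=None)    # solve A @ M ~= dst
    return M.T                                     # 2x3 affine matrix [R | t]

# Hypothetical control points: a 10-degree rotation plus a shift,
# recovered exactly from four point pairs.
theta = np.deg2rad(10.0)
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
src = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0], [100.0, 100.0]])
dst = src @ R.T + np.array([5.0, -3.0])
M = fit_affine(src, dst)
```

    With noisy control points the least-squares fit averages out localization error, which is why more than the minimum three points are normally used.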

  13. A statistical framework for inter-group image registration.

    PubMed

    Liao, Shu; Wu, Guorong; Shen, Dinggang

    2012-10-01

    Groupwise image registration plays an important role in medical image analysis. The principle of groupwise image registration is to align a given set of images to a hidden template space in an iterative manner, without explicitly selecting any individual image as the template. Although many approaches have been proposed to address the groupwise registration problem for a single group of images, little attention has been paid to the registration problem between two or more different groups of images. In this paper, we propose a statistical framework to address the registration problem between two different image groups. The main contributions of this paper lie in the following aspects: (1) We demonstrate that directly registering the group mean images estimated from two different image groups is not sufficient to establish a reliable transformation from one image group to the other. (2) A novel statistical framework is proposed to extract anatomical features from the white matter, gray matter and cerebrospinal fluid tissue maps of all aligned images as morphological signatures for each voxel. The extracted features provide much richer anatomical information than the voxel intensity of the group mean image, and can be integrated with the multi-channel Demons registration algorithm to perform the registration process. (3) The proposed method has been extensively evaluated on two publicly available brain MRI databases, the LONI LPBA40 and the IXI databases, and is compared with a conventional inter-group registration approach that directly performs deformable registration between the group mean images of two image groups. Experimental results show that the proposed method consistently achieves higher registration accuracy than the method under comparison.

  14. Video Image Stabilization and Registration (VISAR) Software

    NASA Technical Reports Server (NTRS)

    1999-01-01

    Two scientists at NASA's Marshall Space Flight Center, atmospheric scientist Paul Meyer and solar physicist Dr. David Hathaway, developed promising new software, called Video Image Stabilization and Registration (VISAR). VISAR may help law enforcement agencies catch criminals by improving the quality of video recorded at crime scenes. In this photograph, the single frame at left, taken at night, was brightened in order to enhance details and reduce noise or snow. To further overcome the video defects in one frame, law enforcement officials can use VISAR software to add information from multiple frames to reveal a person. Images from less than a second of videotape were added together to create the clarified image at right. VISAR stabilizes camera motion in the horizontal and vertical directions, as well as rotation and zoom effects, producing clearer images of moving objects; it smoothes jagged edges, enhances still images, and reduces video noise or snow. VISAR could also have applications in medical and meteorological imaging. It could steady images of ultrasounds, which are infamous for their grainy, blurred quality. The software can also be used for defense applications by improving reconnaissance video imagery made by military vehicles, aircraft, and ships traveling in harsh, rugged environments.
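    The frame-stacking idea behind VISAR, adding information from multiple aligned frames, can be illustrated with plain temporal averaging, which reduces zero-mean noise roughly as one over the square root of the number of frames. This is a toy sketch assuming the frames are already stabilized; VISAR's actual processing is more sophisticated.

```python
import numpy as np

rng = np.random.default_rng(0)
scene = rng.uniform(0.0, 1.0, size=(64, 64))          # underlying static scene
frames = [scene + rng.normal(0.0, 0.3, scene.shape)   # 25 noisy, aligned frames
          for _ in range(25)]

stacked = np.mean(frames, axis=0)   # noise std drops roughly as 1/sqrt(N)

noise_single = float(np.std(frames[0] - scene))
noise_stacked = float(np.std(stacked - scene))
```

    With 25 frames (under a second of video), the residual noise is about a fifth of the single-frame noise, which is the effect visible in the clarified image described above.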

  15. Evaluation of similarity measures for reconstruction-based registration in image-guided radiotherapy and surgery

    SciTech Connect

    Skerl, Darko, E-mail: franjo.pernus@fe.uni-lj.si; Tomazevic, Dejan; Likar, Bostjan; Pernus, Franjo

    2006-07-01

    Purpose: A promising patient positioning technique is based on registering computed tomographic (CT) or magnetic resonance (MR) images to cone-beam CT images (CBCT). The extra radiation dose delivered to the patient can be substantially reduced by using fewer projections. This approach results in lower quality CBCT images. The purpose of this study is to evaluate a number of similarity measures (SMs) suitable for registration of CT or MR images to low-quality CBCTs. Methods and Materials: Using the recently proposed evaluation protocol, we evaluated nine SMs with respect to pretreatment imaging modalities, number of two-dimensional (2D) images used for reconstruction, and number of reconstruction iterations. The image database consisted of 100 X-ray and corresponding CT and MR images of two vertebral columns. Results: Using a higher number of 2D projections or reconstruction iterations results in higher accuracy and slightly lower robustness. The similarity measures that behaved the best also yielded the best registration results. The most appropriate similarity measure was the asymmetric multi-feature mutual information (AMMI). Conclusions: The evaluation protocol proved to be a valuable tool for selecting the best similarity measure for the reconstruction-based registration. The results indicate that accurate and robust CT/CBCT or even MR/CBCT registrations are possible if the AMMI similarity measure is used.
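    Mutual information, the family to which the best-performing AMMI measure belongs, can be illustrated with a plain histogram-based estimate between two images. This is generic single-feature MI, not the asymmetric multi-feature variant evaluated in the study.

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Histogram-based estimate of the mutual information (in nats)
    between the intensities of two equally sized images."""
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = hist / hist.sum()                # joint intensity distribution
    px = pxy.sum(axis=1, keepdims=True)    # marginal of image a
    py = pxy.sum(axis=0, keepdims=True)    # marginal of image b
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

rng = np.random.default_rng(0)
img = rng.uniform(size=(64, 64))
noise = rng.uniform(size=(64, 64))
mi_self = mutual_information(img, img)     # high: intensities fully predictable
mi_indep = mutual_information(img, noise)  # near zero: unrelated images
```

    A registration driven by such a measure maximizes MI over the transformation parameters, on the assumption that correct alignment makes one image's intensities most predictable from the other's.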

  16. Image registration with auto-mapped control volumes

    SciTech Connect

    Schreibmann, Eduard; Xing Lei

    2006-04-15

    Many image registration algorithms rely on the use of homologous control points on the two input image sets to be registered. In reality, the interactive identification of the control points on both images is tedious, difficult, and often a source of error. We propose a two-step algorithm to automatically identify homologous regions that are used as a priori information during the image registration procedure. First, a number of small control volumes having distinct anatomical features are identified on the model image in a somewhat arbitrary fashion. Instead of attempting to find their correspondences in the reference image through user interaction, in the proposed method each of the control regions is mapped to the corresponding part of the reference image by using an automated image registration algorithm. A normalized cross-correlation (NCC) function or mutual information was used as the auto-mapping metric, and a limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) algorithm was employed to optimize the function to find the optimal mapping. For rigid registration, the transformation parameters of the system are obtained by averaging those derived from the individual control volumes. In our deformable calculation, the mapped control volumes are treated as the nodes or control points with known positions on the two images. If the number of control volumes is not enough to cover the whole image to be registered, additional nodes are placed on the model image and then located on the reference image in a manner similar to the conventional B-spline deformable calculation. For deformable registration, the correspondence established by the auto-mapped control volumes provides valuable guidance for the registration calculation and greatly reduces the dimensionality of the problem. The two-step registration was applied to three rigid registration cases (two PET-CT registrations and a brain MRI-CT registration) and one deformable registration of
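    The normalized cross-correlation metric used for auto-mapping the control volumes can be sketched in a few lines. Because the patch means and norms are divided out, NCC is invariant to linear intensity changes between the two images; the patches below are synthetic examples.

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation between two equally sized patches."""
    a = a - a.mean()
    b = b - b.mean()
    return float(np.sum(a * b) / np.sqrt(np.sum(a * a) * np.sum(b * b)))

rng = np.random.default_rng(0)
patch = rng.uniform(size=(32, 32))
s_same = ncc(patch, patch)                        # identical content -> 1
s_linear = ncc(patch, 2.5 * patch + 10.0)         # linear intensity change -> 1
s_noise = ncc(patch, rng.uniform(size=(32, 32)))  # unrelated content -> ~0
```

    In the algorithm above, an optimizer such as L-BFGS would maximize this score over the local transformation parameters of each control volume.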

  17. DTI Image Registration under Probabilistic Fiber Bundles Tractography Learning

    PubMed Central

    Lei, Tao; Fan, Yangyu; Zhang, Xiuwei

    2016-01-01

    Diffusion Tensor Imaging (DTI) image registration is an essential step for diffusion tensor image analysis. Most fiber-bundle-based registration algorithms use deterministic fiber tracking to extract the white matter fiber bundles, which is affected by noise and volume effects. To overcome this problem, we propose a DTI image registration method based on probabilistic fiber bundle tractography learning. Probabilistic tractography can more reliably trace the structure of the nerve fibers. The residual error estimation step in active sample selection learning is improved by modifying the residual error model using a finite sample set. The calculated deformation field is then used to register the DTI images. The results of our proposed registration method are compared with those of six state-of-the-art DTI image registration methods by visual inspection and three quantitative evaluation standards. The experimental results show that our proposed method has good overall performance. PMID:27774455

  18. High-performance computing in image registration

    NASA Astrophysics Data System (ADS)

    Zanin, Michele; Remondino, Fabio; Dalla Mura, Mauro

    2012-10-01

    Thanks to recent technological advances, a large variety of image data is at our disposal with variable geometric, radiometric and temporal resolution. In many applications the processing of such images needs high-performance computing techniques in order to deliver timely responses, e.g. for rapid decisions or real-time actions. Thus, parallel or distributed computing methods, Digital Signal Processor (DSP) architectures, Graphical Processing Unit (GPU) programming and Field-Programmable Gate Array (FPGA) devices have become essential tools for the challenging task of processing large amounts of geo-data. The article focuses on the processing and registration of large datasets of terrestrial and aerial images for 3D reconstruction, diagnostic purposes and monitoring of the environment. For the image alignment procedure, sets of corresponding feature points need to be automatically extracted in order to successively compute the geometric transformation that aligns the data. Feature extraction and matching are among the most computationally demanding operations in the processing chain; thus, a great degree of automation and speed is mandatory. The details of the implemented operations (named LARES), exploiting parallel architectures and the GPU, are thus presented. The innovative aspects of the implementation are (i) its effectiveness on a large variety of unorganized and complex datasets, (ii) its capability to work with high-resolution images and (iii) the speed of the computations. Examples and comparisons with standard CPU processing are also reported and commented upon.

  19. Enhancing retinal images by nonlinear registration

    NASA Astrophysics Data System (ADS)

    Molodij, G.; Ribak, E. N.; Glanc, M.; Chenegros, G.

    2015-05-01

    Being able to image the human retina in high resolution opens a new era in many important fields, such as pharmacological research for retinal diseases and research on human cognition, the nervous system, metabolism and blood flow, to name a few. In this paper, we propose to share the knowledge acquired in the fields of optics and imaging in solar astrophysics in order to improve retinal imaging with the goal of performing medical diagnosis. The main purpose is to assist health care practitioners by enhancing the spatial resolution of retinal images and increasing the level of confidence of abnormal feature detection. We apply a nonlinear registration method using local correlation tracking to increase the field of view and follow structure evolution, using correlation techniques borrowed from solar astronomy. Another purpose is to define tracers of movement after analyzing local correlations, in order to follow the proper motions of an image from one moment to another, such as changes in optical flow that would be of high interest in medical diagnosis.

  20. Color image registration based on quaternion Fourier transformation

    NASA Astrophysics Data System (ADS)

    Wang, Qiang; Wang, Zhengzhi

    2012-05-01

    The traditional Fourier Mellin transform is applied to quaternion algebra in order to investigate quaternion Fourier transformation properties useful for color image registration in frequency domain. Combining with the quaternion phase correlation, we propose a method for color image registration based on the quaternion Fourier transform. The registration method, which processes color image in a holistic manner, is convenient to realign color images differing in translation, rotation, and scaling. Experimental results on different types of color images indicate that the proposed method not only obtains high accuracy in similarity transform in the image plane but also is computationally efficient.
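    The phase correlation underlying the method can be illustrated for an ordinary grayscale image (the quaternion generalization handles the color channels jointly): normalizing the cross-power spectrum to unit magnitude turns the inverse transform into a sharp peak at the translation. The image and shift below are synthetic examples.

```python
import numpy as np

def phase_correlation(a, b):
    """Estimate the integer translation (dy, dx) with b == np.roll(a, (dy, dx))
    from the peak of the normalized cross-power spectrum."""
    cross = np.conj(np.fft.fft2(a)) * np.fft.fft2(b)
    cross /= np.abs(cross) + 1e-12            # keep the phase only
    corr = np.fft.ifft2(cross).real           # a sharp peak at the shift
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    if dy > a.shape[0] // 2:                  # wrap peaks to negative shifts
        dy -= a.shape[0]
    if dx > a.shape[1] // 2:
        dx -= a.shape[1]
    return int(dy), int(dx)

rng = np.random.default_rng(0)
img = rng.uniform(size=(64, 64))
shifted = np.roll(img, shift=(5, -7), axis=(0, 1))
```

    Rotation and scaling can be recovered by the same peak-finding applied after a log-polar (Fourier-Mellin) resampling of the spectra, which is the structure the quaternion method extends to color.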

  1. Registration of clinical volumes to beams-eye-view images for real-time tracking

    SciTech Connect

    Bryant, Jonathan H.; Rottmann, Joerg; Lewis, John H.; Mishra, Pankaj; Berbeco, Ross I.; Keall, Paul J.

    2014-12-15

    Purpose: The authors combine the registration of 2D beam’s eye view (BEV) images and 3D planning computed tomography (CT) images, with relative, markerless tumor tracking to provide automatic absolute tracking of physician defined volumes such as the gross tumor volume (GTV). Methods: During treatment of lung SBRT cases, BEV images were continuously acquired with an electronic portal imaging device (EPID) operating in cine mode. For absolute registration of physician-defined volumes, an intensity based 2D/3D registration to the planning CT was performed using the end-of-exhale (EoE) phase of the four dimensional computed tomography (4DCT). The volume was converted from Hounsfield units into electron density by a calibration curve and digitally reconstructed radiographs (DRRs) were generated for each beam geometry. Using normalized cross correlation between the DRR and an EoE BEV image, the best in-plane rigid transformation was found. The transformation was applied to physician-defined contours in the planning CT, mapping them into the EPID image domain. A robust multiregion method of relative markerless lung tumor tracking quantified deviations from the EoE position. Results: The success of 2D/3D registration was demonstrated at the EoE breathing phase. By registering at this phase and then employing a separate technique for relative tracking, the authors are able to successfully track target volumes in the BEV images throughout the entire treatment delivery. Conclusions: Through the combination of EPID/4DCT registration and relative tracking, a necessary step toward the clinical implementation of BEV tracking has been completed. The knowledge of tumor volumes relative to the treatment field is important for future applications like real-time motion management, adaptive radiotherapy, and delivered dose calculations.

  2. GPUs benchmarking in subpixel image registration algorithm

    NASA Astrophysics Data System (ADS)

    Sanz-Sabater, Martin; Picazo-Bueno, Jose Angel; Micó, Vicente; Ferrerira, Carlos; Granero, Luis; Garcia, Javier

    2015-05-01

    Image registration techniques are used in many different scientific fields, such as medical imaging and optical metrology. The most straightforward way to calculate the shift between two images is to use cross correlation, taking the position of the highest value of the correlation image. The shift resolution is then given in whole pixels, which may not be enough for certain applications. Better results can be achieved by interpolating both images up to the desired resolution and applying the same technique as before, but the memory needed by the system is then significantly higher. To avoid this memory consumption we implement a subpixel shifting method based on the FFT. With the original images, a subpixel shift can be applied by multiplying the discrete Fourier transform by a linear phase with different slopes. This method is highly time consuming because checking each candidate shift means new calculations. The algorithm, being highly parallelizable, is very suitable for high-performance computing systems. GPU (Graphics Processing Unit) accelerated computing became very popular more than ten years ago because GPUs provide hundreds of computational cores on a reasonably cheap card. In our case, we register the shift between two images by doing a first approach with FFT-based correlation, and then a subpixel approach using the technique described above. We consider it a `brute force' method. We therefore present a benchmark of the algorithm consisting of a first approach (pixel resolution) followed by subpixel refinement, decreasing the shifting step in every loop and achieving high resolution in few steps. The program was executed on three different computers. Finally, we present the results of the computation with different kinds of CPUs and GPUs, checking the accuracy of the method and the time consumed on each computer, and discussing the advantages and disadvantages of the use of GPUs.
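    The core operation, shifting an image by a subpixel amount through a linear phase ramp in the Fourier domain, can be sketched as follows. This is a generic illustration of the Fourier shift theorem, not the benchmarked GPU code.

```python
import numpy as np

def fourier_shift(img, dy, dx):
    """Shift an image by a (possibly subpixel) amount by multiplying its DFT
    with a linear phase ramp (circular boundaries; odd image sizes keep the
    spectrum Hermitian, so taking the real part is lossless)."""
    ky = np.fft.fftfreq(img.shape[0])[:, None]
    kx = np.fft.fftfreq(img.shape[1])[None, :]
    ramp = np.exp(-2j * np.pi * (ky * dy + kx * dx))
    return np.fft.ifft2(np.fft.fft2(img) * ramp).real

rng = np.random.default_rng(0)
img = rng.uniform(size=(33, 33))   # odd size avoids the Nyquist-bin caveat
```

    A brute-force subpixel registration then scans candidate (dy, dx) values with this function, keeping the shift that maximizes the correlation with the second image, with the step decreased in every loop as described above.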

  3. 3D breast image registration--a review.

    PubMed

    Sivaramakrishna, Radhika

    2005-02-01

    Image registration is an important problem in breast imaging. It is used in a wide variety of applications that include better visualization of lesions on pre- and post-contrast breast MRI images, speckle tracking and image compounding in breast ultrasound images, alignment of positron emission and standard mammography images on hybrid machines, et cetera. It is a prerequisite for aligning images taken at different times to isolate small interval lesions. Image registration also has useful applications in monitoring cancer therapy. The field of breast image registration has gained considerable interest in recent years. While the primary focus of interest continues to be the registration of pre- and post-contrast breast MRI images, other areas like breast ultrasound registration have gained more attention in recent years. The focus of registration algorithms has also shifted from control-point-based semi-automated techniques to more sophisticated voxel-based automated techniques that use mutual information as a similarity measure. This paper visits the problem of breast image registration and provides an overview of the current state of the art in this area. PMID:15649086

  4. SAR/LANDSAT image registration study

    NASA Technical Reports Server (NTRS)

    Murphrey, S. W. (Principal Investigator)

    1978-01-01

    The author has identified the following significant results. Temporal registration of synthetic aperture radar (SAR) data with LANDSAT-MSS data is both feasible (from a technical standpoint) and useful (from an information-content viewpoint). The greatest difficulty in registering aircraft SAR data to corrected LANDSAT-MSS data is control-point location. The differences between SAR and MSS data affect the selection of features that will serve as good control points. The SAR and MSS data are unsuitable for automatic computer correlation of digital control-point data. The gray-level data cannot be compared by the computer because of the different response characteristics of the MSS and SAR images.

  5. A volumetric model-based 2D to 3D registration method for measuring kinematics of natural knees with single-plane fluoroscopy

    SciTech Connect

    Tsai, Tsung-Yuan; Lu, Tung-Wu; Chen, Chung-Ming; Kuo, Mei-Ying; Hsu, Horng-Chaung

    2010-03-15

    Purpose: Accurate measurement of the three-dimensional (3D) rigid body and surface kinematics of the natural human knee is essential for many clinical applications. Existing techniques are limited either in their accuracy or in the lack of realistic experimental evaluation of their measurement errors. The purposes of the study were to develop a volumetric model-based 2D to 3D registration method, called the weighted edge-matching score (WEMS) method, for measuring natural knee kinematics with single-plane fluoroscopy; to determine experimentally the measurement errors; and to compare its performance with that of the pattern intensity (PI) and gradient difference (GD) methods. Methods: The WEMS method gives higher priority to the matching of longer edges of the digitally reconstructed radiograph and fluoroscopic images. The measurement errors of the methods were evaluated based on a human cadaveric knee at 11 flexion positions. Results: The accuracy of the WEMS method was determined experimentally to be less than 0.77 mm for the in-plane translations, 3.06 mm for the out-of-plane translation, and 1.13 deg. for all rotations, which is better than that of the PI and GD methods. Conclusions: A new volumetric model-based 2D to 3D registration method has been developed for measuring 3D in vivo kinematics of natural knee joints with single-plane fluoroscopy. With the equipment used in the current study, the accuracy of the WEMS method is considered acceptable for the measurement of the 3D kinematics of the natural knee in clinical applications.

  6. Registration of structurally dissimilar images in MRI-based brachytherapy

    NASA Astrophysics Data System (ADS)

    Berendsen, F. F.; Kotte, A. N. T. J.; de Leeuw, A. A. C.; Jürgenliemk-Schulz, I. M.; Viergever, M. A.; Pluim, J. P. W.

    2014-08-01

    A serious challenge in image registration is the accurate alignment of two images in which a certain structure is present in only one of the two. Such topological changes are problematic for conventional non-rigid registration algorithms. We propose to incorporate in a conventional free-form registration framework a geometrical penalty term that minimizes the volume of the missing structure in one image. We demonstrate our method on cervical MR images for brachytherapy. The intrapatient registration problem involves one image in which a therapy applicator is present and one in which it is not. By including the penalty term, a substantial improvement in the surface distance to the gold standard anatomical position and the residual volume of the applicator void are obtained. Registration of neighboring structures, i.e. the rectum and the bladder is generally improved as well, albeit to a lesser degree.

  7. Evaluation of various deformable image registration algorithms for thoracic images.

    PubMed

    Kadoya, Noriyuki; Fujita, Yukio; Katsuta, Yoshiyuki; Dobashi, Suguru; Takeda, Ken; Kishi, Kazuma; Kubozono, Masaki; Umezawa, Rei; Sugawara, Toshiyuki; Matsushita, Haruo; Jingu, Keiichi

    2014-01-01

    We evaluated the accuracy of one commercially available and three publicly available deformable image registration (DIR) algorithms for thoracic four-dimensional (4D) computed tomography (CT) images. Five patients with esophagus cancer were studied. Datasets for the five patients were provided by DIR-lab (dir-lab.com) and consisted of thoracic 4D CT images and a coordinate list of anatomical landmarks that had been manually identified. Expert landmark correspondence was used for evaluating DIR spatial accuracy. First, the manually measured displacement vector field (mDVF) was obtained from the coordinate list of anatomical landmarks. Then the automatically calculated displacement vector field (aDVF) was computed using the following four DIR algorithms: B-spline implemented in Velocity AI (Velocity Medical, Atlanta, GA, USA), and free-form deformation (FFD), Horn-Schunck optical flow (OF) and Demons implemented in DIRART for MATLAB. Registration error is defined as the difference between mDVF and aDVF. The mean 3D registration errors were 2.7 ± 0.8 mm for B-spline, 3.6 ± 1.0 mm for FFD, 2.4 ± 0.9 mm for OF and 2.4 ± 1.2 mm for Demons. The results showed that reasonable accuracy was achieved by B-spline, OF and Demons, and that these algorithms have the potential to be used for 4D dose calculation, automatic image segmentation and 4D CT ventilation imaging in patients with thoracic cancer. However, for all algorithms, the accuracy might be improved by using optimized parameter settings. Furthermore, for B-spline in Velocity AI, the 3D registration error was small for displacements of less than ∼10 mm, indicating that this software may be useful in this range of displacements. PMID:23869025
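    The registration error defined above, the difference between mDVF and aDVF at each landmark, reduces to the mean and standard deviation of the Euclidean distances between corresponding displacement vectors. A minimal sketch with hypothetical landmark values, not data from the study:

```python
import numpy as np

def registration_error(mdvf, advf):
    """Mean and SD of the 3D registration error: the Euclidean length of the
    difference between the manually measured (mDVF) and automatically
    calculated (aDVF) displacement vectors, both of shape (N, 3)."""
    d = np.linalg.norm(np.asarray(mdvf) - np.asarray(advf), axis=1)
    return float(d.mean()), float(d.std())

# Hypothetical landmark displacements in mm (not values from the paper).
mdvf = np.array([[1.0, 0.0, 2.0], [0.5, -1.0, 3.0]])
advf = np.array([[1.0, 0.0, 2.0], [0.5, -1.0, 0.0]])
mean_err, sd_err = registration_error(mdvf, advf)
```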

  8. Shearlet Features for Registration of Remotely Sensed Multitemporal Images

    NASA Technical Reports Server (NTRS)

    Murphy, James M.; Le Moigne, Jacqueline

    2015-01-01

    We investigate the role of anisotropic feature extraction methods for automatic image registration of remotely sensed multitemporal images. Building on the classical use of wavelets in image registration, we develop an algorithm based on shearlets, a mathematical generalization of wavelets that offers increased directional sensitivity. Initial experimental results on LANDSAT images are presented, which indicate superior performance of the shearlet algorithm when compared to classical wavelet algorithms.

  9. Lucas-Kanade image registration using camera parameters

    NASA Astrophysics Data System (ADS)

    Cho, Sunghyun; Cho, Hojin; Tai, Yu-Wing; Moon, Young Su; Cho, Junguk; Lee, Shihwa; Lee, Seungyong

    2012-01-01

    The Lucas-Kanade algorithm and its variants have been successfully used for numerous works in computer vision, which include image registration as a component in the process. In this paper, we propose a Lucas-Kanade based image registration method using camera parameters. We decompose a homography into camera intrinsic and extrinsic parameters, and assume that the intrinsic parameters are given, e.g., from the EXIF information of a photograph. We then estimate only the extrinsic parameters for image registration, considering two types of camera motions, 3D rotations and full 3D motions with translations and rotations. As the known information about the camera is fully utilized, the proposed method can perform image registration more reliably. In addition, as the number of extrinsic parameters is smaller than the number of homography elements, our method runs faster than the Lucas-Kanade based registration method that estimates a homography itself.
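    The decomposition described above can be illustrated for the rotation-only case: with known intrinsics K, a pure camera rotation R induces the homography H = K R K⁻¹, so only three rotation angles need to be estimated instead of eight homography elements. A minimal sketch (the intrinsics and angles are hypothetical, not values from the paper):

```python
import numpy as np

def rotation_homography(K, rx, ry, rz):
    """Homography induced by a pure camera rotation: H = K @ R @ inv(K).
    With intrinsics K known (e.g. from EXIF), only three rotation angles
    remain to be estimated instead of eight homography elements."""
    cx, sx = np.cos(rx), np.sin(rx)
    cy, sy = np.cos(ry), np.sin(ry)
    cz, sz = np.cos(rz), np.sin(rz)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    return K @ (Rz @ Ry @ Rx) @ np.linalg.inv(K)

# Hypothetical intrinsics: 800 px focal length, principal point (320, 240).
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
H = rotation_homography(K, 0.0, 0.02, 0.0)        # small pan
p = H @ np.array([320.0, 240.0, 1.0])             # map a pixel (homogeneous)
print(p[:2] / p[2])
```

    A Lucas-Kanade loop would then update (rx, ry, rz) rather than the raw homography entries, which is the source of the reliability and speed gains the abstract reports.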

  10. Research Issues in Image Registration for Remote Sensing

    NASA Technical Reports Server (NTRS)

    Eastman, Roger D.; LeMoigne, Jacqueline; Netanyahu, Nathan S.

    2007-01-01

    Image registration is an important element in data processing for remote sensing with many applications and a wide range of solutions. Despite considerable investigation the field has not settled on a definitive solution for most applications and a number of questions remain open. This article looks at selected research issues by surveying the experience of operational satellite teams, application-specific requirements for Earth science, and our experiments in the evaluation of image registration algorithms with emphasis on the comparison of algorithms for subpixel accuracy. We conclude that remote sensing applications put particular demands on image registration algorithms to take into account domain-specific knowledge of geometric transformations and image content.

  11. Anatomy-based multimodal medical image registration for computer-integrated surgery

    NASA Astrophysics Data System (ADS)

    Hamadeh, Ali; Cinquin, Philippe; Szeliski, Richard; Lavallee, Stephane

    1994-10-01

    In Computer Assisted Surgery, the registration between pre-operative images, intra-operative images, anatomical models and guiding systems such as robots is a crucial step. This paper presents the methodology and the algorithms that we have developed to address the problem of rigid-body registration in this context. Our technique has been validated for many clinical cases where we had to register 3D anatomical surfaces with various sensory data. These sensory data can have 3D representation (3D images, range images, digitized 3D points, 2.5D ultrasound data) or they can be 2D projections (X-ray images, video images). This paper presents an overview of the results we have obtained.

  12. High-accuracy registration of intraoperative CT imaging

    NASA Astrophysics Data System (ADS)

    Oentoro, A.; Ellis, R. E.

    2010-02-01

    Image-guided interventions using intraoperative 3D imaging can be less cumbersome than systems dependent on preoperative images, especially by needing neither potentially invasive image-to-patient registration nor a lengthy process of segmenting and generating a 3D surface model. In this study, a method for computer-assisted surgery using direct navigation on intraoperative imaging is presented. In this system the registration step of a navigated procedure was divided into two stages: preoperative calibration of images to a ceiling-mounted optical tracking system, and intraoperative tracking during acquisition of the 3D medical image volume. The preoperative stage used a custom-made multi-modal calibrator that could be optically tracked and also contained fiducial spheres for radiological detection; a robust registration algorithm was used to compensate for the very high false-detection rate that was due to the high physical density of the optical light-emitting diodes. Intraoperatively, a tracking device was attached to plastic bone models that were also instrumented with radio-opaque spheres; a calibrated pointer was used to contact the latter spheres as a validation of the registration. Experiments showed that the fiducial registration error of the preoperative calibration stage was approximately 0.1 mm. The target registration error in the validation stage was approximately 1.2 mm. This study suggests that direct registration, coupled with procedure-specific graphical rendering, is potentially a highly accurate means of performing image-guided interventions in a fast, simple manner.

  13. Automatic 3D image registration using voxel similarity measurements based on a genetic algorithm

    NASA Astrophysics Data System (ADS)

    Huang, Wei; Sullivan, John M., Jr.; Kulkarni, Praveen; Murugavel, Murali

    2006-03-01

    An automatic 3D non-rigid body registration system based upon the genetic algorithm (GA) is presented. The system has been successfully applied to 2D and 3D situations using both rigid-body and affine transformations. Conventional optimization techniques and gradient search strategies generally require a good initial start location; the GA approach avoids the local minima/maxima traps of such techniques. Based on the principles of Darwinian natural selection (survival of the fittest), the genetic algorithm has two basic steps: 1. Randomly generate an initial population. 2. Repeatedly apply the natural selection operation until a termination criterion is satisfied. The natural selection process selects individuals based on their fitness to participate in the genetic operations, and it creates new individuals by inheritance from both parents, genetic recombination (crossover) and mutation. Once the termination criteria are satisfied, the optimum is selected from the population. The algorithm was applied to 2D and 3D magnetic resonance images (MRI). It does not require any preprocessing such as thresholding, smoothing, segmentation, or definition of base points or edges. To evaluate the performance of GA registration, the results were compared with those of the Automatic Image Registration technique (AIR) and with manual registration, which was used as the gold standard. Results showed that our GA implementation is robust and gives results very close to the gold standard. A pre-cropping strategy was also discussed as an efficient preprocessing step to enhance registration accuracy.
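    The two GA steps above can be sketched on a toy 2D rigid registration of landmark points. This is an illustration of the generic GA recipe (selection, crossover, mutation), not the authors' voxel-similarity implementation, and all parameters are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic "fixed" landmarks and a "moving" copy under an unknown
# rigid transform (theta, tx, ty) -- all values here are hypothetical.
fixed = rng.uniform(-50, 50, size=(40, 2))
theta_true, t_true = 0.2, np.array([5.0, -3.0])
R = np.array([[np.cos(theta_true), -np.sin(theta_true)],
              [np.sin(theta_true),  np.cos(theta_true)]])
moving = fixed @ R.T + t_true

def fitness(pop):
    # Higher is better: negative mean squared landmark distance after
    # undoing each candidate (theta, tx, ty) on the moving points.
    scores = np.empty(len(pop))
    for i, (th, tx, ty) in enumerate(pop):
        Rc = np.array([[np.cos(th), -np.sin(th)],
                       [np.sin(th),  np.cos(th)]])
        aligned = (moving - [tx, ty]) @ Rc          # inverse rigid transform
        scores[i] = -np.mean(np.sum((aligned - fixed) ** 2, axis=1))
    return scores

# Step 1: randomly generate an initial population of transform parameters.
pop = rng.uniform([-0.5, -10.0, -10.0], [0.5, 10.0, 10.0], size=(60, 3))
# Step 2: repeatedly apply selection, crossover and mutation.
for _ in range(150):
    f = fitness(pop)
    parents = pop[np.argsort(f)[-20:]]              # survival of the fittest
    idx = rng.integers(0, 20, size=(60, 2))         # pick two parents each
    w = rng.random((60, 1))
    pop = w * parents[idx[:, 0]] + (1 - w) * parents[idx[:, 1]]  # crossover
    pop += rng.normal(0.0, 0.01, pop.shape)         # mutation
best = pop[np.argmax(fitness(pop))]                 # final optimum
print(best)   # should approach (theta, tx, ty) = (0.2, 5.0, -3.0)
```

    No gradient or initial guess near the optimum is needed, which is the property the abstract emphasizes over conventional gradient search.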

  14. Registration of in vivo MR to histology of rodent brains using blockface imaging

    NASA Astrophysics Data System (ADS)

    Uberti, Mariano; Liu, Yutong; Dou, Huanyu; Mosley, R. Lee; Gendelman, Howard E.; Boska, Michael

    2009-02-01

    Registration of MRI to histopathological sections can enhance bioimaging validation for use in pathobiologic, diagnostic, and therapeutic evaluations. However, commonly used registration methods fall short of this goal due to tissue shrinkage and tearing after brain extraction and preparation. To overcome these limitations we developed a software toolbox using 3D blockface imaging as the common space of reference. This toolbox includes a semi-automatic brain extraction technique using constraint level sets (CLS), 3D reconstruction methods for the blockface and MR volumes, and a 2D warping technique using thin-plate splines with landmark optimization. Using this toolbox, the rodent brain volume is first extracted from the whole-head MRI using CLS. The blockface volume is reconstructed, followed by 3D registration of the brain MRI to the blockface volume to correct the global deformations due to brain extraction and fixation. Finally, registered MRI and histological slices are warped to the corresponding blockface images to correct slice-specific deformations. The CLS brain extraction technique was validated by comparison with manual extraction, showing 94% overlap. The image warping technique was validated by calculating the target registration error (TRE); results showed a registration accuracy of TRE < 1 pixel. Lastly, the registration method and the software tools developed were used to validate cell migration in murine human immunodeficiency virus type one encephalitis.

  15. Large deformation diffeomorphic registration of diffusion-weighted imaging data

    PubMed Central

    Zhang, Pei; Niethammer, Marc; Shen, Dinggang; Yap, Pew-Thian

    2014-01-01

    Registration plays an important role in group analysis of diffusion-weighted imaging (DWI) data. It can be used to build a reference anatomy for investigating structural variation or tracking changes in white matter. Unlike traditional scalar image registration, where spatial alignment is the only focus, registration of DWI data requires both spatial alignment of structures and reorientation of local signal profiles. As such, DWI registration is much more complex and challenging than scalar image registration. Although a variety of algorithms have been proposed to tackle the problem, most of them are restricted by the diffusion model used for registration, making it difficult to fit a different model to the registered data. In this paper we describe a method that allows any diffusion model to be fitted after registration for subsequent multifaceted analysis. This is achieved by directly aligning the DWI data using a large deformation diffeomorphic registration framework. Our algorithm seeks the optimal coordinate mapping by simultaneously considering structural alignment, local signal profile reorientation, and deformation regularization. Our algorithm also incorporates a multi-kernel strategy to concurrently register anatomical structures at different scales. We demonstrate the efficacy of our approach using in vivo data and report detailed qualitative and quantitative results in comparison with several different registration strategies. PMID:25106710

  16. Tendon strain imaging using non-rigid image registration: a validation study

    NASA Astrophysics Data System (ADS)

    Almeida, Nuno M.; Slagmolen, Pieter; Barbosa, Daniel; Scheys, Lennart; Geukens, Leonie; Fukagawa, Shingo; Peers, Koen; Bellemans, Johan; Suetens, Paul; D'Hooge, Jan

    2012-03-01

    Ultrasound imaging has already been proven a useful tool for non-invasive strain quantification in soft tissue. While clinical applications so far are limited to cardiac imaging, the development of techniques suited to the musculoskeletal system is an active area of research. In this study, a technique for speckle tracking in ultrasound images using non-rigid image registration is presented. This approach is based on a single 2D+t registration procedure in which the temporal changes in the B-mode speckle patterns are locally assessed. This allows strain to be estimated from ultrasound image sequences of tissues under deformation while imposing temporal smoothness on the deformation field, yielding smooth strain curves. METHODS: The tracking algorithm was systematically tested on synthetic images and gelatin phantoms under sinusoidal deformations with amplitudes between 0.5% and 4.0%, at frequencies between 0.25 Hz and 2.0 Hz. Preliminary tests were also performed on Achilles tendons isolated from human cadavers. RESULTS: The strain was estimated with deviations of -0.011% +/- 0.053% on the synthetic images and agreement of +/- 0.28% on the phantoms. Tests with real tendons show good tracking results; however, significant variability between trials still exists. CONCLUSIONS: The proposed image registration methodology constitutes a robust tool for motion and deformation tracking in both simulated and real phantom data. Strain estimation in both cases shows that the proposed method is accurate and precise. Although the ex-vivo results are still preliminary, the potential of the proposed algorithm is promising. This suggests that further improvements, together with systematic testing, can lead to in-vivo and clinical applications.
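    The quantity being validated, strain from tracked displacements, reduces to a spatial derivative of the displacement field. A minimal synthetic sketch with a 2% sinusoidal stretch (illustrative values, not the phantom protocol):

```python
import numpy as np

# Hypothetical 1-D tissue segment sampled at rest positions x (mm), undergoing
# a 2% sinusoidal stretch at 1 Hz -- illustrative values, not the phantom data.
x = np.linspace(0.0, 30.0, 200)
t = np.linspace(0.0, 2.0, 81)                  # seconds
amp, freq = 0.02, 1.0
frames = x[None, :] * (1.0 + amp * np.sin(2 * np.pi * freq * t)[:, None])

# Tracking yields displacement u(x, t); strain is the spatial derivative du/dx.
u = frames - x[None, :]
strain = np.gradient(u, x, axis=1)
print(round(float(strain.max()), 4))           # -> 0.02 (peak 2% strain)
```

    A speckle-tracking registration provides the `u` field from real images; the derivative step is the same.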

  17. Automatic localization of vertebral levels in x-ray fluoroscopy using 3D-2D registration: a tool to reduce wrong-site surgery

    NASA Astrophysics Data System (ADS)

    Otake, Y.; Schafer, S.; Stayman, J. W.; Zbijewski, W.; Kleinszig, G.; Graumann, R.; Khanna, A. J.; Siewerdsen, J. H.

    2012-09-01

    Surgical targeting of the incorrect vertebral level (wrong-level surgery) is among the more common wrong-site surgical errors, attributed primarily to the lack of uniquely identifiable radiographic landmarks in the mid-thoracic spine. The conventional localization method involves manual counting of vertebral bodies under fluoroscopy, is prone to human error and adds time and dose. We propose an image registration and visualization system (referred to as LevelCheck) for decision support in spine surgery by automatically labeling vertebral levels in fluoroscopy using a GPU-accelerated, intensity-based 3D-2D (namely CT-to-fluoroscopy) registration. A gradient information (GI) similarity metric and a CMA-ES optimizer were chosen due to their robustness and inherent suitability for parallelization. Simulation studies involved ten patient CT datasets from which 50 000 simulated fluoroscopic images were generated from C-arm poses selected to approximate C-arm operator and positioning variability. Physical experiments used an anthropomorphic chest phantom imaged under real fluoroscopy. The registration accuracy was evaluated as the mean projection distance (mPD) between the estimated and true centers of the vertebral levels. Trials were defined as successful if the estimated position was within the projection of the vertebral body (namely mPD < 5 mm). Simulation studies showed a success rate of 99.998% (1 failure in 50 000 trials) and a computation time of 4.7 s on a midrange GPU. Analysis of failure modes identified cases of false local optima in the search space arising from longitudinal periodicity in vertebral structures. Physical experiments demonstrated the robustness of the algorithm against quantum noise and x-ray scatter. The ability to automatically localize target anatomy in fluoroscopy in near-real-time could be valuable in reducing the occurrence of wrong-site surgery while helping to reduce radiation exposure. The method is applicable beyond
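    The mPD metric can be sketched directly: project the estimated and true 3D centers through a projection matrix and average the 2D distances. The geometry below is a hypothetical pinhole model, not the LevelCheck C-arm calibration:

```python
import numpy as np

def mean_projection_distance(P, centers_est, centers_true):
    """Mean 2D distance between the projections of estimated and true 3D
    vertebral-level centers; P is a 3x4 projection matrix."""
    def project(X):
        Xh = np.c_[X, np.ones(len(X))] @ P.T       # homogeneous projection
        return Xh[:, :2] / Xh[:, 2:3]
    diff = project(centers_est) - project(centers_true)
    return float(np.mean(np.linalg.norm(diff, axis=1)))

# Hypothetical pinhole geometry: 1000 px focal length, detector 1000 mm away.
P = np.array([[1000.0, 0.0, 0.0, 0.0],
              [0.0, 1000.0, 0.0, 0.0],
              [0.0, 0.0, 1.0, 1000.0]])
true = np.array([[0.0, 30.0 * k, 0.0] for k in range(5)])   # 5 level centers (mm)
est = true + np.array([2.0, -1.0, 0.0])                     # offset estimate
print(mean_projection_distance(P, est, true))               # -> ~2.236 (= sqrt 5)
```

    A trial would count as successful when this value falls below the 5 mm threshold quoted above.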

  18. Reduction of multi-fragment fractures of the distal radius using atlas-based 2D/3D registration

    NASA Astrophysics Data System (ADS)

    Gong, Ren Hui; Stewart, James; Abolmaesumi, Purang

    2009-02-01

    We describe a method to guide the surgical fixation of distal radius fractures. The method registers the fracture fragments to a volumetric intensity-based statistical anatomical atlas of the distal radius, reconstructed from human cadaver and patient data, using a few intra-operative X-ray fluoroscopy images of the fracture. No pre-operative computed tomography (CT) images are required, hence radiation exposure to patients is substantially reduced. Intra-operatively, each bone fragment is roughly segmented from the X-ray images by a surgeon, and a corresponding segmentation volume is created from the back-projections of the 2D segmentations. An optimization procedure positions each segmentation volume at the appropriate pose on the atlas, while simultaneously deforming the atlas such that the overlap of the 2D projection of the atlas with individual fragments in the segmented regions is maximized. Our simulation results show that this method can accurately identify the pose of large fragments using only two X-ray views, but for small fragments more than two X-rays may be needed. The method does not assume any prior knowledge about the shape of the bone or the number of fragments, so it is also potentially suitable for the fixation of other types of multi-fragment fractures.

  19. The role of image registration in brain mapping.

    PubMed

    Toga, A W; Thompson, P M

    2001-01-01

    Image registration is a key step in a great variety of biomedical imaging applications. It provides the ability to geometrically align one dataset with another, and is a prerequisite for all imaging applications that compare datasets across subjects, imaging modalities, or across time. Registration algorithms also enable the pooling and comparison of experimental findings across laboratories, the construction of population-based brain atlases, and the creation of systems to detect group patterns in structural and functional imaging data. We review the major types of registration approaches used in brain imaging today. We focus on their conceptual basis, the underlying mathematics, and their strengths and weaknesses in different contexts. We describe the major goals of registration, including data fusion, quantification of change, automated image segmentation and labeling, shape measurement, and pathology detection. We indicate that registration algorithms have great potential when used in conjunction with a digital brain atlas, which acts as a reference system in which brain images can be compared for statistical analysis. The resulting armory of registration approaches is fundamental to medical image analysis, and in a brain mapping context provides a means to elucidate clinical, demographic, or functional trends in the anatomy or physiology of the brain. PMID:19890483

  20. Piecewise nonlinear image registration using DCT basis functions

    NASA Astrophysics Data System (ADS)

    Gan, Lin; Agam, Gady

    2015-03-01

    The deformation field in nonlinear image registration is usually modeled by a global model. Such models often face the problem that a locally complex deformation cannot be accurately modeled simply by increasing the degrees of freedom (DOF). In addition, highly complex models require additional regularization, which is usually ineffective when applied globally. Registering locally corresponding regions addresses this problem in a divide-and-conquer strategy. In this paper we propose a piecewise image registration approach using Discrete Cosine Transform (DCT) basis functions for a nonlinear model. The contributions of this paper are threefold. First, we develop a multi-level piecewise registration framework that extends the concept of piecewise linear registration and works with any nonlinear deformation model. This framework is then applied to nonlinear DCT registration. Second, we show how adaptive model complexity and regularization can be applied to local piece registration, thus accounting for higher variability. Third, we show how the proposed piecewise DCT can overcome the fundamental problem of inverting a large curvature matrix in global DCT when using high degrees of freedom. The proposed approach can be viewed as an extension of global DCT registration in which the overall model complexity is increased while achieving effective local regularization. Experimental evaluation compares the proposed approach to piecewise linear registration using an affine transformation model and to global nonlinear registration using a DCT model. Preliminary results show that the proposed approach achieves improved performance.
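    The DCT parameterization works by expressing the displacement field as a weighted sum of separable cosine basis functions, so a handful of low-frequency coefficients yields a smooth field with very few DOF. A minimal sketch (the coefficient grid and values are illustrative):

```python
import numpy as np

def dct_displacement(coeffs, shape):
    """One displacement component built from 2-D DCT (separable cosine)
    basis functions; coeffs[k, l] weights the (k, l)-th basis function."""
    H, W = shape
    y = (np.arange(H) + 0.5) / H
    x = (np.arange(W) + 0.5) / W
    field = np.zeros(shape)
    for k in range(coeffs.shape[0]):
        for l in range(coeffs.shape[1]):
            field += coeffs[k, l] * np.outer(np.cos(np.pi * k * y),
                                             np.cos(np.pi * l * x))
    return field

# Hypothetical 3x3 coefficient grid: 9 DOF describe a very smooth field.
coeffs = np.zeros((3, 3))
coeffs[0, 1] = 2.0                 # gentle left-right bending, ~2 px amplitude
ux = dct_displacement(coeffs, (64, 64))
print(ux.shape, round(float(ux.max()), 3))
```

    A piecewise scheme, as proposed above, fits such a small coefficient grid per region instead of one large grid globally, keeping each local curvature matrix small and cheap to invert.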

  1. Multimodal image fusion with SIMS: Preprocessing with image registration.

    PubMed

    Tarolli, Jay Gage; Bloom, Anna; Winograd, Nicholas

    2016-06-14

    In order to utilize complementary imaging techniques to supply higher resolution data for fusion with secondary ion mass spectrometry (SIMS) chemical images, there are a number of aspects that, if not given proper consideration, could produce results that are easy to misinterpret. One of the most critical is that the two input images must cover exactly the same analysis area. With the desire to explore new higher resolution data sources that exist outside of the mass spectrometer, this requirement becomes even more important. To ensure that two input images are of the same region, an implementation of the Insight Segmentation and Registration Toolkit (ITK) was developed to act as a preprocessing step before performing image fusion. This implementation of ITK accounts for several degrees of movement between two input images, including translation, rotation, and scale transforms. First, the implementation was confirmed to accurately register two multimodal images by supplying a known transform. Once validated, two model systems, a copper mesh grid and a group of RAW 264.7 cells, were used to demonstrate the use of the ITK implementation to register a SIMS image with a microscopy image for the purpose of performing image fusion.
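    ITK's registration framework handles the translation, rotation, and scale transforms mentioned above; as a minimal stand-in for the translation component alone, the shift between two images can be recovered by phase correlation. This is an illustration of the general idea, not the paper's ITK pipeline:

```python
import numpy as np

def phase_correlation_shift(fixed, moving):
    """Recover the integer (dy, dx) translation that aligns `moving` to
    `fixed` via the normalized cross-power spectrum (periodic boundaries)."""
    F = np.fft.fft2(fixed)
    M = np.fft.fft2(moving)
    cross = F * np.conj(M)
    corr = np.fft.ifft2(cross / (np.abs(cross) + 1e-12)).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    if dy > fixed.shape[0] // 2:      # wrap large shifts to negative values
        dy -= fixed.shape[0]
    if dx > fixed.shape[1] // 2:
        dx -= fixed.shape[1]
    return int(dy), int(dx)

rng = np.random.default_rng(1)
fixed = rng.random((128, 128))
moving = np.roll(fixed, shift=(-7, 12), axis=(0, 1))   # simulated misalignment
print(phase_correlation_shift(fixed, moving))          # -> (7, -12)
```

    Applying the recovered shift to `moving` (e.g. with `np.roll`) restores the common analysis area before fusion.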

  2. INTER-GROUP IMAGE REGISTRATION BY HIERARCHICAL GRAPH SHRINKAGE.

    PubMed

    Ying, Shihui; Wu, Guorong; Liao, Shu; Shen, Dinggang

    2013-12-31

    In this paper, we propose a novel inter-group image registration method to register different groups of images (e.g., young and elderly brains) simultaneously. Specifically, we use a hierarchical two-level graph to model the distribution of the entire set of images on the manifold, with the intra-graph representing the image distribution within each group and the inter-graph describing the relationship between the two groups. The procedure of inter-group registration is then formulated as a dynamic evolution of graph shrinkage. The advantage of our method is that the topology of the entire image distribution is explored to guide the registration. In this way, each image coordinates with its neighboring images on the manifold to deform towards the population center, following the deformation pathway simultaneously optimized within the graph. Our proposed method has also been compared with other state-of-the-art inter-group registration methods, achieving better results in terms of registration accuracy and robustness.

  3. INTER-GROUP IMAGE REGISTRATION BY HIERARCHICAL GRAPH SHRINKAGE

    PubMed Central

    Ying, Shihui; Wu, Guorong; Liao, Shu; Shen, Dinggang

    2013-01-01

    In this paper, we propose a novel inter-group image registration method to register different groups of images (e.g., young and elderly brains) simultaneously. Specifically, we use a hierarchical two-level graph to model the distribution of the entire set of images on the manifold, with the intra-graph representing the image distribution within each group and the inter-graph describing the relationship between the two groups. The procedure of inter-group registration is then formulated as a dynamic evolution of graph shrinkage. The advantage of our method is that the topology of the entire image distribution is explored to guide the registration. In this way, each image coordinates with its neighboring images on the manifold to deform towards the population center, following the deformation pathway simultaneously optimized within the graph. Our proposed method has also been compared with other state-of-the-art inter-group registration methods, achieving better results in terms of registration accuracy and robustness. PMID:24443692

  4. Laser range scanning for image-guided neurosurgery: investigation of image-to-physical space registrations.

    PubMed

    Cao, Aize; Thompson, R C; Dumpuri, P; Dawant, B M; Galloway, R L; Ding, S; Miga, M I

    2008-04-01

    In this article a comprehensive set of registration methods is utilized to provide image-to-physical space registration for image-guided neurosurgery in a clinical study. Central to all methods is the use of textured point clouds as provided by laser range scanning technology. The objective is to perform a systematic comparison of registration methods that include both extracranial (skin marker point-based registration (PBR), and face-based surface registration) and intracranial methods (feature PBR, cortical vessel-contour registration, a combined geometry/intensity surface registration method, and a constrained form of that method to improve robustness). The platform facilitates the selection of discrete soft-tissue landmarks that appear on the patient's intraoperative cortical surface and the preoperative gadolinium-enhanced magnetic resonance (MR) image volume, i.e., true corresponding novel targets. In an 11 patient study, data were taken to allow statistical comparison among registration methods within the context of registration error. The results indicate that intraoperative face-based surface registration is statistically equivalent to traditional skin marker registration. The four intracranial registration methods were investigated and the results demonstrated a target registration error of 1.6 +/- 0.5 mm, 1.7 +/- 0.5 mm, 3.9 +/- 3.4 mm, and 2.0 +/- 0.9 mm, for feature PBR, cortical vessel-contour registration, unconstrained geometric/intensity registration, and constrained geometric/intensity registration, respectively. When analyzing the results on a per case basis, the constrained geometric/intensity registration performed best, followed by feature PBR, and finally cortical vessel-contour registration. Interestingly, the best target registration errors are similar to targeting errors reported using bone-implanted markers within the context of rigid targets. The experience in this study as with others is that brain shift can compromise extracranial

  5. Optimal atlas construction through hierarchical image registration

    NASA Astrophysics Data System (ADS)

    Grevera, George J.; Udupa, Jayaram K.; Odhner, Dewey; Torigian, Drew A.

    2016-03-01

    Atlases (digital or otherwise) are common in medicine. However, there is no standard framework for creating them from medical images. One traditional approach is to pick a representative subject and then proceed to label structures/regions of interest in this image. Another is to create a "mean" or average subject. Atlases may also contain more than a single representative (e.g., the Visible Human contains both a male and a female data set). Criteria other than gender may be used as well, and the atlas may contain many examples for a given criterion. In this work, we propose that atlases be created in an optimal manner using a well-established graph-theoretic approach based on a minimum spanning tree (or, more generally, a collection of them). The resulting atlases may contain many examples for a given criterion. In fact, our framework allows new subjects to be added to the atlas so that it can evolve over time. Furthermore, one can apply segmentation methods to the graph (e.g., graph-cut, fuzzy connectedness, or cluster analysis) that allow it to be separated into "sub-atlases" as it evolves. We demonstrate our method by applying it to 50 3D CT data sets of the chest region and by comparing it to a number of traditional methods using measures such as Mean Squared Difference, Mattes Mutual Information, and Correlation for rigid registration. Our results demonstrate that optimal atlases can be constructed in this manner and outperform other methods of construction using freely available software.
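    The minimum-spanning-tree construction can be sketched with a toy pairwise-dissimilarity matrix: Prim's algorithm links all subjects with the n−1 cheapest connecting edges. The distances below are synthetic stand-ins, not image-derived measures:

```python
import numpy as np

def prim_mst(dist):
    """Minimum spanning tree of a dense symmetric distance matrix (Prim's
    algorithm). Returns (parent, child) edges -- i.e., which subjects get
    linked in the atlas graph."""
    n = len(dist)
    in_tree = [0]
    edges = []
    while len(in_tree) < n:
        i, j = min(((i, j) for i in in_tree for j in range(n) if j not in in_tree),
                   key=lambda e: dist[e[0], e[1]])
        edges.append((i, j))
        in_tree.append(j)
    return edges

# Hypothetical pairwise dissimilarities between 6 subjects' images
# (standing in for, e.g., mean squared difference after rigid registration).
rng = np.random.default_rng(3)
feat = rng.random((6, 4))
dist = np.linalg.norm(feat[:, None] - feat[None, :], axis=2)
print(prim_mst(dist))   # 5 edges connecting all 6 subjects
```

    In the atlas setting, each edge would correspond to a registration between the most similar pair of subjects, and new subjects attach to the tree at their nearest existing node.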

  6. Registration of fast cine cardiac MR slices to 3D preprocedural images: toward real-time registration for MRI-guided procedures

    NASA Astrophysics Data System (ADS)

    Smolikova, Renata; Wachowiak, Mark P.; Drangova, Maria

    2004-05-01

    Interventional cardiac magnetic resonance (MR) procedures are the subject of an increasing number of research studies. Typically, during the procedure only two-dimensional images of oblique slices can be presented to the interventionalist in real time. There is a clear benefit to being able to register the real-time 2D slices to a previously acquired 3D computed tomography (CT) or MR image of the heart. Results from a study of the accuracy of registration of 2D cardiac images of an anesthetized pig to a 3D volume obtained in diastole are presented. Fast cine MR images representing twenty phases of the cardiac cycle were obtained for a 2D slice in a known oblique orientation. The 2D images were initially mis-oriented by distances ranging from 2 to 20 mm and by rotations of +/-10 degrees about all three axes. Images from all 20 cardiac phases were registered to examine the effect of timing between the 2D image and the 3D pre-procedural image. Linear registration using mutual information computed with 64 histogram bins yielded the highest accuracy. For the diastolic phases, mean translation and rotation errors ranged between 0.91 and 1.32 mm and between 1.73 and 2.10 degrees, respectively. Scans acquired at other phases also had high accuracy. These results are promising for the use of real-time MR in image-guided cardiac interventions, and demonstrate the feasibility of registering 2D oblique MR slices to previously acquired single-phase volumes without preprocessing.
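    The similarity measure used here, mutual information over a 64-bin joint intensity histogram, can be sketched as follows (the images are synthetic, not the cardiac data):

```python
import numpy as np

def mutual_information(a, b, bins=64):
    """Mutual information (nats) between two images, estimated from a
    joint intensity histogram with `bins` x `bins` bins."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)       # marginal of image a
    py = pxy.sum(axis=0, keepdims=True)       # marginal of image b
    nz = pxy > 0                              # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

rng = np.random.default_rng(7)
img = rng.random((96, 96))
shuffled = rng.permutation(img.ravel()).reshape(img.shape)
# Aligned identical images share maximal information; scrambling destroys it.
print(mutual_information(img, img), mutual_information(img, shuffled))
```

    A registration routine maximizes this value over the pose parameters of the 2D slice within the 3D volume.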

  7. Interferometric SAR to EO image registration problem

    NASA Astrophysics Data System (ADS)

    Rogers, George W.; Mansfield, Arthur W.; Rais, Houra

    2000-08-01

    Historically, SAR to EO registration accuracy has been at the multiple pixel level compared to sub-pixel EO to EO registration accuracies. This is due to a variety of factors including the different scattering characteristics of the ground for EO and SAR, SAR speckle, and terrain induced geometric distortion. One approach to improving the SAR to EO registration accuracy is to utilize the full information from multiple SAR surveys using interferometric techniques. In this paper we will examine this problem in detail with an example using ERS SAR imagery. Estimates of the resulting accuracy based on ERS are included.

  8. Parallel image registration with a thin client interface

    NASA Astrophysics Data System (ADS)

    Saiprasad, Ganesh; Lo, Yi-Jung; Plishker, William; Lei, Peng; Ahmad, Tabassum; Shekhar, Raj

    2010-03-01

    Despite its high significance, the clinical utilization of image registration remains limited because of its lengthy execution time and a lack of easy access. The focus of this work was twofold. First, we accelerated our coarse-to-fine, volume subdivision-based image registration algorithm by a novel parallel implementation that maintains the accuracy of our uniprocessor implementation. Second, we developed a thin-client computing model with a user-friendly interface to perform rigid and nonrigid image registration. Our novel parallel computing model uses the message passing interface (MPI) on a 32-core cluster. The results show that, compared with the uniprocessor implementation, the parallel implementation of our image registration algorithm is approximately 5 times faster for rigid image registration and approximately 9 times faster for nonrigid registration for the images used. To test the viability of such systems for clinical use, we developed a thin client in the form of a plug-in in OsiriX, a well-known open source PACS workstation and DICOM viewer, and used it for two applications. The first application registered baseline and follow-up MR brain images, whose subtraction was used to track progression of multiple sclerosis. The second application registered pretreatment PET and intratreatment CT of radiofrequency ablation patients to demonstrate a new capability of multimodality imaging guidance. The registration acceleration coupled with the remote implementation using a thin client should ultimately increase the accuracy, speed, and accessibility of image registration-based interpretations in a number of diagnostic and interventional applications.

  9. A survey of medical image registration on graphics hardware.

    PubMed

    Fluck, O; Vetter, C; Wein, W; Kamen, A; Preim, B; Westermann, R

    2011-12-01

    The rapidly increasing performance of graphics processors, improving programming support and excellent performance-price ratio make graphics processing units (GPUs) a good option for a variety of computationally intensive tasks. Within this survey, we give an overview of GPU-accelerated image registration. We address both GPU-experienced readers with an interest in accelerated image registration and registration experts who are interested in using GPUs. We survey programming models and interfaces and analyze different approaches to programming on the GPU. We furthermore discuss the inherent advantages and challenges of current hardware architectures, which leads to a description of the important building blocks for successful implementations.

  10. Ultra-slim 2D- and depth-imaging camera modules for mobile imaging

    NASA Astrophysics Data System (ADS)

    Brückner, Andreas; Oberdörster, Alexander; Dunkel, Jens; Reimann, Andreas; Wippermann, Frank

    2016-03-01

    In this contribution, a microoptical imaging system is demonstrated that is inspired by the insect compound eye. The array camera module achieves HD resolution with a z-height of 2.0 mm, about 50% of the height of traditional cameras with comparable parameters. The FOV is segmented by multiple optical channels imaging in parallel. The partial images are stitched together to form a final image of the whole FOV by image processing software. The system is able to acquire depth maps along with the 2D video, and it includes light field imaging features such as software refocusing. The microlens arrays are realized by microoptical technologies on wafer level, which are suitable for potential fabrication in high volume.

  11. Nonrigid Medical Image Registration Based on Mesh Deformation Constraints

    PubMed Central

    Qiu, TianShuang; Guo, DongMei

    2013-01-01

    Regularizing the deformation field is an important aspect of nonrigid medical image registration. By covering the template image with a triangular mesh, this paper proposes a new regularization constraint in terms of connections between mesh vertices. The connection relationship is preserved by the spring analogy method. The method is evaluated by registering cerebral magnetic resonance imaging (MRI) data obtained from different individuals. Experimental results show that the proposed method has good deformation ability and topology-preserving ability, providing a new approach to nonrigid medical image registration. PMID:23424604

  12. Registration of Heat Capacity Mapping Mission day and night images

    NASA Technical Reports Server (NTRS)

    Watson, K.; Hummer-Miller, S.; Sawatzky, D. L. (Principal Investigator)

    1982-01-01

    Neither iterative registration, using drainage intersection maps for control, nor cross correlation techniques were satisfactory in registering day and night HCMM imagery. A procedure was developed which registers the image pairs by selecting control points and mapping the night thermal image to the daytime thermal and reflectance images using an affine transformation on a 1300 by 1100 pixel image. The resulting image registration is accurate to better than two pixels (RMS) and does not exhibit the significant misregistration that was noted in the temperature-difference and thermal-inertia products supplied by NASA. The affine transformation was determined using simple matrix arithmetic, a step that can be performed rapidly on a minicomputer.
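    The affine mapping from control points that the abstract says was "determined using simple matrix arithmetic" can be sketched as an ordinary least-squares fit. A minimal NumPy illustration (the control-point values below are invented, not HCMM data):

```python
import numpy as np

def fit_affine_2d(src, dst):
    """Least-squares 2D affine transform mapping src -> dst control points.

    src, dst: (N, 2) arrays of corresponding control points (N >= 3).
    Returns a 2x3 matrix A such that dst ~ A @ [x, y, 1]^T.
    """
    src = np.asarray(src, float)
    dst = np.asarray(dst, float)
    ones = np.ones((src.shape[0], 1))
    X = np.hstack([src, ones])            # (N, 3) design matrix
    # Solve X @ A.T = dst in the least-squares sense
    A_T, *_ = np.linalg.lstsq(X, dst, rcond=None)
    return A_T.T                          # (2, 3)

# Example: recover a known affine map from noiseless control points
rng = np.random.default_rng(0)
pts = rng.uniform(0, 1300, size=(6, 2))   # control points in a 1300x1100 image
A_true = np.array([[0.98, 0.05, 12.0],
                   [-0.04, 1.01, -7.5]])
mapped = pts @ A_true[:, :2].T + A_true[:, 2]
A_est = fit_affine_2d(pts, mapped)
```

    With six or more well-spread control points, the RMS residual over the points gives a quick estimate of registration accuracy, analogous to the sub-two-pixel RMS reported above.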

  13. MR to CT registration of brains using image synthesis

    NASA Astrophysics Data System (ADS)

    Roy, Snehashis; Carass, Aaron; Jog, Amod; Prince, Jerry L.; Lee, Junghoon

    2014-03-01

    Computed tomography (CT) is the preferred imaging modality for patient dose calculation for radiation therapy. Magnetic resonance (MR) imaging (MRI) is used along with CT to identify brain structures due to its superior soft tissue contrast. Registration of MR and CT is necessary for accurate delineation of the tumor and other structures, and is critical in radiotherapy planning. Mutual information (MI) or its variants are typically used as a similarity metric to register MRI to CT. However, unlike CT, MRI intensity does not have an accepted calibrated intensity scale. Therefore, MI-based MR-CT registration may vary from scan to scan as MI depends on the joint histogram of the images. In this paper, we propose a fully automatic framework for MR-CT registration by synthesizing a synthetic CT image from MRI using a co-registered pair of MR and CT images as an atlas. Patches of the subject MRI are matched to the atlas and the synthetic CT patches are estimated in a probabilistic framework. The synthetic CT is registered to the original CT using a deformable registration and the computed deformation is applied to the MRI. In contrast to most existing methods, we do not need any manual intervention such as picking landmarks or regions of interests. The proposed method was validated on ten brain cancer patient cases, showing 25% improvement in MI and correlation between MR and CT images after registration compared to state-of-the-art registration methods.
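    The mutual information metric discussed above is typically estimated from the joint intensity histogram of the two images. A minimal NumPy sketch (illustrative only; production MR-CT registration uses more elaborate estimators such as Mattes MI):

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Mutual information between two equally shaped images, in nats.

    Estimated from the joint intensity histogram, as in MI-based
    MR-CT registration metrics.
    """
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = hist / hist.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of a
    py = pxy.sum(axis=0, keepdims=True)   # marginal of b
    nz = pxy > 0                          # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

rng = np.random.default_rng(1)
img = rng.normal(size=(64, 64))
noise = rng.normal(size=(64, 64))
# MI of an image with a monotone remapping of itself (mimicking a different
# modality's intensity scale) exceeds its MI with unrelated noise
mi_self = mutual_information(img, np.exp(img))
mi_noise = mutual_information(img, noise)
```

    This also illustrates the abstract's point that MI depends only on the joint histogram, which is why an uncalibrated MR intensity scale can make the metric vary from scan to scan.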

  14. SU-E-J-137: Image Registration Tool for Patient Setup in Korea Heavy Ion Medical Accelerator Center

    SciTech Connect

    Kim, M; Suh, T; Cho, W; Jung, W

    2015-06-15

    Purpose: A potential validation tool for compensating patient positioning error was developed using 2D/3D and 3D/3D image registration. Methods: For 2D/3D registration, digitally reconstructed radiography (DRR) and three-dimensional computed tomography (3D-CT) images were used. The ray-casting algorithm is the most straightforward method for generating a DRR. We adopted the traditional ray-casting method, which finds the intersections of each ray with the voxels of the 3D-CT volume. The similarity between the extracted DRR and the orthogonal image was measured using a normalized mutual information method. Two orthogonal images were acquired from a Cyber-Knife system from the anterior-posterior (AP) and right lateral (RL) views. The 3D-CT and two orthogonal images of an anthropomorphic phantom and a head and neck cancer patient were used in this study. For 3D/3D registration, planning CT and in-room CT images were used. After registration, the translation and rotation parameters were calculated to position a couch movable in six degrees of freedom. Results: Average errors of 2.12 mm ± 0.50 mm for translations and 1.23° ± 0.40° for rotations were obtained by 2D/3D registration using an anthropomorphic Alderson-Rando phantom. In addition, average errors of 0.90 mm ± 0.30 mm for translations and 1.00° ± 0.2° for rotations were obtained by 3D/3D registration using CT image sets. Conclusion: We demonstrated that this validation tool can compensate for patient positioning error. In addition, this research could be the fundamental step for compensating patient positioning error at the first Korea heavy-ion medical accelerator treatment center.
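    The DRR generation step can be illustrated, for an idealized parallel-beam geometry, by summing attenuation along one axis of the CT volume. This is a simplification of the intersection-based ray casting the abstract describes (the toy volume and names are invented):

```python
import numpy as np

def drr_parallel(ct_volume, axis=0):
    """Digitally reconstructed radiograph by parallel-ray line integrals.

    ct_volume: 3D array of attenuation values (e.g. from 3D-CT).
    Each output pixel is the sum of voxels along 'axis', i.e. the
    line integral of a ray orthogonal to the detector plane.
    """
    return ct_volume.sum(axis=axis)

# Toy volume: a bright cube embedded in air
vol = np.zeros((32, 32, 32))
vol[8:24, 8:24, 8:24] = 1.0
ap_view = drr_parallel(vol, axis=0)    # anterior-posterior projection
lat_view = drr_parallel(vol, axis=1)   # lateral projection
```

    A real DRR for a Cyber-Knife geometry would trace diverging rays from the source through the voxel grid; the parallel sum above only conveys the line-integral idea.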

  15. A knowledge-driven quasi-global registration of thoracic-abdominal CT and CBCT for image-guided interventions

    NASA Astrophysics Data System (ADS)

    Zhang, Li; Chefd'hotel, Christophe; Ordy, Vincent; Zheng, Jie; Deng, Xiang; Odry, Benjamin

    2013-03-01

    In this work, we have developed a novel knowledge-driven quasi-global method for fast and robust registration of thoracic-abdominal CT and cone beam CT (CBCT) scans. While the use of CBCT in operating rooms has become common practice, there is an increasing demand for registration of CBCT with pre-operative scans, in many cases CT scans. One of the major challenges of thoracic-abdominal CT/CBCT registration arises from the differing fields of view (FOVs) of the two imaging modalities. The proposed approach utilizes a priori knowledge of anatomy to generate 2D anatomy targeted projection (ATP) images that surrogate the original volumes. The use of lower-dimension surrogate images can significantly reduce the computation cost of similarity evaluation during optimization and makes it practically feasible to perform global-optimization-based registration for image-guided interventional procedures. A priori knowledge about the distribution of local optima on the energy curves is further used to effectively select multiple starting points for registration optimization. 20 clinical data sets were used to validate the method, and the target registration error (TRE) and maximum registration error (MRE) were calculated to compare the performance of the knowledge-driven quasi-global registration against a typical local-search based registration. The local-search based registration failed on 60% of the cases, with an average TRE of 22.9 mm and MRE of 28.1 mm; the knowledge-driven quasi-global registration achieved satisfactory results for all 20 data sets, with an average TRE of 3.5 mm and MRE of 2.6 mm. The average computation time for the knowledge-driven quasi-global registration is 8.7 seconds.

  16. Avoiding Stair-Step Artifacts in Image Registration for GOES-R Navigation and Registration Assessment

    NASA Technical Reports Server (NTRS)

    Grycewicz, Thomas J.; Tan, Bin; Isaacson, Peter J.; De Luccia, Frank J.; Dellomo, John

    2016-01-01

    In developing software for independent verification and validation (IVV) of the Image Navigation and Registration (INR) capability for the Geostationary Operational Environmental Satellite R Series (GOES-R) Advanced Baseline Imager (ABI), we have encountered an image registration artifact which limits the accuracy of image offset estimation at the subpixel scale using image correlation. Where the two images to be registered have the same pixel size, subpixel image registration preferentially selects registration values where the image pixel boundaries are close to aligned. Because of the shape of the curve plotting input displacement against estimated offset, we call this a stair-step artifact. When one image is at a higher resolution than the other, the stair-step artifact is minimized by correlating at the higher resolution. For validating ABI image navigation, GOES-R images are correlated with Landsat-based ground truth maps. To create the ground truth map, the Landsat image is first transformed to the perspective seen from the GOES-R satellite, and then is scaled to an appropriate pixel size. Minimizing processing time motivates choosing the map pixels to be the same size as the GOES-R pixels. At this pixel size, the shift estimate is computed efficiently, but the stair-step artifact is present. If the map pixel is very small, stair-step is not a problem, but image correlation is computation-intensive. This paper describes simulation-based selection of the scale for truth maps for registering GOES-R ABI images.
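    Subpixel offset estimation by image correlation, the operation whose stair-step bias is analyzed above, can be sketched in 1D as a cross-correlation peak refined by parabolic interpolation. This is a generic illustration, not the GOES-R IVV code; the stair-step artifact appears when such estimates are systematically biased toward integer offsets for same-size pixels:

```python
import numpy as np

def subpixel_shift_1d(f, g):
    """Estimate the (possibly fractional) shift of g relative to f by
    FFT cross-correlation plus a parabolic fit around the integer peak."""
    n = len(f)
    corr = np.real(np.fft.ifft(np.conj(np.fft.fft(f)) * np.fft.fft(g)))
    k = int(np.argmax(corr))
    # Parabolic interpolation over the peak and its two neighbours
    c0, c1, c2 = corr[(k - 1) % n], corr[k], corr[(k + 1) % n]
    delta = 0.5 * (c0 - c2) / (c0 - 2.0 * c1 + c2)
    shift = k + delta
    return shift if shift <= n / 2 else shift - n   # unwrap circular shift

# A smooth feature and a copy displaced by a non-integer amount
x = np.arange(64, dtype=float)
f = np.exp(-(x - 32.0) ** 2 / 32.0)
g = np.exp(-(x - 35.3) ** 2 / 32.0)   # true shift: +3.3 pixels
est_shift = subpixel_shift_1d(f, g)
```

    For smooth, well-sampled signals the parabolic refinement recovers the fractional part accurately; for broadband imagery correlated at matched pixel sizes, the estimate clusters near integers, producing the stair-step curve described in the abstract.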

  17. Infrared thermal facial image sequence registration analysis and verification

    NASA Astrophysics Data System (ADS)

    Chen, Chieh-Li; Jian, Bo-Lin

    2015-03-01

    To study the emotional responses of subjects to the International Affective Picture System (IAPS), infrared thermal facial image sequences are preprocessed for registration before further analysis, so that the variance caused by minor and irregular subject movements is reduced. Without affecting the comfort level and inducing minimal harm, this study proposes an infrared thermal facial image sequence registration process that reduces the deviations caused by unconscious head shaking of the subjects. A fixed image for registration is produced through localization of the centroid of the eye region as well as image translation and rotation processes. The thermal image sequence is then automatically registered using the proposed two-stage genetic algorithm. The deviation before and after image registration is quantified by image quality indices. The results show that the infrared thermal image sequence registration process proposed in this study is effective in localizing facial images accurately, which is beneficial to the correlation analysis of psychological information related to the facial area.

  18. The agreement between 3D, standard 2D and triplane 2D speckle tracking: effects of image quality and 3D volume rate

    PubMed Central

    Stöbe, Stephan; Tarr, Adrienn; Pfeiffer, Dietrich; Hagendorff, Andreas

    2014-01-01

    We compared 3D and 2D speckle tracking performed on standard 2D and triplane 2D datasets of normal and pathological left ventricular (LV) wall-motion patterns, focusing on the effect that 3D volume rate (3DVR), image quality and tracking artifacts have on the agreement between 2D and 3D speckle tracking. 37 patients with normal LV function and 18 patients with ischaemic wall-motion abnormalities underwent 2D and 3D echocardiography, followed by offline speckle tracking measurements. The values of 3D global, regional and segmental strain were compared with the standard 2D and triplane 2D strain values. Correlation analysis with the LV ejection fraction (LVEF) was also performed. The 3D and 2D global strain values correlated well in both normally and abnormally contracting hearts, though systematic differences between the two methods were observed. Of the 3D strain parameters, the area strain showed the best correlation with the LVEF. The numerical agreement of 3D and 2D analyses varied significantly with the volume rate and image quality of the 3D datasets. The highest correlation between 2D and 3D peak systolic strain values was found between 3D area and standard 2D longitudinal strain. Regional wall-motion abnormalities were similarly detected by 2D and 3D speckle tracking. 2D speckle tracking of triplane datasets showed similar results to those of conventional 2D datasets. 2D and 3D speckle tracking similarly detect normal and pathological wall-motion patterns. Limited image quality has a significant impact on the agreement between 3D and 2D numerical strain values. PMID:26693303

  19. Multifractal analysis of 2D gray soil images

    NASA Astrophysics Data System (ADS)

    González-Torres, Ivan; Losada, Juan Carlos; Heck, Richard; Tarquis, Ana M.

    2015-04-01

    Soil structure, understood as the spatial arrangement of soil pores, is one of the key factors in soil modelling processes. Geometric properties of individual pores and their morphological parameters can be estimated from thin sections or 3D computed tomography images (Tarquis et al., 2003), but there is no satisfactory method to binarize these images and quantify the complexity of their spatial arrangement (Tarquis et al., 2008; Tarquis et al., 2009; Baveye et al., 2010). The objective of this work was to apply a multifractal technique, the singularity (α) and f(α) spectra, to quantify it without applying any threshold (González-Torres, 2014). Intact soil samples were collected from four horizons of an Argisol, formed on the Tertiary Barreiras group of formations in Pernambuco state, Brazil (Itapirema Experimental Station). The natural vegetation of the region is tropical, coastal rainforest. From each horizon, showing different porosities and spatial arrangements, three adjacent samples were taken, giving a set of twelve samples. The intact soil samples were imaged using an EVS (now GE Medical, London, Canada) MS-8 MicroCT scanner with 45 μm pixel-1 resolution (256x256 pixels). Though some samples required paring to fit the 64 mm diameter imaging tubes, field orientation was maintained. References Baveye, P.C., M. Laba, W. Otten, L. Bouckaert, P. Dello, R.R. Goswami, D. Grinev, A. Houston, Yaoping Hu, Jianli Liu, S. Mooney, R. Pajor, S. Sleutel, A. Tarquis, Wei Wang, Qiao Wei, Mehmet Sezgin. Observer-dependent variability of the thresholding step in the quantitative analysis of soil images and X-ray microtomography data. Geoderma, 157, 51-63, 2010. González-Torres, Iván. Theory and application of multifractal analysis methods in images for the study of soil structure. Master thesis, UPM, 2014. Tarquis, A.M., R.J. Heck, J.B. Grau, J. Fabregat, M.E. Sanchez and J.M. Antón. Influence of Thresholding in Mass and Entropy Dimension of 3-D
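    The multifractal formalism applied here rests on a box-counting partition function Z(q, ε) whose scaling exponents τ(q) yield the α and f(α) spectra via a Legendre transform. A generic box-counting sketch, not the authors' implementation; a uniform image serves as a monofractal sanity check with τ(q) = 2(q − 1):

```python
import numpy as np

def partition_function(img, q, box_sizes):
    """Multifractal partition function Z(q, eps) for a gray-level image.

    Intensities are treated as a measure; mu_i is the normalized
    intensity mass in each eps x eps box (no thresholding needed).
    """
    total = img.sum()
    zs = []
    for eps in box_sizes:
        n = img.shape[0] // eps
        boxes = img[:n * eps, :n * eps].reshape(n, eps, n, eps).sum(axis=(1, 3))
        mu = boxes / total
        mu = mu[mu > 0]
        zs.append(np.sum(mu ** q))
    return np.array(zs)

def tau(img, q, box_sizes):
    """Mass exponent tau(q): slope of log Z(q, eps) against log eps."""
    z = partition_function(img, q, box_sizes)
    return np.polyfit(np.log(box_sizes), np.log(z), 1)[0]

# Sanity check: a uniform image is monofractal with tau(q) = 2(q - 1)
uniform = np.ones((256, 256))
sizes = [2, 4, 8, 16, 32]
t2 = tau(uniform, 2.0, sizes)
```

    For a real soil CT slice, τ(q) is nonlinear in q, and α(q) = dτ/dq with f(α) = qα − τ(q) gives the singularity spectrum used in the study.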

  20. Temporal mammogram image registration using optimized curvilinear coordinates.

    PubMed

    Abdel-Nasser, Mohamed; Moreno, Antonio; Puig, Domenec

    2016-04-01

    Registration of mammograms plays an important role in breast cancer computer-aided diagnosis systems. Radiologists usually compare mammogram images in order to detect abnormalities. The comparison of mammograms requires a registration between them. A temporal mammogram registration method is proposed in this paper. It is based on curvilinear coordinates, which cope with both global and local deformations in the breast area. Temporal mammogram pairs are used to validate the proposed method. After registration, the similarity between the mammograms is maximized, and the distance between manually defined landmarks is decreased. In addition, a thorough comparison with state-of-the-art mammogram registration methods is performed to show its effectiveness.

  1. A 2-D imaging heat-flux gauge

    SciTech Connect

    Noel, B.W.; Borella, H.M.; Beshears, D.L.; Sartory, W.K.; Tobin, K.W.; Williams, R.K.; Turley, W.D.

    1991-07-01

    This report describes a new leadless two-dimensional imaging optical heat-flux gauge. The gauge is made by depositing arrays of thermographic-phosphor (TP) spots onto the faces of a polymethylpentene insulator. In the first section of the report, we describe several gauge configurations and their prototype realizations. A satisfactory configuration is an array of right triangles on each face that overlay to form squares when the gauge is viewed normal to the surface. The next section of the report treats the thermal conductivity of TPs. We set up an experiment using a comparative longitudinal heat-flow apparatus to measure the previously unknown thermal conductivity of these materials. The thermal conductivity of one TP, Y{sub 2}O{sub 3}:Eu, is 0.0137 W/cm{center dot}K over the temperature range from about 300 to 360 K. The theories underlying the time response of TP gauges and their imaging characteristics are discussed in the next section. Then we discuss several laboratory experiments to (1) demonstrate that the TP heat-flux gauge can be used in imaging applications; (2) obtain a quantum yield that indicates what typical optical output signal amplitudes can be obtained from TP heat-flux gauges; and (3) determine whether LANL-designed intensified video cameras have sufficient sensitivity to acquire images from the heat-flux gauges. We obtained positive results from all the measurements. Throughout the text, we note limitations, areas where improvements are needed, and where further research is necessary. 12 refs., 25 figs., 4 tabs.

  2. Non-rigid image registration with SalphaS filters.

    PubMed

    Liao, Shu; Chung, Albert C S

    2008-01-01

    In this paper, based on the SalphaS distributions, we design SalphaS filters and use them as a new feature extraction method for non-rigid medical image registration. In brain MR images, the energy distributions of different frequency bands often exhibit heavy-tailed behavior. Such non-Gaussian behavior is essential for non-rigid image registration but cannot be satisfactorily modeled by conventional Gabor filters. This leads to unsatisfactory modeling of voxels located at the salient regions of the images. To this end, we propose the SalphaS filters for modeling the heavy-tailed behavior of the energy distributions of brain MR images, and show that the Gabor filter is a special case of the SalphaS filter. The maximum response orientation selection criterion is defined for each frequency band to achieve rotation invariance. In our framework, if the brain MR images are already segmented, each voxel can be automatically assigned a weighting factor based on Fisher's separation criterion, and it is shown that the registration performance can be further improved. The proposed method has been compared with the free-form-deformation based method, the Demons algorithm and a method using Gabor features by conducting non-rigid image registration experiments. It is observed that the proposed method achieves the best registration accuracy among all the compared methods in both the simulated and real datasets, obtained from BrainWeb and IBSR respectively.

  3. 3D/2D convertible projection-type integral imaging using concave half mirror array.

    PubMed

    Hong, Jisoo; Kim, Youngmin; Park, Soon-gi; Hong, Jong-Ho; Min, Sung-Wook; Lee, Sin-Doo; Lee, Byoungho

    2010-09-27

    We propose a new method for implementing a 3D/2D convertible feature in projection-type integral imaging by using a concave half mirror array. The concave half mirror array is partially reflective to incident light. The reflected component is modulated by the concave mirror array structure, while the transmitted component is unaffected. With this unique characteristic, 3D/2D conversion, or even the simultaneous display of 3D and 2D images, is possible. The prototype was fabricated by aluminum coating and a polydimethylsiloxane molding process. We experimentally verified the 3D/2D conversion and the display of a 3D image on a 2D background with the fabricated prototype.

  4. Semi-automatic elastic registration on thyroid gland ultrasonic image

    NASA Astrophysics Data System (ADS)

    Xu, Xia; Zhong, Yue; Luo, Yan; Li, Deyu; Lin, Jiangli; Wang, Tianfu

    2007-12-01

    Knowledge of in vivo thyroid volume has both diagnostic and therapeutic importance and could lead to a more precise quantification of the absolute activity contained in the thyroid gland. However, the shape of the thyroid gland is irregular and difficult to calculate. For precise estimation of thyroid volume by ultrasound imaging, this paper presents a novel semiautomatic minutiae matching method for thyroid gland ultrasound images by means of a thin-plate spline model. Registration consists of four basic steps: feature detection, feature matching, mapping function design, and image transformation and resampling. Due to the connectivity of the thyroid gland boundary, we choose an active contour model as the feature detector, and radials from centric points for feature matching. The proposed approach has been used to register thyroid gland ultrasound images. Registration results on the thyroid gland ultrasound images of 18 healthy adults show that this method consumes less time and effort, and is more objective, than algorithms in which landmarks are selected manually.
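    The thin-plate spline warp used for the elastic mapping can be sketched with SciPy's radial basis function interpolator, which supports a thin-plate-spline kernel. The landmark coordinates below are invented for illustration, not ultrasound data:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Corresponding landmarks on two images (e.g. matched boundary minutiae)
src = np.array([[10., 10.], [10., 90.], [90., 10.], [90., 90.], [50., 50.]])
dst = src + np.array([[2., 1.], [1., -2.], [-1., 2.], [2., 2.], [0., 3.]])

# Thin-plate spline warp: interpolates the landmark displacements smoothly,
# minimizing the bending energy of the deformation
tps = RBFInterpolator(src, dst, kernel='thin_plate_spline')

warped_landmarks = tps(src)          # landmarks map exactly onto dst
grid = np.stack(np.meshgrid(np.linspace(0, 100, 5),
                            np.linspace(0, 100, 5)), axis=-1).reshape(-1, 2)
warped_grid = tps(grid)              # dense deformation field sample
```

    With the default smoothing of zero, the TPS interpolates the control points exactly; a positive smoothing value trades exactness for a stiffer, noise-tolerant warp.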

  5. Registration and 3D visualization of large microscopy images

    NASA Astrophysics Data System (ADS)

    Mosaliganti, Kishore; Pan, Tony; Sharp, Richard; Ridgway, Randall; Iyengar, Srivathsan; Gulacy, Alexandra; Wenzel, Pamela; de Bruin, Alain; Machiraju, Raghu; Huang, Kun; Leone, Gustavo; Saltz, Joel

    2006-03-01

    Inactivation of the retinoblastoma gene in mouse embryos causes tissue infiltrations into critical sections of the placenta, which has been shown to affect fetal survivability. Our collaborators in cancer genetics are extremely interested in examining the three-dimensional nature of these infiltrations given a stack of two-dimensional light microscopy images. Three sets of wildtype and mutant placentas were sectioned serially and digitized using a commercial light microscopy scanner. Each individual placenta dataset consisted of approximately 1000 images totaling 700 GB in size, which were registered into a volumetric dataset using the National Library of Medicine's (NIH/NLM) Insight Segmentation and Registration Toolkit (ITK). This paper describes our method for image registration to aid in volume visualization of tissue-level intermixing for both wildtype and Rb - specimens. The registration process faces many challenges arising from the large image sizes, damage during sectioning, staining gradients both within and across sections, and background noise. These issues limit the direct application of standard registration techniques due to frequent convergence to local solutions. In this work, we develop a mixture of automated and semi-automated enhancements with ground-truth validation for the mutual information-based registration algorithm. Our final volume renderings clearly show tissue intermixing differences between wildtype and Rb - specimens which are not obvious prior to registration.

  6. On removing interpolation and resampling artifacts in rigid image registration.

    PubMed

    Aganj, Iman; Yeo, Boon Thye Thomas; Sabuncu, Mert R; Fischl, Bruce

    2013-02-01

    We show that image registration using conventional interpolation and summation approximations of continuous integrals can generally fail because of resampling artifacts. These artifacts negatively affect the accuracy of registration by producing local optima, altering the gradient, shifting the global optimum, and making rigid registration asymmetric. In this paper, after an extensive literature review, we demonstrate the causes of the artifacts by comparing inclusion and avoidance of resampling analytically. We show the sum-of-squared-differences cost function formulated as an integral to be more accurate compared with its traditional sum form in a simple case of image registration. We then discuss aliasing that occurs in rotation, which is due to the fact that an image represented in the Cartesian grid is sampled with different rates in different directions, and propose the use of oscillatory isotropic interpolation kernels, which allow better recovery of true global optima by overcoming this type of aliasing. Through our experiments on brain, fingerprint, and white noise images, we illustrate the superior performance of the integral registration cost function in both the Cartesian and spherical coordinates, and also validate the introduced radial interpolation kernel by demonstrating the improvement in registration.
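    The registration asymmetry caused by resampling that the authors analyze can be demonstrated in a few lines: the discrete-sum SSD depends on which image is interpolated, so registering f to g and g to f produce different cost curves. A 1D sketch with linear interpolation (signal and names are illustrative):

```python
import numpy as np

def ssd_after_resampling(fixed, moving, shift):
    """Sum of squared differences after linearly resampling 'moving'
    by a subpixel 'shift' (the conventional discrete-sum formulation)."""
    x = np.arange(len(fixed), dtype=float)
    resampled = np.interp(x - shift, x, moving)
    return float(np.sum((fixed - resampled) ** 2))

rng = np.random.default_rng(3)
f = rng.normal(size=128)
g = np.interp(np.arange(128.0) - 0.5, np.arange(128.0), f)  # f shifted by 0.5

shifts = np.linspace(0.3, 0.7, 41)
ssd_f_to_g = [ssd_after_resampling(g, f, s) for s in shifts]   # resample f
ssd_g_to_f = [ssd_after_resampling(f, g, -s) for s in shifts]  # resample g
# Resampling f reproduces g exactly at shift 0.5 (zero cost), but resampling
# g smooths it a second time and can never reproduce f: the discrete-sum
# SSD is asymmetric in the two images, as the paper argues.
```

    The integral formulation advocated in the paper removes this asymmetry because the cost no longer depends on which image is sampled on the grid.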

  7. Evaluating the utility of 3D TRUS image information in guiding intra-procedure registration for motion compensation

    NASA Astrophysics Data System (ADS)

    De Silva, Tharindu; Cool, Derek W.; Romagnoli, Cesare; Fenster, Aaron; Ward, Aaron D.

    2014-03-01

    In targeted 3D transrectal ultrasound (TRUS)-guided biopsy, patient and prostate movement during the procedure can cause target misalignments that hinder accurate sampling of pre-planned suspicious tissue locations. Multiple solutions have been proposed for motion compensation via registration of intra-procedural TRUS images to a baseline 3D TRUS image acquired at the beginning of the biopsy procedure. While 2D TRUS images are widely used for intra-procedural guidance, some solutions utilize richer intra-procedural images such as bi- or multi-planar TRUS or 3D TRUS, acquired by specialized probes. In this work, we measured the impact of such richer intra-procedural imaging on motion compensation accuracy, to evaluate the tradeoff between cost and complexity of intra-procedural imaging versus improved motion compensation. We acquired baseline and intra-procedural 3D TRUS images from 29 patients at standard sextant-template biopsy locations. We used the planes extracted from the 3D intra-procedural scans to simulate 2D and 3D information available in different clinically relevant scenarios for registration. The registration accuracy was evaluated by calculating the target registration error (TRE) using manually identified homologous fiducial markers (micro-calcifications). Our results indicate that TRE improves gradually when the number of intra-procedural imaging planes used in registration is increased. Full 3D TRUS information helps the registration algorithm to robustly converge to more accurate solutions. These results can also inform the design of a fail-safe workflow during motion compensation in a system using a tracked 2D TRUS probe, by prescribing rotational acquisitions that can be performed quickly and easily by the physician immediately prior to needle targeting.

  8. 3D non-rigid surface-based MR-TRUS registration for image-guided prostate biopsy

    NASA Astrophysics Data System (ADS)

    Sun, Yue; Qiu, Wu; Romagnoli, Cesare; Fenster, Aaron

    2014-03-01

    Two-dimensional (2D) transrectal ultrasound (TRUS) guided prostate biopsy is the standard approach for definitive diagnosis of prostate cancer (PCa). However, due to the lack of image contrast of prostate tumors needed to clearly visualize early-stage PCa, prostate biopsy often results in false negatives, requiring repeat biopsies. Magnetic Resonance Imaging (MRI) has been considered to be a promising imaging modality for noninvasive identification of PCa, since it can provide a high sensitivity and specificity for the detection of early stage PCa. Our main objective is to develop and validate a registration method of 3D MR-TRUS images, allowing generation of volumetric 3D maps of targets identified in 3D MR images to be biopsied using 3D TRUS images. Our registration method first makes use of an initial rigid registration of 3D MR images to 3D TRUS images using 6 manually placed approximately corresponding landmarks in each image. Following the manual initialization, two prostate surfaces are segmented from 3D MR and TRUS images and then non-rigidly registered using a thin-plate spline (TPS) algorithm. The registration accuracy was evaluated using 4 patient images by measuring target registration error (TRE) of manually identified corresponding intrinsic fiducials (calcifications and/or cysts) in the prostates. Experimental results show that the proposed method yielded an overall mean TRE of 2.05 mm, which compares favorably with the clinical requirement of an error of less than 2.5 mm.
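    Target registration error, the validation metric used here and in several of the studies above, is simply the Euclidean distance between corresponding fiducials after registration. A minimal sketch (the fiducial coordinates are invented):

```python
import numpy as np

def target_registration_error(fixed_fiducials, transformed_fiducials):
    """TRE: per-fiducial Euclidean distance (in mm) between corresponding
    points, e.g. calcifications/cysts identified in MR and registered TRUS."""
    d = np.asarray(fixed_fiducials, float) - np.asarray(transformed_fiducials, float)
    return np.linalg.norm(d, axis=1)

# Illustrative 3D fiducial pairs (mm); values are made up
mr_fid = np.array([[10.0, 22.0, 31.0], [40.5, 18.0, 27.0], [25.0, 33.5, 29.0]])
trus_fid = mr_fid + np.array([[1.0, -1.5, 0.5], [-0.5, 1.0, -1.0], [0.5, 0.5, 2.0]])

tre = target_registration_error(mr_fid, trus_fid)
mean_tre = tre.mean()   # compare against a clinical tolerance, e.g. 2.5 mm
```

    Reporting the mean (and often the maximum) TRE over all identified fiducials is the standard way such clinical tolerances are checked.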

  9. The adaptive FEM elastic model for medical image registration.

    PubMed

    Zhang, Jingya; Wang, Jiajun; Wang, Xiuying; Feng, Dagan

    2014-01-01

    This paper proposes an adaptive mesh refinement strategy for the finite element method (FEM) based elastic registration model. The signature matrix for mesh refinement takes into account the regional intensity variance and the local deformation displacement. The regional intensity variance reflects detailed information for improving registration accuracy and the deformation displacement fine-tunes the mesh refinement for a more efficient algorithm. The gradient flows of two different similarity metrics, the sum of the squared difference and the spatially encoded mutual information for the mono-modal and multi-modal registrations, are used to derive external forces to drive the model to the equilibrium state. We compared our approach to three other models: (1) the conventional multi-resolution FEM registration algorithm; (2) the FEM elastic method that uses variation information for mesh refinement; and (3) the robust block matching based registration. Comparisons among different methods in a dataset with 20 CT image pairs upon artificial deformation demonstrate that our registration method achieved significant improvement in accuracies. Experimental results in another dataset of 40 real medical image pairs for both mono-modal and multi-modal registrations also show that our model outperforms the other three models in its accuracy.

  10. Microsecond time-resolved 2D X-ray imaging

    NASA Astrophysics Data System (ADS)

    Sarvestani, A.; Sauer, N.; Strietzel, C.; Besch, H. J.; Orthen, A.; Pavel, N.; Walenta, A. H.; Menk, R. H.

    2001-06-01

    A method is presented which makes it possible to take two-dimensional X-ray images of repetitive processes with recording times in the sub-microsecond range. Various measurements have been performed with a recently introduced novel two-dimensional single photon counter which has been slightly modified in order to determine the exact arrival time of each detected photon. For this purpose a special clock signal is synchronized with the process and is digitized contemporaneously with each event. This technique can be applied even with rate-limited detectors and low-flux sources, since, unlike in conventional methods that use chopped beams or gated read-out electronics, all photons are used for the image formation. For the measurements, rapidly moving mechanical systems and conventional X-ray sources have been used, reaching time resolutions of some 10 μs. The technique presented here opens a variety of new biological, medical and industrial applications, which will be discussed. As a first application example, three-dimensional tomographic reconstructions of rapidly rotating objects (4000 turns/min) are presented.
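    The core idea, assigning every detected photon to a phase bin of the repetitive process using a synchronized clock, can be sketched as follows. This is a toy model, not the detector electronics; the 15 ms period corresponds to the 4000 turns/min example:

```python
import numpy as np

def phase_binned_counts(timestamps, period, n_bins):
    """Accumulate single-photon events of a repetitive process into
    phase bins, as in time-resolved single-photon-counting imaging.

    timestamps: arrival times of detected photons (same unit as period).
    Returns photon counts per phase bin; no photon is discarded."""
    phase = np.mod(timestamps, period) / period        # phase in [0, 1)
    bins = (phase * n_bins).astype(int)
    return np.bincount(bins, minlength=n_bins)

# Toy example: a source visible only during the first quarter of each cycle
rng = np.random.default_rng(2)
period = 15e-3                          # 4000 turns/min -> 15 ms per turn
t = rng.uniform(0, 1.0, size=10_000)    # candidate photon arrival times (s)
t = t[np.mod(t, period) < period / 4]   # keep photons from the 'bright' phase
counts = phase_binned_counts(t, period, 8)
```

    In the real system each photon also carries a 2D position, so the same binning yields one image per phase bin; because every photon contributes, even rate-limited detectors and low-flux sources can be used.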

  11. HipMatch: an object-oriented cross-platform program for accurate determination of cup orientation using 2D-3D registration of single standard X-ray radiograph and a CT volume.

    PubMed

    Zheng, Guoyan; Zhang, Xuan; Steppacher, Simon D; Murphy, Stephen B; Siebenrock, Klaus A; Tannast, Moritz

    2009-09-01

    The widely used procedure of evaluating cup orientation following total hip arthroplasty from a single standard anteroposterior (AP) radiograph is known to be inaccurate, largely due to the wide variability in individual pelvic orientation relative to the X-ray plate. 2D-3D image registration methods have been introduced for an accurate determination of the post-operative cup alignment with respect to an anatomical reference extracted from the CT data. Although encouraging results have been reported, their widespread use in clinical routine is still limited. This may be explained by their requirement for a CAD model of the prosthesis, which is often difficult to obtain from the manufacturer due to proprietary issues, and by their requirement for either multiple radiographs or a radiograph-specific calibration, neither of which is available for most retrospective studies. To address these issues, we developed and validated an object-oriented cross-platform program called "HipMatch", in which a hybrid 2D-3D registration scheme combining an iterative landmark-to-ray registration with a 2D-3D intensity-based registration was implemented to estimate a rigid transformation between a pre-operative CT volume and the post-operative X-ray radiograph for a precise estimation of cup alignment. No CAD model of the prosthesis is required. Quantitative and qualitative results evaluated on cadaveric and clinical datasets are given, which indicate the robustness and accuracy of the program. HipMatch is written in the object-oriented programming language C++ using the cross-platform toolkits Qt (TrollTech, Oslo, Norway), VTK, and Coin3D, and is portable to any platform. PMID:19328585

  12. Diffeomorphic Registration of Images with Variable Contrast Enhancement

    PubMed Central

    Janssens, Guillaume; Jacques, Laurent; Orban de Xivry, Jonathan; Geets, Xavier; Macq, Benoit

    2011-01-01

    Nonrigid image registration is widely used to estimate tissue deformations in highly deformable anatomies. Among the existing methods, nonparametric registration algorithms such as optical flow, or Demons, usually have the advantage of being fast and easy to use. Recently, a diffeomorphic version of the Demons algorithm was proposed. It has the advantage of producing invertible displacement fields, a necessary condition for the deformations to be physically plausible. However, such methods are based on the matching of intensities and are not suitable for registering images with different contrast enhancement. In such cases, a registration method based on the local phase, such as the Morphons, has to be used. In this paper, a diffeomorphic version of the Morphons registration method is proposed and compared to conventional Morphons, Demons, and diffeomorphic Demons. The method is validated in the context of radiotherapy for lung cancer patients on several 4D respiratory-correlated CT scans of the thorax with and without variable contrast enhancement. PMID:21197460
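The ingredient that makes a Demons-style update diffeomorphic is exponentiating a velocity field by scaling and squaring, so the resulting displacement is invertible by construction. A minimal sketch on a 2-D grid, using nearest-neighbour self-composition for brevity (a real implementation would use linear interpolation; all names are ours):

```python
import numpy as np

def exp_field(vx, vy, n=6):
    """Exponentiate a stationary velocity field by scaling and squaring:
    divide by 2**n, then self-compose ("square") n times.
    Returns the displacement field of the resulting diffeomorphism."""
    ux, uy = vx / 2**n, vy / 2**n
    h, w = vx.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    for _ in range(n):
        # compose u with itself: u(x) <- u(x) + u(x + u(x)),
        # sampled here with nearest-neighbour lookup for simplicity
        xi = np.clip(np.rint(xs + ux).astype(int), 0, w - 1)
        yi = np.clip(np.rint(ys + uy).astype(int), 0, h - 1)
        ux = ux + ux[yi, xi]
        uy = uy + uy[yi, xi]
    return ux, uy

# A constant velocity field exponentiates to (essentially) itself.
vx = np.full((6, 6), 0.5); vy = np.zeros((6, 6))
ux, uy = exp_field(vx, vy)
```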

  13. Robust methods for automatic image-to-world registration in cone-beam CT interventional guidance

    SciTech Connect

    Dang, H.; Otake, Y.; Schafer, S.; Stayman, J. W.; Kleinszig, G.; Siewerdsen, J. H.

    2012-10-15

    Purpose: Real-time surgical navigation relies on accurate image-to-world registration to align the coordinate systems of the image and patient. Conventional manual registration can present a workflow bottleneck and is prone to manual error and intraoperator variability. This work reports alternative means of automatic image-to-world registration, each method involving an automatic registration marker (ARM) used in conjunction with C-arm cone-beam CT (CBCT). The first involves a Known-Model registration method in which the ARM is a predefined tool, and the second is a Free-Form method in which the ARM is freely configurable. Methods: Studies were performed using a prototype C-arm for CBCT and a surgical tracking system. A simple ARM was designed with markers comprising a tungsten sphere within infrared reflectors to permit detection of markers in both x-ray projections and by an infrared tracker. The Known-Model method exercised a predefined specification of the ARM in combination with 3D-2D registration to estimate the transformation that yields the optimal match between forward projection of the ARM and the measured projection images. The Free-Form method localizes markers individually in projection data by a robust Hough transform approach extended from previous work, backprojected to 3D image coordinates based on C-arm geometric calibration. Image-domain point sets were transformed to world coordinates by rigid-body point-based registration. The robustness and registration accuracy of each method was tested in comparison to manual registration across a range of body sites (head, thorax, and abdomen) of interest in CBCT-guided surgery, including cases with interventional tools in the radiographic scene. Results: The automatic methods exhibited similar target registration error (TRE) and were comparable or superior to manual registration for placement of the ARM within ∼200 mm of C-arm isocenter. Marker localization in projection data was robust across all
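The final step above, rigid-body point-based registration of the backprojected marker set to tracker coordinates, has a closed-form least-squares solution via SVD (the Kabsch/Procrustes method). A generic sketch, not the authors' code; all names are ours:

```python
import numpy as np

def rigid_point_register(P, Q):
    """Least-squares rigid transform (R, t) mapping point set P onto Q,
    solved in closed form via SVD (Kabsch), with a reflection guard."""
    cp, cq = P.mean(0), Q.mean(0)
    H = (P - cp).T @ (Q - cq)               # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # avoid improper rotations
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cq - R @ cp
    return R, t

# Demo: recover a known rotation about z plus a translation.
rng = np.random.default_rng(0)
P = rng.normal(size=(10, 3))
theta = 0.3
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0, 0.0, 1.0]])
Q = P @ Rz.T + np.array([5.0, -2.0, 1.0])
R, t = rigid_point_register(P, Q)
```

With noisy marker localizations the same formula still minimizes the sum of squared point distances, which is what target registration error then quantifies at other locations.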

  15. An Iterative Image Registration Algorithm by Optimizing Similarity Measurement.

    PubMed

    Chu, Wei; Ma, Li; Song, John; Vorburger, Theodore

    2010-01-01

    A new registration algorithm based on Newton-Raphson iteration is proposed to align images with rigid body transformation. A set of transformation parameters consisting of translation in x and y and rotation angle around z is calculated by optimizing a specified similarity metric using the Newton-Raphson method. This algorithm has been tested by registering and correlating pairs of topography measurements of nominally identical NIST Standard Reference Material (SRM 2461) standard cartridge cases, and very good registration accuracy has been obtained.
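The Newton-Raphson update at the heart of this algorithm can be illustrated on a single transformation parameter; the paper optimizes three (x- and y-translation and z-rotation) jointly with the vector form of the same update. The 1-D stand-in metric below is ours, with derivatives taken by central finite differences:

```python
import numpy as np

def newton_raphson(f, x0, steps=20, h=1e-5):
    """Find a stationary point of the similarity metric f by Newton-Raphson:
    x <- x - f'(x) / f''(x), with central finite-difference derivatives."""
    x = x0
    for _ in range(steps):
        d1 = (f(x + h) - f(x - h)) / (2 * h)
        d2 = (f(x + h) - 2 * f(x) + f(x - h)) / h**2
        x = x - d1 / d2
    return x

# 1-D stand-in for a similarity metric: a smooth cost over the shift
# parameter with its optimum at shift = 1.25 (a toy function, not SRM data).
def cost(shift, true_shift=1.25):
    return (shift - true_shift)**2 + 0.1 * (shift - true_shift)**4

x = newton_raphson(cost, x0=0.0)
```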

  16. Quantitative validation of 3D image registration techniques

    NASA Astrophysics Data System (ADS)

    Holton Tainter, Kerrie S.; Taneja, Udita; Robb, Richard A.

    1995-05-01

    Multimodality images obtained from different medical imaging systems such as magnetic resonance (MR), computed tomography (CT), ultrasound (US), positron emission tomography (PET), and single photon emission computed tomography (SPECT) provide largely complementary characteristic or diagnostic information. Therefore, it is an important research objective to "fuse" or combine this complementary data into a composite form which would provide synergistic information about the objects under examination. An important first step in the use of complementary fused images is 3D image registration, in which multimodality images are brought into spatial alignment so that the point-to-point correspondence between image data sets is known. Current research in the field of multimodality image registration has resulted in the development and implementation of several different registration algorithms, each with its own set of requirements and parameters. Our research has focused on the development of a general paradigm for measuring, evaluating and comparing the performance of different registration algorithms. Rather than evaluating the results of one algorithm under a specific set of conditions, we suggest a general approach to validation using simulation experiments, where the exact spatial relationship between data sets is known, along with phantom data, to characterize the behavior of an algorithm via a set of quantitative image measurements. This behavior may then be related to the algorithm's performance with real patient data, where the exact spatial relationship between multimodality images is unknown. Current results indicate that our approach is general enough to apply to several different registration algorithms. Our methods are useful for understanding the different sources of registration error and for comparing the results between different algorithms.

  17. Biomechanical based image registration for head and neck radiation treatment

    NASA Astrophysics Data System (ADS)

    Al-Mayah, Adil; Moseley, Joanne; Hunter, Shannon; Velec, Mike; Chau, Lily; Breen, Stephen; Brock, Kristy

    2010-02-01

    Deformable image registration of four head and neck cancer patients was conducted using a biomechanics-based model. Patient-specific 3D finite element models were developed using CT and cone beam CT image data of the planning session and a radiation treatment session. The model consists of seven vertebrae (C1 to C7), the mandible, the larynx, the left and right parotid glands, the tumor, and the body. Different combinations of boundary conditions were applied in the model in order to find the configuration with a minimum registration error. Each vertebra in the planning session is individually aligned with its correspondence in the treatment session. Rigid alignment is used for each individual vertebra and for the mandible, since deformation is not expected in the bones. In addition, the effect of morphological differences in the external body between the two image sessions is investigated. The accuracy of the registration is evaluated using the tumor and the left and right parotid glands by comparing the calculated Dice similarity index of these structures following deformation against their true surfaces defined in the image of the second session. The registration improves when the vertebrae and mandible are aligned in the two sessions, with highest Dice indices of 0.86 ± 0.08, 0.84 ± 0.11, and 0.89 ± 0.04 for the tumor, left parotid, and right parotid glands, respectively. The accuracy of the center-of-mass location of the tumor and parotid glands is also improved by deformable image registration: the errors decrease from 4.0 ± 1.1, 3.4 ± 1.5, and 3.8 ± 0.9 mm using rigid registration to 2.3 ± 1.0, 2.5 ± 0.8, and 2.0 ± 0.9 mm with deformable registration when alignment of the vertebrae and mandible is conducted in addition to the surface projection of the body.
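The Dice similarity index used here as the evaluation measure is straightforward to compute from binary masks (a generic definition, not the authors' code):

```python
import numpy as np

def dice(a, b):
    """Dice similarity index between two binary masks:
    2|A ∩ B| / (|A| + |B|). 1.0 means perfect overlap, 0.0 none."""
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

# Two 4x4 squares overlapping in a 2x2 corner: Dice = 2*4 / (16+16) = 0.25.
a = np.zeros((10, 10), bool); a[2:6, 2:6] = True
b = np.zeros((10, 10), bool); b[4:8, 4:8] = True
d = dice(a, b)
```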

  18. Improved image registration by sparse patch-based deformation estimation.

    PubMed

    Kim, Minjeong; Wu, Guorong; Wang, Qian; Lee, Seong-Whan; Shen, Dinggang

    2015-01-15

    Despite intensive efforts for decades, deformable image registration is still a challenging problem due to the potentially large anatomical differences across individual images, which limit registration performance. Fortunately, this issue can be alleviated if a good initial deformation is provided for the two images under registration, often termed the moving subject and the fixed template, respectively. In this work, we present a novel patch-based initial deformation prediction framework for improving the performance of existing registration algorithms. Our main idea is to estimate the initial deformation between subject and template in a patch-wise fashion by using the sparse representation technique. We argue that two image patches should follow the same deformation toward the template image if their patch-wise appearance patterns are similar. To this end, our framework consists of two stages, i.e., the training stage and the application stage. In the training stage, we register all training images to the pre-selected template, such that the deformation of each training image with respect to the template is known. In the application stage, we apply the following four steps to efficiently calculate the initial deformation field for the new test subject: (1) we pick a small number of key points in the distinctive regions of the test subject; (2) for each key point, we extract a local patch and form a coupled appearance-deformation dictionary from training images, where each dictionary atom consists of an image intensity patch together with its respective local deformation; (3) a small set of training image patches in the coupled dictionary is selected to represent the image patch of each subject key point by sparse representation. Then, we can predict the initial deformation for each subject key point by propagating the pre-estimated deformations on the selected training patches with the same sparse representation coefficients; and (4) we
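The coupled-dictionary idea in steps (2)-(3) can be caricatured in a few lines: select atoms whose appearance matches the subject patch, solve for their coefficients, and apply the same coefficients to the paired deformation atoms. True sparse coding (e.g. an L1-regularized solver) would replace the greedy selection below; the toy dictionary and all names are ours:

```python
import numpy as np

def sparse_predict(patch, D_app, D_def, k=2):
    """Pick the k appearance atoms most correlated with the subject patch,
    solve least squares for their coefficients, then propagate the SAME
    coefficients to the paired deformation atoms (a greedy stand-in for
    true sparse coding)."""
    idx = np.argsort(-np.abs(D_app.T @ patch))[:k]
    coef, *_ = np.linalg.lstsq(D_app[:, idx], patch, rcond=None)
    return D_def[:, idx] @ coef

# Toy coupled dictionary: 3 appearance atoms (4-dim) paired with
# 3 local deformations (2-dim displacement vectors).
D_app = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1], [0, 0, 0]], float)
D_def = np.array([[1.0, 0.0, 2.0], [0.0, 1.0, 0.0]])
patch = np.array([0.5, 0.5, 0.0, 0.0])   # blend of atoms 0 and 1
u = sparse_predict(patch, D_app, D_def)
```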

  19. Registration of multitemporal aerial optical images using line features

    NASA Astrophysics Data System (ADS)

    Zhao, Chenyang; Goshtasby, A. Ardeshir

    2016-07-01

    Registration of multitemporal images is generally considered difficult because scene changes can occur between the times the images are obtained. Since the changes are mostly radiometric in nature, features are needed that are insensitive to radiometric differences between the images. Lines are geometric features that represent straight edges of rigid man-made structures. Because such structures rarely change over time, lines represent stable geometric features that can be used to register multitemporal remote sensing images. An algorithm to establish correspondence between lines in two images of a planar scene is introduced and formulas to relate the parameters of a homography transformation to the parameters of corresponding lines in images are derived. Results of the proposed image registration on various multitemporal images are presented and discussed.
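The basic relation between line parameters under a homography follows from point incidence: if points map as x' ~ Hx and a line satisfies lᵀx = 0, then the mapped line is l' ~ H⁻ᵀ l. A minimal sketch (names ours, not the paper's derivation of the full parameter formulas):

```python
import numpy as np

def map_line(H, line):
    """Map a homogeneous line l = (a, b, c) for ax + by + c = 0 through a
    homography H acting on points: l' ~ inv(H).T @ l, since incidence
    l.x = 0 must be preserved. Result is normalized to unit length."""
    lp = np.linalg.inv(H).T @ line
    return lp / np.linalg.norm(lp)

# Pure translation by (2, 3): the line x = 1 should map to x = 3.
H = np.array([[1.0, 0.0, 2.0],
              [0.0, 1.0, 3.0],
              [0.0, 0.0, 1.0]])
l = np.array([1.0, 0.0, -1.0])   # x - 1 = 0
lp = map_line(H, l)
```

Every point on the mapped line, e.g. (3, 5), satisfies lp · (3, 5, 1) = 0, which is the constraint the paper exploits to relate homography parameters to corresponding line parameters.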

  20. PCA-based groupwise image registration for quantitative MRI.

    PubMed

    Huizinga, W; Poot, D H J; Guyader, J-M; Klaassen, R; Coolen, B F; van Kranenburg, M; van Geuns, R J M; Uitterdijk, A; Polfliet, M; Vandemeulebroucke, J; Leemans, A; Niessen, W J; Klein, S

    2016-04-01

    Quantitative magnetic resonance imaging (qMRI) is a technique for estimating quantitative tissue properties, such as the T1 and T2 relaxation times, apparent diffusion coefficient (ADC), and various perfusion measures. This estimation is achieved by acquiring multiple images with different acquisition parameters (or at multiple time points after injection of a contrast agent) and by fitting a qMRI signal model to the image intensities. Image registration is often necessary to compensate for misalignments due to subject motion and/or geometric distortions caused by the acquisition. However, large differences in image appearance make accurate image registration challenging. In this work, we propose a groupwise image registration method for compensating misalignment in qMRI. The groupwise formulation of the method eliminates the requirement of choosing a reference image, thus avoiding a registration bias. The method minimizes a cost function that is based on principal component analysis (PCA), exploiting the fact that intensity changes in qMRI can be described by a low-dimensional signal model, but not requiring knowledge on the specific acquisition model. The method was evaluated on 4D CT data of the lungs, and both real and synthetic images of five different qMRI applications: T1 mapping in a porcine heart, combined T1 and T2 mapping in carotid arteries, ADC mapping in the abdomen, diffusion tensor mapping in the brain, and dynamic contrast-enhanced mapping in the abdomen. Each application is based on a different acquisition model. The method is compared to a mutual information-based pairwise registration method and four other state-of-the-art groupwise registration methods. Registration accuracy is evaluated in terms of the precision of the estimated qMRI parameters, overlap of segmented structures, distance between corresponding landmarks, and smoothness of the deformation. In all qMRI applications the proposed method performed better than or equally well as
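The core of the PCA-based idea is that, when the series is well aligned, intensities across the group follow a low-dimensional signal model, so the correlation matrix of the image stack has only a few large eigenvalues. The sketch below uses the sum of trailing eigenvalues as a misalignment cost; it is a simplified stand-in for the published metric, and all names are ours:

```python
import numpy as np

def pca_dispersion(stack):
    """stack: (n_images, n_voxels). Cost = sum of all correlation-matrix
    eigenvalues beyond the first mode. Low when the group of images is
    explained by a (here one-dimensional) signal model, i.e. well aligned."""
    K = np.corrcoef(stack)
    lam = np.sort(np.linalg.eigvalsh(K))[::-1]
    return lam[1:].sum()

# Aligned: every image is a scaled copy of one signal -> rank-1 correlation.
base = np.linspace(0, 1, 50)
aligned = np.vstack([(i + 1) * base for i in range(4)])

# "Misaligned" stand-in: scramble the voxel correspondence in 3 images.
shuffled = aligned.copy()
rng = np.random.default_rng(1)
for row in shuffled[1:]:
    rng.shuffle(row)

cost_aligned = pca_dispersion(aligned)
cost_shuffled = pca_dispersion(shuffled)
```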

  2. Automatic image registration performance for two different CBCT systems; variation with imaging dose

    NASA Astrophysics Data System (ADS)

    Barber, J.; Sykes, J. R.; Holloway, L.; Thwaites, D. I.

    2014-03-01

    The performance of an automatic image registration algorithm was compared on image sets collected with two commercial CBCT systems, and the relationship with imaging dose was explored. CBCT images of a CIRS Virtually Human Male Pelvis phantom (VHMP) were collected on Varian TrueBeam/OBI and Elekta Synergy/XVI linear accelerators, across a range of mAs settings. Each CBCT image was registered 100 times, with random initial offsets introduced. Image registration was performed using the grey value correlation ratio algorithm in the Elekta XVI software, to a mask of the prostate volume with 5 mm expansion. Residual registration errors were calculated after correcting for the initial introduced phantom set-up error. Registration performance with the OBI images was similar to that of XVI. There was a clear dependence on imaging dose for the XVI images, with residual errors increasing below 4 mGy. It was not possible to acquire images with doses lower than ~5 mGy with the OBI system and no evidence of reduced performance was observed at this dose. Registration failures (maximum target registration error > 3.6 mm on the surface of a 30 mm sphere) occurred in 5% to 9% of registrations except for the lowest dose XVI scan (31%). The uncertainty in automatic image registration with both OBI and XVI images was found to be adequate for clinical use within a normal range of acquisition settings.

  3. Mass Preserving Registration for Heart MR Images

    PubMed Central

    Zhu, Lei; Haker, Steven; Tannenbaum, Allen

    2013-01-01

    This paper presents a new algorithm for non-rigid registration between two doubly-connected regions. Our algorithm is based on harmonic analysis and the theory of optimal mass transport. It assumes an underlying continuum model, in which the total amount of mass is exactly preserved during the transformation of tissues. We use a finite element approach to numerically implement the algorithm. PMID:16685954

  5. Joint registration and super-resolution with omnidirectional images.

    PubMed

    Arican, Zafer; Frossard, Pascal

    2011-11-01

    This paper addresses the reconstruction of high-resolution omnidirectional images from multiple low-resolution images with inexact registration. When omnidirectional images from low-resolution vision sensors can be uniquely mapped on the 2-sphere, such a reconstruction can be described as a transform-domain super-resolution problem in a spherical imaging framework. We describe how several spherical images with arbitrary rotations in the SO(3) rotation group contribute to the reconstruction of a high-resolution image with the help of the spherical Fourier transform (SFT). As low-resolution images might not be perfectly registered in practice, the impact of inaccurate alignment on the transform coefficients is analyzed. We then cast the joint registration and super-resolution problem as a total least-squares norm minimization problem in the SFT domain. An ℓ1-regularized total least-squares problem is considered and solved efficiently by interior point methods. Experiments with synthetic and natural images show that the proposed methods lead to effective reconstruction of high-resolution images even when large registration errors exist in the low-resolution images. The quality of the reconstructed images also increases rapidly with the number of low-resolution images, which demonstrates the benefits of the proposed solution in super-resolution schemes. Finally, we highlight the benefit of the additional regularization constraint, which clearly leads to reduced noise and improved reconstruction quality.

  6. Intraoperative ultrasound to stereocamera registration using interventional photoacoustic imaging

    NASA Astrophysics Data System (ADS)

    Vyas, Saurabh; Su, Steven; Kim, Robert; Kuo, Nathanael; Taylor, Russell H.; Kang, Jin U.; Boctor, Emad M.

    2012-02-01

    There are approximately 6000 hospitals in the United States, of which approximately 5400 employ minimally invasive surgical robots for a variety of procedures. Furthermore, 95% of these robots require extensive registration before they can be fitted into the operating room. These "registrations" are performed by surgical navigation systems, which allow the surgical tools, the robot, and the surgeon to be synchronized so that they operate in concert. The most common surgical navigation modalities include electromagnetic (EM) tracking and optical tracking. Currently, these navigation systems are large, intrusive, come with a steep learning curve, require sacrifices on the part of the attending medical staff, and are quite expensive (since they require several components). Recently, photoacoustic (PA) imaging has become a practical and promising new medical imaging technology. PA imaging only requires the minimal equipment standard with most modern ultrasound (US) imaging systems as well as a common laser source. In this paper, we demonstrate that given a PA imaging system, as well as a stereocamera (SC), the registration between the US image of a particular anatomy and the SC image of the same anatomy can be obtained with reliable accuracy. In our experiments, we collected data for N = 80 trials of sample 3D US and SC coordinates. We then computed the registration between the SC and the US coordinates. Upon validation, the mean error and standard deviation between the predicted sample coordinates and the corresponding ground truth coordinates were found to be 3.33 mm and 2.20 mm respectively.

  7. Simultaneous registration of multiple images: similarity metrics and efficient optimization.

    PubMed

    Wachinger, Christian; Navab, Nassir

    2013-05-01

    We address the alignment of a group of images with simultaneous registration. To this end, we provide further insights into a recently introduced framework for multivariate similarity measures, referred to as accumulated pair-wise estimates (APE), and derive efficient optimization methods for it. More specifically, we show a strict mathematical deduction of APE from a maximum-likelihood framework and establish a connection to the congealing framework. This is only possible after an extension of the congealing framework with neighborhood information. Moreover, we address the increased computational complexity of simultaneous registration by deriving efficient gradient-based optimization strategies for APE: Gauss-Newton and the efficient second-order minimization (ESM). In addition to SSD, we present the use of intrinsically non-squared similarity measures in this least-squares optimization framework. The fundamental assumption of ESM, the approximation of the perfectly aligned moving image through the fixed image, limits its application to monomodal registration. We therefore incorporate recently proposed structural representations of images which allow us to perform multimodal registration with ESM. Finally, we evaluate the performance of the optimization strategies with respect to the similarity measures, leading to very good results for ESM. The extension to multimodal registration is in this context very interesting because it offers further possibilities for evaluations, due to publicly available datasets with ground-truth alignment.
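The basic shape of an accumulated pair-wise measure is easy to state: the groupwise cost is the accumulation of a pairwise similarity over all image pairs. A minimal sketch with an SSD kernel (illustrative only; the framework admits other kernels, and the function name is ours):

```python
import numpy as np

def ape_ssd(images):
    """Accumulated pair-wise estimates (APE) with an SSD kernel:
    groupwise cost = sum of pairwise SSDs over all unordered image pairs."""
    n = len(images)
    return sum(np.sum((images[i] - images[j]) ** 2)
               for i in range(n) for j in range(i + 1, n))

# Three 4x4 images: pairs (0,1) and (0,2) each differ by 1 at 16 pixels.
imgs = [np.zeros((4, 4)), np.ones((4, 4)), np.ones((4, 4))]
cost = ape_ssd(imgs)
```

Simultaneous registration then minimizes this cost over all transformation parameters at once, which is where the Gauss-Newton and ESM strategies discussed above come in.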

  8. Multi-image registration for an enhanced vision system

    NASA Astrophysics Data System (ADS)

    Hines, Glenn D.; Rahman, Zia-ur; Jobson, Daniel J.; Woodell, Glenn A.

    2003-08-01

    An Enhanced Vision System (EVS) utilizing multi-sensor image fusion is currently under development at the NASA Langley Research Center. The EVS will provide enhanced images of the flight environment to assist pilots in poor visibility conditions. Multi-spectral images obtained from a short wave infrared (SWIR), a long wave infrared (LWIR), and a color visible band CCD camera, are enhanced and fused using the Retinex algorithm. The images from the different sensors do not have a uniform data structure: the three sensors not only operate at different wavelengths, but they also have different spatial resolutions, optical fields of view (FOV), and bore-sighting inaccuracies. Thus, in order to perform image fusion, the images must first be co-registered. Image registration is the task of aligning images taken at different times, from different sensors, or from different viewpoints, so that all corresponding points in the images match. In this paper, we present two methods for registering multiple multi-spectral images. The first method performs registration using sensor specifications to match the FOVs and resolutions directly through image resampling. In the second method, registration is obtained through geometric correction based on a spatial transformation defined by user selected control points and regression analysis.
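The second method's geometric correction, a spatial transformation regressed from user-selected control points, can be sketched as a least-squares affine fit (a generic illustration, not the NASA code; names ours):

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares affine transform from control-point pairs: solves
    [x y 1] A = [x' y'] for the 3x2 matrix A by linear regression."""
    X = np.hstack([src, np.ones((len(src), 1))])
    A, *_ = np.linalg.lstsq(X, dst, rcond=None)
    return A

# Control points related by scale 2 and translation (10, -5).
src = np.array([[0, 0], [1, 0], [0, 1], [1, 1]], float)
dst = src * 2.0 + np.array([10.0, -5.0])
A = fit_affine(src, dst)
```

With more than three non-collinear control points the fit is overdetermined, and the regression averages out small point-picking errors, which is the point of using regression analysis here.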

  10. Cross Correlation versus Normalized Mutual Information on Image Registration

    NASA Technical Reports Server (NTRS)

    Tan, Bin; Tilton, James C.; Lin, Guoqing

    2016-01-01

    This is the first study to quantitatively assess and compare the cross correlation and normalized mutual information methods used to register images at the subpixel scale. The study shows that the normalized mutual information method is less sensitive than cross correlation to unaligned edges caused by spectral response differences. This characteristic makes normalized mutual information a better candidate for band-to-band registration. Improved band-to-band registration in the data from satellite-borne instruments will result in improved retrievals of key science measurements such as cloud properties, vegetation, snow, and fire.
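The contrast between the two measures is easy to see on images whose intensities are related by an arbitrary (here inverting) mapping: cross correlation reports anti-correlation, while mutual information sees full dependence. A minimal sketch of both measures (histogram-based NMI; names ours):

```python
import numpy as np

def ncc(a, b):
    """Normalized cross correlation of two images (flattened)."""
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    return float(np.mean(a * b))

def nmi(a, b, bins=16):
    """Normalized mutual information (H(A)+H(B))/H(A,B) from a joint
    histogram; insensitive to the intensity mapping between the images."""
    h, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p = h / h.sum()
    px, py = p.sum(1), p.sum(0)
    nz = p > 0
    hxy = -np.sum(p[nz] * np.log(p[nz]))
    hx = -np.sum(px[px > 0] * np.log(px[px > 0]))
    hy = -np.sum(py[py > 0] * np.log(py[py > 0]))
    return (hx + hy) / hxy

rng = np.random.default_rng(2)
a = rng.normal(size=(32, 32))
b = -3.0 * a + 1.0   # perfectly dependent but anti-correlated band
```

Here ncc(a, b) is -1 while nmi(a, b) is near its maximum of 2, mirroring why NMI tolerates spectral response differences between bands.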

  11. Towards local estimation of emphysema progression using image registration

    NASA Astrophysics Data System (ADS)

    Staring, M.; Bakker, M. E.; Shamonin, D. P.; Stolk, J.; Reiber, J. H. C.; Stoel, B. C.

    2009-02-01

    Progression measurement of emphysema is required to evaluate the health condition of a patient and the effect of drugs. To locally estimate progression we use image registration, which allows for volume correction using the determinant of the Jacobian of the transformation. We introduce an adaptation of the so-called sponge model that circumvents its constant-mass assumption. Preliminary results from CT scans of a lung phantom and from CT data sets of three patients suggest that image registration may be a suitable method to locally estimate emphysema progression.
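    The volume-correction idea, using the determinant of the Jacobian of the transformation, can be sketched in 2D with numpy (a toy uniform-expansion field; the study works with 3D CT deformations):

```python
import numpy as np

def jacobian_determinant_2d(ux, uy):
    """Determinant of the Jacobian of the transform x -> x + u(x).

    ux, uy: displacement components on a regular 2D grid.
    det > 1 indicates local expansion, det < 1 local contraction,
    which is how volume correction enters progression estimates.
    """
    dux_dy, dux_dx = np.gradient(ux)
    duy_dy, duy_dx = np.gradient(uy)
    return (1 + dux_dx) * (1 + duy_dy) - dux_dy * duy_dx

# Uniform 1% expansion in both directions: det = 1.01**2 everywhere
y, x = np.mgrid[0:32, 0:32].astype(float)
det = jacobian_determinant_2d(0.01 * x, 0.01 * y)
```
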

  12. Analysis of deformable image registration accuracy using computational modeling

    SciTech Connect

    Zhong Hualiang; Kim, Jinkoo; Chetty, Indrin J.

    2010-03-15

    Computer aided modeling of anatomic deformation, allowing various techniques and protocols in radiation therapy to be systematically verified and studied, has become increasingly attractive. In this study the potential issues in deformable image registration (DIR) were analyzed based on two numerical phantoms: one, a synthesized, low intensity gradient prostate image, and the other a lung patient's CT image data set. Each phantom was modeled with region-specific material parameters with its deformation solved using a finite element method. The resultant displacements were used to construct a benchmark to quantify the displacement errors of the Demons and B-Spline-based registrations. The results show that the accuracy of these registration algorithms depends on the chosen parameters, the selection of which is closely associated with the intensity gradients of the underlying images. For the Demons algorithm, both single resolution (SR) and multiresolution (MR) registrations required approximately 300 iterations to reach an accuracy of 1.4 mm mean error in the lung patient's CT image (and 0.7 mm mean error averaged in the lung only). For the low gradient prostate phantom, these algorithms (both SR and MR) required at least 1600 iterations to reduce their mean errors to 2 mm. For the B-Spline algorithms, best performance (mean errors of 1.9 mm for SR and 1.6 mm for MR, respectively) on the low gradient prostate was achieved using five grid nodes in each direction. Adding more grid nodes resulted in larger errors. For the lung patient's CT data set, the B-Spline registrations required ten grid nodes in each direction for highest accuracy (1.4 mm for SR and 1.5 mm for MR). The numbers of iterations or grid nodes required for optimal registrations depended on the intensity gradients of the underlying images. In summary, the performance of the Demons and B-Spline registrations has been quantitatively evaluated using numerical phantoms. The results show that parameter
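    As a rough illustration of the iterative, gradient-driven nature of the Demons algorithm evaluated above, a single Thirion-style force update can be sketched as follows (a minimal numpy sketch, not the implementation used in the study; the full algorithm iterates this step with field smoothing, in single- or multi-resolution form):

```python
import numpy as np

def demons_step(fixed, moving_warped, eps=1e-9):
    """One Thirion demons force update (a sketch, not the full algorithm).

    Returns a displacement increment driving the warped moving image
    toward the fixed image, proportional to the intensity difference
    and the fixed-image gradient.
    """
    diff = moving_warped - fixed
    gy, gx = np.gradient(fixed)
    denom = gx ** 2 + gy ** 2 + diff ** 2 + eps
    return -diff * gx / denom, -diff * gy / denom

# Toy example: a smooth horizontal ramp shifted by one pixel
fixed = np.tile(np.linspace(0, 1, 32), (32, 1))
moving = np.roll(fixed, 1, axis=1)
dux, duy = demons_step(fixed, moving)
```

    On this toy pair the update points uniformly along +x (toward undoing the shift) and is zero along y, matching the intuition that the force follows the fixed-image gradient.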

  13. Scope and applications of translation invariant wavelets to image registration

    NASA Technical Reports Server (NTRS)

    Chettri, Samir; LeMoigne, Jacqueline; Campbell, William

    1997-01-01

    The first part of this article introduces the notion of translation invariance in wavelets and discusses several wavelets that have this property. The second part discusses the possible applications of such wavelets to image registration. In the case of registration of affinely transformed images, we conclude that translation invariance by itself is not really what is needed; rather, affine invariance is required, and one way to achieve this is via the method of moment invariants. Wavelets or, in general, pyramid processing can then be combined with the method of moment invariants to reduce the computational load.
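    The translation-invariant ingredient of the moment-invariant approach, central moments computed about the image centroid, can be sketched with numpy (a toy binary image; Hu's combinations of these moments extend the invariance to rotation and scale):

```python
import numpy as np

def central_moments(img, p_max=2):
    """Central moments mu_pq, invariant to translation of the image content.

    Normalizing by mu_00 and combining moments further (Hu invariants)
    yields rotation/scale invariance; this sketch shows only the
    translation-invariant building block the article refers to.
    """
    img = np.asarray(img, dtype=float)
    y, x = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    m00 = img.sum()
    xbar = (x * img).sum() / m00
    ybar = (y * img).sum() / m00
    return {(p, q): (((x - xbar) ** p) * ((y - ybar) ** q) * img).sum()
            for p in range(p_max + 1) for q in range(p_max + 1)}

img = np.zeros((32, 32))
img[8:16, 8:16] = 1.0
moments_a = central_moments(img)
moments_b = central_moments(np.roll(np.roll(img, 5, axis=0), 7, axis=1))
```
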

  14. 3-D Reconstruction From 2-D Radiographic Images and Its Application to Clinical Veterinary Medicine

    NASA Astrophysics Data System (ADS)

    Hamamoto, Kazuhiko; Sato, Motoyoshi

    3D imaging techniques are important and often indispensable in diagnosis. The mainstream approach reconstructs a 3D image from a set of slice images, as in X-ray CT and MRI. However, these systems require large installations and high costs. On the other hand, a low-cost, small-size 3D imaging system is needed in clinical veterinary medicine, for example for diagnosis in an X-ray car or in pasture areas. We propose a novel 3D imaging technique that uses 2-D X-ray radiographic images. The proposed system can be realized much more cheaply than X-ray CT and makes it possible to obtain 3D images with an X-ray car or portable X-ray equipment. In this paper, a 3D visualization technique from 2-D radiographic images is proposed and several reconstructions are shown. These reconstructions are evaluated by veterinarians.

  15. Agile multi-scale decompositions for automatic image registration

    NASA Astrophysics Data System (ADS)

    Murphy, James M.; Leija, Omar Navarro; Le Moigne, Jacqueline

    2016-05-01

    In recent works, the first and third authors developed an automatic image registration algorithm based on a multiscale hybrid image decomposition with anisotropic shearlets and isotropic wavelets. This prototype showed strong performance, improving robustness over registration with wavelets alone. However, this method imposed a strict hierarchy on the order in which shearlet and wavelet features were used in the registration process, and also involved an unintegrated mixture of MATLAB and C code. In this paper, we introduce a more agile model for generating features, in which a flexible and user-guided mix of shearlet and wavelet features are computed. Compared to the previous prototype, this method introduces a flexibility to the order in which shearlet and wavelet features are used in the registration process. Moreover, the present algorithm is now fully coded in C, making it more efficient and portable than the mixed MATLAB and C prototype. We demonstrate the versatility and computational efficiency of this approach by performing registration experiments with the fully-integrated C algorithm. In particular, meaningful timing studies can now be performed, to give a concrete analysis of the computational costs of the flexible feature extraction. Examples of synthetically warped and real multi-modal images are analyzed.

  16. Multimodality medical image fusion: probabilistic quantification, segmentation, and registration

    NASA Astrophysics Data System (ADS)

    Wang, Yue J.; Freedman, Matthew T.; Xuan, Jian Hua; Zheng, Qinfen; Mun, Seong K.

    1998-06-01

    Multimodality medical image fusion, which involves information processing, registration and visualization of interventional and/or diagnostic images obtained from different modalities, is becoming increasingly important in clinical applications. This work develops a multimodality medical image fusion technique through probabilistic quantification, segmentation, and registration, based on statistical data mapping, multiple feature correlation, and probabilistic mean ergodic theorems. The goal of image fusion is to geometrically align two or more image areas/volumes so that pixels/voxels representing the same underlying anatomical structure can be superimposed meaningfully. Three steps are involved. To accurately extract the regions of interest, we developed model-supported Bayesian relaxation labeling and integrated edge detection and region growing algorithms to segment the images into objects. After identifying the shift-invariant features (i.e., edge and region information), we provide an accurate and robust registration technique based on matching multiple binary feature images through a site model based image re-projection. The image is initially segmented into a specified number of regions. A rough contour can be obtained by delineating and merging some of the segmented regions. We applied region growing and morphological filtering to extract the contour and remove disconnected residual pixels left after segmentation. The matching algorithm is implemented as follows: (1) the centroids of PET/CT and MR images are computed and then translated to the center of both images; (2) a preliminary registration is performed first to determine an initial range of scaling factors and rotations, and the MR image is then resampled according to the specified parameters; (3) the total binary difference of the corresponding binary maps in both images is calculated for the selected registration parameters, and the final registration is achieved when the
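    Step (1) of the matching algorithm, centroid-based pre-alignment, can be sketched with numpy (toy binary masks standing in for the segmented PET/CT and MR objects):

```python
import numpy as np

def centroid_shift(mask_a, mask_b):
    """Translation that moves mask_b's centroid onto mask_a's centroid.

    A sketch of the preliminary step (1) described above: the centroids
    of the two segmented images are brought into coincidence before the
    scale/rotation search refines the registration.
    """
    def centroid(mask):
        ys, xs = np.nonzero(mask)
        return np.array([ys.mean(), xs.mean()])
    return centroid(mask_a) - centroid(mask_b)

a = np.zeros((64, 64)); a[20:30, 20:30] = 1
b = np.zeros((64, 64)); b[25:35, 14:24] = 1   # same blob, shifted
shift = centroid_shift(a, b)                  # shift to apply to b
```
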

  17. Morphological Feature Extraction for Automatic Registration of Multispectral Images

    NASA Technical Reports Server (NTRS)

    Plaza, Antonio; LeMoigne, Jacqueline; Netanyahu, Nathan S.

    2007-01-01

    The task of image registration can be divided into two major components, i.e., the extraction of control points or features from images, and the search among the extracted features for the matching pairs that represent the same feature in the images to be matched. Manual extraction of control features can be subjective and extremely time consuming, and often results in few usable points. On the other hand, automated feature extraction allows using invariant target features such as edges, corners, and line intersections as relevant landmarks for registration purposes. In this paper, we present an extension of a recently developed morphological approach for automatic extraction of landmark chips and corresponding windows in a fully unsupervised manner for the registration of multispectral images. Once a set of chip-window pairs is obtained, a (hierarchical) robust feature matching procedure, based on a multiresolution overcomplete wavelet decomposition scheme, is used for registration purposes. The proposed method is validated on a pair of remotely sensed scenes acquired by the Advanced Land Imager (ALI) multispectral instrument and the Hyperion hyperspectral instrument aboard NASA's Earth Observing-1 satellite.

  18. Landsat image registration - A study of system parameters

    NASA Technical Reports Server (NTRS)

    Wacker, A. G.; Juday, R. D.; Wolfe, R. H., Jr.

    1984-01-01

    Some applications of Landsat data, particularly agricultural and forestry applications, require the ability to geometrically superimpose or register data acquired at different times and possibly by different satellites. An experimental investigation relating to a registration processor used by the Johnson Space Center for this purpose is the subject of this paper. Correlation of small subareas of images is at the heart of this registration processor, and the manner in which various system parameters affect the correlation process is the prime area of investigation. Parameters investigated include preprocessing methods, methods for detecting successful correlations, fitting a surface to the correlation patch, the fraction of pixels designated as edge pixels in edge detection, and local versus global generation of edge images. A suboptimum search procedure is used to find a good parameter set for this registration processor.
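    The subarea correlation at the heart of such a registration processor can be illustrated with zero-mean normalized cross-correlation (a deliberately direct, unoptimized numpy sketch; production systems use FFTs plus the preprocessing options studied in the paper):

```python
import numpy as np

def ncc(patch, window):
    """Zero-mean normalized cross-correlation of a patch at every
    offset inside a larger search window."""
    ph, pw = patch.shape
    p = patch - patch.mean()
    out = np.full((window.shape[0] - ph + 1, window.shape[1] - pw + 1), -1.0)
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            w = window[i:i + ph, j:j + pw]
            w = w - w.mean()
            denom = np.sqrt((p * p).sum() * (w * w).sum())
            if denom > 0:
                out[i, j] = (p * w).sum() / denom
    return out

rng = np.random.default_rng(1)
scene = rng.random((40, 40))
patch = scene[10:18, 14:22]               # known true location (10, 14)
scores = ncc(patch, scene)
peak = np.unravel_index(scores.argmax(), scores.shape)
```

    The correlation-surface values around the peak are what a surface fit (one of the parameters investigated above) would interpolate for subpixel registration.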

  19. Simultaneous registration and segmentation of images in wavelet domain

    NASA Astrophysics Data System (ADS)

    Yoshida, Hiroyuki

    1999-10-01

    A novel method for simultaneous registration and segmentation is developed. The method is designed to register two similar images while a region with significant difference is adaptively segmented. This is achieved by minimization of a non-linear functional that models the statistical properties of the subtraction of the two images. Minimization is performed in the wavelet domain by a coarse-to-fine approach to yield a mapping that yields the registration and the boundary that yields the segmentation. The new method was applied to the registration of the left and the right lung regions in chest radiographs for extraction of lung nodules while normal anatomic structures such as ribs are removed. A preliminary result shows that our method is very effective in reducing the number of false detections obtained with our computer-aided diagnosis scheme for detection of lung nodules in chest radiographs.

  20. 2D electron cyclotron emission imaging at ASDEX Upgrade (invited)

    NASA Astrophysics Data System (ADS)

    Classen, I. G. J.; Boom, J. E.; Suttrop, W.; Schmid, E.; Tobias, B.; Domier, C. W.; Luhmann, N. C.; Donné, A. J. H.; Jaspers, R. J. E.; de Vries, P. C.; Park, H. K.; Munsat, T.; García-Muñoz, M.; Schneider, P. A.

    2010-10-01

    The newly installed electron cyclotron emission imaging diagnostic on ASDEX Upgrade provides measurements of the 2D electron temperature dynamics with high spatial and temporal resolution. An overview of the technical and experimental properties of the system is presented. These properties are illustrated by the measurements of the edge localized mode and the reversed shear Alfvén eigenmode, showing both the advantage of having a two-dimensional (2D) measurement, as well as some of the limitations of electron cyclotron emission measurements. Furthermore, the application of singular value decomposition as a powerful tool for analyzing and filtering 2D data is presented.
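    The singular value decomposition filtering mentioned above can be sketched for a stack of 2D frames with numpy (a synthetic rank-1 "temperature" mode; the diagnostic's real data and mode structure are of course richer):

```python
import numpy as np

def svd_filter(frames, rank):
    """Low-rank SVD filter for a time series of 2D frames.

    frames: (T, H, W). Each frame is flattened, the (T, H*W) matrix is
    decomposed, and only the `rank` strongest spatio-temporal modes are
    kept -- the filtering use of SVD mentioned in the abstract.
    """
    t, h, w = frames.shape
    X = frames.reshape(t, h * w)
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    s[rank:] = 0.0
    return (U @ np.diag(s) @ Vt).reshape(t, h, w)

# A rank-1 oscillating mode plus small noise is recovered almost exactly
rng = np.random.default_rng(2)
mode = np.outer(np.sin(np.linspace(0, 2 * np.pi, 16)),
                rng.random(8 * 8)).reshape(16, 8, 8)
noisy = mode + 0.01 * rng.standard_normal(mode.shape)
filtered = svd_filter(noisy, rank=1)
```
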

  1. Automated Image Registration Using Morphological Region of Interest Feature Extraction

    NASA Technical Reports Server (NTRS)

    Plaza, Antonio; LeMoigne, Jacqueline; Netanyahu, Nathan S.

    2005-01-01

    With the recent explosion in the amount of remotely sensed imagery and the corresponding interest in temporal change detection and modeling, image registration has become increasingly important as a necessary first step in the integration of multi-temporal and multi-sensor data for applications such as the analysis of seasonal and annual global climate changes, as well as land use/cover changes. The task of image registration can be divided into two major components: (1) the extraction of control points or features from images; and (2) the search among the extracted features for the matching pairs that represent the same feature in the images to be matched. Manual control feature extraction can be subjective and extremely time consuming, and often results in few usable points. Automated feature extraction is a solution to this problem, where desired target features are invariant, and represent evenly distributed landmarks such as edges, corners and line intersections. In this paper, we develop a novel automated registration approach based on the following steps. First, a mathematical morphology (MM)-based method is used to obtain a scale-orientation morphological profile at each image pixel. Next, a spectral dissimilarity metric such as the spectral information divergence is applied for automated extraction of landmark chips, followed by an initial approximate matching. This initial condition is then refined using a hierarchical robust feature matching (RFM) procedure. Experimental results reveal that the proposed registration technique offers a robust solution in the presence of seasonal changes and other interfering factors. Keywords: automated image registration, multi-temporal imagery, mathematical morphology, robust feature matching.

  2. A translational registration system for LANDSAT image segments

    NASA Technical Reports Server (NTRS)

    Parada, N. D. J. (Principal Investigator); Erthal, G. J.; Velasco, F. R. D.; Mascarenhas, N. D. D.

    1983-01-01

    The use of satellite images obtained from various dates is essential for crop forecast systems. In order to make possible a multitemporal analysis, it is necessary that images belonging to each acquisition have pixel-wise correspondence. A system developed to obtain, register and record image segments from LANDSAT images in computer compatible tapes is described. The translational registration of the segments is performed by correlating image edges in different acquisitions. The system was constructed for the Burroughs B6800 computer in ALGOL language.
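    A standard FFT-based way to recover a purely translational offset, in the spirit of the edge-correlation registration described above, is phase correlation (a numpy sketch on raw intensities; the system itself correlated edge images):

```python
import numpy as np

def phase_correlation(a, b):
    """Estimate the integer translation taking image b onto image a.

    Normalizing the cross-power spectrum leaves only phase, whose
    inverse FFT is a sharp peak at the relative shift.
    """
    F = np.fft.fft2(a) * np.conj(np.fft.fft2(b))
    corr = np.fft.ifft2(F / (np.abs(F) + 1e-12)).real
    dy, dx = np.unravel_index(corr.argmax(), corr.shape)
    # Map wrap-around peaks to signed shifts
    if dy > a.shape[0] // 2:
        dy -= a.shape[0]
    if dx > a.shape[1] // 2:
        dx -= a.shape[1]
    return dy, dx

rng = np.random.default_rng(3)
img = rng.random((64, 64))
shifted = np.roll(np.roll(img, 4, axis=0), -7, axis=1)
dy, dx = phase_correlation(img, shifted)     # shift to apply to `shifted`
```
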

  3. The Insight ToolKit image registration framework

    PubMed Central

    Avants, Brian B.; Tustison, Nicholas J.; Stauffer, Michael; Song, Gang; Wu, Baohua; Gee, James C.

    2014-01-01

    Publicly available scientific resources help establish evaluation standards, provide a platform for teaching and improve reproducibility. Version 4 of the Insight ToolKit (ITK4) seeks to establish new standards in publicly available image registration methodology. ITK4 makes several advances in comparison to previous versions of ITK. ITK4 supports both multivariate images and objective functions; it also unifies high-dimensional (deformation field) and low-dimensional (affine) transformations with metrics that are reusable across transform types and with composite transforms that allow arbitrary series of geometric mappings to be chained together seamlessly. Metrics and optimizers take advantage of multi-core resources, when available. Furthermore, ITK4 reduces the parameter optimization burden via principled heuristics that automatically set scaling across disparate parameter types (rotations vs. translations). A related approach also constrains step sizes for gradient-based optimizers. The result is that tuning for different metrics and/or image pairs is rarely necessary, allowing the researcher to more easily focus on the design and comparison of registration strategies. In total, the ITK4 contribution is intended as a structure to support reproducible research practices, to provide a more extensive foundation against which to evaluate new work in image registration, and to give application-level programmers a broad suite of tools on which to build. Finally, we contextualize this work with a reference registration evaluation study with application to pediatric brain labeling. PMID:24817849

  4. Warped document image correction method based on heterogeneous registration strategies

    NASA Astrophysics Data System (ADS)

    Tong, Lijing; Zhan, Guoliang; Peng, Quanyao; Li, Yang; Li, Yifan

    2013-03-01

    With the popularity of digital cameras and the application requirements of digitalized document images, using digital cameras to digitalize documents has become an irresistible trend. However, warping of the document surface seriously degrades the accuracy of Optical Character Recognition (OCR). To improve the visual quality and the OCR rate of warped document images, this paper proposes a correction method based on heterogeneous registration strategies. The method mosaics two warped images of the same document taken from different viewpoints. Firstly, two feature points are selected from one image. Then the two feature points are registered in the other image based on heterogeneous registration strategies. At last, image mosaicing is performed on the two images, and the best mosaiced image is selected by OCR recognition results. As a result, for the best mosaiced image, the distortions are mostly removed and the OCR results are improved markedly. Experimental results show that the proposed method resolves the issue of warped document image correction effectively.

  5. Registration of head volume images using implantable fiducial markers

    NASA Astrophysics Data System (ADS)

    Maurer, Calvin R., Jr.; Fitzpatrick, J. Michael; Wang, Matthew Y.; Galloway, Robert L., Jr.; Maciunas, Robert J.; Allen, George S.

    1997-04-01

    In this paper, we describe an extrinsic point-based, interactive image-guided neurosurgical system designed at Vanderbilt University as part of a collaborative effort among the departments of neurological surgery, computer science, and biomedical engineering. Multimodal image-to-image and image-to-physical registration is accomplished using implantable markers. Physical space tracking is accomplished with optical triangulation. We investigate the theoretical accuracy of point-based registration using numerical simulations, the experimental accuracy of our system using data obtained with a phantom, and the clinical accuracy of our system using data acquired in a prospective clinical trial by six neurosurgeons at four medical centers from 158 patients undergoing craniotomies to resect cerebral lesions. We can determine the position of our markers with an error of approximately 0.4 mm in x-ray computed tomography (CT) and magnetic resonance (MR) images and 0.3 mm in physical space. The theoretical registration error using four such markers distributed around the head in a configuration that is clinically practical is approximately 0.5 - 0.6 mm. The mean CT-physical registration error for the phantom experiments is 0.5 mm and for the clinical data obtained with rigid head fixation during scanning is 0.7 mm. The mean CT-MR registration error for the clinical data obtained without rigid head fixation during scanning is 1.4 mm, which is the highest mean error that we observed. These theoretical and experimental findings indicate that this system is an accurate navigational aid that can provide real-time feedback to the surgeon about anatomical structures encountered in the surgical field.
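    The point-based registration underlying such fiducial-marker systems has a classic closed-form least-squares solution (Horn/Kabsch). A numpy sketch with hypothetical marker coordinates, including the residual fiducial error as an accuracy check:

```python
import numpy as np

def rigid_register(src, dst):
    """Least-squares rigid (rotation + translation) point registration.

    Closed-form SVD solution finding R, t that minimize
    sum ||R @ src_i + t - dst_i||^2 over corresponding marker points.
    src, dst: (N, 3) arrays of corresponding marker positions.
    """
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t

# Hypothetical markers: rotated 30 degrees about z and translated (mm)
theta = np.deg2rad(30)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1]])
src = np.array([[0, 0, 0], [100, 0, 0], [0, 100, 0], [0, 0, 100]], float)
dst = src @ R_true.T + np.array([5.0, -2.0, 3.0])
R, t = rigid_register(src, dst)
fre = np.sqrt(np.mean(np.sum((src @ R.T + t - dst) ** 2, axis=1)))
```

    With noise-free points the residual (analogous to the fiducial registration error discussed in the paper) is zero to machine precision; marker localization error is what drives the 0.5 - 0.6 mm figures reported above.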

  6. Evaluation of automated image registration algorithm for image-guided radiotherapy (IGRT).

    PubMed

    Sharma, Shamurailatpam Dayananda; Dongre, Prabhakar; Mhatre, Vaibhav; Heigrujam, Malhotra

    2012-09-01

    The performance of an image registration (IR) software was evaluated for automatically detecting known errors simulated through the movement of an ExactCouch using an onboard imager. Twenty-seven set-up errors (11 translations, 10 rotations, 6 combined translation and rotation) were simulated by introducing offsets of up to ± 15 mm in the three principal axes and 0° to ± 1° in yaw. For every simulated error, orthogonal kV radiographs and cone beam CT were acquired in half-fan (CBCT_HF) and full-fan (CBCT_FF) mode. The orthogonal radiographs and CBCTs were automatically co-registered to reference digitally reconstructed radiographs (DRRs) and the planning CT using 2D-2D and 3D-3D matching software based on mutual information transformation. A total of 79 image sets (ten pairs of kV X-rays and 69 sessions of CBCT) were analyzed to determine (a) the reproducibility of the IR outcome and (b) the residual error, defined as the deviation between the known and IR-software-detected displacement in translation and rotation. The reproducibility of automatic IR of the planning CT and repeat CBCTs taken with and without kilovoltage detector and kilovoltage X-ray source arm movement was excellent, with a mean SD of 0.1 mm in translation and 0.0° in rotation. The average residual errors in translation and rotation were within ± 0.5 mm and ± 0.2°, ± 0.9 mm and ± 0.3°, and ± 0.4 mm and ± 0.2° for set-up errors simulated only in translation, only in rotation, and in both translation and rotation, respectively. The mean (SD) 3D vector was largest when only translational error was simulated and was 1.7 (1.1) mm for the 2D-2D match of reference DRRs with radiographs, and 1.4 (0.6) and 1.3 (0.5) mm for the 3D-3D match of the reference CT with full-fan and half-fan CBCT, respectively. In conclusion, the image-guided radiation therapy (IGRT) system is accurate within 1.8 mm and 0.4° and reproducible under controlled conditions. The inherent error of any IGRT process should be taken into account when setting a clinical IGRT protocol.

  7. Registration, segmentation, and visualization of multimodal brain images.

    PubMed

    Viergever, M A; Maintz, J B; Niessen, W J; Noordmans, H J; Pluim, J P; Stokking, R; Vincken, K L

    2001-01-01

    This paper gives an overview of the studies performed at our institute over the last decade on the processing and visualization of brain images, in the context of international developments in the field. The focus is on multimodal image registration and multimodal visualization, while segmentation is touched upon as a preprocessing step for visualization. The state-of-the-art in these areas is discussed and suggestions for future research are given. PMID:11137791

  8. Geometric uncertainty of 2D projection imaging in monitoring 3D tumor motion

    NASA Astrophysics Data System (ADS)

    Suh, Yelin; Dieterich, Sonja; Keall, Paul J.

    2007-07-01

    The purpose of this study was to investigate the accuracy of two-dimensional (2D) projection imaging methods in three-dimensional (3D) tumor motion monitoring. Many commercial linear accelerator types have projection imaging capabilities, and tumor motion monitoring is useful for motion-inclusive, respiratory-gated or tumor-tracking strategies. Since 2D projection imaging is limited in its ability to resolve the motion along the imaging beam axis, there is unresolved motion when monitoring 3D tumor motion. From the 3D tumor motion data of 160 treatment fractions for 46 thoracic and abdominal cancer patients, the unresolved motion due to the geometric limitation of 2D projection imaging was calculated as displacement along the imaging beam axis for different beam angles and time intervals. The geometric uncertainty in monitoring 3D motion caused by the unresolved motion of 2D imaging was quantified using the root-mean-square (rms) metric. Geometric uncertainty showed interfractional and intrafractional variation. Patient-to-patient variation was much more significant than variation across different time intervals. For the patient cohort studied, as the time intervals increase, the rms, minimum and maximum values of the rms uncertainty show decreasing tendencies for the lung patients but increasing ones for the liver and retroperitoneal patients, which could be attributed to patient relaxation. Geometric uncertainty was smaller for coplanar treatments than non-coplanar treatments, as superior-inferior (SI) tumor motion, the predominant motion from patient respiration, can always be resolved for coplanar treatments. Overall rms of the rms uncertainty was 0.13 cm for all treatment fractions and 0.18 cm for the treatment fractions whose average breathing peak-trough ranges were more than 0.5 cm. The geometric uncertainty for 2D imaging varies depending on the tumor site, tumor motion range, time interval and beam angle as well as between patients, between fractions and within a
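    The unresolved motion along the imaging beam axis, and its rms as a geometric uncertainty metric, can be sketched with numpy (synthetic sinusoidal breathing traces; the amplitudes are illustrative, not the patient data of the study):

```python
import numpy as np

def unresolved_rms(motion, beam_dir):
    """RMS of the motion component along the imaging beam axis.

    motion: (T, 3) tumor positions over time; beam_dir: direction of
    the imaging beam axis. A 2D projection cannot resolve displacement
    along this axis, so its rms quantifies the geometric uncertainty.
    """
    beam_dir = np.asarray(beam_dir, float)
    beam_dir = beam_dir / np.linalg.norm(beam_dir)
    along = motion @ beam_dir
    along = along - along.mean()
    return float(np.sqrt(np.mean(along ** 2)))

# Illustrative breathing trace (cm): SI motion dominates
t = np.linspace(0, 2 * np.pi, 200)
motion = np.stack([0.05 * np.sin(t),    # left-right
                   0.5 * np.sin(t),     # superior-inferior (SI)
                   0.1 * np.sin(t)],    # anterior-posterior
                  axis=1)
rms_coplanar = unresolved_rms(motion, [1.0, 0.0, 0.0])  # beam in axial plane
rms_si_beam = unresolved_rms(motion, [0.0, 1.0, 0.0])   # beam along SI axis
```

    A beam axis lying in the axial plane leaves SI motion resolved (small rms), consistent with the coplanar-versus-non-coplanar finding above.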

  9. Finite-Dimensional Lie Algebras for Fast Diffeomorphic Image Registration.

    PubMed

    Zhang, Miaomiao; Fletcher, P Thomas

    2015-01-01

    This paper presents a fast geodesic shooting algorithm for diffeomorphic image registration. We first introduce a novel finite-dimensional Lie algebra structure on the space of bandlimited velocity fields. We then show that this space can effectively represent initial velocities for diffeomorphic image registration at much lower dimensions than typically used, with little to no loss in registration accuracy. We then leverage the fact that the geodesic evolution equations, as well as the adjoint Jacobi field equations needed for gradient descent methods, can be computed entirely in this finite-dimensional Lie algebra. The result is a geodesic shooting method for large deformation metric mapping (LDDMM) that is dramatically faster and less memory intensive than state-of-the-art methods. We demonstrate the effectiveness of our model to register 3D brain images and compare its registration accuracy, run-time, and memory consumption with leading LDDMM methods. We also show how our algorithm breaks through the prohibitive time and memory requirements of diffeomorphic atlas building.

  10. Intensity-based image registration by minimizing residual complexity.

    PubMed

    Myronenko, Andriy; Song, Xubo

    2010-11-01

    Accurate definition of the similarity measure is a key component in image registration. Most commonly used intensity-based similarity measures rely on the assumptions of independence and stationarity of the intensities from pixel to pixel. Such measures cannot capture the complex interactions among the pixel intensities, and often result in less satisfactory registration performance, especially in the presence of spatially-varying intensity distortions. We propose a novel similarity measure that accounts for intensity nonstationarities and complex spatially-varying intensity distortions in mono-modal settings. We derive the similarity measure by analytically solving for the intensity correction field and its adaptive regularization. The final measure can be interpreted as one that favors a registration with minimum compression complexity of the residual image between the two registered images. One of the key advantages of the new similarity measure is its simplicity in terms of both computational complexity and implementation. This measure produces accurate registration results on both artificial and real-world problems that we have tested, and outperforms other state-of-the-art similarity measures in these cases.
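    The "compression complexity of the residual image" idea can be illustrated with a toy score (the paper derives its measure with a DCT basis and an analytically solved correction field; this numpy sketch uses an orthonormal FFT basis only to show why compressible residuals score lower):

```python
import numpy as np

def residual_complexity(residual, alpha=0.05):
    """Toy compression-complexity score of a residual image.

    Residuals whose energy concentrates in few transform coefficients
    (i.e., compress well) score lower; a registration minimizing such a
    score therefore favors structured, explainable residuals over
    noise-like ones.
    """
    q = np.fft.fft2(residual, norm="ortho")
    return float(np.sum(np.log(np.abs(q) ** 2 / alpha + 1.0)))

rng = np.random.default_rng(4)
smooth = np.ones((32, 32))                       # highly compressible
noise = rng.standard_normal((32, 32))
noise *= np.linalg.norm(smooth) / np.linalg.norm(noise)  # equal energy
rc_smooth = residual_complexity(smooth)
rc_noise = residual_complexity(noise)
```
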

  11. Improving JWST Coronagraphic Performance with Accurate Image Registration

    NASA Astrophysics Data System (ADS)

    Van Gorkom, Kyle; Pueyo, Laurent; Lajoie, Charles-Philippe; JWST Coronagraphs Working Group

    2016-06-01

    The coronagraphs on the James Webb Space Telescope (JWST) will enable high-contrast observations of faint objects at small separations from bright hosts, such as circumstellar disks, exoplanets, and quasar disks. Despite attenuation by the coronagraphic mask, bright speckles in the host’s point spread function (PSF) remain, effectively washing out the signal from the faint companion. Suppression of these bright speckles is typically accomplished by repeating the observation with a star that lacks a faint companion, creating a reference PSF that can be subtracted from the science image to reveal any faint objects. Before this reference PSF can be subtracted, however, the science and reference images must be aligned precisely, typically to 1/20 of a pixel. Here, we present several such algorithms for performing image registration on JWST coronagraphic images. Using both simulated and pre-flight test data (taken in cryovacuum), we assess (1) the accuracy of each algorithm at recovering misaligned scenes and (2) the impact of image registration on achievable contrast. Proper image registration, combined with post-processing techniques such as KLIP or LOCI, will greatly improve the performance of the JWST coronagraphs.

  12. Uncertainty driven probabilistic voxel selection for image registration.

    PubMed

    Oreshkin, Boris N; Arbel, Tal

    2013-10-01

    This paper presents a novel probabilistic voxel selection strategy for medical image registration in time-sensitive contexts, where the goal is aggressive voxel sampling (e.g., using less than 1% of the total number) while maintaining registration accuracy and low failure rate. We develop a Bayesian framework whereby, first, a voxel sampling probability field (VSPF) is built based on the uncertainty on the transformation parameters. We then describe a practical, multi-scale registration algorithm, where, at each optimization iteration, different voxel subsets are sampled based on the VSPF. The approach maximizes accuracy without committing to a particular fixed subset of voxels. The probabilistic sampling scheme developed is shown to manage the tradeoff between the robustness of traditional random voxel selection (by permitting more exploration) and the accuracy of fixed voxel selection (by permitting a greater proportion of informative voxels).
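    The VSPF-weighted sampling step can be sketched with numpy (the probability field below is a hypothetical two-level map, not the paper's uncertainty-derived construction):

```python
import numpy as np

def sample_voxels(vspf, n, rng):
    """Draw voxel indices according to a voxel sampling probability field.

    vspf: nonnegative array, higher where voxels are deemed more
    informative for the transform parameters. Sampling without
    replacement mixes exploration (every voxel has some probability)
    with concentration on informative regions.
    """
    p = vspf.ravel() / vspf.sum()
    flat = rng.choice(vspf.size, size=n, replace=False, p=p)
    return np.unravel_index(flat, vspf.shape)

rng = np.random.default_rng(5)
vspf = np.ones((64, 64))
vspf[24:40, 24:40] = 20.0          # a hypothetical informative region
ys, xs = sample_voxels(vspf, 200, rng)
inside = np.mean((ys >= 24) & (ys < 40) & (xs >= 24) & (xs < 40))
```

    Most draws land in the high-probability region while the rest of the volume still receives samples, which is the robustness/accuracy tradeoff described above.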

  13. Nanohole-array-based device for 2D snapshot multispectral imaging

    PubMed Central

    Najiminaini, Mohamadreza; Vasefi, Fartash; Kaminska, Bozena; Carson, Jeffrey J. L.

    2013-01-01

    We present a two-dimensional (2D) snapshot multispectral imager that utilizes the optical transmission characteristics of nanohole arrays (NHAs) in a gold film to resolve a mixture of input colors into multiple spectral bands. The multispectral device consists of blocks of NHAs, wherein each NHA has a unique periodicity that results in transmission resonances and minima in the visible and near-infrared regions. The multispectral device was illuminated over a wide spectral range, and the transmission was spectrally unmixed using a least-squares estimation algorithm. A NHA-based multispectral imaging system was built and tested in both reflection and transmission modes. The NHA-based multispectral imager was capable of extracting 2D multispectral images representative of four independent bands within the spectral range of 662 nm to 832 nm for a variety of targets. The multispectral device can potentially be integrated into a variety of imaging sensor systems. PMID:24005065
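    The least-squares unmixing step can be sketched with numpy (the calibration matrix below is illustrative, not measured NHA transmission data):

```python
import numpy as np

def unmix(measured, responses):
    """Least-squares spectral unmixing.

    measured:  (P,) transmitted intensity through each of P NHA blocks.
    responses: (P, B) transmission of each block in each of B spectral
               bands (the calibration matrix).
    Returns the estimated per-band input intensities.
    """
    coeffs, *_ = np.linalg.lstsq(responses, measured, rcond=None)
    return coeffs

# Toy calibration: 4 blocks x 4 bands, well-conditioned mixing matrix
responses = np.array([[0.8, 0.2, 0.1, 0.0],
                      [0.2, 0.7, 0.2, 0.1],
                      [0.1, 0.2, 0.7, 0.2],
                      [0.0, 0.1, 0.2, 0.8]])
true_bands = np.array([1.0, 0.5, 0.2, 0.8])
measured = responses @ true_bands
estimate = unmix(measured, responses)
```

    With more blocks than bands the same least-squares estimate averages out measurement noise, which is how the four bands in the 662-832 nm range are recovered per pixel.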

  14. Video image stabilization and registration--plus

    NASA Technical Reports Server (NTRS)

    Hathaway, David H. (Inventor)

    2009-01-01

    A method of stabilizing a video image displayed in multiple video fields of a video sequence includes the steps of: subdividing a selected area of a first video field into nested pixel blocks; determining horizontal and vertical translation of each of the pixel blocks in each of the pixel block subdivision levels from the first video field to a second video field; and determining translation of the image from the first video field to the second video field by determining a change in magnification of the image from the first video field to the second video field in each of horizontal and vertical directions, and determining shear of the image from the first video field to the second video field in each of the horizontal and vertical directions.
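    The per-block horizontal/vertical translation estimation can be sketched with an FFT-based cross-correlation; this is a generic technique illustrated on synthetic data, and the patent's actual block-matching procedure may differ.

```python
import numpy as np

def estimate_shift(ref, moved):
    """Return integer (dy, dx) such that np.roll(ref, (dy, dx), axis=(0, 1))
    best matches `moved`, using the peak of the FFT cross-correlation."""
    F = np.fft.fft2(moved) * np.conj(np.fft.fft2(ref))
    corr = np.abs(np.fft.ifft2(F))
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap shifts larger than half the block size to negative values.
    h, w = corr.shape
    if dy > h // 2:
        dy -= h
    if dx > w // 2:
        dx -= w
    return dy, dx

rng = np.random.default_rng(0)
block = rng.random((32, 32))                     # synthetic pixel block
shifted = np.roll(block, (3, -2), axis=(0, 1))   # known translation
dy, dx = estimate_shift(block, shifted)
```

    Applying this per pixel block, at each subdivision level, yields the translation field from which magnification and shear changes can then be derived.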

  15. Separation of image parts using 2-D parallel form recursive filters.

    PubMed

    Sivaramakrishna, R

    1996-01-01

    This correspondence deals with a new technique to separate objects or image parts in a composite image. A parallel form extension of a 2-D Steiglitz-McBride method is applied to the discrete cosine transform (DCT) of the image containing the objects that are to be separated. The obtained parallel form is the sum of several filters or systems, where the impulse response of each filter corresponds to the DCT of one object in the original image. Preliminary results on an image with two objects show that the algorithm works well, even in the case where one object occludes another as well as in the case of moderate noise. PMID:18285105

  16. 3D reconstruction of a carotid bifurcation from 2D transversal ultrasound images.

    PubMed

    Yeom, Eunseop; Nam, Kweon-Ho; Jin, Changzhu; Paeng, Dong-Guk; Lee, Sang-Joon

    2014-12-01

    Visualizing and analyzing the morphological structure of carotid bifurcations is important for understanding the etiology of carotid atherosclerosis, which is a major cause of stroke and transient ischemic attack. Ultrasound examinations have been widely employed for delineating the vasculature of the carotid artery because they are noninvasive and involve no ionizing radiation. However, conventional 2D ultrasound imaging has technical limitations in observing the complicated 3D shapes and asymmetric vasodilation of bifurcations. This study proposes image-processing techniques for better 3D reconstruction of a carotid bifurcation in a rat using 2D cross-sectional ultrasound images. A high-resolution ultrasound imaging system with a probe centered at 40 MHz was employed to obtain 2D transversal images. The lumen boundaries in each transverse ultrasound image were detected using three different techniques: ellipse fitting, correlation mapping to visualize the decorrelation of blood flow, and ellipse fitting on the correlation map. Comparison of the results shows that the third technique provides the best boundary extraction. The incomplete arterial lumen boundaries caused by acoustic artifacts are somewhat resolved by adopting the correlation mapping, and the distortion in boundary detection near the bifurcation apex is largely reduced by the ellipse-fitting technique. The 3D lumen geometry of a carotid artery was obtained by volumetric rendering of several 2D slices. To capture the 3D vasodilatation of the carotid bifurcation, lumen geometries at the contraction and expansion states were depicted simultaneously at various view angles. The present 3D reconstruction methods should be useful for efficient extraction and construction of the 3D lumen geometries of carotid bifurcations from 2D ultrasound images.
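    The ellipse-fitting step can be sketched as an algebraic least-squares conic fit to the detected boundary points; the boundary here is synthetic, and a practical lumen fit would add outlier rejection for acoustic artifacts.

```python
import numpy as np

def fit_ellipse(x, y):
    """Algebraic least-squares fit of the conic
    a*x^2 + b*x*y + c*y^2 + d*x + e*y = 1 to boundary points.
    Returns the coefficient vector (a, b, c, d, e)."""
    A = np.column_stack([x**2, x * y, y**2, x, y])
    coeffs, *_ = np.linalg.lstsq(A, np.ones_like(x), rcond=None)
    return coeffs

# Synthetic lumen boundary: axis-aligned ellipse centred at (4, 3).
t = np.linspace(0, 2 * np.pi, 100, endpoint=False)
x = 4 + 2.0 * np.cos(t)
y = 3 + 1.0 * np.sin(t)
c = fit_ellipse(x, y)
residual = np.abs(np.column_stack([x**2, x * y, y**2, x, y]) @ c - 1).max()
```

    For an axis-aligned ellipse the fitted cross term b is near zero; a rotated lumen cross-section would produce a nonzero b.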

  17. Elastic image registration via rigid object motion induced deformation

    NASA Astrophysics Data System (ADS)

    Zheng, Xiaofen; Udupa, Jayaram K.; Hirsch, Bruce E.

    2011-03-01

    In this paper, we estimate the deformations induced on soft tissues by the rigid independent movements of hard objects and create an admixture of rigid and elastic adaptive image registration transformations. By automatically segmenting and independently estimating the movement of rigid objects in 3D images, we can maintain rigidity in bones and hard tissues while appropriately deforming soft tissues. We tested our algorithms on 20 pairs of 3D MRI datasets pertaining to a kinematic study of the flexibility of the ankle complex of normal feet as well as ankles affected by abnormalities in foot architecture and ligament injuries. The results show that elastic image registration via rigid object-induced deformation outperforms purely rigid and purely nonrigid approaches.

  18. Simultaneous reconstruction of the activity image and registration of the CT image in TOF-PET.

    PubMed

    Rezaei, Ahmadreza; Michel, Christian; Casey, Michael E; Nuyts, Johan

    2016-02-21

    Previously, maximum-likelihood methods have been proposed to jointly estimate the activity image and the attenuation image or the attenuation sinogram from time-of-flight (TOF) positron emission tomography (PET) data. In this contribution, we propose a method that addresses the possible alignment problem of the TOF-PET emission data and the computed tomography (CT) attenuation data, by combining reconstruction and registration. The method, called MLRR, iteratively reconstructs the activity image while registering the available CT-based attenuation image, so that the pair of activity and attenuation images maximise the likelihood of the TOF emission sinogram. The algorithm is slow to converge, but some acceleration could be achieved by using Nesterov's momentum method and by applying a multi-resolution scheme for the non-rigid displacement estimation. The latter also helps to avoid local optima, although convergence to the global optimum cannot be guaranteed. The results are evaluated on 2D and 3D simulations as well as a respiratory gated clinical scan. Our experiments indicate that the proposed method is able to correct for possible misalignment of the CT-based attenuation image, and is therefore a very promising approach to suppressing attenuation artefacts in clinical PET/CT. When applied to respiratory gated data of a patient scan, it produced deformations that are compatible with breathing motion and which reduced the well known attenuation artefact near the dome of the liver. Since the method makes use of the energy-converted CT attenuation image, the scale problem of joint reconstruction is automatically solved. PMID:26854817

  19. SU-E-J-237: Image Feature Based DRR and Portal Image Registration

    SciTech Connect

    Wang, X; Chang, J

    2014-06-01

    Purpose: Two-dimensional (2D) matching of kV X-ray and digitally reconstructed radiography (DRR) images is an important setup technique for image-guided radiotherapy (IGRT). In our clinics, mutual-information-based methods are used for this purpose on commercial linear accelerators, but manual corrections are often needed. This work demonstrates the feasibility of feature-based image transforms for registering kV and DRR images. Methods: The scale invariant feature transform (SIFT) method was implemented to detect matching image details (or key points) between the kV and DRR images. These key points correspond to high image intensity gradients and thus to scale-invariant features. Owing to the poor contrast of our kV images, direct application of the SIFT method yielded many detection errors. To assist the finding of key points, the center coordinates of the kV and DRR images were read from the DICOM header, and the two groups of key points with similar positions relative to their corresponding centers were paired up. Using these points, a rigid transform (with scaling, horizontal and vertical shifts) was estimated. We also artificially introduced vertical and horizontal shifts to test the accuracy of our registration method on anterior-posterior (AP) and lateral pelvic images. Results: The method provided a satisfactory overlay of the transformed kV image onto the DRR image. The introduced vs. detected shifts were fit with a linear regression. In the AP image experiments, linear regression showed slopes of 1.15 and 0.98 with R2 of 0.89 and 0.99 for the horizontal and vertical shifts, respectively. The corresponding values for the lateral image shifts were 1.2 and 1.3 with R2 of 0.72 and 0.82. Conclusion: This work provides an alternative technique for kV-to-DRR alignment. Further improvements in estimation accuracy and image contrast tolerance are underway.
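    Estimating a transform with one scale factor plus horizontal and vertical shifts from paired key points reduces to a linear least-squares problem, sketched here on synthetic matched coordinates (the abstract does not specify the exact solver used).

```python
import numpy as np

def fit_scale_shift(src, dst):
    """Least-squares fit of dst ≈ s * src + (tx, ty) from matched key points.

    src, dst: (N, 2) arrays of matched (x, y) coordinates.
    Returns (s, tx, ty)."""
    n = src.shape[0]
    # Unknowns [s, tx, ty]; each point pair contributes two linear equations.
    A = np.zeros((2 * n, 3))
    A[0::2, 0] = src[:, 0]; A[0::2, 1] = 1.0   # x' = s*x + tx
    A[1::2, 0] = src[:, 1]; A[1::2, 2] = 1.0   # y' = s*y + ty
    b = dst.reshape(-1)                         # interleaved [x0', y0', ...]
    (s, tx, ty), *_ = np.linalg.lstsq(A, b, rcond=None)
    return s, tx, ty

rng = np.random.default_rng(1)
src = rng.random((20, 2)) * 100                 # hypothetical key points
dst = 1.1 * src + np.array([5.0, -3.0])         # known scale and shifts
s, tx, ty = fit_scale_shift(src, dst)
```

    With real key-point pairs, mismatches would call for a robust variant (e.g. RANSAC over the same model).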

  20. The ANACONDA algorithm for deformable image registration in radiotherapy

    SciTech Connect

    Weistrand, Ola; Svensson, Stina

    2015-01-15

    Purpose: The purpose of this work was to describe a versatile algorithm for deformable image registration with applications in radiotherapy and to validate it on thoracic 4DCT data as well as CT/cone beam CT (CBCT) data. Methods: The ANAtomically CONstrained Deformation Algorithm (ANACONDA) combines image information (i.e., intensities) with anatomical information as provided by contoured image sets. The registration problem is formulated as a nonlinear optimization problem and solved with an in-house developed solver tailored to this problem. The objective function, which is minimized during optimization, is a linear combination of four nonlinear terms: (1) an image similarity term; (2) a grid regularization term, which aims at keeping the deformed image grid smooth and invertible; (3) a shape-based regularization term, which works to keep the deformation anatomically reasonable when regions of interest are present in the reference image; and (4) a penalty term, added to the optimization problem when controlling structures are used, aimed at deforming the selected structure in the reference image to the corresponding structure in the target image. Results: To validate ANACONDA, the authors used 16 publicly available thoracic 4DCT data sets for which target registration errors from several algorithms have been reported in the literature. On average for the 16 data sets, the target registration error is 1.17 ± 0.87 mm, the Dice similarity coefficient is 0.98 for the two lungs, and image similarity, measured by the correlation coefficient, is 0.95. The authors also validated ANACONDA using two pelvic cases and one head and neck case with planning CT and daily acquired CBCT. Each image was contoured by a physician (radiation oncologist) or experienced radiation therapist. The results are an improvement with respect to rigid registration; however, for the head and neck case, the sample set is too small to show statistical significance. Conclusions: ANACONDA
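    A toy version of such a weighted-sum objective is sketched below with only the first two terms (correlation-based similarity and a finite-difference smoothness penalty); the weights, the discretization, and the omission of the shape and controlling-structure terms are all simplifying assumptions.

```python
import numpy as np

def objective(ref, deformed, displacement, w_sim=1.0, w_reg=0.1):
    """Toy composite registration objective: a weighted sum of an image
    similarity term and a grid regularization term. The shape-based and
    controlling-structure terms of the full algorithm are omitted here."""
    # 1. Image similarity: one minus the correlation coefficient.
    r = np.corrcoef(ref.ravel(), deformed.ravel())[0, 1]
    similarity = 1.0 - r
    # 2. Grid regularization: penalize non-smooth displacement fields.
    regularization = np.sum(np.diff(displacement, axis=0) ** 2)
    return w_sim * similarity + w_reg * regularization

rng = np.random.default_rng(0)
ref = rng.random((16, 16))
val_aligned = objective(ref, ref, np.zeros((10, 2)))   # identical, smooth
```

    A perfectly matched pair with a constant displacement field scores zero; any intensity mismatch or rough deformation raises the value, which is what the optimizer trades off.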

  1. Tensor representation of color images and fast 2D quaternion discrete Fourier transform

    NASA Astrophysics Data System (ADS)

    Grigoryan, Artyom M.; Agaian, Sos S.

    2015-03-01

    In this paper, a general, efficient, split algorithm to compute the two-dimensional quaternion discrete Fourier transform (2-D QDFT), by using a special partitioning in the frequency domain, is introduced. The partition determines an effective transformation, or color image representation, in the form of 1-D quaternion signals which allows for splitting the N × M-point 2-D QDFT into a set of 1-D QDFTs. Comparative estimates revealing the efficiency of the proposed algorithms with respect to the known ones are given. In particular, the proposed method of calculating the 2^r × 2^r-point 2-D QDFT uses 18N^2 fewer multiplications than the well-known column-row method and the method of calculation based on the symplectic decomposition. The proposed algorithm is simple to apply and design, which makes it very practical in color image processing in the frequency domain.

  2. Automatic 2D-to-3D image conversion using 3D examples from the internet

    NASA Astrophysics Data System (ADS)

    Konrad, J.; Brown, G.; Wang, M.; Ishwar, P.; Wu, C.; Mukherjee, D.

    2012-03-01

    The availability of 3D hardware has so far outpaced the production of 3D content. Although to date many methods have been proposed to convert 2D images to 3D stereopairs, the most successful ones involve human operators and, therefore, are time-consuming and costly, while the fully-automatic ones have not yet achieved the same level of quality. This subpar performance is due to the fact that automatic methods usually rely on assumptions about the captured 3D scene that are often violated in practice. In this paper, we explore a radically different approach inspired by our work on saliency detection in images. Instead of relying on a deterministic scene model for the input 2D image, we propose to "learn" the model from a large dictionary of stereopairs, such as YouTube 3D. Our new approach is built upon a key observation and an assumption. The key observation is that among millions of stereopairs available on-line, there likely exist many stereopairs whose 3D content matches that of the 2D input (query). We assume that two stereopairs whose left images are photometrically similar are likely to have similar disparity fields. Our approach first finds a number of on-line stereopairs whose left image is a close photometric match to the 2D query and then extracts depth information from these stereopairs. Since disparities for the selected stereopairs differ due to differences in underlying image content, level of noise, distortions, etc., we combine them by using the median. We apply the resulting median disparity field to the 2D query to obtain the corresponding right image, while handling occlusions and newly-exposed areas in the usual way. We have applied our method in two scenarios. First, we used YouTube 3D videos in search of the most similar frames. Then, we repeated the experiments on a small, but carefully-selected, dictionary of stereopairs closely matching the query. 
This, to a degree, emulates the results one would expect from the use of an extremely large 3D
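    The median-based fusion of the retrieved disparity fields can be sketched as follows; the disparity values are synthetic stand-ins for fields extracted from matched on-line stereopairs.

```python
import numpy as np

def fuse_disparities(disparity_stack):
    """Combine disparity fields retrieved from several matched stereopairs.
    The per-pixel median suppresses outliers caused by content differences,
    noise, or distortions in individual matches."""
    return np.median(disparity_stack, axis=0)

# Three hypothetical disparity fields for the same query; one is a bad match.
d1 = np.full((4, 4), 10.0)
d2 = np.full((4, 4), 11.0)
d3 = np.full((4, 4), 50.0)   # outlier from a mismatched stereopair
fused = fuse_disparities(np.stack([d1, d2, d3]))
```

    The fused field is then applied to the 2D query to synthesize the right image, with occlusions handled separately as the abstract notes.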

  3. Parameterising root system growth models using 2D neutron radiography images

    NASA Astrophysics Data System (ADS)

    Schnepf, Andrea; Felderer, Bernd; Vontobel, Peter; Leitner, Daniel

    2013-04-01

    Root architecture is a key factor for plant acquisition of water and nutrients from soil. In particular, in view of a second green revolution in which the below-ground parts of agricultural crops are important, it is essential to characterise and quantify root architecture and its effect on plant resource acquisition. Mathematical models can help to understand the processes occurring in the soil-plant system; they can be used to quantify the effect of root and rhizosphere traits on resource acquisition and the response to environmental conditions. In order to do so, root architectural models are coupled with a model of water and solute transport in soil. However, dynamic root architectural models are difficult to parameterise. Novel imaging techniques such as x-ray computed tomography, neutron radiography and magnetic resonance imaging enable the in situ visualisation of plant root systems, and these images facilitate the parameterisation of dynamic root architecture models. These imaging techniques are capable of producing 3D or 2D images. Moreover, 2D images are also available in the form of hand drawings or from images of standard cameras. While full 3D imaging tools are still limited in resolution, 2D techniques are a more accurate and less expensive option for observing roots in their environment. However, analysis of 2D images has additional difficulties compared to the 3D case because of overlapping roots. We present a novel algorithm for the parameterisation of root system growth models based on 2D images of root systems. The algorithm analyses dynamic image data, i.e. a series of 2D images of the root system at different points in time. Image data has already been adjusted for missing links and artefacts, and segmentation was performed by applying a matched filter response.
From this time series of binary 2D images, we parameterise the dynamic root architecture model in the following way: First, a morphological skeleton is derived from the binary
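    As a deliberately simplified stand-in for the full skeleton-based parameterisation, one growth parameter that can be read off such a binary time series is a total elongation rate, sketched here under the assumption of thin (single-pixel-wide) segmented roots.

```python
import numpy as np

def elongation_rate(binary_stack, times, pixel_size_mm=0.1):
    """Estimate a total root elongation rate (mm per time unit) from a
    series of segmented binary 2D images by linear regression of root
    length against time. For thin, one-pixel-wide segmentations the pixel
    count is a reasonable length proxy; overlapping roots bias it downward."""
    lengths = np.array([img.sum() for img in binary_stack]) * pixel_size_mm
    slope, _ = np.polyfit(times, lengths, 1)
    return slope

# Synthetic series: a single root that grows by 5 pixels per day for 4 days.
frames = [np.zeros((20, 20), dtype=bool) for _ in range(4)]
for day, img in enumerate(frames):
    img[0:5 + 5 * day, 10] = True
rate = elongation_rate(frames, times=np.arange(4), pixel_size_mm=0.1)
```

    The actual algorithm works on the morphological skeleton and per-branch statistics rather than this aggregate measure.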

  4. Single particle 3D reconstruction for 2D crystal images of membrane proteins.

    PubMed

    Scherer, Sebastian; Arheit, Marcel; Kowal, Julia; Zeng, Xiangyan; Stahlberg, Henning

    2014-03-01

    In cases where ultra-flat cryo-preparations of well-ordered two-dimensional (2D) crystals are available, electron crystallography is a powerful method for the determination of the high-resolution structures of membrane and soluble proteins. However, crystal unbending and Fourier-filtering methods in electron crystallography three-dimensional (3D) image processing are generally limited in their performance for 2D crystals that are badly ordered or non-flat. Here we present a single particle image processing approach, implemented as an extension of the 2D crystallographic pipeline realized in the 2dx software package, for the determination of high-resolution 3D structures of membrane proteins. The algorithm presented addresses the low signal-to-noise ratio (SNR) of 2D crystal images by exploiting neighborhood correlation between adjacent proteins in the 2D crystal. Compared with conventional single particle processing for randomly oriented particles, the computational costs are greatly reduced because the crystal limits the search space, which in turn permits a much finer search than classical single particle processing. To cope with the still considerable computational costs, our software features a hybrid parallelization scheme for multi-CPU clusters and computers with high-end graphics processing units (GPUs). We successfully apply the new refinement method to the structure of the potassium channel MloK1. The calculated 3D reconstruction shows more structural details and contains less noise than the map obtained by conventional Fourier-filtering based processing of the same 2D crystal images.

  5. Detection of Leptomeningeal Metastasis by Contrast-Enhanced 3D T1-SPACE: Comparison with 2D FLAIR and Contrast-Enhanced 2D T1-Weighted Images

    PubMed Central

    Gil, Bomi; Hwang, Eo-Jin; Lee, Song; Jang, Jinhee; Jung, So-Lyung; Ahn, Kook-Jin; Kim, Bum-soo

    2016-01-01

    Introduction To compare the diagnostic accuracy of contrast-enhanced three-dimensional (3D) T1-weighted sampling perfection with application-optimized contrasts using different flip angle evolutions (T1-SPACE), 2D fluid-attenuated inversion recovery (FLAIR), and contrast-enhanced 2D T1-weighted images in the detection of leptomeningeal metastasis, without resorting to invasive procedures such as CSF tapping. Materials and Methods Three groups of patients were included retrospectively over a 9-month period (2013-04-01 to 2013-12-31): group 1, patients with malignant cells on CSF cytology (n = 22); group 2, stroke patients with steno-occlusion of the ICA or MCA (n = 16); and group 3, patients with negative MRI findings whose symptoms were dizziness or headache (n = 25). A total of 63 sets of MR images were separately collected and randomly arranged: (1) CE 3D T1-SPACE; (2) 2D FLAIR; and (3) CE T1-GRE, using a 3-Tesla MR system. A faculty neuroradiologist with 8 years of experience and a second-year radiology trainee reviewed each MR image set, blinded to the CSF cytology results, and coded their observations as positive or negative for leptomeningeal metastasis. The CSF cytology result was considered the gold standard. Sensitivity and specificity of each MR image set were calculated. Diagnostic accuracy was compared using McNemar's test. A Cohen's kappa analysis was performed to assess inter-observer agreement. Results Diagnostic accuracy did not differ between 3D T1-SPACE and CSF cytology for either rater. However, the accuracy of 2D FLAIR and contrast-enhanced 2D T1-weighted GRE was inconsistent between the two raters. The kappa statistics were 0.657 (3D T1-SPACE), 0.420 (2D FLAIR), and 0.160 (2D contrast-enhanced T1-weighted GRE). The 3D T1-SPACE images showed the highest inter-observer agreement between the raters. Conclusions Compared to 2D FLAIR and 2D contrast-enhanced T1-weighted GRE, contrast-enhanced 3D T1 SPACE showed a better detection rate of
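    The inter-observer agreement statistic used above can be computed directly for two raters' binary calls; this is the standard Cohen's kappa formula on illustrative data, not the study's ratings.

```python
import numpy as np

def cohens_kappa(r1, r2):
    """Cohen's kappa for two raters' binary calls (1 = positive, 0 = negative)."""
    r1, r2 = np.asarray(r1), np.asarray(r2)
    po = np.mean(r1 == r2)                       # observed agreement
    p_pos = np.mean(r1) * np.mean(r2)            # chance agreement, "positive"
    p_neg = (1 - np.mean(r1)) * (1 - np.mean(r2))
    pe = p_pos + p_neg                           # total chance agreement
    return (po - pe) / (1 - pe)

k_perfect = cohens_kappa([1, 1, 0, 0], [1, 1, 0, 0])   # identical ratings
k_chance = cohens_kappa([1, 0, 1, 0], [1, 1, 0, 0])    # agreement at chance
```

    Kappa is 1 for perfect agreement and 0 when the observed agreement equals what chance would predict, which is why the reported 0.657 vs. 0.160 gap is meaningful.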

  6. 2D electron temperature diagnostic using soft x-ray imaging technique

    SciTech Connect

    Nishimura, K. Sanpei, A. Tanaka, H.; Ishii, G.; Kodera, R.; Ueba, R.; Himura, H.; Masamune, S.; Ohdachi, S.; Mizuguchi, N.

    2014-03-15

    We have developed a two-dimensional (2D) electron temperature (T_e) diagnostic system for thermal structure studies in a low-aspect-ratio reversed field pinch (RFP). The system consists of a soft x-ray (SXR) camera with two pinholes and two kinds of absorber foils, combined with a high-speed camera. Two SXR images with almost the same viewing area are formed through different absorber foils on a single micro-channel plate (MCP). A 2D T_e image can then be obtained by calculating the intensity ratio for each element of the two images. We have succeeded in distinguishing the T_e image in the quasi-single-helicity (QSH) RFP state from that in the multi-helicity (MH) state, where the former is characterized by a concentrated magnetic fluctuation spectrum and the latter by a broad spectrum of edge magnetic fluctuations.
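    The per-element ratio-to-temperature conversion can be sketched as a lookup on a precomputed calibration curve; the calibration values below are synthetic placeholders, not real foil responses.

```python
import numpy as np

def te_map(img_foil_a, img_foil_b, ratio_cal, te_cal):
    """Convert two SXR images taken through different absorber foils into a
    2D electron-temperature map via a calibration curve relating the
    transmission ratio to T_e. ratio_cal must be increasing for np.interp."""
    ratio = img_foil_a / img_foil_b
    return np.interp(ratio, ratio_cal, te_cal)

# Synthetic monotonic calibration: ratio 1 -> 100 eV, ..., ratio 3 -> 500 eV.
ratio_cal = np.array([1.0, 2.0, 3.0])
te_cal = np.array([100.0, 300.0, 500.0])
a = np.full((8, 8), 4.0)
b = np.full((8, 8), 2.0)        # ratio is 2 everywhere
te = te_map(a, b, ratio_cal, te_cal)
```

    In practice the calibration curve would come from the foil transmission functions and the assumed SXR emission spectrum.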

  7. Avoiding symmetry-breaking spatial non-uniformity in deformable image registration via a quasi-volume-preserving constraint.

    PubMed

    Aganj, Iman; Reuter, Martin; Sabuncu, Mert R; Fischl, Bruce

    2015-02-01

    The choice of a reference image typically influences the results of deformable image registration, thereby making it asymmetric. This is a consequence of a spatially non-uniform weighting in the cost function integral that leads to general registration inaccuracy. The inhomogeneous integral measure--which is the local volume change in the transformation, thus varying through the course of the registration--causes image regions to contribute differently to the objective function. More importantly, the optimization algorithm is allowed to minimize the cost function by manipulating the volume change, instead of aligning the images. The approaches that restore symmetry to deformable registration successfully achieve inverse-consistency, but do not eliminate the regional bias that is the source of the error. In this work, we address the root of the problem: the non-uniformity of the cost function integral. We introduce a new quasi-volume-preserving constraint that allows for volume change only in areas with well-matching image intensities, and show that such a constraint puts a bound on the error arising from spatial non-uniformity. We demonstrate the advantages of adding the proposed constraint to standard (asymmetric and symmetrized) demons and diffeomorphic demons algorithms through experiments on synthetic images, and real X-ray and 2D/3D brain MRI data. Specifically, the results show that our approach leads to image alignment with more accurate matching of manually defined neuroanatomical structures, better tradeoff between image intensity matching and registration-induced distortion, improved native symmetry, and lower susceptibility to local optima. In summary, the inclusion of this space- and time-varying constraint leads to better image registration in every respect that we measured. PMID:25449738

  8. Combining 2D synchrosqueezed wave packet transform with optimization for crystal image analysis

    NASA Astrophysics Data System (ADS)

    Lu, Jianfeng; Wirth, Benedikt; Yang, Haizhao

    2016-04-01

    We develop a variational optimization method for crystal analysis in atomic resolution images, which uses information from a 2D synchrosqueezed transform (SST) as input. The synchrosqueezed transform is applied to extract initial information from atomic crystal images: crystal defects, rotations and the gradient of elastic deformation. The deformation gradient estimate is then improved outside the identified defect region via a variational approach, to obtain more robust results agreeing better with the physical constraints. The variational model is optimized by a nonlinear projected conjugate gradient method. Both examples of images from computer simulations and imaging experiments are analyzed, with results demonstrating the effectiveness of the proposed method.

  9. Estimation of lung lobar sliding using image registration

    NASA Astrophysics Data System (ADS)

    Amelon, Ryan; Cao, Kunlin; Reinhardt, Joseph M.; Christensen, Gary E.; Raghavan, Madhavan

    2012-03-01

    MOTIVATION: The lobes of the lungs slide relative to each other during breathing. Quantifying lobar sliding can aid in better understanding lung function, better modeling of lung dynamics, and a better understanding of the limits of image registration performance near fissures. We have developed a method to estimate lobar sliding in the lung from image registration of CT scans. METHODS: Six human lungs were analyzed using CT scans spanning functional residual capacity (FRC) to total lung capacity (TLC). The lung lobes were segmented and registered on a lobe-by-lobe basis. The displacement fields from the independent lobe registrations were then combined into a single image. This technique allows for displacement discontinuity at lobar boundaries. The displacement field was then analyzed as a continuum by forming finite elements from the voxel grid of the FRC image. Elements at a discontinuity will appear to have undergone significantly elevated 'shear stretch' compared to those within the parenchyma. Shear stretch is shown to be a good measure of sliding magnitude in this context. RESULTS: The sliding map clearly delineated the fissures of the lung. The fissure between the right upper and right lower lobes showed the greatest sliding in all subjects while the fissure between the right upper and right middle lobe showed the least sliding.
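    The shear measure used to flag sliding can be sketched from the displacement field's symmetric gradient; the 2D field below is synthetic, with a discontinuity standing in for a fissure, whereas the paper computes the measure on finite elements of 3D CT registrations.

```python
import numpy as np

def shear_map(uy, ux):
    """Per-pixel shear measure from a 2D displacement field (uy, ux):
    magnitude of the off-diagonal term of the symmetric gradient. Sliding
    discontinuities (e.g. at lobar fissures) show up as sharply elevated
    values relative to the surrounding parenchyma."""
    duy_dy, duy_dx = np.gradient(uy)
    dux_dy, dux_dx = np.gradient(ux)
    return np.abs(0.5 * (duy_dx + dux_dy))

# Two "lobes" sliding past each other across the row boundary y = 8.
uy = np.zeros((16, 16))
ux = np.zeros((16, 16))
ux[8:, :] = 5.0                  # lower block shifts 5 px in x
shear = shear_map(uy, ux)
```

    The map is near zero inside each smoothly moving block and peaks along the sliding interface, which is exactly the behavior used to delineate the fissures.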

  10. Robust 2D phase correction for echo planar imaging under a tight field-of-view.

    PubMed

    Xu, Dan; King, Kevin F; Zur, Yuval; Hinks, R Scott

    2010-12-01

    Nyquist ghost artifacts are a serious issue in echo planar imaging. These artifacts primarily originate from the phase difference between even and odd echo images and can be removed or reduced using phase correction methods. The commonly used 1D phase correction can only correct phase differences along the readout axis. 2D correction is therefore necessary when phase differences are present along both the readout and phase-encoding axes. However, existing 2D methods have several unaddressed issues that affect their practicality: uncharacterized noise behavior, image artifacts due to unoptimized phase estimation, Gibbs ringing when directly applied to partial k_y data, and, most seriously, a new image artifact under a tight field-of-view (i.e., a field-of-view slightly smaller than the object). All these issues are addressed in this article. Specifically, a theoretical analysis of noise amplification and of the effect of phase estimation error is provided, and the tradeoff between noise and ghosting is studied. A new 2D phase correction method with improved polynomial fitting, joint homodyne processing and phase correction, and compatibility with tight fields-of-view is then proposed. Various results show that the proposed method robustly generates images free of Nyquist ghosts and other artifacts, even in oblique scans or when cross-term eddy currents are significant. PMID:20806354
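    A 1D sketch of the underlying idea is shown below: fit a linear phase to the even/odd profile phase difference and remove it. The article's 2D method additionally fits along the phase-encoding direction and handles partial-k_y data, which this toy omits.

```python
import numpy as np

def correct_linear_phase(even, odd):
    """Fit a linear phase to the even/odd echo phase difference (where the
    signal is strong enough) and remove it from the odd profile."""
    phase = np.angle(even * np.conj(odd))
    x = np.arange(phase.size)
    mask = np.abs(even) > 0.1 * np.abs(even).max()   # fit where SNR is adequate
    slope, intercept = np.polyfit(x[mask], np.unwrap(phase[mask]), 1)
    return odd * np.exp(1j * (slope * x + intercept))

x = np.arange(64)
even = np.exp(-((x - 32) / 10.0) ** 2).astype(complex)   # smooth profile
odd = even * np.exp(-1j * (0.05 * x + 0.3))              # linear phase error
corrected = correct_linear_phase(even, odd)
```

    Removing this even/odd phase difference is what collapses the N/2 Nyquist ghost; the magnitude mask is one simple way to keep noisy background pixels out of the fit.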

  11. Estimating elastic properties of tissues from standard 2D ultrasound images

    NASA Astrophysics Data System (ADS)

    Kybic, Jan; Smutek, Daniel

    2005-04-01

    We propose a way of measuring the elastic properties of tissues in vivo, using a standard medical ultrasound machine without any special hardware. Images are acquired while the tissue is deformed by a varying pressure applied by the operator through the hand-held ultrasound probe. The local elastic shear modulus is either estimated from a local displacement field reconstructed by an elastic registration algorithm, or the modulus and the displacement are estimated simultaneously. The relation between modulus and displacement is calculated using a finite element method (FEM). The estimation algorithms were tested on synthetic, phantom, and real subject data.

  12. Evaluation of registration strategies for multi-modality images of rat brain slices

    NASA Astrophysics Data System (ADS)

    Palm, Christoph; Vieten, Andrea; Salber, Dagmar; Pietrzyk, Uwe

    2009-05-01

    In neuroscience, small-animal studies frequently involve dealing with series of images from multiple modalities such as histology and autoradiography. The consistent and bias-free restacking of multi-modality image series is obligatory as a starting point for subsequent non-rigid registration procedures and for quantitative comparisons with positron emission tomography (PET) and other in vivo data. Up to now, in the absence of cross-validation against an inherently 3D modality, consistency between 2D slices has frequently been presumed to imply closeness to the true morphology, owing to the smooth appearance of the contours of anatomical structures. However, in multi-modality stacks consistency is difficult to assess. In this work, consistency is defined in terms of smoothness of neighboring slices within a single modality and between different modalities. Registration bias denotes the distortion of the registered stack in comparison to the true 3D morphology and shape. Based on these metrics, different restacking strategies for multi-modality rat brain slices are experimentally evaluated. Experiments based on MRI-simulated and real dual-tracer autoradiograms reveal a clear bias of the restacked volume despite quantitatively high consistency and qualitatively smooth brain structures. However, different registration strategies yield different inter-consistency metrics. If no genuine 3D modality is available, the use of the so-called SOP (slice-order preferred) or MOSOP (modality-and-slice-order preferred) strategy is recommended.

  13. Adaptive registration of diffusion tensor images on lie groups

    NASA Astrophysics Data System (ADS)

    Liu, Wei; Chen, LeiTing; Cai, HongBin; Qiu, Hang; Fei, Nanxi

    2016-08-01

    With diffusion tensor imaging (DTI), more detailed information on tissue microstructure is available for medical image processing. In this paper, we present a locally adaptive topology-preserving method for DTI registration on Lie groups. The method aims to obtain more plausible diffeomorphisms for spatial transformations via accurate approximation of the local tangent space on the Lie group manifold. In order to capture the exact geometric structure of the Lie group, the local linear approximation is efficiently optimized by adaptively selecting the local neighborhood sizes on the given set of data points. Furthermore, numerical comparative experiments are conducted on both synthetic data and real DTI data to demonstrate that the proposed method yields a higher degree of topology preservation on a dense deformation tensor field while improving the registration accuracy.

  14. Concepts for on-board satellite image registration, volume 1

    NASA Technical Reports Server (NTRS)

    Ruedger, W. H.; Daluge, D. R.; Aanstoos, J. V.

    1980-01-01

    The NASA-NEEDS program goals present a requirement for on-board signal processing to achieve user-compatible, information-adaptive data acquisition. One very specific area of interest is the preprocessing required to register imaging sensor data which have been distorted by anomalies in subsatellite-point position and/or attitude control. The concepts and considerations involved in using state-of-the-art positioning systems such as the Global Positioning System (GPS) in concert with state-of-the-art attitude stabilization and/or determination systems to provide the required registration accuracy are discussed. Emphasis is placed on assessing the accuracy to which a given image picture element can be located and identified, determining the algorithms required to augment the registration procedure, and evaluating the technology impact of performing these procedures on-board the satellite.

  15. 2D ESR image reconstruction from 1D projections using the modulated field gradient method

    NASA Astrophysics Data System (ADS)

    Páli, T.; Sass, L.; Horvat, L. I.; Ebert, B.

A method for the reconstruction of 2D ESR images from 1D projections which is based on the modulated field gradient method has been explored. The 2D distribution of spin-labeled stearic acid in oriented and unoriented dimyristoyl phosphatidylcholine multilayers on a flat quartz support was determined. Such samples are potentially useful for the determination of lipid lateral diffusion in oriented multilayers by monitoring the spreading of a sharp concentration profile in one or two dimensions. The limitations of the method are discussed and the improvements which are needed for dynamic measurements are outlined.

  16. Improving multispectral satellite image compression using onboard subpixel registration

    NASA Astrophysics Data System (ADS)

    Albinet, Mathieu; Camarero, Roberto; Isnard, Maxime; Poulet, Christophe; Perret, Jokin

    2013-09-01

Future CNES earth observation missions will have to deal with an ever increasing telemetry data rate due to improvements in resolution and the addition of spectral bands. Current CNES image compressors implement a discrete wavelet transform (DWT) followed by a bit plane encoding (BPE), but only on a mono-spectral basis, and do not profit from the multispectral redundancy of the observed scenes. Recent CNES studies have proven a substantial gain in the achievable compression ratio, +20% to +40% on selected scenarios, by implementing a multispectral compression scheme based on a Karhunen-Loeve transform (KLT) followed by the classical DWT+BPE. But such results can be achieved only on perfectly registered bands; a registration error as low as 0.5 pixel negates all the benefits of multispectral compression. In this work, we first study the possibility of implementing multi-band subpixel onboard registration based on registration grids generated on-the-fly by the satellite attitude control system and simplified resampling and interpolation techniques. Indeed, band registration is usually performed on the ground using sophisticated techniques too computationally intensive for onboard use. This fully quantized algorithm is tuned to meet acceptable registration performance within stringent image quality criteria, with the objective of onboard real-time processing. In a second part, we describe an FPGA implementation developed to evaluate the design complexity and, by extrapolation, the data rate achievable on a space-qualified ASIC. Finally, we present the impact of this approach on the processing chain, not only onboard but also on the ground, and on the design of the instrument.
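The simplified resampling and interpolation step that such an onboard scheme relies on can be sketched with bilinear interpolation. This is a minimal illustration assuming a single constant subpixel shift per band; the function name, edge clamping and floating-point arithmetic are assumptions, not CNES flight code (a real version would use the attitude-derived registration grid and fixed-point arithmetic):

```python
import numpy as np

def bilinear_shift(band, dy, dx):
    """Resample a 2D band at a subpixel offset (dy, dx) using
    bilinear interpolation; out-of-range samples clamp to the edge."""
    h, w = band.shape
    ys = np.clip(np.arange(h) + dy, 0, h - 1)
    xs = np.clip(np.arange(w) + dx, 0, w - 1)
    y0 = np.floor(ys).astype(int); x0 = np.floor(xs).astype(int)
    y1 = np.minimum(y0 + 1, h - 1); x1 = np.minimum(x0 + 1, w - 1)
    wy = (ys - y0)[:, None]; wx = (xs - x0)[None, :]
    # weighted sum of the four surrounding pixels
    return ((1 - wy) * (1 - wx) * band[np.ix_(y0, x0)]
            + (1 - wy) * wx * band[np.ix_(y0, x1)]
            + wy * (1 - wx) * band[np.ix_(y1, x0)]
            + wy * wx * band[np.ix_(y1, x1)])
```

For a horizontal intensity ramp, a half-pixel shift in x yields values interpolated halfway between neighbouring columns, as expected.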

  17. 2D Doppler backscattering using synthetic aperture microwave imaging of MAST edge plasmas

    NASA Astrophysics Data System (ADS)

    Thomas, D. A.; Brunner, K. J.; Freethy, S. J.; Huang, B. K.; Shevchenko, V. F.; Vann, R. G. L.

    2016-02-01

Doppler backscattering (DBS) is already established as a powerful diagnostic; its extension to 2D enables imaging of turbulence characteristics from an extended region of the cut-off surface. The Synthetic Aperture Microwave Imaging (SAMI) diagnostic has conducted proof-of-principle 2D DBS experiments on MAST edge plasmas. SAMI actively probes the plasma edge using a wide (±40° vertical and horizontal) and tuneable (10-34.5 GHz) beam. The Doppler backscattered signal is digitised in vector form using an array of eight Vivaldi PCB antennas. This allows the receiving array to be focused in any direction within the field of view simultaneously, with an angular resolution of 6-24° FWHM across 10-34.5 GHz. This capability is unique to SAMI and is a novel way of conducting DBS experiments. In this paper the feasibility of conducting 2D DBS experiments is explored. Initial observations of phenomena previously measured by conventional DBS experiments are presented, such as momentum injection from neutral beams and an abrupt change in power and turbulence velocity coinciding with the onset of H-mode. In addition, 2D DBS imaging allows a measurement of magnetic pitch angle to be made; preliminary results are presented. Capabilities gained through steering a beam using a phased array, and the limitations of this technique, are discussed.

  18. Synthetic aperture radar/LANDSAT MSS image registration

    NASA Technical Reports Server (NTRS)

    Maurer, H. E. (Editor); Oberholtzer, J. D. (Editor); Anuta, P. E. (Editor)

    1979-01-01

    Algorithms and procedures necessary to merge aircraft synthetic aperture radar (SAR) and LANDSAT multispectral scanner (MSS) imagery were determined. The design of a SAR/LANDSAT data merging system was developed. Aircraft SAR images were registered to the corresponding LANDSAT MSS scenes and were the subject of experimental investigations. Results indicate that the registration of SAR imagery with LANDSAT MSS imagery is feasible from a technical viewpoint, and useful from an information-content viewpoint.

  19. A quantitative damage imaging technique based on enhanced CCRTM for composite plates using 2D scan

    NASA Astrophysics Data System (ADS)

    He, Jiaze; Yuan, Fuh-Gwo

    2016-10-01

A two-dimensional (2D) non-contact areal scan system was developed to image and quantify impact damage in a composite plate using an enhanced zero-lag cross-correlation reverse-time migration (E-CCRTM) technique. The system comprises a single piezoelectric (PZT) wafer mounted on the composite plate and a laser Doppler vibrometer (LDV) that scans a region in the vicinity of the PZT to capture the scattered wavefield. The proposed damage imaging technique takes into account the amplitude, phase, geometric spreading, and all of the frequency content of the Lamb waves propagating in the plate; thus, a reflectivity coefficient of the delamination is calculated and potentially related to damage severity. Comparisons are made in terms of damage imaging quality between 2D areal scans and 1D line scans, as well as between the proposed and existing imaging conditions. The experimental results show that the 2D E-CCRTM performs robustly when imaging and quantifying impact damage in large-scale composites using a single PZT actuator with a nearby areal scan using LDV.
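The zero-lag cross-correlation imaging condition underlying CCRTM correlates the forward-propagated source wavefield with the back-propagated receiver wavefield at every pixel and sums over time, I(x) = Σ_t S(x,t) R(x,t). A minimal generic sketch (the amplitude, phase and geometric-spreading enhancements of E-CCRTM are not shown, and the array layout is an assumption):

```python
import numpy as np

def zero_lag_cc_image(source_wf, receiver_wf):
    """Zero-lag cross-correlation imaging condition.
    Both wavefields have shape (n_times, ny, nx); the result is the
    time-summed pointwise product, one image value per pixel."""
    return np.sum(source_wf * receiver_wf, axis=0)
```

High image values mark locations where the two wavefields coincide in time, i.e. candidate scatterers such as delaminations.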

  20. Explicit B-spline regularization in diffeomorphic image registration

    PubMed Central

    Tustison, Nicholas J.; Avants, Brian B.

    2013-01-01

    Diffeomorphic mappings are central to image registration due largely to their topological properties and success in providing biologically plausible solutions to deformation and morphological estimation problems. Popular diffeomorphic image registration algorithms include those characterized by time-varying and constant velocity fields, and symmetrical considerations. Prior information in the form of regularization is used to enforce transform plausibility taking the form of physics-based constraints or through some approximation thereof, e.g., Gaussian smoothing of the vector fields [a la Thirion's Demons (Thirion, 1998)]. In the context of the original Demons' framework, the so-called directly manipulated free-form deformation (DMFFD) (Tustison et al., 2009) can be viewed as a smoothing alternative in which explicit regularization is achieved through fast B-spline approximation. This characterization can be used to provide B-spline “flavored” diffeomorphic image registration solutions with several advantages. Implementation is open source and available through the Insight Toolkit and our Advanced Normalization Tools (ANTs) repository. A thorough comparative evaluation with the well-known SyN algorithm (Avants et al., 2008), implemented within the same framework, and its B-spline analog is performed using open labeled brain data and open source evaluation tools. PMID:24409140

  1. Explicit B-spline regularization in diffeomorphic image registration.

    PubMed

    Tustison, Nicholas J; Avants, Brian B

    2013-01-01

    Diffeomorphic mappings are central to image registration due largely to their topological properties and success in providing biologically plausible solutions to deformation and morphological estimation problems. Popular diffeomorphic image registration algorithms include those characterized by time-varying and constant velocity fields, and symmetrical considerations. Prior information in the form of regularization is used to enforce transform plausibility taking the form of physics-based constraints or through some approximation thereof, e.g., Gaussian smoothing of the vector fields [a la Thirion's Demons (Thirion, 1998)]. In the context of the original Demons' framework, the so-called directly manipulated free-form deformation (DMFFD) (Tustison et al., 2009) can be viewed as a smoothing alternative in which explicit regularization is achieved through fast B-spline approximation. This characterization can be used to provide B-spline "flavored" diffeomorphic image registration solutions with several advantages. Implementation is open source and available through the Insight Toolkit and our Advanced Normalization Tools (ANTs) repository. A thorough comparative evaluation with the well-known SyN algorithm (Avants et al., 2008), implemented within the same framework, and its B-spline analog is performed using open labeled brain data and open source evaluation tools.
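The Gaussian smoothing of displacement fields mentioned above (a la Thirion's Demons) can be sketched as separable convolution applied independently to each component of the field. This is a minimal NumPy illustration, not the ANTs/ITK implementation; names and the zero-padded borders are assumptions:

```python
import numpy as np

def gaussian_kernel(sigma, radius=None):
    # 1D Gaussian kernel, normalised to sum to 1
    if radius is None:
        radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1, dtype=float)
    k = np.exp(-0.5 * (x / sigma) ** 2)
    return k / k.sum()

def smooth_vector_field(field, sigma):
    """Demons-style regularisation: separable Gaussian smoothing of a
    2D displacement field of shape (H, W, 2), each component treated
    independently (borders are zero-padded by np.convolve)."""
    k = gaussian_kernel(sigma)
    out = field.astype(float).copy()
    for axis in (0, 1):
        out = np.apply_along_axis(
            lambda v: np.convolve(v, k, mode="same"), axis, out)
    return out
```

In a Demons loop this smoothing is applied to the update (fluid-like) or total (diffusion-like) field each iteration; DMFFD replaces it with B-spline approximation.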

  2. Optimal grid point selection for improved nonrigid medical image registration

    NASA Astrophysics Data System (ADS)

    Fookes, Clinton; Maeder, Anthony

    2004-05-01

Non-rigid image registration is an essential tool required for overcoming the inherent local anatomical variations that exist between medical images acquired from different individuals or atlases, among others. This type of registration defines a deformation field that gives a translation or mapping for every pixel in the image. One popular local approach for estimating this deformation field, known as block matching, is where a grid of control points is defined on an image and each point is taken as the centre of a small window. These windows are then translated in the second image to maximise a local similarity criterion. This generates two corresponding sets of control points for the two images, yielding a sparse deformation field. This sparse field can then be propagated to the entire image using well known methods such as the thin-plate spline warp or simple Gaussian convolution. Previous block matching procedures all utilise uniformly distributed grid points. This results in the generation of a sparse deformation field containing displacement estimates at uniformly spaced locations, and it neglects the evidence that block matching results depend on the amount of local information content: results are better in regions of high information than in regions of low information. Consequently, this paper addresses this drawback by proposing the use of a Reversible Jump Markov Chain Monte Carlo (RJMCMC) statistical procedure to optimally select grid points of interest. These grid points have a greater concentration in regions of high information and a lower concentration in regions of low information. Results show that non-rigid registration can be improved by using optimally selected grid points of interest.
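The block matching step described above can be sketched as an exhaustive local search maximising normalised cross-correlation. This is a minimal illustration with arbitrary window and search sizes; the paper's RJMCMC grid selection is not shown, and all names are assumptions:

```python
import numpy as np

def ncc(a, b):
    # normalised cross-correlation of two equally sized windows
    a = a - a.mean(); b = b - b.mean()
    d = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / d if d > 0 else 0.0

def block_match(fixed, moving, grid, half=4, search=3):
    """For each control point, translate a (2*half+1)^2 window over a
    +/-search neighbourhood of `moving` and keep the shift that
    maximises NCC. Returns a sparse deformation field {point: (dy, dx)}."""
    field = {}
    for (y, x) in grid:
        ref = fixed[y - half:y + half + 1, x - half:x + half + 1]
        best, best_shift = -np.inf, (0, 0)
        for dy in range(-search, search + 1):
            for dx in range(-search, search + 1):
                win = moving[y + dy - half:y + dy + half + 1,
                             x + dx - half:x + dx + half + 1]
                if win.shape == ref.shape:
                    s = ncc(ref, win)
                    if s > best:
                        best, best_shift = s, (dy, dx)
        field[(y, x)] = best_shift
    return field
```

The resulting sparse field would then be propagated to every pixel, e.g. with a thin-plate spline warp or Gaussian convolution as the abstract notes.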

  3. 2D and 3D MALDI-imaging: conceptual strategies for visualization and data mining.

    PubMed

    Thiele, Herbert; Heldmann, Stefan; Trede, Dennis; Strehlow, Jan; Wirtz, Stefan; Dreher, Wolfgang; Berger, Judith; Oetjen, Janina; Kobarg, Jan Hendrik; Fischer, Bernd; Maass, Peter

    2014-01-01

registration techniques. Different strategies for automatic serial image registration applied to MS datasets are outlined in detail. The third image modality is histology driven, i.e. a digital scan of the histologically stained slices in high-resolution. After fusion of reconstructed scan images and MRI the slice-related coordinates of the mass spectra can be propagated into 3D-space. After image registration of scan images and histologically stained images, the anatomical information from histology is fused with the mass spectra from MALDI-MSI. As a result of the described pipeline we have a set of 3-dimensional images representing the same anatomies, i.e. the reconstructed slice scans, the spectral images as well as corresponding clustering results, and the acquired MRI. Great emphasis is put on the fact that the co-registered MRI providing anatomical details improves the interpretation of 3D MALDI images. The ability to relate mass spectrometry derived molecular information with in vivo and in vitro imaging has potentially important implications. This article is part of a Special Issue entitled: Computational Proteomics in the Post-Identification Era. Guest Editors: Martin Eisenacher and Christian Stephan. PMID:23467008

  4. Advances and challenges in deformable image registration: From image fusion to complex motion modelling.

    PubMed

    Schnabel, Julia A; Heinrich, Mattias P; Papież, Bartłomiej W; Brady, Sir J Michael

    2016-10-01

    Over the past 20 years, the field of medical image registration has significantly advanced from multi-modal image fusion to highly non-linear, deformable image registration for a wide range of medical applications and imaging modalities, involving the compensation and analysis of physiological organ motion or of tissue changes due to growth or disease patterns. While the original focus of image registration has predominantly been on correcting for rigid-body motion of brain image volumes acquired at different scanning sessions, often with different modalities, the advent of dedicated longitudinal and cross-sectional brain studies soon necessitated the development of more sophisticated methods that are able to detect and measure local structural or functional changes, or group differences. Moving outside of the brain, cine imaging and dynamic imaging required the development of deformable image registration to directly measure or compensate for local tissue motion. Since then, deformable image registration has become a general enabling technology. In this work we will present our own contributions to the state-of-the-art in deformable multi-modal fusion and complex motion modelling, and then discuss remaining challenges and provide future perspectives to the field.

  5. Advances and challenges in deformable image registration: From image fusion to complex motion modelling.

    PubMed

    Schnabel, Julia A; Heinrich, Mattias P; Papież, Bartłomiej W; Brady, Sir J Michael

    2016-10-01

    Over the past 20 years, the field of medical image registration has significantly advanced from multi-modal image fusion to highly non-linear, deformable image registration for a wide range of medical applications and imaging modalities, involving the compensation and analysis of physiological organ motion or of tissue changes due to growth or disease patterns. While the original focus of image registration has predominantly been on correcting for rigid-body motion of brain image volumes acquired at different scanning sessions, often with different modalities, the advent of dedicated longitudinal and cross-sectional brain studies soon necessitated the development of more sophisticated methods that are able to detect and measure local structural or functional changes, or group differences. Moving outside of the brain, cine imaging and dynamic imaging required the development of deformable image registration to directly measure or compensate for local tissue motion. Since then, deformable image registration has become a general enabling technology. In this work we will present our own contributions to the state-of-the-art in deformable multi-modal fusion and complex motion modelling, and then discuss remaining challenges and provide future perspectives to the field. PMID:27364430

  6. Occluded target viewing and identification high-resolution 2D imaging laser radar

    NASA Astrophysics Data System (ADS)

    Grasso, Robert J.; Dippel, George F.; Cecchetti, Kristen D.; Wikman, John C.; Drouin, David P.; Egbert, Paul I.

    2007-09-01

BAE SYSTEMS has developed a high-resolution 2D imaging laser radar (LADAR) system that has proven its ability to detect and identify hard targets in occluded environments, through battlefield obscurants, and through naturally occurring image-degrading atmospheres. Limitations of passive infrared imaging for target identification using medium wavelength infrared (MWIR) and long wavelength infrared (LWIR) atmospheric windows are well known. Of particular concern is that as wavelength increases, the aperture must be increased to maintain resolution, driving apertures to be very large for long-range identification; this is impractical because of size, weight, and optics cost. Conversely, at smaller apertures and large f-numbers, images may become photon-starved, requiring long integration times. Such images are most susceptible to distortion from atmospheric turbulence, platform vibration, or both. Additionally, long-range identification using passive thermal imaging is clutter limited, the clutter arising from objects in close proximity to the target object.

  7. A computationally efficient method for automatic registration of orthogonal x-ray images with volumetric CT data

    NASA Astrophysics Data System (ADS)

    Chen, Xin; Varley, Martin R.; Shark, Lik-Kwan; Shentall, Glyn S.; Kirby, Mike C.

    2008-02-01

The paper presents a computationally efficient 3D-2D image registration algorithm for automatic pre-treatment validation in radiotherapy. The novel aspects of the algorithm include (a) a hybrid cost function based on partial digitally reconstructed radiographs (DRRs) generated along projected anatomical contours and a level set term for similarity measurement; and (b) a fast search method based on parabola fitting and a sensitivity-based search order. Using CT and orthogonal x-ray images from a skull and a pelvis phantom, the proposed algorithm is compared with the conventional ray-casting full-DRR based registration method. Not only is the algorithm shown to be computationally more efficient, with registration time reduced by a factor of 8, but it also offers a 50% larger capture range, allowing initial patient displacements of up to 15 mm (measured by mean target registration error). For the simulated data, high registration accuracy with average errors of 0.53 mm ± 0.12 mm for translation and 0.61° ± 0.29° for rotation within the capture range has been achieved. For the tested phantom data, the algorithm has also been shown to be robust, unaffected by artificial markers in the image.
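The parabola-fitting search mentioned in (b) can be illustrated by fitting a quadratic through three cost samples along one pose parameter and jumping to its vertex, instead of evaluating the cost (and hence DRRs) on a dense grid. A minimal sketch; the sensitivity-based search order is not shown, and the function name is an assumption:

```python
def parabola_vertex(x0, f0, x1, f1, x2, f2):
    """Fit a parabola f(x) = a*x^2 + b*x + c through three samples of a
    cost function and return the x-coordinate of its vertex, -b/(2a),
    as the refined estimate of the minimum."""
    d = (x0 - x1) * (x0 - x2) * (x1 - x2)
    a = (x2 * (f1 - f0) + x1 * (f0 - f2) + x0 * (f2 - f1)) / d
    b = (x2**2 * (f0 - f1) + x1**2 * (f2 - f0) + x0**2 * (f1 - f2)) / d
    return -b / (2 * a)
```

For f(x) = (x - 1)^2 sampled at x = 0, 2, 3 the fit is exact, so the vertex lands on the true minimum at x = 1.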

  8. Reconfigurable 2D cMUT-ASIC arrays for 3D ultrasound image

    NASA Astrophysics Data System (ADS)

    Song, Jongkeun; Jung, Sungjin; Kim, Youngil; Cho, Kyungil; Kim, Baehyung; Lee, Seunghun; Na, Junseok; Yang, Ikseok; Kwon, Oh-kyong; Kim, Dongwook

    2012-03-01

This paper describes the design and implementation of the complete 2D capacitive micromachined ultrasound transducer (cMUT) electronics and its analog front-end module for transmitting high-voltage ultrasound pulses and receiving their echo signals to realize 3D ultrasound imaging. In order to minimize parasitic capacitances and ultimately improve signal-to-noise ratio (SNR), the cMUT has to be integrated with the Tx/Rx electronics. Additionally, integrating a 2D cMUT array module poses significant challenges in high-voltage pulser optimization, low-voltage analog/digital circuit design and packaging, due to the high density of elements and the small pitch of each element. We designed a 256 (16x16)-element cMUT and a reconfigurable driving ASIC composed of a 120 V high-voltage pulser, T/R switch, low-noise preamplifier and a digital control block to set the Tx ultrasound frequency and pulse train in each element. The designed high-voltage analog ASIC was successfully bonded to the 2D cMUT array by a flip-chip bonding process and connected to an analog front-end board to transmit pulse-echo signals. This implementation of a reconfigurable cMUT-ASIC-AFE board enables us to produce a large-aperture 2D transducer array and acquire high-quality 3D ultrasound images.

  9. TU-A-19A-01: Image Registration I: Deformable Image Registration, Contour Propagation and Dose Mapping: 101 and 201

    SciTech Connect

    Kessler, M

    2014-06-15

Deformable image registration, contour propagation and dose mapping have become common, possibly essential tools for modern image-guided radiation therapy. Historically, these tools were largely developed at academic medical centers and used in a rather limited and well controlled fashion. Today these tools are available to the radiotherapy community at large, both as stand-alone applications and as integrated components of both treatment planning and treatment delivery systems. Unfortunately, the details of how these tools work and their limitations are not generally documented or described by the vendors that provide them. Even when a result “looks right”, determining whether unphysical deformations have occurred is crucial. Because of this, understanding how and when to use, and not use, these tools to support everyday clinical decisions is far from straightforward. The goal of this session will be to present both the theory (basic and advanced) and practical clinical use of deformable image registration, contour propagation and dose mapping. To the extent possible, the “secret sauce” that different vendors use to produce reasonable/acceptable results will be described. A detailed explanation of the possible sources of errors, and actual examples of these, will be presented. Knowing the underlying principles of the process and understanding the confounding factors will help the practicing medical physicist make better-informed decisions using the tools available. Learning Objectives: Understand the basic (101) and advanced (201) principles of deformable image registration, contour propagation and dose mapping. Understand the sources and impact of errors in registration and data mapping and the methods for evaluating the performance of these tools. Understand the clinical use and value of these tools, especially when used as a “black box”.

  10. MIND Demons for MR-to-CT Deformable Image Registration In Image-Guided Spine Surgery

    PubMed Central

    Reaungamornrat, S.; De Silva, T.; Uneri, A.; Wolinsky, J.-P.; Khanna, A. J.; Kleinszig, G.; Vogt, S.; Prince, J. L.; Siewerdsen, J. H.

    2016-01-01

    Purpose Localization of target anatomy and critical structures defined in preoperative MR images can be achieved by means of multi-modality deformable registration to intraoperative CT. We propose a symmetric diffeomorphic deformable registration algorithm incorporating a modality independent neighborhood descriptor (MIND) and a robust Huber metric for MR-to-CT registration. Method The method, called MIND Demons, solves for the deformation field between two images by optimizing an energy functional that incorporates both the forward and inverse deformations, smoothness on the velocity fields and the diffeomorphisms, a modality-insensitive similarity function suitable to multi-modality images, and constraints on geodesics in Lagrangian coordinates. Direct optimization (without relying on an exponential map of stationary velocity fields used in conventional diffeomorphic Demons) is carried out using a Gauss-Newton method for fast convergence. Registration performance and sensitivity to registration parameters were analyzed in simulation, in phantom experiments, and clinical studies emulating application in image-guided spine surgery, and results were compared to conventional mutual information (MI) free-form deformation (FFD), local MI (LMI) FFD, and normalized MI (NMI) Demons. Result The method yielded sub-voxel invertibility (0.006 mm) and nonsingular spatial Jacobians with capability to preserve local orientation and topology. It demonstrated improved registration accuracy in comparison to the reference methods, with mean target registration error (TRE) of 1.5 mm compared to 10.9, 2.3, and 4.6 mm for MI FFD, LMI FFD, and NMI Demons methods, respectively. Validation in clinical studies demonstrated realistic deformation with sub-voxel TRE in cases of cervical, thoracic, and lumbar spine. Conclusions A modality-independent deformable registration method has been developed to estimate a viscoelastic diffeomorphic map between preoperative MR and intraoperative CT. 

  11. MIND Demons for MR-to-CT deformable image registration in image-guided spine surgery

    NASA Astrophysics Data System (ADS)

    Reaungamornrat, S.; De Silva, T.; Uneri, A.; Wolinsky, J.-P.; Khanna, A. J.; Kleinszig, G.; Vogt, S.; Prince, J. L.; Siewerdsen, J. H.

    2016-03-01

Purpose: Localization of target anatomy and critical structures defined in preoperative MR images can be achieved by means of multi-modality deformable registration to intraoperative CT. We propose a symmetric diffeomorphic deformable registration algorithm incorporating a modality independent neighborhood descriptor (MIND) and a robust Huber metric for MR-to-CT registration. Method: The method, called MIND Demons, solves for the deformation field between two images by optimizing an energy functional that incorporates both the forward and inverse deformations, smoothness on the velocity fields and the diffeomorphisms, a modality-insensitive similarity function suitable to multi-modality images, and constraints on geodesics in Lagrangian coordinates. Direct optimization (without relying on an exponential map of stationary velocity fields used in conventional diffeomorphic Demons) is carried out using a Gauss-Newton method for fast convergence. Registration performance and sensitivity to registration parameters were analyzed in simulation, in phantom experiments, and clinical studies emulating application in image-guided spine surgery, and results were compared to conventional mutual information (MI) free-form deformation (FFD), local MI (LMI) FFD, and normalized MI (NMI) Demons. Result: The method yielded sub-voxel invertibility (0.006 mm) and nonsingular spatial Jacobians with capability to preserve local orientation and topology. It demonstrated improved registration accuracy in comparison to the reference methods, with mean target registration error (TRE) of 1.5 mm compared to 10.9, 2.3, and 4.6 mm for MI FFD, LMI FFD, and NMI Demons methods, respectively. Validation in clinical studies demonstrated realistic deformation with sub-voxel TRE in cases of cervical, thoracic, and lumbar spine. Conclusions: A modality-independent deformable registration method has been developed to estimate a viscoelastic diffeomorphic map between preoperative MR and intraoperative CT.

  12. A review of biomechanically informed breast image registration.

    PubMed

    Hipwell, John H; Vavourakis, Vasileios; Han, Lianghao; Mertzanidou, Thomy; Eiben, Björn; Hawkes, David J

    2016-01-21

    Breast radiology encompasses the full range of imaging modalities from routine imaging via x-ray mammography, magnetic resonance imaging and ultrasound (both two- and three-dimensional), to more recent technologies such as digital breast tomosynthesis, and dedicated breast imaging systems for positron emission mammography and ultrasound tomography. In addition new and experimental modalities, such as Photoacoustics, Near Infrared Spectroscopy and Electrical Impedance Tomography etc, are emerging. The breast is a highly deformable structure however, and this greatly complicates visual comparison of imaging modalities for the purposes of breast screening, cancer diagnosis (including image guided biopsy), tumour staging, treatment monitoring, surgical planning and simulation of the effects of surgery and wound healing etc. Due primarily to the challenges posed by these gross, non-rigid deformations, development of automated methods which enable registration, and hence fusion, of information within and across breast imaging modalities, and between the images and the physical space of the breast during interventions, remains an active research field which has yet to translate suitable methods into clinical practice. This review describes current research in the field of breast biomechanical modelling and identifies relevant publications where the resulting models have been incorporated into breast image registration and simulation algorithms. Despite these developments there remain a number of issues that limit clinical application of biomechanical modelling. These include the accuracy of constitutive modelling, implementation of representative boundary conditions, failure to meet clinically acceptable levels of computational cost, challenges associated with automating patient-specific model generation (i.e. robust image segmentation and mesh generation) and the complexity of applying biomechanical modelling methods in routine clinical practice. PMID:26733349

  13. A review of biomechanically informed breast image registration

    NASA Astrophysics Data System (ADS)

    Hipwell, John H.; Vavourakis, Vasileios; Han, Lianghao; Mertzanidou, Thomy; Eiben, Björn; Hawkes, David J.

    2016-01-01

    Breast radiology encompasses the full range of imaging modalities from routine imaging via x-ray mammography, magnetic resonance imaging and ultrasound (both two- and three-dimensional), to more recent technologies such as digital breast tomosynthesis, and dedicated breast imaging systems for positron emission mammography and ultrasound tomography. In addition new and experimental modalities, such as Photoacoustics, Near Infrared Spectroscopy and Electrical Impedance Tomography etc, are emerging. The breast is a highly deformable structure however, and this greatly complicates visual comparison of imaging modalities for the purposes of breast screening, cancer diagnosis (including image guided biopsy), tumour staging, treatment monitoring, surgical planning and simulation of the effects of surgery and wound healing etc. Due primarily to the challenges posed by these gross, non-rigid deformations, development of automated methods which enable registration, and hence fusion, of information within and across breast imaging modalities, and between the images and the physical space of the breast during interventions, remains an active research field which has yet to translate suitable methods into clinical practice. This review describes current research in the field of breast biomechanical modelling and identifies relevant publications where the resulting models have been incorporated into breast image registration and simulation algorithms. Despite these developments there remain a number of issues that limit clinical application of biomechanical modelling. These include the accuracy of constitutive modelling, implementation of representative boundary conditions, failure to meet clinically acceptable levels of computational cost, challenges associated with automating patient-specific model generation (i.e. robust image segmentation and mesh generation) and the complexity of applying biomechanical modelling methods in routine clinical practice.

  14. Multi-modal image registration: matching MRI with histology

    NASA Astrophysics Data System (ADS)

    Alic, Lejla; Haeck, Joost C.; Klein, Stefan; Bol, Karin; van Tiel, Sandra T.; Wielopolski, Piotr A.; Bijster, Magda; Niessen, Wiro J.; Bernsen, Monique; Veenland, Jifke F.; de Jong, Marion

    2010-03-01

Spatial correspondence between histology and multi-sequence MRI can provide information about the capabilities of non-invasive imaging to characterize cancerous tissue. However, shrinkage and deformation occurring during the excision of the tumor and the histological processing complicate the co-registration of MR images with histological sections. This work proposes a methodology to establish a detailed 3D relation between histology sections and in vivo MRI tumor data. The key features of the methodology are a very dense histological sampling (up to 100 histology slices per tumor), mutual information based non-rigid B-spline registration, the utilization of the whole 3D data sets, and the exploitation of an intermediate ex vivo MRI. In this proof of concept paper, the methodology was applied to one tumor. We found that, after registration, the visual alignment of tumor borders and internal structures was fairly accurate. Utilizing the intermediate ex vivo MRI, it was possible to account for changes caused by the excision of the tumor: we observed a tumor expansion of 20%. Also the effects of fixation, dehydration and histological sectioning could be determined: 26% shrinkage of the tumor was found. The annotation of viable tissue, performed in histology and transformed to the in vivo MRI, matched clearly with high intensity regions in MRI. With this methodology, histological annotation can be directly related to the corresponding in vivo MRI. This is a vital step for the evaluation of the feasibility of multi-spectral MRI to depict histological ground truth.

  15. On-line range images registration with GPGPU

    NASA Astrophysics Data System (ADS)

    Będkowski, J.; Naruniec, J.

    2013-03-01

    This paper concerns the implementation of algorithms for two important aspects of modern 3D data processing: data registration and segmentation. The solution proposed for the first topic is based on 3D space decomposition, while the latter is based on image processing and local neighbourhood search. Data processing is implemented using NVIDIA compute unified device architecture (NVIDIA CUDA) parallel computation. The result of the segmentation is a coloured map where different colours correspond to different objects, such as walls, floor and stairs. The research is related to the problem of collecting 3D data with an RGB-D camera mounted on a rotating head, to be used in mobile robot applications. The data registration algorithm is designed for on-line processing. The iterative closest point (ICP) approach is chosen as the registration method. Computations are based on a parallel fast nearest neighbour search. This procedure decomposes 3D space into cubic buckets and, therefore, the matching time is deterministic. The first data segmentation technique uses accelerometers integrated with the RGB-D sensor to obtain rotation compensation, and an image processing method to define prerequisites of the known categories. The second technique uses the adapted nearest neighbour search procedure to obtain normal vectors for each range point.

  16. Rapid registration of multimodal images using a reduced number of voxels

    NASA Astrophysics Data System (ADS)

    Huang, Xishi; Hill, Nicholas A.; Ren, Jing; Peters, Terry M.

    2006-03-01

    Rapid registration of multimodal cardiac images can improve image-guided cardiac surgery and cardiac disease diagnosis. While mutual information (MI) is arguably the most suitable registration technique, the method is too slow to converge for real-time cardiac image registration; moreover, correct registration may not coincide with a global or even local maximum of MI. These limitations become quite evident when registering three-dimensional (3D) ultrasound (US) images and dynamic 3D magnetic resonance (MR) images of the beating heart. To overcome these issues, we present a registration method that uses a reduced number of voxels while retaining adequate registration accuracy. Prior to registration we preprocess the images such that only the most representative anatomical features are depicted. By selecting samples from the preprocessed images, our method dramatically speeds up the registration process and also ensures correct registration. We validated this registration method by registering dynamic US and MR images of the beating heart of a volunteer. Experimental results on in vivo cardiac images demonstrate significant improvements in registration speed without compromising registration accuracy. A second validation study was performed registering US and computed tomography (CT) images of a rib cage phantom. Two similarity metrics, MI and normalized cross-correlation (NCC), were used to register the image sets. Experimental results on the rib cage phantom indicate that our method can achieve adequate registration accuracy within 10% of the computation time of conventional registration methods. We believe this method has the potential to facilitate intra-operative image fusion for minimally invasive cardio-thoracic surgical navigation.
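
    The MI metric evaluated on a reduced voxel sample can be sketched from the joint intensity histogram. This is a generic textbook formulation, not the authors' implementation; the bin count and function name are illustrative choices.

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """MI between two intensity samples (e.g. corresponding voxels of
    two images) computed from their joint histogram."""
    h, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = h / h.sum()                       # joint probability
    px = pxy.sum(axis=1, keepdims=True)     # marginal of image a
    py = pxy.sum(axis=0, keepdims=True)     # marginal of image b
    nz = pxy > 0                            # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))
```

    Sampling fewer voxels makes each evaluation of this metric cheaper, which is exactly where the speed-up in the paper comes from; the preprocessing step ensures the retained voxels still carry the anatomical structure that drives the optimum.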

  17. Fluid Registration of Diffusion Tensor Images Using Information Theory

    PubMed Central

    Chiang, Ming-Chang; Leow, Alex D.; Klunder, Andrea D.; Dutton, Rebecca A.; Barysheva, Marina; Rose, Stephen E.; McMahon, Katie L.; de Zubicaray, Greig I.; Toga, Arthur W.; Thompson, Paul M.

    2008-01-01

    We apply an information-theoretic cost metric, the symmetrized Kullback-Leibler (sKL) divergence, or J-divergence, to fluid registration of diffusion tensor images. The difference between diffusion tensors is quantified based on the sKL-divergence of their associated probability density functions (PDFs). Three-dimensional DTI data from 34 subjects were fluidly registered to an optimized target image. To allow large image deformations but preserve image topology, we regularized the flow with a large-deformation diffeomorphic mapping based on the kinematics of a Navier-Stokes fluid. A driving force was developed to minimize the J-divergence between the deforming source and target diffusion functions, while reorienting the flowing tensors to preserve fiber topography. In initial experiments, we showed that the sKL-divergence based on full diffusion PDFs is adaptable to higher-order diffusion models, such as high angular resolution diffusion imaging (HARDI). The sKL-divergence was sensitive to subtle differences between two diffusivity profiles, showing promise for nonlinear registration applications and multisubject statistical analysis of HARDI data. PMID:18390342
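
    For single-tensor DTI, each diffusion tensor defines a zero-mean Gaussian displacement PDF, and the J-divergence between two such PDFs has a closed form in which the log-determinant terms of the two KL directions cancel. The sketch below uses that standard Gaussian identity (covariances taken directly as the tensors, ignoring the diffusion-time scaling); it is an illustration of the cost metric, not the paper's fluid-registration code.

```python
import numpy as np

def j_divergence(D1, D2):
    """J-divergence KL(p||q) + KL(q||p) between zero-mean Gaussians
    with n x n SPD covariances D1, D2:
        J = 0.5 * (tr(D2^{-1} D1) + tr(D1^{-1} D2)) - n
    (the log-det terms cancel under symmetrization)."""
    n = D1.shape[0]
    return 0.5 * (np.trace(np.linalg.solve(D2, D1))
                  + np.trace(np.linalg.solve(D1, D2))) - n
```

    The divergence is zero iff the two tensors are equal, and grows with their mismatch, which is what makes it usable as a driving force for the registration.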

  18. Incorporating global information in feature-based multimodal image registration

    NASA Astrophysics Data System (ADS)

    Li, Yong; Stevenson, Robert

    2014-03-01

    A multimodal image registration framework based on searching for the best-matched keypoints and incorporating global information is proposed. It comprises two key elements: keypoint detection and an iterative process. Keypoints are detected from both the reference and test images. For each test keypoint, a number of reference keypoints are chosen as mapping candidates. A triplet of keypoint mappings determines an affine transformation, which is evaluated using a similarity metric between the reference image and the test image transformed by the determined transformation. An iterative process is conducted on triplets of keypoint mappings, keeping track of the best-matched reference keypoint. Random sample consensus and mutual information are applied to eliminate outlier keypoint mappings. The similarity metric is defined as the number of overlapping edge pixels over the entire images, allowing global information to be incorporated in the evaluation of triplets of mappings. The performance of the framework is investigated with keypoints extracted by the scale-invariant feature transform and the partial intensity invariant feature descriptor. Experimental results show that the proposed framework provides more accurate registration than existing methods.
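
    The statement that a triplet of keypoint mappings determines an affine transformation follows from counting: a 2D affine map has six parameters, and three point correspondences give six linear equations. A minimal sketch (function name and parameterization are illustrative):

```python
import numpy as np

def affine_from_triplet(src, dst):
    """Solve the 2D affine transform mapping three source keypoints
    onto three reference keypoints (6 equations, 6 unknowns)."""
    A = np.zeros((6, 6))
    b = np.zeros(6)
    for k, ((x, y), (u, v)) in enumerate(zip(src, dst)):
        A[2 * k]     = [x, y, 1, 0, 0, 0]   # u = a*x + b*y + tx
        A[2 * k + 1] = [0, 0, 0, x, y, 1]   # v = c*x + d*y + ty
        b[2 * k], b[2 * k + 1] = u, v
    p = np.linalg.solve(A, b)
    return p.reshape(2, 3)                  # [[a, b, tx], [c, d, ty]]
```

    The framework then scores each candidate transform with the edge-overlap metric, so a triplet contaminated by an outlier mapping yields a low score and is discarded.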

  19. Robust image registration using adaptive coherent point drift method

    NASA Astrophysics Data System (ADS)

    Yang, Lijuan; Tian, Zheng; Zhao, Wei; Wen, Jinhuan; Yan, Weidong

    2016-04-01

    The coherent point drift (CPD) method is a powerful registration tool under the framework of the Gaussian mixture model (GMM). However, only the global spatial structure of the point sets is considered, without other forms of additional attribute information. The equivalent simplification of the mixing parameters and the manual setting of the weight parameter in the GMM make the CPD method less robust to outliers and less flexible. An adaptive CPD method is proposed to automatically determine the mixing parameters by embedding the local attribute information of features into the construction of the GMM. In addition, the weight parameter is treated as an unknown parameter and automatically determined within the expectation-maximization algorithm. In image registration applications, a block-divided salient image disk extraction method is designed to detect sparse salient image features, and local self-similarity is used as attribute information to describe the local neighborhood structure of each feature. The experimental results on optical and remote sensing images show that the proposed method can significantly improve the matching performance.
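
    The role of the weight parameter can be seen in the standard CPD E-step, where w sets the mass of a uniform outlier component alongside the Gaussian mixture. The sketch below is the classical (non-adaptive) formulation with equal mixing weights, i.e. the baseline the paper improves on; names are illustrative.

```python
import numpy as np

def cpd_responsibilities(X, Y, sigma2, w):
    """E-step of classical CPD: posterior P(m | x_n) for GMM centroids
    Y (M x D), data X (N x D), isotropic variance sigma2, and uniform
    outlier weight w in [0, 1)."""
    N, D = X.shape
    M = Y.shape[0]
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)   # N x M distances
    G = np.exp(-d2 / (2.0 * sigma2))
    # constant from the uniform outlier term; larger w -> smaller posteriors
    c = (w / (1.0 - w)) * (M / N) * (2.0 * np.pi * sigma2) ** (D / 2.0)
    return G / (G.sum(axis=1, keepdims=True) + c)
```

    With w fixed by hand, a poor choice either over-trusts noisy points (w too small) or discards valid structure (w too large); treating w as an unknown estimated inside EM, as the paper proposes, removes that manual tuning.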

  20. The start-to-end chemometric image processing of 2D thin-layer videoscans.

    PubMed

    Komsta, Łukasz; Cieśla, Łukasz; Bogucka-Kocka, Anna; Józefczyk, Aleksandra; Kryszeń, Jakub; Waksmundzka-Hajnos, Monika

    2011-05-13

    The purpose of the research was to recommend a unified procedure for the image preprocessing of 2D thin-layer videoscans for further supervised or unsupervised chemometric analysis. All work was done with open-source software. The videoscans saved as JPG files underwent the following procedures: denoising using a median filter, baseline removal with the rollerball algorithm, and nonlinear warping using spline functions. The application of the proposed procedure enabled filtration of random differences between images (background intensity changes and spatial differences in spot location). After the preprocessing, only spot intensities have an influence on the performed PCA or other techniques. The proposed technique was successfully applied to recognize the differences between three Carex species from 2D videoscans of the extracts. The proposed solution may be of value for any chemometric task, both unsupervised and supervised.
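
    The first two preprocessing steps can be sketched with SciPy. This is an approximation, not the authors' pipeline: the rolling-ball background is imitated here by grey-scale opening with a flat structuring element, and it assumes bright spots on a darker baseline (invert the image first for dark spots). Function and parameter names are illustrative.

```python
import numpy as np
from scipy import ndimage

def preprocess_videoscan(img, median_size=3, ball_radius=15):
    """Denoise with a median filter, then subtract a background
    estimated by grey-scale opening (a rolling-ball approximation).
    Assumes spots are brighter than the baseline."""
    den = ndimage.median_filter(img, size=median_size)
    background = ndimage.grey_opening(den, size=(2 * ball_radius + 1,) * 2)
    return den - background
```

    The structuring element must be larger than the spots, so the opening flattens them into the background estimate and the subtraction leaves spot intensities on a near-zero baseline; the nonlinear spline warping step would then align spot positions across images.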

  1. On 2-D recursive LMS algorithms using ARMA prediction for ADPCM encoding of images.

    PubMed

    Chung, Y S; Kanefsky, M

    1992-01-01

    A two-dimensional (2D) linear predictor which has an autoregressive moving average (ARMA) representation as well as a bias term is adapted for adaptive differential pulse code modulation (ADPCM) encoding of nonnegative images. The predictor coefficients are updated using a 2D recursive LMS (TRLMS) algorithm. A constraint on the optimum values of the convergence factors, and an updating algorithm based on this constraint, are developed. The coefficient updating algorithm can be modified with a stability control factor. This realization can operate in real time and in the spatial domain. A comparison of three different types of predictors is made for real images. ARMA predictors show improved performance relative to an AR algorithm. PMID:18296174
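
    The LMS coefficient update at the heart of such a predictor is a one-line stochastic gradient step. The sketch below is a generic LMS step over a regressor vector holding AR samples, MA (past-error) samples, and a constant 1 for the bias term; it omits the paper's 2D recursion, convergence-factor constraint and stability control, and the names are illustrative.

```python
import numpy as np

def lms_arma_step(w, regressors, target, mu):
    """One LMS update: predict the pixel from the regressor vector
    [AR samples..., MA error samples..., 1], then nudge the weights
    along the instantaneous error gradient."""
    pred = float(w @ regressors)
    err = target - pred
    return w + mu * err * regressors, pred, err
```

    In ADPCM only the quantized error `err` is transmitted; both encoder and decoder run the same weight update, so the adaptive predictor never needs to be sent.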

  2. Preliminary work of real-time ultrasound imaging system for 2-D array transducer.

    PubMed

    Li, Xu; Yang, Jiali; Ding, Mingyue; Yuchi, Ming

    2015-01-01

    Ultrasound (US) has emerged as a non-invasive imaging modality that can provide anatomical structure information in real time. To enable the experimental analysis of new 2-D array ultrasound beamforming methods, a pre-beamformed parallel raw data acquisition system was developed for 3-D data capture with a 2-D array transducer. The transducer interconnection adopted the row-column addressing (RCA) scheme, where the columns and rows were activated sequentially for transmit and receive events, respectively. The DAQ system captured the raw data in parallel, and the digitized data were fed through a field programmable gate array (FPGA) to implement the pre-beamforming. Finally, 3-D images were reconstructed in real time through the devised platform. PMID:26405923

  3. Image Pretreatment Tools II: Normalization Techniques for 2-DE and 2-D DIGE.

    PubMed

    Robotti, Elisa; Marengo, Emilio; Quasso, Fabio

    2016-01-01

    Gel electrophoresis is usually applied to identify different protein expression profiles in biological samples (e.g., control vs. pathological, control vs. treated). Information about the effect to be investigated (a pathology, a drug, a ripening effect, etc.) is, however, generally confounded with experimental variability, which is quite large in 2-DE and may arise from small variations in the sample preparation, reagents, sample loading, electrophoretic conditions, staining and image acquisition. Obtaining valid quantitative estimates of protein abundances in each map, before the differential analysis, is therefore fundamental to providing robust candidate biomarkers. Normalization procedures are applied to reduce experimental noise and make the images comparable, improving the accuracy of differential analysis. However, they may deeply influence the final results, and in this respect they must be applied with care. Here, the most widespread normalization procedures are described for applications to both 2-DE and 2-D Difference Gel Electrophoresis (2-D DIGE) maps.

  4. Interpretation of Line-Integrated Signals from 2-D Phase Contrast Imaging on LHD

    NASA Astrophysics Data System (ADS)

    Michael, Clive; Tanaka, Kenji; Vyacheslavov, Leonid; Sanin, Andrei; Kawahata, Kazuo; Okajima, S.

    Two-dimensional (2D) phase contrast imaging (PCI) is an excellent method to measure core and edge turbulence with good spatial resolution (Δρ ∼ 0.1). General analytical consideration is given to the interpretation of the line-integrated signals, with specific application to images from 2D PCI. It is shown that Fourier components of fluctuations having any non-zero component propagating along the line of sight are not detected. The ramifications of this constraint are discussed, including consideration of the angle between the sight line and the flux-surface normal. In the experimental geometry, at the point where the flux surfaces are tangent to the sight line, it is shown that it may be possible to detect large poloidally extended (though with small radial wavelength) structures, such as geodesic acoustic modes (GAMs). The spatial localization technique of this diagnostic is illustrated with experimental data.

  5. Maximum-likelihood registration of range images with missing data.

    PubMed

    Sharp, Gregory C; Lee, Sang W; Wehe, David K

    2008-01-01

    Missing data are common in range images, due to geometric occlusions, limitations in the sensor field of view, poor reflectivity, depth discontinuities, and cast shadows. Using registration to align these data often fails, because points without valid correspondences can be incorrectly matched. This paper presents a maximum-likelihood method for registration of scenes with unmatched or missing data. Using ray casting, correspondences are formed between valid and missing points in each view. These correspondences are used to classify points by their visibility properties, including occlusions, field of view, and shadow regions. The likelihood of each point match is then determined using statistical properties of the sensor, such as noise and outlier distributions. Experiments demonstrate high rates of convergence on complex scenes with varying degrees of overlap. PMID:18000329

  6. Multislice CT brain image registration for perfusion studies

    NASA Astrophysics Data System (ADS)

    Lin, Zhong Min; Pohlman, Scott; Chandra, Shalabh

    2002-04-01

    During the last several years, perfusion CT has been developed as an effective technique for clinically evaluating cerebral hemodynamics. Perfusion CT techniques are capable of measuring functional parameters such as tissue perfusion, blood flow, blood volume, and mean transit time, and are commonly used to evaluate stroke patients. However, the quality of functional images of the brain frequently suffers from patient head motion. Because the time window for effective treatment of stroke patients is narrow, fast motion correction is required. The purpose of this paper is to present a fast and accurate registration technique for motion correction of multi-slice CT and to demonstrate the effects of the registration on the perfusion calculation.

  7. Gender and ethnicity specific generic elastic models from a single 2D image for novel 2D pose face synthesis and recognition.

    PubMed

    Heo, Jingu; Savvides, Marios

    2012-12-01

    In this paper, we propose a novel method for generating a realistic 3D human face from a single 2D face image for the purpose of synthesizing new 2D face images at arbitrary poses using gender and ethnicity specific models. We employ the Generic Elastic Model (GEM) approach, which elastically deforms a generic 3D depth-map based on the sparse observations of an input face image in order to estimate the depth of the face image. Particularly, we show that Gender and Ethnicity specific GEMs (GE-GEMs) can approximate the 3D shape of the input face image more accurately, achieving a better generalization of 3D face modeling and reconstruction compared to the original GEM approach. We qualitatively validate our method using publicly available databases by showing each reconstructed 3D shape generated from a single image and new synthesized poses of the same person at arbitrary angles. For quantitative comparisons, we compare our synthesized results against 3D scanned data and also perform face recognition using synthesized images generated from a single enrollment frontal image. We obtain promising results for handling pose and expression changes based on the proposed method. PMID:22201062

  8. Image quality of up-converted 2D video from frame-compatible 3D video

    NASA Astrophysics Data System (ADS)

    Speranza, Filippo; Tam, Wa James; Vázquez, Carlos; Renaud, Ronald; Blanchfield, Phil

    2011-03-01

    In the stereoscopic frame-compatible format, the separate high-definition left and high-definition right views are reduced in resolution and packed to fit within the same video frame as a conventional two-dimensional high-definition signal. This format has been suggested for 3DTV since it does not require additional transmission bandwidth and entails only small changes to the existing broadcasting infrastructure. In some instances, the frame-compatible format might be used to deliver both 2D and 3D services, e.g., for over-the-air television services. In those cases, the video quality of the 2D service is bound to decrease since the 2D signal will have to be generated by up-converting one of the two views. In this study, we investigated such loss by measuring the perceptual image quality of 1080i and 720p up-converted video as compared to that of full resolution original 2D video. The video was encoded with either a MPEG-2 or a H.264/AVC codec at different bit rates and presented for viewing with either no polarized glasses (2D viewing mode) or with polarized glasses (3D viewing mode). The results confirmed a loss of video quality of the 2D video up-converted material. The loss due to the sampling processes inherent to the frame-compatible format was rather small for both 1080i and 720p video formats; the loss became more substantial with encoding, particularly for MPEG-2 encoding. The 3D viewing mode provided higher quality ratings, possibly because the visibility of the degradations was reduced.

  9. Fully automatic detection of the vertebrae in 2D CT images

    NASA Astrophysics Data System (ADS)

    Graf, Franz; Kriegel, Hans-Peter; Schubert, Matthias; Strukelj, Michael; Cavallaro, Alexander

    2011-03-01

    Knowledge about the vertebrae is a valuable source of information for several annotation tasks. In recent years, the research community has spent considerable effort on detecting, segmenting and analyzing the vertebrae and the spine in various image modalities like CT or MR. Most of these methods rely on prior knowledge like the location of the vertebrae or other initial information like the manual detection of the spine. Furthermore, the majority of these methods require a complete volume scan. With the existence of use cases where only a single slice is available, there arises a demand for methods allowing the detection of the vertebrae in 2D images. In this paper, we propose a fully automatic and parameterless algorithm for detecting the vertebrae in 2D CT images. Our algorithm starts by detecting candidate locations, taking the density of bone-like structures into account. Afterwards, the candidate locations are extended into candidate regions, for which certain image features are extracted. The resulting feature vectors are compared to a sample set of previously annotated and processed images in order to determine the best candidate region. In a final step, the result region is readjusted until convergence to a locally optimal position. Our new method is validated on a real-world data set of more than 9,329 images from 34 patients, annotated by a clinician to provide a realistic ground truth.

  10. Extraction of Individual Filaments from 2D Confocal Microscopy Images of Flat Cells.

    PubMed

    Basu, Saurav; Chi Liu; Rohde, Gustavo Kunde

    2015-01-01

    A crucial step in understanding the architecture of cells and tissues from microscopy images, and consequently in explaining important biological events such as wound healing and cancer metastasis, is the complete extraction and enumeration of individual filaments from the cellular cytoskeletal network. Current efforts at quantitative estimation of filament length distribution, architecture and orientation from microscopy images are predominantly limited to visual estimation and indirect experimental inference. Here we demonstrate the application of a new algorithm to reliably estimate centerlines of biological filament bundles and to extract individual filaments from the centerlines by systematically disambiguating filament intersections. We utilize a filament enhancement step, followed by reverse-diffusion-based filament localization and an integer-programming-based set combination, to systematically and automatically extract accurate filaments from microscopy images. Experiments on simulated and real confocal microscope images of flat cells (2D images) show the efficacy of the new method.

  11. Night vision image fusion for target detection with improved 2D maximum entropy segmentation

    NASA Astrophysics Data System (ADS)

    Bai, Lian-fa; Liu, Ying-bin; Yue, Jiang; Zhang, Yi

    2013-08-01

    Infrared and LLL images are used for night vision target detection. Given the characteristics of night vision imaging, and the inability of traditional detection algorithms to segment and extract targets, we propose a method of infrared and LLL image fusion for target detection with improved 2D maximum entropy segmentation. First, the two-dimensional histogram was improved using the gray level and the weighted-area maximum gray level; weights were selected to compute the maximum-entropy segmentation of the infrared and LLL images from this histogram. Compared with traditional maximum entropy segmentation, the improved algorithm performed significantly better for target detection, providing both background suppression and target extraction. The validity of a multi-dimensional feature AND operation for feature-level fusion of the infrared and LLL images was then verified for target detection. Experimental results show that the detection algorithm performs well for single-target and multiple-target detection in complex backgrounds.
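
    The 2D criterion in the paper extends the classical 1D maximum-entropy (Kapur) threshold, which can be sketched briefly: choose the threshold that maximizes the summed entropies of the background and foreground histogram partitions. This is the textbook 1D baseline, not the paper's improved 2D version; names are illustrative.

```python
import numpy as np

def max_entropy_threshold(hist):
    """Kapur's maximum-entropy threshold on a grey-level histogram:
    pick t maximizing H(background) + H(foreground)."""
    p = hist.astype(float) / hist.sum()
    best_t, best_h = 0, -np.inf
    for t in range(1, len(p)):
        pb, pf = p[:t].sum(), p[t:].sum()
        if pb <= 0 or pf <= 0:
            continue
        qb, qf = p[:t] / pb, p[t:] / pf        # renormalized partitions
        h = -(qb[qb > 0] * np.log(qb[qb > 0])).sum() \
            - (qf[qf > 0] * np.log(qf[qf > 0])).sum()
        if h > best_h:
            best_t, best_h = t, h
    return best_t
```

    The 2D variant replaces the grey-level histogram with a joint histogram of grey level and a local-neighbourhood statistic, which is what gives it the noise robustness the 1D criterion lacks.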

  12. Image restoration using 2D autoregressive texture model and structure curve construction

    NASA Astrophysics Data System (ADS)

    Voronin, V. V.; Marchuk, V. I.; Petrosov, S. P.; Svirin, I.; Agaian, S.; Egiazarian, K.

    2015-05-01

    In this paper an image inpainting approach based on the construction of a composite curve for the restoration of the edges of objects in an image, using the concepts of parametric and geometric continuity, is presented. It is shown that this approach allows restoration of curved edges and provides more flexibility for curve design in the damaged image by interpolating the boundaries of objects with cubic splines. After the edge restoration stage, texture restoration using a 2D autoregressive texture model is carried out. The image intensity is locally modeled by a first-order spatial autoregressive model with support in a strongly causal prediction region on the plane. Model parameters are estimated by the Yule-Walker method. Several examples considered in this paper show the effectiveness of the proposed approach for large-object removal as well as recovery of small regions in several test images.
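
    The Yule-Walker estimation step can be illustrated in its 1D form: the AR coefficients solve a linear system built from sample autocovariances. The paper applies the 2D spatial analogue over a causal prediction region; the sketch below is the standard 1D version with illustrative names.

```python
import numpy as np

def yule_walker(x, order):
    """Estimate AR(p) coefficients from sample autocovariances by
    solving the Yule-Walker equations  R a = r."""
    x = np.asarray(x, float) - np.mean(x)
    n = len(x)
    # biased sample autocovariances r[0..p]
    r = np.array([np.dot(x[:n - k], x[k:]) / n for k in range(order + 1)])
    # Toeplitz autocovariance matrix R and right-hand side r[1:]
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    return np.linalg.solve(R, r[1:])
```

    Once fitted to texture around the damaged region, the AR model synthesizes plausible texture inside it by recursive prediction plus a matched noise term.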

  13. Alternative representations of an image via the 2D wavelet transform: application to character recognition

    NASA Astrophysics Data System (ADS)

    Antoine, Jean-Pierre; Vandergheynst, Pierre; Bouyoucef, Karim; Murenzi, Romain

    1995-06-01

    Both in 1D (signal analysis) and 2D (image processing), the wavelet transform (WT) has become by now a standard tool. Although the discrete version, based on multiresolution analysis, is probably better known, the continuous WT (CWT) plays a crucial role for the detection and analysis of particular features in a signal, and we will focus here on the latter. In 2D, however, one faces a practical problem. Indeed, the full parameter space of the wavelet transform of an image is 4D. It yields a representation of the image in position parameters (range and perception angle), as well as scale and anisotropy angle. The real challenge is to compute and visualize the full continuous wavelet transform in all four variables--obviously a demanding task. Thus, in order to obtain a manageable tool, some of the variables must be frozen. In other words, one must limit oneself to sections of the parameter space, usually 2D or 3D. For 2D sections, two variables are fixed and the transform is viewed as a function of the two remaining ones, and similarly for 3D sections. Among the six possible 2D sections, two play a privileged role. They yield respectively the position representation, which is the standard one, and the scale-angle representation, which has been proposed and studied systematically by two of us in a number of works. In this paper we will review these results and investigate the four remaining 2D representations. We will also make some comments on possible applications of 3D sections. The most spectacular property of the CWT is its ability at detecting discontinuities in a signal. In an image, this means in particular the sharp boundary between two regions of different luminosity, that is, a contour or an edge. Even more prominent in the transform are the corners of a given contour, for instance the contour of a letter. In a second part, we will exploit this property of the CWT and describe how one may design an algorithm for automatic character recognition (here we

  14. 2D/3D registration using only single-view fluoroscopy to guide cardiac ablation procedures: a feasibility study

    NASA Astrophysics Data System (ADS)

    Fallavollita, Pascal

    2010-02-01

    The CARTO XP is an electroanatomical cardiac mapping system that provides 3D color-coded maps of the electrical activity of the heart; however, it is expensive and can use only a single costly magnetic catheter per patient intervention. Aim: to develop an affordable fluoroscopic navigation system that could shorten the duration of RF ablation procedures and increase their efficacy. Methodology: a 4-step filtering technique was implemented to detect the tip electrode of an ablation catheter visible in single-view C-arm images and calculate its width. The width is directly proportional to the depth of the catheter. Results: in phantom experiments, when displacing a 7-French catheter at 1 cm intervals away from the X-ray source, the recovered depth error using a single image was 2.05 +/- 1.47 mm, whereas depth errors improved to 1.55 +/- 1.30 mm when using an 8-French catheter. In clinical experimentation, twenty posterior and left-lateral images of a catheter inside the left ventricle of a mongrel dog were acquired. The standard error of estimate for the recovered depth of the tip electrode of the mapping catheter was 13.1 mm and 10.1 mm for the posterior and lateral views, respectively. Conclusions: a filtering implementation using single-view C-arm images showed that it is possible to recover depth in a phantom study, and proved adequate in clinical experimentation based on isochronal map fusion results.
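
    The width-depth relation exploited here follows from cone-beam (pinhole) magnification: an object at distance z from the X-ray source is magnified by SDD/z at the detector. The sketch below states that geometry only; the paper's calibration and filtering pipeline is not reproduced, and the numbers used in the usage note are illustrative (a 7-French catheter is about 2.33 mm in diameter).

```python
def catheter_depth(apparent_width_mm, true_diameter_mm, source_to_detector_mm):
    """Pinhole/cone-beam magnification: an object at depth z from the
    source appears magnified by SDD / z on the detector, so
        z = SDD * true_diameter / apparent_width."""
    return source_to_detector_mm * true_diameter_mm / apparent_width_mm
```

    For example, with a 1000 mm source-to-detector distance, a 2.33 mm catheter appearing 4.66 mm wide on the detector sits 500 mm from the source; the accuracy of the recovered depth thus hinges entirely on how precisely the apparent width can be measured, which is what the 4-step filter is for.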

  15. 2D-CELL: image processing software for extraction and analysis of 2-dimensional cellular structures

    NASA Astrophysics Data System (ADS)

    Righetti, F.; Telley, H.; Leibling, Th. M.; Mocellin, A.

    1992-01-01

    2D-CELL is a software package for processing and analyzing photographic images of cellular structures in a largely interactive way. Starting from a binary digitized image, the programs extract the line network (skeleton) of the structure and determine the graph representation that best models it. Provision is made for manually correcting defects such as incorrect node positions or dangling bonds. Then a suitable algorithm retrieves polygonal contours which define individual cells — local boundary curvatures are neglected for simplicity. Using elementary analytical geometry relations, a range of metric and topological parameters describing the population are then computed, organized into statistical distributions and graphically displayed.

  16. Ridge-based retinal image registration algorithm involving OCT fundus images

    NASA Astrophysics Data System (ADS)

    Li, Ying; Gregori, Giovanni; Knighton, Robert W.; Lujan, Brandon J.; Rosenfeld, Philip J.; Lam, Byron L.

    2011-03-01

    This paper proposes an algorithm for retinal image registration involving OCT fundus images (OFIs). The first application of the algorithm is to register OFIs with color fundus photographs; such registration between multimodal retinal images can help correlate features across imaging modalities, which is important for both clinical and research purposes. The second application is to perform the montage of several OFIs, which allows us to construct 3D OCT images over a large field of view out of separate OCT datasets. We use blood vessel ridges as registration features. A brute-force search and an Iterative Closest Point (ICP) algorithm are employed for image pair registration. Global alignment to minimize the distance between matching pixel pairs is used to obtain the montage of OFIs. The quality of the OFIs is the main limiting factor for the registration algorithm. In the first experiment, the effect of manual OFI enhancement on registration was evaluated for the affine model on 11 image pairs from diseased eyes. The average root mean square error (RMSE) decreased from 58 μm to 40 μm. This indicates that the registration algorithm is robust to manual enhancement. In the second experiment, for the montage of OFIs, the algorithm was tested on 6 sets from healthy eyes and 6 sets from diseased eyes, each set having 8 partially overlapping SD-OCT images. Visual evaluation showed that the montage performance was acceptable for normal cases, but not good for abnormal cases due to low visibility of blood vessels. The average RMSE for a typical montage case from a healthy eye is 2.3 pixels (69 μm).

  17. 3D Prostate Segmentation of Ultrasound Images Combining Longitudinal Image Registration and Machine Learning

    PubMed Central

    Yang, Xiaofeng; Fei, Baowei

    2012-01-01

    We developed a three-dimensional (3D) segmentation method for transrectal ultrasound (TRUS) images, which is based on longitudinal image registration and machine learning. Using longitudinal images of each individual patient, we register previously acquired images to the new images of the same subject. Three orthogonal Gabor filter banks were used to extract texture features from each registered image. Patient-specific Gabor features from the registered images are used to train kernel support vector machines (KSVMs) and then to segment the newly acquired prostate image. The segmentation method was tested on TRUS data from five patients. The average surface distance between our segmentation and manual segmentation is 1.18 ± 0.31 mm, indicating that our automatic segmentation method based on longitudinal image registration is feasible for segmenting the prostate in TRUS images. PMID:24027622
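
    Gabor texture features of the kind used here are responses to a bank of oriented, band-pass kernels. A minimal 2D sketch (the paper uses three orthogonal banks over 3D volumes; kernel size, wavelengths and orientations below are illustrative choices):

```python
import numpy as np
from scipy.signal import fftconvolve

def gabor_kernel(size, wavelength, theta, sigma):
    """Real 2D Gabor kernel: a plane wave of the given wavelength and
    orientation under an isotropic Gaussian envelope."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    xr = x * np.cos(theta) + y * np.sin(theta)   # rotated coordinate
    env = np.exp(-(x**2 + y**2) / (2.0 * sigma**2))
    return env * np.cos(2.0 * np.pi * xr / wavelength)

def gabor_features(img, wavelengths, thetas, sigma=3.0):
    """Texture feature maps: convolve the image with the whole bank."""
    return [fftconvolve(img, gabor_kernel(15, lam, th, sigma), mode='same')
            for lam in wavelengths for th in thetas]
```

    Each voxel's feature vector (one entry per kernel) is then fed to the KSVM; making the features patient-specific, via the longitudinal registration, is what lets the classifier adapt to each subject's anatomy.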

  18. Evaluation of a robotic arm for echocardiography to X-ray image registration during cardiac catheterization procedures.

    PubMed

    Ma, Yingliang; Penney, Graeme P; Bos, Dennis; Frissen, Peter; de Fockert, George; King, Andy; Gao, Gang; Yao, Cheng; Totman, John; Ginks, Matthew; Rinaldi, C; Razavi, Reza; Rhode, Kawal S

    2009-01-01

    We present an initial evaluation of a robotic arm for positioning a 3D echo probe during cardiac catheterization procedures. By tracking the robotic arm, X-ray table and X-ray C-arm, we are able to register the 3D echo images with live 2D X-ray images. In addition, we can also use tracking data from the robotic arm combined with system calibrations to create extended field of view 3D echo images. Both these features can be used for roadmapping to guide cardiac catheterization procedures. We have carried out a validation experiment of our registration method using a cross-wire phantom. Results show our method to be accurate to 3.5 mm. We have successfully demonstrated the creation of the extended field of view data on 2 healthy volunteers and the registration of echo and X-ray data on 1 patient undergoing a pacing study. PMID:19964867

  19. Breast density measurement: 3D cone beam computed tomography (CBCT) images versus 2D digital mammograms

    NASA Astrophysics Data System (ADS)

    Han, Tao; Lai, Chao-Jen; Chen, Lingyun; Liu, Xinming; Shen, Youtao; Zhong, Yuncheng; Ge, Shuaiping; Yi, Ying; Wang, Tianpeng; Yang, Wei T.; Shaw, Chris C.

    2009-02-01

    Breast density has been recognized as one of the major risk factors for breast cancer. However, breast density is currently estimated using mammograms, which are intrinsically 2D in nature and cannot accurately represent the real breast anatomy. In this study, a novel technique for measuring breast density based on the segmentation of 3D cone beam CT (CBCT) images was developed and the results were compared to those obtained from 2D digital mammograms. Sixteen mastectomy breast specimens were imaged with a bench-top flat-panel based CBCT system. The reconstructed 3D CT images were corrected for cupping artifacts and then filtered to reduce the noise level, followed by threshold-based segmentation to separate the dense tissue from the adipose tissue. For each breast specimen, the volumes of the dense tissue structures and of the entire breast were computed and used to calculate the volumetric breast density. BI-RADS categories were derived from the measured breast densities and compared with those estimated from conventional digital mammograms. The results show that in 10 of 16 cases the BI-RADS categories derived from the CBCT images were lower than those derived from the mammograms by one category. Thus, breasts considered dense in mammographic examinations may not appear dense in CBCT images. This result indicates that the relation between breast cancer risk and true (volumetric) breast density needs to be further investigated.
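    The density computation described above reduces to a voxel count: threshold the corrected volume, then divide dense voxels by breast voxels. A minimal sketch follows; the category cut-offs are illustrative placeholders, not the ACR BI-RADS definition.

```python
import numpy as np

def volumetric_density(volume, breast_mask, threshold):
    """Fraction of breast voxels classified as dense (fibroglandular) tissue."""
    dense = (volume > threshold) & breast_mask
    return dense.sum() / breast_mask.sum()

def birads_category(density):
    """Map a volumetric density to a BI-RADS-like category (a/b/c/d).
    Cut-offs here are illustrative only."""
    for cat, cutoff in (("a", 0.25), ("b", 0.50), ("c", 0.75)):
        if density < cutoff:
            return cat
    return "d"
```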

  20. 2D image classification for 3D anatomy localization: employing deep convolutional neural networks

    NASA Astrophysics Data System (ADS)

    de Vos, Bob D.; Wolterink, Jelmer M.; de Jong, Pim A.; Viergever, Max A.; Išgum, Ivana

    2016-03-01

    Localization of anatomical regions of interest (ROIs) is a preprocessing step in many medical image analysis tasks. While trivial for humans, it is complex for automatic methods. Classic machine learning approaches face the challenge of hand-crafting features to describe differences between ROIs and background. Deep convolutional neural networks (CNNs) alleviate this by automatically finding hierarchical feature representations from raw images. We employ this trait to detect anatomical ROIs in 2D image slices in order to localize them in 3D. In 100 low-dose non-contrast enhanced non-ECG synchronized screening chest CT scans, a reference standard was defined by manually delineating rectangular bounding boxes around three anatomical ROIs -- heart, aortic arch, and descending aorta. Every anatomical ROI was automatically identified using a combination of three CNNs, each analyzing one orthogonal image plane. While single CNNs predicted the presence or absence of a specific ROI in the given plane, the combination of their results provided a 3D bounding box around it. Classification performance of each CNN, expressed in area under the receiver operating characteristic curve, was >=0.988. Additionally, the performance of ROI localization was evaluated. Median Dice scores for automatically determined bounding boxes around the heart, aortic arch, and descending aorta were 0.89, 0.70, and 0.85, respectively. The results demonstrate that accurate automatic 3D localization of anatomical structures by CNN-based 2D image classification is feasible.
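    The combination step above, turning per-slice presence decisions from three orthogonal slice directions into a 3D bounding box, can be sketched as below (the CNNs themselves are replaced by given per-slice scores; the 0.5 threshold is an assumption):

```python
import numpy as np

def bounding_box_from_slice_scores(scores_z, scores_y, scores_x, thr=0.5):
    """Intersect per-slice presence decisions from three orthogonal slice
    directions into one 3D bounding box: (start, stop) per axis."""
    box = []
    for scores in (scores_z, scores_y, scores_x):
        idx = np.flatnonzero(np.asarray(scores) >= thr)
        if idx.size == 0:
            return None  # ROI not detected along this axis
        box.append((int(idx[0]), int(idx[-1]) + 1))
    return box
```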

  1. Non-rigid target tracking in 2D ultrasound images using hierarchical grid interpolation

    NASA Astrophysics Data System (ADS)

    Royer, Lucas; Babel, Marie; Krupa, Alexandre

    2014-03-01

    In this paper, we present a new non-rigid target tracking method for 2D ultrasound (US) image sequences. Due to the poor quality of US images, tracking the motion of a tumor or cyst during needle insertion remains an open research issue. Our approach is based on a well-known video compression algorithm, allowing the method to run in real time, which is a necessary condition for many clinical applications. To that end, we employ a dedicated hierarchical grid interpolation (HGI) algorithm, which can represent a larger variety of deformations than other motion estimation algorithms such as Overlapped Block Motion Compensation (OBMC) or the Block Matching Algorithm (BMA). The sum of squared differences of image intensity is selected as the similarity criterion because it provides a good trade-off between computation time and motion estimation quality. Unlike other methods proposed in the literature, our approach can distinguish the rigid and non-rigid motions observed in ultrasound imaging. Furthermore, the technique requires no prior knowledge about the target and limits user interaction, which usually complicates the medical validation process. Finally, a technique for identifying the main phases of a periodic motion (e.g. breathing) is introduced. The new approach has been validated on 2D ultrasound images of real human tissues undergoing rigid and non-rigid deformations.
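    The SSD similarity criterion and the block-based motion search it drives can be sketched as follows. The hierarchical grid of HGI is simplified here to a single exhaustive block search; block size and search range are illustrative.

```python
import numpy as np

def ssd(a, b):
    """Sum of squared intensity differences; low values mean a good match."""
    d = np.asarray(a, float) - np.asarray(b, float)
    return float(np.sum(d * d))

def best_block_match(fixed, moving, top, left, size, search=4):
    """Exhaustive search for the translation minimizing SSD of one block."""
    block = fixed[top:top + size, left:left + size]
    best, best_ssd = (0, 0), np.inf
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            t, l = top + dy, left + dx
            if t < 0 or l < 0 or t + size > moving.shape[0] or l + size > moving.shape[1]:
                continue
            s = ssd(block, moving[t:t + size, l:l + size])
            if s < best_ssd:
                best_ssd, best = s, (dy, dx)
    return best, best_ssd
```

    On a synthetically shifted image the search recovers the shift exactly, which is the behavior a grid-based motion model interpolates between nodes.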

  2. Scalable High Performance Image Registration Framework by Unsupervised Deep Feature Representations Learning

    PubMed Central

    Wu, Guorong; Kim, Minjeong; Wang, Qian; Munsell, Brent C.

    2015-01-01

    Feature selection is a critical step in deformable image registration. In particular, selecting the most discriminative features that accurately and concisely describe complex morphological patterns in image patches improves correspondence detection, which in turn improves image registration accuracy. Furthermore, since more and more imaging modalities are being invented to better identify morphological changes in medical imaging data, the development of a deformable image registration method that scales well to new image modalities or new image applications with little to no human intervention would have a significant impact on the medical image analysis community. To address these concerns, a learning-based image registration framework is proposed that uses deep learning to discover compact and highly discriminative features from observed imaging data. Specifically, the proposed feature selection method uses a convolutional stacked auto-encoder to identify intrinsic deep feature representations in image patches. Since deep learning is an unsupervised learning method, no ground-truth label knowledge is required. This makes the proposed feature selection method more flexible to new imaging modalities, since feature representations can be learned directly from the observed imaging data in a very short amount of time. Using the LONI and ADNI imaging datasets, image registration performance was compared to two existing state-of-the-art deformable image registration methods that use handcrafted features. To demonstrate the scalability of the proposed image registration framework, image registration experiments were conducted on 7.0-tesla brain MR images. In all experiments, the new image registration framework consistently demonstrated more accurate registration results than the state of the art. PMID:26552069

  3. Scalable High-Performance Image Registration Framework by Unsupervised Deep Feature Representations Learning.

    PubMed

    Wu, Guorong; Kim, Minjeong; Wang, Qian; Munsell, Brent C; Shen, Dinggang

    2016-07-01

    Feature selection is a critical step in deformable image registration. In particular, selecting the most discriminative features that accurately and concisely describe complex morphological patterns in image patches improves correspondence detection, which in turn improves image registration accuracy. Furthermore, since more and more imaging modalities are being invented to better identify morphological changes in medical imaging data, the development of a deformable image registration method that scales well to new image modalities or new image applications with little to no human intervention would have a significant impact on the medical image analysis community. To address these concerns, a learning-based image registration framework is proposed that uses deep learning to discover compact and highly discriminative features upon observed imaging data. Specifically, the proposed feature selection method uses a convolutional stacked autoencoder to identify intrinsic deep feature representations in image patches. Since deep learning is an unsupervised learning method, no ground truth label knowledge is required. This makes the proposed feature selection method more flexible to new imaging modalities since feature representations can be directly learned from the observed imaging data in a very short amount of time. Using the LONI and ADNI imaging datasets, image registration performance was compared to two existing state-of-the-art deformable image registration methods that use handcrafted features. To demonstrate the scalability of the proposed image registration framework, image registration experiments were conducted on 7.0-T brain MR images. In all experiments, the results showed that the new image registration framework consistently demonstrated more accurate registration results when compared to the state of the art. PMID:26552069
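    The building block of the stacked auto-encoder described above is a single auto-encoder layer trained to reconstruct its input; a stacked model trains several such layers in sequence. The sketch below trains one tied-weight sigmoid layer on random "patches" with plain gradient descent. Layer sizes, learning rate and epoch count are illustrative, and the convolutional structure of the paper's model is omitted.

```python
import numpy as np

def train_autoencoder(patches, n_hidden=16, lr=0.3, epochs=100, seed=0):
    """One tied-weight auto-encoder layer. Returns encoder weights and the
    reconstruction loss per epoch; the hidden activations are the learned
    feature representation."""
    rng = np.random.default_rng(seed)
    X = np.asarray(patches, float)              # (n_samples, n_features) in [0, 1]
    n, d = X.shape
    W = rng.normal(0, 0.1, (d, n_hidden))
    b_h, b_o = np.zeros(n_hidden), np.zeros(d)
    sig = lambda z: 1.0 / (1.0 + np.exp(-z))
    losses = []
    for _ in range(epochs):
        H = sig(X @ W + b_h)                    # encode
        Xhat = sig(H @ W.T + b_o)               # decode with tied weights
        err = Xhat - X
        losses.append(float(np.mean(err ** 2)))
        d_o = err * Xhat * (1 - Xhat)           # backprop through decoder
        d_h = (d_o @ W) * H * (1 - H)           # backprop through encoder
        gW = (X.T @ d_h + d_o.T @ H) / n        # tied-weight gradient
        W -= lr * gW
        b_h -= lr * d_h.mean(0)
        b_o -= lr * d_o.mean(0)
    return W, losses
```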

  5. Three modality image registration of brain SPECT/CT and MR images for quantitative analysis of dopamine transporter imaging

    NASA Astrophysics Data System (ADS)

    Yamaguchi, Yuzuho; Takeda, Yuta; Hara, Takeshi; Zhou, Xiangrong; Matsusako, Masaki; Tanaka, Yuki; Hosoya, Kazuhiko; Nihei, Tsutomu; Katafuchi, Tetsuro; Fujita, Hiroshi

    2016-03-01

    Important features of Parkinson's disease (PD) are the degeneration and loss of dopamine neurons in the corpus striatum. 123I-FP-CIT can visualize the activity of these dopamine neurons. The activity ratio of background to corpus striatum is used for the diagnosis of PD and Dementia with Lewy Bodies (DLB). The specific activity can be observed in the corpus striatum on SPECT images, but the location and shape of the corpus striatum are often lost on SPECT images alone because of low uptake. In contrast, MR images can visualize the location of the corpus striatum. The purpose of this study was to realize a quantitative image analysis of SPECT images by using an image registration technique with brain MR images that can determine the region of the corpus striatum. In this study, SPECT and MR images were fused via the CT image acquired by SPECT/CT. Mutual information (MI) was used to register the CT and MR images. Six SPECT/CT and four MR scans of phantom materials were acquired with varying orientations. As a result of the image registrations, 16 of 24 combinations were registered within 1.3 mm. By applying the approach to 32 clinical SPECT/CT and MR cases, all of the cases were registered within 0.86 mm. In conclusion, our registration method shows potential for superimposing MR images on SPECT images.
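    The mutual-information similarity used for the CT-MR registration above can be computed from the joint intensity histogram of the two images. A minimal plug-in estimator follows (the bin count is an illustrative choice; the optimizer that would maximize MI over pose parameters is omitted):

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Mutual information between two images via their joint histogram.
    MI is maximal when the images are well aligned."""
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = hist / hist.sum()
    px = pxy.sum(axis=1, keepdims=True)        # marginal of image a
    py = pxy.sum(axis=0, keepdims=True)        # marginal of image b
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))
```

    An image is maximally informative about itself and nearly uninformative about an unrelated noise image, which is why MI peaks at correct alignment.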

  7. 2D dose distribution images of a hybrid low field MRI-γ detector

    NASA Astrophysics Data System (ADS)

    Abril, A.; Agulles-Pedrós, L.

    2016-07-01

    The proposed hybrid system is a combination of a low-field MRI and a dosimetric gel as a γ detector. The readout is based on the polymerization process induced in the gel by radiation. A gel dose map is obtained which represents the functional part of the hybrid image, alongside the anatomical MRI one. Both images should be taken while the patient, with a radiopharmaceutical administered, is located inside the MRI system with a gel detector matrix. A relevant aspect of this proposal is that dosimetric gel has never been used to acquire medical images. The results presented show the interaction of a 99mTc source with the dosimetric gel simulated in Geant4. The purpose was to obtain a planar 2D γ image. Different source configurations are studied to explore the ability of the gel as a radiation detector through the following parameters: resolution, shape definition and radiopharmaceutical concentration.

  8. Electron Microscopy: From 2D to 3D Images with Special Reference to Muscle.

    PubMed

    Franzini-Armstrong, Clara

    2015-01-01

    This is a brief and necessarily very sketchy presentation of the evolution in electron microscopy (EM) imaging that was driven by the necessity of extracting 3-D views from the essentially 2-D images produced by the electron beam. The lens design of the standard transmission electron microscope has not been greatly altered since its inception. However, technical advances in specimen preparation, image collection and analysis gradually induced an astounding progression over a period of about 50 years. From the early images that redefined tissues, cells and cell organelles at the sub-micron level, to the current nano-resolution reconstructions of organelles and proteins, the step is very large. The review is written by an investigator who has followed the field for many years, but often from the sidelines, and with great wonder. Her interest in muscle ultrastructure colors the writing. More specific detailed reviews are presented in this issue. PMID:26913146

  9. Image compression-encryption scheme based on hyper-chaotic system and 2D compressive sensing

    NASA Astrophysics Data System (ADS)

    Zhou, Nanrun; Pan, Shumin; Cheng, Shan; Zhou, Zhihong

    2016-08-01

    Most image encryption algorithms based on low-dimensional chaotic systems bear security risks and suffer from data expansion when nonlinear transformations are adopted directly. To overcome these weaknesses and reduce the possible transmission burden, an efficient image compression-encryption scheme based on a hyper-chaotic system and 2D compressive sensing is proposed. The original image is measured by measurement matrices in two directions to achieve compression and encryption simultaneously, and the resulting image is then re-encrypted by a cycle shift operation controlled by a hyper-chaotic system. The cycle shift operation changes the values of the pixels efficiently. The proposed cryptosystem decreases the volume of data to be transmitted and, as a nonlinear encryption system, simultaneously simplifies key distribution. Simulation results verify the validity and reliability of the proposed algorithm, with acceptable compression and security performance.
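    The cycle-shift re-encryption step above can be sketched as key-driven cyclic shifts of rows and columns. For brevity the paper's hyper-chaotic system is replaced here by a logistic map (a stand-in, not the paper's generator), and the key value and shift range are illustrative:

```python
import numpy as np

def chaotic_shifts(n, key=0.3571, r=3.99):
    """Shift amounts from a logistic map iterated from the secret key
    (a simple stand-in for the paper's hyper-chaotic system)."""
    x, shifts = key, []
    for _ in range(n):
        x = r * x * (1 - x)
        shifts.append(int(x * 1e6) % 251)       # spread the iterate out
    return shifts

def cycle_shift(img, key=0.3571, decrypt=False):
    """Encrypt by cyclically shifting each row, then each column;
    decrypt by undoing the shifts in reverse order."""
    out = np.array(img)
    rows, cols = out.shape
    r_sh = chaotic_shifts(rows, key)
    c_sh = chaotic_shifts(cols, key * 0.5)
    if decrypt:
        for j in range(cols):
            out[:, j] = np.roll(out[:, j], -c_sh[j])
        for i in range(rows):
            out[i] = np.roll(out[i], -r_sh[i])
    else:
        for i in range(rows):
            out[i] = np.roll(out[i], r_sh[i])
        for j in range(cols):
            out[:, j] = np.roll(out[:, j], c_sh[j])
    return out
```

    Because the shift sequences are regenerated deterministically from the key, decryption inverts encryption exactly.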

  10. Automatic parameter selection for feature-based multi-sensor image registration

    NASA Astrophysics Data System (ADS)

    DelMarco, Stephen; Tom, Victor; Webb, Helen; Chao, Alan

    2006-05-01

    Accurate image registration is critical for applications such as precision targeting, geo-location, change detection, surveillance, and remote sensing. However, the increasing volume of image data is exceeding the current capacity of human analysts to perform manual registration. This image data glut necessitates the development of automated approaches to image registration, including algorithm parameter value selection. Proper parameter value selection is crucial to the success of registration techniques. The appropriate algorithm parameters can be highly scene- and sensor-dependent. Therefore, robust algorithm parameter value selection approaches are a critical component of an end-to-end image registration algorithm. In previous work, we developed a general framework for multisensor image registration which includes feature-based registration approaches. In this work we examine the problem of automated parameter selection. We apply the automated parameter selection approach of Yitzhaky and Peli to select parameters for feature-based registration of multisensor image data. The approach consists of generating multiple feature-detected images by sweeping over parameter combinations and using these images to generate estimated ground truth. The feature-detected images are compared to the estimated ground truth images to generate ROC points associated with each parameter combination. We develop a strategy for selecting the optimal parameter set by choosing the parameter combination corresponding to the optimal ROC point. We present numerical results showing the effectiveness of the approach using registration of collected SAR data to reference EO data.
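    One common way to pick the "optimal ROC point" mentioned above is to choose the parameter combination whose (false-positive rate, true-positive rate) pair lies closest to the ideal corner (0, 1). This is a sketch of that selection rule under that assumption, not necessarily the exact criterion of the paper:

```python
import numpy as np

def select_parameters(roc_points):
    """Pick the parameter set whose ROC point lies closest to the ideal (0, 1).
    `roc_points` maps a parameter tuple -> (false_pos_rate, true_pos_rate)."""
    def dist(p):
        fpr, tpr = roc_points[p]
        return np.hypot(fpr - 0.0, tpr - 1.0)
    return min(roc_points, key=dist)
```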

  11. Distance-Dependent Multimodal Image Registration for Agriculture Tasks

    PubMed Central

    Berenstein, Ron; Hočevar, Marko; Godeša, Tone; Edan, Yael; Ben-Shahar, Ohad

    2015-01-01

    Image registration is the process of aligning two or more images of the same scene taken at different times, from different viewpoints, and/or by different sensors. This research focuses on developing a practical method for automatic image registration for agricultural systems that use multimodal sensory systems and operate in natural environments. While not limited to any particular modalities, here we focus on systems with visual and thermal sensory inputs. Our approach is based on pre-calibrating a distance-dependent transformation matrix (DDTM) between the sensors and representing it compactly by regressing its coefficients as functions of distance. The DDTM is measured by calculating a projective transformation matrix for varying distances between the sensors and possible targets. To do so, we designed a unique experimental setup including unique Artificial Control Points (ACPs) and their detection algorithms for the two sensors. We demonstrate the utility of our approach using different experiments and evaluation criteria. PMID:26308000
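    The compact DDTM representation described above, each coefficient of the projective matrix regressed as a function of distance, can be sketched with a per-coefficient polynomial fit. The polynomial degree is an illustrative assumption, not the paper's regression model:

```python
import numpy as np

def fit_ddtm(distances, matrices, degree=2):
    """Fit each coefficient of the 3x3 projective matrix as a polynomial
    in sensor-to-target distance (the compact DDTM representation)."""
    distances = np.asarray(distances, float)
    stack = np.stack([np.asarray(m, float).ravel() for m in matrices])  # (n, 9)
    return [np.polyfit(distances, stack[:, k], degree) for k in range(9)]

def ddtm_at(coeffs, distance):
    """Evaluate the distance-dependent transformation at a given distance."""
    return np.array([np.polyval(c, distance) for c in coeffs]).reshape(3, 3)
```

    Fitting on matrices calibrated at a few distances then lets the system interpolate the correct visual-to-thermal transform at any intermediate distance.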

  12. Registration and Fusion of Multiple Source Remotely Sensed Image Data

    NASA Technical Reports Server (NTRS)

    LeMoigne, Jacqueline

    2004-01-01

    Earth and Space Science often involve the comparison, fusion, and integration of multiple types of remotely sensed data at various temporal, radiometric, and spatial resolutions. Results of this integration may be utilized for global change analysis, global coverage of an area at multiple resolutions, map updating or validation of new instruments, as well as integration of data provided by multiple instruments carried on multiple platforms, e.g. in spacecraft constellations or fleets of planetary rovers. Our focus is on developing methods to perform fast, accurate and automatic image registration and fusion. General methods for automatic image registration are being reviewed and evaluated. Various choices for feature extraction, feature matching and similarity measurements are being compared, including wavelet-based algorithms, mutual information and statistically robust techniques. Our work also involves studies related to image fusion and investigates dimension reduction and co-kriging for application-dependent fusion. All methods are being tested using several multi-sensor datasets, acquired at EOS Core Sites, and including multiple sensors such as IKONOS, Landsat-7/ETM+, EO1/ALI and Hyperion, MODIS, and SeaWIFS instruments. Issues related to the coregistration of data from the same platform (i.e., AIRS and MODIS from Aqua) or from several platforms of the A-train (i.e., MLS, HIRDLS, OMI from Aura with AIRS and MODIS from Terra and Aqua) will also be considered.

  13. Hierarchical and symmetric infant image registration by robust longitudinal-example-guided correspondence detection

    PubMed Central

    Wu, Yao; Wu, Guorong; Wang, Li; Munsell, Brent C.; Wang, Qian; Lin, Weili; Feng, Qianjin; Chen, Wufan; Shen, Dinggang

    2015-01-01

    Purpose: To investigate anatomical differences across individual subjects, or longitudinal changes in early brain development, it is important to perform accurate image registration. However, due to fast brain development and dynamic tissue appearance changes, it is very difficult to align infant brain images acquired between birth and 1 year of age. Methods: To solve this challenging problem, a novel image registration method is proposed to align two infant brain images, regardless of age at acquisition. The main idea is to utilize the growth trajectories, or spatial-temporal correspondences, learned from a set of longitudinal training images to guide the registration of two different time-point images with different image appearances. Specifically, in the training stage, an intrinsic growth trajectory is first estimated for each training subject using the longitudinal images. To register two new infant images with a potentially large age gap, the corresponding image patches between each new image and its respective training images of similar age are identified. Finally, the registration between the two new images can be assisted by the growth trajectories from one time point to another that were established in the training stage. To further improve registration accuracy, the proposed method is combined with a hierarchical and symmetric registration framework that can iteratively add new key points in both images to steer the estimation of the deformation between the two infant brain images under registration. Results: To evaluate image registration accuracy, the proposed method was used to align 24 infant subjects at five different time points (2-week-old, 3-month-old, 6-month-old, 9-month-old, and 12-month-old). Compared to the state-of-the-art methods, the proposed method demonstrated superior registration performance. Conclusions: The proposed method addresses the difficulties in infant brain registration and produces better results

  14. [Research on non-rigid medical image registration algorithm based on SIFT feature extraction].

    PubMed

    Wang, Anna; Lu, Dan; Wang, Zhe; Fang, Zhizhen

    2010-08-01

    Addressing non-rigid registration of medical images, this paper presents a practical feature-point matching algorithm: an image registration algorithm based on the scale-invariant feature transform (SIFT). The algorithm exploits the invariance of image features to translation, rotation and affine transformation in scale space to extract the image feature points. A bidirectional matching algorithm is chosen to establish the matching relations between the images, improving the accuracy of image registration. On this basis, an affine transform is chosen to complete the non-rigid registration, and a normalized mutual information measure and PSO optimization are chosen to optimize the registration process. The experimental results show that the method achieves better registration results than the method based on mutual information alone.
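    Bidirectional matching as described above is commonly implemented as a mutual nearest-neighbour test on descriptor distances: a pair is kept only when each point is the other's nearest neighbour. A minimal sketch over SIFT-like descriptor arrays (the descriptors here are placeholders, not SIFT output):

```python
import numpy as np

def mutual_matches(desc_a, desc_b):
    """Bidirectional matching: keep pairs (i, j) where j is i's nearest
    neighbour in B and i is j's nearest neighbour in A."""
    d = np.linalg.norm(desc_a[:, None, :] - desc_b[None, :, :], axis=2)
    a2b = d.argmin(axis=1)   # nearest neighbour of each A-descriptor in B
    b2a = d.argmin(axis=0)   # nearest neighbour of each B-descriptor in A
    return [(i, int(j)) for i, j in enumerate(a2b) if b2a[j] == i]
```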

  15. Imaging Meso-Scale Structures in TEXTOR with 2D-ECE

    NASA Astrophysics Data System (ADS)

    Classen, I. G. J.; Jaspers, R. J. E.; Park, H. K.; Spakman, G. W.; van der Pol, M. J.; Domier, C. W.; Donne, A. J. H.; Luhmann, N. C., Jr.; Westerhof, E.; Jakubowski, M. W.; TEXTOR Team

    The detection and control of instabilities in a tokamak is one of the exciting challenges in fusion research on the way to a reactor. Thanks to the combination of an innovative 2D temperature imaging technique (ECEI), a versatile ECRH/ECCD system and the unique possibility of externally inducing tearing modes in the plasma, TEXTOR is able to make pioneering contributions in this field. This paper focuses on two meso-scale phenomena in tokamaks: m = 2 tearing modes and magnetic structures in the stochastic boundary. In these cases the 2D-ECEI diagnostic can resolve features not attainable before. In addition, the possibility of using the diagnostic for fluctuation measurements is addressed.

  16. 2D Imaging in a Lightweight Portable MRI Scanner without Gradient Coils

    PubMed Central

    Cooley, Clarissa Zimmerman; Stockmann, Jason P.; Armstrong, Brandon D.; Sarracanie, Mathieu; Lev, Michael H.; Rosen, Matthew S.; Wald, Lawrence L.

    2014-01-01

    Purpose As the premiere modality for brain imaging, MRI could find wider applicability if lightweight, portable systems were available for siting in unconventional locations such as intensive care units, physician offices, surgical suites, ambulances, emergency rooms, sports facilities, or rural healthcare sites. Methods We construct and validate a truly portable (<100 kg) and silent proof-of-concept MRI scanner which replaces conventional gradient encoding with a rotating lightweight cryogen-free, low-field magnet. When rotated about the object, the inhomogeneous field pattern is used as a rotating Spatial Encoding Magnetic field (rSEM) to create generalized projections which encode the iteratively reconstructed 2D image. Multiple receive channels are used to disambiguate the non-bijective encoding field. Results The system is validated with experimental images of 2D test phantoms. As with other non-linear field encoding schemes, the spatial resolution is position dependent, with blurring in the center, but is shown to be likely sufficient for many medical applications. Conclusion The presented MRI scanner demonstrates the potential for portability by simultaneously relaxing the magnet homogeneity criteria and eliminating the gradient coil. This new architecture and encoding scheme shows convincing proof-of-concept images that are expected to be further improved with refinement of the calibration and methodology. PMID:24668520
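    In encoding schemes like the one above, each magnet rotation and receive channel contributes linear measurements of the image, and the image is recovered by an iterative linear solver. The sketch below uses a random stand-in encoding matrix and a classic row-action solver (Kaczmarz); it illustrates only the "iteratively reconstructed from generalized projections" idea, not the paper's actual reconstruction.

```python
import numpy as np

def kaczmarz(A, y, n_sweeps=200):
    """Cyclic Kaczmarz iteration for the linear system A x = y: project the
    estimate onto each measurement hyperplane in turn."""
    x = np.zeros(A.shape[1])
    for _ in range(n_sweeps):
        for a_i, y_i in zip(A, y):
            x += (y_i - a_i @ x) / (a_i @ a_i) * a_i
    return x

# Toy encoding: rows of A play the role of rSEM measurements accumulated
# over rotations and receive channels (values are illustrative only).
rng = np.random.default_rng(0)
n_pix = 16
A = rng.normal(size=(48, n_pix))   # stand-in encoding matrix
x_true = rng.random(n_pix)         # "image" to recover
y = A @ x_true                     # noise-free measurements
x_rec = kaczmarz(A, y)
```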

  17. Augmented depth perception visualization in 2D/3D image fusion.

    PubMed

    Wang, Jian; Kreiser, Matthias; Wang, Lejing; Navab, Nassir; Fallavollita, Pascal

    2014-12-01

    2D/3D image fusion applications are widely used in endovascular interventions. Complaints from interventionists about existing state-of-the-art visualization software usually relate to the strong compromise between 2D and 3D visibility or the lack of depth perception. In this paper, we investigate several concepts for improving the image fusion visualization currently found in the operating room. First, a contour-enhanced visualization is used to circumvent hidden information in the X-ray image. Second, an occlusion and depth color-coding scheme is considered to improve depth perception. To validate our visualization technique, both phantom and clinical data are considered. An evaluation is performed in the form of a questionnaire that included 24 participants: ten clinicians and fourteen non-clinicians. Results indicate that the occlusion correction method provides 100% correctness when determining the true position of an aneurysm in X-ray. Further, integrating an RGB or RB color-depth encoding in the image fusion improves both perception and intuitiveness.

  18. Image registration and averaging of low laser power two-photon fluorescence images of mouse retina.

    PubMed

    Alexander, Nathan S; Palczewska, Grazyna; Stremplewski, Patrycjusz; Wojtkowski, Maciej; Kern, Timothy S; Palczewski, Krzysztof

    2016-07-01

    Two-photon fluorescence microscopy (TPM) is now being used routinely to image live cells for extended periods deep within tissues, including the retina and other structures within the eye. However, very low laser power is a requirement for obtaining TPM images of the retina safely. Unfortunately, a reduction in laser power also reduces the signal-to-noise ratio of the collected images, making it difficult to visualize structural details. Here, image registration and averaging methods applied to TPM images of the eye in living animals (without the need for auxiliary hardware) demonstrate the structural information obtained with laser power down to 1 mW. Image registration provided between 1.4% and 13.0% improvement in image quality compared to averaging images without registration when using a high-fluorescence template, and between 0.2% and 12.0% when employing the average of collected images as the template. A diminishing return on image quality as more images were used to obtain the averaged image is also shown. This work provides a foundation for obtaining informative TPM images with laser powers of 1 mW, compared to previous levels for imaging mice ranging between 6.3 mW [Palczewska G., Nat. Med. 20, 785 (2014); Sharma R., Biomed. Opt. Express 4, 1285 (2013)].
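    The register-then-average idea above can be sketched with a simple FFT cross-correlation that finds the integer shift aligning each frame to a template before averaging. This assumes pure cyclic translations, a simplification of the motion in live-animal imaging:

```python
import numpy as np

def register_shift(template, image):
    """Integer-pixel shift aligning `image` to `template`, found as the
    argmax of their circular cross-correlation (computed via FFT)."""
    f = np.fft.fft2(template) * np.conj(np.fft.fft2(image))
    corr = np.fft.ifft2(f).real
    dy, dx = np.unravel_index(corr.argmax(), corr.shape)
    return int(dy), int(dx)   # apply with np.roll to align `image`

def register_and_average(template, images):
    """Align each noisy frame to the template, then average to raise SNR."""
    aligned = [np.roll(im, register_shift(template, im), axis=(0, 1))
               for im in images]
    return np.mean(aligned, axis=0)
```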

  20. Image registration and averaging of low laser power two-photon fluorescence images of mouse retina

    PubMed Central

    Alexander, Nathan S.; Palczewska, Grazyna; Stremplewski, Patrycjusz; Wojtkowski, Maciej; Kern, Timothy S.; Palczewski, Krzysztof

    2016-01-01

    Two-photon fluorescence microscopy (TPM) is now being used routinely to image live cells for extended periods deep within tissues, including the retina and other structures within the eye. However, very low laser power is a requirement to obtain TPM images of the retina safely. Unfortunately, a reduction in laser power also reduces the signal-to-noise ratio of collected images, making it difficult to visualize structural details. Here, image registration and averaging methods applied to TPM images of the eye in living animals (without the need for auxiliary hardware) demonstrate the structural information obtained with laser power down to 1 mW. Image registration provided between 1.4% and 13.0% improvement in image quality compared to averaging images without registration when using a high-fluorescence template, and between 0.2% and 12.0% when employing the average of collected images as the template. Also, a diminishing return on image quality when more images were used to obtain the averaged image is shown. This work provides a foundation for obtaining informative TPM images with laser powers of 1 mW, compared to previous levels for imaging mice ranging between 6.3 mW [Palczewska G., Nat Med. 20, 785 (2014), PMID:24952647; Sharma R., Biomed. Opt. Express 4, 1285 (2013), PMID:24009992]. PMID:27446697
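    The register-then-average pipeline described above can be sketched in a few lines. This is a minimal illustration only, assuming integer-pixel shifts and cyclic boundaries (the paper's templates and metrics are more elaborate); the `register_and_average` name is hypothetical:

```python
import numpy as np

def register_and_average(frames, template):
    """Align each noisy frame to `template` by integer-pixel FFT
    cross-correlation, then average the aligned stack."""
    acc = np.zeros_like(template, dtype=float)
    for f in frames:
        # The cross-correlation peak location gives the aligning shift.
        xc = np.fft.ifft2(np.fft.fft2(template) * np.conj(np.fft.fft2(f))).real
        dy, dx = np.unravel_index(np.argmax(xc), xc.shape)
        acc += np.roll(f, (dy, dx), axis=(0, 1))
    return acc / len(frames)
```

    Averaging N aligned frames improves the signal-to-noise ratio roughly as sqrt(N), which is consistent with the diminishing returns the abstract reports.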

  1. 3D PET image reconstruction including both motion correction and registration directly into an MR or stereotaxic spatial atlas

    NASA Astrophysics Data System (ADS)

    Gravel, Paul; Verhaeghe, Jeroen; Reader, Andrew J.

    2013-01-01

    This work explores the feasibility and impact of including both the motion correction and the image registration transformation parameters from positron emission tomography (PET) image space to magnetic resonance (MR), or stereotaxic, image space within the system matrix of PET image reconstruction. This approach is motivated by the fields of neuroscience and psychiatry, where PET is used to investigate differences in activation patterns between different groups of participants, requiring all images to be registered to a common spatial atlas. Currently, image registration is performed after image reconstruction which introduces interpolation effects into the final image. Furthermore, motion correction (also requiring registration) introduces a further level of interpolation, and the overall result of these operations can lead to resolution degradation and possibly artifacts. It is important to note that performing such operations on a post-reconstruction basis means, strictly speaking, that the final images are not ones which maximize the desired objective function (e.g. maximum likelihood (ML), or maximum a posteriori reconstruction (MAP)). To correctly seek parameter estimates in the desired spatial atlas which are in accordance with the chosen reconstruction objective function, it is necessary to include the transformation parameters for both motion correction and registration within the system modeling stage of image reconstruction. Such an approach not only respects the statistically chosen objective function (e.g. ML or MAP), but furthermore should serve to reduce the interpolation effects. To evaluate the proposed method, this work investigates registration (including motion correction) using 2D and 3D simulations based on the high resolution research tomograph (HRRT) PET scanner geometry, with and without resolution modeling, using the ML expectation maximization (MLEM) reconstruction algorithm. The quality of reconstruction was assessed using bias
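    The objective-function argument above can be made concrete with a toy MLEM loop. This is a sketch under stated assumptions, not the authors' implementation: folding a motion-correction or registration operator T into the system model simply replaces A with A @ T, so the iterates live directly in the atlas space; `mlem` is an illustrative name:

```python
import numpy as np

def mlem(A, y, n_iter=200):
    """Plain MLEM update: x <- x * [A^T (y / (A x))] / (A^T 1).
    With a registration operator folded in (A' = A @ T), the same
    loop maximizes the likelihood directly in the target space."""
    x = np.ones(A.shape[1])
    sens = A.sum(axis=0)          # A^T 1, the sensitivity image
    for _ in range(n_iter):
        x *= (A.T @ (y / (A @ x))) / sens
    return x
```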

  2. Designing of sparse 2D arrays for Lamb wave imaging using coarray concept

    NASA Astrophysics Data System (ADS)

    Ambroziński, Łukasz; Stepinski, Tadeusz; Uhl, Tadeusz

    2015-03-01

    2D ultrasonic arrays have considerable application potential in Lamb wave based SHM systems, since they enable unequivocal damage imaging and, in some cases, even wave-mode selection. Recently, it has been shown that 2D arrays can be used in SHM applications in a synthetic focusing (SF) mode, which is much more effective than the classical phased-array mode commonly used in NDT. The SF mode assumes single-element excitation of subsequent transmitters and off-line processing of the acquired data. In the simplest implementation of the technique, only single multiplexed input and output channels are required, which results in significant hardware simplification. Application of the SF mode to 2D arrays creates additional degrees of freedom in the design of the array topology, which complicates the array design process; however, it enables sparse array designs with performance similar to that of fully populated dense arrays. In this paper we present the coarray concept to facilitate the synthesis of an array's aperture used in the multistatic synthetic focusing approach in Lamb-wave-based imaging systems. In coherent imaging, performed in the transmit/receive mode, the sum coarray is a morphological convolution of the transmit and receive sub-arrays. It can be calculated as the set of sums of the individual sub-arrays' element locations. The coarray framework will be presented here using the example of a star-shaped array. The approach will be discussed in terms of the beampatterns of the resulting imaging systems. Both simulated and experimental results will be included.
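    The sum-coarray computation described above (the set of sums of transmit- and receive-element positions) is straightforward to sketch; the element coordinates and the `sum_coarray` name below are illustrative:

```python
def sum_coarray(tx, rx):
    """Sum coarray of a transmit aperture `tx` and a receive aperture
    `rx`: the set of pairwise element-position sums (the support of
    the morphological convolution of the two apertures)."""
    return sorted({(px + qx, py + qy) for (px, py) in tx for (qx, qy) in rx})

# A 3-element horizontal transmit line and a 3-element vertical receive
# line (a Mills-cross-like layout) fill a 3x3 coarray using only
# 6 physical elements.
tx = [(i, 0) for i in range(3)]
rx = [(0, j) for j in range(3)]
```

    This is why sparse transmit/receive sub-arrays can match the imaging performance of a dense array: only the coarray, not the physical aperture, needs to be fully populated.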

  3. Multi-atlas segmentation with particle-based group-wise image registration.

    PubMed

    Lee, Joohwi; Lyu, Ilwoo; Styner, Martin

    2014-03-21

    We propose a novel multi-atlas segmentation method that employs a group-wise image registration method for brain segmentation on rodent magnetic resonance (MR) images. The core element of the proposed segmentation is the use of a particle-guided image registration method that extends the concept of particle correspondence into the volumetric image domain. The registration method performs a group-wise image registration that simultaneously registers a set of images toward the space defined by the average of particles. The particle-guided image registration method is robust to the low signal-to-noise-ratio images as well as the differing sizes and shapes observed in the developing rodent brain. Also, the use of an implicit common reference frame can prevent potential bias induced by the use of a single template in the segmentation process. We show that the particle-guided image registration method can be naturally extended to a novel multi-atlas segmentation method and improved to explicitly use the provided template labels as an additional constraint. In the experiments, we show that our segmentation algorithm provides better accuracy through multi-atlas label fusion and greater stability than pair-wise image registration. A comparison with a previous group-wise registration method is provided as well.

  4. Adaptive optofluidic lens(es) for switchable 2D and 3D imaging

    NASA Astrophysics Data System (ADS)

    Huang, Hanyang; Wei, Kang; Zhao, Yi

    2016-03-01

    The stereoscopic image is often captured using dual cameras arranged side-by-side together with optical path switching systems such as two separate solid lenses or biprisms/mirrors. Miniaturizing current stereoscopic devices down to several millimeters comes at a cost: the limited light entry worsens the final image resolution and brightness. It is known that optofluidics offers good re-configurability for imaging systems. Leveraging this technique, we report a reconfigurable optofluidic system whose optical layout can be swapped between a singlet lens 10 mm in diameter and a pair of binocular lenses, each 3 mm in diameter, for switchable two-dimensional (2D) and three-dimensional (3D) imaging. The singlet and the binoculars share the same optical path and the same imaging sensor. The singlet acquires a 2D image with better resolution and brightness, while the binoculars capture stereoscopic image pairs for 3D vision and depth perception. The focusing power tunability of the singlet and the binoculars enables image acquisition at varied object planes by adjusting the hydrostatic pressure across the lens membrane. The vari-focal singlet and binoculars thus work interchangeably and complementarily. The device is expected to have applications in robotic vision, stereoscopy, laparoendoscopy and miniaturized zoom lens systems.

  5. Image quality degradation and retrieval errors introduced by registration and interpolation of multispectral digital images

    SciTech Connect

    Henderson, B.G.; Borel, C.C.; Theiler, J.P.; Smith, B.W.

    1996-04-01

    Full utilization of multispectral data acquired by whiskbroom and pushbroom imagers requires that the individual channels be registered accurately. Poor registration introduces errors which can be significant, especially in high contrast areas such as boundaries between regions. We simulate the acquisition of multispectral imagery in order to estimate the errors that are introduced by co-registration of different channels and interpolation within the images. We compute the Modulation Transfer Function (MTF) and image quality degradation brought about by fractional pixel shifting and calculate errors in retrieved quantities (surface temperature and water vapor) that occur as a result of interpolation. We also present a method which might be used to estimate sensor platform motion for accurate registration of images acquired by a pushbroom scanner.
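    The MTF degradation caused by fractional pixel shifting can be illustrated for the simplest resampler, linear interpolation: a shift by fraction d acts as the filter (1-d) + d·e^(-iω), whose magnitude falls to zero at Nyquist when d = 0.5. This is a sketch of that closed form only; the paper's sensor simulation is far more detailed, and `interp_mtf` is an illustrative name:

```python
import numpy as np

def interp_mtf(d, freqs):
    """|H(f)| of a fractional-pixel shift by d in [0, 1] implemented
    with linear interpolation, y[n] = (1-d)*x[n] + d*x[n-1].
    `freqs` is in cycles/pixel (Nyquist = 0.5)."""
    w = 2 * np.pi * np.asarray(freqs, dtype=float)
    return np.abs((1 - d) + d * np.exp(-1j * w))
```

    A half-pixel shift is the worst case: the response at Nyquist is nulled entirely, while integer shifts (d = 0 or 1) leave the MTF untouched, which matches the intuition that registration errors hurt most in high-contrast, high-frequency regions.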

  6. Open-source image registration for MRI–TRUS fusion-guided prostate interventions

    PubMed Central

    Khallaghi, Siavash; Sánchez, C. Antonio; Lasso, Andras; Fels, Sidney; Tuncali, Kemal; Sugar, Emily Neubauer; Kapur, Tina; Zhang, Chenxi; Wells, William; Nguyen, Paul L.; Abolmaesumi, Purang; Tempany, Clare

    2015-01-01

    Purpose We propose two software tools for non-rigid registration of MRI and transrectal ultrasound (TRUS) images of the prostate. Our ultimate goal is to develop an open-source solution to support MRI–TRUS fusion image guidance of prostate interventions, such as targeted biopsy for prostate cancer detection and focal therapy. It is widely hypothesized that image registration is an essential component in such systems. Methods The two non-rigid registration methods are: (1) a deformable registration of the prostate segmentation distance maps with B-spline regularization and (2) a finite element-based deformable registration of the segmentation surfaces in the presence of partial data. We evaluate the methods retrospectively using clinical patient image data collected during standard clinical procedures. Computation time and Target Registration Error (TRE) calculated at the expert-identified anatomical landmarks were used as quantitative measures for the evaluation. Results The presented image registration tools were capable of completing deformable registration computation within 5 min. Average TRE was approximately 3 mm for both methods, which is comparable with the slice thickness in our MRI data. Both tools are available under nonrestrictive open-source license. Conclusions We release open-source tools that may be used for registration during MRI–TRUS-guided prostate interventions. Our tools implement novel registration approaches and produce acceptable registration results. We believe these tools will lower the barriers in development and deployment of interventional research solutions and facilitate comparison with similar tools. PMID:25847666
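    Target Registration Error as used above is simply the mean Euclidean distance between corresponding expert-identified landmarks after applying the estimated transform; a minimal sketch with a hypothetical `target_registration_error` helper:

```python
import numpy as np

def target_registration_error(moving_pts, fixed_pts, transform):
    """Mean Euclidean distance between transformed moving-image
    landmarks and their fixed-image counterparts."""
    mapped = np.array([transform(p) for p in np.asarray(moving_pts, float)])
    return np.linalg.norm(mapped - np.asarray(fixed_pts, float), axis=1).mean()
```

    Comparing TRE against the MRI slice thickness, as the authors do, is a useful sanity check: errors below the voxel spacing are at the limit of what the landmarks themselves can resolve.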

  7. Evaluation of five non-rigid image registration algorithms using the NIREP framework

    NASA Astrophysics Data System (ADS)

    Wei, Ying; Christensen, Gary E.; Song, Joo Hyun; Rudrauf, David; Bruss, Joel; Kuhl, Jon G.; Grabowski, Thomas J.

    2010-03-01

    Evaluating non-rigid image registration algorithm performance is a difficult problem since there is rarely a "gold standard" (i.e., known) correspondence between two images. This paper reports the analysis and comparison of five non-rigid image registration algorithms using the Non-Rigid Image Registration Evaluation Project (NIREP) (www.nirep.org) framework. The NIREP framework evaluates registration performance using centralized databases of well-characterized images and standard evaluation statistics (methods) which are implemented in a software package. The performance of five non-rigid registration algorithms (Affine, AIR, Demons, SLE and SICLE) was evaluated using 22 images from two NIREP neuroanatomical evaluation databases. Six evaluation statistics (relative overlap, intensity variance, normalized ROI overlap, alignment of calcarine sulci, inverse consistency error and transitivity error) were used to evaluate and compare image registration performance. The results indicate that the Demons registration algorithm produced the best registration results with respect to the relative overlap statistic but produced nearly the worst registration results with respect to the inverse consistency statistic. The fact that one registration algorithm produced the best result for one criterion and nearly the worst for another illustrates the need to use multiple evaluation statistics to fully assess performance.
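    Two of the evaluation statistics named above can be stated concretely: relative overlap of propagated label masks (here the Jaccard coefficient, one common definition) and inverse consistency error (how far the composition of forward and reverse transforms is from the identity). A sketch, with the caveat that NIREP's exact definitions may differ in detail:

```python
import numpy as np

def relative_overlap(a, b):
    """Jaccard overlap of two binary label masks (1.0 = identical)."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    return (a & b).sum() / (a | b).sum()

def inverse_consistency_error(fwd, rev, points):
    """Mean distance by which rev(fwd(p)) fails to return each point
    to itself; zero for a perfectly inverse-consistent pair."""
    pts = np.asarray(points, float)
    round_trip = np.array([rev(fwd(p)) for p in pts])
    return np.linalg.norm(round_trip - pts, axis=1).mean()
```

    The Demons result in the abstract shows why both are needed: a method can maximize overlap while producing transforms that are far from invertible.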

  8. Algorithm for image registration and clutter and jitter noise reduction

    SciTech Connect

    Brower, K.L.

    1997-02-01

    This paper presents an analytical, computational method whereby two-dimensional images of an optical source represented in terms of a set of detector array signals can be registered with respect to a reference set of detector array signals. The detector image is recovered from the detector array signals and represented over a local region by a fourth order, two-dimensional Taylor series. This local detector image can then be registered by a general linear transformation with respect to a reference detector image. The detector signal in the reference frame is reconstructed by integrating this detector image over the respective reference pixel. For cases in which the general linear transformation is uncertain by up to plus-or-minus two pixels, the general linear transformation can be determined by least-squares fitting the detector image to the reference detector image. This registration process reduces clutter and jitter noise to a level comparable to the electronic noise level of the detector system. Test results with and without electronic noise using an analytical test function are presented.
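    The least-squares fit of a local misalignment can be illustrated with a first-order analogue of the paper's fourth-order Taylor-series model (a Lucas-Kanade-style normal-equations solve restricted to pure translation; the function name is illustrative):

```python
import numpy as np

def estimate_translation(ref, img):
    """Least-squares translation from the linearized model
    img(x) ~ ref(x) + t . grad(ref): solve the over-determined
    system [gx gy] t = img - ref.  Valid for sub-pixel shifts;
    the paper fits a fourth-order expansion instead."""
    gy, gx = np.gradient(ref.astype(float))
    A = np.stack([gx.ravel(), gy.ravel()], axis=1)
    diff = (img - ref).astype(float).ravel()
    t, *_ = np.linalg.lstsq(A, diff, rcond=None)
    return t                                # (tx, ty)
```

    A higher-order expansion, as in the paper, extends the capture range of the fit from sub-pixel shifts toward the plus-or-minus two pixels quoted above.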

  9. Database-guided breast tumor detection and segmentation in 2D ultrasound images

    NASA Astrophysics Data System (ADS)

    Zhang, Jingdan; Zhou, Shaohua K.; Brunke, Shelby; Lowery, Carol; Comaniciu, Dorin

    2010-03-01

    Ultrasonography is a valuable technique for diagnosing breast cancer. Computer-aided tumor detection and segmentation in ultrasound images can reduce labor cost and streamline clinic workflows. In this paper, we propose a fully automatic system to detect and segment breast tumors in 2D ultrasound images. Our system, based on database-guided techniques, learns the knowledge of breast tumor appearance exemplified by expert annotations. For tumor detection, we train a classifier to discriminate between tumors and their background. For tumor segmentation, we propose a discriminative graph cut approach, where both the data fidelity and compatibility functions are learned discriminatively. The performance of the proposed algorithms is demonstrated on a large set of 347 images, achieving a mean contour-to-contour error of 3.75 pixels in about 4.33 seconds.

  10. Non-rigid registration of medical images based on ordinal feature and manifold learning

    NASA Astrophysics Data System (ADS)

    Li, Qi; Liu, Jin; Zang, Bo

    2015-12-01

    With the rapid development of medical imaging technology, medical image research and applications have become a research hotspot. This paper offers a solution to non-rigid registration of medical images based on ordinal features (OF) and manifold learning. The structural features of medical images are extracted by combining ordinal features with local linear embedding (LLE) to improve the precision and speed of the registration algorithm. A physical model based on manifold learning and optimization search is constructed according to the complicated characteristics of non-rigid registration. The experimental results demonstrate the robustness and applicability of the proposed registration scheme.

  11. SU-E-J-29: Automatic Image Registration Performance of Three IGRT Systems for Prostate Radiotherapy

    SciTech Connect

    Barber, J; Sykes, J; Holloway, L; Thwaites, D

    2015-06-15

    Purpose: To compare the performance of an automatic image registration algorithm on image sets collected on three commercial image guidance systems, and explore its relationship with imaging parameters such as dose and sharpness. Methods: Images of a CIRS Virtually Human Male Pelvis phantom (VHMP) were collected on the CBCT systems of Varian TrueBeam/OBI and Elekta Synergy/XVI linear accelerators, across a range of mAs settings; and MVCT on a Tomotherapy Hi-ART accelerator with a range of pitch. Using the 6D correlation ratio algorithm of XVI, each image was registered to a mask of the prostate volume with a 5 mm expansion. Registrations were repeated 100 times, with random initial offsets introduced to simulate daily matching. Residual registration errors were calculated by correcting for the initial phantom set-up error. Automatic registration was also repeated after reconstructing images with different sharpness filters. Results: All three systems showed good registration performance, with residual translations <0.5mm (1σ) for typical clinical dose and reconstruction settings. Residual rotational error had larger range, with 0.8°, 1.2° and 1.9° for 1σ in XVI, OBI and Tomotherapy respectively. The registration accuracy of XVI images showed a strong dependence on imaging dose, particularly below 4mGy. No evidence of reduced performance was observed at the lowest dose settings for OBI and Tomotherapy, but these were above 4mGy. Registration failures (maximum target registration error > 3.6 mm on the surface of a 30mm sphere) occurred in 5% to 10% of registrations. Changing the sharpness of image reconstruction had no significant effect on registration performance. Conclusions: Using the present automatic image registration algorithm, all IGRT systems tested provided satisfactory registrations for clinical use, within a normal range of acquisition settings.

  12. Image Registration of High-Resolution Uav Data: the New Hypare Algorithm

    NASA Astrophysics Data System (ADS)

    Bahr, T.; Jin, X.; Lasica, R.; Giessel, D.

    2013-08-01

    Unmanned aerial vehicles play an important role in present-day civilian and military intelligence. Equipped with a variety of sensors, such as SAR imaging modes and E/O and IR sensor technology, their agility makes them suitable for many applications. Hence, the necessity arises to use fusion technologies and to develop them continuously. Here an exact image-to-image registration is essential. It serves as the basis for important image processing operations such as georeferencing, change detection, and data fusion. Therefore we developed the Hybrid Powered Auto-Registration Engine (HyPARE). HyPARE combines all available spatial reference information with a number of image registration approaches to improve the accuracy, performance, and automation of tie point generation and image registration. We demonstrate this approach by registering 39 still images from a high-resolution image stream, acquired with an Aeryon Photo3S™ camera on an Aeryon Scout micro-UAV™.

  13. A preliminary evaluation work on a 3D ultrasound imaging system for 2D array transducer

    NASA Astrophysics Data System (ADS)

    Zhong, Xiaoli; Li, Xu; Yang, Jiali; Li, Chunyu; Song, Junjie; Ding, Mingyue; Yuchi, Ming

    2016-04-01

    This paper presents a preliminary evaluation of a pre-designed 3-D ultrasound imaging system. The system mainly consists of four parts: a 7.5 MHz, 24×24 2-D array transducer, the transmit/receive circuit, the power supply, and the data acquisition and real-time imaging module. The row-column addressing scheme is adopted for the transducer fabrication, which greatly reduces the number of active channels. The element area of the transducer is 4.6 mm by 4.6 mm. Four kinds of tests were carried out to evaluate the imaging performance, covering the penetration depth range, axial and lateral resolution, positioning accuracy and 3-D imaging frame rate. Several strongly reflecting metal objects, fixed in a water tank, were selected for imaging due to the low signal-to-noise ratio of the transducer. The distance between the transducer and the tested objects, the thickness of aluminum, and the seam width of the aluminum sheet were measured by a calibrated micrometer to evaluate the penetration depth, the axial resolution and the lateral resolution, respectively. The experimental results showed that the imaging penetration depth range was from 1.0 cm to 6.2 cm, the axial and lateral resolution were 0.32 mm and 1.37 mm respectively, the imaging speed was up to 27 frames per second and the positioning accuracy was 9.2%.

  14. Photoacoustic imaging for deep targets in the breast using a multichannel 2D array transducer

    NASA Astrophysics Data System (ADS)

    Xie, Zhixing; Wang, Xueding; Morris, Richard F.; Padilla, Frederic R.; Lecarpentier, Gerald L.; Carson, Paul L.

    2011-03-01

    A photoacoustic (PA) imaging system was developed to achieve high sensitivity for the detection and characterization of vascular anomalies in the breast in the mammographic geometry. Signal detection from deep in the breast was achieved by a broadband 2D PVDF planar array that has a round shape with one side trimmed straight to improve fit near the chest wall. This array has 572 active elements and a -6 dB bandwidth of 0.6-1.7 MHz. The low frequency enhances imaging depth and increases the size of vascular collections displayed without edge enhancement. The PA signals from all the elements go through low noise preamplifiers in the probe that are very close to the array elements for optimized noise control. Driven by 20 independent on-probe signal processing channels, imaging with both high sensitivity and good speed was achieved. To evaluate the imaging depth and the spatial resolution of this system, 2.38 mm I.D. artificial vessels embedded deeply in ex vivo breasts harvested from fresh cadavers and a 3 mm I.D. tube in breast-mimicking phantoms made of pork loin and fat tissues were imaged. Using near-infrared laser light with incident energy density within the ANSI safety limit, imaging depths of up to 49 mm in human breasts and 52 mm in phantoms were achieved. With a high power tunable laser working on multiple wavelengths, this system might contribute to 3D noninvasive imaging of morphological and physiological tissue features throughout the breast.

  15. Pulmonary CT image registration and warping for tracking tissue deformation during the respiratory cycle through 3D consistent image registration

    PubMed Central

    Li, Baojun; Christensen, Gary E.; Hoffman, Eric A.; McLennan, Geoffrey; Reinhardt, Joseph M.

    2008-01-01

    Tracking lung tissues during the respiratory cycle has been a challenging task for diagnostic CT and CT-guided radiotherapy. We propose an intensity- and landmark-based image registration algorithm to perform image registration and warping of 3D pulmonary CT image data sets, based on consistency constraints and matching corresponding airway branchpoints. In this paper, we demonstrate the effectiveness and accuracy of this algorithm in tracking lung tissues with both animal and human data sets. In the animal study, the result showed a tracking accuracy of 1.9 mm between 50% functional residual capacity (FRC) and 85% total lung capacity (TLC) for 12 metal seeds implanted in the lungs of a breathing sheep under precise volume control using a pulmonary ventilator. Visual inspection of the human subject results revealed the algorithm’s potential not only in matching the global shapes, but also in registering the internal structures (e.g., oblique lobe fissures, pulmonary artery branches, etc.). These results suggest that our algorithm has significant potential for warping and tracking lung tissue deformation with applications in diagnostic CT, CT-guided radiotherapy treatment planning, and therapeutic effect evaluation. PMID:19175115

  16. Pulmonary CT image registration and warping for tracking tissue deformation during the respiratory cycle through 3D consistent image registration.

    PubMed

    Li, Baojun; Christensen, Gary E; Hoffman, Eric A; McLennan, Geoffrey; Reinhardt, Joseph M

    2008-12-01

    Tracking lung tissues during the respiratory cycle has been a challenging task for diagnostic CT and CT-guided radiotherapy. We propose an intensity- and landmark-based image registration algorithm to perform image registration and warping of 3D pulmonary CT image data sets, based on consistency constraints and matching corresponding airway branchpoints. In this paper, we demonstrate the effectiveness and accuracy of this algorithm in tracking lung tissues with both animal and human data sets. In the animal study, the result showed a tracking accuracy of 1.9 mm between 50% functional residual capacity (FRC) and 85% total lung capacity (TLC) for 12 metal seeds implanted in the lungs of a breathing sheep under precise volume control using a pulmonary ventilator. Visual inspection of the human subject results revealed the algorithm's potential not only in matching the global shapes, but also in registering the internal structures (e.g., oblique lobe fissures, pulmonary artery branches, etc.). These results suggest that our algorithm has significant potential for warping and tracking lung tissue deformation with applications in diagnostic CT, CT-guided radiotherapy treatment planning, and therapeutic effect evaluation.

  17. Conoscopic holography for image registration: a feasibility study

    NASA Astrophysics Data System (ADS)

    Lathrop, Ray A.; Cheng, Tiffany T.; Webster, Robert J., III

    2009-02-01

    Preoperative image data can facilitate intrasurgical guidance by revealing interior features of opaque tissues, provided image data can be accurately registered to the physical patient. Registration is challenging in organs that are deformable and lack features suitable for use as alignment fiducials (e.g. liver, kidneys, etc.). However, provided intraoperative sensing of surface contours can be accomplished, a variety of rigid and deformable 3D surface registration techniques become applicable. In this paper, we evaluate the feasibility of conoscopic holography as a new method to sense organ surface shape. We also describe potential advantages of conoscopic holography, including the promise of replacing open surgery with a laparoscopic approach. Our feasibility study investigated use of a tracked off-the-shelf conoscopic holography unit to perform surface scans on several types of biological and synthetic phantom tissues. After first exploring baseline accuracy and repeatability of distance measurements, we performed a number of surface scan experiments on the phantom and ex vivo tissues with a variety of surface properties and shapes. These indicate that conoscopic holography is capable of generating surface point clouds of at least comparable (and perhaps eventually improved) accuracy in comparison to published experimental laser triangulation-based surface scanning results.

  18. Automated subject-specific, hexahedral mesh generation via image registration

    PubMed Central

    Ji, Songbai; Ford, James C.; Greenwald, Richard M.; Beckwith, Jonathan G.; Paulsen, Keith D.; Flashman, Laura A.; McAllister, Thomas W.

    2011-01-01

    Generating subject-specific, all-hexahedral meshes for finite element analysis continues to be of significant interest in biomechanical research communities. To date, most automated methods “morph” an existing atlas mesh to match a subject anatomy, which usually results in degraded mesh quality because of mesh distortion. We present an automated meshing technique that produces satisfactory mesh quality and accuracy without mesh repair. An atlas mesh is first developed using a script. A subject-specific mesh is generated with the same script after transforming the geometry into the atlas space following rigid image registration, and is transformed back into the subject space. By meshing the brain in 11 subjects, we demonstrate that the technique’s performance is satisfactory in terms of both mesh quality (99.5% of elements had a scaled Jacobian >0.6 while <0.01% were between 0 and 0.2) and accuracy (the average distance between mesh boundary and geometrical surface was 0.07 mm, while <1% was greater than 0.5 mm). The combined computational cost for image registration and meshing was <4 min. Our results suggest that the technique is effective for generating subject-specific, all-hexahedral meshes and that it may be useful for meshing a variety of anatomical structures across different biomechanical research fields. PMID:21731153

  19. Development of ultra-fast 2D ion Doppler tomography using image intensified CMOS fast camera

    NASA Astrophysics Data System (ADS)

    Tanabe, Hiroshi; Kuwahata, Akihiro; Yamanaka, Haruki; Inomoto, Michiaki; Ono, Yasushi; TS-group Team

    2015-11-01

    The world's fastest time-resolved 2D ion Doppler tomography diagnostic has been developed using a fast camera with a high-speed gated image intensifier (frame rate: 200 kfps; phosphor decay time: ~1 μs). The time evolution of line-integrated spectra is diffracted by an f=1 m, F/8.3, g=2400 L/mm Czerny-Turner polychromator, whose output is intensified and recorded by a high-speed camera with a spectral resolution of ~0.005 nm/pixel. The system can accommodate up to 36 (9×4) spatial points recorded at 5 μs time resolution; tomographic reconstruction is applied to the line-integrated spectra, and time-resolved (5 μs/frame) local 2D ion temperature measurement has been achieved without any assumption of shot repeatability. Ion heating during intermittent reconnection events, which tend to occur during high-guide-field tokamak merging, was measured around the diffusion region in UTST. The measured 2D profile shows ion heating inside the acceleration channel of the reconnection outflow jet, at the stagnation point, and in the downstream region where the reconnected field forms thick closed flux surfaces, as in MAST. The achieved maximum ion temperature increases as a function of B_rec^2 and shows good agreement with the MAST experiment, demonstrating a promising CS-less startup scenario for spherical tokamaks. This work is supported by JSPS KAKENHI Grant Numbers 15H05750 and 15K20921.

  20. Digital image registration method based upon binary boundary maps

    NASA Technical Reports Server (NTRS)

    Jayroe, R. R., Jr.; Andrus, J. F.; Campbell, C. W.

    1974-01-01

    A relatively fast method is presented for matching or registering the digital data of imagery from the same ground scene acquired at different times, or from different multispectral images, sensors, or both. It is assumed that the digital images can be registered by using translations and rotations only, that the images are of the same scale, and that little or no distortion exists between images. It is further assumed that by working with several local areas of the image, the rotational effects in the local areas can be neglected. Thus, by treating the misalignments of local areas as translations, it is possible to determine rotational and translational misalignments for a larger portion of the image containing the local areas. This procedure of determining the misalignment and then registering the data according to the misalignment can be repeated until the desired degree of registration is achieved. The method presented is based upon the use of binary boundary maps produced from the raw digital imagery rather than the raw digital data.
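    The step of turning per-area translation estimates into a global rotation plus translation admits a closed-form least-squares solution (the Kabsch/Procrustes fit); this sketch shows one standard way to do it, not necessarily the paper's, with illustrative names:

```python
import numpy as np

def rigid_from_local_shifts(centers, shifts):
    """Fit a global 2D rotation R and translation t from per-block
    translation estimates: each block center c is observed to move
    to c + shift.  Closed-form Kabsch/Procrustes solution."""
    src = np.asarray(centers, float)
    dst = src + np.asarray(shifts, float)
    sc, dc = src.mean(axis=0), dst.mean(axis=0)
    H = (src - sc).T @ (dst - dc)          # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:               # guard against a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dc - R @ sc
    return R, t                            # dst_i ~ R @ src_i + t
```

    Iterating this fit-and-resample loop, as the abstract describes, drives the residual misalignment down until the desired degree of registration is reached.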

  1. Rolled fingerprint construction using MRF-based nonrigid image registration.

    PubMed

    Kwon, Dongjin; Yun, Il Dong; Lee, Sang Uk

    2010-12-01

    This paper proposes a new rolled fingerprint construction approach incorporating a state-of-the-art nonrigid image registration method based upon a Markov random field (MRF) energy model. The proposed method finds dense correspondences between images from a rolled fingerprint sequence and warps the entire fingerprint area to synthesize a rolled fingerprint. This method can generate conceptually more accurate rolled fingerprints by preserving the geometric properties of the finger surface as opposed to ink-based rolled impressions and other existing rolled fingerprint construction methods. To verify the accuracy of the proposed method, various comparative experiments were designed to reveal differences among the rolled construction methods. The results show that the proposed method is significantly superior in various aspects compared to previous approaches.

  2. Deformable image registration for multimodal lung-cancer staging

    NASA Astrophysics Data System (ADS)

    Cheirsilp, Ronnarit; Zang, Xiaonan; Bascom, Rebecca; Allen, Thomas W.; Mahraj, Rickhesvar P. M.; Higgins, William E.

    2016-03-01

    Positron emission tomography (PET) and X-ray computed tomography (CT) serve as major diagnostic imaging modalities in the lung-cancer staging process. Modern scanners provide co-registered whole-body PET/CT studies, collected while the patient breathes freely, and high-resolution chest CT scans, collected under a brief patient breath hold. Unfortunately, no method exists for registering a PET/CT study into the space of a high-resolution chest CT scan. If this could be done, vital diagnostic information offered by the PET/CT study could be brought seamlessly into the procedure plan used during live cancer-staging bronchoscopy. We propose a method for the deformable registration of whole-body PET/CT data into the space of a high-resolution chest CT study. We then demonstrate its potential for procedure planning and subsequent use in multimodal image-guided bronchoscopy.

  3. Visualizing 3D Objects from 2D Cross Sectional Images Displayed "In-Situ" versus "Ex-Situ"

    ERIC Educational Resources Information Center

    Wu, Bing; Klatzky, Roberta L.; Stetten, George

    2010-01-01

    The present research investigates how mental visualization of a 3D object from 2D cross sectional images is influenced by displacing the images from the source object, as is customary in medical imaging. Three experiments were conducted to assess people's ability to integrate spatial information over a series of cross sectional images in order to…

  4. 2D aperture synthesis for Lamb wave imaging using co-arrays

    NASA Astrophysics Data System (ADS)

    Ambrozinski, Lukasz; Stepinski, Tadeusz; Uhl, Tadeusz

    2014-03-01

    2D ultrasonic arrays in Lamb-wave-based SHM systems can operate in the phased array (PA) or synthetic focusing (SF) mode. In the real-time PA approach, multiple electronically delayed signals excite the transmitting elements to form the desired wave-front, while the receiving elements sense the scattered waves. The PA mode therefore requires multichannel hardware and multiple excitations at numerous azimuths to scan the inspected region of interest. The SF mode, by contrast, assumes single-element excitation of the successive transmitters and off-line processing of the acquired data. In the simplest implementation of the SF technique, only a single multiplexed input channel and output channel are required, which significantly simplifies the hardware. The performance of a 2D imaging array depends on many parameters, such as its topology, the number of transducers and their spacing in terms of wavelength, as well as the type of weighting function (apodization). Moreover, it is possible to use sparse arrays, in which not all array elements are used for transmitting and/or receiving. In this paper the co-array concept is applied to facilitate the synthesis of an array's aperture used in the multistatic synthetic focusing approach in Lamb-wave-based imaging systems. In coherent imaging, performed in the transmit/receive mode, the sum co-array is a morphological convolution of the transmit and receive sub-arrays: it can be calculated as the set of sums of the individual element locations in the sub-arrays used for imaging. The co-array framework is presented here using two different array topologies, a 1D uniform linear array and a cross-shaped array that results in a square co-array. The approach is discussed in terms of the array patterns and beam patterns of the resulting imaging systems. Both theoretical and experimental results are given.
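The sum co-array defined above (the set of sums of transmit and receive element locations) is straightforward to compute directly. The sketch below, with hypothetical element layouts, illustrates the cross-shaped-array case: a horizontal transmit line plus a vertical receive line fills a full square co-array.

```python
from itertools import product

def sum_coarray(tx_positions, rx_positions):
    """Sum co-array: the set of pairwise sums of transmit and receive
    element positions (the 'morphological convolution' of the apertures)."""
    return {(tx[0] + rx[0], tx[1] + rx[1])
            for tx, rx in product(tx_positions, rx_positions)}

# Cross-shaped array: two orthogonal 1D uniform linear sub-arrays.
tx = [(i, 0) for i in range(4)]   # transmit elements along x
rx = [(0, j) for j in range(4)]   # receive elements along y
coarray = sum_coarray(tx, rx)     # fills a full 4x4 square grid
```

Only 8 physical elements thus produce 16 distinct co-array positions, which is the hardware saving the co-array framework exploits.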

  5. Enhanced detection of the vertebrae in 2D CT-images

    NASA Astrophysics Data System (ADS)

    Graf, Franz; Greil, Robert; Kriegel, Hans-Peter; Schubert, Matthias; Cavallaro, Alexander

    2012-02-01

    In recent years, a considerable number of methods have been proposed for detecting and reconstructing the spine and the vertebrae from CT and MR scans. The results are either used for examining the vertebrae or serve as a preprocessing step for further detection and annotation tasks. In this paper, we propose a method for reliably detecting the position of the vertebrae on a single slice of a transversal body CT scan. Thus, our method is not restricted by the available portion of the 3D scan but works even with a single 2D image. A further advantage of our method is that detection requires neither parameter adjustment nor direct user interaction. Technically, our method is based on an imaging pipeline comprising five steps: the input image is preprocessed; the relevant region of the image is extracted; a set of candidate locations is selected based on bone density; image features are extracted from the surroundings of the candidate locations and an instance-based learning approach is used to select the best candidate; finally, a refinement step optimizes the best candidate region. Our proposed method is validated on a large, diverse data set of more than 8,000 images and significantly improves the accuracy, in terms of area overlap and distance from the true position, compared to the only other method proposed for this task so far.

  6. Target error for image-to-physical space registration: preliminary clinical results using laser range scanning

    NASA Astrophysics Data System (ADS)

    Cao, Aize; Miga, Michael I.; Dumpuri, P.; Ding, S.; Dawant, B. M.; Thompson, R. C.

    2007-03-01

    In this paper, preliminary results from an image-to-physical space registration platform are presented. The current platform employs traditional and novel methods of registration which use a variety of data sources, including: traditional synthetic skin-fiducial point-based registration, surface registration based on facial contours, brain feature point-based registration, brain vessel-to-vessel registration, and a more comprehensive cortical surface registration method that utilizes both geometric and intensity information from the image volume and the physical patient. The intraoperative face and cortical surfaces were digitized using a laser range scanner (LRS) capable of producing highly resolved textured point clouds. In two in vivo cases, a series of registrations were performed using these techniques and compared within the context of a true target error. One advantage of using a textured point cloud data stream is that true targets on the physical cortical surface and in the preoperative image volume can be identified and used to assess image-to-physical registration methods. The results suggest that the iterative closest point (ICP) method for intraoperative face surface registration is equivalent to the point-based registration (PBR) method using skin fiducial markers. With regard to the initial image-to-physical space registration, for patient 1 the mean target registration error (TRE) was 3.1 +/- 0.4 mm and 3.6 +/- 0.9 mm for face ICP and skin fiducial PBR, respectively. For patient 2, the mean TRE was 5.7 +/- 1.3 mm and 6.6 +/- 0.9 mm for face ICP and skin fiducial PBR, respectively. With regard to intraoperative cortical surface registration, SurfaceMI outperformed feature-based PBR and vessel ICP with 1.7 +/- 1.8 mm for patient 1. For patient 2, the best result was achieved by vessel ICP with 1.9 +/- 0.5 mm.
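Point-based registration and the TRE metric compared above can be sketched generically: PBR is a least-squares rigid alignment of corresponding fiducials (the Kabsch/Procrustes solution), and TRE is the residual distance at targets that were *not* used for the fit. This is a standard formulation, not the platform's actual implementation.

```python
import numpy as np

def point_based_registration(fixed, moving):
    """Least-squares rigid transform (R, t) minimizing ||R m_i + t - f_i||
    over corresponding fiducial points (Kabsch/Procrustes)."""
    fixed, moving = np.asarray(fixed, float), np.asarray(moving, float)
    cf, cm = fixed.mean(0), moving.mean(0)
    H = (moving - cm).T @ (fixed - cf)          # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    D = np.diag([1.0] * (fixed.shape[1] - 1) + [d])  # guard against reflection
    R = Vt.T @ D @ U.T
    t = cf - R @ cm
    return R, t

def target_registration_error(R, t, targets_moving, targets_fixed):
    """Mean distance between mapped targets and their true positions."""
    mapped = np.asarray(targets_moving) @ R.T + t
    return np.linalg.norm(mapped - np.asarray(targets_fixed), axis=1).mean()
```

With noiseless correspondences the fit is exact; clinical TREs like those quoted above arise from localization noise and tissue motion.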

  7. Diesel combustion and emissions formation using multiple 2-D imaging diagnostics

    SciTech Connect

    Dec, J.E.

    1997-12-31

    Understanding how emissions are formed during diesel combustion is central to developing new engines that can comply with increasingly stringent emission standards while maintaining or improving performance levels. Laser-based planar imaging diagnostics are uniquely capable of providing the temporally and spatially resolved information required for this understanding. Using an optically accessible research engine, a variety of two-dimensional (2-D) imaging diagnostics have been applied to investigations of direct-injection (DI) diesel combustion and emissions formation. These optical measurements have included the following laser-sheet imaging data: Mie scattering to determine liquid-phase fuel distributions, Rayleigh scattering for quantitative vapor-phase-fuel/air mixture images, laser-induced incandescence (LII) for relative soot concentrations, simultaneous LII and Rayleigh scattering for relative soot particle-size distributions, planar laser-induced fluorescence (PLIF) to obtain early PAH (polyaromatic hydrocarbon) distributions, PLIF images of the OH radical that show the diffusion flame structure, and PLIF images of the NO radical showing the onset of NO{sub x} production. In addition, natural-emission chemiluminescence images were obtained to investigate autoignition. The experimental setup is described, and the image data showing the most relevant results are presented. Then the conceptual model of diesel combustion is summarized in a series of idealized schematics depicting the temporal and spatial evolution of a reacting diesel fuel jet during the time period investigated. Finally, recent PLIF images of the NO distribution are presented and shown to support the timing and location of NO formation hypothesized from the conceptual model.

  8. 3D registration of intravascular optical coherence tomography and cryo-image volumes for microscopic-resolution validation

    NASA Astrophysics Data System (ADS)

    Prabhu, David; Mehanna, Emile; Gargesha, Madhusudhana; Wen, Di; Brandt, Eric; van Ditzhuijzen, Nienke S.; Chamie, Daniel; Yamamoto, Hirosada; Fujino, Yusuke; Farmazilian, Ali; Patel, Jaymin; Costa, Marco; Bezerra, Hiram G.; Wilson, David L.

    2016-03-01

    High resolution, 100 frames/sec intravascular optical coherence tomography (IVOCT) can distinguish plaque types, but further validation is needed, especially for automated plaque characterization. We developed experimental and 3D registration methods to provide validation of IVOCT pullback volumes using microscopic, brightfield and fluorescent cryo-image volumes, with optional, exactly registered cryo-histology. The innovation was a method to match IVOCT pullback images, acquired in the catheter reference frame, to a true 3D cryo-image volume. Briefly, an 11-parameter, polynomial virtual catheter was initialized within the cryo-image volume, and perpendicular images were extracted, mimicking IVOCT image acquisition. Virtual catheter parameters were optimized to maximize cryo and IVOCT lumen overlap. Local minima were possible, but when we started within reasonable ranges, every one of 24 digital phantom cases converged to a good solution with a registration error of only +1.34+/-2.65 μm (signed distance). Registration was applied to 10 ex-vivo cadaver coronary arteries (LADs), resulting in 10 registered cryo and IVOCT volumes yielding a total of 421 registered 2D-image pairs. Image overlays demonstrated high continuity between vascular and plaque features. Bland-Altman analysis comparing cryo and IVOCT lumen area showed mean and standard deviation of differences of 0.01+/-0.43 mm2. DICE coefficients were 0.91+/-0.04. Finally, visual assessment of 20 representative cases with easily identifiable features suggested registration accuracy within one frame of IVOCT (+/-200 μm), eliminating significant misinterpretations introduced by 1 mm errors in the literature. The method will provide 3D data for training of IVOCT plaque algorithms and can be used for validation of other intravascular imaging modalities.
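The DICE coefficient reported above scores the overlap of two binary lumen masks as twice the intersection area over the sum of the two areas; a minimal sketch (not the authors' code):

```python
import numpy as np

def dice(mask_a, mask_b):
    """DICE overlap of two binary masks: 2|A∩B| / (|A| + |B|);
    1.0 means identical masks, 0.0 means disjoint."""
    a, b = np.asarray(mask_a, bool), np.asarray(mask_b, bool)
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())
```

A mean DICE of 0.91, as quoted, therefore means the registered cryo and IVOCT lumen masks shared about 91% of their combined area on average.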

  9. Plane-wave transverse oscillation for high-frame-rate 2-D vector flow imaging.

    PubMed

    Lenge, Matteo; Ramalli, Alessandro; Tortoli, Piero; Cachard, Christian; Liebgott, Hervé

    2015-12-01

    Transverse oscillation (TO) methods introduce oscillations in the pulse-echo field (PEF) along the direction transverse to the ultrasound propagation direction. This may be exploited to extend flow investigations toward multidimensional estimates. In this paper, the TOs are coupled with the transmission of plane waves (PWs) to reconstruct high-framerate RF images with bidirectional oscillations in the pulse-echo field. Such RF images are then processed by a 2-D phase-based displacement estimator to produce 2-D vector flow maps at thousands of frames per second. First, the capability of generating TOs after PW transmissions was thoroughly investigated by varying the lateral wavelength, the burst length, and the transmission frequency. Over the entire region of interest, the generated lateral wavelengths, compared with the designed ones, presented bias and standard deviation of -3.3 ± 5.7% and 10.6 ± 7.4% in simulations and experiments, respectively. The performance of the ultrafast vector flow mapping method was also assessed by evaluating the differences between the estimated velocities and the expected ones. Both simulations and experiments show overall biases lower than 20% when varying the beam-to-flow angle, the peak velocity, and the depth of interest. In vivo applications of the method on the common carotid and the brachial arteries are also presented. PMID:26670852

  10. Design of the 2D electron cyclotron emission imaging instrument for the J-TEXT tokamak

    NASA Astrophysics Data System (ADS)

    Pan, X. M.; Yang, Z. J.; Ma, X. D.; Zhu, Y. L.; Luhmann, N. C.; Domier, C. W.; Ruan, B. W.; Zhuang, G.

    2016-11-01

    A new 2D Electron Cyclotron Emission Imaging (ECEI) diagnostic is being developed for the J-TEXT tokamak. It will provide the 2D electron temperature information with high spatial, temporal, and temperature resolution. The new ECEI instrument is being designed to support fundamental physics investigations on J-TEXT including MHD, disruption prediction, and energy transport. The diagnostic contains two dual dipole antenna arrays corresponding to F band (90-140 GHz) and W band (75-110 GHz), respectively, and comprises a total of 256 channels. The system can observe the same magnetic surface at both the high field side and low field side simultaneously. An advanced optical system has been designed which permits the two arrays to focus on a wide continuous region or two radially separate regions with high imaging spatial resolution. It also incorporates excellent field curvature correction with field curvature adjustment lenses. An overview of the diagnostic and the technical progress including the new remote control technique are presented.

  11. 2-D array for 3-D Ultrasound Imaging Using Synthetic Aperture Techniques

    PubMed Central

    Daher, Nadim M.; Yen, Jesse T.

    2010-01-01

    A 2-D array of 256 × 256 = 65,536 elements, with total area 4 × 4 = 16 cm2, serves as a flexible platform for developing acquisition schemes for 3-D rectilinear ultrasound imaging at 10 MHz using synthetic aperture techniques. This innovative system combines a simplified interconnect scheme and synthetic aperture techniques with a 2-D array for 3-D imaging. A row-column addressing scheme is used to access different elements for different transmit events. This addressing scheme is achieved through a simple interconnect consisting of one top and one bottom single-layer flex circuit, which, compared to multi-layer flex circuits, are simpler to design, cheaper to manufacture, and thinner, so their effect on the acoustic response is minimized. We present three designs that each prioritize a different design objective: volume acquisition time, resolution, or sensitivity, while maintaining acceptable figures for the other objectives. For example, one design relaxes the acquisition-time requirement, assumes good noise conditions, and optimizes for resolution, achieving −6 dB and −20 dB beamwidths of less than 0.2 and 0.5 millimeters, respectively, for an F/2 aperture. Another design can acquire an entire volume in 256 transmit events, with −6 dB and −20 dB beamwidths on the order of 0.4 and 0.8 millimeters, respectively. PMID:16764446

  13. 2D label-free imaging of resonant grating biochips in ultraviolet.

    PubMed

    Bougot-Robin, K; Reverchon, J-L; Fromant, M; Mugherli, L; Plateau, P; Benisty, H

    2010-05-24

    2D images of label-free biochips exploiting resonant waveguide gratings (RWGs) are presented. They indicate sensitivities on the order of 1 pg/mm2 for proteins in air, and hence 10 pg/mm2 in water can safely be expected. A 320 × 256 pixel aluminum-gallium-nitride-based sensor array is used, with an intrinsic narrow spectral window centered at 280 nm. The additional role of characteristic biological-layer absorption at this wavelength is calculated, and regimes revealing its impact are discussed. Experimentally, the resonance of a chip coated with protein is revealed, and the sensitivity is evaluated through angular spectroscopy and imaging. In addition to a sensitivity similar to surface plasmon resonance (SPR), the RWG resonance can be flexibly tailored to gain spatial, biochemical, or spectral sensitivity.

  14. High contrast 2D visualization of edge plasma instabilities by ECE imaging

    NASA Astrophysics Data System (ADS)

    Yun, G. S.; Choi, M. J.; Lee, W.; Park, H. K.; Domier, C. W.; Luhmann, N. C., Jr.

    2012-01-01

    High-contrast, high-resolution 2D images of edge MHD instabilities have been obtained for the KSTAR H-mode plasmas in 2010 using an electron cyclotron emission (ECE) imaging system. A fast structural evolution of the edge instabilities has been identified, where the validity of the observed structures, i.e., the locality of the measurement, is ensured by the high contrast. On the other hand, the exact interpretation of the ECE intensity (Trad) is not straightforward due to the marginal optical depth (~1) in the plasma edge region. The effects of the electron temperature (Te) and density (ne) profiles in the edge region on the ECE localization and intensity have been evaluated for typical KSTAR H-mode discharges.

  15. Iterative edge- and wavelet-based image registration of AVHRR and GOES satellite imagery

    NASA Technical Reports Server (NTRS)

    LeMoigne, Jacqueline; El-Saleous, Nazmi; Vermote, Eric

    1997-01-01

    Most automatic registration methods are either correlation-based, feature-based, or a combination of both. Examples of features which can be utilized for automatic image registration are edges, regions, corners, or wavelet-extracted features. In this paper, we describe two proposed approaches, based on edge or edge-like features, which are well suited to highlighting regions of interest such as coastlines. The two iterative methods utilize the normalized cross-correlation of edge and wavelet features and are applied to problems such as image-to-map registration, landmarking, and channel-to-channel co-registration, using test data, AVHRR data, and GOES image data.
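The core idea of correlating edge features to recover a translational offset can be sketched as follows. This is an illustrative stand-in, assuming plain gradient magnitude as the edge feature and a circular FFT-based correlation, rather than the paper's exact edge/wavelet pipeline.

```python
import numpy as np

def edge_map(img):
    """Simple gradient-magnitude edge features (a stand-in for the
    paper's edge and wavelet features)."""
    gy, gx = np.gradient(img.astype(float))
    return np.hypot(gx, gy)

def register_translation(ref, tgt):
    """Peak of the circular cross-correlation of zero-mean edge maps,
    computed via FFT; returns the (row, col) shift taking ref onto tgt."""
    a = edge_map(ref)
    b = edge_map(tgt)
    a -= a.mean()
    b -= b.mean()
    corr = np.fft.ifft2(np.fft.fft2(b) * np.conj(np.fft.fft2(a))).real
    idx = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap the peak index into a signed shift range.
    return tuple(int((i + n // 2) % n - n // 2) for i, n in zip(idx, corr.shape))
```

In the iterative schemes described above, a step like this would be applied per scale or per region and the estimate refined.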

  16. Fast Confocal Raman Imaging Using a 2-D Multifocal Array for Parallel Hyperspectral Detection.

    PubMed

    Kong, Lingbo; Navas-Moreno, Maria; Chan, James W

    2016-01-19

    We present the development of a novel confocal hyperspectral Raman microscope capable of imaging at speeds up to 100 times faster than conventional point-scan Raman microscopy under high noise conditions. The microscope utilizes scanning galvomirrors to generate a two-dimensional (2-D) multifocal array at the sample plane, generating Raman signals simultaneously at each focus of the array pattern. The signals are combined into a single beam and delivered through a confocal pinhole before being focused through the slit of a spectrometer. To separate the signals from each row of the array, a synchronized scan mirror placed in front of the spectrometer slit positions the Raman signals onto different pixel rows of the detector. We devised an approach to deconvolve the superimposed signals and retrieve the individual spectra at each focal position within a given row. The galvomirrors were programmed to scan different focal arrays following Hadamard encoding patterns. A key feature of the Hadamard detection is the reconstruction of individual spectra with improved signal-to-noise ratio. Using polystyrene beads as test samples, we demonstrated not only that our system images faster than a conventional point-scan method but that it is especially advantageous under noisy conditions, such as when the CCD detector operates at fast read-out rates and high temperatures. This is the first demonstration of multifocal confocal Raman imaging in which parallel spectral detection is implemented along both axes of the CCD detector chip. We envision this novel 2-D multifocal spectral detection technique can be used to develop faster imaging spontaneous Raman microscopes with lower cost detectors. PMID:26654100
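The Hadamard-encoded acquisition and reconstruction step can be illustrated with a small simulation. The on-off masks and linear decoding below are a generic Hadamard-multiplexing sketch under stated assumptions (Sylvester construction, power-of-two array size), not the instrument's actual processing.

```python
import numpy as np

def sylvester_hadamard(n):
    """Sylvester-construction Hadamard matrix (n must be a power of two)."""
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

def hadamard_multiplex_demo(spectra):
    """Encode n per-focus spectra with 0/1 Hadamard-derived on-off masks
    (one superimposed readout per pattern), then decode by inversion.
    spectra: (n, n_wavelengths) array of individual-focus spectra."""
    n = spectra.shape[0]
    M = (sylvester_hadamard(n) + 1) / 2   # 0/1 mask patterns (invertible)
    measurements = M @ spectra            # superimposed signals per exposure
    return np.linalg.solve(M, measurements)
```

Because roughly half the foci contribute to every exposure, decoding averages the detector read noise over many measurements, which is the SNR advantage the abstract describes.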

  17. Biomechanical deformable image registration of longitudinal lung CT images using vessel information.

    PubMed

    Cazoulat, Guillaume; Owen, Dawn; Matuszak, Martha M; Balter, James M; Brock, Kristy K

    2016-07-01

    Spatial correlation of lung tissue across longitudinal images, as the patient responds to treatment, is a critical step in adaptive radiotherapy. The goal of this work is to expand a biomechanical model-based deformable registration algorithm (Morfeus) to achieve accurate registration in the presence of significant anatomical changes. Six lung cancer patients previously treated with conventionally fractionated radiotherapy were retrospectively evaluated. Exhale CT scans were obtained at treatment planning and following three weeks of treatment. For each patient, the planning CT was registered to the follow-up CT using Morfeus, a biomechanical model-based deformable registration algorithm. To model the complex response of the lung, an extension to Morfeus has been developed: an initial deformation was estimated with Morfeus consisting of boundary conditions on the chest wall and incorporating a sliding interface with the lungs. It was hypothesized that the addition of boundary conditions based on vessel tree matching would provide a robust reduction of the residual registration error. To achieve this, the vessel trees were segmented on the two images by thresholding a vesselness image based on the Hessian matrix's eigenvalues. For each point on the reference vessel tree centerline, the displacement vector was estimated by applying a variant of the Demons registration algorithm between the planning CT and the deformed follow-up CT. An expert independently identified corresponding landmarks well distributed in the lung to compute target registration errors (TRE). The TRE was 5.8 +/- 2.9, 3.4 +/- 2.3 and 1.6 +/- 1.3 mm after rigid registration, Morfeus and Morfeus with boundary conditions on the vessel tree, respectively. In conclusion, the addition of boundary conditions on the vessels significantly improved the accuracy in modeling the response of the lung and tumor over the course of radiotherapy. Minimizing and modeling these geometrical
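The vesselness image mentioned above, built from the Hessian matrix's eigenvalues, is commonly implemented Frangi-style; the abstract does not give the exact formulation, so the sketch below is one standard 2-D variant with illustrative parameter values.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def vesselness_2d(img, sigma=2.0, beta=0.5, c=15.0):
    """Frangi-style 2-D vesselness from Hessian eigenvalues: bright
    tubular structures have one near-zero and one large negative eigenvalue."""
    img = img.astype(float)
    # Second-order Gaussian derivatives give the Hessian at scale sigma.
    Hxx = gaussian_filter(img, sigma, order=(0, 2))
    Hyy = gaussian_filter(img, sigma, order=(2, 0))
    Hxy = gaussian_filter(img, sigma, order=(1, 1))
    # Eigenvalues of the symmetric 2x2 Hessian, ordered so |l1| <= |l2|.
    tmp = np.sqrt(((Hxx - Hyy) / 2) ** 2 + Hxy ** 2)
    mean = (Hxx + Hyy) / 2
    e1, e2 = mean + tmp, mean - tmp
    swap = np.abs(e1) > np.abs(e2)
    l1 = np.where(swap, e2, e1)
    l2 = np.where(swap, e1, e2)
    Rb2 = (l1 / (l2 + 1e-12)) ** 2        # blob-vs-line ratio
    S2 = l1 ** 2 + l2 ** 2                # second-order structure strength
    v = np.exp(-Rb2 / (2 * beta ** 2)) * (1 - np.exp(-S2 / (2 * c ** 2)))
    v[l2 > 0] = 0.0                       # keep bright ridges only
    return v
```

Thresholding such a map (typically over several scales) yields the vessel-tree segmentation whose centerlines drive the boundary conditions described above.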

  18. Biomechanical deformable image registration of longitudinal lung CT images using vessel information

    NASA Astrophysics Data System (ADS)

    Cazoulat, Guillaume; Owen, Dawn; Matuszak, Martha M.; Balter, James M.; Brock, Kristy K.

    2016-07-01

    Spatial correlation of lung tissue across longitudinal images, as the patient responds to treatment, is a critical step in adaptive radiotherapy. The goal of this work is to expand a biomechanical model-based deformable registration algorithm (Morfeus) to achieve accurate registration in the presence of significant anatomical changes. Six lung cancer patients previously treated with conventionally fractionated radiotherapy were retrospectively evaluated. Exhale CT scans were obtained at treatment planning and following three weeks of treatment. For each patient, the planning CT was registered to the follow-up CT using Morfeus, a biomechanical model-based deformable registration algorithm. To model the complex response of the lung, an extension to Morfeus has been developed: an initial deformation was estimated with Morfeus consisting of boundary conditions on the chest wall and incorporating a sliding interface with the lungs. It was hypothesized that the addition of boundary conditions based on vessel tree matching would provide a robust reduction of the residual registration error. To achieve this, the vessel trees were segmented on the two images by thresholding a vesselness image based on the Hessian matrix's eigenvalues. For each point on the reference vessel tree centerline, the displacement vector was estimated by applying a variant of the Demons registration algorithm between the planning CT and the deformed follow-up CT. An expert independently identified corresponding landmarks well distributed in the lung to compute target registration errors (TRE). The TRE was 5.8 +/- 2.9, 3.4 +/- 2.3 and 1.6 +/- 1.3 mm after rigid registration, Morfeus and Morfeus with boundary conditions on the vessel tree, respectively. In conclusion, the addition of boundary conditions on the vessels significantly improved the accuracy in modeling the response of the lung and tumor over the course of radiotherapy. Minimizing and modeling these geometrical uncertainties will enable

  19. 2-D Gaussian beam imaging of multicomponent seismic data in anisotropic media

    NASA Astrophysics Data System (ADS)

    Protasov, M. I.

    2015-12-01

    An approach for true-amplitude seismic beam imaging of multicomponent seismic data in 2-D anisotropic elastic media is presented and discussed. Here, the recovered true-amplitude function is a scattering potential. This approach is a migration procedure based on the weighted summation of pre-stack data. The true-amplitude weights are computed by applying Gaussian beams (GBs). We shoot a pair of properly chosen GBs with a fixed dip and opening angles from the current imaging point towards an acquisition system. This pair of beams is used to compute a true-amplitude selective image of a rapid velocity variation. The total true-amplitude image is constructed by superimposing selective images computed for a range of available dip angles. The global regularity of the GBs allows one to disregard whether a ray field is regular or irregular. P- and S-wave GBs can be used to handle raw multicomponent data without separating the waves. The use of anisotropic GBs allows one to take into account the anisotropy of the background model.

  20. Constructing a Database from Multiple 2D Images for Camera Pose Estimation and Robot Localization

    NASA Technical Reports Server (NTRS)

    Wolf, Michael; Ansar, Adnan I.; Brennan, Shane; Clouse, Daniel S.; Padgett, Curtis W.

    2012-01-01

    The LMDB (Landmark Database) Builder software identifies persistent image features (landmarks) in a scene viewed multiple times and precisely estimates the landmarks' 3D world positions. The software receives as input multiple 2D images of approximately the same scene, along with an initial guess of the camera pose for each image, and a table of features matched pair-wise in each frame. LMDB Builder aggregates landmarks across an arbitrarily large collection of frames with matched features. Range data from stereo vision processing can also be passed in to improve the initial guess of the 3D point estimates. The LMDB Builder aggregates feature lists across all frames, manages the process of promoting selected features to landmarks, iteratively calculates the 3D landmark positions using the current camera pose estimates (via an optimal ray projection method), and then improves the camera pose estimates using the 3D landmark positions. Finally, it extracts image patches for each landmark from auto-selected key frames and constructs the landmark database. The landmark database can then be used to estimate future camera poses (and therefore localize a robotic vehicle that may be carrying the cameras) by matching current imagery to landmark database image patches and using the known 3D landmark positions to estimate the current pose.
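The ray-projection step described above can be read as least-squares triangulation: find the 3D point minimizing the summed squared perpendicular distance to each observation ray. The sketch below is a generic formulation of that step (the actual LMDB Builder method may differ).

```python
import numpy as np

def triangulate_point(origins, directions):
    """Least-squares 3D point closest to a bundle of rays, each given by a
    camera origin o and a unit direction d. Minimizes
    sum_i || (I - d_i d_i^T)(p - o_i) ||^2, which is linear in p."""
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for o, d in zip(origins, directions):
        d = np.asarray(d, float)
        d = d / np.linalg.norm(d)
        P = np.eye(3) - np.outer(d, d)   # projector orthogonal to the ray
        A += P
        b += P @ np.asarray(o, float)
    return np.linalg.solve(A, b)         # singular only if all rays parallel
```

Alternating this per-landmark solve with camera-pose refinement gives the iterative loop the abstract describes.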

  1. Invariant Feature Matching for Image Registration Application Based on New Dissimilarity of Spatial Features.

    PubMed

    Mousavi Kahaki, Seyed Mostafa; Nordin, Md Jan; Ashtari, Amir H; J Zahra, Sophia

    2016-01-01

    An invariant feature matching method is proposed as a spatially invariant feature matching approach. Deformation effects, such as affine and homography transformations, change the local information within the image and can result in ambiguous local information pertaining to image points. A new method based on dissimilarity values, which measure the dissimilarity of features along the path between them based on eigenvector properties, is proposed. Evidence shows that existing matching techniques using similarity metrics--such as normalized cross-correlation, the squared sum of intensity differences, and the correlation coefficient--are insufficient for achieving adequate results under different image deformations. Thus, new descriptor similarity metrics based on normalized eigenvector correlation and signal directional differences, which are robust under local variation of the image information, are proposed to establish an efficient feature matching technique. The method proposed in this study measures the dissimilarity in the signal frequency along the path between two features. Moreover, these dissimilarity values are accumulated in a 2D dissimilarity space, allowing accurate corresponding features to be extracted based on the cumulative space using a voting strategy. This method can be used in image registration applications, as it overcomes the limitations of the existing approaches. The results demonstrate that the proposed technique outperforms the other methods when evaluated using a standard dataset, in terms of precision-recall and corner correspondence. PMID:26985996

  3. An automated deformable image registration evaluation of confidence tool.

    PubMed

    Kirby, Neil; Chen, Josephine; Kim, Hojin; Morin, Olivier; Nie, Ke; Pouliot, Jean

    2016-04-21

    Deformable image registration (DIR) is a powerful tool for radiation oncology, but it can produce errors. Beyond this, DIR accuracy is not a fixed quantity and varies on a case-by-case basis. The purpose of this study is to explore the possibility of an automated program to create a patient- and voxel-specific evaluation of DIR accuracy. AUTODIRECT is a software tool that was developed to perform this evaluation for the application of a clinical DIR algorithm to a set of patient images. In brief, AUTODIRECT uses algorithms to generate deformations and applies them to these images (along with processing) to generate sets of test images, with known deformations that are similar to the actual ones and with realistic noise properties. The clinical DIR algorithm is applied to these test image sets (currently 4). From these tests, AUTODIRECT generates spatial and dose uncertainty estimates for each image voxel based on a Student's t distribution. In this study, four commercially available DIR algorithms were used to deform a dose distribution associated with a virtual pelvic phantom image set, and AUTODIRECT was used to generate dose uncertainty estimates for each deformation. The virtual phantom image set has a known ground-truth deformation, so the true dose-warping errors of the DIR algorithms were also known. AUTODIRECT predicted error patterns that closely matched the actual error spatial distribution. On average AUTODIRECT overestimated the magnitude of the dose errors, but tuning the AUTODIRECT algorithms should improve agreement. This proof-of-principle test demonstrates the potential for the AUTODIRECT algorithm as an empirical method to predict DIR errors. PMID:27025957

  5. Diffeomorphic demons: efficient non-parametric image registration.

    PubMed

    Vercauteren, Tom; Pennec, Xavier; Perchant, Aymeric; Ayache, Nicholas

    2009-03-01

    We propose an efficient non-parametric diffeomorphic image registration algorithm based on Thirion's demons algorithm. In the first part of this paper, we show that Thirion's demons algorithm can be seen as an optimization procedure on the entire space of displacement fields. We provide strong theoretical roots to the different variants of Thirion's demons algorithm. This analysis predicts a theoretical advantage for the symmetric forces variant of the demons algorithm. We show on controlled experiments that this advantage is confirmed in practice and yields a faster convergence. In the second part of this paper, we adapt the optimization procedure underlying the demons algorithm to a space of diffeomorphic transformations. In contrast to many diffeomorphic registration algorithms, our solution is computationally efficient since in practice it only replaces an addition of displacement fields by a few compositions. Our experiments show that in addition to being diffeomorphic, our algorithm provides results that are similar to the ones from the demons algorithm but with transformations that are much smoother and closer to the gold standard, available in controlled experiments, in terms of Jacobians. PMID:19041946
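
    As a rough sketch of the additive demons machinery the record above builds on (not the authors' diffeomorphic variant; the function name, the fixed-image-gradient force and the fluid-like smoothing are my assumptions), one demons iteration in NumPy/SciPy:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates

def demons_step(fixed, moving, disp, sigma=1.0):
    """One additive demons iteration (simplified sketch).

    disp has shape (2, H, W); the warp is warped(x) = moving(x + disp(x)).
    """
    H, W = fixed.shape
    gy, gx = np.mgrid[0:H, 0:W].astype(float)
    warped = map_coordinates(moving, [gy + disp[0], gx + disp[1]],
                             order=1, mode='nearest')
    diff = warped - fixed
    jy, jx = np.gradient(fixed)                  # fixed-image (Thirion) force
    denom = jy ** 2 + jx ** 2 + diff ** 2
    denom[denom == 0] = 1.0                      # avoid division by zero
    disp = disp + np.stack([-diff * jy, -diff * jx]) / denom
    # Gaussian smoothing regularizes the displacement field
    return np.stack([gaussian_filter(d, sigma) for d in disp])

# recover a 2-pixel vertical shift between two Gaussian blobs
yy, xx = np.mgrid[0:32, 0:32].astype(float)
fixed = np.exp(-((yy - 16) ** 2 + (xx - 16) ** 2) / (2 * 4.0 ** 2))
moving = np.exp(-((yy - 18) ** 2 + (xx - 16) ** 2) / (2 * 4.0 ** 2))
disp = np.zeros((2, 32, 32))
ssd0 = np.sum((moving - fixed) ** 2)
for _ in range(50):
    disp = demons_step(fixed, moving, disp)
warped = map_coordinates(moving, [yy + disp[0], xx + disp[1]],
                         order=1, mode='nearest')
ssd1 = np.sum((warped - fixed) ** 2)
```

    The paper's contribution is to replace the additive update `disp + update` by a composition with a (approximately exponentiated) update field, which keeps the transformation diffeomorphic at little extra cost.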

  6. Configurable automatic detection and registration of fiducial frames for device-to-image registration in MRI-guided prostate interventions.

    PubMed

    Tokuda, Junichi; Song, Sang-Eun; Tuncali, Kemal; Tempany, Clare; Hata, Nobuhiko

    2013-01-01

    We propose a novel automatic fiducial frame detection and registration method for device-to-image registration in MRI-guided prostate interventions. The proposed method does not require any manual selection of markers and can be applied to a variety of fiducial frames, which consist of multiple cylindrical MR-visible markers placed in different orientations. The key idea is that automatic extraction of linear features using a line filter is more robust than extraction of bright spots by thresholding; by applying a line set registration algorithm to the detected markers, the frame can be registered to the MRI. The method was capable of registering the fiducial frame to the MRI with an accuracy of 1.00 +/- 0.73 mm and 1.41 +/- 1.06 degrees in a phantom study, and was sufficiently robust to detect the fiducial frame in 98% of images acquired in clinical cases despite the existence of anatomical structures in the field of view.

  7. A survey among Brazilian thoracic surgeons about the use of preoperative 2D and 3D images

    PubMed Central

    Cipriano, Federico Enrique Garcia; Arcêncio, Livia; Dessotte, Lycio Umeda; Rodrigues, Alfredo José; Vicente, Walter Villela de Andrade

    2016-01-01

    Background: To describe how thoracic surgeons use 2D and 3D medical imaging for surgical planning, clinical practice and teaching in thoracic surgery, and to compare Brazilian thoracic surgeons' initial and final choices between 2D images and 3D models before and after they acquired theoretical knowledge on the generation, manipulation and interactive viewing of 3D images. Methods: A descriptive cross-sectional survey of Brazilian thoracic surgeons (members of the Brazilian Society of Thoracic Surgery), who responded to an online questionnaire on their computers or personal devices. Results: Of the 395 invitations distributed by email, 107 surgeons completed the survey. There was no statistically significant difference between 2D images and 3D models for the following purposes: diagnosis, assessment of the extent of disease, preoperative surgical planning, communication among physicians, resident training, and undergraduate medical education. Surgeons were asked which type of tomographic display they routinely use in clinical practice (2D, 3D, or combined 2D-3D) and which they preferred at the end of the questionnaire. Exclusive use of 2D images: initial choice 50.47%, final preference 14.02%. Use of 3D models in combination with 2D images: initial choice 48.60%, final preference 85.05%. The shift toward 3D models used together with 2D images was significant (P<0.0001). Conclusions: There is a lack of knowledge of 3D imaging and of its use and interactive manipulation in dedicated 3D applications, with a consequent lack of uniformity in surgical planning based on CT images. These findings confirm a change in thoracic surgeons' preference from 2D views toward 3D imaging technologies. PMID:27621874

  9. Rotationally symmetric triangulation sensor with integrated object imaging using only one 2D detector

    NASA Astrophysics Data System (ADS)

    Eckstein, Johannes; Lei, Wang; Becker, Jonathan; Jun, Gao; Ott, Peter

    2006-04-01

    In this paper a distance measurement sensor is introduced that combines two integrated optical systems, one for rotationally symmetric triangulation and one for imaging the object, while using only one 2D detector for both purposes. Rotationally symmetric triangulation, introduced in [1], eliminates some disadvantages of classical triangulation sensors, especially at steps or strong curvatures of the object, so that the measurement result no longer depends on the angular orientation of the sensor. This is achieved by imaging the scattered light from an illuminated object point to a centered, sharp ring on a low-cost area detector. The diameter of the ring is proportional to the distance of the object. The optical system consists of two off-axis aspheric reflecting surfaces and allows a second optical system to be integrated in order to capture images of the object on the same 2D detector. A mock-up was realized for the first time, consisting of the reflecting triangulation optics manufactured by diamond turning; a commercially available small lens system for imaging was mechanically integrated into the reflecting optics. Alternatively, some designs of retrofocus lens systems for larger fields of view were investigated. The optical designs allow the image of the object and the distance-measurement ring to be overlaid in the same plane, where a CCD detector is mounted, centered on the optical axis of both channels. A fast algorithm for the evaluation of the ring is implemented; the characteristic curve, i.e. ring diameter versus object distance, is very linear. To illuminate the object point for distance measurement, the beam of a red laser diode is reflected by a wavelength bandpass filter onto the axis of the optical system. Additionally, the surface of the object is illuminated by green LEDs located on the outside rim of the reflecting optics.

  10. Complexity and accuracy of image registration methods in SPECT-guided radiation therapy

    NASA Astrophysics Data System (ADS)

    Yin, L. S.; Tang, L.; Hamarneh, G.; Gill, B.; Celler, A.; Shcherbinin, S.; Fua, T. F.; Thompson, A.; Liu, M.; Duzenli, C.; Sheehan, F.; Moiseenko, V.

    2010-01-01

    The use of functional imaging in radiotherapy treatment (RT) planning requires accurate co-registration of functional imaging scans to CT scans. We evaluated six methods of image registration for use in SPECT-guided radiotherapy treatment planning. Methods varied in complexity from a 3D affine transform based on control points to diffeomorphic demons and level set non-rigid registration. Ten lung cancer patients underwent perfusion SPECT scans prior to their radiotherapy. CT images from a hybrid SPECT/CT scanner were registered to a planning CT, and then the same transformation was applied to the SPECT images. According to registration evaluation measures computed based on the intensity difference between the registered CT images or based on target registration error, non-rigid registrations provided a higher degree of accuracy than rigid methods. However, due to the irregularities in some of the obtained deformation fields, warping the SPECT using these fields may result in unacceptable changes to the SPECT intensity distribution that would preclude use in RT planning. Moreover, the differences between intensity histograms in the original and registered SPECT image sets were the largest for the diffeomorphic demons and level set methods. In conclusion, the use of intensity-based validation measures alone is not sufficient for SPECT/CT registration for RT treatment planning. It was also found that the proper evaluation of image registration requires the use of several accuracy metrics.

  11. Multiscale registration of planning CT and daily cone beam CT images for adaptive radiation therapy

    SciTech Connect

    Paquin, Dana; Levy, Doron; Xing Lei

    2009-01-15

    Adaptive radiation therapy (ART) is the incorporation of daily images in the radiotherapy treatment process so that the treatment plan can be evaluated and modified to maximize the amount of radiation dose to the tumor while minimizing the amount of radiation delivered to healthy tissue. Registration of planning images with daily images is thus an important component of ART. In this article, the authors report their research on multiscale registration of planning computed tomography (CT) images with daily cone beam CT (CBCT) images. The multiscale algorithm is based on the hierarchical multiscale image decomposition of E. Tadmor, S. Nezzar, and L. Vese [Multiscale Model. Simul. 2(4), pp. 554-579 (2004)]. Registration is achieved by decomposing the images to be registered into a series of scales using the (BV, L²) decomposition and initially registering the coarsest scales of the images using a landmark-based registration algorithm. The resulting transformation is then used as a starting point to deformably register the next finer scales with one another. This procedure is iterated at each stage using the transformation computed by the previous scale registration as the starting point for the current registration. The authors present the results of studies of rectum, head-neck, and prostate CT-CBCT registration, and validate their registration method quantitatively using synthetic results in which the exact transformations are known, and qualitatively using clinical deformations in which the exact results are not known.
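
    The coarse-to-fine strategy (register the coarsest scale first, then use that result to initialize the next scale) can be illustrated in its simplest form, pure translation with an SSD criterion; this toy uses brute-force search instead of the paper's (BV, L²) decomposition and deformable steps, and all names are hypothetical:

```python
import numpy as np
from scipy.ndimage import shift as nd_shift, zoom

def best_translation(fixed, moving, search=2):
    """Brute-force integer translation of `moving` minimizing SSD."""
    best, best_t = np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            ssd = np.sum((nd_shift(moving, (dy, dx), order=0) - fixed) ** 2)
            if ssd < best:
                best, best_t = ssd, (dy, dx)
    return np.array(best_t, dtype=float)

def multiscale_translation(fixed, moving, levels=2):
    """Coarse-to-fine: estimate at the coarsest scale, then refine."""
    t = np.zeros(2)
    for level in reversed(range(levels + 1)):
        f = zoom(fixed, 0.5 ** level, order=1)
        m = zoom(moving, 0.5 ** level, order=1)
        m = nd_shift(m, t, order=1)          # apply the current estimate
        t = t + best_translation(f, m)       # small local refinement
        if level > 0:
            t = 2.0 * t                      # propagate to the finer scale
    return t

# toy pair: a Gaussian blob and the same blob shifted by (6, -4) pixels
yy, xx = np.mgrid[0:64, 0:64].astype(float)
fixed = np.exp(-((yy - 32) ** 2 + (xx - 32) ** 2) / (2 * 5.0 ** 2))
moving = nd_shift(fixed, (6, -4), order=0)
t = multiscale_translation(fixed, moving)    # should undo the (6, -4) shift
```

    Only a small search window is needed at each level because the coarser level has already removed most of the misalignment, which is the main computational argument for multiscale registration.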

  12. A fully automatic image-to-world registration method for image-guided procedure with intraoperative imaging updates

    NASA Astrophysics Data System (ADS)

    Li, Senhu; Sarment, David

    2016-03-01

    Image-guided procedures with intraoperative imaging updates have made a big impact on minimally invasive surgery. A compact, mobile CT imaging device combined with a currently available commercial image-guided navigation system is a legitimate and cost-efficient solution for a typical operating room setup. However, the process of manual fiducial-based registration between image and physical spaces (image-to-world) is troublesome for surgeons during the procedure; it causes frequent interruptions and is the main source of registration errors. In this study, we developed a novel method to eliminate the manual registration process. Instead of using a probe to manually localize the fiducials during surgery, a tracking plate with known fiducial positions relative to the reference coordinates is designed and fabricated through 3D printing. The workflow and feasibility of this method have been studied through a phantom experiment.
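
    The manual step being replaced is, at its core, a rigid point-set alignment between detected fiducials and their known plate coordinates. A standard SVD-based (Kabsch) least-squares solution, shown as a generic sketch rather than the authors' implementation:

```python
import numpy as np

def rigid_register(src, dst):
    """Least-squares rigid transform (R, t) with dst ≈ R @ src + t."""
    sc, dc = src.mean(axis=0), dst.mean(axis=0)
    H = (src - sc).T @ (dst - dc)            # cross-covariance of the sets
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dc - R @ sc
    return R, t

# synthetic fiducials: rotate 30 degrees about z, then translate
rng = np.random.default_rng(1)
src = rng.random((6, 3))
a = np.pi / 6
R_true = np.array([[np.cos(a), -np.sin(a), 0.0],
                   [np.sin(a),  np.cos(a), 0.0],
                   [0.0,        0.0,       1.0]])
t_true = np.array([5.0, -2.0, 1.0])
dst = src @ R_true.T + t_true
R, t = rigid_register(src, dst)
```

    With noise-free correspondences the true rotation and translation are recovered exactly; with noisy fiducial detections the same formula gives the least-squares optimum.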

  13. Hierarchical Multi-modal Image Registration by Learning Common Feature Representations

    PubMed Central

    Ge, Hongkun; Wu, Guorong; Wang, Li; Gao, Yaozong

    2016-01-01

    Mutual information (MI) has been widely used for registering images with different modalities. Since most inter-modality registration methods estimate deformations at a local scale while optimizing MI over the entire image, the estimated deformations for certain structures can be dominated by the surrounding unrelated structures. Also, since there often exist multiple structures in each image, the intensity correlation between two images can be complex and highly nonlinear, which makes global MI unable to precisely guide local image deformation. To solve these issues, we propose a hierarchical inter-modality registration method based on robust feature matching. Specifically, we first select a small set of key points at salient image locations to drive the entire image registration. Since the original image features computed from different modalities are often difficult to compare directly, we propose to learn their common feature representations by projecting them from their native feature spaces to a common space, where the correlations between corresponding features are maximized. Due to the large heterogeneity between the two high-dimensional feature distributions, we employ Kernel CCA (Canonical Correlation Analysis) to reveal such non-linear feature mappings. Then, our registration method can take advantage of the learned common features to reliably establish correspondences for key points from different modality images by robust feature matching. As more and more key points take part in the registration, our hierarchical feature-based image registration method can efficiently estimate the deformation pathway between two inter-modality images in a global-to-local manner. We have applied our proposed registration method to prostate CT and MR images, as well as infant MR brain images in the first year of life. Experimental results show that our method can achieve more accurate registration results, compared to other state-of-the-art image registration methods.
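
    The global MI criterion discussed above is typically estimated from a joint intensity histogram; a minimal sketch (the bin count and all names are my choices) that also illustrates why MI tolerates nonlinear intensity relationships between modalities:

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """MI in nats between two equally-shaped images, via a joint histogram."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of a
    py = pxy.sum(axis=0, keepdims=True)   # marginal of b
    nz = pxy > 0                          # avoid log(0) terms
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px * py)[nz])))

rng = np.random.default_rng(2)
a = rng.random((64, 64))
b = np.exp(-3 * a)            # nonlinear but deterministic "other modality"
noise = rng.random((64, 64))  # statistically unrelated image
mi_related = mutual_information(a, b)
mi_unrelated = mutual_information(a, noise)
```

    The nonlinearly remapped image still shares high MI with the original, whereas an unrelated image scores near zero, which is exactly the property that makes MI attractive for inter-modality registration.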

  14. Subspace-Based Holistic Registration for Low-Resolution Facial Images

    NASA Astrophysics Data System (ADS)

    Boom, B. J.; Spreeuwers, L. J.; Veldhuis, R. N. J.

    2010-12-01

    Subspace-based holistic registration is introduced as an alternative to landmark-based face registration, which has a poor performance on low-resolution images, as obtained in camera surveillance applications. The proposed registration method finds the alignment by maximizing the similarity score between a probe and a gallery image. We use a novel probabilistic framework for both user-independent and user-specific face registration. The similarity is calculated using the probability that the face image is correctly aligned in a face subspace, but we additionally take into account the probability that the face is misaligned, based on the residual error in the dimensions perpendicular to the face subspace. We perform extensive experiments on the FRGCv2 database to evaluate the impact that the face registration methods have on face recognition. Subspace-based holistic registration on low-resolution images can improve face recognition in comparison with landmark-based registration on high-resolution images. The performance of the tested face recognition methods after subspace-based holistic registration on a low-resolution version of the FRGC database is similar to that after manual registration.
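
    The "residual error in the dimensions perpendicular to the face subspace" can be illustrated with plain PCA; this sketch is my simplification of the probabilistic framework, with hypothetical names:

```python
import numpy as np

def fit_subspace(X, k):
    """Mean and top-k PCA basis (rows) of row-vector training data X."""
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, Vt[:k]

def residual_energy(x, mu, basis):
    """Squared norm of x's component perpendicular to the subspace."""
    c = basis @ (x - mu)                       # in-subspace coefficients
    return float(np.sum((x - mu - basis.T @ c) ** 2))

rng = np.random.default_rng(3)
W = rng.standard_normal((3, 20))               # true generative directions
coeffs = rng.standard_normal((50, 3))
X = 0.5 + coeffs @ W                           # training data in a 3-D subspace
mu, B = fit_subspace(X, k=3)
inside = 0.5 + np.array([1.0, -2.0, 0.5]) @ W  # new in-subspace sample
outside = inside + rng.standard_normal(20)     # perturbed off-subspace sample
r_in = residual_energy(inside, mu, B)
r_out = residual_energy(outside, mu, B)
```

    A well-aligned face projects almost entirely into the learned subspace (tiny residual), while a misaligned or non-face patch leaves a large perpendicular residual, which is the signal the paper folds into its alignment probability.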

  15. List-mode likelihood: EM algorithm and image quality estimation demonstrated on 2-D PET.

    PubMed

    Parra, L; Barrett, H H

    1998-04-01

    Using a theory of list-mode maximum-likelihood (ML) source reconstruction presented recently by Barrett et al., this paper formulates a corresponding expectation-maximization (EM) algorithm, as well as a method for estimating noise properties at the ML estimate. List-mode ML is of interest in cases where the dimensionality of the measurement space impedes a binning of the measurement data. It can be advantageous in cases where a better forward model can be obtained by including more measurement coordinates provided by a given detector. Different figures of merit for the detector performance can be computed from the Fisher information matrix (FIM). This paper uses the observed FIM, which requires a single data set, thus avoiding costly ensemble statistics. The proposed techniques are demonstrated for an idealized two-dimensional (2-D) positron emission tomography (PET) detector. We compute from simulation data the improved image quality obtained by including the time of flight of the coincident quanta.
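
    List-mode ML-EM keeps one system-matrix row per detected event instead of binned projection data; a toy sketch (the random toy system and all names are assumptions, not the paper's PET model):

```python
import numpy as np

def listmode_mlem(event_rows, sensitivity, n_iter=20):
    """List-mode ML-EM iteration.

    event_rows: (n_events, n_voxels) detection probabilities a_ij,
                one row per detected event.
    sensitivity: (n_voxels,) overall detection probability per voxel.
    """
    x = np.ones(event_rows.shape[1])
    for _ in range(n_iter):
        expected = event_rows @ x                       # forward project events
        x = x / sensitivity * (event_rows.T @ (1.0 / expected))
    return x

rng = np.random.default_rng(4)
n_events, n_voxels = 200, 8
event_rows = rng.random((n_events, n_voxels)) + 0.01    # toy system rows
sensitivity = event_rows.mean(axis=0) * n_events / 10.0
x = listmode_mlem(event_rows, sensitivity)
```

    A useful sanity check: after every update the sensitivity-weighted total activity, sum of s_j x_j, equals the number of detected events, a standard EM consistency property.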

  16. Fast interactive registration tool for reproducible multi-spectral imaging for wound healing and treatment evaluation

    NASA Astrophysics Data System (ADS)

    Noordmans, Herke J.; de Roode, Rowland; Verdaasdonk, Rudolf

    2007-02-01

    Multi-spectral images of human tissue taken in vivo often contain image alignment problems, as patients have difficulty in retaining their posture during the acquisition time of 20 seconds. Previously, it has been attempted to correct motion errors with image registration software developed for MR or CT data, but these algorithms have proven to be too slow and erroneous for practical use with multi-spectral images. A new software package has been developed which allows the user to play a decisive role in the registration process, as the user can monitor the progress of the registration continuously and force it in the right direction when it starts to fail. The software efficiently exploits video card hardware to gain speed and to provide a perfect subvoxel correspondence between the registration field and the display. An 8-bit graphics card was used to efficiently register and resample 12-bit images using the hardware interpolation modes present on the graphics card. To show the feasibility of this new registration process, the software was applied in clinical practice to evaluate the dosimetry for psoriasis and KTP laser treatment. The microscopic differences between images of normal skin and skin exposed to UV light proved that an affine registration step including zooming and slanting is critical for a subsequent elastic match to succeed. The combination of user-interactive registration software with optimal use of the potential of PC video card hardware greatly improves the speed of multi-spectral image registration.

  17. Knee osteoarthritis image registration: data from the Osteoarthritis Initiative

    NASA Astrophysics Data System (ADS)

    Galván-Tejada, Jorge I.; Celaya-Padilla, José M.; Treviño, Victor; Tamez-Peña, José G.

    2015-03-01

    Knee osteoarthritis is a very common disease; in early stages, changes in joint structures appear, and some of the most common signs are formation of osteophytes, cartilage degradation and joint space reduction, among others. The Kellgren-Lawrence grading scale, based on a joint space reduction measurement, is a very extensively used tool to assess radiological OA in knee x-ray images. Based on information obtained from these assessments, the objective of this work is to correlate the Kellgren-Lawrence score with the bilateral asymmetry between knees. Using public data from the Osteoarthritis Initiative (OAI), a set of images with different Kellgren-Lawrence scores was used to determine the relationship between the Kellgren-Lawrence score and bilateral asymmetry. In order to measure the asymmetry between the knees, the right knee was registered to match the left knee; then a series of similarity metrics (mutual information, correlation, and mean squared error) were computed to correlate the deformation (mismatch) between the knees with the Kellgren-Lawrence score. Radiological information was evaluated and scored by OAI radiologist groups. The results of the study suggest an association between the radiological Kellgren-Lawrence score and image registration metrics: mutual information and correlation are higher in the early stages, and mean squared error is higher in advanced stages. This association can be helpful in developing a computer-aided grading tool.

  18. GOES I/M image navigation and registration

    NASA Technical Reports Server (NTRS)

    Fiorello, J. L., Jr.; Oh, I. H.; Kelly, K. A.; Ranne, L.

    1989-01-01

    Image Navigation and Registration (INR) is the system that will be used on future Geostationary Operational Environmental Satellite (GOES) missions to locate and register radiometric imagery data. It consists of a semiclosed loop system with a ground-based segment that generates coefficients to perform image motion compensation (IMC). The IMC coefficients are uplinked to the satellite-based segment, where they are used to adjust the displacement of the imagery data due to movement of the imaging instrument line-of-sight. The flight dynamics aspects of the INR system are discussed in terms of the attitude and orbit determination, attitude pointing, and attitude and orbit control needed to perform INR. The modeling used in the determination of orbit and attitude is discussed, along with the method of on-orbit control used in the INR system and various factors that affect stability. Also discussed are potential error sources inherent in the INR system and the operational methods of compensating for these errors.

  19. Quantizing calcification in the lumbar aorta on 2-D lateral x-ray images

    NASA Astrophysics Data System (ADS)

    Conrad-Hansen, Lars A.; Lauze, Francois; Tanko, Laszlo B.; Nielsen, Mads

    2005-04-01

    In this paper we seek to improve upon the standard method of assessing the degree of calcification in the lumbar aorta, which is commonly used on lateral 2-D x-rays. The necessity for improvement arises from the fact that the existing method cannot measure subtle progressions in plaque development; neither is it possible to express the density of individual plaques. Both of these qualities would be desirable to assess, since they are the key to progression studies as well as to testing the effect of drugs in longitudinal studies. Our approach is based on inpainting, a technique used in image restoration as well as in the postprocessing of film. In this study we discuss the potential implications of total variation inpainting for characterizing aortic calcification.
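
    Total variation inpainting evolves the missing pixels under the TV descent (curvature) flow while known pixels stay fixed; a minimal explicit-descent sketch, with the step size, smoothing parameter and names chosen by me:

```python
import numpy as np

def tv_inpaint(image, mask, n_iter=500, dt=0.05, eps=0.1):
    """Gradient descent on (smoothed) total variation inside the mask.

    mask: boolean array, True where pixels are missing; eps smooths the
    TV norm so the descent direction stays defined in flat regions.
    """
    u = image.copy()
    u[mask] = image[~mask].mean()            # crude initial fill
    for _ in range(n_iter):
        uy, ux = np.gradient(u)
        mag = np.sqrt(ux ** 2 + uy ** 2 + eps ** 2)
        # divergence of the normalized gradient = TV descent direction
        div = np.gradient(uy / mag, axis=0) + np.gradient(ux / mag, axis=1)
        u[mask] += dt * div[mask]
        u[~mask] = image[~mask]              # known pixels stay fixed
    return u

# linear ramp with a rectangular hole off to one side
ramp = np.tile(np.linspace(0.0, 1.0, 32), (32, 1))
mask = np.zeros((32, 32), dtype=bool)
mask[12:20, 4:10] = True
filled = tv_inpaint(ramp, mask)
err0 = np.mean(np.abs(ramp[~mask].mean() - ramp[mask]))  # crude-fill error
err1 = np.mean(np.abs(filled[mask] - ramp[mask]))        # after inpainting
```

    On this ramp the flow pulls the hole toward values consistent with its boundary, greatly reducing the error of the crude constant fill; the paper's interest is the converse use, comparing an inpainted "calcium-free" aorta with the original image.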

  20. Time-resolved diffusion tomographic 2D and 3D imaging in highly scattering turbid media

    NASA Technical Reports Server (NTRS)

    Alfano, Robert R. (Inventor); Cai, Wei (Inventor); Liu, Feng (Inventor); Lax, Melvin (Inventor); Das, Bidyut B. (Inventor)

    1999-01-01

    A method for imaging objects in highly scattering turbid media. According to one embodiment of the invention, the method involves using a plurality of intersecting source/detector sets and time-resolving equipment to generate a plurality of time-resolved intensity curves for the diffusive component of light emergent from the medium. For each of the curves, the intensities at a plurality of times are then inputted into an inverse reconstruction algorithm to form an image of the medium, wherein W is a matrix relating output at source and detector positions r_s and r_d, at time t, to position r; Λ is a regularization matrix, chosen for convenience to be diagonal, but selected in a way related to the ratio of the noise to the fluctuations in the absorption (or diffusion) X_j that we are trying to determine: Λ_ij = λ_j δ_ij, with each λ_j given by the ratio of the noise to the fluctuations ⟨ΔX_j ΔX_j⟩; Y is the data collected at the detectors; and X^k is the kth iterate toward the desired absorption information. An algorithm, which combines a two-dimensional (2D) matrix inversion with a one-dimensional (1D) Fourier transform inversion, is used to obtain images of three-dimensional hidden objects in turbid scattering media.
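
    The diagonal regularization Λ described above, weighting noise against expected fluctuations, has the structure of Tikhonov-regularized least squares; a one-shot sketch (the patent's scheme is iterative, and all names here are hypothetical):

```python
import numpy as np

def regularized_solve(W, Y, lam):
    """Tikhonov step: minimize ||W X - Y||^2 + X^T diag(lam) X."""
    return np.linalg.solve(W.T @ W + np.diag(lam), W.T @ Y)

rng = np.random.default_rng(5)
W = rng.standard_normal((40, 10))              # toy forward model
X_true = rng.standard_normal(10)               # true absorption fluctuations
Y = W @ X_true + 0.01 * rng.standard_normal(40)  # noisy detector data
X = regularized_solve(W, Y, np.full(10, 1e-3))
```

    With small noise and mild regularization the recovered X is close to the true fluctuations; larger λ_j values suppress poorly determined components at the cost of bias, which is the trade-off the diagonal Λ encodes.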

  1. Time-resolved diffusion tomographic 2D and 3D imaging in highly scattering turbid media

    NASA Technical Reports Server (NTRS)

    Alfano, Robert R. (Inventor); Cai, Wei (Inventor); Gayen, Swapan K. (Inventor)

    2000-01-01

    A method for imaging objects in highly scattering turbid media. According to one embodiment of the invention, the method involves using a plurality of intersecting source/detector sets and time-resolving equipment to generate a plurality of time-resolved intensity curves for the diffusive component of light emergent from the medium. For each of the curves, the intensities at a plurality of times are then inputted into an inverse reconstruction algorithm to form an image of the medium, wherein W is a matrix relating output at source and detector positions r_s and r_d, at time t, to position r; Λ is a regularization matrix, chosen for convenience to be diagonal, but selected in a way related to the ratio of the noise to the fluctuations in the absorption (or diffusion) X_j that we are trying to determine: Λ_ij = λ_j δ_ij, with each λ_j given by the ratio of the noise to the fluctuations ⟨ΔX_j ΔX_j⟩; Y is the data collected at the detectors; and X^k is the kth iterate toward the desired absorption information. An algorithm, which combines a two-dimensional (2D) matrix inversion with a one-dimensional (1D) Fourier transform inversion, is used to obtain images of three-dimensional hidden objects in turbid scattering media.

  2. Automatic ultrasound image enhancement for 2D semi-automatic breast-lesion segmentation

    NASA Astrophysics Data System (ADS)

    Lu, Kongkuo; Hall, Christopher S.

    2014-03-01

    Breast cancer is the fastest growing cancer, accounting for 29% of new cases in 2012, and the second leading cause of cancer death among women in the United States and worldwide. Ultrasound (US) has been an indispensable tool for breast cancer detection/diagnosis and treatment. In computer-aided assistance, lesion segmentation is a preliminary but vital step, yet the task is quite challenging in US images due to imaging artifacts that complicate detection and measurement of suspect lesions. Lesions usually present with poor boundary features and vary significantly in size, shape, and intensity distribution between cases. Automatic methods are highly application-dependent, while manual tracing methods are extremely time consuming and subject to a great deal of intra- and inter-observer variability. Semi-automatic approaches are designed to balance the advantages and drawbacks of the automatic and manual methods. However, considerable user interaction might be necessary to ensure reasonable segmentation for a wide range of lesions. This work proposes an automatic enhancement approach that improves the boundary-searching ability of the live-wire method, reducing the necessary user interaction while maintaining segmentation performance. Based on segmentation results for 50 2D breast lesions in US images, less user interaction is required to achieve the desired accuracy, i.e. < 80%, when auto-enhancement is applied for live-wire segmentation.
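    At its core, live wire finds a minimum-cost path through a pixel graph between user-placed seed points. The sketch below shows only that core (Dijkstra over a given cost grid with 8-connectivity); in the actual method the costs are derived from gradient and edge features, which is exactly what the proposed enhancement improves. The function name and interface are illustrative assumptions.

```python
import heapq
import numpy as np

def live_wire_path(cost, start, end):
    """Dijkstra shortest path over a pixel-cost grid (live-wire core).

    `cost` is a 2-D array of per-pixel traversal costs; `start`/`end`
    are (row, col) tuples.  Returns the minimum-cost pixel path.
    """
    h, w = cost.shape
    dist = np.full((h, w), np.inf)
    prev = {}
    dist[start] = cost[start]
    pq = [(cost[start], start)]
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if (r, c) == end:
            break
        if d > dist[r, c]:
            continue                      # stale queue entry
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                rr, cc = r + dr, c + dc
                if (dr or dc) and 0 <= rr < h and 0 <= cc < w:
                    nd = d + cost[rr, cc]
                    if nd < dist[rr, cc]:
                        dist[rr, cc] = nd
                        prev[(rr, cc)] = (r, c)
                        heapq.heappush(pq, (nd, (rr, cc)))
    path, node = [], end                  # walk predecessors back to the seed
    while node != start:
        path.append(node)
        node = prev[node]
    path.append(start)
    return path[::-1]
```

    The better the enhanced image concentrates low costs on the true lesion boundary, the fewer seed points the user must place.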

  3. Extending Ripley’s K-Function to Quantify Aggregation in 2-D Grayscale Images

    PubMed Central

    Amgad, Mohamed; Itoh, Anri; Tsui, Marco Man Kin

    2015-01-01

    In this work, we describe the extension of Ripley’s K-function to allow for overlapping events at very high event densities. We show that problematic edge effects introduce significant bias to the function at very high densities and small radii, and propose a simple correction method that successfully restores the function’s centralization. Using simulations of homogeneous Poisson distributions of events, as well as simulations of event clustering under different conditions, we investigate various aspects of the function, including its shape-dependence and correspondence between true cluster radius and radius at which the K-function is maximized. Furthermore, we validate the utility of the function in quantifying clustering in 2-D grayscale images using three modalities: (i) Simulations of particle clustering; (ii) Experimental co-expression of soluble and diffuse protein at varying ratios; (iii) Quantifying chromatin clustering in the nuclei of wt and crwn1 crwn2 mutant Arabidopsis plant cells, using a previously-published image dataset. Overall, our work shows that Ripley’s K-function is a valid abstract statistical measure whose utility extends beyond the quantification of clustering of non-overlapping events. Potential benefits of this work include the quantification of protein and chromatin aggregation in fluorescent microscopic images. Furthermore, this function has the potential to become one of various abstract texture descriptors that are utilized in computer-assisted diagnostics in anatomic pathology and diagnostic radiology. PMID:26636680
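    The paper's contributions are the edge correction and the extension to overlapping/grayscale events; the baseline it builds on is the classical K-function estimator for a 2-D point pattern, which can be sketched as follows (the naive, edge-uncorrected form):

```python
import numpy as np

def ripley_k(points, r, area):
    """Naive (edge-uncorrected) Ripley's K estimator for 2-D points.

    K(r) = area / (n * (n - 1)) * (number of ordered pairs within distance r).
    For a homogeneous Poisson process, K(r) is approximately pi * r^2;
    values above that indicate clustering at scale r.
    """
    pts = np.asarray(points, dtype=float)
    n = len(pts)
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    pairs = np.count_nonzero((d <= r) & (d > 0))   # exclude self-pairs
    return area * pairs / (n * (n - 1))
```

    The grayscale extension described in the paper effectively weights events by pixel intensity rather than counting discrete, non-overlapping points.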

  4. Directional adaptive deformable models for segmentation with application to 2D and 3D medical images

    NASA Astrophysics Data System (ADS)

    Rougon, Nicolas F.; Preteux, Francoise J.

    1993-09-01

    In this paper, we address the problem of adapting the functions controlling the material properties of 2D snakes, and show how introducing oriented smoothness constraints results in a novel class of active contour models for segmentation which extends standard isotropic inhomogeneous membrane/thin-plate stabilizers. These constraints, expressed as adaptive L2 matrix norms, are defined by two 2nd-order symmetric and positive definite tensors which are invariant with respect to rigid motions in the image plane. These tensors, equivalent to directional adaptive stretching and bending densities, are quadratic with respect to the 1st- and 2nd-order derivatives of the image intensity, respectively. A representation theorem specifying their canonical form is established and a geometrical interpretation of their effects is developed. Within this framework, it is shown that, by achieving directional control of regularization, such non-isotropic constraints consistently relate the differential properties (metric and curvature) of the deformable model to those of the underlying intensity surface, yielding a satisfying preservation of image contour characteristics.
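    In the standard snake formulation that the abstract generalizes, an anisotropic membrane/thin-plate energy can be written as below; the symbols A and B for the stretching and bending tensors are assumed notation, not taken from the paper:

```latex
E(v) \;=\; \int_{0}^{1}
  \Big[\, v_s(s)^{\mathsf T}\, \mathbf{A}(s)\, v_s(s)
        \;+\; v_{ss}(s)^{\mathsf T}\, \mathbf{B}(s)\, v_{ss}(s) \,\Big]\, ds
```

    With A(s) = α(s)I and B(s) = β(s)I this reduces to the classical inhomogeneous isotropic snake energy; the directional behavior described in the abstract enters through the eigenvectors and eigenvalues of A and B.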

  5. Extending Ripley's K-Function to Quantify Aggregation in 2-D Grayscale Images.

    PubMed

    Amgad, Mohamed; Itoh, Anri; Tsui, Marco Man Kin

    2015-01-01

    In this work, we describe the extension of Ripley's K-function to allow for overlapping events at very high event densities. We show that problematic edge effects introduce significant bias to the function at very high densities and small radii, and propose a simple correction method that successfully restores the function's centralization. Using simulations of homogeneous Poisson distributions of events, as well as simulations of event clustering under different conditions, we investigate various aspects of the function, including its shape-dependence and correspondence between true cluster radius and radius at which the K-function is maximized. Furthermore, we validate the utility of the function in quantifying clustering in 2-D grayscale images using three modalities: (i) Simulations of particle clustering; (ii) Experimental co-expression of soluble and diffuse protein at varying ratios; (iii) Quantifying chromatin clustering in the nuclei of wt and crwn1 crwn2 mutant Arabidopsis plant cells, using a previously-published image dataset. Overall, our work shows that Ripley's K-function is a valid abstract statistical measure whose utility extends beyond the quantification of clustering of non-overlapping events. Potential benefits of this work include the quantification of protein and chromatin aggregation in fluorescent microscopic images. Furthermore, this function has the potential to become one of various abstract texture descriptors that are utilized in computer-assisted diagnostics in anatomic pathology and diagnostic radiology. PMID:26636680

  6. Spatial anatomic knowledge for 2-D interactive medical image segmentation and matching.

    PubMed

    Brinkley, J F

    1991-01-01

    A representation is described for two-dimensional anatomic shapes which can be described by single-valued distortions of a circle. The representation, called a radial contour model, is both generic, in that it captures the expected shape as well as the range of variation for an anatomic shape class, and flexible, in that the model can deform to fit an individual instance of the shape class. The model is implemented in a program called SCANNER (version 0.61) for 2-D interactive image segmentation and matching. An initial evaluation was performed using 7 shape models learned from a training set of 93 contours, and a control model containing no shape knowledge. Evaluation using 60 additional contours showed that in general the shape knowledge should reduce interactive segmentation time by a factor of two over the control, and that for specific shapes such as the eye, the improvement is much greater. A matching function was also devised which showed that the radial contour model should allow diagnosis of subtle shape changes. These results suggest that the use of spatial anatomic knowledge, when combined with good interactive tools, can help to alleviate the segmentation bottleneck in medical imaging. The models, when extended to more complex shapes, will form the spatial component of a knowledge base of anatomy that could have many uses in addition to image segmentation.
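    The core idea of the radial contour model is that a shape is encoded as single-valued radial distortions of a circle. A simplified sketch of that encoding (centroid, sampling scheme, and function name are illustrative assumptions, not the SCANNER implementation):

```python
import numpy as np

def radial_contour(boundary_pts, n_angles=36):
    """Encode a closed 2-D contour as radii sampled at fixed angles.

    Assumes the contour is star-shaped about its centroid, i.e. the
    radius is a single-valued function of angle.
    """
    pts = np.asarray(boundary_pts, dtype=float)
    c = pts.mean(axis=0)                                  # contour centroid
    ang = np.arctan2(pts[:, 1] - c[1], pts[:, 0] - c[0])
    rad = np.linalg.norm(pts - c, axis=1)
    order = np.argsort(ang)
    targets = np.linspace(-np.pi, np.pi, n_angles, endpoint=False)
    # interpolate radius as a periodic function of angle
    return np.interp(targets, ang[order], rad[order], period=2 * np.pi)
```

    A shape class can then be summarized by the per-angle mean and variance of these radius vectors over a training set, which is the "generic" aspect of the model.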

  7. INVITED REVIEW--IMAGE REGISTRATION IN VETERINARY RADIATION ONCOLOGY: INDICATIONS, IMPLICATIONS, AND FUTURE ADVANCES.

    PubMed

    Feng, Yang; Lawrence, Jessica; Cheng, Kun; Montgomery, Dean; Forrest, Lisa; Mclaren, Duncan B; McLaughlin, Stephen; Argyle, David J; Nailon, William H

    2016-01-01

    The field of veterinary radiation therapy (RT) has gained substantial momentum in recent decades with significant advances in conformal treatment planning, image-guided radiation therapy (IGRT), and intensity-modulated (IMRT) techniques. At the root of these advancements lie improvements in tumor imaging, image alignment (registration), target volume delineation, and identification of critical structures. Image registration has been widely used to combine information from multimodality images such as computerized tomography (CT), magnetic resonance imaging (MRI), and positron emission tomography (PET) to improve the accuracy of radiation delivery and reliably identify tumor-bearing areas. Many different techniques have been applied in image registration. This review provides an overview of medical image registration in RT and its applications in veterinary oncology. A summary of the most commonly used approaches in human and veterinary medicine is presented along with their current use in IGRT and adaptive radiation therapy (ART). It is important to realize that registration does not guarantee that target volumes, such as the gross tumor volume (GTV), are correctly identified on the image being registered, as limitations unique to registration algorithms exist. Research involving novel registration frameworks for automatic segmentation of tumor volumes is ongoing and comparative oncology programs offer a unique opportunity to test the efficacy of proposed algorithms. PMID:26777133

  8. INVITED REVIEW--IMAGE REGISTRATION IN VETERINARY RADIATION ONCOLOGY: INDICATIONS, IMPLICATIONS, AND FUTURE ADVANCES.

    PubMed

    Feng, Yang; Lawrence, Jessica; Cheng, Kun; Montgomery, Dean; Forrest, Lisa; Mclaren, Duncan B; McLaughlin, Stephen; Argyle, David J; Nailon, William H

    2016-01-01

    The field of veterinary radiation therapy (RT) has gained substantial momentum in recent decades with significant advances in conformal treatment planning, image-guided radiation therapy (IGRT), and intensity-modulated (IMRT) techniques. At the root of these advancements lie improvements in tumor imaging, image alignment (registration), target volume delineation, and identification of critical structures. Image registration has been widely used to combine information from multimodality images such as computerized tomography (CT), magnetic resonance imaging (MRI), and positron emission tomography (PET) to improve the accuracy of radiation delivery and reliably identify tumor-bearing areas. Many different techniques have been applied in image registration. This review provides an overview of medical image registration in RT and its applications in veterinary oncology. A summary of the most commonly used approaches in human and veterinary medicine is presented along with their current use in IGRT and adaptive radiation therapy (ART). It is important to realize that registration does not guarantee that target volumes, such as the gross tumor volume (GTV), are correctly identified on the image being registered, as limitations unique to registration algorithms exist. Research involving novel registration frameworks for automatic segmentation of tumor volumes is ongoing and comparative oncology programs offer a unique opportunity to test the efficacy of proposed algorithms.

  9. Clinical applications of 2D and 3D CT imaging of the airways--a review.

    PubMed

    Salvolini, L; Bichi Secchi, E; Costarelli, L; De Nicola, M

    2000-04-01

    Hardware and software evolution has broadened the possibilities of 2D and 3D reformatting of spiral CT and MR data sets. In the study of the thorax, the intrinsic benefits of volumetric CT scanning and the better quality of reconstructed images offer the possibility of applying additional rendering techniques in everyday clinical practice. Considering the large number and redundancy of post-processing imaging techniques that can be applied to raw CT section data, it is necessary to define precisely the clinical applications of each of them, by careful evaluation of their benefits and possible pitfalls in each clinical setting. In the diagnostic evaluation of pathological processes affecting the airways, a huge number of thin sections is necessary for detailed appraisal and has to be evaluated, and the information must then be transferred to referring clinicians. Additional rendering can make image evaluation and data transfer easier, faster, and more effective. In the study of the central airways, additional rendering can be of interest for precise evaluation of the length, morphology, and degree of stenoses. It may help in depicting exactly the locoregional extent of central tumours by better display of relations with bronchovascular interfaces and can increase CT/bronchoscopy synergy. It may allow closer radiotherapy planning and better depiction of air collections, and, finally, it could ease panoramic evaluation of the results of dynamic or functional studies, made possible by the increased speed of spiral scanning.
When applied to the evaluation of peripheral airways, as a complement to conventional HRCT scans, high-resolution volumetric CT, by projection slabs applied to target areas of interest, can better depict the profusion and extension of affected bronchial segments in bronchiectasis, influence the choice of different approaches for tissue sampling by better evaluation of the relations of lung nodules with the airways, or help

  10. Co-Registration Airborne LIDAR Point Cloud Data and Synchronous Digital Image Registration Based on Combined Adjustment

    NASA Astrophysics Data System (ADS)

    Yang, Z. H.; Zhang, Y. S.; Zheng, T.; Lai, W. B.; Zou, Z. R.; Zou, B.

    2016-06-01

    Aiming at the problem of co-registering airborne laser point cloud data with synchronous digital images, this paper proposes a registration method based on combined adjustment. By integrating tie points and point cloud data with elevation-constraint pseudo-observations, and using the principle of least-squares adjustment to solve for corrections to the exterior orientation elements of each image, high-precision registration results can be obtained. To ensure the reliability of the tie points and the effectiveness of the pseudo-observations, the paper proposes a point-cloud-constrained SIFT matching and optimization method, which ensures that the tie points are located in flat terrain areas. In experiments with airborne laser point cloud data and its synchronous digital images, there is about 43 pixels of error in image space when the original POS data are used. If only the bore-sight of the POS system is considered, 1.3 pixels of error remain in image space. The proposed method treats the corrections of the exterior orientation elements of each image as unknowns and reduces the error to 0.15 pixels.

  11. Absorption and scattering 2-D volcano images from numerically calculated space-weighting functions

    NASA Astrophysics Data System (ADS)

    Del Pezzo, Edoardo; Ibañez, Jesus; Prudencio, Janire; Bianco, Francesca; De Siena, Luca

    2016-08-01

    Short-period, small-magnitude seismograms mainly comprise scattered waves in the form of coda waves (the tail of the seismogram, starting after the S waves and ending when noise prevails), spanning more than 70 per cent of the whole seismogram duration. The corresponding coda envelopes provide important information about earth inhomogeneity, which can be stochastically modeled in terms of the distribution of scatterers in a random medium. In suitable experimental conditions (i.e. high earth heterogeneity), either the two parameters describing heterogeneity (the scattering coefficient) and intrinsic energy dissipation (the intrinsic attenuation coefficient), or a combination of them (extinction length and seismic albedo), can be used to image Earth structures. Once a set of such parameter couples has been measured in a given area for a number of sources and receivers, imaging their spatial distribution with standard methods is straightforward. However, as for finite-frequency and full-waveform tomography, the essential problem for correct imaging is the determination of the weighting function describing the spatial sensitivity of observable data to scattering and absorption anomalies. Due to the nature of coda waves, the measured parameter couple can be seen as a weighted space average of the real parameters characterizing the rock volumes illuminated by the scattered waves. This paper uses the Monte Carlo numerical solution of the Energy Transport Equation to find approximate but realistic 2-D space-weighting functions for coda waves. Separate images for scattering and absorption based on these sensitivity functions are then compared with those obtained with commonly used sensitivity functions in an application to data from an active seismic experiment carried out at Deception Island (Antarctica). Results show that these novel functions are based on a reliable and physically grounded method to image the magnitude and shape of scattering and absorption anomalies. Their

  12. Landmark-driven parameter optimization for non-linear image registration

    NASA Astrophysics Data System (ADS)

    Schmidt-Richberg, Alexander; Werner, René; Ehrhardt, Jan; Wolf, Jan-Christoph; Handels, Heinz

    2011-03-01

    Image registration is one of the most common research areas in medical image processing. It is required, for example, for image fusion, motion estimation, patient positioning, or the generation of medical atlases. In most intensity-based registration approaches, parameters have to be determined, most commonly a parameter controlling how smooth the transformation is required to be. Its optimal value depends on multiple factors, such as the application and the occurrence of noise in the images, and may therefore vary from case to case. Moreover, multi-scale approaches are commonly applied to registration problems and demand further adjustment of the parameters. In this paper, we present a landmark-based approach for automatic parameter optimization in non-linear intensity-based image registration. In a first step, corresponding landmarks are automatically detected in the images to match. The landmark-based target registration error (TRE), which is shown to be a valid metric for quantifying registration accuracy, is then used to optimize the parameter choice during the registration process. The approach is evaluated for the registration of lungs based on 22 thoracic 4D CT data sets. Experiments show that the TRE can be reduced on average by 0.07 mm using automatic parameter optimization.
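    The TRE used to drive the parameter choice is simply the mean distance between corresponding landmarks after the transformation is applied. A minimal sketch (treating the transformation as an arbitrary callable is an illustrative assumption):

```python
import numpy as np

def target_registration_error(landmarks_fixed, landmarks_moving, transform):
    """Mean Euclidean distance between fixed landmarks and their mapped
    counterparts: the landmark-based TRE that scores a registration.

    `transform` maps moving-image coordinates into the fixed image and
    acts on an (N, D) array of landmark coordinates.
    """
    mapped = transform(np.asarray(landmarks_moving, dtype=float))
    diff = mapped - np.asarray(landmarks_fixed, dtype=float)
    return float(np.linalg.norm(diff, axis=1).mean())
```

    Parameter optimization then amounts to searching over the smoothness weight (per scale level, if a multi-scale scheme is used) and keeping the value that minimizes this TRE.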

  13. Comparative study of multimodal intra-subject image registration methods on a publicly available database

    NASA Astrophysics Data System (ADS)

    Miri, Mohammad Saleh; Ghayoor, Ali; Johnson, Hans J.; Sonka, Milan

    2016-03-01

    This work reports on a comparative study between five manual and automated methods for intra-subject pair-wise registration of images from different modalities. The study includes a variety of inter-modal image registrations (MR-CT, PET-CT, PET-MR) utilizing different methods including two manual point-based techniques using rigid and similarity transformations, one automated point-based approach based on Iterative Closest Point (ICP) algorithm, and two automated intensity-based methods using mutual information (MI) and normalized mutual information (NMI). These techniques were employed for inter-modal registration of brain images of 9 subjects from a publicly available dataset, and the results were evaluated qualitatively via checkerboard images and quantitatively using root mean square error and MI criteria. In addition, for each inter-modal registration, a paired t-test was performed on the quantitative results in order to find any significant difference between the results of the studied registration techniques.
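    Two of the automated methods compared are driven by mutual information. A minimal joint-histogram formulation of MI between two equally sized images can be sketched as follows (the bin count is an arbitrary choice, not taken from the study):

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Mutual information between two images from their joint histogram.

    MI = sum_xy p(x, y) * log( p(x, y) / (p(x) * p(y)) ), estimated from
    binned intensities; higher MI indicates better intensity alignment.
    """
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = hist / hist.sum()
    px = pxy.sum(axis=1, keepdims=True)       # marginal over image a
    py = pxy.sum(axis=0, keepdims=True)       # marginal over image b
    nz = pxy > 0                              # avoid log(0)
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())
```

    Normalized MI, the other intensity-based criterion in the study, additionally divides by the joint entropy to reduce sensitivity to image overlap.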

  14. High-resolution GPR imaging using a nonstandard 2D EEMD technique

    NASA Astrophysics Data System (ADS)

    Chen, Chih-Sung; Jeng*, Yih; Yu, Hung-Ming

    2013-04-01

    Ground-penetrating radar (GPR) data are affected by a variety of factors. Linear and nonlinear data processing methods have each been widely applied to GPR in geophysical and engineering investigations. For complicated data, such as shallow-earth images of urban areas, a better result can be achieved by integrating both approaches. In this study, we introduce a nonstandard 2D EEMD approach, which integrates the natural-logarithm-transformed (NLT) ensemble empirical mode decomposition (EEMD) method with linear filtering techniques to process GPR images. The NLT converts the data into logarithmic values, permitting a wide dynamic range of the recorded GPR data to be presented. The EEMD dyadic filter bank decomposes the data into multiple components ready for image reconstruction. Consequently, the NLT EEMD method provides a new way of nonlinear energy compensation and noise filtering with minimal artifacts. However, horizontal noise in the GPR time-distance section may be enhanced after the NLT process in some cases. To solve this dilemma, we process the data two-dimensionally. First, the vertical background noise of each GPR trace is removed using a standard linear method, the background noise removal algorithm, or simply by applying the sliding background removal filter. After that, the NLT is applied to the data to examine the horizontal coherent energy. Next, we employ the EEMD filter bank horizontally at each time step to remove the horizontal coherent energy. After removing the vertical background noise and horizontal coherent energy, a vertical EEMD method is then applied to generate a filter bank of the GPR time-distance section for final image reconstruction. Two buried models imitating common shallow-earth targets are used to verify the effectiveness of the proposed scheme. One model is a brick cistern buried in a disturbed site of poor reflection quality. The other model is a buried two-stack metallic target
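    The background removal step mentioned above is standard GPR practice: subtract an average trace so that energy that is coherent across traces (antenna ringing, horizontal banding) is suppressed. A sketch of both the global and the sliding-window variant (this is the conventional filter, not the paper's full NLT-EEMD pipeline):

```python
import numpy as np

def background_removal(section, window=None):
    """Remove horizontally coherent background from a GPR time-distance
    section (rows = time samples, columns = traces).

    With window=None the global mean trace is subtracted; otherwise each
    trace gets a local mean computed over `window` neighboring traces.
    """
    s = np.asarray(section, dtype=float)
    if window is None:
        return s - s.mean(axis=1, keepdims=True)
    out = np.empty_like(s)
    n = s.shape[1]
    half = window // 2
    for j in range(n):
        lo, hi = max(0, j - half), min(n, j + half + 1)
        out[:, j] = s[:, j] - s[:, lo:hi].mean(axis=1)   # sliding background
    return out
```

    In the paper's scheme this linear step handles the vertical background noise before the nonlinear NLT and EEMD stages operate on what remains.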

  15. Applications of digital image processing techniques to problems of data registration and correlation

    NASA Technical Reports Server (NTRS)

    Green, W. B.

    1978-01-01

    An overview is presented of the evolution of the computer configuration at JPL's Image Processing Laboratory (IPL). The development of techniques for the geometric transformation of digital imagery is discussed and consideration is given to automated and semiautomated image registration, and the registration of imaging and nonimaging data. The increasing complexity of image processing tasks at IPL is illustrated with examples of various applications from the planetary program and earth resources activities. It is noted that the registration of existing geocoded data bases with Landsat imagery will continue to be important if the Landsat data is to be of genuine use to the user community.

  16. Deformable image registration of CT images for automatic contour propagation in radiation therapy.

    PubMed

    Wu, Qian; Cao, Ruifen; Pei, Xi; Jia, Jing; Hu, Liqin

    2015-01-01

    A radiotherapy treatment plan may need to be replanned due to changes in tumors and organs at risk (OARs) during treatment. Deformable image registration (DIR)-based computed tomography (CT) contour propagation in the routine clinical setting is expected to reduce the time needed for manual tumor and OAR delineation and increase the efficiency of replanning. In this study, a DIR method was developed for CT contour propagation. Prior structure delineations were incorporated into Demons DIR by adding an intensity matching term for the delineated tissue pairs to the energy function of Demons. The performance of our DIR was evaluated on five clinical head-and-neck and five lung cancer cases. The experimental results verified the improved accuracy of the proposed registration method compared with conventional registration and Demons DIR. PMID:26405859
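    The baseline the paper extends is the classical Demons force field. One Thirion-style update for 2-D images can be sketched as below; this is the generic Demons step, not the paper's delineation-weighted variant, and the function name is an assumption:

```python
import numpy as np

def demons_step(fixed, moving, eps=1e-9):
    """One Thirion-style Demons force field for 2-D images.

    u = (m - f) * grad(f) / (|grad(f)|^2 + (m - f)^2), returned as an
    array of shape (2, H, W): (row, col) displacement components.
    """
    f = np.asarray(fixed, dtype=float)
    m = np.asarray(moving, dtype=float)
    gy, gx = np.gradient(f)                     # gradients along rows, cols
    diff = m - f
    denom = gx**2 + gy**2 + diff**2 + eps       # eps guards flat regions
    return np.stack([diff * gy / denom, diff * gx / denom])
```

    In full Demons registration this force is accumulated and smoothed (typically with a Gaussian) each iteration; the paper additionally rewards intensity matching between corresponding delineated structures.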

  17. Unsupervised Deep Feature Learning for Deformable Registration of MR Brain Images

    PubMed Central

    Wu, Guorong; Kim, Minjeong; Wang, Qian; Gao, Yaozong; Liao, Shu; Shen, Dinggang

    2014-01-01

    Establishing accurate anatomical correspondences is critical for medical image registration. Although many hand-engineered features have been proposed for correspondence detection in various registration applications, no features are general enough to work well for all image data. Although many learning-based methods have been developed to help selection of best features for guiding correspondence detection across subjects with large anatomical variations, they are often limited by requiring the known correspondences (often presumably estimated by certain registration methods) as the ground truth for training. To address this limitation, we propose using an unsupervised deep learning approach to directly learn the basis filters that can effectively represent all observed image patches. Then, the coefficients by these learnt basis filters in representing the particular image patch can be regarded as the morphological signature for correspondence detection during image registration. Specifically, a stacked two-layer convolutional network is constructed to seek for the hierarchical representations for each image patch, where the high-level features are inferred from the responses of the low-level network. By replacing the hand-engineered features with our learnt data-adaptive features for image registration, we achieve promising registration results, which demonstrates that a general approach can be built to improve image registration by using data-adaptive features through unsupervised deep learning. PMID:24579196
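    Once the basis filters are learnt, the "morphological signature" of a patch is just its vector of coefficients under those filters. A minimal sketch of that final step (the filters here are supplied, since the unsupervised learning that produces them is the paper's actual contribution; normalization choice is an assumption):

```python
import numpy as np

def patch_signature(patch, filters):
    """Coefficients of an image patch under a bank of basis filters.

    signature[i] = <normalized patch, filter_i>; patches with similar
    signatures are candidate correspondences during registration.
    """
    p = patch.ravel().astype(float)
    p = (p - p.mean()) / (p.std() + 1e-9)       # normalize out brightness/contrast
    F = filters.reshape(len(filters), -1)
    return F @ p
```

    Correspondence detection then compares signatures (e.g. by Euclidean distance) rather than raw intensities or hand-engineered descriptors.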

  18. Unsupervised deep feature learning for deformable registration of MR brain images.

    PubMed

    Wu, Guorong; Kim, Minjeong; Wang, Qian; Gao, Yaozong; Liao, Shu; Shen, Dinggang

    2013-01-01

    Establishing accurate anatomical correspondences is critical for medical image registration. Although many hand-engineered features have been proposed for correspondence detection in various registration applications, no features are general enough to work well for all image data. Although many learning-based methods have been developed to help selection of best features for guiding correspondence detection across subjects with large anatomical variations, they are often limited by requiring the known correspondences (often presumably estimated by certain registration methods) as the ground truth for training. To address this limitation, we propose using an unsupervised deep learning approach to directly learn the basis filters that can effectively represent all observed image patches. Then, the coefficients by these learnt basis filters in representing the particular image patch can be regarded as the morphological signature for correspondence detection during image registration. Specifically, a stacked two-layer convolutional network is constructed to seek for the hierarchical representations for each image patch, where the high-level features are inferred from the responses of the low-level network. By replacing the hand-engineered features with our learnt data-adaptive features for image registration, we achieve promising registration results, which demonstrates that a general approach can be built to improve image registration by using data-adaptive features through unsupervised deep learning. PMID:24579196

  19. A comparison of seven methods of within-subjects rigid-body pedobarographic image registration.

    PubMed

    Pataky, Todd C; Goulermas, John Y; Crompton, Robin H

    2008-10-20

    Image registration, the process of transforming images such that homologous structures optimally overlap, provides the pre-processing foundation for pixel-level functional image analysis. The purpose of this study was to compare the performances of seven methods of within-subjects pedobarographic image registration: (1) manual, (2) principal axes, (3) centre of pressure trajectory, (4) mean squared error, (5) probability-weighted variance, (6) mutual information, and (7) exclusive OR. We assumed that foot-contact geometry changes were negligibly small trial-to-trial and thus that a rigid-body transformation could yield optimum registration performance. Thirty image pairs were randomly selected from our laboratory database and were registered using each method. To compensate for inter-rater variability, the mean registration parameters across 10 raters were taken as representative of manual registration. Registration performance was assessed using four dissimilarity metrics (#4-7 above). One-way MANOVA found significant differences between the methods (p<0.001). Bonferroni post-hoc tests revealed that the centre of pressure method performed the poorest (p<0.001) and that the principal axes method tended to perform more poorly than remaining methods (p<0.070). Average manual registration was not different from the remaining methods (p=1.000). The results suggest that a variety of linear registration methods are appropriate for within-subjects pedobarographic images, and that manual image registration is a viable alternative to algorithmic registration when parameters are averaged across raters. The latter finding, in particular, may be useful for cases of image peculiarities resulting from outlier trials or from experimental manipulations that induce substantial changes in contact area or pressure profile geometry. PMID:18790481
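    Of the four dissimilarity metrics used for assessment, the exclusive-OR measure is the simplest: after binarizing the pressure images into contact masks, it counts where exactly one image shows contact. A sketch (normalizing by the union area and the zero threshold are assumptions about the exact formulation):

```python
import numpy as np

def xor_dissimilarity(img_a, img_b, threshold=0.0):
    """Exclusive-OR dissimilarity between two pedobarographic images.

    Fraction of the combined contact area where exactly one image shows
    contact: 0 for identical footprints, 1 for disjoint ones.
    """
    a = np.asarray(img_a) > threshold
    b = np.asarray(img_b) > threshold
    union = np.count_nonzero(a | b)
    if union == 0:
        return 0.0
    return np.count_nonzero(a ^ b) / union
```

    Minimizing such a metric over translation and rotation parameters is what the algorithmic rigid-body registration methods in the comparison do.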